
Commit 7419e27

Merge branch 'v1.15' into endgame_1.15-updates
2 parents 44f5ce4 + 870aeac commit 7419e27


21 files changed, +273 -141 lines changed


daprdocs/content/en/concepts/building-blocks-concept.md

+1-1
@@ -22,7 +22,7 @@ Dapr provides the following building blocks:
 |----------------|----------|-------------|
 | [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | `/v1.0/invoke` | Service invocation enables applications to communicate with each other through well-known endpoints in the form of http or gRPC messages. Dapr provides an endpoint that acts as a combination of a reverse proxy with built-in service discovery, while leveraging built-in distributed tracing and error handling.
 | [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | `/v1.0/publish` `/v1.0/subscribe`| Pub/Sub is a loosely coupled messaging pattern where senders (or publishers) publish messages to a topic, to which subscribers subscribe. Dapr supports the pub/sub pattern between applications.
-| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
+| [**Workflows**]({{< ref "workflow-overview.md" >}}) | `/v1.0/workflow` | The Workflow API enables you to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows. The Workflow API can be combined with other Dapr API building blocks. For example, a workflow can call another service with service invocation or retrieve secrets, providing flexibility and portability.
 | [**State management**]({{< ref "state-management-overview.md" >}}) | `/v1.0/state` | Application state is anything an application wants to preserve beyond a single session. Dapr provides a key/value-based state and query APIs with pluggable state stores for persistence.
 | [**Bindings**]({{< ref "bindings-overview.md" >}}) | `/v1.0/bindings` | A binding provides a bi-directional connection to an external cloud/on-premise service or system. Dapr allows you to invoke the external service through the Dapr binding API, and it allows your application to be triggered by events sent by the connected service.
 | [**Actors**]({{< ref "actors-overview.md" >}}) | `/v1.0/actors` | An actor is an isolated, independent unit of compute and state with single-threaded execution. Dapr provides an actor implementation based on the virtual actor pattern which provides a single-threaded programming model and where actors are garbage collected when not in use.

daprdocs/content/en/concepts/components-concept.md

-7
@@ -78,13 +78,6 @@ Pub/sub broker components are message brokers that can pass messages to/from ser
 - [List of pub/sub brokers]({{< ref supported-pubsub >}})
 - [Pub/sub broker implementations](https://github.com/dapr/components-contrib/tree/master/pubsub)
 
-### Workflows
-
-A [workflow]({{< ref workflow-overview.md >}}) is custom application logic that defines a reliable business process or data flow. Workflow components are workflow runtimes (or engines) that run the business logic written for that workflow and store their state into a state store.
-
-<!--- [List of supported workflows]()
-- [Workflow implementations](https://github.com/dapr/components-contrib/tree/master/workflows)-->
-
 ### State stores
 
 State store components are data stores (databases, files, memory) that store key-value pairs as part of the [state management]({{< ref "state-management-overview.md" >}}) building block.

daprdocs/content/en/concepts/dapr-services/scheduler.md

+96-6
@@ -5,28 +5,118 @@ linkTitle: "Scheduler"
 description: "Overview of the Dapr scheduler service"
 ---
 
-The Dapr Scheduler service is used to schedule jobs, running in [self-hosted mode]({{< ref self-hosted >}}) or on [Kubernetes]({{< ref kubernetes >}}).
+The Dapr Scheduler service is used to schedule different types of jobs, running in [self-hosted mode]({{< ref self-hosted >}}) or on [Kubernetes]({{< ref kubernetes >}}):
+- Jobs created through the Jobs API
+- Actor reminder jobs (used by actors)
+- Actor reminder jobs created by the Workflow API (which uses actor reminders)
 
-The diagram below shows how the Scheduler service is used via the jobs API when called from your application. All the jobs that are tracked by the Scheduler service are stored in an embedded Etcd database.
+From Dapr v1.15, the Scheduler service is used by default to schedule actor reminders, as well as actor reminders for the Workflow API.
+
+There is no concept of a leader Scheduler instance. All Scheduler service replicas are considered peers. All replicas receive jobs to be scheduled for execution, and the jobs are allocated between the available Scheduler service replicas to load balance the trigger events.
+
+The diagram below shows how the Scheduler service is used via the jobs API when called from your application. All the jobs that are tracked by the Scheduler service are stored in an embedded etcd database.
 
 <img src="/images/scheduler/scheduler-architecture.png" alt="Diagram showing the Scheduler control plane service and the jobs API">
 
-## Actor reminders
+## Actor Reminders
 
 Prior to Dapr v1.15, [actor reminders]({{< ref "actors-timers-reminders.md#actor-reminders" >}}) were run using the Placement service. Now, by default, the [`SchedulerReminders` feature flag]({{< ref "support-preview-features.md#current-preview-features" >}}) is set to `true`, and all new actor reminders you create are run using the Scheduler service to make them more scalable.
 
-When you deploy Dapr v1.15, any _existing_ actor reminders are migrated from the Placement service to the Scheduler service as a one time operation for each actor type. You can prevent this migration by setting the `SchedulerReminders` flag to `false` in application configuration file for the actor type.
+When you deploy Dapr v1.15, any _existing_ actor reminders are automatically migrated from the actor state store to the Scheduler service as a one-time operation for each actor type. Each replica migrates only the reminders whose actor type and ID are associated with that host, which means that all the reminders for an actor type are migrated only once all replicas implementing that type are upgraded to 1.15. There is _no_ loss of reminder triggers during the migration. However, you can prevent this migration, and keep the existing actor reminders running using the actor state store, by setting the `SchedulerReminders` flag to `false` in the application configuration file for the actor type.
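
The paragraph above refers to setting the `SchedulerReminders` flag in the application's configuration file. As a minimal sketch, a Dapr `Configuration` resource that opts an actor type out of the migration might look like the following (the resource name `appconfig` is illustrative):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Configuration
metadata:
  name: appconfig
spec:
  features:
    - name: SchedulerReminders
      enabled: false   # keep existing reminders running in the actor state store
```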
+
+To confirm that the migration was successful, check the Dapr sidecar logs for the following:
+
+```sh
+Running actor reminder migration from state store to scheduler
+```
+coupled with
+```sh
+Migrated X reminders from state store to scheduler successfully
+```
+or
+```sh
+Skipping migration, no missing scheduler reminders found
+```
+
+## Job Locality
+
+### Default Job Behavior
+
+By default, when the Scheduler service triggers jobs, they are sent back to a single replica for the same app ID that scheduled the job, in a randomly load-balanced manner. This provides basic load balancing across your application's replicas, which is suitable for most use cases where strict locality isn't required.
+
+### Using Actor Reminders for Perfect Locality
+
+For users who require perfect job locality (having jobs triggered on the exact same host that created them), actor reminders provide a solution. To enforce perfect locality for a job:
+
+1. Create an actor type with a random UUID that is unique to the specific replica
+2. Use this actor type to create an actor reminder
+
+This approach ensures that the job is always triggered on the same host that created it, rather than being randomly distributed among replicas.
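
The two steps above can be sketched as follows. This is an illustrative sketch, assuming the sidecar's HTTP port is 3500 and that your app registers the per-replica actor type at startup; the names `JobActor`, actor ID `0`, and reminder `run-job` are hypothetical, and the reminder endpoint follows the actors API reference.

```python
import json
import uuid

def replica_actor_type(base: str = "JobActor") -> str:
    # Step 1: build an actor type unique to this replica via a random UUID.
    return f"{base}-{uuid.uuid4()}"

def reminder_request(actor_type: str, actor_id: str, name: str,
                     due: str, period: str):
    # Step 2: the URL and body for registering an actor reminder against
    # that per-replica type (schema per the Dapr actors API reference).
    url = f"http://localhost:3500/v1.0/actors/{actor_type}/{actor_id}/reminders/{name}"
    body = json.dumps({"dueTime": due, "period": period})
    return url, body

# Two replicas end up with distinct actor types, so each reminder
# fires on the host that created it.
a, b = replica_actor_type(), replica_actor_type()
url, body = reminder_request(a, "0", "run-job", "0h0m10s0ms", "R5/PT10S")
```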
+
+## Job Triggering
+
+### Job Failure Policy and Staging Queue
+
+When the Scheduler service triggers a job and it fails with a client-side error, the job is retried by default with a 1s interval and a maximum of 3 retries.
+
+For non-client-side errors, for example, when a job cannot be sent to an available Dapr sidecar at trigger time, it is placed in a staging queue within the Scheduler service. Jobs remain in this queue until a suitable sidecar instance becomes available, at which point they are automatically sent to the appropriate Dapr sidecar instance.
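
The default failure policy described above (a 1s interval, up to 3 retries) can be modeled as a small retry loop. This is an illustrative sketch of the policy's semantics, not the Scheduler's actual implementation:

```python
import time

def trigger_with_retries(send, max_retries: int = 3, interval: float = 1.0):
    """Model of the default job failure policy: retry client-side
    failures up to max_retries times, waiting `interval` between tries."""
    attempts = 0
    while True:
        attempts += 1
        try:
            return send(), attempts
        except Exception:
            if attempts > max_retries:  # initial attempt plus 3 retries exhausted
                raise
            time.sleep(interval)

# A trigger delivery that fails twice, then succeeds on the third attempt:
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("client-side error")
    return "triggered"
```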
 
 ## Self-hosted mode
 
 The Scheduler service Docker container is started automatically as part of `dapr init`. It can also be run manually as a process if you are running in [slim-init mode]({{< ref self-hosted-no-docker.md >}}).
 
 ## Kubernetes mode
 
-The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. You can run Scheduler in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
+The Scheduler service is deployed as part of `dapr init -k`, or via the Dapr Helm charts. When running in Kubernetes mode, the Scheduler service is configured to run with exactly 3 replicas to ensure data integrity.
+
+You can run Scheduler in high availability (HA) mode. [Learn more about setting HA mode in your Kubernetes service.]({{< ref "kubernetes-production.md#individual-service-ha-helm-configuration" >}})
+
+When a Kubernetes namespace is deleted, all the Job and Actor Reminders corresponding to that namespace are deleted.
+
+## Back Up and Restore Scheduler Data
+
+In production environments, it's recommended to perform periodic backups of the Scheduler's etcd data at an interval that aligns with your recovery point objectives.
+
+### Port Forward for Backup Operations
+
+To perform backup and restore operations, you'll need to access the embedded etcd instance. This requires port forwarding to expose the etcd ports (port 2379).
+
+#### Docker Compose Example
+
+Here's how to expose the etcd ports in a Docker Compose configuration for standalone mode:
+
+```yaml
+scheduler-1:
+  image: "diagrid/dapr/scheduler:dev110-linux-arm64"
+  command: ["./scheduler",
+    "--etcd-data-dir", "/var/run/dapr/scheduler",
+    "--replica-count", "3",
+    "--id", "scheduler-1",
+    "--initial-cluster", "scheduler-1=http://scheduler-1:2380,scheduler-0=http://scheduler-0:2380,scheduler-2=http://scheduler-2:2380",
+    "--etcd-client-ports", "scheduler-0=2379,scheduler-1=2379,scheduler-2=2379",
+    "--etcd-client-http-ports", "scheduler-0=2330,scheduler-1=2330,scheduler-2=2330",
+    "--log-level=debug"
+  ]
+  ports:
+    - 2379:2379
+  volumes:
+    - ./dapr_scheduler/1:/var/run/dapr/scheduler
+  networks:
+    - network
+```
+
+When running in HA mode, you only need to expose the ports for one scheduler instance to perform backup operations.
+
+### Performing Backup and Restore
+
+Once you have access to the etcd ports, you can follow the [official etcd backup and restore documentation](https://etcd.io/docs/v3.5/op-guide/recovery/) to perform backup and restore operations. The process involves using standard etcd commands to create snapshots and restore from them.
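
For example, the snapshot workflow from that documentation looks like the following when run against the exposed client port. This is a hedged sketch: it assumes `etcdctl` (v3) is installed locally, the endpoint is plain HTTP on `localhost:2379` as in the Compose example, and the snapshot filename and restore directory are illustrative.

```sh
# Take a snapshot of the Scheduler's embedded etcd through the exposed port
etcdctl --endpoints=http://localhost:2379 snapshot save scheduler-backup.db

# Inspect the snapshot
etcdctl snapshot status scheduler-backup.db

# Restore the snapshot into a fresh data directory (see the etcd recovery guide)
etcdctl snapshot restore scheduler-backup.db --data-dir /var/run/dapr/scheduler-restored
```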
+
+## Disabling the Scheduler service
+
+If you are not using any features that require the Scheduler service (the Jobs API, actor reminders, or workflows), you can disable it by setting `global.scheduler.enabled=false`.
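
In the Dapr Helm chart values, that setting maps to the following fragment (applied, for example, with `helm upgrade ... -f values.yml`); a minimal sketch:

```yaml
global:
  scheduler:
    enabled: false   # skip deploying the Scheduler service
```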
 
 For more information on running Dapr on Kubernetes, visit the [Kubernetes hosting page]({{< ref kubernetes >}}).
 
 ## Related links
 
-[Learn more about the Jobs API.]({{< ref jobs_api.md >}})
+[Learn more about the Jobs API.]({{< ref jobs_api.md >}})

daprdocs/content/en/concepts/faq/faq.md

+2-2
@@ -27,11 +27,11 @@ Creating a new actor follows a local call like `http://localhost:3500/v1.0/actor
 
 The Dapr runtime SDKs have language-specific actor frameworks. For example, the .NET SDK has C# actors. The goal is for all the Dapr language SDKs to have an actor framework. Currently .NET, Java, Go and Python SDK have actor frameworks.
 
-### Does Dapr have any SDKs I can use if I want to work with a particular programming language or framework?
+## Does Dapr have any SDKs I can use if I want to work with a particular programming language or framework?
 
 To make using Dapr more natural for different languages, it includes [language specific SDKs]({{<ref sdks>}}) for Go, Java, JavaScript, .NET, Python, PHP, Rust and C++. These SDKs expose the functionality in the Dapr building blocks, such as saving state, publishing an event or creating an actor, through a typed language API rather than calling the http/gRPC API. This enables you to write a combination of stateless and stateful functions and actors all in the language of your choice. And because these SDKs share the Dapr runtime, you get cross-language actor and functions support.
 
-### What frameworks does Dapr integrate with?
+## What frameworks does Dapr integrate with?
 Dapr can be integrated with any developer framework. For example, in the Dapr .NET SDK you can find ASP.NET Core integration, which brings stateful routing controllers that respond to pub/sub events from other services.
 
 Dapr is integrated with the following frameworks;

daprdocs/content/en/concepts/overview.md

+1-1
@@ -46,7 +46,7 @@ Each of these building block APIs is independent, meaning that you can use any n
 |----------------|-------------|
 | [**Service-to-service invocation**]({{< ref "service-invocation-overview.md" >}}) | Resilient service-to-service invocation enables method calls, including retries, on remote services, wherever they are located in the supported hosting environment.
 | [**Publish and subscribe**]({{< ref "pubsub-overview.md" >}}) | Publishing events and subscribing to topics between services enables event-driven architectures to simplify horizontal scalability and make them resilient to failure. Dapr provides at-least-once message delivery guarantee, message TTL, consumer groups and other advance features.
-| [**Workflows**]({{< ref "workflow-overview.md" >}}) | The workflow API can be combined with other Dapr building blocks to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows or workflow components.
+| [**Workflows**]({{< ref "workflow-overview.md" >}}) | The workflow API can be combined with other Dapr building blocks to define long running, persistent processes or data flows that span multiple microservices using Dapr workflows.
 | [**State management**]({{< ref "state-management-overview.md" >}}) | With state management for storing and querying key/value pairs, long-running, highly available, stateful services can be easily written alongside stateless services in your application. The state store is pluggable and examples include AWS DynamoDB, Azure Cosmos DB, Azure SQL Server, GCP Firebase, PostgreSQL or Redis, among others.
 | [**Resource bindings**]({{< ref "bindings-overview.md" >}}) | Resource bindings with triggers builds further on event-driven architectures for scale and resiliency by receiving and sending events to and from any external source such as databases, queues, file systems, etc.
 | [**Actors**]({{< ref "actors-overview.md" >}}) | A pattern for stateful and stateless objects that makes concurrency simple, with method and state encapsulation. Dapr provides many capabilities in its actor runtime, including concurrency, state, and life-cycle management for actor activation/deactivation, and timers and reminders to wake up actors.

daprdocs/content/en/developing-applications/building-blocks/actors/actors-timers-reminders.md

+1-1
@@ -108,7 +108,7 @@ Refer [api spec]({{< ref "actors_api.md#invoke-timer" >}}) for more details.
 ## Actor reminders
 
 {{% alert title="Note" color="primary" %}}
-In Dapr v1.15, actor reminders are stored by default in the [Scheduler service]({{< ref "scheduler.md#actor-reminders" >}}).
+In Dapr v1.15, actor reminders are stored by default in the [Scheduler service]({{< ref "scheduler.md#actor-reminders" >}}). When upgrading to Dapr v1.15, all existing reminders are automatically migrated to the Scheduler service, with no loss of reminders, as a one-time operation for each actor type.
 {{% /alert %}}
 
 Reminders are a mechanism to trigger *persistent* callbacks on an actor at specified times. Their functionality is similar to timers. But unlike timers, reminders are triggered under all circumstances until the actor explicitly unregisters them, the actor is explicitly deleted, or the number of invocations is exhausted. Specifically, reminders are triggered across actor deactivations and failovers because the Dapr actor runtime persists the information about the actors' reminders using the Dapr actor state provider.

daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-overview.md

+4-2
@@ -8,7 +8,7 @@ description: "Overview of the jobs API building block"
 
 Many applications require job scheduling, or the need to take an action in the future. The jobs API is an orchestrator for scheduling these future jobs, either at a specific time or for a specific interval.
 
-Not only does the jobs API help you with scheduling jobs, but internally, Dapr uses the scheduler service to schedule actor reminders.
+Not only does the jobs API help you with scheduling jobs, but internally, Dapr uses the Scheduler service to schedule actor reminders.
 
 Jobs in Dapr consist of:
 - [The jobs API building block]({{< ref jobs_api.md >}})
@@ -57,7 +57,9 @@ The jobs API provides several features to make it easy for you to schedule jobs.
 
 ### Schedule jobs across multiple replicas
 
-The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 scheduler service instance.
+When you create a job, it replaces any existing job with the same name. This means that every time a job is created, it resets the count and only keeps 1 record in the embedded etcd for that job. Therefore, you don't need to worry about multiple jobs being created and firing off — only the most recent job is recorded and executed, even if all your apps schedule the same job on startup.
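
The name-based replacement described above behaves like an upsert keyed on job name. A toy model of that semantics (not the Scheduler's actual storage code; the job name and schedule below are illustrative):

```python
# A dict stands in for the embedded etcd keyspace, keyed by job name.
jobs = {}

def schedule_job(name, spec, data=None):
    # Upsert: creating a job with an existing name replaces the prior record,
    # so only the most recent job for that name is kept.
    jobs[name] = {"spec": spec, "data": data}

# Three replicas all scheduling the same job on startup leave one record:
for replica in range(3):
    schedule_job("daily-report", "@every 24h", {"replica": replica})
```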
+
+The Scheduler service enables the scheduling of jobs to scale across multiple replicas, while guaranteeing that a job is only triggered by 1 Scheduler service instance.
 
 ## Try out the jobs API
