daprdocs/content/en/developing-applications/building-blocks/jobs/jobs-overview.md (-5)

@@ -16,11 +16,6 @@ Jobs in Dapr consist of:
 [See example scenarios.]({{< ref "#scenarios" >}})
 
-{{% alert title="Warning" color="warning" %}}
-By default, job data is not resilient to [Scheduler]({{< ref scheduler.md >}}) service restarts.
-A persistent volume must be provided to Scheduler to ensure job data is not lost in either [Kubernetes]({{< ref kubernetes-persisting-scheduler.md >}}) or [Self-hosted]({{< ref self-hosted-persisting-scheduler.md >}}) mode.
-{{% /alert %}}
-
 <img src="/images/scheduler/scheduler-architecture.png" alt="Diagram showing the Scheduler control plane service and the jobs API">
daprdocs/content/en/developing-applications/building-blocks/service-invocation/howto-invoke-discover-services.md
daprdocs/content/en/operations/hosting/kubernetes/kubernetes-persisting-scheduler.md (+43, -6)

@@ -6,12 +6,15 @@ weight: 50000
 description: "Configure Scheduler to persist its database to make it resilient to restarts"
 ---
 
-The [Scheduler]({{< ref scheduler.md >}}) service is responsible for writing jobs to its embedded database and scheduling them for execution.
-By default, the Scheduler service database writes this data to an in-memory ephemeral tempfs volume, meaning that **this data is not persisted across restarts**. Job data will be lost during these events.
+The [Scheduler]({{< ref scheduler.md >}}) service is responsible for writing jobs to its embedded Etcd database and scheduling them for execution.
+By default, the Scheduler service database writes this data to a Persistent Volume Claim of 1Gb in size, using the cluster's default [storage class](https://kubernetes.io/docs/concepts/storage/storage-classes/). This means that no additional parameter is required to run the Scheduler service reliably on most Kubernetes deployments, although some deployments and production environments require additional configuration.
 
-To make the Scheduler data resilient to restarts, a persistent volume must be mounted to the Scheduler `StatefulSet`.
-This persistent volume is backed by a real disk that is provided by the hosted Cloud Provider or Kubernetes infrastructure platform.
-Disk size is determined by how many jobs are expected to be persisted at once; however, 64Gb should be more than sufficient for most use cases.
+## Production Setup
+
+If your Kubernetes deployment does not have a default storage class, or you are configuring a production cluster, you must define a storage class.
+
+A persistent volume is backed by a real disk that is provided by the hosting cloud provider or Kubernetes infrastructure platform.
+Disk size is determined by how many jobs are expected to be persisted at once; however, 64Gb should be more than sufficient for most production scenarios.
 Some Kubernetes providers recommend using a [CSI driver](https://kubernetes.io/docs/concepts/storage/volumes/#csi) to provision the underlying disks.
 Below is a list of useful links to the relevant documentation for creating a persistent disk for the major cloud providers:
@@ -23,7 +26,7 @@ Below are a list of useful links to the relevant documentation for creating a persistent disk for the major cloud providers:
 - [Alibaba Cloud Disk Storage](https://www.alibabacloud.com/help/ack/ack-managed-and-ack-dedicated/user-guide/create-a-pvc)
 
-Once the persistent volume class is available, you can install Dapr using the following command, with Scheduler configured to use the persistent volume class (replace `my-storage-class` with the name of the storage class):
+Once the storage class is available, you can install Dapr using the following command, with Scheduler configured to use the storage class (replace `my-storage-class` with the name of the storage class):
 
 {{% alert title="Note" color="primary" %}}
 If Dapr is already installed, the control plane needs to be completely [uninstalled]({{< ref dapr-uninstall.md >}}) in order for the Scheduler `StatefulSet` to be recreated with the new persistent volume.
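As a sketch of what that install command can look like with Helm (the chart value name `dapr_scheduler.cluster.storageClassName` is an assumption based on the Dapr Helm chart; verify it against your chart version):

```shell
# Install (or upgrade) the Dapr control plane with Scheduler configured
# to provision its Persistent Volume Claim from a specific storage class.
# Replace my-storage-class with the name of your storage class.
helm upgrade --install dapr dapr/dapr \
  --namespace dapr-system \
  --create-namespace \
  --set dapr_scheduler.cluster.storageClassName=my-storage-class \
  --wait
```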
 Scheduler can optionally be made to use ephemeral storage, which is in-memory storage that is **not** resilient to restarts; that is, all job data is lost after a Scheduler restart.
+This is useful for deployments where storage is not available or required, or for testing purposes.
+
+{{% alert title="Note" color="primary" %}}
+If Dapr is already installed, the control plane needs to be completely [uninstalled]({{< ref dapr-uninstall.md >}}) in order for the Scheduler `StatefulSet` to be recreated without the persistent volume.
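A sketch of how ephemeral mode might be enabled at install time (the Helm value name `dapr_scheduler.cluster.inMemoryStorage` is an assumption drawn from the Dapr Helm chart; verify it against your chart version):

```shell
# Hypothetical: install Dapr with Scheduler using in-memory (ephemeral)
# storage instead of a Persistent Volume Claim. All job data is lost
# whenever the Scheduler restarts.
helm upgrade --install dapr dapr/dapr \
  --namespace dapr-system \
  --create-namespace \
  --set dapr_scheduler.cluster.inMemoryStorage=true \
  --wait
```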
daprdocs/content/en/reference/api/jobs_api.md (-5)

@@ -10,11 +10,6 @@ weight: 1300
 The jobs API is currently in alpha.
 {{% /alert %}}
 
-{{% alert title="Warning" color="warning" %}}
-By default, job data is not resilient to [Scheduler]({{< ref scheduler.md >}}) service restarts.
-A persistent volume must be provided to Scheduler to ensure job data is not lost in either [Kubernetes]({{< ref kubernetes-persisting-scheduler.md >}}) or [Self-Hosted]({{< ref self-hosted-persisting-scheduler.md >}}) mode.
-{{% /alert %}}
-
 With the jobs API, you can schedule jobs and tasks in the future.
 
 > The HTTP APIs are intended for development and testing only. For production scenarios, the use of the SDKs is strongly
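As an illustration of scheduling a job over the alpha HTTP API, a request might look roughly like this (the `v1.0-alpha1/jobs` route, the job name, and the body fields shown are assumptions for this sketch; consult the jobs API reference for the authoritative shape):

```shell
# Hypothetical example: schedule a job named "prod-db-backup" to run
# every minute via the Dapr sidecar's alpha jobs endpoint on port 3500.
curl -X POST \
  http://localhost:3500/v1.0-alpha1/jobs/prod-db-backup \
  -H "Content-Type: application/json" \
  -d '{"schedule": "@every 1m", "data": {"task": "db-backup"}}'
```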
 | `redisHost` | Y | Output | The Redis host address | `"localhost:6379"` |
-| `redisPassword` | Y | Output | The Redis password | `"password"` |
+| `redisPassword` | N | Output | The Redis password | `"password"` |
 | `redisUsername` | N | Output | Username for Redis host. Defaults to empty. Make sure your Redis server version is 6 or above and that the ACL rule has been created correctly. | `"username"` |
 | `useEntraID` | N | Output | Implements EntraID support for Azure Cache for Redis. Before enabling this: <ul><li>The `redisHost` name must be specified in the form of `"server:port"`</li><li>TLS must be enabled</li></ul> Learn more about this setting under [Create a Redis instance > Azure Cache for Redis]({{< ref "#create-a-redis-instance" >}}) | `"true"`, `"false"` |
 | `enableTLS` | N | Output | If the Redis instance supports TLS with public certificates, it can be configured to enable or disable TLS. Defaults to `"false"` | `"true"`, `"false"` |
daprdocs/content/en/reference/components-reference/supported-bindings/servicebusqueues.md (+4, -2)

@@ -73,7 +73,7 @@ The above example uses secrets as plain strings. It is recommended to use a secret store for the secrets.
 | `namespaceName` | N | Input/Output | Parameter to set the address of the Service Bus namespace, as a fully-qualified domain name. Required if using Microsoft Entra ID authentication. | `"namespace.servicebus.windows.net"` |
 | `disableEntityManagement` | N | Input/Output | When set to true, queues and subscriptions do not get created automatically. Default: `"false"` | `"true"`, `"false"` |
 | `lockDurationInSec` | N | Input/Output | Defines the length in seconds that a message will be locked for before expiring. Used during subscription creation only. Default set by server. | `"30"` |
-| `autoDeleteOnIdleInSec` | N | Input/Output | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Default: `"0"` (disabled) | `"3600"` |
+| `autoDeleteOnIdleInSec` | N | Input/Output | Time in seconds to wait before auto deleting idle subscriptions. Used during subscription creation only. Must be 300s or greater. Default: `"0"` (disabled) | `"3600"` |
 | `defaultMessageTimeToLiveInSec` | N | Input/Output | Default message time to live, in seconds. Used during subscription creation only. | `"10"` |
 | `maxDeliveryCount` | N | Input/Output | Defines the number of attempts the server will make to deliver a message. Used during subscription creation only. Default set by server. | `"10"` |
 | `minConnectionRecoveryInSec` | N | Input/Output | Minimum interval (in seconds) to wait before attempting to reconnect to Azure Service Bus in case of a connection failure. Default: `"2"` | `"5"` |
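The metadata fields above are set on the binding component definition. A minimal sketch of such a component, using only fields from the table plus a hypothetical `queueName` and `connectionString` (both are assumptions, since they do not appear in this excerpt):

```yaml
apiVersion: dapr.io/v1alpha1
kind: Component
metadata:
  name: my-servicebus-queue   # hypothetical component name
spec:
  type: bindings.azure.servicebusqueues
  version: v1
  metadata:
  - name: connectionString     # assumption: prefer a secret store over plain strings
    value: "Endpoint=sb://..."
  - name: queueName            # assumption: the target queue
    value: orders
  - name: lockDurationInSec
    value: "30"
  - name: autoDeleteOnIdleInSec
    value: "3600"              # must be 300s or greater when set
```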
@@ -164,7 +164,9 @@ In addition to the [settable metadata listed above](#sending-a-message-with-metadata), the following read-only metadata is available:
 - `metadata.EnqueuedTimeUtc`
 - `metadata.SequenceNumber`
 
-To find out more details on the purpose of any of these metadata properties, please refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
+To find out more details on the purpose of any of these metadata properties, refer to [the official Azure Service Bus documentation](https://docs.microsoft.com/rest/api/servicebus/message-headers-and-properties#message-headers).
+
+In addition, all entries of `ApplicationProperties` from the original Azure Service Bus message are appended as `metadata.<application property's name>`.
 
 {{% alert title="Note" color="primary" %}}
 All times are populated by the server and are not adjusted for clock skews.
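The `ApplicationProperties` mapping described above can be illustrated with a small sketch (illustrative only; the property names are made up):

```python
# Each ApplicationProperties entry on the original Azure Service Bus message
# surfaces to the app as a metadata key named "metadata.<property name>".
app_properties = {"priority": "high", "tenant": "contoso"}

metadata = {f"metadata.{name}": value for name, value in app_properties.items()}

print(metadata)
# {'metadata.priority': 'high', 'metadata.tenant': 'contoso'}
```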