Interface: WorkerOptions
worker.WorkerOptions
Options to configure the Worker
Some options can significantly affect the Worker's performance. Default settings are generally appropriate for day-to-day development, but are unlikely to be suitable for production use. We recommend explicitly setting values for every performance-related option before deploying to production.
Properties
activities
• Optional
activities: object
Mapping of activity name to implementation
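For illustration, a minimal sketch of registering Activities on a Worker (the `./activities` and `./workflows` modules are hypothetical):

```ts
import { Worker } from '@temporalio/worker';
// Hypothetical module exporting plain async functions, e.g. `export async function greet(name: string) {...}`
import * as activities from './activities';

const worker = await Worker.create({
  taskQueue: 'my-task-queue',
  workflowsPath: require.resolve('./workflows'), // hypothetical Workflows module
  activities, // maps each exported function name to its implementation
});
```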
buildId
• Optional
buildId: string
A string that should be unique to the exact worker code/binary being executed.
This is used to uniquely identify the worker's code for a handful of purposes, including the worker versioning feature if you have opted into that with WorkerOptions.useVersioning. It will also populate the binaryChecksum field on older servers.
ℹ️ Required if useVersioning is true.
⚠️ NOTE: When used with versioning, you must pass this build ID to updateBuildIdCompatibility. Otherwise, this Worker will not pick up any tasks.
Default
@temporalio/worker package name and version + checksum of the Workflow bundle's code
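A hedged sketch of opting into Worker Versioning (the build ID value and task queue name are placeholders; remember that the build ID must also be registered on the Task Queue via updateBuildIdCompatibility before this Worker will receive tasks):

```ts
import { Worker } from '@temporalio/worker';

const worker = await Worker.create({
  taskQueue: 'my-task-queue',
  workflowsPath: require.resolve('./workflows'), // hypothetical Workflows module
  useVersioning: true,
  // Any string that uniquely identifies this exact build of the Worker code,
  // e.g. a git SHA or CI build number (placeholder value shown here).
  buildId: 'my-worker-v1.2.3',
});
```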
bundlerOptions
• Optional
bundlerOptions: Object
Type declaration
Name | Type | Description |
---|---|---|
ignoreModules? | string [] | List of modules to be excluded from the Workflows bundle. Use this option when your Workflow code references an import that cannot be used in isolation, e.g. a Node.js built-in module. Modules listed here MUST not be used at runtime. > NOTE: This is an advanced option that should be used with care. |
webpackConfigHook? | (config : Configuration ) => Configuration | Before Workflow code is bundled with Webpack, webpackConfigHook is called with the Webpack configuration object so you can modify it. |
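For illustration, a sketch of excluding a Node.js built-in from the Workflows bundle and tweaking the Webpack configuration (the ignored module and the hook body are examples, not requirements):

```ts
import { Worker } from '@temporalio/worker';

const worker = await Worker.create({
  taskQueue: 'my-task-queue',
  workflowsPath: require.resolve('./workflows'), // hypothetical Workflows module
  bundlerOptions: {
    // Example: 'fs' is referenced by an imported library but never called from Workflow code
    ignoreModules: ['fs'],
    webpackConfigHook: (config) => {
      // Inspect or adjust the generated Webpack configuration before bundling
      return { ...config, mode: 'development' };
    },
  },
});
```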
connection
• Optional
connection: NativeConnection
A connected NativeConnection instance.
If not provided, the Worker will default to connecting insecurely to localhost:7233.
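A minimal sketch of creating and sharing a NativeConnection (the address is a placeholder for your environment):

```ts
import { NativeConnection, Worker } from '@temporalio/worker';

// Connect once and share the connection across Workers in this process.
const connection = await NativeConnection.connect({
  address: 'my.temporal.host:7233', // placeholder address
});

const worker = await Worker.create({
  connection,
  namespace: 'default',
  taskQueue: 'my-task-queue',
  workflowsPath: require.resolve('./workflows'), // hypothetical Workflows module
});
```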
dataConverter
• Optional
dataConverter: DataConverter
Provide a custom DataConverter.
When bundling Workflows ahead of time, make sure to provide custom payload and failure converter paths as options to bundleWorkflowCode.
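A sketch, assuming a hypothetical ./payload-converter module that exports a payloadConverter instance:

```ts
import { Worker } from '@temporalio/worker';

const worker = await Worker.create({
  taskQueue: 'my-task-queue',
  workflowsPath: require.resolve('./workflows'), // hypothetical Workflows module
  dataConverter: {
    // Path to a module exporting a `payloadConverter` (hypothetical module name)
    payloadConverterPath: require.resolve('./payload-converter'),
  },
});
```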
debugMode
• Optional
debugMode: boolean
If true, the Worker runs Workflows in the same thread, allowing a debugger to attach to Workflow instances.
Workflow execution time will not be limited by the Worker in debugMode.
Default
false, unless the TEMPORAL_DEBUG environment variable is set.
defaultHeartbeatThrottleInterval
• Optional
defaultHeartbeatThrottleInterval: Duration
Default interval for throttling Activity heartbeats in case ActivityOptions.heartbeat_timeout is unset.
When the timeout is set in the ActivityOptions, throttling is set to heartbeat_timeout * 0.8.
Format
number of milliseconds or ms-formatted string
Default
30 seconds
enableNonLocalActivities
• Optional
enableNonLocalActivities: boolean
Whether or not to poll on the Activity task queue.
If disabled and Activities are registered on the Worker, it will run only Local Activities. This setting is ignored if no Activity is registered on the Worker.
Default
true
enableSDKTracing
• Optional
enableSDKTracing: boolean
Deprecated
SDK tracing is no longer supported. This option is ignored.
identity
• Optional
identity: string
A human-readable string that can identify your Worker.
Note that in most production environments, the default identity value may be unhelpful for traceability purposes. It is highly recommended that you set this value to something that will allow you to efficiently identify that particular Worker container/process/logs in your infrastructure (e.g. the task ID allocated to this container by your orchestrator).
Default
${process.pid}@${os.hostname()}
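For example, a sketch that derives the identity from an orchestrator-provided environment variable (ORCHESTRATOR_TASK_ID is a hypothetical variable name):

```ts
import { Worker } from '@temporalio/worker';

const worker = await Worker.create({
  taskQueue: 'my-task-queue',
  workflowsPath: require.resolve('./workflows'), // hypothetical Workflows module
  // Fall back to the SDK default identity format when the variable is not set
  identity: process.env.ORCHESTRATOR_TASK_ID ?? undefined,
});
```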
interceptors
• Optional
interceptors: WorkerInterceptors
A mapping of interceptor type to a list of factories or module paths.
Interceptors are called in order, from the first to the last, each one making the call to the next one, and the last one calling the original (SDK provided) function.
By default, WorkflowInboundLogInterceptor is installed. If you wish to customize the interceptors while keeping the defaults, use appendDefaultInterceptors.
When using workflowBundle, these Workflow interceptors (WorkerInterceptors.workflowModules) are not used. Instead, provide them via BundleOptions.workflowInterceptorModules when calling bundleWorkflowCode.
Before v1.9.0, calling appendDefaultInterceptors() was required when registering custom interceptors in order to preserve the SDK's logging interceptors. This is no longer the case.
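A sketch of keeping the default interceptors while adding a custom Workflow interceptors module (the ./workflow-interceptors module is hypothetical; check the appendDefaultInterceptors signature in your SDK version):

```ts
import { Worker, appendDefaultInterceptors } from '@temporalio/worker';

const worker = await Worker.create({
  taskQueue: 'my-task-queue',
  workflowsPath: require.resolve('./workflows'), // hypothetical Workflows module
  interceptors: appendDefaultInterceptors({
    // Hypothetical module exporting a Workflow `interceptors` factory
    workflowModules: [require.resolve('./workflow-interceptors')],
  }),
});
```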
maxActivitiesPerSecond
• Optional
maxActivitiesPerSecond: number
Limits the number of Activities per second that this Worker will process. (Does not limit the number of Local Activities.) The Worker will not poll for new Activities if by doing so it might receive and execute an Activity which would cause it to exceed this limit. Must be a positive number.
If unset, no rate limiting will be applied to the Worker's Activities. (tctl task-queue describe will display the absence of a limit as 100,000.)
maxCachedWorkflows
• Optional
maxCachedWorkflows: number
The number of Workflow isolates to keep cached in memory.
Cached Workflows continue execution from their last stopping point. If the Worker is asked to run an uncached Workflow, it will need to fetch and replay the entire Workflow history.
When reuseV8Context is disabled
The major factors contributing to a Workflow Execution's memory weight are:
- its input arguments;
- allocations made and retained by the Workflow itself;
- allocations made and retained by all loaded libraries (including the Node.js built-in context);
- the size of all Payloads sent or received by the Workflow (see Core SDK issue #363).
Most users are able to fit at least 250 Workflows per GB of available memory. In some performance tests, we managed to fit 750 Workflows per GB. Your mileage may vary.
When reuseV8Context is enabled
The major factors contributing to a Workflow Execution's memory weight are:
- its input arguments;
- allocations made and retained by the Workflow itself;
- the size of all Payloads sent or received by the Workflow (see Core SDK issue #363).
Since most objects are shared/reused across Workflows, the per-Workflow memory footprint is much smaller. Most users are able to fit at least 600 Workflows per GB of available memory. In one reference performance test, memory usage grew by approximately 1 MB per cached Workflow (including memory used for Activity executions of these Workflows). Your mileage may vary.
Default
If reuseV8Context = true, then max(floor(max(maxHeapMemory - 200MB, 0) * (600WF / 1024MB)), 10).
Otherwise, max(floor(max(maxHeapMemory - 400MB, 0) * (250WF / 1024MB)), 10).
maxConcurrentActivityTaskExecutions
• Optional
maxConcurrentActivityTaskExecutions: number
Maximum number of Activity tasks to execute concurrently. Adjust this to improve Worker resource consumption.
Mutually exclusive with the tuner option.
Default
100 if no tuner is set
maxConcurrentActivityTaskPolls
• Optional
maxConcurrentActivityTaskPolls: number
Maximum number of Activity tasks to poll concurrently.
Increase this setting if your Worker is failing to fill all of its maxConcurrentActivityTaskExecutions slots despite a backlog of Activity Tasks in the Task Queue (i.e. due to network latency). Can't be higher than maxConcurrentActivityTaskExecutions.
Default
min(10, maxConcurrentActivityTaskExecutions)
maxConcurrentLocalActivityExecutions
• Optional
maxConcurrentLocalActivityExecutions: number
Maximum number of Local Activity tasks to execute concurrently. Adjust this to improve Worker resource consumption.
Mutually exclusive with the tuner option.
Default
100 if no tuner is set
maxConcurrentWorkflowTaskExecutions
• Optional
maxConcurrentWorkflowTaskExecutions: number
Maximum number of Workflow Tasks to execute concurrently.
In general, a Workflow Worker's performance is mostly network bound (due to communication latency with the Temporal server). Accepting multiple Workflow Tasks concurrently helps compensate for network latency, until the point where the Worker gets CPU bound.
Increasing this number will have no impact if Workflow Task pollers can't fill available execution slots fast enough. Therefore, when adjusting this value, you may want to similarly adjust maxConcurrentWorkflowTaskPolls. See WorkerOptions.maxConcurrentWorkflowTaskPolls for more information.
Also, setting this value too high might cause Workflow Task timeouts due to the fact that the Worker is not able to complete processing accepted Workflow Tasks fast enough. Increasing the number of Workflow threads (see WorkerOptions.workflowThreadPoolSize) may help in that case.
General guidelines:
- High latency to Temporal Server => Increase this number
- Very short Workflow Tasks (no lengthy Local Activities) => increase this number
- Very long/heavy Workflow Histories => decrease this number
- Low CPU usage despite backlog of Workflow Tasks => increase this number
- High number of Workflow Task timeouts => decrease this number
In some performance tests against Temporal Cloud, running with a single Workflow thread and the Reuse V8 Context option enabled, we reached peak performance with a maxConcurrentWorkflowTaskExecutions of 120 and a maxConcurrentWorkflowTaskPolls of 60 (worker machine: Apple M2 Max; ping of 74 ms to Temporal Cloud; load test scenario: "activityCancellation10kIters", which has short histories, running a single activity). Your mileage may vary.
Can't be lower than 2 if maxCachedWorkflows is non-zero.
Mutually exclusive with the tuner option.
Default
40 if no tuner is set
maxConcurrentWorkflowTaskPolls
• Optional
maxConcurrentWorkflowTaskPolls: number
Maximum number of Workflow Tasks to poll concurrently.
In general, a Workflow Worker's performance is mostly network bound (due to communication latency with the Temporal server). Polling multiple Workflow Tasks concurrently helps compensate for this latency, by ensuring that the Worker is not starved waiting for the server to return new Workflow Tasks to execute.
This setting is highly related to WorkerOptions.maxConcurrentWorkflowTaskExecutions. In various performance tests, we generally got optimal performance by setting this value to about half of maxConcurrentWorkflowTaskExecutions. Your mileage may vary.
Setting this value higher than needed may have negative impact on the server's performance. Consequently, the server may impose a limit on the total number of concurrent Workflow Task pollers.
General guidelines:
- By default, set this value to half of maxConcurrentWorkflowTaskExecutions.
- Increase if the actual number of Workflow Tasks being processed concurrently is lower than maxConcurrentWorkflowTaskExecutions despite a backlog of Workflow Tasks in the Task Queue.
- Keep this value low for Task Queues which have very few concurrent Workflow Executions.
Can't be higher than maxConcurrentWorkflowTaskExecutions, and can't be lower than 2.
Default
min(10, maxConcurrentWorkflowTaskExecutions)
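As a hedged illustration of the guidelines above (the numbers below are starting points taken from the performance notes in this section, not recommendations for every workload):

```ts
import { Worker } from '@temporalio/worker';

const worker = await Worker.create({
  taskQueue: 'my-task-queue',
  workflowsPath: require.resolve('./workflows'), // hypothetical Workflows module
  reuseV8Context: true,
  // Execution slots: raise when latency to the server is high and Workflow Tasks are short
  maxConcurrentWorkflowTaskExecutions: 120,
  // Pollers: roughly half of the execution slots is a common starting point
  maxConcurrentWorkflowTaskPolls: 60,
});
```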
maxHeartbeatThrottleInterval
• Optional
maxHeartbeatThrottleInterval: Duration
Longest interval for throttling activity heartbeats
Format
number of milliseconds or ms-formatted string
Default
60 seconds
maxTaskQueueActivitiesPerSecond
• Optional
maxTaskQueueActivitiesPerSecond: number
Sets the maximum number of activities per second the task queue will dispatch, controlled server-side. Note that this only takes effect upon an activity poll request. If multiple workers on the same queue have different values set, they will thrash with the last poller winning.
If unset, no rate limiting will be applied to the task queue.
namespace
• Optional
namespace: string
The namespace this worker will connect to
Default
"default"
nonStickyToStickyPollRatio
• Optional
nonStickyToStickyPollRatio: number
maxConcurrentWorkflowTaskPolls * this number = the maximum number of pollers allowed for the non-sticky queue when sticky tasks are enabled. If both defaults are used, the sticky queue will allow 8 max pollers while the non-sticky queue will allow 2. The minimum for either poller is 1, so if maxConcurrentWorkflowTaskPolls is 1 and sticky queues are enabled, there will be 2 concurrent polls.
⚠️ This API is experimental and may be removed in the future if the poll scaling algorithm changes.
Default
0.2
reuseV8Context
• Optional
reuseV8Context: boolean
Toggle whether to reuse a single V8 context for the Workflow sandbox.
Context reuse significantly decreases the amount of resources taken up by Workflows. In basic stress tests, we've observed a 2/3 reduction in memory usage and a 1/3 to 1/2 reduction in CPU usage with this feature turned on.
NOTE: We strongly recommend enabling the Reuse V8 Context execution model, and there is currently no known reason not to use it. Support for the legacy execution model may be removed at some point in the future. Please report any issue that requires you to disable reuseV8Context.
Default
true
showStackTraceSources
• Optional
showStackTraceSources: boolean
Whether or not to send the sources in enhanced stack trace query responses
Default
false
shutdownForceTime
• Optional
shutdownForceTime: Duration
Time to wait before giving up on graceful shutdown and forcefully terminating the worker.
After this duration, the worker will throw GracefulShutdownPeriodExpiredError and any running activities and workflows will not be cleaned up. It is recommended to exit the process after this error is thrown.
Use this option if you must guarantee that the worker eventually shuts down.
Format
number of milliseconds or ms-formatted string
shutdownGraceTime
• Optional
shutdownGraceTime: Duration
Time to wait for pending tasks to drain after shutdown was requested.
In-flight Activities will be cancelled after this period, and their current attempt will be resolved as failed if they confirm cancellation (by throwing a CancelledFailure or AbortError).
Format
number of milliseconds or ms-formatted string
Default
0
sinks
• Optional
sinks: InjectedSinks<any>
Registration of a SinkFunction, including per-sink-function options.
Sinks are a mechanism for exporting data out of the Workflow sandbox. They are typically used to implement in-workflow observability mechanisms, such as logs, metrics and traces.
To prevent non-determinism issues, sink functions may not have any observable side effect on the execution of a workflow. In particular, sink functions may not return values to the workflow, nor throw errors to the workflow (an exception thrown from a sink function simply gets logged to the Runtime's logger).
For similar reasons, sink functions are not executed immediately when a call is made from workflow code. Instead, calls are buffered until the end of the workflow activation; they get executed right before returning a completion response to Core SDK. Note that the time it takes to execute sink functions delays sending a completion response to the server, and may therefore induce Workflow Task Timeout errors. Sink functions should thus be kept as fast as possible.
Sink functions are always invoked in the order that calls were made in workflow code. Note however that async sink functions are not awaited individually. Consequently, sink functions that internally perform async operations may end up executing concurrently.
Please note that sink functions only provide best-effort delivery semantics, which is generally suitable for log messages and general metrics collection. However, in various situations, a sink function call may execute more than once even though the sink function is configured with callDuringReplay: false. Similarly, sink function execution errors only result in log messages, and are therefore likely to go unnoticed. For use cases that require at-least-once execution guarantees, please consider using Local Activities instead. For use cases that require exactly-once or at-most-once execution guarantees, please consider using regular Activities.
Sink names starting with __temporal_ are reserved for use by the SDK itself. Do not register or use such sinks. Registering a sink named defaultWorkerLogger to redirect workflow logs to a custom logger is deprecated. Register a custom logger through Runtime.logger instead.
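A sketch of an injected sink, assuming a hypothetical analytics sink with a single track function (Workflow code would call it via proxySinks&lt;AnalyticsSinks&gt;() from @temporalio/workflow):

```ts
import { Worker } from '@temporalio/worker';
import type { InjectedSinks } from '@temporalio/worker';
import type { Sinks, WorkflowInfo } from '@temporalio/workflow';

// Hypothetical sink interface, shared with Workflow code
export interface AnalyticsSinks extends Sinks {
  analytics: {
    track(event: string): void;
  };
}

const sinks: InjectedSinks<AnalyticsSinks> = {
  analytics: {
    track: {
      fn(info: WorkflowInfo, event: string) {
        // Best-effort export out of the sandbox; must not feed anything back into the Workflow
        console.log(`${info.workflowId}: ${event}`);
      },
      callDuringReplay: false, // don't re-emit events while replaying history
    },
  },
};

const worker = await Worker.create({
  taskQueue: 'my-task-queue',
  workflowsPath: require.resolve('./workflows'), // hypothetical Workflows module
  sinks,
});
```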
stickyQueueScheduleToStartTimeout
• Optional
stickyQueueScheduleToStartTimeout: Duration
How long a workflow task is allowed to sit on the sticky queue before it is timed out and moved to the non-sticky queue where it may be picked up by any worker.
Format
number of milliseconds or ms-formatted string
Default
10s
taskQueue
• taskQueue: string
The task queue the worker will pull from
tuner
• Optional
tuner: WorkerTuner
Provide a custom WorkerTuner.
Mutually exclusive with the maxConcurrentWorkflowTaskExecutions, maxConcurrentActivityTaskExecutions, and maxConcurrentLocalActivityExecutions options.
useVersioning
• Optional
useVersioning: boolean
If set to true, this Worker opts into the worker versioning feature. This ensures it only receives workflow tasks for workflows which it claims to be compatible with. The buildId field is used as this Worker's version when enabled.
For more information, see https://docs.temporal.io/workers#worker-versioning
workflowBundle
• Optional
workflowBundle: WorkflowBundleOption
Use a pre-built bundle for Workflow code. Use bundleWorkflowCode to generate the bundle. The version of @temporalio/worker used when calling bundleWorkflowCode must be the exact same version used when calling Worker.create.
This is the recommended way to deploy Workers to production.
See https://docs.temporal.io/typescript/production-deploy#pre-build-code for more information.
When using this option, workflowsPath, bundlerOptions and any Workflow interceptor modules provided in interceptors are not used. To use Workflow interceptors, pass them via BundleOptions.workflowInterceptorModules when calling bundleWorkflowCode.
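A sketch of bundling Workflow code and passing the resulting bundle to the Worker (in a production deployment the bundle would typically be produced in a separate build step and written to disk; the module path is hypothetical):

```ts
import { bundleWorkflowCode, Worker } from '@temporalio/worker';

// Build the Workflows bundle ahead of creating the Worker
const workflowBundle = await bundleWorkflowCode({
  workflowsPath: require.resolve('./workflows'), // hypothetical Workflows module
});

const worker = await Worker.create({
  taskQueue: 'my-task-queue',
  workflowBundle,
});
```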
workflowThreadPoolSize
• Optional
workflowThreadPoolSize: number
Controls the number of threads to be created for executing Workflow Tasks.
Adjusting this value is generally not useful, as a Workflow Worker's performance is mostly network bound (due to communication latency with the Temporal server) rather than CPU bound. Increasing this may however help reduce the probability of Workflow Task Timeouts in some particular situations, for example when replaying many very large Workflow Histories at the same time. It may also make sense to tune this value if maxConcurrentWorkflowTaskExecutions and maxConcurrentWorkflowTaskPolls are increased enough so that the Worker doesn't get starved waiting for Workflow Tasks to execute.
There is no major downside in setting this value slightly higher than needed; consider however that there is a per-thread cost, both in terms of memory footprint and CPU usage, so arbitrarily setting some high number is definitely not advisable.
Threading model
All interactions with Core SDK (including polling for Workflow Activations and sending back completion results) happen on the main thread. The main thread then dispatches Workflow Activations to a worker thread, which creates and maintains per-Workflow isolated execution environments (aka the Workflow Sandbox), implemented as VM contexts.
When reuseV8Context is disabled, a new VM context is created for each Workflow handled by the Worker. Creating a new VM context is a relatively lengthy operation which blocks the Node.js event loop. Using multiple threads helps compensate for the impact of this operation on the Worker's performance.
When reuseV8Context is enabled, a single VM context is created for each worker thread, then reused for every Workflow handled by that thread (per-Workflow objects get shuffled in and out of that context on every Workflow Task). Consequently, there is generally no advantage in using multiple threads when reuseV8Context is enabled.
If more than one thread is used, Workflows will be load-balanced evenly between worker threads on the first Activation of a Workflow Execution, based on the number of Workflows currently owned by each worker thread; further Activations of that Workflow Execution will then be handled by the same thread, until the Workflow Execution gets evicted from cache.
Default
1 if reuseV8Context is enabled; 2 otherwise. Ignored if debugMode is enabled.
workflowsPath
• Optional
workflowsPath: string
Path to look up Workflows in; any function exported from this path will be registered as a Workflow in this Worker.
If this option is provided to Worker.create, Webpack compilation will be triggered.
This option is typically used for local development; for production, it's preferred to pre-build the Workflow bundle and pass it to Worker.create via the workflowBundle option.
See https://docs.temporal.io/typescript/production-deploy#pre-build-code for more information.
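A minimal sketch for local development, assuming Workflows are exported from a sibling ./workflows module:

```ts
import { Worker } from '@temporalio/worker';

const worker = await Worker.create({
  taskQueue: 'my-task-queue',
  // Every function exported from this module is registered as a Workflow.
  // Triggers a Webpack compilation when the Worker is created.
  workflowsPath: require.resolve('./workflows'),
});

await worker.run();
```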