Configure wrangler.toml
Wrangler optionally uses a `wrangler.toml` configuration file to customize the development and deployment setup for a Worker. It is best practice to treat `wrangler.toml` as the source of truth for configuring a Worker.
Sample wrangler.toml configuration

```toml
# Top-level configuration
name = "my-worker"
main = "src/index.js"
compatibility_date = "2022-07-12"

workers_dev = false
route = { pattern = "example.org/*", zone_name = "example.org" }

kv_namespaces = [
  { binding = "<MY_NAMESPACE>", id = "<KV_ID>" }
]

[env.staging]
name = "my-worker-staging"
route = { pattern = "staging.example.org/*", zone_name = "example.org" }

kv_namespaces = [
  { binding = "<MY_NAMESPACE>", id = "<STAGING_KV_ID>" }
]
```
Environments
The configuration for a Worker can become complex when you define different environments, and each environment has its own configuration. There is a default (top-level) environment and named environments that provide environment-specific configuration.
These are defined under `[env.name]` keys, such as `[env.staging]`, which you can then preview or deploy with the `-e` / `--env` flag in `wrangler` commands, such as `npx wrangler deploy --env staging`.
The majority of keys are inheritable, meaning that top-level configuration can be used in environments. Bindings, such as `vars` or `kv_namespaces`, are not inheritable and need to be defined explicitly for each environment.
Inheritable keys
Inheritable keys are configurable at the top-level, and can be inherited (or overridden) by environment-specific configuration.
- `name` (string): The name of your Worker. Alphanumeric characters and dashes only.
- `main` (string): The path to the entrypoint of your Worker that will be executed, for example `./src/index.ts`.
- `compatibility_date` (string): A date in the form `yyyy-mm-dd`, which will be used to determine which version of the Workers runtime is used. Refer to Compatibility dates.
- `account_id` (string): The ID of the account associated with your zone. You might have more than one account, so make sure to use the ID of the account associated with the zone/route you provide, if you provide one. It can also be specified through the `CLOUDFLARE_ACCOUNT_ID` environment variable.
- `compatibility_flags` (string[]): A list of flags that enable features from upcoming releases of the Workers runtime, usually used together with `compatibility_date`. Refer to compatibility dates.
- `workers_dev` (boolean): Enables use of the `*.workers.dev` subdomain to test and deploy your Worker. If you have a Worker that is only for `scheduled` events, you can set this to `false`. Defaults to `true`.
- `route` (Route): A route that your Worker should be deployed to. Only one of `routes` or `route` is required. Refer to types of routes.
- `routes` (Route[]): An array of routes that your Worker should be deployed to. Only one of `routes` or `route` is required. Refer to types of routes.
- `tsconfig` (string): Path to a custom `tsconfig`.
- `triggers` (object): Cron definitions to trigger a Worker’s `scheduled` function. Refer to triggers.
- `usage_model` (string): The usage model of your Worker. Refer to usage models.
- `rules` (Rule): An ordered list of rules that define which modules to import, and what type to import them as. You will need to specify rules to use `Text`, `Data` and `CompiledWasm` modules, or when you wish to have a `.js` file be treated as an `ESModule` instead of `CommonJS`.
- `build` (Build): Configures a custom build step to be run by Wrangler when building your Worker. Refer to Custom builds.
- `no_bundle` (boolean): Skip internal build steps and directly deploy your Worker script. You must have a plain JavaScript Worker with no dependencies.
- `minify` (boolean): Minify the Worker script before uploading.
- `node_compat` (boolean): Add polyfills for Node.js built-in modules and globals. Refer to node compatibility.
- `send_metrics` (boolean): Whether Wrangler should send usage metrics to Cloudflare for this project.
- `keep_vars` (boolean): Whether Wrangler should keep variables configured in the dashboard on deploy. Refer to source of truth.
- `logpush` (boolean): Enables Workers Trace Events Logpush for a Worker. Any scripts with this property will automatically get picked up by the Workers Logpush job configured for your account. Defaults to `false`.
Non-inheritable keys
Non-inheritable keys are configurable at the top-level, but cannot be inherited by environments and must be specified for each environment.
- `define` (Record<string, string>): A map of values to substitute when deploying your Worker.
- `vars` (object): A map of environment variables to set when deploying your Worker. Refer to Environment variables.
- `durable_objects` (object): A list of Durable Objects that your Worker should be bound to. Refer to Durable Objects.
- `kv_namespaces` (object): A list of KV namespaces that your Worker should be bound to. Refer to KV namespaces.
- `r2_buckets` (object): A list of R2 buckets that your Worker should be bound to. Refer to R2 buckets.
- `services` (object): A list of service bindings that your Worker should be bound to. Refer to service bindings.
Types of routes
There are four types of routes.
Simple route
This is a simple route that only requires a pattern.
Example: "example.com/*"
Zone ID route
- `pattern` (string): The pattern that your Worker can be run on, for example, `"example.com/*"`.
- `zone_id` (string): The ID of the zone that your `pattern` is associated with. Refer to Find zone and account IDs.
- `custom_domain` (boolean): Whether the Worker should be on a Custom Domain as opposed to a route. Defaults to `false`. Refer to Custom Domains.

Example: `{ pattern = "example.com/*", zone_id = "foo" }`
Zone name route
- `pattern` (string): The pattern that your Worker should be run on, for example, `"example.com/*"`.
- `zone_name` (string): The name of the zone that your `pattern` is associated with. If you are using API tokens, this will require the `Account` scope.
- `custom_domain` (boolean): Whether the Worker should be on a Custom Domain as opposed to a route. Defaults to `false`. Refer to Custom Domains.

Example: `{ pattern = "example.com/*", zone_name = "example.com" }`
Custom Domain route
This will use a Custom Domain as opposed to a route. Refer to Custom Domains.
- `pattern` (string): The pattern that your Worker should be run on, for example, `"example.com"`.
- `custom_domain` (boolean): Whether the Worker should be on a Custom Domain as opposed to a route. Defaults to `false`. Refer to Custom Domains.

Example:

```toml
# wrangler.toml
route = { pattern = "example.com", custom_domain = true }
```
Triggers
Triggers allow you to define the `cron` expression to invoke your Worker’s `scheduled` function. Refer to Supported cron expressions.

- `crons` (string[]): An array of `cron` expressions. To disable a Cron Trigger, set `crons = []`. Commenting out the `crons` key will not disable a Cron Trigger.

Example:

```toml
# wrangler.toml
[triggers]
crons = ["* * * * *"]
```
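For illustration, a minimal sketch of the `scheduled` handler that these cron expressions would invoke (the log output is purely illustrative):

```js
// src/index.js (illustrative)
export default {
  async scheduled(event, env, ctx) {
    // event.cron contains the cron expression that triggered this invocation.
    console.log(`Worker invoked by cron: ${event.cron}`);
  },
};
```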
Custom builds
You can configure a custom build step that will be run before your Worker is deployed. Refer to Custom builds.
- `command` (string): The command used to build your Worker. On Linux and macOS, the command is executed in the `sh` shell; on Windows, it is executed in the `cmd` shell. The `&&` and `||` shell operators may be used.
- `cwd` (string): The directory in which the command is executed.
- `watch_dir` (string | string[]): The directory to watch for changes while using `wrangler dev`. Defaults to the current working directory.

Example:

```toml
# wrangler.toml
[build]
command = "npm run build"
cwd = "build_cwd"
watch_dir = "build_watch_dir"
```
Bindings
D1 databases
D1 is Cloudflare’s serverless SQL database. A Worker can query a D1 database (or databases) by creating a binding to each database for D1’s client API.
To bind D1 databases to your Worker, assign an array of the below object to the `[[d1_databases]]` key.

- `binding` (string): The binding name used to refer to the D1 database. The value (string) you set will be used to reference this database in your Worker. The binding must be a valid JavaScript variable name. For example, `binding = "MY_DB"` or `binding = "productionDB"` would both be valid names for the binding.
- `database_name` (string): The name of the database. This is a human-readable name that allows you to distinguish between different databases, and is set when you first create the database.
- `database_id` (string): The ID of the database. The database ID is available when you first use `wrangler d1 create` or when you call `wrangler d1 list`, and uniquely identifies your database.

Example:

```toml
# wrangler.toml
[[d1_databases]]
binding = "PROD_DB"
database_name = "test-db"
database_id = "c020574a-5623-407b-be0c-cd192bab9545"
```
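At runtime, the `binding` name is exposed as a property on `env` that provides D1’s client API. A rough sketch of querying the `PROD_DB` binding above (the `users` table and the query are hypothetical):

```js
// src/index.js (illustrative)
export default {
  async fetch(request, env) {
    // PROD_DB matches the `binding` value in [[d1_databases]].
    const { results } = await env.PROD_DB
      .prepare("SELECT * FROM users WHERE id = ?")
      .bind(1)
      .all();
    return Response.json(results);
  },
};
```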
Durable Objects
Durable Objects provide low-latency coordination and consistent storage for the Workers platform.
To bind Durable Objects to your Worker, assign an array of the below object to the `durable_objects.bindings` key.

- `name` (string): The name of the binding used to refer to the Durable Object.
- `class_name` (string): The exported class name of the Durable Object.
- `script_name` (string): The script where the Durable Object is defined, if it is external to this Worker.
- `environment` (string): The environment of the `script_name` to bind to.

Example:

```toml
# wrangler.toml
durable_objects.bindings = [
  { name = "<TEST_OBJECT>", class_name = "<TEST_CLASS>" }
]
```
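At runtime, the binding provides a namespace from which you can derive an object ID and a stub to call. A minimal sketch, assuming the binding is named `TEST_OBJECT` without the placeholder brackets:

```js
// src/index.js (illustrative)
export default {
  async fetch(request, env) {
    // TEST_OBJECT matches the `name` value in durable_objects.bindings.
    const id = env.TEST_OBJECT.idFromName("example-instance");
    const stub = env.TEST_OBJECT.get(id);
    // Forward the incoming request to the Durable Object instance.
    return stub.fetch(request);
  },
};
```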
Migrations
When making changes to your Durable Object classes, you must perform a migration. Refer to Durable Object migrations.
- `tag` (string): A unique identifier for this migration.
- `new_classes` (string[]): The new Durable Objects being defined.
- `renamed_classes` ({from: string, to: string}[]): The Durable Objects being renamed.
- `deleted_classes` (string[]): The Durable Objects being removed.

Example:

```toml
# wrangler.toml
[[migrations]]
tag = ""
new_classes = [""]
renamed_classes = [{ from = "DurableObjectExample", to = "UpdatedName" }]
deleted_classes = ["DeprecatedClass"]
```
KV namespaces
Workers KV is a global, low-latency, key-value data store. It stores data in a small number of centralized data centers, then caches that data in Cloudflare’s data centers after access.
To bind KV namespaces to your Worker, assign an array of the below object to the `kv_namespaces` key.

- `binding` (string): The binding name used to refer to the KV namespace.
- `id` (string): The ID of the KV namespace.
- `preview_id` (string): The ID of the KV namespace used during `wrangler dev`.

Example:

```toml
# wrangler.toml
kv_namespaces = [
  { binding = "<TEST_NAMESPACE>", id = "<TEST_ID>" }
]
```
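At runtime, the binding exposes the KV API on `env`. A minimal sketch of reading and writing through a binding named `TEST_NAMESPACE` (the key and value are illustrative):

```js
// src/index.js (illustrative)
export default {
  async fetch(request, env) {
    // TEST_NAMESPACE matches the `binding` value in kv_namespaces.
    await env.TEST_NAMESPACE.put("greeting", "hello");
    const value = await env.TEST_NAMESPACE.get("greeting");
    return new Response(value);
  },
};
```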
Queues
Queues is Cloudflare’s global message queueing service, providing guaranteed delivery and message batching. To interact with a queue from Workers, you need a producer Worker to send messages to the queue and a consumer Worker to pull batches of messages out of the queue. A single Worker can produce to and consume from multiple queues.
To bind Queues to your producer Worker, assign an array of the below object to the `[[queues.producers]]` key.

- `queue` (string): The name of the queue, used on the Cloudflare dashboard.
- `binding` (string): The binding name used to refer to the queue in your Worker. The binding must be a valid JavaScript variable name. For example, `binding = "MY_QUEUE"` or `binding = "productionQueue"` would both be valid names for the binding.

Example:

```toml
# wrangler.toml
[[queues.producers]]
queue = "my-queue"
binding = "MY_QUEUE"
```
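At runtime, the producer binding exposes a `send()` method on `env`. A minimal sketch of sending a message through the `MY_QUEUE` binding above (the message body is illustrative):

```js
// src/index.js (illustrative)
export default {
  async fetch(request, env) {
    // MY_QUEUE matches the `binding` value in [[queues.producers]].
    await env.MY_QUEUE.send({ url: request.url, receivedAt: Date.now() });
    return new Response("Message queued");
  },
};
```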
To bind Queues to your consumer Worker, assign an array of the below object to the `[[queues.consumers]]` key.

- `queue` (string): The name of the queue, used on the Cloudflare dashboard.
- `max_batch_size` (number): The maximum number of messages allowed in each batch.
- `max_batch_timeout` (number): The maximum number of seconds to wait for messages to fill a batch before the batch is sent to the consumer Worker.
- `max_retries` (number): The maximum number of retries for a message, if it fails or `retryAll()` is invoked.
- `dead_letter_queue` (string): The name of another queue to send a message to if it fails processing at least `max_retries` times. If a `dead_letter_queue` is not defined, messages that repeatedly fail processing will be discarded. If there is no queue with the specified name, it will be created automatically.
- `max_concurrency` (number): The maximum number of concurrent consumers allowed to run at once. Leaving this unset will mean that the number of invocations will scale to the currently supported maximum. Refer to Consumer concurrency for more information on how consumers autoscale, particularly when messages are retried.

Example:

```toml
# wrangler.toml
[[queues.consumers]]
queue = "my-queue"
max_batch_size = 10
max_batch_timeout = 30
max_retries = 10
dead_letter_queue = "my-queue-dlq"
max_concurrency = 5
```
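The consumer Worker receives batches through a `queue()` handler. A minimal sketch for the configuration above (the processing logic is illustrative):

```js
// src/index.js (illustrative)
export default {
  async queue(batch, env) {
    for (const message of batch.messages) {
      // message.body is the value sent by the producer Worker.
      console.log(`Processing message ${message.id}`, message.body);
    }
    // Throwing an error (or calling batch.retryAll()) causes messages to be
    // retried up to max_retries before going to the dead_letter_queue.
  },
};
```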
R2 buckets
Cloudflare R2 Storage allows developers to store large amounts of unstructured data without the costly egress bandwidth fees associated with typical cloud storage services.
To bind R2 buckets to your Worker, assign an array of the below object to the `r2_buckets` key.

- `binding` (string): The binding name used to refer to the R2 bucket.
- `bucket_name` (string): The name of this R2 bucket.
- `preview_bucket_name` (string): The preview name of this R2 bucket used during `wrangler dev`.

Example:

```toml
# wrangler.toml
r2_buckets = [
  { binding = "<TEST_BUCKET>", bucket_name = "<TEST_BUCKET>" }
]
```
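At runtime, the binding exposes the R2 API on `env`. A minimal sketch of writing and reading an object through a binding named `TEST_BUCKET` (the object key and contents are illustrative):

```js
// src/index.js (illustrative)
export default {
  async fetch(request, env) {
    // TEST_BUCKET matches the `binding` value in r2_buckets.
    await env.TEST_BUCKET.put("example.txt", "Hello from R2");
    const object = await env.TEST_BUCKET.get("example.txt");
    if (object === null) {
      return new Response("Object not found", { status: 404 });
    }
    return new Response(object.body);
  },
};
```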
Service bindings
A service binding allows you to send HTTP requests to another Worker without those requests going over the Internet. The request immediately invokes the downstream Worker, reducing latency as compared to a request to a third-party service. Refer to About Service Bindings.
To bind other Workers to your Worker, assign an array of the below object to the `services` key.

- `binding` (string): The binding name used to refer to the bound service.
- `service` (string): The name of the service.
- `environment` (string): The environment of the service (for example, `production`, `staging`, etc). Refer to Environments.

Example:

```toml
# wrangler.toml
services = [
  { binding = "<TEST_BINDING>", service = "<TEST_WORKER>" }
]
```
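At runtime, the bound Worker is called through the binding’s `fetch()` method. A minimal sketch, assuming the binding is named `TEST_BINDING` without the placeholder brackets:

```js
// src/index.js (illustrative)
export default {
  async fetch(request, env) {
    // TEST_BINDING matches the `binding` value in services; the request is
    // handed to the bound Worker without leaving Cloudflare's network.
    return env.TEST_BINDING.fetch(request);
  },
};
```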
Analytics Engine Datasets
Workers Analytics Engine provides analytics, observability and data logging from Workers. Write data points to your Worker binding, then query the data using the SQL API.
To bind Analytics Engine datasets to your Worker, assign an array of the below object to the `analytics_engine_datasets` key.

- `binding` (string): The binding name used to refer to the dataset.
- `dataset` (string): The dataset name to write to. This will default to the same name as the binding if it is not supplied.

Example:

```toml
# wrangler.toml
[[analytics_engine_datasets]]
binding = "<BINDING_NAME>"
dataset = "<DATASET_NAME>"
```
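At runtime, data points are written through the binding’s `writeDataPoint()` method. A minimal sketch, assuming the binding is named `BINDING_NAME` without the placeholder brackets (the field values are illustrative):

```js
// src/index.js (illustrative)
export default {
  async fetch(request, env) {
    // BINDING_NAME matches the `binding` value in [[analytics_engine_datasets]].
    env.BINDING_NAME.writeDataPoint({
      blobs: [request.headers.get("user-agent") ?? "unknown"],
      doubles: [1],
      indexes: ["request"],
    });
    return new Response("Data point written");
  },
};
```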
mTLS Certificates
To communicate with origins that require client authentication, a Worker can present a certificate for mTLS in subrequests. Wrangler provides the `mtls-certificate` command to upload and manage these certificates.

To create a binding to an mTLS certificate for your Worker, assign an array of objects with the following shape to the `mtls_certificates` key.

- `binding` (string): The binding name used to refer to the certificate.
- `certificate_id` (string): The ID of the certificate. Wrangler displays this via the `mtls-certificate upload` and `mtls-certificate list` commands.

Example of a wrangler.toml configuration that includes an mTLS certificate binding:

```toml
# wrangler.toml
mtls_certificates = [
  { binding = "<BINDING_NAME>", certificate_id = "<CERTIFICATE_ID>" }
]
```

mTLS certificate bindings can then be used at runtime to communicate with secured origins via their `fetch` method.
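A minimal sketch of such a subrequest, assuming the binding is named `MY_CERT` and using a placeholder origin URL:

```js
// src/index.js (illustrative)
export default {
  async fetch(request, env) {
    // MY_CERT matches the `binding` value in mtls_certificates. The fetch()
    // below presents the bound client certificate to the origin.
    return env.MY_CERT.fetch("https://secured-origin.example.com/api");
  },
};
```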
Email bindings
You can send an email about your Worker’s activity from your Worker to an email address verified on Email Routing. This is useful when you want to know about certain types of events being triggered, for example.
Before you can bind an email address to your Worker, you need to enable Email Routing and have at least one verified email address. Then, assign an array of the below object to the `send_email` key with the type of email binding you need.

- `type` (string): Defines that you are creating bindings for sending emails from your Worker.
- `attribute` (string): Defines the type of binding. Refer to Types of bindings for more information.

You can add one or more types of bindings to your `wrangler.toml` file. However, each attribute must be on its own line:

```toml
# wrangler.toml
send_email = [
  { type = "send_email", name = "<NAME_FOR_BINDING1>" },
  { type = "send_email", name = "<NAME_FOR_BINDING2>", destination_address = "<YOUR_EMAIL>@example.com" },
  { type = "send_email", name = "<NAME_FOR_BINDING3>", allowed_destination_addresses = ["<YOUR_EMAIL>@example.com", "<YOUR_EMAIL2>@example.com"] }
]
```
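A rough sketch of sending a message through one of these bindings at runtime, assuming a binding named `SEND_EMAIL` and a hand-built raw MIME message (the addresses are placeholders and must be covered by your Email Routing configuration):

```js
// src/index.js (illustrative)
import { EmailMessage } from "cloudflare:email";

export default {
  async fetch(request, env) {
    // A minimal raw RFC 5322 message, built by hand to keep the sketch
    // dependency-free; a MIME library is often used instead.
    const raw =
      "From: sender@example.com\r\n" +
      "To: recipient@example.com\r\n" +
      "Subject: Worker notification\r\n" +
      "\r\n" +
      "Something happened in your Worker.";

    const message = new EmailMessage(
      "sender@example.com",
      "recipient@example.com",
      raw
    );
    // SEND_EMAIL matches the `name` value of the send_email binding.
    await env.SEND_EMAIL.send(message);
    return new Response("Email sent");
  },
};
```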
Bundling
You can bundle assets into your Worker using the `rules` key, making these assets available to be imported when your Worker is invoked. The `rules` key will be an array of the below object.

- `type` (string): The type of asset. Must be one of: `ESModule`, `CommonJS`, `CompiledWasm`, `Text` or `Data`.
- `globs` (string[]): An array of glob rules (for example, `["**/*.md"]`). Refer to glob.
- `fallthrough` (boolean): When set to `true` on a rule, this allows you to have multiple rules for the same `type`.

Example:

```toml
# wrangler.toml
rules = [
  { type = "Text", globs = ["**/*.md"], fallthrough = true }
]
```
Importing assets within a Worker
You can import and refer to these assets within your Worker, like so:
```js
// index.js
import markdown from './example.md'

export default {
  async fetch() {
    return new Response(markdown)
  }
}
```
Local development settings
You can configure various aspects of local development, such as the local protocol or port.
- `ip` (string): IP address for the local dev server to listen on. Defaults to `localhost`.
- `port` (number): Port for the local dev server to listen on. Defaults to `8787`.
- `local_protocol` (string): Protocol that the local dev server listens to requests on. Defaults to `http`.
- `upstream_protocol` (string): Protocol that the local dev server forwards requests on. Defaults to `https`.
- `host` (string): Host to forward requests to. Defaults to the host of the first `route` of the Worker.

Example:

```toml
# wrangler.toml
[dev]
ip = "192.168.1.1"
port = 8080
local_protocol = "http"
```
Environmental variables
When developing your Worker or Pages Functions, create a `.dev.vars` file in the root of your project to define variables that will be used when running `wrangler dev` or `wrangler pages dev`, as opposed to using another environment and `[vars]` in `wrangler.toml`. This works in both the local and remote development modes.
This file should be formatted like a `dotenv` file, such as `KEY=VALUE`.

```
# .dev.vars
SECRET_KEY=value
```
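During `wrangler dev`, these values are exposed on `env` in the same way as `[vars]` from `wrangler.toml`. A minimal sketch that checks the variable defined above:

```js
// src/index.js (illustrative)
export default {
  async fetch(request, env) {
    // SECRET_KEY comes from .dev.vars during local development.
    return new Response(`SECRET_KEY is ${env.SECRET_KEY ? "set" : "not set"}`);
  },
};
```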
Node compatibility
If you depend on Node.js APIs, either directly in your own code or via a library you depend on, you can either use a subset of Node.js APIs available directly in the Workers runtime, or add polyfills for a subset of Node.js APIs to your own code.
Use runtime APIs directly
A growing subset of Node.js APIs are available directly as Runtime APIs, with no need to add polyfills to your own code. To enable these APIs in your Worker, add the `nodejs_compat` compatibility flag to your `wrangler.toml`:

```toml
# wrangler.toml
compatibility_flags = [ "nodejs_compat" ]
```
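With the flag enabled, supported Node.js built-ins can be imported directly. A minimal sketch using `node:buffer` (assuming `node:buffer` is among the APIs available in your runtime version):

```js
// src/index.js (illustrative); requires the nodejs_compat compatibility flag.
import { Buffer } from "node:buffer";

export default {
  async fetch() {
    // Use the Node.js Buffer API directly in the Workers runtime.
    const encoded = Buffer.from("hello worker").toString("base64");
    return new Response(encoded);
  },
};
```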
Add polyfills using Wrangler
Add polyfills for a subset of Node.js APIs to your Worker by adding the `node_compat` key to your `wrangler.toml` or by passing the `--node-compat` flag to `wrangler`.

```toml
# wrangler.toml
node_compat = true
```
It is not possible to polyfill all Node.js APIs or behaviors, but it is possible to polyfill some of them. This is currently powered by `@esbuild-plugins/node-globals-polyfill`, which itself is powered by `rollup-plugin-node-polyfills`.
Workers Sites
Workers Sites allows you to host static websites, or dynamic websites using frameworks like Vue or React, on Workers.
- `bucket` (string): The directory containing your static assets. It must be a path relative to your `wrangler.toml` file.
- `include` (string[]): An exclusive list of `.gitignore`-style patterns that match file or directory names from your bucket location. Only matched items will be uploaded.
- `exclude` (string[]): A list of `.gitignore`-style patterns that match files or directories in your bucket that should be excluded from uploads.

Example:

```toml
# wrangler.toml
[site]
bucket = "./public"
include = ["upload_dir"]
exclude = ["ignore_dir"]
```
Proxy support
Corporate networks will often have proxies, and this can sometimes cause connectivity issues. To configure Wrangler with the appropriate proxy details, use the below environmental variables:

- `https_proxy`
- `HTTPS_PROXY`
- `http_proxy`
- `HTTP_PROXY`

To configure this on macOS, add `HTTP_PROXY=http://<YOUR_PROXY_HOST>:<YOUR_PROXY_PORT>` before your Wrangler commands.

Example:

```sh
$ HTTP_PROXY=http://localhost:8080 wrangler dev
```

If your IT team has configured your computer’s proxy settings, be aware that the first non-empty environment variable in this list will be used when Wrangler makes outgoing requests. For example, if both `https_proxy` and `http_proxy` are set, Wrangler will only use `https_proxy` for outgoing requests.
Source of truth
We recommend treating your `wrangler.toml` file as the source of truth for your Worker configuration, and avoiding changes to your Worker via the Cloudflare dashboard if you are using Wrangler.

If you need to make changes to your Worker from the Cloudflare dashboard, the dashboard will generate a TOML snippet for you to copy into your `wrangler.toml` file, which will help ensure your `wrangler.toml` file is always up to date.

If you change your environment variables in the Cloudflare dashboard, Wrangler will override them the next time you deploy. If you want to disable this behavior, add `keep_vars = true` to your `wrangler.toml`.

If you change your routes in the dashboard, Wrangler will override them in the next deploy with the routes you have set in your `wrangler.toml`. To manage routes via the Cloudflare dashboard only, remove any `route` and `routes` keys from your `wrangler.toml` file. Then add `workers_dev = false` to your `wrangler.toml` file. For more information, refer to Deprecations.

Note that Wrangler will not delete your secrets (encrypted environment variables) unless you run `wrangler secret delete <key>`.