D1 now returns a count of rows_written and rows_read for every query executed, allowing you to assess the cost of a query for both pricing and index optimization purposes.
The meta object returned in D1’s Client API contains a total count of the rows read (rows_read) and rows written (rows_written) by that query. For example, a query that performs a full table scan (such as SELECT * FROM users) of a table with 5000 rows would return a rows_read value of 5000:
"meta":{
"duration":0.20472300052642825,
"size_after":45137920,
"rows_read":5000,
"rows_written":0
}
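As a minimal sketch, a Worker with a D1 binding (here assumed to be named DB) can read these counts directly from the query result’s meta object:

export interface Env {
  DB: D1Database;
}

export default {
  async fetch(request: Request, env: Env): Promise<Response> {
    // A full table scan reads every row, so rows_read matches the table's row count.
    const result = await env.DB.prepare("SELECT * FROM users").all();
    const { rows_read, rows_written } = result.meta;
    return Response.json({ rows_read, rows_written });
  },
};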
Refer to D1 pricing documentation to understand how reads and writes are measured. D1 remains free to use during the alpha period.
Stopped collecting data in the old Layer 3 data source.
Updated the Layer 3 timeseries endpoint to use the new Layer 3 data source by default. Fetching the old data source now requires sending the parameter metric=bytes_old.
Deprecated the Layer 3 summary endpoint; it will stop receiving data after 2023-08-14.
You can now bind a D1 database to your Workers directly in the Cloudflare dashboard. To bind D1 from the Cloudflare dashboard, select your Worker project -> Settings -> Variables -> D1 Database Bindings.
Note: If you have previously deployed a Worker with a D1 database binding with a version of wrangler prior to 3.5.0, you must upgrade to wrangler v3.5.0 first before you can edit your D1 database bindings in the Cloudflare dashboard. New Workers projects do not have this limitation.
Legacy D1 alpha users who had previously prefixed their database binding manually with __D1_BETA__ should remove this as part of this upgrade. Your Worker scripts should call your D1 database via env.BINDING_NAME only. Refer to the latest D1 getting started guide for best practices.
We recommend all D1 alpha users begin using wrangler 3.5.0 (or later) to benefit from improved TypeScript types and future D1 API improvements.
Stream now supports adding a scheduled deletion date to new and existing videos. Live inputs support deletion policies for automatic recording deletion.
#3704 8e231afd Thanks @JacobMGEvans! - secret:bulk exit 1 on failure
Previously, secret:bulk would only log an error when any of the upload requests failed.
Now, when a secret:bulk upload request fails, an Error is thrown and the root .catch() calls process.exit(1).
This enables error handling in programmatic uses of secret:bulk.
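For example, a small Node.js sketch (secrets.json is a hypothetical path; assumes wrangler is installed) that relies on the new non-zero exit code:

import { spawnSync } from "node:child_process";

// Run the bulk upload; stdio is inherited so wrangler's own output is still shown.
const result = spawnSync("npx", ["wrangler", "secret:bulk", "secrets.json"], {
  stdio: "inherit",
});

// Before this change, a failed upload still exited 0; now the status reflects failure.
if (result.status !== 0) {
  throw new Error(`secret:bulk failed with exit code ${result.status}`);
}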
Databases using D1’s new storage subsystem can now grow to 500 MB each, up from the previous 100 MB limit. This applies to both existing and newly created databases.
Updated the HTTP timeseries endpoint URLs to timeseries_groups (example) for consistency. The old timeseries endpoints are still available, but will soon be removed.
Databases created via the Cloudflare dashboard and Wrangler (as of v3.4.0) now use D1’s new storage subsystem by default. The new backend can be 6-20x faster than D1’s original alpha backend.
To understand which storage subsystem your database uses, run wrangler d1 info YOUR_DATABASE and inspect the version field in the output.
Databases with version: beta use the new storage backend and support the Time Travel API. Databases with version: alpha only use D1’s older, legacy backend.
Time Travel is now available. Time Travel allows you to restore a D1 database back to any minute within the last 30 days (Workers Paid plan) or 7 days (Workers Free plan), at no additional cost for storage or restore operations.
Refer to the Time Travel documentation to learn how to travel backwards in time.
#3649 e2234bbc Thanks @JacobMGEvans! - Feature: stdin support for secret:bulk
Added functionality that allows files and strings to be piped in, or provided via other means of standard input. This allows for a broader variety of use cases and improved DX.
This implementation is also fully backward compatible with the previous input method of a file path to JSON.
#3610 bfbe49d0 Thanks @Skye-31! - Wrangler Capnp Compilation
This PR replaces logfwdr’s schema property with a new unsafe.capnp object. This object accepts either a compiled_schema property, or a base_path and array of source_schemas to get Wrangler to compile the capnp schema for you.
Use Time Travel to restore, fork or copy a database at a specific point-in-time.
Commands:
wrangler d1 time-travel info Retrieve information about a database at a specific point-in-time using Time Travel.
Options:
--timestamp accepts a Unix (seconds from epoch) or RFC3339 timestamp (e.g. 2023-07-13T08:46:42.228Z) to retrieve a bookmark for [string]
--json return output as clean JSON [boolean] [default: false]
wrangler d1 time-travel restore Restore a database back to a specific point-in-time.
Options:
--bookmark Bookmark to use for time travel [string]
--timestamp accepts a Unix (seconds from epoch) or RFC3339 timestamp (e.g. 2023-07-13T08:46:42.228Z) to retrieve a bookmark for [string]
--json return output as clean JSON [boolean] [default: false]
#3384 ccc19d57 Thanks @Peter-Sparksuite! - feature: add wrangler deploy option: --old-asset-ttl [seconds]
wrangler deploy immediately deletes assets that are no longer current, which has the side effect that existing progressive web app users see 404 errors as the app tries to access assets that no longer exist.
This new feature:
does not change the default behavior of immediately deleting no-longer-needed assets.
allows users to opt in to expiring newly obsolete assets after the provided number of seconds, so that current users have a time buffer before seeing 404 errors.
is careful to avoid extending existing expiration targets on already-expiring old assets, which may have contributed to unexpectedly large KV storage accumulations (perhaps why, in Wrangler 1.x, the reversion https://github.com/cloudflare/wrangler-legacy/pull/2228 happened).
introduces no breaking changes for users relying on the default behavior, but some output changes when the new option is used, to indicate the change in behavior.
#3426 5a74cb55 Thanks @matthewdavidrodgers! - Prefer non-force deletes unless a Worker is a dependency of another.
If a Worker is used as a service binding, a durable object namespace, an outbound for a dynamic dispatch namespace, or a tail consumer, then deleting that Worker will break the existing Workers that depend on it. Deleting with ?force=true allows you to delete anyway, which is currently the default in Wrangler.
Force deletes are not often necessary, however, and using them as the default has unfortunate consequences in the API. To avoid them, we check if any of those conditions exist and present the information to the user. If they explicitly acknowledge they’re OK with breaking their other Workers, we let them do it. Otherwise, we’ll always use the much safer non-force delete. We also add a --force flag to the delete command to skip the checks and confirmation and proceed with ?force=true.
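As a hedged sketch of the underlying API call, the ?force=true query parameter is passed to the script deletion endpoint; the account ID, script name and API token below are placeholders:

const ACCOUNT_ID = "<ACCOUNT_ID>";
const SCRIPT_NAME = "<SCRIPT_NAME>";
const API_TOKEN = "<API_TOKEN>";

// Without ?force=true the API refuses to delete a Worker that other Workers depend on.
const resp = await fetch(
  `https://api.cloudflare.com/client/v4/accounts/${ACCOUNT_ID}/workers/scripts/${SCRIPT_NAME}?force=true`,
  { method: "DELETE", headers: { Authorization: `Bearer ${API_TOKEN}` } }
);

if (!resp.ok) {
  console.error(`Delete failed with status ${resp.status}`);
}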
#3585 e33bb44a Thanks @mrbbot! - fix: register middleware once for module workers
Ensure middleware is only registered on the first request when using module workers.
This should prevent stack overflows and slowdowns when making a large number of requests to wrangler dev with infrequent reloads.
Improved performance for ranged reads on very large files. Previously, ranged reads near the end of very large files were noticeably slower than ranged reads on smaller files. Performance should now be consistently good, independent of file size.
#3414 6b1870ad Thanks @rozenmd! - fix: in D1, lift error.cause into the error message
Prior to this PR, folks had to console.log the error.cause property to understand why their D1 operations were failing. With this PR, error.cause will continue to work, but we’ll also lift the cause into the error message.
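A small sketch of how this surfaces in a Worker (assuming a D1 binding named DB and a hypothetical users table):

export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    try {
      await env.DB.prepare("INSERT INTO users (id) VALUES (?)").bind(1).run();
      return new Response("ok");
    } catch (e) {
      const err = e as Error;
      // The underlying cause is now part of err.message; err.cause is still populated for now.
      return new Response(`${err.message} (cause: ${err.cause})`, { status: 500 });
    }
  },
};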
#3454 a2194043 Thanks @mrbbot! - chore: upgrade miniflare to 3.0.1
This version ensures root CA certificates are trusted on Windows.
It also loads extra certificates from the NODE_EXTRA_CA_CERTS environment variable,
allowing wrangler dev to be used with Cloudflare WARP enabled.
New documentation has been published on how to use D1’s support for generated columns to define columns that are dynamically generated on write (or read). Generated columns allow you to extract data from JSON objects or use the output of other SQL functions.
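A minimal sketch (hypothetical table and column names) of a generated column that extracts a field from a JSON payload on write, using SQLite’s json_extract():

export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    // user_id is computed from the JSON payload automatically whenever a row is written.
    await env.DB
      .prepare(
        "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, payload TEXT, user_id TEXT GENERATED ALWAYS AS (json_extract(payload, '$.userId')) VIRTUAL)"
      )
      .run();
    await env.DB
      .prepare("INSERT INTO events (payload) VALUES (?)")
      .bind(JSON.stringify({ userId: "u_123" }))
      .run();
    return new Response("created");
  },
};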
Fixed a bug where calling GetBucket on a non-existent bucket would return a 500 instead of a 404.
Improved S3 compatibility for ListObjectsV1: NextMarker is now only set when IsTruncated is true.
The R2 worker bindings now support parsing conditional headers with multiple etags. These etags can now be strong, weak or a wildcard. Previously the bindings only accepted headers containing a single strong etag.
S3 putObject now supports sha256 and sha1 checksums. These were already supported by the R2 worker bindings.
CopyObject in the S3-compatible API now supports Cloudflare-specific headers, which allow the copy operation to be conditional on the state of the destination object.
To facilitate a transition from the previous Error.cause behaviour, detailed error messages will continue to be populated within Error.cause as well as the top-level Error object until approximately July 14th, 2023. Future versions of both wrangler and the D1 client API will no longer populate Error.cause after this date.
Following an update to the WHATWG URL spec, the delete() and has() methods of the URLSearchParams class now accept an optional second argument to specify the search parameter’s value. This is potentially a breaking change, so it is gated behind the new urlsearchparams_delete_has_value_arg and url_standard compatibility flags.
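With those flags enabled, the new value argument works as in this short sketch:

const params = new URLSearchParams("a=1&a=2&b=3");

params.delete("a", "2");           // removes only the a=2 pair
console.log(params.toString());    // "a=1&b=3"
console.log(params.has("a", "1")); // true
console.log(params.has("a", "2")); // false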
#3124 2956c31d Thanks @verokarhu! - fix: failed d1 migrations not treated as errors
This PR teaches wrangler to return a non-success exit code when a set of migrations fails.
It also cleans up wrangler d1 migrations apply output significantly, to only log information relevant to migrations.
A new Hibernatable WebSockets API
(beta) has been added to Durable Objects. The Hibernatable
WebSockets API allows a Durable Object that is not currently running an event
handler (for example, processing a WebSocket message or alarm) to be removed from
memory while keeping its WebSockets connected (“hibernation”). A Durable Object
that hibernates will not incur billable Duration (GB-sec) charges.
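A minimal sketch of a Durable Object using the Hibernatable WebSockets API (class and binding names are hypothetical):

export class WebSocketServer {
  constructor(private state: DurableObjectState) {}

  async fetch(request: Request): Promise<Response> {
    const pair = new WebSocketPair();
    const [client, server] = Object.values(pair);

    // Hand the server side to the runtime instead of calling server.accept(),
    // so the object can be evicted from memory while the socket stays connected.
    this.state.acceptWebSocket(server);

    return new Response(null, { status: 101, webSocket: client });
  }

  // Called by the runtime when a message arrives, waking the object if it hibernated.
  async webSocketMessage(ws: WebSocket, message: string | ArrayBuffer): Promise<void> {
    ws.send(`echo: ${message}`);
  }
}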
D1 has a new experimental storage backend that dramatically improves query throughput, latency and reliability. The experimental backend will become the default backend in the near future. To create a database using the experimental backend, use wrangler and set the --experimental-backend flag when creating a database: wrangler d1 create <NAME> --experimental-backend.
You can now provide a location hint when creating a D1 database, which will influence where the leader (writer) is located. By default, D1 will automatically create your database in a location close to where you issued the request to create a database. In most cases this allows D1 to choose the optimal location for your database on your behalf.
New documentation has been published that covers D1’s extensive JSON function support. JSON functions allow you to parse, query and modify JSON directly from your SQL queries, reducing the number of round trips to your database, or data queried.
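For example, a short sketch (hypothetical table and column names) that filters and extracts JSON fields inside the query itself, so the Worker only receives the data it needs:

export default {
  async fetch(request: Request, env: { DB: D1Database }): Promise<Response> {
    const { results } = await env.DB
      .prepare(
        "SELECT json_extract(profile, '$.email') AS email FROM users WHERE json_extract(profile, '$.plan') = ?"
      )
      .bind("pro")
      .all();
    return Response.json(results);
  },
};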
The V2 build system is now available in open beta. Enable the V2 build system by going to your Pages project in the Cloudflare dashboard and selecting Settings > Build & deployments > Build system version.
#3197 3b3fadfa Thanks @mrbbot! - feature: rename wrangler publish to wrangler deploy
This ensures consistency with other messaging, documentation and our dashboard,
which all refer to deployments. This also avoids confusion with the similar but
very different npm publish command. wrangler publish will remain a
deprecated alias for now, but will be removed in the next major version of Wrangler.
#3197 bac9b5de Thanks @mrbbot! - feature: enable local development with Miniflare 3 and workerd by default
wrangler dev now runs fully-locally by default, using the open-source Cloudflare Workers runtime workerd.
To restore the previous behaviour of running on a remote machine with access to production data, use the new --remote flag.
The --local and --experimental-local flags have been deprecated, as this behaviour is now the default, and will be removed in the next major version.
#3197 02a672ed Thanks @mrbbot! - feature: enable persistent storage in local mode by default
Wrangler will now persist local KV, R2, D1, Cache and Durable Object data
in the .wrangler folder, by default, between reloads. This persistence
directory can be customised with the --persist-to flag. The --persist flag
has been removed, as this is now the default behaviour.
#3197 dc755fdc Thanks @mrbbot! - feature: remove delegation to locally installed versions
Previously, if Wrangler was installed globally and locally within a project,
running the global Wrangler would instead invoke the local version.
This behaviour was contrary to most other JavaScript CLI tools and has now been
removed. We recommend you use npx wrangler instead, which will invoke the
local version if installed, or install globally if not.
#3048 6ccc4fa6 Thanks @oustn! - Fix: fix local registry server closed
Closes #1920. Sometimes starting the local dev server kills the devRegistry server, so the devRegistry server can no longer be used. We now listen for the devRegistry server’s close event and reset the server to null. When registerWorker is called, we check whether the server is null and start a new server if needed.
#3001 f9722873 Thanks @rozenmd! - fix: make it possible to create a D1 database backed by the experimental backend, and make d1 execute’s batch size configurable
With this PR, users will be able to run wrangler d1 create <NAME> --experimental-backend to create new D1 databases that use the experimental backend. You can also run wrangler d1 migrations apply <NAME> --experimental-backend to run migrations against an experimental database.
On top of that, both wrangler d1 migrations apply <NAME> and wrangler d1 execute <NAME> now have a configurable batch-size flag, as the experimental backend can handle more than 10000 statements at a time.
The new connect() method allows you to connect to any TCP-speaking services directly from your Workers. To learn more about other protocols supported on the Workers platform, visit the new Protocols documentation.
We have added new native database integrations for popular serverless database providers, including Neon, PlanetScale, and Supabase. Native integrations automatically handle the process of creating a connection string and adding it as a Secret to your Worker.
You can now also connect directly to databases over TCP from a Worker, starting with PostgreSQL. Support for PostgreSQL is based on the popular pg driver, and allows you to connect to any PostgreSQL instance over TLS from a Worker directly.
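A hedged sketch of a Worker connecting over TLS with the pg driver (assumes pg is installed as a dependency and DATABASE_URL is configured as a Secret):

import { Client } from "pg";

export default {
  async fetch(request: Request, env: { DATABASE_URL: string }): Promise<Response> {
    // The driver opens a TCP connection from the Worker directly to the database.
    const client = new Client(env.DATABASE_URL);
    await client.connect();
    try {
      const { rows } = await client.query("SELECT now() AS time");
      return Response.json(rows);
    } finally {
      await client.end();
    }
  },
};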
The R2 Migrator (Super Slurper), which automates the process of migrating from existing object storage providers to R2, is now Generally Available.
Cursor, an experimental AI assistant trained to answer questions about Cloudflare’s Developer Platform, is now available to preview!
Cursor can answer questions about Workers and the Cloudflare Developer Platform, and is itself built on Workers. You can read more about Cursor in the announcement blog.
The new nodeJsCompatModule type can be used with a Worker bundle to emulate a Node.js environment. Common Node.js globals such as process and Buffer will be present, and require('...') can be used to load Node.js built-ins without the node: specifier prefix.
Fixed an issue where websocket connections would be disconnected when updating workers. Now, only websockets connected to Durable Object instances are disconnected by updates to that Durable Object’s code.
Cloudflare Stream now supports player enhancement properties.
With player enhancements, you can modify your video player to incorporate elements of your branding, such as your logo, and customize additional options to present to your viewers.
For more, refer to the documentation to get started.
URL.canParse(...) is a new standard API for testing that an input string can be parsed successfully as a URL without the additional cost of creating and throwing an error.
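For example, a tiny sketch replacing the usual try/catch around new URL():

const userInput = "/profile?tab=settings";

// canParse() avoids constructing and throwing an error just to validate input.
if (URL.canParse(userInput, "https://example.com")) {
  const url = new URL(userInput, "https://example.com");
  console.log(url.searchParams.get("tab")); // "settings"
}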
The Workers-specific IdentityTransformStream and FixedLengthStream classes now support specifying a highWaterMark for the writable-side that is used for backpressure signaling using the standard writer.desiredSize/writer.ready mechanisms.
Queue consumers will now automatically scale up based on the number of messages being written to the queue. To control or limit concurrency, you can explicitly define a max_concurrency for your consumer.
Fixed a bug in Wrangler tail and live logs on the dashboard that
prevented the Administrator Read-Only and Workers Tail Read roles from successfully
tailing Workers.
Previously, generating a download for a live recording exceeding four hours resulted in failure.
To fix the issue, now video downloads are only available for live recordings under four hours. Live recordings exceeding four hours can still be played but cannot be downloaded.
Queue consumers will soon automatically scale up concurrently as a queue’s backlog grows in order to keep overall message processing latency down. Concurrency will be enabled on all existing queues by 2023-03-28.
To opt out, or to configure a fixed maximum concurrency, set max_concurrency = 1 in your wrangler.toml file or via the queues dashboard.
To opt in, you do not need to take any action: your consumer will begin to scale out as needed to keep up with your message backlog. It will scale back down as the backlog shrinks, and/or if a consumer starts to generate a higher rate of errors. To learn more about how consumers scale, refer to the consumer concurrency documentation.
This allows you to mark a message as delivered as you process it within a batch, and prevents the entire batch from being redelivered if your consumer throws an error during batch processing. This can be particularly useful when you are calling external APIs, writing messages to a database, or otherwise performing non-idempotent actions on individual messages within a batch.
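A minimal consumer sketch (processMessage is a hypothetical helper) acknowledging each message individually:

export default {
  async queue(batch: MessageBatch<unknown>, env: unknown): Promise<void> {
    for (const message of batch.messages) {
      try {
        await processMessage(message.body);
        message.ack();   // mark this message as delivered immediately
      } catch (err) {
        message.retry(); // only this message is redelivered, not the whole batch
      }
    }
  },
};

async function processMessage(body: unknown): Promise<void> {
  // e.g. call an external API or write to a database (non-idempotent work)
}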
Fixed a bug where transferring large request bodies to a Durable Object was unexpectedly slow.
Previously, an error would be thrown when trying to access unimplemented standard Request and Response properties. Now those will be left as undefined.
The IPv6 percentage is now calculated as (IPv6 requests / requests for dual-stacked content), whereas before it was calculated as (IPv6 requests / IPv4+IPv6 requests).
Added new Layer 3 data source and related endpoints.
Updated the Layer 3 timeseries endpoint to support fetching both the current and new data sources. For backwards-compatibility reasons, fetching the new data source requires sending the parameter metric=bytes; otherwise, the current data source will be returned.
Cloudflare Stream now detects non-video content on upload using the POST API and returns a 400 Bad Request HTTP error with code 10059.
Previously, if you or one of your users attempted to upload a file that is not a video (ex: an image), the request to upload would appear successful, but then fail to be encoded later on.
With this change, Stream responds to the upload request with an error, allowing you to give users immediate feedback if they attempt to upload non-video content.
Queues now allows developers to create up to 100 queues per account, up from the initial beta limit of 10 per account. This limit will continue to increase over time.
You can now deep-link to a Pages deployment in the dashboard with :pages-deployment. An example would be https://dash.cloudflare.com?to=/:account/pages/view/:pages-project/:pages-deployment.
The “per-video” analytics API is being deprecated. If you still use this API, you will need to switch to using the GraphQL Analytics API by February 1, 2023. After this date, the per-video analytics API will no longer be available.
The GraphQL Analytics API provides the same functionality and more, with additional filters and metrics, as well as the ability to fetch data about multiple videos in a single request. Queries are faster, more reliable, and built on a shared analytics system that you can use across many Cloudflare products.
Cloudflare Stream now has no limit on the number of live inputs you can create. Stream is designed to allow your end users to go live: live inputs can be created quickly on demand via a single API request for each user of your platform or app.
For more on creating and managing live inputs, get started with the docs.
Multipart upload part sizes are always expected to be of the same size, but this enforcement is now done when you complete an upload instead of every time you upload a part.
Fixed a performance issue where concurrent multipart part uploads would get rejected.
When playing live video, Cloudflare Stream now provides significantly more accurate estimates of the bandwidth needs of each quality level to client video players. This ensures that live video plays at the highest quality that viewers have adequate bandwidth to play.
As live video is streamed to Cloudflare, we transcode it to make it available to viewers at multiple quality levels. During transcoding, we learn about the real bandwidth needs of each segment of video at each quality level, and use this to provide an estimate of the bandwidth requirements of each quality level in the HLS (.m3u8) and DASH (.mpd) manifests.
If a live stream contains content with low visual complexity, like a slideshow presentation, the bandwidth estimates provided in the HLS manifest will be lower, ensuring that the most viewers possible view the highest quality level, since it requires relatively little bandwidth. Conversely, if a live stream contains content with high visual complexity, like live sports with motion and camera panning, the bandwidth estimates provided in the HLS manifest will be higher, ensuring that viewers with inadequate bandwidth switch down to a lower quality level, and their playback does not buffer.
This change is particularly helpful if you’re building a platform or application that allows your end users to create their own live streams, where these end users have their own streaming software and hardware that you can’t control. Because this new functionality adapts based on the live video we receive, rather than just the configuration advertised by the broadcaster, even in cases where your end users’ settings are less than ideal, client video players will not receive excessively high estimates of bandwidth requirements, causing playback quality to decrease unnecessarily. Your end users don’t have to be OBS Studio experts in order to get high quality video playback.
No work is required on your end — this change applies to all live inputs, for all customers of Cloudflare Stream. For more, refer to the docs.
You can now deep-link to a Pages project in the dashboard with :pages-project. An example would be https://dash.cloudflare.com?to=/:account/pages/view/:pages-project.
Cloudflare Stream now supports playback of live videos and live recordings using the AV1 codec, which uses 46% less bandwidth than H.264.
CORS preflight responses, and adding CORS headers for other responses, are now implemented for S3 and public buckets. Currently, the only way to configure CORS is via the S3 API.
Fixed bindings list truncation to work correctly when listing keys with custom metadata that contain " characters, or when some keys/values contain certain multi-byte UTF-8 values.
The S3 GetObject operation now only returns Content-Range in response to a ranged request.
The R2 put() binding options can now be given an onlyIf field, similar to get(), that performs a conditional upload.
The R2 delete() binding now supports deleting multiple keys at once.
The R2 put() binding now supports user-specified SHA-1, SHA-256, SHA-384, SHA-512 checksums in options.
User-specified object checksums will now be available in the R2 get() and head() bindings response. MD5 is included by default for non-multipart uploaded objects.
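A short sketch of the new binding features above (assumes an R2 bucket binding named MY_BUCKET; key names are hypothetical):

export default {
  async fetch(request: Request, env: { MY_BUCKET: R2Bucket }): Promise<Response> {
    const body = "hello";
    const sha256 = await crypto.subtle.digest("SHA-256", new TextEncoder().encode(body));

    // User-specified SHA-256 checksum: R2 verifies the uploaded body against it.
    const object = await env.MY_BUCKET.put("greeting.txt", body, { sha256 });

    // Checksums are also returned on get()/head(); MD5 is included by default
    // for non-multipart uploads.
    const head = await env.MY_BUCKET.head("greeting.txt");

    // delete() now accepts multiple keys at once.
    await env.MY_BUCKET.delete(["old-1.txt", "old-2.txt"]);

    return new Response(`uploaded ${object?.key ?? "greeting.txt"}, head found: ${head !== null}`);
  },
};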
You can now enable and disable individual live outputs via the API or Stream dashboard, allowing you to control precisely when you start and stop simulcasting to specific destinations like YouTube and Twitch. For more, read the docs.
The S3 DeleteObjects operation no longer trims the space from around the keys before deleting. Previously, files with leading or trailing spaces could not be deleted; additionally, if an object with the trimmed key existed, it would be deleted instead. The S3 DeleteObject operation was not affected by this.
Fixed presigned URL support for the S3 ListBuckets and ListObjects operations.
URLs in the Stream Dashboard and Stream API now use a subdomain specific to your Cloudflare Account: customer-{CODE}.cloudflarestream.com. This change allows you to:
Use Content Security Policy (CSP) directives specific to your Stream subdomain, to ensure that only videos from your Cloudflare account can be played on your website.
Allowlist only your Stream account subdomain at the network-level to ensure that only videos from a specific Cloudflare account can be accessed on your network.
No action is required from you, unless you use Content Security Policy (CSP) on your website. For more on CSP, read the docs.
Uploads will automatically infer the Content-Type based on the file body if one is not explicitly set in the PutObject request. This functionality will come to multipart operations in the future.
Added a dummy implementation of GetBucketAcl that mimics the response a basic AWS S3 bucket will return when first created.
Fixed an S3 compatibility issue for error responses with MinIO .NET SDK and any other tooling that expects no xmlns namespace attribute on the top-level Error tag.
List continuation tokens prior to 2022-07-01 are no longer accepted and must be obtained again through a new list operation.
The list() binding will now correctly return a smaller limit if too much data would otherwise be returned (previously would return an Internal Error).
Improvements to 500s: we now convert errors, so things that were previously concurrency problems for some operations should now be TooMuchConcurrency instead of InternalError. We’ve also reduced the rate of 500s through internal improvements.
ListMultipartUpload correctly encodes the returned Key if the encoding-type is specified.
S3 XML documents sent to R2 that have an XML declaration are no longer rejected with 400 Bad Request / MalformedXML.
Minor S3 XML compatibility fix impacting Arq Backup on Windows only (not the Mac version). Response now contains XML declaration tag prefix and the xmlns attribute is present on all top-level tags in the response.
Support for the r2_list_honor_include compat flag, coming in an upcoming runtime release (it is the default behavior as of the 2022-07-14 compat date). Without that compat flag/date, list will continue to behave implicitly as include: ['httpMetadata', 'customMetadata'], regardless of what you specify.
cf-create-bucket-if-missing can be set on a PutObject/CreateMultipartUpload request to implicitly create the bucket if it does not exist.
Fixed an S3 compatibility issue with the MinIO client’s spec non-compliant XML for completing multipart uploads. Any leading and trailing quotes in CompleteMultipartUpload are now optional and ignored, as this appears to be the actual non-standard behavior AWS implements.
Pages now supports .dev.vars in wrangler pages, which allows you to use environment variables during your local development without chaining --envs.
This functionality requires Wrangler v2.0.16 or higher.
Unsupported search parameters to ListObjects/ListObjectsV2 are
now rejected with 501 Not Implemented.
Fixes for Listing:
Fix listing behavior when the number of files within a folder exceeds the limit (you’d end
up seeing a CommonPrefix for that large folder N times where N = number of children
within the CommonPrefix / limit).
Fix corner case where listing could cause objects sharing the base name of a "folder" to be skipped.
Fix listing over some files that shared a certain common prefix.
DeleteObjects can now handle 1000 objects at a time.
S3 CreateBucket requests can specify x-amz-bucket-object-lock-enabled with a value of false and not have the request rejected with a NotImplemented error. A value of true will continue to be rejected, as R2 does not yet support object locks.
We now keep track of the files that make up each deployment and intelligently only upload the files that we have not seen. This means that similar subsequent deployments should only need to upload a minority of files and this will hopefully make uploads even faster.
This functionality requires Wrangler v2.0.11 or higher.
Fixed a bug where the S3 API’s PutObject or the .put() binding could fail but still show the bucket upload as successful.
If conditional headers are provided to S3 API UploadObject or CreateMultipartUpload operations, and the object exists, a 412 Precondition Failed status code will be returned if these checks are not met.
Add support for S3 virtual-hosted style paths, such as <BUCKET>.<ACCOUNT_ID>.r2.cloudflarestorage.com instead of path-based routing (<ACCOUNT_ID>.r2.cloudflarestorage.com/<BUCKET>).
Implemented GetBucketLocation for compatibility with external tools; this will always return a LocationConstraint of auto.
During or after uploading a video to Stream, you can now specify a value for a new field, creator. This field can be used to identify the creator of the video content, linking the way you identify your users or creators to videos in your Stream account. For more, read the blog post.
When using the S3 API, an empty string and us-east-1 will now alias to the auto region for compatibility with external tools.
GetBucketEncryption, PutBucketEncryption and DeleteBucketEncryption are now supported (the only supported value currently is AES256).
Unsupported operations are explicitly rejected as unimplemented rather than being implicitly converted into ListObjectsV2/PutBucket/DeleteBucket respectively.
S3 API CompleteMultipartUploads requests are now properly escaped.
Pagination cursors are no longer returned when the number of keys in a bucket is the same as the MaxKeys argument.
The S3 API ListBuckets operation now accepts cf-max-keys, cf-start-after and cf-continuation-token headers, which behave the same as the respective URL parameters.
The S3 API ListBuckets and ListObjects endpoints now allow per_page to be 0.
The S3 API CopyObject source parameter now requires a leading slash.
The S3 API CopyObject operation now returns a NoSuchBucket error when copying to a non-existent bucket instead of an internal error.
Enforce the requirement for auto in SigV4 signing and the CreateBucketLocationConstraint parameter.
The S3 API CreateBucket operation now returns the proper location response header.
The Stream Dashboard now has an analytics panel that shows the number of minutes of both live and recorded video delivered. This view can be filtered by Creator ID, Video UID, and Country. For more in-depth analytics data, refer to the bulk analytics documentation.
The Stream Player can now be configured to use a custom letterbox color, displayed around the video ('letterboxing' or 'pillarboxing') when the video’s aspect ratio does not match the player’s aspect ratio. Refer to the documentation on configuring the Stream Player here.
Cloudflare Stream now supports the SRT live streaming protocol. SRT is a modern, actively maintained streaming video protocol that delivers lower latency, and better resilience against unpredictable network conditions. SRT supports newer video codecs and makes it easier to use accessibility features such as captions and multiple audio tracks.
When viewers manually change the resolution of video they want to receive in the Stream Player, this change now happens immediately, rather than once the existing resolution playback buffer has finished playing.
If you choose to use a third-party player with Cloudflare Stream, you can now easily access HLS and DASH manifest URLs from within the Stream Dashboard. For more about using Stream with third-party players, read the docs here.
When a live input is connected, the Stream Dashboard now displays technical details about the connection, which can be used to debug configuration issues.
You can now configure Stream to send webhooks each time a live stream connects and disconnects. For more information, refer to the Webhooks documentation.
You can now start and stop live broadcasts without having to provide a new video UID to the Stream Player (or your own player) each time the stream starts and stops. Read the docs.
Once a live video has ended and been recorded, you can now give viewers the option to download an MP4 video file of the live recording. For more, read the docs here.
The Stream Player now displays preview images when viewers hover their mouse over the seek bar, making it easier to skip to a specific part of a video.
All Cloudflare Stream customers can now give viewers the option to download videos uploaded to Stream as an MP4 video file. For more, read the docs here.
You can now opt-in to the Stream Connect beta, and use Cloudflare Stream to restream live video to any platform that accepts RTMPS input, including Facebook, YouTube and Twitch.
You can now use Cloudflare Stream to restream or simulcast live video to any platform that accepts RTMPS input, including Facebook, YouTube and Twitch.
If you use Cloudflare Stream with a third party player, and send the clientBandwidthHint parameter in requests to fetch video manifests, Cloudflare Stream now selects the ideal resolution to provide to your client player more intelligently. This ensures your viewers receive the ideal resolution for their network connection.
Cloudflare Stream now delivers video using 3-10x less bandwidth, with no reduction in quality. This ensures faster playback for your viewers with less buffering, particularly when viewers have slower network connections.
Videos with multiple audio tracks (ex: 5.1 surround sound) are now mixed down to stereo when uploaded to Stream. The resulting video, with stereo audio, is now playable in the Stream Player.