Using VCL
Fastly VCL is a domain-specific programming language that has evolved from the Varnish proxy cache, which forms part of Fastly's platform architecture. It's intentionally limited in range, which allows us to run it extremely fast, make it available to all requests that pass through Fastly, and maintain the security of the Fastly network. With Fastly VCL, you can do anything from adding a cookie or setting a Cache-Control header to implementing a complete paywall solution.
VCL services on Fastly do not provide a single entry point for your application code. Instead, we expose a number of "hooks", in the form of built-in subroutines, which are called at significant moments in the lifecycle of each HTTP request that passes through your service. As a result, code that you upload to a Fastly VCL service is known as a configuration, not an application.
The VCL request lifecycle
The following subroutines are triggered by Fastly, in this order:
Name | Trigger point | Default return state | Alternative return states |
---|---|---|---|
vcl_recv | Client request received | lookup [1] | pass, error, restart, upgrade |
vcl_hash | A cache key will be calculated | hash [2] | |
vcl_hit | An object has been found in cache | deliver | pass, error, restart |
vcl_miss | Nothing was found in the cache, preparing backend fetch | fetch | deliver_stale, pass, error |
vcl_pass | Cache bypassed, preparing backend fetch | pass [3] | error |
vcl_fetch | Origin response headers received | deliver [4] | deliver_stale, pass, error, restart |
vcl_error | Error triggered (explicitly or by Fastly) | deliver | restart |
vcl_deliver | Preparing to deliver response to client | deliver | restart |
vcl_log | Finished sending response to client | deliver [5] | |
Some subroutines can return error, restart, or upgrade. Any error return state will result in the execution flow passing to vcl_error, while restart will result in the execution flow passing to vcl_recv. The special upgrade return state will terminate the VCL flow and create a managed WebSocket connection (learn more).
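For example, a minimal sketch of a vcl_recv that uses these alternative return states might look like the following. The Upgrade header check and the blocked path are purely illustrative, and return(upgrade) requires WebSockets passthrough to be enabled on your service.

sub vcl_recv {
  #FASTLY recv

  # Hand off WebSocket upgrade requests to a managed WebSocket connection
  if (req.http.Upgrade ~ "(?i)websocket") {
    return(upgrade);
  }

  # Trigger vcl_error explicitly for a hypothetical blocked path
  if (req.url.path ~ "^/blocked/") {
    error 601;
  }

  return(lookup);
}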
Adding VCL to your service configuration
Everything that your VCL service does is powered by VCL. Even features that you enable in the web interface or via the API without writing any code yourself will ultimately generate VCL code written by Fastly, so your own VCL needs to be combined with the code generated by these features. To support this, we include macros in the VCL program, one in each subroutine, such as #FASTLY recv.
You can mix and match high-level VCL generative objects, VCL snippets, and full custom VCL files when building a configuration, and all are interoperable, though it's typically more maintainable to choose a single approach.
VCL generative objects
Using the web interface or API, you can create configuration objects that generate VCL for you.
HINT: Using these constructs is a good way to get started with VCL services, but if you start to have a lot of them, it may be better to manage your own VCL with a custom VCL file.
Object | Purpose | Instructions |
---|---|---|
Header | Setting HTTP headers or VCL variables | Web interface, API |
Response | Creating a predefined response to be served from the edge | Web interface, API |
Condition | Restricting actions to only requests that meet criteria defined as a VCL expression | Web interface, API |
Apex redirect | Redirecting a bare domain such as example.com to add a www. prefix | API |
Cache settings | Changing the TTL or cache behavior of an HTTP response | Web interface, API |
GZip | Compressing HTTP responses before inserting them into cache | Web interface, API |
HTTP3 | Advertising HTTP/3 support | Web interface, API |
Rate limiter | Creating rate limiters to stop individual clients from bombarding your site | Web interface, API |
Request settings | Changing the cache behavior of a request (similar to a cache setting but applied before the request has been forwarded to origin) | Web interface, API |
Settings | Updating default values for cache TTLs | API |
VCL snippets
By adding your custom VCL code using snippets, you can insert raw code into VCL subroutines alongside Fastly-generated code. Your code snippets will be added at the end of the subroutine you select, which can have an impact on what is possible with snippets.
Snippets can be regular or dynamic.
- Regular snippets are versioned in the same way as the rest of your service. Changes require a new version of the service configuration, and can therefore also be rolled back with a rollback of the service version. These are a good choice for VCL code that performs logical actions like routing, setting headers, or authentication.
- Dynamic snippets are not versioned. After attaching a dynamic snippet to a version of your service and activating it, any subsequent changes you make to the snippet apply immediately. This also means that if you roll back a service configuration to an earlier version, and the snippet was present in that earlier version, the snippet will remain unchanged and contain the latest code. Dynamic snippets are useful for including generated logic or declarative data, such as redirection rules or allowlists (although if you can use an edge dictionary, that's typically a better solution).
To create a VCL snippet:
- Web interface
- API
HINT: All snippets created in the web interface are regular snippets. To create a dynamic snippet, use the API.
- Log in to manage.fastly.com and select the appropriate service. You can use the search box to search by ID, name, or domain.
- Click Edit configuration and then select the option to clone the active version.
- Click VCL Snippets.
- Click Create your first snippet.
- Give the snippet a name. This is a label for your reference but may also be used to import the snippet into your code if you choose none in the next step.
- Choose a Type:
- Select init to insert the snippet in the global scope of your VCL.
- Select within subroutine to insert it within the specified subroutine (snippets render at the end of the relevant Fastly macro block; see the example after these steps).
- Select none (advanced) to insert it manually, in which case you must write code in your custom VCL to include the snippet.
- Write the snippet VCL.
- Click Create.
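As an illustration, the body of a within subroutine snippet contains only the statements to run, not the surrounding sub block. A hypothetical recv snippet might look like this (the hostname is an assumption for the example):

# Bypass the cache for requests to a staging hostname
if (req.http.host == "staging.example.com") {
  return(pass);
}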
Custom VCL
Custom VCL allows you to upload a full VCL source file, which will entirely replace the one that would otherwise be generated by Fastly. To make sure that features you create using VCL generative objects still work, we require that custom VCL files include Fastly's code macros, one in each subroutine.
We recommend that you start from the following boilerplate, which includes all the required Fastly macro placeholders and also presents VCL subroutines in the order in which they are executed.
sub vcl_recv {
  #FASTLY recv

  # Normally, you should consider requests other than GET and HEAD to be uncacheable
  # (to this we add the special FASTLYPURGE method)
  if (req.method != "HEAD" && req.method != "GET" && req.method != "FASTLYPURGE") {
    return(pass);
  }

  # If you are using image optimization, insert the code to enable it here
  # See https://www.fastly.com/documentation/reference/io/ for more information.

  return(lookup);
}

sub vcl_hash {
  set req.hash += req.url;
  set req.hash += req.http.host;
  #FASTLY hash
  return(hash);
}

sub vcl_hit {
  #FASTLY hit
  return(deliver);
}

sub vcl_miss {
  #FASTLY miss
  return(fetch);
}

sub vcl_pass {
  #FASTLY pass
  return(pass);
}

sub vcl_fetch {
  #FASTLY fetch

  # Unset headers that reduce cacheability for images processed using the Fastly image optimizer
  if (req.http.X-Fastly-Imageopto-Api) {
    unset beresp.http.Set-Cookie;
    unset beresp.http.Vary;
  }

  # Log the number of restarts for debugging purposes
  if (req.restarts > 0) {
    set beresp.http.Fastly-Restarts = req.restarts;
  }

  # If the response is setting a cookie, make sure it is not cached
  if (beresp.http.Set-Cookie) {
    return(pass);
  }

  # By default we set a TTL based on the `Cache-Control` header but we don't parse additional directives
  # like `private` and `no-store`. Private in particular should be respected at the edge:
  if (beresp.http.Cache-Control ~ "(?:private|no-store)") {
    return(pass);
  }

  # If no TTL has been provided in the response headers, set a default
  if (!beresp.http.Expires && !beresp.http.Surrogate-Control ~ "max-age" && !beresp.http.Cache-Control ~ "(?:s-maxage|max-age)") {
    set beresp.ttl = 3600s;

    # Apply a longer default TTL for images processed using Image Optimizer
    if (req.http.X-Fastly-Imageopto-Api) {
      set beresp.ttl = 2592000s; # 30 days
      set beresp.http.Cache-Control = "max-age=2592000, public";
    }
  }

  return(deliver);
}

sub vcl_error {
  #FASTLY error
  return(deliver);
}

sub vcl_deliver {
  #FASTLY deliver
  return(deliver);
}

sub vcl_log {
  #FASTLY log
}
Explicit snippet includes
If you have VCL snippets defined on a service that also has custom VCL, the snippets will typically be rendered as part of the Fastly macro, replacing the placeholders such as #FASTLY recv that you must include in any custom VCL file. However, if your snippet has a type of "none", you may include the snippet explicitly at any point in your custom VCL file using the include statement:
include "snippet::<snippet name>";
Snippets can be included as many times and in as many places as desired, subject to compiler rules (for example, if your snippet attempts to set bereq.http.cookie, you cannot include that snippet in the vcl_recv subroutine, because bereq is not available in the vcl_recv scope; see VCL variables for more details).
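For example, a snippet created with the type none and the hypothetical name block-bad-bots could be pulled into a custom vcl_recv at exactly the point you choose:

sub vcl_recv {
  #FASTLY recv

  # Render the "none"-type snippet at this point
  include "snippet::block-bad-bots";

  return(lookup);
}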
Writing VCL
Whether you use snippets or custom VCL to write VCL code, the features available in the language are the same.
Many common use cases for VCL are explored in our code examples gallery. The best practices guide also helps you understand how to avoid pitfalls and write safer, more secure edge code. Our fiddle tool also allows you to interactively write and execute VCL code without logging into Fastly, giving you space to experiment and test your ideas.
This section summarizes some of the most common VCL use cases.
Manipulating headers
The set and unset statements allow for setting and unsetting HTTP headers on requests and responses. For example, in vcl_fetch, you could write:
set beresp.http.Cache-Control = "public, max-age=3600";
unset beresp.http.x-goog-request-id;
The {OBJ-NAME}.http.{HEADER-NAME} pattern is available for req, bereq, resp, beresp, and obj. See VCL variables for details of where each of these is available, but in general:
To add/remove headers on... | ...use this | Example use cases |
---|---|---|
Client request | req.http.{NAME} in vcl_recv | Remove the cookie header to strip credentials; store data to refer to later in VCL |
Backend request | bereq.http.{NAME} in vcl_miss and vcl_pass | Add authentication headers |
Backend response | beresp.http.{NAME} in vcl_fetch | Set browser cache TTL; remove superfluous origin response headers |
Client response | resp.http.{NAME} in vcl_deliver | Set cookies |
Synthetic response | obj.http.{NAME} in vcl_error | Set the content-type of the synthetic response |
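As a sketch of the first two rows in the table above (the header name and value are illustrative), you might strip credentials in vcl_recv and add an authentication header to the backend request in vcl_miss:

sub vcl_recv {
  #FASTLY recv
  # Remove the cookie header to strip credentials
  unset req.http.Cookie;
  return(lookup);
}

sub vcl_miss {
  #FASTLY miss
  # Add a hypothetical authentication header to the backend request
  set bereq.http.X-Api-Key = "example-api-key";
  return(fetch);
}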
URLs and query strings
The req.url variable contains the URL (path and query) being requested by the client, and is copied into bereq.url when making a request to a backend. The path and query can be separately accessed as req.url.path and req.url.qs. Consider using querystring.get and querystring.set to manipulate query parameters. querystring.filter can remove unwanted query parameters:
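A minimal sketch of these functions in vcl_recv (the parameter names are illustrative):

declare local var.page STRING;

# Read a single query parameter
set var.page = querystring.get(req.url, "page");

# Add or overwrite a parameter
set req.url = querystring.set(req.url, "format", "json");

# Remove unwanted tracking parameters
set req.url = querystring.filter(req.url, "utm_source" + querystring.filtersep() + "utm_medium");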
Using regular expressions on the URL path is a common way to route requests to different backends, by setting req.backend:
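For example, assuming the service defines backends named F_api_origin and F_web_origin (the names are illustrative), a vcl_recv sketch might be:

if (req.url.path ~ "^/api/") {
  set req.backend = F_api_origin;
} else {
  set req.backend = F_web_origin;
}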
Cookies
Since the Cookie header is a semicolon-delimited list of individual cookies, you can access a named cookie using subfield accessor syntax. Often this is usefully combined with a regular expression match to extract parts of a structured cookie value. For example, if you have a cookie called "auth", which has a value such as "52b93cff.165826435.d783dad8-ebb9-4475-b6fb-68ce83f90f12", you could use the following VCL to isolate the auth cookie, and then extract the various parts of it into distinct HTTP headers:
if (req.http.cookie:auth ~ "^([0-9a-f]+)\.(\d+)\.([\w-]+)$") {
  set req.http.Auth-SessionID = re.group.1;
  set req.http.Auth-CreditCount = re.group.2;
  set req.http.Auth-DisplayName = re.group.3;
}
To write cookies, construct a Set-Cookie header on the client response, normally in vcl_deliver. Using set will overwrite any existing header with the same name, so if you may be setting multiple cookies in the same response, use add instead. It's also wise, when setting cookies on a response, to prevent the client or any downstream entity from caching it.
add resp.http.set-cookie = "auth=52b93cff.165826435.d783dad8-ebb9-4475-b6fb-68ce83f90f12; max-age=86400; path=/";
set resp.http.cache-control = "private, no-store";
Logging
Fastly supports logging data to a variety of specific vendors and generic endpoints. In VCL, you can emit a log message from anywhere in your VCL code using the log statement:
log "syslog " + req.service_id + " my-log-endpoint :: " + req.url;
All log statements in VCL take the form log "syslog {service_id} {log_endpoint_name} :: {log_message}". For more information on configuring log endpoints, and how to use them, see our Logging overview.
Controlling the cache
Fastly respects freshness-related HTTP headers sent in origin responses, such as Cache-Control, Last-Modified, and Expires. You can override this behavior using VCL in vcl_fetch, by setting the values of beresp.ttl, beresp.stale_while_revalidate, and beresp.stale_if_error.
set beresp.ttl = 30m;
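The stale-serving variables take durations in the same way. A minimal sketch with illustrative values:

# Serve stale content for up to 60s while revalidating in the background,
# and for up to a day if the origin is returning errors
set beresp.stale_while_revalidate = 60s;
set beresp.stale_if_error = 86400s;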
Regardless of HTTP headers or explicit instructions in VCL, the cache may be disabled if the response has an HTTP status that does not support caching. A 200 (OK) response is considered cacheable, while a 500 (Internal Server Error) is not. You can change this decision by setting beresp.cacheable. For more information, read our HTTP semantics overview.
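For example, a sketch in vcl_fetch that forces a normally uncacheable status to be cached briefly (the status and TTL are illustrative):

# Allow 500 responses to be cached for a short time
if (beresp.status == 500) {
  set beresp.cacheable = true;
  set beresp.ttl = 10s;
}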
IMPORTANT: Setting the value of headers such as Cache-Control using VCL will not have any effect on whether or for how long the response is cached by Fastly (use beresp.ttl instead), but setting a Cache-Control header on a response is a good way to control whether the response is cached on the end user's device.
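A sketch of controlling the two independently in vcl_fetch (the durations are illustrative):

# Cache at the edge for an hour, but in browsers for only one minute
set beresp.ttl = 3600s;
set beresp.http.Cache-Control = "max-age=60, public";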
To disable caching entirely, execute a return(pass) from vcl_recv or vcl_fetch. Doing so in vcl_recv offers better performance because it allows us to skip request collapsing.
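For example, a minimal vcl_recv sketch (the path is illustrative):

# Never cache requests for account pages
if (req.url.path ~ "^/account/") {
  return(pass);
}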
Synthetic responses
When an error occurs during request or response processing, the vcl_error subroutine will be executed, and an HTTP response will be created within Fastly. You can trigger this behavior explicitly using the error statement:
error 601;
If you trigger an error manually as shown above, pass a number in the 600-699 range (learn more about HTTP statuses used by Fastly). Then catch that error number in vcl_error:
if (obj.status == 601) {
  set obj.status = 200;
  set obj.http.content-type = "text/plain";
  synthetic "OK";
  return(deliver);
}
When vcl_error is executed, a new, 'synthetic' HTTP response is created and represented by obj. Use set with obj.http.{NAME} and obj.status to set the headers and response status of the object, and the synthetic statement to populate the response body.
Constraints and limitations
VCL services are subject to the following restrictions or limits:
Item | Limit | Implications of exceeding the limit |
---|---|---|
URL size | 8KB | VCL processing is skipped and a "Too long request string" error is emitted. |
Cookie header size | 32KB | The cookie header will be unset and Fastly will set req.http.Fastly-Cookie-Overflow = "1" , then run your VCL as normal. |
Request header size | 69KB | Depending on the circumstances, exceeding the limit can result in Fastly closing the client connection abruptly, the client receiving a 502 Gateway Error response with "I/O error" in the body, or receiving a 503 Service Unavailable response with the text "Header overflow" in the body. |
Response header size | 69KB | A 503 error is triggered with obj.response value of "backend read error". This error can be intercepted in vcl_error . See Fastly generated errors to learn about all synthetic errors generated by Fastly. |
Request header count | 96 | VCL processing is skipped or aborted if in progress, and a response with "Header overflow" in the body is emitted. A number of headers are added to the request by Fastly, so the practical limit is lower, but is not a predictable constant. Assuming a practical limit of 85 is safe. |
Response header count | 96 | VCL processing is skipped or aborted if in progress, and a response with "Header overflow" in the body is emitted. A number of headers are added to the response by Fastly, so the practical limit is lower, but is not a predictable constant. Assuming a practical limit of 85 is safe. |
req.body size | 8KB | Larger request bodies result in req.body being empty; the request body is only available in req.body for payloads smaller than 8KB. |
Surrogate key size | 1KB | Requests to the purge API that cite longer keys will fail, so in practical terms it is useless to tag content with keys exceeding the length limit. |
Surrogate key header size | 16KB | Only keys that are entirely within the first 16KB of the surrogate key header value will be applied to the cache object. |
VCL file size | 1MB | Attempts to upload VCL via the API will fail if the VCL payload is larger. |
VCL total size | 3MB | Attempts to upload VCL via the API will fail if the VCL payload would cause your total service VCL to be larger than this. |
restart limit | 3 restarts | The 4th invocation of the restart statement will trigger a 503 error. This error can be intercepted in vcl_error . |
Edge dictionary item count | 1000 | Attempts to create dictionary items will fail if they exceed the limit. Contact Fastly support to discuss raising this limit. |
Edge dictionary item key length | 256 characters | Attempts to create dictionary items will fail. |
Edge dictionary item value length | 8000 characters | Attempts to create dictionary items will fail. |
WARNING: Personal data should not be incorporated into VCL. Our Compliance and Law FAQ describes in detail how Fastly handles personal data privacy.
- [1] All return states from vcl_recv (except restart) pass through vcl_hash first. return(lookup) and return(pass) both move control to vcl_hash but flag the request differently, which will determine the exit state from vcl_hash.
- [2] The only possible return state from vcl_hash is hash, but it will trigger different behavior depending on the earlier return state of vcl_recv. The default return(lookup) in vcl_recv will prompt Fastly to perform a cache lookup and run vcl_hit or vcl_miss after hash. If vcl_recv returns error, then vcl_error is executed after hash. If vcl_recv returns return(pass), then vcl_pass is executed after hash. The hash process is required in all these cases to create a cache object to enable hit-for-pass.
- [3] The return(pass) exit from vcl_pass triggers a backend fetch, similarly to return(fetch) in vcl_miss, but the altered return state is a reminder that the object is flagged for pass, so that it cannot be cached when processed in vcl_fetch.
- [4] Returning with return(deliver) from vcl_fetch cannot override an earlier pass, but return(pass) here will prevent the response being cached.
- [5] The return state from vcl_log simply terminates request processing.