Layer 4 Routing

Traffic Server supports a limited set of layer 4 routing options. In these cases Traffic Server acts effectively as a router, moving network data between two endpoints without modifying the data. Routing is done by examining the initial data from the inbound connection to select the outbound destination. That initial data is then sent to the destination, and from then on Traffic Server forwards all data read on one connection to the other, and vice versa.


In this way Traffic Server acts similarly to nc.

The primary difference between the types of layer 4 routing is the mechanism by which Traffic Server creates the outbound connection. This is described in detail in the documentation for each type.


Transparency is in some sense a form of layer 4 routing, because the outbound connection is determined by examining the destination address in the client TCP packets. This is discussed in detail elsewhere.

SNI Routing

Currently (as of version 8.0) the only directly supported form of layer 4 routing is SNI-based routing. This imposes some requirements on the traffic.

  • The inbound connection must be TLS.

  • The outbound destination must handle the HTTP CONNECT method.
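
In practice this means the destination must be prepared to accept a request along these lines (the host name and port here are illustrative only, not taken from the original text):

```
CONNECT service-2.example.com:4443 HTTP/1.1
```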

SNI routing is configured by ssl_server_name.yaml.

If SNI routing is enabled, the initial “CLIENT HELLO” data of an inbound TLS connection is examined to extract the “SNI” value. This is matched against the configuration data to select an action for the inbound connection. The option of interest here is tunnel_route. If it is set, Traffic Server connects to the specified destination and issues an HTTP CONNECT request, using the SNI value as the URL for the request. Because the destination and the URL in the CONNECT are the same, in general it will be necessary to use a plugin to change the URL in the CONNECT.
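To make the “examine the CLIENT HELLO” step concrete, here is a minimal sketch of pulling the SNI host name out of the raw bytes of a TLS ClientHello record. This is a simplified parser written for illustration, not the code Traffic Server uses, and it assumes a single well-formed record:

```python
import struct
from typing import Optional

def extract_sni(client_hello: bytes) -> Optional[str]:
    """Extract the server_name (SNI) from a raw TLS ClientHello record.

    Returns None if the record carries no SNI extension.
    """
    pos = 5                                 # TLS record header
    pos += 4                                # handshake message header
    pos += 2 + 32                           # client version + random
    sid_len = client_hello[pos]
    pos += 1 + sid_len                      # session id
    (cs_len,) = struct.unpack_from(">H", client_hello, pos)
    pos += 2 + cs_len                       # cipher suites
    comp_len = client_hello[pos]
    pos += 1 + comp_len                     # compression methods
    (ext_total,) = struct.unpack_from(">H", client_hello, pos)
    pos += 2
    end = pos + ext_total
    while pos + 4 <= end:
        ext_type, ext_len = struct.unpack_from(">HH", client_hello, pos)
        pos += 4
        if ext_type == 0:                   # server_name extension
            # skip server name list length (2 bytes) and name type (1 byte)
            (name_len,) = struct.unpack_from(">H", client_hello, pos + 3)
            name = client_hello[pos + 5 : pos + 5 + name_len]
            return name.decode("ascii")
        pos += ext_len
    return None
```

The SNI value obtained this way is what gets matched against the fqdn entries in the configuration.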


Consider a Content Delivery Network (CDN) that has an edge layer of externally facing Traffic Server instances. The goal is to enable external clients to connect to internal services that do their own client certificate verification, possibly because distributing private keys to the edge Traffic Server instances is too difficult or too risky. To achieve this, the edge Traffic Server instances can be configured to route inbound TLS connections with specific SNI values directly to the internal services, without TLS termination on the edge. This enables the edge to provide controlled external access to the internal services without each internal service having to stand up its own edge. Note that the services do not require globally routable addresses as long as the edge Traffic Server instances can route to them.

The basic setup is therefore:


A Client connects to an edge Traffic Server, which forwards the connection to the internal Service. The Client then negotiates TLS with the Service.

For the example, let us define two services, service-1 and service-2, inside the corporate network of Example, Inc. service-1 is on port 443 on host app-server-29, while service-2 is on port 4443 on host app-server-56. The SNI routing setup for this would be

SNI value               Destination
service-1.example.com   app-server-29:443
service-2.example.com   app-server-56:4443
The ssl_server_name.yaml contents would be

server_config = {
    {
        fqdn = 'service-1.example.com',
        tunnel_route = 'app-server-29:443',
    },
    {
        fqdn = 'service-2.example.com',
        tunnel_route = 'app-server-56:4443',
    },
}

In addition to this, in the records.config file, edit the following variables:
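The list of variables is not reproduced here. As an illustration (an assumption on my part, not taken from the original text), one records.config setting relevant to CONNECT-based tunnels is proxy.config.http.connect_ports, which lists the ports to which Traffic Server may issue a CONNECT; for the destinations above it would need to cover both tunnel ports:

```
CONFIG proxy.config.http.connect_ports STRING 443 4443
```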

The sequence of network activity for a Client connecting to service-2 is:

  1. The Client opens a TCP connection to the edge Traffic Server and sends the TLS “CLIENT HELLO”.

  2. Traffic Server extracts the “SNI” value from the “CLIENT HELLO” and matches it to the tunnel_route for service-2.

  3. Traffic Server opens a TCP connection to app-server-56:4443 and issues an HTTP CONNECT request.

  4. Traffic Server sends the already-read “CLIENT HELLO” to the Service and thereafter forwards all data in both directions unchanged.

  5. The Client and the Service negotiate TLS and exchange data through the tunnel.


Note that the destination of the outbound TCP connection and the URL of the HTTP CONNECT are the same. If this is a problem (which in general it will be), a plugin is needed to change the URL in the CONNECT. In this case the proxy request is available in the TS_HTTP_TXN_START_HOOK hook. This cannot be done using remap, because for a CONNECT there is no remap phase. Note that for a tunneled connection like this, the only transaction hooks that will be triggered are TS_HTTP_TXN_START_HOOK and TS_HTTP_TXN_CLOSE_HOOK. In addition, because Traffic Server does not terminate (and therefore does not decrypt) the connection, the data cannot be cached or served from cache.