TLS Bridge

This plugin provides TLS tunnels for connections between a Client and a Service via two gateway Traffic Server instances using explicit proxying. By configuring the Traffic Server instances, the level of security in the tunnel can be easily controlled for all communications across the tunnel without having to update the Client or Service.

Description

The tunnel is sustained by two instances of Traffic Server.

hide empty members

cloud "Cloud\nUntrusted\nNetworks" as Cloud
node "Ingress ATS"
node "Peer ATS"

[Client] <--> [Ingress ATS] : Unsecure
[Ingress ATS] <-> [Cloud] : Secure
[Cloud] <-> [Peer ATS] : Secure
[Peer ATS] <-u-> [Service] : Unsecure

[Ingress ATS] ..> [tls_bridge\nPlugin] : Uses

The ingress Traffic Server accepts an HTTP CONNECT request from the Client. This connection is intercepted by the TLS Bridge plugin inside Traffic Server if the destination matches one of the configured destinations. The plugin then makes a TLS connection to the peer Traffic Server using the configured level of security. The original CONNECT request from the Client to the ingress Traffic Server is then sent to the peer Traffic Server to create a connection from the peer Traffic Server to the Service. After this, the Client has a virtual circuit to the Service and can use any TCP-based communication (including TLS). Effectively the plugin causes the explicit proxy to work as if the Client had done the CONNECT directly to the peer Traffic Server. Note this means the DNS lookup for the Service is done by the peer Traffic Server, not the ingress Traffic Server.

The plugin is configured with a mapping of Service names to peer Traffic Server instances. The Service names are URLs which will be in the original HTTP request made by the Client after connecting to the ingress Traffic Server. This means the FQDN for the Service is resolved in the environment of the peer Traffic Server and not the ingress Traffic Server.
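
For illustration, a Client request to the ingress Traffic Server for a hypothetical Service at some.service.com would be an ordinary HTTP CONNECT, e.g.

    CONNECT some.service.com:443 HTTP/1.1
    Host: some.service.com:443

The ingress Traffic Server sends a duplicate of this CONNECT to the peer Traffic Server over the secure tunnel, and the peer resolves some.service.com and connects to it.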

Configuration

TLS Bridge requires at least two instances of Traffic Server (Ingress and Peer). The Client connects to the ingress Traffic Server, and the peer Traffic Server connects to the Service. The Peer could in theory be configured to connect onward to a further Traffic Server instance, acting as the ingress to that peer, but that does not seem to be a useful case.

  1. Disable caching on Traffic Server in records.yaml:

    records:
       http:
          cache:
             http: 0
  2. Configure the ports.

    • The Peer Traffic Server must be listening on an SSL-enabled proxy port. For instance, if the proxy port for the Peer is 4443, the configuration in records.yaml would include:

    records:
       http:
          server_ports: 4443:ssl
    
    • The Ingress Traffic Server must allow CONNECT to the Peer proxy port. This would be set in records.yaml by:

    records:
       http:
          connect_ports: 4443

    The Ingress Traffic Server also needs proxy.config.http.server_ports configured to have
    proxy ports to which the Client can connect.
    
  3. By default, Traffic Server requires a remap rule in order to allow the outbound request to the peer. To disable this requirement and allow all connections, use the setting:

    records:
       url_remap:
          remap_required: 0
    

    In this case Traffic Server will act as an open proxy, which is unlikely to be a good idea. Therefore, if this approach is used, Traffic Server will need to run in a restricted environment or use access control (via ip_allow.yaml or iptables); a sketch of an ip_allow.yaml restriction is shown at the end of this step.

    If this is unsuitable, an identity remap rule can be added for the peer Traffic Server instead. If the peer Traffic Server is named "peer.ats" and listens on port 4443, the remap rule would be

    map https://peer.ats:4443 https://peer.ats:4443
    

    Remapping will be disabled for the user agent connection and so it will not need a rule.
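
    As noted above, if remap_required is disabled then access control should be applied. A minimal
    ip_allow.yaml sketch, assuming Clients connect only from a hypothetical internal network
    10.1.0.0/16, could look like

    ip_allow:
      - apply: in
        ip_addrs: 10.1.0.0/16
        action: allow
        methods: ALL
      - apply: in
        ip_addrs: 0/0
        action: deny
        methods: ALL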

  4. If remap is required on the peer to enable the outbound connection from the peer to the Service (i.e., remapping has not been explicitly marked as not required), the destination port must be explicitly stated [1]. E.g.

    map https://service:4443 https://service:4443
    

    Note this remap rule cannot alter the actual HTTP transactions between the Client and Service, because those happen inside what is effectively a tunnel between the Client and Service, supported by the two Traffic Server instances. This rule only allows the CONNECT sent from the ingress to cause a tunnel connection from the peer to the Service.

  5. Configure the Ingress Traffic Server to verify the Peer server certificate:

    records:
       ssl:
          client:
             verify:
                server:
                   policy: ENFORCED
    
  6. Configure the Certificate Authority used by the Ingress Traffic Server to verify the Peer server certificate. If this is a directory, all of the certificates in that directory are treated as Certificate Authorities.

    records:
       ssl:
          client:
             CA:
                cert:
                   filename: </path/to/CA_certificate_file_name>
    
  7. Configure the Ingress Traffic Server to provide a client certificate:

    records:
       ssl:
          client:
             cert:
                filename: <server_certificate_file_name>
                path: </path/to/certificate/dir>
    
  8. Configure the Peer Traffic Server to verify the Ingress client certificate:

    records:
       ssl:
          client:
             certification_level: 2
    
  9. Enable the TLS Bridge plugin in plugin.config. The plugin is configured by arguments in plugin.config. These arguments are in pairs of a destination and a peer. The destination is an anchored regular expression which is matched against the host name in the Client CONNECT. The destinations are checked in order and the first match is used to select the peer Traffic Server. The peer should be an FQDN or IP address with an optional port. For the example above, if the Peer Traffic Server is named "peer.ats" on port 4443 and the Service is at *.service.com, the peer argument would be "peer.ats:4443". In plugin.config this would be:

    tls_bridge.so .*[.]service[.]com peer.ats:4443
    

    Note the '.' characters are escaped with brackets so that, for instance, "someservice.com" does not match the rule.

    If there was another service, "*.altsvc.ats", via a different peer "altpeer.ats" on port 4443, the configuration would be

    tls_bridge.so .*[.]service[.]com peer.ats:4443 .*[.]altsvc.ats altpeer.ats:4443
    

    Mappings can also be specified in an external file. For instance, if there were a file named "bridge.config" in the default Traffic Server configuration directory which contained mappings, the plugin.config configuration line could look like

    tls_bridge.so .*[.]service[.]com peer.ats:4443 --file bridge.config
    

    or

    tls_bridge.so --file bridge.config .*[.]service[.]com peer.ats:4443

    These are not identical: direct mappings and file mappings are processed in order. This means in the first example the direct mapping is checked before any mapping in "bridge.config", while in the second example the mappings in "bridge.config" are checked before the direct mapping. There can be multiple "--file" arguments, which are processed in the order they appear in "plugin.config". The file name can be absolute or relative. If it is relative, it is relative to the Traffic Server configuration directory. Therefore, in these examples, "bridge.config" must be in the same directory as plugin.config.

    The contents of "bridge.config" must be one mapping per line, each with a regular expression separated by white space from the peer. This is identical to the format in plugin.config except there is only one pair per line. E.g., valid content for "bridge.config" could be

    # Primary service location.
    .*[.]service[.]com peer.ats:4443
    
    # Secondary.
    .*[.]altsvc.ats      altpeer.ats:4443
    

    Leading whitespace on a line is ignored, and if the first non-whitespace character is '#' then the entire line is ignored. Therefore if that is the content of "bridge.config", these two lines in "plugin.config" would behave identically

    tls_bridge.so --file bridge.config
    
    tls_bridge.so .*[.]service[.]com peer.ats:4443     .*[.]altsvc.ats altpeer.ats:4443
    

Notes

TLS Bridge is distinct from the more basic Layer 4 Routing available in Traffic Server. For the latter there is no interception or change of the TLS exchange between the Client and the Service. The exchange looks like this:

actor Client
participant "Ingress TS" as Ingress
participant Service

Client <-[#green]> Ingress : //TCP Connect//
Client -[#blue]-> Ingress : <font color="blue">TLS: ""CLIENT HELLO""</font>
note over Ingress : Map SNI to upstream Service
Ingress <-[#green]> Service : //TCP Connect//
Ingress -[#blue]-> Service : <font color="blue">TLS: ""CLIENT HELLO""</font>
note right : Duplicate of data from Client.
note over Ingress : Forward bytes between Client <&arrow-thick-left> <&arrow-thick-right> Service
Client <--> Service

The key points are

  • Traffic Server does no TLS negotiation at all. The properties of the connection between the Ingress Traffic Server and the Service are completely determined by the Client and Service negotiation.

  • No packets are modified; the "CLIENT HELLO" sent by the Ingress Traffic Server is an exact copy of the one sent to the Ingress Traffic Server by the Client. It is only examined for the SNI data in order to select the Service.
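
The SNI-to-upstream mapping described above is configured through sni.yaml rather than through this plugin. A minimal sketch, with hypothetical names, could look like

    sni:
      - fqdn: '*.service.com'
        tunnel_route: some.service.com:443

With such a rule the bytes of the TLS exchange are forwarded unmodified, exactly as in the diagram above.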

Implementation

The TLS Bridge plugin uses TSHttpTxnIntercept to gain control of the ingress Client session. If the session is valid then a separate connection to the peer Traffic Server is created using TSHttpConnect.

After the ingress Traffic Server connects to the peer Traffic Server it sends a duplicate of the Client CONNECT request. This is processed by the peer Traffic Server to connect to the Service. After this both Traffic Server instances then tunnel data between the Client and the Service, in effect becoming a transparent tunnel.
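
The following is a minimal sketch, not the plugin's actual source, of how a plugin can take over the Client transaction with TSHttpTxnIntercept and then use TSHttpConnect for the connection toward the peer; the destination matching, peer address handling, and tunnel I/O are only placeholder comments here.

    // Sketch only: intercept the Client transaction and connect toward the peer.
    #include <ts/ts.h>

    // Continuation handler that would accept the intercepted Client VConn and
    // drive the tunnel I/O (omitted in this sketch).
    static int
    tunnel_handler(TSCont, TSEvent, void *)
    {
      // Real plugin: on TS_EVENT_NET_ACCEPT, start moving data between the
      // Client VConn and the peer VConn.
      return 0;
    }

    static int
    handle_read_request(TSCont, TSEvent, void *edata)
    {
      TSHttpTxn txnp = static_cast<TSHttpTxn>(edata);

      // If the CONNECT destination matches a configured destination, take over
      // the transaction; the plugin, not Traffic Server, now answers the Client.
      TSCont tunnel = TSContCreate(tunnel_handler, TSMutexCreate());
      TSHttpTxnIntercept(tunnel, txnp);

      // A connection that re-enters Traffic Server would be opened here with
      // TSHttpConnect(); Traffic Server then makes the outbound TLS connection
      // to the peer. (Peer address resolution is omitted from this sketch.)

      TSHttpTxnReenable(txnp, TS_EVENT_HTTP_CONTINUE);
      return 0;
    }

    void
    TSPluginInit(int, const char *[])
    {
      TSPluginRegistrationInfo info;
      info.plugin_name   = "tls_bridge_sketch"; // hypothetical registration values
      info.vendor_name   = "example";
      info.support_email = "dev@example.com";
      TSPluginRegister(&info);

      TSHttpHookAdd(TS_HTTP_READ_REQUEST_HDR_HOOK, TSContCreate(handle_read_request, nullptr));
    }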

The overall exchange looks like the following:

@startuml

box "Client Network" #DDFFDD
actor Client
entity "User Agent\nVConn" as lvc
participant "Ingress ATS" as ingress
entity "Upstream\nVConn" as rvc
end box
box "Corporate Network" #DDDDFF
participant "Peer ATS" as peer
database Service
end box

Client -> ingress : TCP or TLS connect
activate lvc
Client -> ingress : HTTP CONNECT
ingress -> lvc : Intercept Transaction
ingress -> peer : TLS connect
activate rvc
note over ingress,peer : Secure Tunnel
ingress -> peer : HTTP CONNECT
note over peer : DNS for Service is\ndone here.
peer -> Service : TCP Connect

note over Client, Service : At this point data can flow between the Client and Server\nover the secure link as a virtual connection, including any TLS handshake.
Client <--> Service
lvc <-> ingress : <&arrow-thick-left> Move data <&arrow-thick-right>
ingress <-> rvc : <&arrow-thick-left> Move data <&arrow-thick-right>
note over ingress : Plugin explicitly moves this data.

@enduml

A detailed view of the plugin operation.

../../../_images/TLS-Bridge-Plugin.svg

A sequence diagram focusing on the request / response data flow. There is a NetVConn for the connection to the Peer Traffic Server which is omitted for clarity.

  • Blue dotted lines are request or response data.

  • Green lines are network connections.

  • Red lines are programmatic interactions.

  • Black lines are hook call backs.

The 200 OK sent from the Peer Traffic Server is parsed and consumed by the plugin. A non-200 response means there was an error and the tunnel is shut down. To handle cleaning up the Client response, the response code is stored and used later during cleanup.

../../../_images/TLS-Bridge-Messages.svg

A restartable state machine is used to recognize the end of the Peer Traffic Server response. The initial part of the response is easy because all that is needed is to wait until there is sufficient data for a minimal parse. The end can be an arbitrary distance into the stream and may not all arrive in the same socket read.

@startuml
[*] -r> State_0
State_0 --> State_1 : CR
State_1 --> State_0 : *
State_1 --> State_1 : CR
State_1 --> State_2 : LF
State_2 --> State_3 : CR
State_2 --> State_0 : *
State_3 -r> [*] : LF
State_3 --> State_1 : CR
State_3 --> State_0 : *
@enduml
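
As an illustration, and not the plugin's actual code, the transitions above can be written as a small function that is called once per socket read, with the state carried across calls:

    #include <string_view>

    // States correspond to the diagram above; Done means CR LF CR LF was seen.
    enum class State { S0, S1, S2, S3, Done };

    // Scan one chunk of the response. The caller keeps 'state' between reads so
    // the end of the headers is found even when it spans socket reads.
    bool
    find_end_of_headers(State &state, std::string_view chunk)
    {
      for (char c : chunk) {
        switch (state) {
        case State::S0:
          state = (c == '\r') ? State::S1 : State::S0;
          break;
        case State::S1:
          state = (c == '\n') ? State::S2 : (c == '\r') ? State::S1 : State::S0;
          break;
        case State::S2:
          state = (c == '\r') ? State::S3 : State::S0;
          break;
        case State::S3:
          if (c == '\n') {
            state = State::Done;
            return true;
          }
          state = (c == '\r') ? State::S1 : State::S0;
          break;
        case State::Done:
          return true;
        }
      }
      return state == State::Done;
    }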

Debugging

Debugging messages for the plugin can be enabled with the "tls_bridge" debug tag.
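
For example, the tag can be enabled in records.yaml with

    records:
       diags:
          debug:
             enabled: 1
             tags: tls_bridge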

Footnotes