Connecting to Token.io

Connecting to the Token.io Platform often means rapidly preparing your environment for integration, and a fast-tracked implementation can leave common fundamentals overlooked. At Token.io, we're proud proponents of site reliability engineering and firm believers in sharing tools and knowledge. Should you need help with anything covered in the preceding topics or those that follow, please contact our support team; we'll work with you to resolve any issues you encounter.

In the meantime, let's take a brief look at common connectivity pain points, with an eye to avoiding unnecessary server-to-server communication issues in the following areas:

Protocols

A protocol is a set of rules governing data communication between clients (for example, the web browsers internet users employ to request information) and servers (the machines holding the requested information).

Protocols usually consist of three main parts: Header, Payload and Footer. The Header, placed before the Payload, contains information including source and destination addresses, as well as other details (such as size and type) describing the Payload. The Payload is the actual information transmitted using the protocol. The Footer follows the Payload and acts as a control field, typically carrying error-detection data to ensure the Payload arrives free of errors, while the Header routes client-server requests to the intended recipients.
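To make the three parts concrete, here is a minimal sketch in Java of an invented toy frame format (not any real wire protocol): a length header, the payload itself, and a simple checksum footer for error detection.

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Toy frame: [4-byte header: payload length][payload bytes][4-byte footer: checksum]
public class ToyFrame {
    static byte[] encode(String message) {
        byte[] payload = message.getBytes(StandardCharsets.UTF_8);
        int checksum = 0;
        for (byte b : payload) checksum += b & 0xFF;   // naive error-detection footer

        ByteBuffer frame = ByteBuffer.allocate(4 + payload.length + 4);
        frame.putInt(payload.length);  // Header: size of the Payload
        frame.put(payload);            // Payload: the actual data
        frame.putInt(checksum);        // Footer: control field for error checking
        return frame.array();
    }

    public static void main(String[] args) {
        byte[] frame = encode("hello");
        System.out.println("frame length = " + frame.length); // 4 + 5 + 4 = 13
    }
}
```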

Token.io leverages the gRPC protocol to exchange protocol buffers between server applications (our clients' applications) and client applications (Token.io Cloud). gRPC permits a client application to directly call a method on a remote server application as if it were a local object, making it simple to create distributed services (which is effectively what the Token.io Cloud is — a massive, distributed application linking banks to TPPs and users).

gRPC relies on HTTP/2 for its binary framing and compression capabilities, as well as HTTP/2's native support for connection multiplexing. Whilst a service may support HTTP/2, it may not support the trailing headers (trailers) required by the gRPC protocol. This is a common issue that can create complications in environments not accustomed to serving HTTP/2 or gRPC. It's also an issue that can be easily overcome.
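As a concrete illustration of that framing, each gRPC message travels inside HTTP/2 DATA frames behind a five-byte length prefix: a one-byte compressed flag followed by a four-byte big-endian message length. The sketch below builds such a prefix in Java; the example message bytes are arbitrary placeholders.

```java
import java.nio.ByteBuffer;

// gRPC length-prefixed message framing:
// [1 byte: compressed flag][4 bytes: message length, big-endian][serialized protobuf]
public class GrpcFraming {
    static byte[] frame(byte[] serializedMessage, boolean compressed) {
        ByteBuffer buf = ByteBuffer.allocate(5 + serializedMessage.length);
        buf.put((byte) (compressed ? 1 : 0)); // compressed flag
        buf.putInt(serializedMessage.length); // message length (ByteBuffer is big-endian by default)
        buf.put(serializedMessage);
        return buf.array();
    }

    public static void main(String[] args) {
        byte[] msg = new byte[]{0x0A, 0x03, 0x61, 0x62, 0x63}; // arbitrary example bytes
        byte[] framed = frame(msg, false);
        System.out.println("prefix flag = " + framed[0]
                + ", declared length = " + ByteBuffer.wrap(framed, 1, 4).getInt());
    }
}
```

The call's final status, by contrast, arrives in HTTP/2 trailing headers (`grpc-status`), which is exactly what intermediaries that strip trailers break.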

Load Balancers

Load balancers at the cloud or network edge are typically the first devices to receive a connection from a source ultimately seeking to connect with a destination within an organisation's environment. With respect to the Token.io Cloud, this entails connecting to the SDK operated by our client. However, not all load balancers are created equal. Some have full support for HTTP/2, some have only partial support, and others refuse to accept HTTP/2 altogether. Understanding the difference and why it exists is key to a successful implementation.

Google Cloud load balancers fully support HTTP/2 and merely require creating a load balancer, pointing a public IP address at its outside interface, then configuring a listener pool to receive traffic. Seeing that Google drove the initial development of gRPC, it's no surprise that its load balancers include native L7 support for gRPC and HTTP/2.

Within Amazon Web Services (AWS), the waters begin to muddy somewhat. A network load balancer with TCP listeners is fully compatible because it's an L4 load balancer that distributes TCP connections to hosts within the associated target group. A network load balancer becomes incompatible the moment a TLS listener is provisioned, since this effectively makes the load balancer intercept TLS negotiation and pass a non-conforming connection to the application within the target group. Thus, when using an AWS NLB, simply create TCP listeners and never worry about your load balancer again.

In October 2020, AWS introduced full end-to-end HTTP/2 support in its Application Load Balancer (ALB) which also encompasses full support for the gRPC protocol.

If you are running your services on Azure, look to the layer 4 Azure Load Balancer, which supports load balancing TCP and UDP connections, meaning it will not interfere with the higher-level protocols used by gRPC. Azure's App Service does not support gRPC at all, because App Service applications are hosted within Internet Information Services (IIS), which does not support gRPC. With regard to Azure Container Instances (ACI), problems have been detected with Windows-based images; however, implementing the Token.io SDK within a Linux container is known to work.

While Token.io lacks the ability to test your vendor security and load balancing products, issues are known to exist with the following:

  • F5 devices do not support gRPC, even with an HTTP/2 profile for a pool enabled. This limitation is a result of gRPC utilizing mandatory trailing headers (trailers) within its specification. At present, F5 devices can be made to work by enabling SSL Pass Through for the pool. A bug has been filed for this at https://support.f5.com/csp/article/K61517014
  • Citrix Netscaler devices support end-to-end gRPC as of their 13.0 release on ADC devices. The implementation details can be found at https://docs.citrix.com/en-us/citrix-adc/current-release/system/grpc/grpc-end-to-end-configuration.html
  • Neustar, a cloud-based DDoS service, leverages Citrix Netscaler devices as its proxying layer and relies on Citrix for gRPC support. At present, Neustar requires engineering support to enable HTTP/2 for a given application profile. When configured this way, TLS connections terminate within their service, resulting in mangled gRPC frames reaching the Token.io SDK.

The takeaway here is that your mileage may indeed vary when it comes to your load balancer's support for HTTP/2 and gRPC. It is worth checking your load balancer documentation to see whether HTTP/2 is supported and, if so, how and to what extent. Many devices will accept an inbound HTTP/2 connection but pass on HTTP/1.1 connections to any target for which they are configured to pass traffic; WAF devices, such as Barracuda's WAF, are an example of this limitation. Others may support HTTP/2 but lack the ability to handle gRPC frames because their implementation does not support trailing headers, a distinction that is important to consider when implementing Token.io's SDK.

Proxy Servers

In environments where a proxy server is mandated for outbound connections to services on the Internet, Token.io SDKs support common proxy directives as environment variables or parameters passed to the runtime. HTTP_CONNECT proxies are supported in gRPC by default. Use the following settings to support your respective SDK.

Java SDK Proxy Support

To support proxy servers from the Java SDK, pass these flags to the JVM:

-Dhttp.proxyHost=proxy -Dhttp.proxyPort=port

-Dhttps.proxyHost=proxy -Dhttps.proxyPort=port

To bypass the proxy for addresses your application may need to support, add:

-Dhttp.nonProxyHosts="localhost|127.0.0.1|10.*.*.*|*.domain.tld"
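Alternatively, the same standard Java networking properties can be set programmatically before the SDK opens any connections. The host, port, and bypass list below are placeholders for your environment.

```java
// Set proxy properties in code instead of via JVM flags.
// Host/port values are illustrative placeholders.
public class ProxyConfig {
    public static void main(String[] args) {
        System.setProperty("https.proxyHost", "proxy.internal.example.com");
        System.setProperty("https.proxyPort", "3128");
        // Bypass the proxy for local and internal addresses.
        System.setProperty("http.nonProxyHosts", "localhost|127.0.0.1|10.*.*.*");

        System.out.println(System.getProperty("https.proxyHost"));
    }
}
```

Note that these properties must be set before the first outbound connection is made, as some HTTP stacks read them only once.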

C# SDK Support

To have your Token.io C# SDK-based application honour your network's proxy settings, set the following environment variables:

Environment.SetEnvironmentVariable("http_proxy", "<http://proxy:port>");

Environment.SetEnvironmentVariable("https_proxy", "<https://proxy:port>");

To bypass the proxy for addresses your application may need to support, add:

Environment.SetEnvironmentVariable("no_proxy", "127.0.0.1");

Environment Variables

The gRPC core libraries support common proxy variables in the following forms:

HTTP_PROXY="<http://proxy:port>"

HTTPS_PROXY="<https://proxy:port>"

To bypass the proxy for addresses your application may need to support:

NO_PROXY="localhost,127.0.0.1,10.0.0.0/24,*.domain.tld"

Mutual TLS

Token.io uses Mutual TLS (mTLS) to authenticate sessions between Token.io Cloud and servers built with our SDKs. In brief, mTLS is invoked when both the server (the application built with the SDK) and the client (Token.io Cloud) present certificates to validate their identity. Once TLS is mutually verified, communication between the two entities can commence.

Configuring mTLS

Within the application integrating with the SDK, a certificate and key must be provided. The sample applications demonstrate the required structure.

Note: These will live in the config/tls directory within your project root.

To configure your integrated SDK application for mTLS:

  1. Generate a cert.pem and a key.pem.
  2. Share the cert.pem with Token.io via a support ticket (https://support.token.io).
  3. Wait for Token.io to upload the cert to our trust store.

Within the config/tls directory, you will find a file called trusted-certs.pem. This contains the root CA, intermediate and leaf certificates for Token.io's sandbox and production environments (api-grpc.sandbox.token.io and api-grpc.token.io, respectively). These are the certificates that Token.io will present when connecting to the application built with the SDK.
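Assuming the files described above, the config/tls directory of an integrated application would look something like this (file names from the text; annotations are explanatory):

```
config/tls/
├── cert.pem           # your application's certificate, shared with Token.io
├── key.pem            # the matching private key (never shared)
└── trusted-certs.pem  # Token.io's certificate chain, presented on connection
```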

Self-signed Certificates

Token.io provides a script for creating self-signed certificates for use with mTLS. This script creates a private key and a self-signed certificate that is suitable for mTLS communication between entities, and is the lowest friction approach to implementing mTLS. The script will replace the cert.pem and key.pem files packaged with the sample applications.

Third-party Authority Certificates

If you wish to use a certificate from a third-party certificate authority, you are more than welcome to do so. In that case, key.pem is replaced with the key created during the CSR process, and cert.pem is replaced with the certificate returned by your certificate authority. If your CA is not a root CA, you will need to send your certificate bundle to Token.io in addition to cert.pem, so that we have your full certificate chain and can validate your leaf certificate (cert.pem) against the root and intermediate certificates provided with the bundle. Certificate issuers include instructions for obtaining the bundle in their documentation.

* * * *

With these fundamentals in mind, you're ready to explore onboarding with Token.io using our TPP SDK.