Tidepool services are hosted by Amazon Web Services (AWS). So, when an HTTP request is directed to our production backend — api.tidepool.org — it is routed through Amazon elastic load balancers to an API gateway running in Kubernetes.
Our legacy API gateway, styx, provides a number of basic features, including path-based routing and CORS support. However, it lacks several features that we want, including metrics, timeouts, and retries.
Fortunately, there are a number of alternatives that do! Open source Kubernetes-native gateways such as Contour, Ambassador, and Gloo are under active development and integrate well with Kubernetes.
After evaluating the alternatives, we replaced our custom gateway with Gloo by Solo.io. Gloo offers several benefits to Tidepool:
It has a free, open source version
It enables us to avoid lock-in to cloud vendor API gateways
The Gloo API Gateway design may be confusing at first glance if you are new to Kubernetes. However, Gloo is a Kubernetes native project that leverages concepts introduced in Kubernetes, including custom resources and custom controllers. Once you become comfortable with these concepts, the Gloo design may seem natural.
Control plane
We deploy Gloo in gateway mode. We configure Gloo by providing three types of Kubernetes custom resources (CRs):
Route table CRs specify collections of routes, i.e. mappings from HTTP requests to the Kubernetes services that field those requests. Gloo supports many other types of destinations or “upstreams”, but we only use Kubernetes upstreams now.
Routes may consider HTTP paths, verbs (methods), and host names when determining which service should receive a request.
Virtual service CRs specify a route table, a set of DNS names, and (optionally) a TLS certificate to use for termination of traffic.
Gateway CRs specify a set of virtual service CRs and a port to listen on.
Route table CRs, virtual service CRs, and gateway CRs are simply resources that define the control plane. They exist only as data stored in the Kubernetes persistent store, etcd.
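To make this concrete, here is a minimal sketch of the three CR types. The names, namespace, service, port, and domain below are invented for illustration, and only a small subset of each CRD's fields is shown; consult the Gloo documentation for the full schema.

```yaml
# Hypothetical route table: map request paths to a Kubernetes service.
apiVersion: gateway.solo.io/v1
kind: RouteTable
metadata:
  name: example-routes
  namespace: example-env
spec:
  routes:
    - matchers:
        - prefix: /data           # route on the HTTP path
      routeAction:
        single:
          kube:
            ref:
              name: example-api   # hypothetical Kubernetes service
              namespace: example-env
            port: 8080
---
# Hypothetical virtual service: bind DNS names (and, optionally, a TLS
# certificate via sslConfig) to a route table.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: example-vs
  namespace: example-env
spec:
  virtualHost:
    domains:
      - api.example.org           # hypothetical DNS name
    routes:
      - matchers:
          - prefix: /
        delegateAction:
          ref:
            name: example-routes
            namespace: example-env
---
# Hypothetical gateway: listen on a port and serve virtual services. With no
# explicit list or selector, a gateway serves all matching virtual services.
apiVersion: gateway.solo.io/v1
kind: Gateway
metadata:
  name: example-http-gateway
  namespace: gloo-system
spec:
  bindAddress: '::'
  bindPort: 8080
  httpGateway: {}
  proxyNames:
    - gateway-proxy               # the proxy (data plane) this gateway configures
```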
Data plane
When you deploy Gloo (typically via Helm), you create one or more proxy pods, or simply proxies.
Each proxy is associated with one or more Kubernetes Services. A Kubernetes Service may in turn be associated with a cloud-provided load balancer, such as an AWS ELB, that delivers the actual traffic.
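As a rough sketch (the default install names the proxy gateway-proxy in the gloo-system namespace; the exact labels and ports are set by the Helm chart), the Service fronting a proxy looks something like this:

```yaml
# Sketch of the Service that fronts a Gloo proxy. Type LoadBalancer asks the
# cloud provider to create a load balancer (an ELB on AWS) that forwards
# traffic to the Envoy pods behind this Service.
apiVersion: v1
kind: Service
metadata:
  name: gateway-proxy
  namespace: gloo-system
spec:
  type: LoadBalancer
  selector:
    gateway-proxy: live          # assumed label; the Helm chart sets the real selector
  ports:
    - name: http
      port: 80
      targetPort: 8080           # the HTTP gateway's bind port
    - name: https
      port: 443
      targetPort: 8443           # the HTTPS gateway's bind port
```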
Translation from control plane resources to data plane configuration
Two custom controllers translate custom resources to data plane configurations: the Gloo controller and the gateway controller.
Gateway CRs are at the top of a hierarchy of control plane resources that describe how to control the data plane.
Each Gateway CR may be associated with one or more proxies. The gateway controller translates the Gateway CRs associated with each proxy into a single control plane resource called a Proxy CR that defines how to route within that proxy.
The Proxy CR is consumed by the Gloo controller. The Gloo controller configures the Envoy container that runs within each proxy of the data plane.
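We never author Proxy CRs ourselves; the gateway controller generates them. Heavily abridged, and reusing the hypothetical names from the sketches above, a generated Proxy looks roughly like this:

```yaml
# Abridged, hypothetical sketch of a generated Proxy CR. The gateway controller
# merges the Gateway, VirtualService, and RouteTable CRs into listeners and
# virtual hosts; the Gloo controller then renders this into Envoy configuration.
apiVersion: gloo.solo.io/v1
kind: Proxy
metadata:
  name: gateway-proxy
  namespace: gloo-system
spec:
  listeners:
    - name: listener-::-8080
      bindAddress: '::'
      bindPort: 8080
      httpListener:
        virtualHosts:
          - name: example-env.example-vs
            domains:
              - api.example.org
            routes:
              - matchers:
                  - prefix: /data
                routeAction:
                  single:
                    kube:
                      ref:
                        name: example-api
                        namespace: example-env
                      port: 8080
```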
Tidepool implementation
Tidepool uses most of the features of Gloo to route Tidepool traffic.
Specifically, we create three Gateway CRs:
A Gateway CR to listen to external HTTP traffic from a load balancer
A Gateway CR to listen to external HTTPS traffic from a load balancer
A Gateway CR to listen to internal HTTP traffic
The two externally-oriented Gateway CRs configure a single proxy that receives traffic from an AWS classic load balancer that employs the proxy protocol to propagate source IP addresses.
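A sketch of such an externally facing HTTPS gateway follows (names are hypothetical): ssl restricts it to virtual services that carry TLS configuration, and useProxyProto tells Envoy to expect the proxy protocol header. On the AWS side, the proxy protocol is enabled on the classic load balancer via the service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*" annotation on the proxy's Service.

```yaml
# Sketch of an external HTTPS Gateway. The bind port is whatever the proxy's
# Service maps port 443 to (8443 in the Service sketch above).
apiVersion: gateway.solo.io/v1
kind: Gateway
metadata:
  name: external-https
  namespace: gloo-system
spec:
  bindAddress: '::'
  bindPort: 8443
  ssl: true                      # only serve virtual services with TLS config
  useProxyProto: true            # the ELB prepends a proxy protocol header
  httpGateway: {}
  proxyNames:
    - gateway-proxy              # the externally facing proxy
```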
The one internally-oriented Gateway CR configures a proxy that receives traffic from internal Tidepool pods. In this way, internal Tidepool pods do not need to know the actual names of the services that handle their requests. This level of indirection also lets internal traffic benefit from the features of Gloo.
In a cluster with multiple Tidepool environments, each environment contributes three Virtual Service CRs, one for each of the three Gateway CRs. That is, all the Tidepool environments share one set of Gateway CRs. This means that only one AWS Elastic Load Balancer (ELB) is needed for all Tidepool traffic to a single Kubernetes cluster.
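For example (the environment names, domains, and secrets below are hypothetical), an environment attaches its own virtual service to the shared HTTPS gateway simply by using its own domain:

```yaml
# One hypothetical environment's HTTPS VirtualService. A second environment
# would define an identical resource with "env2" substituted throughout; both
# are served by the same shared gateway, and therefore by the same ELB.
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: https
  namespace: env1
spec:
  sslConfig:
    secretRef:
      name: env1-tls             # hypothetical TLS certificate secret
      namespace: env1
  virtualHost:
    domains:
      - env1.example.org         # hypothetical per-environment domain
    routes:
      - matchers:
          - prefix: /
        delegateAction:
          ref:
            name: env1-routes    # hypothetical per-environment route table
            namespace: env1
```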
Conclusion
Gloo has become an integral part of the modernized Tidepool backend. We are grateful for the excellent work that Solo.io has done and look forward to a long and fruitful partnership.
Upcoming
In our next engineering blog post, we will discuss how we deploy our software continuously with GitOps.