Kong API Gateway with Microservices — Part I — How to Install and Configure Kong on Kubernetes
Without a doubt, microservices solved many problems in complex enterprise landscapes. However, their sheer number and the growing need for coordination constitute another major challenge on the road from monolith to microservices. For instance, log aggregation and monitoring become much more complex compared to conventional architectures.
Presenting a complete list of such challenges is out of scope for this story. Instead, I’ll go over a tool that provides solutions to some of them. This series will therefore come in two parts. In the first one, the OSS (open-source software) version of Kong API Gateway will be installed and configured on a Kubernetes (K8S hereafter) cluster. The second one will present a prototype implementation of microservices behind Kong API Gateway.
Do we really need an API Gateway? If so, why?
The API Gateway tool set is becoming richer, with solutions from public cloud providers (AWS API Gateway, GCP API Gateway etc.) and projects that work in on-premise environments (Kong, Tyk, KrakenD etc.). But before diving deep into installing an API Gateway: do we really need one? The major rationale behind microservices is that they should be lightweight, each with a single responsibility. For instance, in an e-commerce application, a microservice might only be responsible for ordering. But without an API gateway, the developer needs to implement authentication and authorization logic within the ordering microservice, violating that design principle. This is just one use case; API Gateways provide functionality which would otherwise have to be dealt with elsewhere. So, to answer the question above: yes, you do need an API Gateway for a manageable and healthy microservice environment.
Kong API Gateway
My preference for Kong is mainly three-fold. First, it has wide protocol support (HTTP, gRPC, GraphQL, WebSocket etc.). Second, it is quite popular, being among the most-starred API gateway projects on GitHub. Third, its OSS version stands as a complete, usable product even in an enterprise environment.
Kong offers many flavors of installation. You can use it in the public cloud as SaaS, or spin up a Kong server on bare metal, a VM, a container or a K8S cluster. Each deployment option has its own steps and use cases, which are explained here. In general, there is some kind of database to store the configuration, a control plane to manipulate this database, and a data plane that serves user traffic according to the rules stored in the database. The flavor I chose is to install Kong on a K8S cluster with a PostgreSQL database. It is an on-premise K8S cluster similar to what is given here and here. Please note that it is also possible to install Kong on K8S without a database. In that case, the configuration is stored as K8S custom resources (defined by CRDs, Custom Resource Definitions), which the cluster itself keeps in etcd.
Kong’s abundant choices for installation and configuration might be overwhelming at the beginning. On K8S alone, at least three installation paths are possible: first, DB-less Kong Ingress Controller (KIC); second, KIC with a DB (my choice); and third, installation without KIC (this flavor uses the Kong Gateway API, similar to the K8S Gateway API). To make things more complicated, the KIC and Kong API Gateway terminology is tricky to grasp. In short, when installed, KIC provides the Kong API Gateway functionality. For my chosen installation path, Kong’s official GitHub repo combines all YAML files in a single manifest here.
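For illustration, installing this flavor can boil down to a couple of kubectl commands like the ones below. Treat this as a sketch only; the exact manifest URL and file name depend on the KIC release, so verify them in the repo linked above.

```sh
# Apply the combined Postgres-backed manifest from the KIC repo
# (the path and file name vary by release; check the repo first)
kubectl apply -f https://raw.githubusercontent.com/Kong/kubernetes-ingress-controller/main/deploy/single/all-in-one-postgres.yaml

# Watch the kong namespace until the pods are up
kubectl get pods -n kong --watch
```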
Choosing this flavor might leave you with one more ingress controller in your K8S environment (if this is an existing cluster with an ingress controller already installed). Some additional research would be beneficial if you plan to bring a service mesh into this equation. For a while, I’ve personally been using the Nginx Ingress Controller for monoliths and KIC for microservice apps in the same cluster, with no serious issues so far. Another option would be to leave KIC as the only ingress controller for both types of workload; I cannot comment on this since I have not tried that kind of usage.
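If you do run two ingress controllers side by side, the ingress class is what keeps them from competing over the same Ingress objects. The sketch below shows a hypothetical microservice Ingress claimed by KIC; Nginx-managed resources would keep ingressClassName: nginx instead (the names and host here are made up):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: orders-api          # hypothetical microservice Ingress
spec:
  ingressClassName: kong    # claimed by KIC; monolith Ingresses keep "nginx"
  rules:
    - host: api.example.com
      http:
        paths:
          - path: /orders
            pathType: Prefix
            backend:
              service:
                name: orders    # hypothetical backend Service
                port:
                  number: 80
```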
The resulting topology is presented below. In this installation, configuration is submitted to the K8S API server with a client (e.g. kubectl), then processed by the control plane and written to the PostgreSQL database. The data plane (the proxy) applies what is written to the database when routing incoming traffic.
Beware that there is only one Deployment in this setup, with a single pod by default. Both the control plane and the data plane run inside this pod as separate containers. The objects below were created as a result of this installation.
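You can confirm this layout yourself. Assuming the manifest installed everything into a kong namespace, something like the following lists the Deployment and the container names inside its pod (names may differ between manifest versions):

```sh
# One Deployment, one pod by default
kubectl get deployment -n kong

# List the containers inside the pod: expect a control-plane container
# (e.g. "ingress-controller") next to the data-plane "proxy" container
kubectl get pods -n kong -o jsonpath='{.items[*].spec.containers[*].name}'
```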
The Kong installation comes with its own CRDs, which can be seen below. Two of them are worth a brief aside here:
- KongConsumer: This object represents a user of a service. It might be a human user or another API.
- KongPlugin: Plugins extend Kong by adding functionality to it. They can be free or enterprise-only. You can also develop your own plugin if you like; Kong has a platform for them here. (Example manifests for both objects follow right after this list.)
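As an illustration, minimal manifests for these two objects could look like the sketch below. The names and the rate-limiting configuration are hypothetical; the field layout follows the configuration.konghq.com/v1 CRDs installed above:

```yaml
apiVersion: configuration.konghq.com/v1
kind: KongConsumer
metadata:
  name: demo-consumer
  annotations:
    kubernetes.io/ingress.class: kong   # lets KIC pick the object up
username: demo-user                     # Kong-side identity of the consumer
---
apiVersion: configuration.konghq.com/v1
kind: KongPlugin
metadata:
  name: rate-limit-per-minute
plugin: rate-limiting                   # bundled OSS plugin
config:
  minute: 5                             # allow at most 5 requests per minute
  policy: local                         # keep counters in-memory per node
```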
For further illustration, the screencast below demonstrates the creation of a KongConsumer via kubectl. After its creation, you can see that it has been written to the PostgreSQL database. Please note that the database has tables for the different Kong objects, such as plugins, upstreams and routes.
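If you want to reproduce that check without the screencast, a rough equivalent is below. The file name, pod name and database credentials are assumptions based on the all-in-one Postgres manifest defaults and may differ in your setup:

```sh
# Create the consumer sketched earlier (hypothetical file name)
kubectl apply -f demo-consumer.yaml

# Query the consumers table inside the Postgres pod
kubectl exec -it -n kong postgres-0 -- \
  psql -U kong -d kong -c 'SELECT username FROM consumers;'
```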
That’s all for this story. We now have a Kong deployment up and running in our K8S cluster. In the next part, Kong will be used to handle authentication and authorization in a microservice environment.
PS: If you liked the article, please support it with claps to reach more people. Cheers!