According to a 2018 survey from Portworx, four out of five enterprises are now running containers, and 83 percent are running them in production. Given that only 67 percent were doing so in 2017, it’s clear that containers are more than a fad.
With containers’ newfound popularity, though, some companies are struggling to establish an efficient traffic flow and effectively implement security policies within Kubernetes, one of the most popular container-orchestration platforms.
As a container orchestrator and cluster manager, Kubernetes focuses on providing fantastic infrastructure, and has been adopted by countless companies as a result. It just celebrated its five-year anniversary, and a recent Forbes article called Kubernetes “the most popular open source project of our times” and revealed that it’s used by companies like Capital One, ING Group, Philips, VMware and Huawei.
Companies that use a microservices architecture (MSA) for developing applications tend to find that Kubernetes offers a number of advantages when it comes time to deploy those applications.
For all those reasons, it’s essential that organizations understand the unique traffic flow and security requirements that Kubernetes entails. Here, we’ll explain:

- What Kubernetes is and how it works
- The challenges Kubernetes presents
- The seven main requirements of Kubernetes traffic flow and security
Let’s get started.
Kubernetes is an open-source container-orchestration system. According to Kubernetes’ own definition, it’s a portable and extensible platform for managing containerized workloads and services, one that provides a container-centric management environment.
Let’s look at a diagram to understand the basic way in which Kubernetes works. There, you can see one master node and two worker nodes. The master node tells the worker nodes what to do, and the worker nodes carry out the instructions they’re given. Additional worker nodes can be added to scale out the infrastructure.
If you look closely, you’ll notice that the word “Docker” appears in each section. Docker is a container platform that’s ideal for running containers on a single piece of hardware or virtual machine (VM).
However, if you’re working with hundreds of containers across several different applications, you won’t want to put them all on one machine. This is one of the challenges that spawned Kubernetes.
With the overlay network, illustrated as a red bar in the diagram above, a container in the master node doesn’t have to know that the container it needs to talk to is in worker node two. Instead, it can simply talk to it.
Another primary function of Kubernetes is to group containers into what are known as “pods,” multiples of which can run inside the same node. This way, if an application consists of several containers, those containers can be grouped into one pod that starts and stops as a unit.
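To make that concrete, here’s a minimal sketch of a two-container pod manifest. The pod name and images are hypothetical placeholders; the point is that both containers share the pod’s lifecycle and network namespace, so they start and stop together.

```yaml
# A minimal sketch of a pod that groups two containers.
# Both containers start and stop with the pod and share its
# network namespace. Names and images are placeholders.
apiVersion: v1
kind: Pod
metadata:
  name: web-app
  labels:
    app: web-app
spec:
  containers:
    - name: web
      image: nginx:1.25        # the main application container
      ports:
        - containerPort: 80
    - name: helper
      image: busybox:1.36      # a companion container in the same pod
      command: ["sh", "-c", "while true; do sleep 3600; done"]
```

Applying this with `kubectl apply -f pod.yaml` starts both containers as one unit.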
Like all other container-orchestration systems, Kubernetes comes with its own set of obstacles.
These include:

- Unconventional networking, in which internal and external networks are distinct
- Frequently changing IP addresses for pods and containers
- Access control between microservices
- Limited visibility into traffic at the application layer
Let’s dive a little deeper into those challenges. The networking of Kubernetes is unconventional in that, despite the use of an overlay network, the internal and external networks are distinct from one another.
Plus, Kubernetes intentionally isolates malfunctioning or failing nodes or pods in order to keep them from bringing down the entire application. This can result in frequently changing pod IP addresses. Services that rely on knowing a pod or container’s IP address then have to figure out what the new IP addresses are.
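This is exactly the problem the Kubernetes Service abstraction addresses: a Service gives a set of pods a stable virtual IP and DNS name, and Kubernetes keeps the mapping to the current pod IPs up to date. A minimal sketch, assuming pods labeled app: web-app:

```yaml
# A minimal sketch of a Service. Clients talk to the Service's
# stable virtual IP / DNS name; Kubernetes tracks the (changing)
# IPs of the pods that match the selector.
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  selector:
    app: web-app      # forwards to pods carrying this label
  ports:
    - port: 80        # stable port on the Service
      targetPort: 80  # port on the backing pods
```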
When it comes to access control between microservices, it’s important for companies to realize that traffic flowing between Kubernetes nodes is also capable of flowing to an external physical box or VM. This can both eat up resources and weaken security.
Finally, an inability to examine information at the application layer is a big problem. Without that visibility, enterprises can miss key opportunities to gather detailed analytics and actionable insights.
So far, we’ve discussed the basic functionality of Kubernetes as well as the challenges it presents. Now, we’ll move on to the requirements of Kubernetes and cloud security, based on A10 Networks’ 15 years of experience.
We’ll be talking about seven requirements:

- Advanced application delivery and load balancing
- Dynamic configuration through an Ingress controller
- Management of north-south traffic
- Central control for scaling out
- Securing east-west traffic with a service mesh
- Encryption of traffic between nodes
- Visibility and analytics at the application layer
Let’s dig deeper into each one.
While companies may already use an advanced Application Delivery Controller (ADC) for other areas of their infrastructure, it’s necessary to deploy one for Kubernetes as well. This allows administrators to do more advanced load balancing than what Kubernetes provides by default.
Kubernetes is already equipped with a network proxy called kube-proxy. It’s designed for simple usage and works by adjusting iptables rules to route traffic at Layer 3/4. However, it’s very basic and differs from what most enterprises are used to.
Many people will place an ADC or load balancer above their cloud. This provides the ability to create a static virtual IP that’s available to everyone and to configure everything behind it dynamically.
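How that static virtual IP gets wired up varies by environment, but the usual Kubernetes-side hook is a Service of type LoadBalancer, which asks whatever load-balancer integration is installed (a cloud provider’s, or an ADC vendor’s controller) to provision an external VIP. A minimal sketch with hypothetical names:

```yaml
# A minimal sketch of exposing an application through an external
# load balancer or ADC. type: LoadBalancer asks the cluster's
# load-balancer integration to allocate a stable external VIP.
apiVersion: v1
kind: Service
metadata:
  name: web-app-public
spec:
  type: LoadBalancer
  selector:
    app: web-app
  ports:
    - port: 443        # port exposed on the external VIP
      targetPort: 8443 # port on the backing pods
```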
As pods and containers start up, the ADCs can be dynamically configured to provide access to the new application, while implementing network security policies and, in some cases, enforcing business data rules. This is usually accomplished through the use of an “Ingress controller” that sees the new pods and containers start up, and either configures an ADC to provide access to the new application or informs another “Kubernetes controller” node about the change.
Since everything can be constantly shifting within the Kubernetes cloud, there is simply no practical way for the box sitting above it to keep track of everything on its own. That’s where something like the purple box pictured above comes in.
That purple box is generally referred to as an Ingress controller. When a container starts or stops, that creates an event within Kubernetes. Then, the Ingress controller identifies that event and responds to it accordingly.
In the example pictured above, the Ingress controller recognizes that a container has started and must therefore be added to the load-balancing pool. This way, the application controller, whether it’s above or within the cloud, is kept up to date.
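That event-driven reconfiguration is usually driven by Ingress resources like the sketch below. The hostname and service name are hypothetical; the controller watches these resources (and the pods behind them) and reprograms the ADC or load balancer as things change.

```yaml
# A minimal sketch of an Ingress resource. An Ingress controller
# watches resources like this and configures the load balancer /
# ADC so that traffic for app.example.com reaches the web-app
# Service, wherever its pods currently run.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app-ingress
spec:
  rules:
    - host: app.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80
```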
This takes a great burden off of administrators and is significantly more efficient than manual management.
North-south and east-west are both general terms to describe the direction of traffic flow. In the case of north-south traffic, traffic is flowing in and out of the Kubernetes cloud.
As mentioned before, companies need something placed above the Kubernetes cloud to watch the traffic. This could be a firewall, a DDoS mitigation system or anything else that can catch malicious traffic.
These devices are also useful for traffic management: if certain traffic needs to go to specific places, this is the ideal point to steer it. The Ingress controller is also helpful in this area.
If enterprises can automate this kind of functionality with a unified solution, they can achieve significant gains in efficiency, security and manageability.
Scaling out is something else that enterprises need to take into account, especially in terms of security.
As shown in the diagram above, the Ingress controller (represented by the purple box) is still there, but this time it’s handling multiple Kubernetes nodes and is observing the entire Kubernetes cluster.
Above the Ingress controller is the blue circle, which in this case represents the A10 Networks Harmony Controller. Such a controller allows for efficient load distribution and can quickly send information to the appropriate location.
With a central controller like this, it’s imperative to choose one that can handle scaling in and scaling out with little to no additional configuration on existing solutions.
In contrast to north-south traffic, which flows in and out of the Kubernetes cloud, east-west traffic flows between Kubernetes nodes. In the diagram above, you can see how east-west traffic operates for many organizations.
When traffic flows between Kubernetes nodes, this traffic can be sent over physical networks, virtual or overlay networks, or both. Keeping tabs on how traffic flows from one pod or container to another can become quite complex without some way of monitoring those east-west traffic flows.
Plus, it can also present a serious security risk: attackers who gain access to one container can gain access to the entire internal network.
Luckily, companies can implement something called a “service mesh” like the A10 Secure Service Mesh. This can secure east-west traffic by acting as a proxy between containers to implement security rules, and is also able to help with scaling, load balancing, service monitoring and more.
Additionally, a service mesh can function inside the Kubernetes cloud, without sending traffic to a physical box or VM. Here’s what east-west traffic can look like with an efficient service mesh in place:
With this type of solution, companies like financial institutions can easily keep information where it should be without compromising security.
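A full service mesh adds proxies, encryption and telemetry, but it’s worth noting that Kubernetes itself ships a simpler building block for restricting east-west traffic: the NetworkPolicy resource. It won’t proxy or encrypt anything, and enforcement depends on the cluster’s network plugin, but it can limit which pods may talk to which. A minimal sketch with hypothetical labels:

```yaml
# A minimal sketch of a NetworkPolicy: only pods labeled
# app: web-app may open connections to the database pods.
# This limits how far an attacker can move east-west after
# compromising a single container.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-web-only
spec:
  podSelector:
    matchLabels:
      app: database        # the pods being protected
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: web-app # the only pods allowed in
```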
Without proper encryption, unencrypted information can flow from one physical Kubernetes node to another. This presents a serious problem, especially for financial institutions and other enterprises that handle particularly sensitive information.
That’s why, when evaluating a cloud security product, it’s important for enterprises to select one that encrypts traffic when it leaves a node and decrypts it when it enters one.
There are two ways that vendors provide this type of protection:

- Sidecar proxy deployment
- Hub-spoke proxy deployment
The first option, sidecar proxy deployment, is arguably the most popular.
With such a deployment, administrators can tell Kubernetes that whenever a particular pod is started, one or more other containers should be started in that pod as well.
Typically, that other container is some type of proxy that can manage the traffic flowing out of the pod.
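In manifest form, the sidecar pattern looks roughly like the sketch below: a proxy container sits next to the application container in the same pod. The proxy image is a hypothetical placeholder; in practice, a mesh usually injects its own proxy (and the traffic-redirection setup, which is omitted here) automatically.

```yaml
# A minimal sketch of the sidecar pattern: each pod carries its
# own proxy container, which handles traffic in and out of the
# pod. The proxy image is a hypothetical placeholder.
apiVersion: v1
kind: Pod
metadata:
  name: web-app-with-sidecar
spec:
  containers:
    - name: web
      image: nginx:1.25         # the application container
      ports:
        - containerPort: 80
    - name: proxy
      image: example/proxy:1.0  # hypothetical sidecar proxy
      ports:
        - containerPort: 15001  # port the proxy listens on;
                                # redirecting pod traffic into it
                                # is set up separately (not shown)
```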
The downside of sidecar proxy deployment is that, as you can see in the diagram above, it requires one proxy instance, or sidecar, per pod, and will therefore consume a certain amount of extra resources.
On the other hand, companies can opt for the hub-spoke proxy deployment. In this type of deployment, one proxy handles the traffic flowing out of every Kubernetes node. As a result, fewer resources are required.
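In Kubernetes terms, a per-node proxy like this is typically deployed as a DaemonSet, which schedules exactly one copy of a pod on every node. A minimal sketch, again with a hypothetical proxy image:

```yaml
# A minimal sketch of a hub-spoke style per-node proxy: the
# DaemonSet controller runs one instance of this pod on each
# Kubernetes node, so a single proxy handles that node's traffic.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-proxy
spec:
  selector:
    matchLabels:
      app: node-proxy
  template:
    metadata:
      labels:
        app: node-proxy
    spec:
      containers:
        - name: proxy
          image: example/proxy:1.0  # hypothetical proxy image
          ports:
            - containerPort: 15001
              hostPort: 15001       # reachable on the node itself
```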
Last but far from least, it’s of vital importance that enterprises understand the details of traffic at the application layer.
With controllers in place to monitor both north-south and east-west traffic, there are already two ideal points to collect traffic information.
Doing so can aid in both application optimization and security, and enables a range of functions, from gathering simple big-picture traffic metrics to examining individual packets, log entries and transactions.

So, when companies are talking to vendors, it’s essential that they determine which of those functions their products can offer.
With products like ours at A10 Networks, it’s possible to see big-picture analytics as well as the individual packets, log entries or transactions in question. Products with that type of granularity are those that organizations should be seeking out.
To wrap up, let’s look at the things that companies should be looking for in terms of traffic flow and security in Kubernetes. All of these considerations also have the benefit of drastically simplifying things for dev and ops teams:

- An advanced ADC for load balancing beyond what Kubernetes offers by default
- An Ingress controller that configures everything dynamically
- Control and inspection of north-south traffic
- A central controller that handles scaling in and out
- A service mesh to secure and manage east-west traffic
- Encryption of traffic between nodes
- Visibility and analytics at the application layer
If enterprises prioritize those items, they can enjoy streamlined, automated and secure traffic flow within Kubernetes. That’s something that your hardware, your bottom line and your ops teams will all appreciate.
At a recent webinar, A10 Networks’ own John Allen, Sr. Systems Engineer, took a deep dive into this topic. Click here to view the full webinar.