Microservice Security Design Patterns for Kubernetes (Part 1)

In this multi-part blog series, I will describe some microservice security design patterns to implement micro-segmentation and deep inspection in the interior of your Kubernetes cluster to further secure your microservice applications, not just the cluster. I will also demonstrate the design patterns with working Proof of Concept deployments that you can use as a starting point.

Follow-up posts:

  • Part 2: Setting up the Insecure Deployment
  • Part 3: The Security Service Layer Pattern
  • Part 4: The Security Sidecar Pattern
  • Part 5: The Service Mesh Sidecar-on-Sidecar Pattern

There are many tutorials on microservice security and on how to secure your Kubernetes cluster, covering a wide range of topics.

These are worthy topics and they encompass many of the issues that are relevant to securing modern microservice architectures. But there are a couple of important items that I’d like to emphasize:

  • Controlling East/West traffic (layers 3 and 4) between Pods within the Kubernetes cluster (aka micro-segmentation)
  • Deep inspection of the application traffic (layers 5, 6, and 7) between Pods within the Kubernetes cluster (aka IPS or WAF)

These are concepts that have been around for a while in the traditional on-premises and virtualized data center world. It was recognized long ago that building a hard, crusty edge while leaving a soft, gooey interior for attackers to exploit is no longer adequate. The attack surface can include vulnerabilities buried deep inside the application architecture, including well-known OWASP Top 10 web application attacks such as Cross-Site Scripting (XSS), SQL Injection, Remote Code Execution (RCE), API attacks, and more.

Let’s discuss some microservice security patterns that can help.

Kubernetes Application Security Patterns

There are three fairly intuitive design patterns that I will be describing:

  1. Security Service Layer Pattern
  2. Security Sidecar Pattern
  3. Service Mesh Security Plugin Pattern

I’ll be using the following simple Kubernetes deployment to show how we can layer micro-segmentation and application inspection within the cluster to provide better microservice security.

Figure 1: Simple Simulated Microservice Deployment


In this microservice architecture we see three simulated services:

  1. Public Web interface service
  2. Internal Authentication service
  3. Internal Database service

I’m using my microservice traffic and attack generation simulator called microsim to provide a realistic environment with a majority of ‘normal’ JSON/HTTP traffic between services with occasional SQL Injection attack traffic from the WWW service to the internal Auth and DB services.

Now let’s get into the different design patterns.

Security Service Layer Pattern

The Security Service Layer Pattern is probably the simplest to understand, since it is analogous to how micro-segmentation and deep inspection are deployed in traditional environments.

Figure 2: Security Service Layer Pattern

In this design pattern we see the insertion of a security layer in front of each microservice. In this case we are using the official OWASP modsecurity-crs container on Docker Hub. This container provides WAF functionality with the OWASP Core Rule Set and will detect attacks over HTTP, including the simulated SQL Injection attack traffic between microservices. Layer 3 and 4 micro-segmentation is implemented via a network provider that supports Network Policy.
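As a sketch of the micro-segmentation half of this pattern, a Network Policy like the following could restrict a protected service so that only its security layer may reach it. The label names and port number here are illustrative assumptions, not values from the actual POC deployment:

```yaml
# Hypothetical sketch: only the WAF layer may reach the DB Pods.
# Labels (app: db, app: db-modsecurity) and the port are illustrative.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-allow-waf-only
spec:
  podSelector:
    matchLabels:
      app: db                     # the protected microservice
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: db-modsecurity # the security service layer in front of it
      ports:
        - protocol: TCP
          port: 8080              # assumed service port
```

A policy like this, one per protected service plus rules for the security tier itself, is where the "requires more micro-segmentation rules" con comes from.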

Some of the pros and cons of this design include:

Pros:

  • Simple to understand
  • Allows scaling of the security tiers independent of the microservices they are protecting
  • Treats application security as a microservice
  • No need to change microservice ports

Cons:

  • Creates additional services in the cluster
  • Adds traffic flow complexity
  • Requires more micro-segmentation rules

Security Sidecar Pattern

The Security Sidecar Pattern takes the concept of the Security Service Layer Pattern and collapses the additional services into the microservice Pods. Sidecar proxy containers, such as modsecurity, can be explicitly configured as part of the Deployment spec or can be injected into the Pods via a mutating admission webhook.

Figure 3: Security Sidecar Pattern

In this design pattern we see the insertion of a security proxy container within each Pod, so both the security proxy and application containers are running in the same Pod. In this case we are also using the official OWASP modsecurity-crs container on Docker Hub. Layer 3 and 4 micro-segmentation is implemented via a network provider that supports Network Policy.
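A minimal sketch of what such a Pod spec might look like, with the WAF and the application listening on different ports inside the shared network namespace. The image names, environment variable, and port numbers are assumptions to illustrate the shape; check the modsecurity-crs image documentation for its real configuration options:

```yaml
# Hypothetical sidecar sketch: both containers share the Pod's network
# namespace, so they must listen on different TCP ports.
apiVersion: v1
kind: Pod
metadata:
  name: db
  labels:
    app: db
spec:
  containers:
    - name: waf                    # receives traffic from the Service
      image: owasp/modsecurity-crs
      ports:
        - containerPort: 80
      env:
        - name: BACKEND            # reverse-proxy target (assumed variable name)
          value: "http://localhost:8080"
    - name: app                    # the actual microservice
      image: my-db-service         # placeholder image name
      ports:
        - containerPort: 8080
```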

Some of the pros and cons of this design include:

Pros:

  • Simple to understand
  • Unifies the scaling of the security and application microservices
  • The security proxy can be automatically injected into the Pod
  • Works with an existing Service Mesh using the Sidecar on Sidecar pattern
  • Requires fewer micro-segmentation rules

Cons:

  • Requires the Security container and Application container to run on different TCP ports within the Pod
  • May result in over-provisioning of the security layer resources

Service Mesh Security Plugin Pattern

The Service Mesh Security Plugin Pattern takes the concept of the Security Sidecar Pattern but implements the security functionality as a plugin to the Service Mesh’s data plane sidecar container (e.g. Envoy in an Istio Service Mesh). 

Figure 4: Service Mesh Security Plugin Pattern


In this design pattern we see the insertion of a service mesh data plane container (e.g. Envoy) within each Pod, so both the Service Mesh proxy and application containers are running in the same Pod. In this case the application layer inspection is handled through a modsecurity plugin for Envoy. Layer 3 and 4 micro-segmentation is implemented via the Service Mesh policy.
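To illustrate the injection mechanics: in an Istio mesh, the Envoy data plane container is typically injected automatically into Pods created in a labeled namespace, rather than being declared in each Deployment spec. A sketch (the namespace name is hypothetical):

```yaml
# With Istio, labeling a namespace enables automatic Envoy sidecar
# injection for Pods created in it. The namespace name is illustrative.
apiVersion: v1
kind: Namespace
metadata:
  name: microsim
  labels:
    istio-injection: enabled
```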

Some of the pros and cons of this design include:

Pros:

  • More cleanly extends security into an existing Service Mesh
  • Unifies the scaling of the security and application microservices
  • The Service Mesh proxy can be automatically injected into the Pod
  • Micro-segmentation rules can be implemented via Service Mesh policy
  • Service Mesh enables many advanced application delivery features

Cons:

  • Service Mesh deployments are more complex
  • May result in over-provisioning of the security layer resources

Secure Microservice POC Deployments

In my opinion, the Security Sidecar Pattern is the most convenient for small projects, but using a Service Mesh is probably a better idea for larger, more complex architectures. In some cases a hybrid approach will make more sense.

Leave a reply if you know of any other designs you’ve seen in the field! Stay tuned for future posts where I will demonstrate simple proof of concept implementations of these security design patterns.

Next in the series: Part 2

Bringing the Unix Philosophy to the 21st Century

Try the jc web demo!

Do One Thing Well

The Unix philosophy of using compact expert tools that do one thing well and pipelining them together to manipulate data is a great idea and has worked well for the past few decades. This philosophy was outlined in the 1978 Foreword to the Bell System Technical Journal describing the UNIX Time-Sharing System:

Foreword to the Bell System Technical Journal

Items i (make each program do one thing well) and ii (expect the output of every program to become the input to another) are oft repeated, and for good reason. But it is time to bring this philosophy into the 21st century by further defining a standard output format for non-interactive use.

Unfortunately, this is the state of things today if you want to grab the IP address of one of the ethernet interfaces on your linux system:

$ ifconfig ens33 | grep inet | awk '{print $2}' | cut -d/ -f1 | head -n 1

This is not beautiful.

Up until about 2013, assuming unstructured text was a good way to output data at the command line made as much sense as anything. Unix/linux has many text processing tools, like sed, awk, grep, tr, cut, rev, etc., that can be pipelined together to reformat the desired data before sending it to the next program. Of course, this has always been a pain and is the source of countless questions all over the web about how to parse the output of so-and-so program. The requirement to manually parse unstructured (in some cases only human-readable) data has made life much more difficult than it needs to be for the average linux administrator.

But in 2013 a certain data format called JSON was standardized as ECMA-404 and later in 2017 as RFC 8259 and ISO/IEC 21778:2017. JSON is ubiquitous these days in REST APIs and is used to serialize everything from data between web applications, to Indicators of Compromise in the STIX2 specification, to configuration files. There are JSON parsing libraries in all modern programming languages and even JSON parsing tools for the command line, like jq. JSON is everywhere, it’s easy to use, and it’s a standard.
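To illustrate the point, consuming JSON requires nothing beyond a language's standard library. A minimal Python sketch, using a hand-made fragment shaped loosely like the ip output shown below:

```python
import json

# Hand-made fragment shaped loosely like `ip -j addr` output, for illustration.
doc = '{"ifname": "ens33", "mtu": 1500, "addr_info": [{"family": "inet", "local": "192.168.71.131"}]}'

data = json.loads(doc)  # one stdlib call replaces a grep/awk/cut pipeline
ipv4 = [a["local"] for a in data["addr_info"] if a["family"] == "inet"]
print(ipv4[0])          # -> 192.168.71.131
```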

Had JSON been around when I was born in the 1970s, Ken Thompson and Dennis Ritchie might very well have embraced it as a recommended output format to help programs “do one thing well” in a pipeline.

To that end, I argue that linux and all of its supporting GNU and non-GNU utilities should offer JSON output options. We already see some limited support of this in systemctl and the iproute2 utilities like ip where you can output in JSON format with the -j option. The problem is that many linux distros do not include a version that offers JSON output (e.g. centos, currently). And even then, not all functions support JSON output as shown below:

Here is ip addr with JSON output:

$ ip -j addr show dev ens33
 [{
         "addr_info": [{},{}]
     },{
         "ifindex": 2,
         "ifname": "ens33",
         "flags": ["BROADCAST","MULTICAST","UP","LOWER_UP"],
         "mtu": 1500,
         "qdisc": "fq_codel",
         "operstate": "UP",
         "group": "default",
         "txqlen": 1000,
         "link_type": "ether",
         "address": "00:0c:29:99:45:17",
         "broadcast": "ff:ff:ff:ff:ff:ff",
         "addr_info": [{
                 "family": "inet",
                 "local": "192.168.71.131",
                 "prefixlen": 24,
                 "broadcast": "192.168.71.255",
                 "scope": "global",
                 "dynamic": true,
                 "label": "ens33",
                 "valid_life_time": 1732,
                 "preferred_life_time": 1732
             },{
                 "family": "inet6",
                 "local": "fe80::20c:29ff:fe99:4517",
                 "prefixlen": 64,
                 "scope": "link",
                 "valid_life_time": 4294967295,
                 "preferred_life_time": 4294967295
             }]
     }
 ]

And here is ip route not outputting JSON, even with the -j flag:

$ ip -j route
 default via 192.168.71.2 dev ens33 proto dhcp src 192.168.71.131 metric 100 
 192.168.71.0/24 dev ens33 proto kernel scope link src 192.168.71.131 
 192.168.71.2 dev ens33 proto dhcp scope link src 192.168.71.131 metric 100

Some other, more modern tools, like kubectl and the aws-cli tool, offer more consistent JSON output options, which allow much easier parsing and pipelining of the output. But there are many older tools that still output nearly unparsable text (e.g. netstat, lsblk, ifconfig, iptables, etc.). Interestingly, Windows PowerShell has embraced structured data, and that’s a good thing the linux community can learn from.

How do we move forward?

The solution is to start an effort to go back to all of these legacy GNU and non-GNU command line utilities that output text data and add a JSON output option to them. All operating system APIs, like the /proc and /sys filesystems, should serialize their files in JSON or provide the data via an alternative API that outputs JSON.


In the meantime, I have created a tool called jc (https://github.com/kellyjonbrazil/jc) that converts the output of dozens of GNU and non-GNU commands and configuration files to JSON. Instead of everyone needing to create their own custom parsers for these common utilities and files, jc acts as a central clearinghouse of parsing libraries that just need to be written once and can be used by everyone.
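To make the “write the parser once” idea concrete, here is a toy Python sketch of the kind of parser jc centralizes, turning env-style KEY=value output into a JSON-serializable structure. This is illustrative only; jc’s real parsers are far more robust and their output schemas differ:

```python
import json

def parse_env(text: str) -> list:
    """Toy sketch of the kind of parser jc centralizes: convert the
    unstructured output of `env` into a JSON-serializable structure.
    The output schema here is illustrative, not jc's exact schema."""
    entries = []
    for line in text.splitlines():
        if "=" not in line:
            continue  # skip anything that isn't a KEY=value line
        name, _, value = line.partition("=")
        entries.append({"name": name, "value": value})
    return entries

raw = "SHELL=/bin/bash\nUSER=kbrazil\nHOME=/home/kbrazil"
print(json.dumps(parse_env(raw)))
```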


jc is now available as an Ansible filter plugin!

JC In Action

Here’s how jc can be used to make your life easier today. Let’s take that same example of grabbing an ethernet IP address from above:

$ ifconfig ens33 | grep inet | awk '{print $2}' | cut -d/ -f1 | head -n 1
192.168.71.138

And here’s how you do the same thing with jc and a CLI JSON parsing tool like jq:

$ ifconfig ens33 | jc --ifconfig | jq -r '.[].ipv4_addr'
192.168.71.138

or

$ jc ifconfig ens33 | jq -r '.[].ipv4_addr'
192.168.71.138

Here’s another example of listing the listening TCP ports on the system:

$ netstat -tln | tr -s ' ' | cut -d ' ' -f 4 | rev | cut -d : -f 1 | rev | tail -n +3
25
22

That’s a lot of text manipulation just to get a simple list of port numbers! Here’s the same thing using jc and jq:

$ netstat -tln | jc --netstat | jq '.[].local_port_num'
25
22

or

$ jc netstat -tln | jq '.[].local_port_num'
25
22

Notice how much more intuitive it is to search and compare semantically enhanced structured data vs. awkwardly parsing low-level text? Also, the JSON output can be preserved to be used by any higher-level programming language like Python or JavaScript without line parsing. This is the future, my friends!
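For example, once the output is JSON, a higher-level language can work with the fields directly instead of re-parsing text. A small Python sketch using hand-made records shaped like the jc --netstat output above (only the field names are taken from this article’s examples):

```python
import json

# Hand-made records shaped like the `jc --netstat` output above;
# only the field names are taken from the article's examples.
raw = '[{"proto": "tcp", "local_port_num": 25}, {"proto": "tcp", "local_port_num": 22}]'

records = json.loads(raw)
ports = [r["local_port_num"] for r in records]
privileged = [p for p in ports if p < 1024]  # semantic comparison, no text munging

print(ports)       # -> [25, 22]
print(privileged)  # -> [25, 22]
```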

jc currently supports the following parsers: arp, df, dig, env, free, /etc/fstab, history, /etc/hosts, ifconfig, iptables, jobs, ls, lsblk, lsmod, lsof, mount, netstat, ps, route, ss, stat, systemctl, systemctl list-jobs, systemctl list-sockets, systemctl list-unit-files, uname -a, uptime, and w.

Note: jc now supports over 100 programs and file-types.

If you have a recommendation for a command or file type that is not currently supported by jc, add it to the comments and I’ll see if I can figure out how to parse and serialize it. If you would like to contribute a parser, please feel free!

With jc, we can make the linux world a better place until the OS and GNU tools join us in the 21st century!

Hello World!

I’m Kelly Brazil and I dabble in a few interests, including music and tech, spanning network security, open source projects, songwriting, guitar, and Oxford commas. I’m particularly proud of my most popular open source project, jc, which is used in production around the world to simplify the lives of engineers. I’ve recently rekindled my love of making and performing music and had a blast performing at my 50th birthday party. I also just released a new single on the major streaming platforms called Breaking Apart.

Welcome to my blog – I’ll get the urge to write on one of these topics from time to time and I encourage open dialogue.

Where to find me:

My Music Projects:

My Open Source Projects:

Media, Mentions, and Events:

Some of my prior blog posts can be found here: