The Security Sidecar Pattern
In Part 3 of my series on Microservice Security Patterns for Kubernetes we dove into the Security Service Layer Pattern and configured a working application with micro-segmentation enforcement and deep inspection for application-layer protection. We were able to secure the application with that configuration, but, as we saw, the micro-segmentation configuration can get a bit unwieldy when you have more than a couple services.
In this post we’ll configure a Security Sidecar Pattern which will provide the same level of security but with a simpler configuration. I really like the Security Sidecar Pattern because it tightly couples the application security layer with the application without requiring any changes to the application.
This also means you can scale the application and your security together, so you don’t have to worry about scaling the security layer separately as your application needs grow. The only downside is that the application security layer (we’ll be using the ModSecurity WAF) may be overprovisioned and could waste cluster resources if not kept in check.
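If that overprovisioning worries you, one way to keep the sidecar in check is to put explicit resource requests and limits on the WAF container. A minimal sketch of what that could look like; the numbers here are illustrative placeholders, not tuned recommendations:

# Fragment of a container spec; resource values are placeholders to size from observed usage
- name: modsecurity
  image: owasp/modsecurity-crs:v3.2-modsec2-apache
  resources:
    requests:
      cpu: 100m          # guaranteed CPU share for the WAF
      memory: 128Mi      # guaranteed memory for the WAF
    limits:
      cpu: 250m          # cap so the sidecar can't starve the app container
      memory: 256Mi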
Let’s find out how the Security Sidecar Pattern works.
Sidecar where art thou?
One of the really cool things about Kubernetes is that the smallest workload unit is a Pod, and a Pod can be made up of multiple containers. Even better, these containers share the loopback network interface (127.0.0.1). This means you can communicate between containers using normal network protocols without needing to expose those ports to the rest of the cluster.
In practice, what this means is that you can deploy a reverse proxy, such as the one we have been using in Part 3, but instead of setting the origin server to the Kubernetes cluster DNS name of the service, we can just use localhost or 127.0.0.1. Pretty neat!
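To make that concrete, here is a minimal sketch of a Pod with a proxy container and an app container sharing the loopback. The image names and ports are placeholders, not the deployment we’ll build later in this post:

apiVersion: v1
kind: Pod
metadata:
  name: sidecar-demo
spec:
  containers:
  - name: proxy
    image: example/reverse-proxy:latest   # placeholder image
    ports:
    - containerPort: 80                   # the port exposed to the cluster
  - name: app
    image: example/app:latest             # placeholder image
    # the app listens on TCP 8080; the proxy reaches it at http://127.0.0.1:8080/
    # over the Pod's shared loopback interface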
Sidecar Injection
Another cool thing about Pods is that there are multiple ways to define the containers that make them up. In the most basic scenario (and the one we will be deploying in this post) you can simply define the application and the WAF container manually in the Deployment YAML.
But there are fancier ways to automatically inject a sidecar container, like the WAF, by using Mutating Webhooks. Some examples of how this can be done can be found here and here. The nice thing about automatic sidecar injection is that the developers or DevOps team can define their Deployment YAML per usual and the sidecar will be injected without them needing to change their process. Automatic application layer protection!
One more thing about automatic sidecar injection – this is how the Envoy dataplane proxy sidecar is typically injected in an Istio Service Mesh deployment. Istio has its own sidecar injection service, but you can also manually configure the Envoy sidecar if you would like.
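For a sense of what the automatic approach involves, injection is driven by a MutatingWebhookConfiguration that tells the API server to send Pod creation requests to an injector service, which patches the WAF container into the Pod spec before it is persisted. A rough sketch, with a hypothetical injector service that we do not deploy in this post:

apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: waf-sidecar-injector              # hypothetical name
webhooks:
- name: waf-injector.example.com          # hypothetical webhook name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Ignore                   # don't block Pod creation if the injector is down
  clientConfig:
    service:
      name: waf-injector                  # hypothetical Service fronting the injector
      namespace: security
      path: /mutate
    # caBundle: <base64-encoded CA cert that signed the injector's TLS cert>
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  namespaceSelector:
    matchLabels:
      waf-injection: enabled              # only inject in namespaces that opt in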
The Security Sidecar Pattern
Let’s dive in and see how to configure the Security Sidecar Pattern. We will be using the same application that we set up in Part 2, so go ahead and take a look there to refresh your memory on how things are set up. Here is the diagram:
Figure 1: Insecure Application

As demonstrated before, all microsim services can communicate with each other and there is no deep inspection implemented to block application layer attacks like SQLi. In this post, we will be implementing this sidecar.yaml deployment, which adds modsecurity reverse proxy WAF containers with the Core Rule Set as sidecars in front of the microsim services. modsecurity will perform deep inspection on the JSON/HTTP traffic and block application layer attacks.
Then we will add on a Kubernetes Network Policy to enforce segmentation between the services.
Security Sidecar Pattern Deployment Spec
We’ll immediately notice how much smaller and simpler the Security Sidecar Pattern configuration is compared to the Security Service Layer Pattern. We went from 238 lines of configuration down to 142!
Instead of creating separate security deployments and services to secure the application like we did in the Security Service Layer Pattern, we will simply add the WAF container to the same Pod as the application. We need to make sure the WAF and the application listen on different TCP ports, since they share the loopback interface, which doesn’t allow overlapping ports.
In this case, the WAF becomes the front end: it listens on behalf of the application and forwards the clean, inspected traffic to the application via the loopback interface. We only need to expose the WAF listening port to the cluster, and since we don’t want to allow bypassing the WAF, we no longer expose the application port directly.
Note: Container TCP and UDP ports are still accessible via IP within the Kubernetes cluster even if they are not explicitly configured in the deployment YAML via containerPort. To completely lock down direct access to the application TCP port so the WAF cannot be bypassed, we will need to configure Network Policy.
Figure 2: Security Sidecar Pattern

Let’s take a closer look at the spec.
www Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: www
spec:
  replicas: 3
  selector:
    matchLabels:
      app: www
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - name: modsecurity
        image: owasp/modsecurity-crs:v3.2-modsec2-apache
        ports:
        - containerPort: 80
        env:
        - name: SETPROXY
          value: "True"
        - name: PROXYLOCATION
          value: "http://127.0.0.1:8080/"
      - name: microsimserver
        image: kellybrazil/microsimserver
        env:
        - name: STATS_PORT
          value: "5000"
      - name: microsimclient
        image: kellybrazil/microsimclient
        env:
        - name: STATS_PORT
          value: "5001"
        - name: REQUEST_URLS
          value: "http://auth.default.svc.cluster.local:8080/,http://db.default.svc.cluster.local:8080/"
        - name: SEND_SQLI
          value: "True"
We see three replicas of the www Pod, each made up of the official OWASP modsecurity container from Docker Hub configured as a reverse proxy WAF listening on TCP port 80, alongside the original application containers. The microsimserver application container listening on TCP port 8080 remains unchanged. Note that it is important that the containers listen on different ports, since they share the same loopback interface in the Pod.
All requests that go to the WAF containers will be inspected and proxied to the microsimserver application container within the same Pod at http://127.0.0.1:8080/.
These WAF containers are effectively impersonating the original service so the user or application does not need to modify its configuration. One nice thing about this design is that it allows you to scale the security layer along with the application, so as you scale up the application, security scales along with it automatically.
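For example, if you put a HorizontalPodAutoscaler on the Deployment, the WAF capacity grows and shrinks with the app because both containers live in the same Pods. A sketch with illustrative thresholds (depending on your cluster version the API may be autoscaling/v2 or v2beta2, and the containers need CPU requests set for the utilization metric to work):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: www
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: www
  minReplicas: 3
  maxReplicas: 10                  # illustrative ceiling
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # add replicas when average Pod CPU passes 70%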
The microsimclient container configuration remains unchanged from the original, which is nice. This shows that you can implement the Security Sidecar Pattern with little to no application logic changes if you are careful about how you set up the ports.
Now, let’s take a look at the www Service that points to this deployment.
www Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: www
  name: www
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: www
  sessionAffinity: None
  type: LoadBalancer
Here we are just forwarding TCP port 8080 application traffic to TCP port 80 on the www Pods, since that is the port the modsecurity reverse proxy containers listen on. Since this is an externally facing service, we are using type: LoadBalancer and externalTrafficPolicy: Local just like the original Service did.
Next we’ll take a look at the internal microservices. Since the auth and db deployments and services are configured identically, we’ll just go over the db configuration.
db Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: modsecurity
        image: owasp/modsecurity-crs:v3.2-modsec2-apache
        ports:
        - containerPort: 80
        env:
        - name: SETPROXY
          value: "True"
        - name: PROXYLOCATION
          value: "http://127.0.0.1:8080/"
      - name: microsimserver
        image: kellybrazil/microsimserver
        env:
        - name: STATS_PORT
          value: "5000"
Again, we have just added the modsecurity WAF container to the Pod, listening on TCP port 80. Since this is different from the listening port of the microsimserver container, we are good to go without any changes to the app. Just like in the www Deployment, we have configured the modsecurity reverse proxy to send inspected traffic locally within the Pod to http://127.0.0.1:8080/.
Note that even though we aren’t explicitly configuring the microsimserver TCP port 8080 via containerPort in the Deployment spec, this port is still technically available on the cluster via direct IP access. To fully lock down connectivity, we will be using Network Policy later on.
db Service
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
spec:
  ports:
  - port: 8080
    targetPort: 80
  selector:
    app: db
  sessionAffinity: None
Nothing fancy here – just listening on TCP port 8080 and forwarding to port 80, which is what the modsecurity WAF containers listen on. This is an internal service, so there is no need for type: LoadBalancer or externalTrafficPolicy: Local.
Now that we understand how the Deployment and Service specs work, let’s apply them on our Kubernetes cluster.
See Part 2 for more information on setting up the cluster.
Applying the Deployments and Services
First, let’s delete the original insecure deployment in Cloud Shell if it is still running:
$ kubectl delete -f simple.yaml
Your Pods, Deployments, and Services should be empty before you proceed:
$ kubectl get pods
No resources found.
$ kubectl get deploy
No resources found.
$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.12.0.1    <none>        443/TCP   3m46s
Next, copy/paste the deployment text into a file called sidecar.yaml using vi. Then apply the deployment with kubectl:
$ kubectl create -f sidecar.yaml
deployment.apps/www created
deployment.apps/auth created
deployment.apps/db created
service/www created
service/auth created
service/db created
Testing the Deployment
Once the www service has an external IP, you can send an HTTP GET or POST request to it from Cloud Shell or your laptop:
$ kubectl get services
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)          AGE
auth         ClusterIP      10.12.7.96    <none>          8080/TCP         90m
db           ClusterIP      10.12.8.118   <none>          8080/TCP         90m
kubernetes   ClusterIP      10.12.0.1     <none>          443/TCP          93m
www          LoadBalancer   10.12.14.67   35.238.35.208   8080:32032/TCP   90m
$ curl 35.238.35.208:8080
...vME2NtSGaTBnt2zsprKdes5KKXCCAG9pk0yUr4K
Thu Jan 9 22:09:27 2020
hostname: www-5bfc744996-tdzsk
ip: 10.8.2.3
remote: 127.0.0.1
hostheader: 127.0.0.1:8080
path: /
The originating IP address is now the IP address of the local WAF in the Pod that handled the request (always 127.0.0.1, since it is a sidecar). Since the WAF is deployed as a reverse proxy, the only way to get the originating IP information is via HTTP headers, such as X-Forwarded-For (XFF). Also, the Host header has now changed, so keep this in mind if the application expects certain values in the headers.
We can do a quick check to see if the modsecurity WAF is inspecting traffic by sending an HTTP POST request to an IP address with no data or size information. This will be seen as an anomalous request and blocked:
$ curl -X POST 35.238.35.208:8080
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /
on this server.<br />
</p>
</body></html>
Excellent! Now let’s take a look at the microsim stats to see if the WAF layers are blocking the East/West SQLi attacks. Let’s open two tabs in Cloud Shell: one for shell access to a www microsimclient container and another for shell access to a db microsimserver container.
In the first tab, use kubectl to find the name of one of the www Pods and shell into the microsimclient container running in it:
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
auth-7559599f89-d8tnw   2/2     Running   0          102m
auth-7559599f89-k8qht   2/2     Running   0          102m
auth-7559599f89-wfbp4   2/2     Running   0          102m
db-59f8d84df-4kbvg      2/2     Running   0          102m
db-59f8d84df-5csh8      2/2     Running   0          102m
db-59f8d84df-ncksp      2/2     Running   0          102m
www-5bfc744996-6jbr7    3/3     Running   0          102m
www-5bfc744996-bgh9h    3/3     Running   0          102m
www-5bfc744996-tdzsk    3/3     Running   0          102m
$ kubectl exec www-5bfc744996-6jbr7 -c microsimclient -it sh
/app #
Then curl to the microsimclient stats server on localhost:5001:
/app # curl localhost:5001
{
  "time": "Thu Jan 9 22:23:25 2020",
  "runtime": 6349,
  "hostname": "www-5bfc744996-6jbr7",
  "ip": "10.8.0.4",
  "stats": {
    "Requests": 6320,
    "Sent Bytes": 6547520,
    "Received Bytes": 112275897,
    "Internet Requests": 0,
    "Attacks": 64,
    "SQLi": 64,
    "XSS": 0,
    "Directory Traversal": 0,
    "DGA": 0,
    "Malware": 0,
    "Error": 0
  },
  "config": {
    "STATS_PORT": 5001,
    "STATSD_HOST": null,
    "STATSD_PORT": 8125,
    "REQUEST_URLS": "http://auth.default.svc.cluster.local:8080/,http://db.default.svc.cluster.local:8080/",
    "REQUEST_INTERNET": false,
    "REQUEST_MALWARE": false,
    "SEND_SQLI": true,
    "SEND_DIR_TRAVERSAL": false,
    "SEND_XSS": false,
    "SEND_DGA": false,
    "REQUEST_WAIT_SECONDS": 1.0,
    "REQUEST_BYTES": 1024,
    "STOP_SECONDS": 0,
    "STOP_PADDING": false,
    "TOTAL_STOP_SECONDS": 0,
    "REQUEST_PROBABILITY": 1.0,
    "EGRESS_PROBABILITY": 0.1,
    "ATTACK_PROBABILITY": 0.01
  }
}
Here we see 64 SQLi attacks have been sent to the auth and db services in the last 6349 seconds, which lines up with the configured ATTACK_PROBABILITY of 0.01: roughly 1% of the ~6320 requests.
Now, let’s see if the attacks are getting through like they did in the insecure deployment. In the other tab, find the name of one of the db Pods and shell into the microsimserver container running in it:
$ kubectl exec db-59f8d84df-4kbvg -c microsimserver -it sh
/app #
/app # curl localhost:5000
{
  "time": "Thu Jan 9 22:39:30 2020",
  "runtime": 7316,
  "hostname": "db-59f8d84df-4kbvg",
  "ip": "10.8.0.5",
  "stats": {
    "Requests": 3659,
    "Sent Bytes": 60563768,
    "Received Bytes": 3790724,
    "Attacks": 0,
    "SQLi": 0,
    "XSS": 0,
    "Directory Traversal": 0
  },
  "config": {
    "LISTEN_PORT": 8080,
    "STATS_PORT": 5000,
    "STATSD_HOST": null,
    "STATSD_PORT": 8125,
    "RESPOND_BYTES": 16384,
    "STOP_SECONDS": 0,
    "STOP_PADDING": false,
    "TOTAL_STOP_SECONDS": 0
  }
}
In the insecure deployment we saw the SQLi value incrementing. Now that the modsecurity WAF is inspecting the East/West traffic, the SQLi attacks are no longer getting through, though we still see normal Requests, Sent Bytes, and Received Bytes incrementing.
modsecurity Logs
Now, let’s check the modsecurity logs to see how the East/West application attacks are being identified. To see the modsecurity audit log we’ll need to shell into one of the WAF containers and look at the /var/log/modsec_audit.log file:
$ kubectl exec db-59f8d84df-4kbvg -c modsecurity -it sh
# grep -C 60 sql /var/log/modsec_audit.log
<snip>
--a05a312e-A--
[09/Jan/2020:23:41:46 +0000] Xhe6OmUpgBRl4hgX8QIcmAAAAIE 10.8.0.4 50990 10.8.0.5 80
--a05a312e-B--
GET /?username=joe%40example.com&password=%3BUNION+SELECT+1%2C+version%28%29+limit+1%2C1-- HTTP/1.1
Host: db.default.svc.cluster.local:8080
User-Agent: python-requests/2.22.0
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
--a05a312e-F--
HTTP/1.1 403 Forbidden
Content-Length: 209
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
--a05a312e-E--
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access /
on this server.<br />
</p>
</body></html>
--a05a312e-H--
Message: Warning. Pattern match "(?i:(?:[\"'`](?:;?\\s*?(?:having|select|union)\\b\\s*?[^\\s]|\\s*?!\\s*?[\"'`\\w])|(?:c(?:onnection_id|urrent_user)|database)\\s*?\\([^\\)]*?|u(?:nion(?:[\\w(\\s]*?select| select @)|ser\\s*?\\([^\\)]*?)|s(?:chema\\s*?\\([^\\)]*?|elect.*?\\w?user\\()|in ..." at ARGS:password. [file "/etc/modsecurity.d/owasp-crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf"] [line "190"] [id "942190"] [msg "Detects MSSQL code execution and information gathering attempts"] [data "Matched Data: UNION SELECT found within ARGS:password: ;UNION SELECT 1, version() limit 1,1--"] [severity "CRITICAL"] [ver "OWASP_CRS/3.2.0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-sqli"] [tag "OWASP_CRS"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [tag "WASCTC/WASC-19"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE1"] [tag "PCI/6.5.2"]
Message: Warning. Pattern match "(?i:(?:^[\\W\\d]+\\s*?(?:alter\\s*(?:a(?:(?:pplication\\s*rol|ggregat)e|s(?:ymmetric\\s*ke|sembl)y|u(?:thorization|dit)|vailability\\s*group)|c(?:r(?:yptographic\\s*provider|edential)|o(?:l(?:latio|um)|nversio)n|ertificate|luster)|s(?:e(?:rv(?:ice|er)| ..." at ARGS:password. [file "/etc/modsecurity.d/owasp-crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf"] [line "471"] [id "942360"] [msg "Detects concatenated basic SQL injection and SQLLFI attempts"] [data "Matched Data: ;UNION SELECT found within ARGS:password: ;UNION SELECT 1, version() limit 1,1--"] [severity "CRITICAL"] [ver "OWASP_CRS/3.2.0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-sqli"] [tag "OWASP_CRS"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [tag "WASCTC/WASC-19"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE1"] [tag "PCI/6.5.2"]
Message: Access denied with code 403 (phase 2). Operator GE matched 5 at TX:anomaly_score. [file "/etc/modsecurity.d/owasp-crs/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "91"] [id "949110"] [msg "Inbound Anomaly Score Exceeded (Total Score: 10)"] [severity "CRITICAL"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"]
Message: Warning. Operator GE matched 5 at TX:inbound_anomaly_score. [file "/etc/modsecurity.d/owasp-crs/rules/RESPONSE-980-CORRELATION.conf"] [line "86"] [id "980130"] [msg "Inbound Anomaly Score Exceeded (Total Inbound Score: 10 - SQLI=10,XSS=0,RFI=0,LFI=0,RCE=0,PHPI=0,HTTP=0,SESS=0): individual paranoia level scores: 10, 0, 0, 0"] [tag "event-correlation"]
Apache-Error: [file "apache2_util.c"] [line 273] [level 3] [client 10.8.0.4] ModSecurity: Warning. Pattern match "(?i:(?:[\\\\"'`](?:;?\\\\\\\\s*?(?:having|select|union)\\\\\\\\b\\\\\\\\s*?[^\\\\\\\\s]|\\\\\\\\s*?!\\\\\\\\s*?[\\\\"'`\\\\\\\\w])|(?:c(?:onnection_id|urrent_user)|database)\\\\\\\\s*?\\\\\\\\([^\\\\\\\\)]*?|u(?:nion(?:[\\\\\\\\w(\\\\\\\\s]*?select| select @)|ser\\\\\\\\s*?\\\\\\\\([^\\\\\\\\)]*?)|s(?:chema\\\\\\\\s*?\\\\\\\\([^\\\\\\\\)]*?|elect.*?\\\\\\\\w?user\\\\\\\\()|in ..." at ARGS:password. [file "/etc/modsecurity.d/owasp-crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf"] [line "190"] [id "942190"] [msg "Detects MSSQL code execution and information gathering attempts"] [data "Matched Data: UNION SELECT found within ARGS:password: ;UNION SELECT 1, version() limit 1,1--"] [severity "CRITICAL"] [ver "OWASP_CRS/3.2.0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-sqli"] [tag "OWASP_CRS"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [tag "WASCTC/WASC-19"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE1"] [tag "PCI/6.5.2"] [hostname "db.default.svc.cluster.local"] [uri "/"] [unique_id "Xhe6OmUpgBRl4hgX8QIcmAAAAIE"]
Apache-Error: [file "apache2_util.c"] [line 273] [level 3] [client 10.8.0.4] ModSecurity: Warning. Pattern match "(?i:(?:^[\\\\\\\\W\\\\\\\\d]+\\\\\\\\s*?(?:alter\\\\\\\\s*(?:a(?:(?:pplication\\\\\\\\s*rol|ggregat)e|s(?:ymmetric\\\\\\\\s*ke|sembl)y|u(?:thorization|dit)|vailability\\\\\\\\s*group)|c(?:r(?:yptographic\\\\\\\\s*provider|edential)|o(?:l(?:latio|um)|nversio)n|ertificate|luster)|s(?:e(?:rv(?:ice|er)| ..." at ARGS:password. [file "/etc/modsecurity.d/owasp-crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf"] [line "471"] [id "942360"] [msg "Detects concatenated basic SQL injection and SQLLFI attempts"] [data "Matched Data: ;UNION SELECT found within ARGS:password: ;UNION SELECT 1, version() limit 1,1--"] [severity "CRITICAL"] [ver "OWASP_CRS/3.2.0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-sqli"] [tag "OWASP_CRS"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [tag "WASCTC/WASC-19"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE1"] [tag "PCI/6.5.2"] [hostname "db.default.svc.cluster.local"] [uri "/"] [unique_id "Xhe6OmUpgBRl4hgX8QIcmAAAAIE"]
Apache-Error: [file "apache2_util.c"] [line 273] [level 3] [client 10.8.0.4] ModSecurity: Access denied with code 403 (phase 2). Operator GE matched 5 at TX:anomaly_score. [file "/etc/modsecurity.d/owasp-crs/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "91"] [id "949110"] [msg "Inbound Anomaly Score Exceeded (Total Score: 10)"] [severity "CRITICAL"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"] [hostname "db.default.svc.cluster.local"] [uri "/"] [unique_id "Xhe6OmUpgBRl4hgX8QIcmAAAAIE"]
Apache-Error: [file "apache2_util.c"] [line 273] [level 3] [client 10.8.0.4] ModSecurity: Warning. Operator GE matched 5 at TX:inbound_anomaly_score. [file "/etc/modsecurity.d/owasp-crs/rules/RESPONSE-980-CORRELATION.conf"] [line "86"] [id "980130"] [msg "Inbound Anomaly Score Exceeded (Total Inbound Score: 10 - SQLI=10,XSS=0,RFI=0,LFI=0,RCE=0,PHPI=0,HTTP=0,SESS=0): individual paranoia level scores: 10, 0, 0, 0"] [tag "event-correlation"] [hostname "db.default.svc.cluster.local"] [uri "/"] [unique_id "Xhe6OmUpgBRl4hgX8QIcmAAAAIE"]
Action: Intercepted (phase 2)
Apache-Handler: proxy-server
Stopwatch: 1578613306195047 3522 (- - -)
Stopwatch2: 1578613306195047 3522; combined=2944, p1=904, p2=1734, p3=0, p4=0, p5=306, sr=353, sw=0, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.9.3 (http://www.modsecurity.org/); OWASP_CRS/3.2.0.
Server: Apache
Engine-Mode: "ENABLED"
--a05a312e-Z--
Here we see modsecurity has blocked and logged the East/West SQLi attack from one of the www Pods to a db Pod. Sweet!
Yet, we’re still not done. Even though we are now inspecting and protecting traffic at the application layer, we are not yet enforcing micro-segmentation between the services. That means that, even with the WAFs in place, any auth Pod can communicate with any db Pod. We can demonstrate this by opening a shell on any auth microsimserver container and attempting to send a request to a db Pod from it:
/app # curl 'http://db:8080'
...JsHT4A8GK8H0Am47jSG7MppM3o7BOlTrRZl4EEA9bNzsjND
Thu Jan 9 23:57:54 2020
hostname: db-59f8d84df-5csh8
ip: 10.8.2.5
remote: 127.0.0.1
hostheader: 127.0.0.1:8080
path: /
Even worse, if I know the IP address of the db Pod, I can bypass the WAF entirely and send a successful SQLi attack:
/app # curl 'http://10.8.2.5:8080/?username=joe%40example.com&password=%3BUNION+SELECT+1%2C+version%28%29+limit+1%2C1--'
...7Z7Kw2JxEgXipBnDZyyoZI4TK3RswBuZ509y2WY1wJTsERJFoRW6ZYY1QiA
Fri Jan 10 00:01:37 2020
hostname: db-59f8d84df-5csh8
ip: 10.8.2.5
remote: 10.8.2.4
hostheader: 10.8.2.5:8080
path: /?username=joe%40example.com&password=%3BUNION+SELECT+1%2C+version%28%29+limit+1%2C1--
Not good! Now, let’s add Network Policy to provide micro-segmentation and button this thing up.
Adding Micro-segmentation
Here is a simple Network Policy spec that will control the ingress to each internal service. I tried to keep the rules simple, but in a production deployment a tighter policy would likely be desired. For example, you would probably also want to include Egress policies.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: auth-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: auth
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: www
    ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: www
    ports:
    - protocol: TCP
      port: 80
Another big difference here is the simplicity of the Network Policy when compared to the Security Service Layer Pattern. We went from 104 lines of configuration down to roughly 40.
This policy says:
- On the auth Pods, only accept traffic from the www Pods that is destined to TCP port 80
- On the db Pods, only accept traffic from the www Pods that is destined to TCP port 80
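As mentioned above, a production deployment would probably also restrict egress. A sketch of what that could look like for the www Pods, limiting them to the auth and db Pods on TCP port 80 plus DNS; treat this as illustrative rather than a drop-in addition to the policy above:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: www-egress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: www
  policyTypes:
  - Egress
  egress:
  - to:
    - podSelector:
        matchLabels:
          app: auth
    - podSelector:
        matchLabels:
          app: db
    ports:
    - protocol: TCP
      port: 80
  - ports:                       # allow DNS lookups so the service names still resolve
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53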
Let’s try it out. Copy the ingress Network Policy text to a file named sidecar-network-policy.yaml using vi and apply the Network Policy to the cluster with kubectl:
$ kubectl create -f sidecar-network-policy.yaml
networkpolicy.networking.k8s.io/auth-ingress created
networkpolicy.networking.k8s.io/db-ingress created
Next, let’s try that simulated SQLi attack again from auth to db:
$ kubectl exec auth-7559599f89-d8tnw -c microsimserver -it sh
/app #
/app # curl 'http://10.8.2.5:8080/?username=joe%40example.com&password=%3BUNION+SELECT+1%2C+version%28%29+limit+1%2C1--'
curl: (7) Failed to connect to 10.8.2.5 port 8080: Operation timed out
Good stuff – no matter how you try to connect from auth to db, it will now fail.
Finally, let’s ensure that the rest of the application is still working correctly by checking the db logs. If we are still getting legitimate requests, then we should be good to go:
$ kubectl logs -f db-59f8d84df-4kbvg microsimserver
<snip>
127.0.0.1 - - [10/Jan/2020 00:27:57] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [10/Jan/2020 00:27:58] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [10/Jan/2020 00:27:59] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [10/Jan/2020 00:28:02] "POST / HTTP/1.1" 200 -
127.0.0.1 - - [10/Jan/2020 00:28:04] "POST / HTTP/1.1" 200 -
{"Total": {"Requests": 6987, "Sent Bytes": 115648879, "Received Bytes": 7235424, "Attacks": 1, "SQLi": 1, "XSS": 0, "Directory Traversal": 0}, "Last 30 Seconds": {"Requests": 15, "Sent Bytes": 248280, "Received Bytes": 15540, "Attacks": 0, "SQLi": 0, "XSS": 0, "Directory Traversal": 0}}
127.0.0.1 - - [10/Jan/2020 00:28:04] "POST / HTTP/1.1" 200 -
The service is still getting requests with the Network Policy in place. We can even see the test SQLi request we sent earlier when we bypassed the WAF, but no SQLi attacks are seen since the Network Policy was applied.
Conclusion
We have successfully secured intra-cluster service communication (East/West traffic) with micro-segmentation and a WAF using the Security Sidecar Pattern. This pattern is great for quickly and easily adding security to your cluster without creating a lot of overhead for the developers or DevOps teams, and the configuration is smaller and simpler than the Security Service Layer Pattern. It is also possible to automate the injection of the security sidecar with Mutating Webhooks. The nice thing about this pattern is that the security layer scales alongside the application automatically, though one downside is that you could waste cluster resources if the WAF containers are not being fully utilized.
What’s next?
My goal is to demonstrate the Service Mesh Security Plugin Pattern in a future post. There are a couple of commercial and open source projects that provide this option, but it’s still early days in this space. In my opinion this pattern makes the most sense since it tightly integrates security with the cluster and cleanly provides both micro-segmentation and application layer security as code, which is the direction everything is moving.
I’m also looking at implementing a Security Sidecar Pattern in conjunction with an Istio Service Mesh. This is effectively a Sidecar on Sidecar Pattern (the Envoy container and WAF container are both added to the application Pod). We’ll see how that goes, and if successful I’ll write that one up as well.
I hope this series has been helpful and if you have suggestions for future topics, please feel free to let me know!
Next in the series: Part 5
Hi, I really like this post and have deployed a variation of the sidecar pattern. One thing I noticed is that changing the modsec paranoia level from 1 to 2 (or 3, or whatever) did not seem to take. I noticed this because after setting up the WAF sidecar, I tested it by attacking it with an OWASP ZAP container:
docker run -v $(pwd):/zap/wrk/:rw --name zap -t owasp/zap2docker-weekly:latest zap-baseline.py -t "http://dev.my.url.com" -c generated_file.conf -r shiny_report_internet.html
That last bit of the command outputs a report of the vulnerabilities found in the attack. No matter where I set the paranoia level in the kubectl deploy file, the report results are exactly the same; it looks very much like it’s always executing paranoia level 1 no matter what I set it to (and I’m not setting EXECUTING_PARANOIA at all).
This is very different from what this report looks like when I attack this kind of set up locally or in an EC2, or anywhere else besides a kubernetes cluster.
I verified the paranoia level was set up properly by checking crs-setup.conf inside the running modsec container:
grep --color -i paranoia /etc/modsecurity.d/owasp-crs/crs-setup.conf
and it confirms the paranoia level is getting picked up:
setvar:tx.paranoia_level=3"
Not sure what the issue could be, but I suspect it’s something to do with how kubectl executes the deployment.
I’m wondering if you’d noticed this yourself. If not, and you felt like testing it out on your end, I’d be interested to see if you could repeat the results.