The Security Service Layer Pattern
In Part 1 of this series on microservices security patterns for Kubernetes, we went over three design patterns that enable micro-segmentation and deep inspection of the application and API traffic between microservices:
- Security Service Layer Pattern
- Security Sidecar Pattern
- Service Mesh Security Plugin Pattern
In Part 2 we set up a simple, insecure deployment and demonstrated application layer attacks and the lack of micro-segmentation. In this post we will take that insecure deployment and implement a Security Service Layer Pattern to block application layer attacks and enforce strict segmentation between services.
The Insecure Deployment
Let’s take a quick look at the insecure deployment from Part 2:
Figure 1: Insecure Deployment

As demonstrated before, all `microsim` services can communicate with each other, and there is no deep inspection in place to block application layer attacks like SQLi. In this post, we will implement the `servicelayer.yaml` deployment, which adds `modsecurity` reverse proxy WAF Pods with the Core Rule Set in front of the `microsim` services. `modsecurity` will perform deep inspection on the JSON/HTTP traffic and block application layer attacks.
Then we will add on a Kubernetes Network Policy to enforce segmentation between the services. In the end, the deployment will look like this:
Figure 2: Security Service Layer Pattern

Security Service Layer Deployment Spec
You’ll notice that each original service has been split into two services: a `modsecurity` WAF service (in orange) and the original service (in blue). Let’s take a look at the deployment YAML file to understand how this pattern works.
The Security Service Layer Pattern adds quite a few lines to our deployment file, but they are simple additions. We’ll just need to keep our port numbers and service names straight as we add the WAF layers into the deployment.
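Since the main failure mode in this pattern is a mismatched port or name, a tiny sanity check can pay for itself. This sketch is my own addition, not part of the deployment; the wiring table is transcribed from the specs in this post:

```python
# Expected wiring for each Service: the Service's targetPort must match
# a containerPort on the Pods behind it (transcribed from the YAML specs).
wiring = {
    "www":        {"port": 80,   "targetPort": 80,   "containerPort": 80},
    "wwworigin":  {"port": 8080, "targetPort": 8080, "containerPort": 8080},
    "auth":       {"port": 80,   "targetPort": 80,   "containerPort": 80},
    "authorigin": {"port": 8080, "targetPort": 8080, "containerPort": 8080},
    "db":         {"port": 80,   "targetPort": 80,   "containerPort": 80},
    "dborigin":   {"port": 8080, "targetPort": 8080, "containerPort": 8080},
}

def miswired(wiring):
    """Return the names of services whose targetPort does not match the
    containerPort of the Pods they select."""
    return [name for name, w in wiring.items()
            if w["targetPort"] != w["containerPort"]]
```

An empty result from `miswired(wiring)` means every Service forwards to a port its Pods actually listen on.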
Let’s take a closer look at the components that have changed from the insecure deployment.
`www` Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: www
spec:
  replicas: 3
  selector:
    matchLabels:
      app: www
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - name: modsecurity
        image: owasp/modsecurity-crs:v3.2-modsec2-apache
        ports:
        - containerPort: 80
        env:
        - name: SETPROXY
          value: "True"
        - name: PROXYLOCATION
          value: "http://wwworigin.default.svc.cluster.local:8080/"
```
We see three replicas of the official OWASP `modsecurity` container (available on Docker Hub) configured as a reverse proxy WAF listening on TCP port 80. All requests that go to any of these WAF instances will be inspected and proxied to the origin service, `wwworigin`, on TCP port 8080. `wwworigin` is the original Service and Deployment from the insecure deployment.
These WAF containers effectively impersonate the original service, so the user or application does not need to modify its configuration. One nice thing about this design is that it allows you to scale the security layer independently of the application. For instance, you might only require two `modsecurity` Pods to secure 10 of your application Pods.
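You could even let Kubernetes scale the WAF layer on its own. As a hypothetical sketch (not part of this series' deployment files, and it assumes metrics-server and CPU requests are configured on the WAF containers), a HorizontalPodAutoscaler could target just the `www` WAF Deployment while leaving `wwworigin` alone:

```yaml
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: www-waf
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: www          # scales only the WAF layer, not wwworigin
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70
```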
Now, let’s take a look at the `www` Service that points to this WAF Deployment.

`www` Service

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: www
  name: www
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: www
  sessionAffinity: None
  type: LoadBalancer
```
Nothing too fancy here – just forwarding TCP port 80 application traffic to TCP port 80 on the `modsecurity` WAF Pods, since that is the port they listen on. Since this is an externally facing Service, we are using `type: LoadBalancer` and `externalTrafficPolicy: Local`, just like the original Service did.
Next, let’s check out the `wwworigin` Deployment spec, where the original application Pods are defined.

`wwworigin` Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wwworigin
spec:
  replicas: 3
  selector:
    matchLabels:
      app: wwworigin
  template:
    metadata:
      labels:
        app: wwworigin
    spec:
      containers:
      - name: microsimserver
        image: kellybrazil/microsimserver
        env:
        - name: STATS_PORT
          value: "5000"
        ports:
        - containerPort: 8080
      - name: microsimclient
        image: kellybrazil/microsimclient
        env:
        - name: REQUEST_URLS
          value: "http://auth.default.svc.cluster.local:80,http://db.default.svc.cluster.local:80"
        - name: SEND_SQLI
          value: "True"
        - name: STATS_PORT
          value: "5001"
```
There’s a lot going on here, but it’s nearly identical to the insecure deployment. The only changes are the name of the Deployment, from `www` to `wwworigin`, and the `REQUEST_URLS` destination ports, from 8080 to 80. This is because the `modsecurity` WAF containers listen on port 80, and they are now the true front end to the `auth` and `db` services.
Next, let’s take a look at the `wwworigin` Service spec.

`wwworigin` Service

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: wwworigin
  name: wwworigin
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: wwworigin
  sessionAffinity: None
```
The only changes to the original deployment here are the name, from `www` to `wwworigin`, and the `port`, from 80 to 8080, since the origin Pods are now internal and not directly exposed to the internet.
Now we need to repeat this process for the `auth` and `db` services. Since they are configured the same way, we will only go over the `db` Deployment and Service. Remember, there is now a `db` (WAF) and a `dborigin` (application) Deployment and Service that we need to define.

`db` Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: modsecurity
        image: owasp/modsecurity-crs:v3.2-modsec2-apache
        ports:
        - containerPort: 80
        env:
        - name: SETPROXY
          value: "True"
        - name: PROXYLOCATION
          value: "http://dborigin.default.svc.cluster.local:8080/"
```
This is essentially the same as the `www` Deployment, except we are proxying to `dborigin`. The WAF containers listen on port 80 and proxy the traffic to port 8080 on the origin application Service.

`db` Service

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: db
  sessionAffinity: None
```
Again, nothing fancy here – just listening on TCP port 80, which is what the `modsecurity` WAF containers listen on. This is an internal Service, so there is no need for `type: LoadBalancer` or `externalTrafficPolicy: Local`.
Finally, let’s take a look at the `dborigin` Deployment and Service.

`dborigin` Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: dborigin
spec:
  replicas: 3
  selector:
    matchLabels:
      app: dborigin
  template:
    metadata:
      labels:
        app: dborigin
    spec:
      containers:
      - name: microsimserver
        image: kellybrazil/microsimserver
        ports:
        - containerPort: 8080
        env:
        - name: STATS_PORT
          value: "5000"
```
This Deployment is essentially the same as the original, except the name has been changed from `db` to `dborigin`.

`dborigin` Service

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: dborigin
  name: dborigin
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: dborigin
  sessionAffinity: None
```
Again, the only change from the original here is the name, from `db` to `dborigin`.
Now that we understand how the Deployment and Service specs work, let’s apply them on our Kubernetes cluster.
See Part 2 for more information on setting up the cluster.
Applying the Deployments and Services
First, let’s delete the original insecure deployment in Cloud Shell if it is still running:
```
$ kubectl delete -f simple.yaml
```
Your Pods, Deployments, and Services should be empty before you proceed:
```
$ kubectl get pods
No resources found.

$ kubectl get deploy
No resources found.

$ kubectl get services
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.12.0.1    <none>        443/TCP   3m46s
```
Next, copy/paste the deployment text into a file called `servicelayer.yaml` using `vi`. Then apply the deployment with `kubectl`:
```
$ kubectl apply -f servicelayer.yaml
deployment.apps/www created
deployment.apps/wwworigin created
deployment.apps/auth created
deployment.apps/authorigin created
deployment.apps/db created
deployment.apps/dborigin created
service/www created
service/auth created
service/db created
service/wwworigin created
service/authorigin created
service/dborigin created
```
Testing the Deployment
Once the `www` Service has an external IP, you can send an HTTP GET or POST request to it from Cloud Shell or your laptop:
```
$ kubectl get services
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
auth         ClusterIP      10.12.14.41    <none>        80/TCP         52s
authorigin   ClusterIP      10.12.5.222    <none>        8080/TCP       52s
db           ClusterIP      10.12.9.224    <none>        80/TCP         52s
dborigin     ClusterIP      10.12.13.80    <none>        8080/TCP       51s
kubernetes   ClusterIP      10.12.0.1      <none>        443/TCP        7m43s
www          LoadBalancer   10.12.13.193   34.66.99.16   80:30394/TCP   52s
wwworigin    ClusterIP      10.12.6.122    <none>        8080/TCP       52s

$ curl 34.66.99.16
...o7yXXg70Olfu2MvVsm9kos8ksEXyzX4oYnZ7wQh29FaqSF
Thu Dec 19 00:58:15 2019
hostname: wwworigin-6c8fb48f79-frmk9
ip: 10.8.1.9
remote: 10.8.0.7
hostheader: wwworigin.default.svc.cluster.local:8080
path: /
```
You can probably already see some interesting side effects of this deployment. The originating IP address is now the IP address of the WAF that handled the request (10.8.0.7 in this case). Since the WAF is deployed as a reverse proxy, the only way to get the originating IP information is via HTTP headers, such as `X-Forwarded-For` (XFF). Also, the `Host` header has changed, so keep this in mind if the application expects certain values in the headers.
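If a backend does need the real client address, it has to parse the forwarding headers itself. Here is a minimal sketch of that logic, assuming the proxies append to `X-Forwarded-For` as Apache's mod_proxy does; the helper and trust model are illustrative, not part of microsim:

```python
def client_ip(headers, remote_addr, trusted_proxies):
    """Return the original client IP for a request that may have passed
    through one or more trusted reverse proxies.

    X-Forwarded-For is a comma-separated list: the left-most entry is the
    original client, and each proxy appends the peer address it received
    the request from. Only trust the header when the direct peer
    (remote_addr) is a known proxy, since clients can forge it.
    """
    if remote_addr not in trusted_proxies:
        return remote_addr  # direct connection; header can't be trusted
    xff = headers.get("X-Forwarded-For", "")
    hops = [h.strip() for h in xff.split(",") if h.strip()]
    # Walk right-to-left, skipping our own trusted proxies.
    for hop in reversed(hops):
        if hop not in trusted_proxies:
            return hop
    return remote_addr
```

For example, with the WAF Pod at 10.8.0.7 as the trusted proxy, `client_ip({"X-Forwarded-For": "203.0.113.7, 10.8.0.7"}, "10.8.0.7", {"10.8.0.7"})` recovers the external client 203.0.113.7.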
We can do a quick check to see if the `modsecurity` WAF is inspecting traffic by sending an HTTP POST request with no data or size information. This will be seen as an anomalous request and blocked:

```
$ curl -X POST http://34.66.99.16
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access / on this server.<br />
</p>
</body></html>
```
That looks good! Now let’s take a look at the `microsim` stats to see if the WAF layers are blocking the East/West SQLi attacks. Let’s open two tabs in Cloud Shell: one for shell access to a `wwworigin` container and another for shell access to a `dborigin` container.
In the first tab, use `kubectl` to find the name of one of the `wwworigin` Pods and shell into the `microsimclient` container running in it:

```
$ kubectl get pods
NAME                          READY   STATUS    RESTARTS   AGE
auth-865675dd7f-4nld7         1/1     Running   0          23m
auth-865675dd7f-7xsks         1/1     Running   0          23m
auth-865675dd7f-lzdzg         1/1     Running   0          23m
authorigin-5f6b795dcd-47gwn   1/1     Running   0          23m
authorigin-5f6b795dcd-r5lr2   1/1     Running   0          23m
authorigin-5f6b795dcd-xb68n   1/1     Running   0          23m
db-dc6f6f5f9-b2j2f            1/1     Running   0          23m
db-dc6f6f5f9-kb5q9            1/1     Running   0          23m
db-dc6f6f5f9-wmj4n            1/1     Running   0          23m
dborigin-7dc8d69f86-6mj2d     1/1     Running   0          23m
dborigin-7dc8d69f86-bvpdn     1/1     Running   0          23m
dborigin-7dc8d69f86-n42vg     1/1     Running   0          23m
www-7cdc675f9-bhrhp           1/1     Running   0          23m
www-7cdc675f9-dldhq           1/1     Running   0          23m
www-7cdc675f9-rlqwv           1/1     Running   0          23m
wwworigin-6c8fb48f79-9tq5t    2/2     Running   0          23m
wwworigin-6c8fb48f79-frmk9    2/2     Running   0          23m
wwworigin-6c8fb48f79-tltzd    2/2     Running   0          23m

$ kubectl exec wwworigin-6c8fb48f79-9tq5t -c microsimclient -it sh
/app #
```
Then `curl` to the `microsimclient` stats server on localhost:5001:

```
/app # curl localhost:5001
{
  "time": "Thu Dec 19 01:26:24 2019",
  "runtime": 1855,
  "hostname": "wwworigin-6c8fb48f79-9tq5t",
  "ip": "10.8.0.10",
  "stats": {
    "Requests": 1848,
    "Sent Bytes": 1914528,
    "Received Bytes": 30650517,
    "Internet Requests": 0,
    "Attacks": 18,
    "SQLi": 18,
    "XSS": 0,
    "Directory Traversal": 0,
    "DGA": 0,
    "Malware": 0,
    "Error": 0
  },
  "config": {
    "STATS_PORT": 5001,
    "STATSD_HOST": null,
    "STATSD_PORT": 8125,
    "REQUEST_URLS": "http://auth.default.svc.cluster.local:80,http://db.default.svc.cluster.local:80",
    "REQUEST_INTERNET": false,
    "REQUEST_MALWARE": false,
    "SEND_SQLI": true,
    "SEND_DIR_TRAVERSAL": false,
    "SEND_XSS": false,
    "SEND_DGA": false,
    "REQUEST_WAIT_SECONDS": 1.0,
    "REQUEST_BYTES": 1024,
    "STOP_SECONDS": 0,
    "STOP_PADDING": false,
    "TOTAL_STOP_SECONDS": 0,
    "REQUEST_PROBABILITY": 1.0,
    "EGRESS_PROBABILITY": 0.1,
    "ATTACK_PROBABILITY": 0.01
  }
}
```
Here we see that 18 SQLi attacks have been sent to the `auth` and `db` services in the last 1855 seconds.
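Those counters can be cross-checked against the client's config. A quick back-of-the-envelope calculation (values transcribed from the stats output above; the snippet itself is my own, not part of microsim) confirms the send rate and attack mix:

```python
import json

# Condensed excerpt of the microsimclient stats shown above
stats_json = '{"runtime": 1855, "stats": {"Requests": 1848, "Attacks": 18}}'

s = json.loads(stats_json)
rate = s["stats"]["Requests"] / s["runtime"]                    # requests per second
attack_ratio = s["stats"]["Attacks"] / s["stats"]["Requests"]   # fraction that are attacks

# REQUEST_WAIT_SECONDS=1.0 predicts ~1 req/s; ATTACK_PROBABILITY=0.01 predicts ~1%
```

Both values land where the config says they should: roughly one request per second, about 1% of them simulated attacks.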
Now, let’s see if the attacks are getting through like they did in the insecure deployment. In the other tab, find the name of one of the `dborigin` Pods and shell into the `microsimserver` container running in it:

```
$ kubectl exec dborigin-7dc8d69f86-6mj2d -c microsimserver -it sh
/app #
```
Then `curl` to the `microsimserver` stats server on localhost:5000:

```
/app # curl localhost:5000
{
  "time": "Thu Dec 19 01:29:00 2019",
  "runtime": 2013,
  "hostname": "dborigin-7dc8d69f86-6mj2d",
  "ip": "10.8.2.10",
  "stats": {
    "Requests": 1009,
    "Sent Bytes": 16733599,
    "Received Bytes": 1045324,
    "Attacks": 0,
    "SQLi": 0,
    "XSS": 0,
    "Directory Traversal": 0
  },
  "config": {
    "LISTEN_PORT": 8080,
    "STATS_PORT": 5000,
    "STATSD_HOST": null,
    "STATSD_PORT": 8125,
    "RESPOND_BYTES": 16384,
    "STOP_SECONDS": 0,
    "STOP_PADDING": false,
    "TOTAL_STOP_SECONDS": 0
  }
}
```
Remember that in the insecure deployment we saw the SQLi value incrementing. Now that the `modsecurity` WAF is inspecting the East/West traffic, the SQLi attacks are no longer getting through, though we still see normal `Requests`, `Sent Bytes`, and `Received Bytes` incrementing.
`modsecurity` Logs
Let’s check the `modsecurity` logs to see how the East/West application attacks are being identified. To see the `modsecurity` audit log, we’ll need to shell into one of the WAF containers and look at the `/var/log/modsec_audit.log` file:
```
$ kubectl exec db-dc6f6f5f9-b2j2f -it sh
/app # grep -C 60 sql /var/log/modsec_audit.log
<snip>
--fa628b64-A--
[19/Dec/2019:03:06:44 +0000] XfrpRArFgedF@mTDKh9QvAAAAI4 10.8.1.9 60612 10.8.2.9 80
--fa628b64-B--
GET /?username=joe%40example.com&password=%3BUNION+SELECT+1%2C+version%28%29+limit+1%2C1-- HTTP/1.1
Host: db.default.svc.cluster.local
User-Agent: python-requests/2.22.0
Accept-Encoding: gzip, deflate
Accept: */*
Connection: keep-alive
--fa628b64-F--
HTTP/1.1 403 Forbidden
Content-Length: 209
Keep-Alive: timeout=5, max=100
Connection: Keep-Alive
Content-Type: text/html; charset=iso-8859-1
--fa628b64-E--
<!DOCTYPE HTML PUBLIC "-//IETF//DTD HTML 2.0//EN">
<html><head>
<title>403 Forbidden</title>
</head><body>
<h1>Forbidden</h1>
<p>You don't have permission to access / on this server.<br />
</p>
</body></html>
--fa628b64-H--
Message: Warning. Pattern match "(?i:(?:[\"'`](?:;?\\s*?(?:having|select|union)\\b\\s*?[^\\s]|\\s*?!\\s*?[\"'`\\w])|(?:c(?:onnection_id|urrent_user)|database)\\s*?\\([^\\)]*?|u(?:nion(?:[\\w(\\s]*?select| select @)|ser\\s*?\\([^\\)]*?)|s(?:chema\\s*?\\([^\\)]*?|elect.*?\\w?user\\()|in ..." at ARGS:password. [file "/etc/modsecurity.d/owasp-crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf"] [line "190"] [id "942190"] [msg "Detects MSSQL code execution and information gathering attempts"] [data "Matched Data: UNION SELECT found within ARGS:password: ;UNION SELECT 1, version() limit 1,1--"] [severity "CRITICAL"] [ver "OWASP_CRS/3.2.0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-sqli"] [tag "OWASP_CRS"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [tag "WASCTC/WASC-19"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE1"] [tag "PCI/6.5.2"]
Message: Warning. Pattern match "(?i:(?:^[\\W\\d]+\\s*?(?:alter\\s*(?:a(?:(?:pplication\\s*rol|ggregat)e|s(?:ymmetric\\s*ke|sembl)y|u(?:thorization|dit)|vailability\\s*group)|c(?:r(?:yptographic\\s*provider|edential)|o(?:l(?:latio|um)|nversio)n|ertificate|luster)|s(?:e(?:rv(?:ice|er)| ..." at ARGS:password. [file "/etc/modsecurity.d/owasp-crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf"] [line "471"] [id "942360"] [msg "Detects concatenated basic SQL injection and SQLLFI attempts"] [data "Matched Data: ;UNION SELECT found within ARGS:password: ;UNION SELECT 1, version() limit 1,1--"] [severity "CRITICAL"] [ver "OWASP_CRS/3.2.0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-sqli"] [tag "OWASP_CRS"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [tag "WASCTC/WASC-19"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE1"] [tag "PCI/6.5.2"]
Message: Access denied with code 403 (phase 2). Operator GE matched 5 at TX:anomaly_score. [file "/etc/modsecurity.d/owasp-crs/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "91"] [id "949110"] [msg "Inbound Anomaly Score Exceeded (Total Score: 10)"] [severity "CRITICAL"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"]
Message: Warning. Operator GE matched 5 at TX:inbound_anomaly_score. [file "/etc/modsecurity.d/owasp-crs/rules/RESPONSE-980-CORRELATION.conf"] [line "86"] [id "980130"] [msg "Inbound Anomaly Score Exceeded (Total Inbound Score: 10 - SQLI=10,XSS=0,RFI=0,LFI=0,RCE=0,PHPI=0,HTTP=0,SESS=0): individual paranoia level scores: 10, 0, 0, 0"] [tag "event-correlation"]
Apache-Error: [file "apache2_util.c"] [line 273] [level 3] [client 10.8.1.9] ModSecurity: Warning. Pattern match "(?i:(?:[\\\\"'`](?:;?\\\\\\\\s*?(?:having|select|union)\\\\\\\\b\\\\\\\\s*?[^\\\\\\\\s]|\\\\\\\\s*?!\\\\\\\\s*?[\\\\"'`\\\\\\\\w])|(?:c(?:onnection_id|urrent_user)|database)\\\\\\\\s*?\\\\\\\\([^\\\\\\\\)]*?|u(?:nion(?:[\\\\\\\\w(\\\\\\\\s]*?select| select @)|ser\\\\\\\\s*?\\\\\\\\([^\\\\\\\\)]*?)|s(?:chema\\\\\\\\s*?\\\\\\\\([^\\\\\\\\)]*?|elect.*?\\\\\\\\w?user\\\\\\\\()|in ..." at ARGS:password. [file "/etc/modsecurity.d/owasp-crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf"] [line "190"] [id "942190"] [msg "Detects MSSQL code execution and information gathering attempts"] [data "Matched Data: UNION SELECT found within ARGS:password: ;UNION SELECT 1, version() limit 1,1--"] [severity "CRITICAL"] [ver "OWASP_CRS/3.2.0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-sqli"] [tag "OWASP_CRS"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [tag "WASCTC/WASC-19"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE1"] [tag "PCI/6.5.2"] [hostname "db.default.svc.cluster.local"] [uri "/"] [unique_id "XfrpRArFgedF@mTDKh9QvAAAAI4"]
Apache-Error: [file "apache2_util.c"] [line 273] [level 3] [client 10.8.1.9] ModSecurity: Warning. Pattern match "(?i:(?:^[\\\\\\\\W\\\\\\\\d]+\\\\\\\\s*?(?:alter\\\\\\\\s*(?:a(?:(?:pplication\\\\\\\\s*rol|ggregat)e|s(?:ymmetric\\\\\\\\s*ke|sembl)y|u(?:thorization|dit)|vailability\\\\\\\\s*group)|c(?:r(?:yptographic\\\\\\\\s*provider|edential)|o(?:l(?:latio|um)|nversio)n|ertificate|luster)|s(?:e(?:rv(?:ice|er)| ..." at ARGS:password. [file "/etc/modsecurity.d/owasp-crs/rules/REQUEST-942-APPLICATION-ATTACK-SQLI.conf"] [line "471"] [id "942360"] [msg "Detects concatenated basic SQL injection and SQLLFI attempts"] [data "Matched Data: ;UNION SELECT found within ARGS:password: ;UNION SELECT 1, version() limit 1,1--"] [severity "CRITICAL"] [ver "OWASP_CRS/3.2.0"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-sqli"] [tag "OWASP_CRS"] [tag "OWASP_CRS/WEB_ATTACK/SQL_INJECTION"] [tag "WASCTC/WASC-19"] [tag "OWASP_TOP_10/A1"] [tag "OWASP_AppSensor/CIE1"] [tag "PCI/6.5.2"] [hostname "db.default.svc.cluster.local"] [uri "/"] [unique_id "XfrpRArFgedF@mTDKh9QvAAAAI4"]
Apache-Error: [file "apache2_util.c"] [line 273] [level 3] [client 10.8.1.9] ModSecurity: Access denied with code 403 (phase 2). Operator GE matched 5 at TX:anomaly_score. [file "/etc/modsecurity.d/owasp-crs/rules/REQUEST-949-BLOCKING-EVALUATION.conf"] [line "91"] [id "949110"] [msg "Inbound Anomaly Score Exceeded (Total Score: 10)"] [severity "CRITICAL"] [tag "application-multi"] [tag "language-multi"] [tag "platform-multi"] [tag "attack-generic"] [hostname "db.default.svc.cluster.local"] [uri "/"] [unique_id "XfrpRArFgedF@mTDKh9QvAAAAI4"]
Apache-Error: [file "apache2_util.c"] [line 273] [level 3] [client 10.8.1.9] ModSecurity: Warning. Operator GE matched 5 at TX:inbound_anomaly_score. [file "/etc/modsecurity.d/owasp-crs/rules/RESPONSE-980-CORRELATION.conf"] [line "86"] [id "980130"] [msg "Inbound Anomaly Score Exceeded (Total Inbound Score: 10 - SQLI=10,XSS=0,RFI=0,LFI=0,RCE=0,PHPI=0,HTTP=0,SESS=0): individual paranoia level scores: 10, 0, 0, 0"] [tag "event-correlation"] [hostname "db.default.svc.cluster.local"] [uri "/"] [unique_id "XfrpRArFgedF@mTDKh9QvAAAAI4"]
Action: Intercepted (phase 2)
Apache-Handler: proxy-server
Stopwatch: 1576724804853810 2752 (- - -)
Stopwatch2: 1576724804853810 2752; combined=2296, p1=669, p2=1340, p3=0, p4=0, p5=287, sr=173, sw=0, l=0, gc=0
Response-Body-Transformed: Dechunked
Producer: ModSecurity for Apache/2.9.3 (http://www.modsecurity.org/); OWASP_CRS/3.2.0.
Server: Apache
Engine-Mode: "ENABLED"
--fa628b64-Z--
```
Here we see that `modsecurity` has blocked and logged the East/West SQLi attack from one of the `wwworigin` containers to a `dborigin` container. Excellent!
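The audit log's bracketed key/value format is easy to mine if you want to aggregate which CRS rules are firing across your WAF Pods. A minimal sketch (the excerpt string is condensed from the log above; the regex helper is my own, not part of modsecurity):

```python
import re

# Condensed excerpt of the modsec_audit.log messages shown above
audit_excerpt = '''
Message: Warning. Pattern match ... [id "942190"] [msg "Detects MSSQL code execution and information gathering attempts"]
Message: Warning. Pattern match ... [id "942360"] [msg "Detects concatenated basic SQL injection and SQLLFI attempts"]
Message: Access denied with code 403 (phase 2). ... [id "949110"] [msg "Inbound Anomaly Score Exceeded (Total Score: 10)"]
'''

# Pull the CRS rule IDs that fired for this transaction
rule_ids = re.findall(r'\[id "(\d+)"\]', audit_excerpt)
```

Shipping these IDs to a metrics pipeline makes it easy to spot which rules are doing the blocking (here, the two SQLi signatures plus the anomaly-score evaluation rule).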
But there’s still a bit more to do. Even though we are now inspecting and protecting traffic at the application layer, we are not yet enforcing micro-segmentation between the services. That means that, even with the WAFs in place, any `authorigin` container can communicate with any `dborigin` container. We can demonstrate this by opening a shell on an `authorigin` container and attempting to send a simulated SQLi to a `dborigin` container from it:

```
# curl 'http://dborigin:8080/?username=joe%40example.com&password=%3BUNION+SELECT+1%2C+version%28%29+limit+1%2C1--'
X7fJ4MnlHo5gzJFQ1...
Thu Dec 19 04:54:25 2019
hostname: dborigin-7dc8d69f86-6mj2d
ip: 10.8.2.10
remote: 10.8.2.5
hostheader: dborigin:8080
path: /?username=joe%40example.com&password=%3BUNION+SELECT+1%2C+version%28%29+limit+1%2C1--
```
Not only can they communicate – we have completely bypassed the WAF! Let’s fix this with Network Policy.
Network Policy
Here is a Network Policy spec that will control the ingress to each internal pod. I tried to keep the rules simple, but in a production deployment a tighter policy would likely be desired. For example, you would probably also want to include Egress policies.
```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: wwworigin-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: wwworigin
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: www
    ports:
    - protocol: TCP
      port: 8080
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: auth-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: auth
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: wwworigin
    ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: db
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: wwworigin
    ports:
    - protocol: TCP
      port: 80
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: authorigin-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: authorigin
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: auth
    ports:
    - protocol: TCP
      port: 8080
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: dborigin-ingress
  namespace: default
spec:
  podSelector:
    matchLabels:
      app: dborigin
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: db
    ports:
    - protocol: TCP
      port: 8080
```
Even with a simple Network Policy you can see one of the downsides to the Security Service Layer Pattern: it can be tedious to set the proper micro-segmentation policy without making errors.
Basically what this policy is saying is:
- On the `wwworigin` containers, only accept traffic from the `www` containers that is destined to TCP port 8080
- On the `auth` containers, only accept traffic from the `wwworigin` containers that is destined to TCP port 80
- On the `db` containers, only accept traffic from the `wwworigin` containers that is destined to TCP port 80
- On the `authorigin` containers, only accept traffic from the `auth` containers that is destined to TCP port 8080
- On the `dborigin` containers, only accept traffic from the `db` containers that is destined to TCP port 8080
Not fun! In a large deployment with many services, this can quickly get out of hand, and errors are easy to make as you trace the traffic flow between each service. That’s why a Service Mesh is probably a better choice for an application with more than a few services.
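If you do stick with this pattern, one way to keep the tedium and typos in check is to generate the manifests from the service graph instead of hand-writing each one. A sketch of that idea (the `edges` table and helper are my own; you would serialize `policies` to YAML with a library such as PyYAML before applying):

```python
def ingress_policy(dst, src, port):
    """Build one Kubernetes NetworkPolicy dict allowing `src` pods to
    reach `dst` pods on the given TCP port, denying all other ingress."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": f"{dst}-ingress", "namespace": "default"},
        "spec": {
            "podSelector": {"matchLabels": {"app": dst}},
            "policyTypes": ["Ingress"],
            "ingress": [{
                "from": [{"podSelector": {"matchLabels": {"app": src}}}],
                "ports": [{"protocol": "TCP", "port": port}],
            }],
        },
    }

# (destination, allowed source, destination port) for each permitted flow
edges = [
    ("wwworigin", "www", 8080),
    ("auth", "wwworigin", 80),
    ("db", "wwworigin", 80),
    ("authorigin", "auth", 8080),
    ("dborigin", "db", 8080),
]

policies = [ingress_policy(dst, src, port) for dst, src, port in edges]
```

The edge list is the single source of truth, so adding a service means adding one line rather than a whole hand-copied manifest.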
So let’s see if this works. Copy the Network Policy text to a file named `servicelayer-network-policy.yaml` in `vi` and apply the Network Policy to the cluster with `kubectl`:

```
$ kubectl create -f servicelayer-network-policy.yaml
networkpolicy.networking.k8s.io/wwworigin-ingress created
networkpolicy.networking.k8s.io/auth-ingress created
networkpolicy.networking.k8s.io/db-ingress created
networkpolicy.networking.k8s.io/authorigin-ingress created
networkpolicy.networking.k8s.io/dborigin-ingress created
```
And now let’s try that simulated SQLi attack again from `authorigin` to `dborigin`:

```
/var/log # curl 'http://dborigin:8080/?username=joe%40example.com&password=%3BUNION+SELECT+1%2C+version%28%29+limit+1%2C1--'
curl: (7) Failed to connect to dborigin port 8080: Operation timed out
```
Success!
Finally, let’s double-check that the rest of the application is still working by looking at the `dborigin` logs. If we are still getting legitimate requests, then we should be good to go:

```
$ kubectl logs -f dborigin-7dc8d69f86-6mj2d
<snip>
10.8.2.6 - - [19/Dec/2019 05:23:26] "POST / HTTP/1.1" 200 -
10.8.2.6 - - [19/Dec/2019 05:23:28] "POST / HTTP/1.1" 200 -
10.8.2.9 - - [19/Dec/2019 05:23:31] "POST / HTTP/1.1" 200 -
10.8.2.6 - - [19/Dec/2019 05:23:33] "POST / HTTP/1.1" 200 -
10.8.0.11 - - [19/Dec/2019 05:23:34] "POST / HTTP/1.1" 200 -
10.8.2.9 - - [19/Dec/2019 05:23:34] "POST / HTTP/1.1" 200 -
10.8.2.6 - - [19/Dec/2019 05:23:35] "POST / HTTP/1.1" 200 -
10.8.2.9 - - [19/Dec/2019 05:23:39] "POST / HTTP/1.1" 200 -
10.8.2.9 - - [19/Dec/2019 05:23:40] "POST / HTTP/1.1" 200 -
10.8.2.9 - - [19/Dec/2019 05:23:41] "POST / HTTP/1.1" 200 -
{"Total": {"Requests": 8056, "Sent Bytes": 133603375, "Received Bytes": 8342908, "Attacks": 1, "SQLi": 1, "XSS": 0, "Directory Traversal": 0}, "Last 30 Seconds": {"Requests": 17, "Sent Bytes": 281932, "Received Bytes": 17612, "Attacks": 0, "SQLi": 0, "XSS": 0, "Directory Traversal": 0}}
10.8.2.6 - - [19/Dec/2019 05:23:43] "POST / HTTP/1.1" 200 -
10.8.2.6 - - [19/Dec/2019 05:23:43] "POST / HTTP/1.1" 200 -
```
Nice! We see the service is still getting requests with the Network Policy in place. We can even see that test SQLi request we sent earlier when we bypassed the WAF, but no SQLi attacks are seen since the Network Policy was applied.
Conclusion
Whew – that was fun! As you can see, the Security Service Layer Pattern makes it possible to lock down an application composed of a few microservices that need to communicate with each other, but with more than a few services things get complicated quickly. It does have the advantage, however, of allowing you to scale the security layers and the application layers independently.
Stay tuned for the next post where we’ll go over the Security Sidecar Pattern and we’ll see the advantages and disadvantages of that approach.
Next in the series: Part 4
Of course, as soon as I published this I realized that I could have further simplified the insertion of the modsecurity pods by exposing port 8080 and pointing that to port 80 on the internal modsecurity container. There’s always room for improvement! 🙂 Anything else I missed?