Setting Up the Insecure Deployment
In Part 1 of this series on microservices security patterns for Kubernetes, we went over three design patterns that enable micro-segmentation and deep inspection of the application and API traffic between microservices:
- Security Service Layer Pattern
- Security Sidecar Pattern
- Service Mesh Security Plugin Pattern
In this post we will lay the groundwork for a deep dive into the Security Service Layer Pattern with a live, insecure deployment on Google Kubernetes Engine (GKE). By the end of this post you will be able to bring up an insecure deployment and demonstrate layer 7 attacks and unrestricted access between internal services. In the next post we will layer a Security Service Layer Pattern on top to secure the application.
The Base Deployment
Let’s first get our cluster up and running with a simple deployment with no security and show what is possible in a nearly default state. We’ll use this simple.yaml deployment I have created using my microsim app. microsim is a microservice simulator that can send simulated JSON/HTTP and application attack traffic between services. It has some logging and statistics reporting functionality that will allow us to see attacks being sent by the client and received or blocked by the server.
Here is a diagram of the deployment.
Figure 1: Simple Deployment

In this microservice architecture we see three simulated services:
- Public Web interface service
- Internal Authentication service
- Internal Database service
In the default state, all services are able to communicate with one another and there are no protections from application layer attacks. Let’s take a quick look at the Pod Deployments and Services in this application.
www Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: www
spec:
  replicas: 3
  selector:
    matchLabels:
      app: www
  template:
    metadata:
      labels:
        app: www
    spec:
      containers:
      - name: microsimserver
        image: kellybrazil/microsimserver
        env:
        - name: STATS_PORT
          value: "5000"
        ports:
        - containerPort: 8080
      - name: microsimclient
        image: kellybrazil/microsimclient
        env:
        - name: REQUEST_URLS
          value: "http://auth.default.svc.cluster.local:8080,http://db.default.svc.cluster.local:8080"
        - name: SEND_SQLI
          value: "True"
        - name: STATS_PORT
          value: "5001"
```
In the www deployment above we see three Pod replicas, each running two containers: microsimserver and microsimclient.

The microsimserver container is configured to expose port 8080, which is the default port the service listens on. By default, the server will respond with 16KB of data and some diagnostic information in either plain HTTP or JSON/HTTP, depending on whether the request is an HTTP GET or POST.
The microsimclient container is configured to send a single 1KB JSON/HTTP POST request every second to http://auth.default.svc.cluster.local:8080 or http://db.default.svc.cluster.local:8080, which resolve to the internal auth and db Services using the default Kubernetes DNS resolver.

We also see that microsimclient is configured to occasionally send SQLi attack traffic to the auth and db Services. There are many other behaviors that can be configured, but we’ll keep things simple.
The stats server for microsimserver is configured to run on port 5000 and the stats server for microsimclient is configured to run on port 5001. These ports are not exposed to the cluster, so we will need to get shell access to the containers to see the stats.
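If you would rather not open a shell later on, kubectl port-forward is another way to reach an unexposed stats port while testing. A minimal sketch, assuming a Pod name taken from your own kubectl get pods output:

```
# Forward the microsimclient stats port (5001) from the Pod to your workstation;
# the Pod name below is only an example and will differ in your cluster.
$ kubectl port-forward www-5d89bcb54f-bcjm9 5001:5001

# In another terminal:
$ curl http://localhost:5001
```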
Now, let’s look at the www service.

www Service

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: www
  name: www
spec:
  externalTrafficPolicy: Local
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: www
  sessionAffinity: None
  type: LoadBalancer
```
The service is configured to publicly expose the www service via port 80 with a LoadBalancer type. The externalTrafficPolicy: Local option allows the originating IP address to be preserved within the cluster.
Now let’s take a look at the db deployment and service. The auth service is exactly the same as the db service, so we’ll skip going over that one.

db Deployment

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: db
spec:
  replicas: 3
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
      - name: microsimserver
        image: kellybrazil/microsimserver
        env:
        - name: STATS_PORT
          value: "5000"
        ports:
        - containerPort: 8080
```
Just like the www deployment, there are three Pod replicas, but only one container (microsimserver) runs in each Pod. The default microsimserver listening port of 8080 is exposed and the stats server listens on port 5000, though it is not exposed, so we’ll need to shell into the container to view the stats.
And here is the db Service:

db Service

```yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: db
  name: db
spec:
  ports:
  - port: 8080
    targetPort: 8080
  selector:
    app: db
  sessionAffinity: None
```
Since this is an internal service, we are not using the LoadBalancer type, so the Service will be created as a ClusterIP type, and we do not need to define externalTrafficPolicy.
Firing up the Cluster
Let’s bring up the cluster from within the GKE console. Create a standard cluster using the n1-standard-2 machine type with the Enable network policy option checked under the advanced Network security options:
Figure 2: Enable network policy in GKE

Note: you can also create a cluster with network policy enabled at the command line with the --enable-network-policy argument:
```
$ gcloud container clusters create test --machine-type=n1-standard-2 --enable-network-policy
```
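If you plan to run kubectl from your own machine rather than Cloud Shell, you will also need to fetch credentials for the new cluster. A minimal sketch, assuming the cluster name test from the command above (the zone is only an example; substitute your own):

```
# Configure kubectl credentials for the cluster created above.
$ gcloud container clusters get-credentials test --zone us-central1-a
```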
Once the cluster is up and running, we can spin up the deployment using kubectl locally (after configuring it with the gcloud command above), or you can use the Google Cloud Shell terminal. For simplicity, let’s use the Cloud Shell and connect to the cluster:
Figure 3: Connect to the Cluster via Cloud Shell

Within Cloud Shell, copy and paste the deployment text into a new file called simple.yaml using vi.
Then create the deployment:
```
$ kubectl create -f simple.yaml
deployment.apps/www created
deployment.apps/auth created
deployment.apps/db created
service/www created
service/auth created
service/db created
```
You will see the deployments and services start up. You can verify the application is running successfully with the following commands:
```
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
auth-5f964774bd-mvtcl   1/1     Running   0          67s
auth-5f964774bd-sn4cw   1/1     Running   0          66s
auth-5f964774bd-xtt54   1/1     Running   0          66s
db-578757bf68-dzjdq     1/1     Running   0          66s
db-578757bf68-kkwzr     1/1     Running   0          66s
db-578757bf68-mlf5t     1/1     Running   0          66s
www-5d89bcb54f-bcjm9    2/2     Running   0          67s
www-5d89bcb54f-bzpwl    2/2     Running   0          67s
www-5d89bcb54f-vbdf6    2/2     Running   0          67s
```
```
$ kubectl get deploy
NAME   READY   UP-TO-DATE   AVAILABLE   AGE
auth   3/3     3            3           92s
db     3/3     3            3           92s
www    3/3     3            3           92s
```
```
$ kubectl get service
NAME         TYPE           CLUSTER-IP    EXTERNAL-IP     PORT(S)        AGE
auth         ClusterIP      10.0.13.227   <none>          8080/TCP       2m1s
db           ClusterIP      10.0.3.1      <none>          8080/TCP       2m1s
kubernetes   ClusterIP      10.0.0.1      <none>          443/TCP        10m
www          LoadBalancer   10.0.6.39     35.188.221.11   80:32596/TCP   2m1s
```
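As a convenience, you can also pull the external IP into a shell variable rather than copying it by hand; a small sketch using kubectl’s jsonpath output (assuming the standard LoadBalancer status fields):

```
# Capture the external IP assigned to the www LoadBalancer Service.
$ EXTERNAL_IP=$(kubectl get service www -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
$ echo $EXTERNAL_IP
35.188.221.11
```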
Find the external address assigned to the www service and send an HTTP GET request to it to verify the service is responding. You can do this from Cloud Shell or your laptop:
```
$ curl http://35.188.221.11
FPGpqiVZivddHQvkvDHFErFiW2WK8Kl3ky9cEeI7TA6vH8PYmA1obaZGd1AR3avz3SqPZlcrbXFOn3hVlFQdFm9S07ca
<snip>
jYbD5jNA62JEQbUSqk9V0JGgYLATbYe2rv3XeFQIEayJD4qeGnPp7UbEESPBmxrw
Wed Dec 11 20:07:08 2019
hostname: www-5d89bcb54f-vbdf6
ip: 10.56.0.4
remote: 35.197.46.124
hostheader: 35.188.221.11
path: /
```
You should see a long block of random text and some client and server information at the end. Notice that if you send the request as an HTTP POST, the response comes back as JSON. Here I have run the response through jq to pretty-print it:
```
$ curl -X POST http://35.188.221.11 | jq .
{
  "data": "hhV9jogGrM7FMxsQCUAcjdsLQRgjgpCoO...",
  "time": "Wed Dec 11 20:14:20 2019",
  "hostname": "www-5d89bcb54f-vbdf6",
  "ip": "10.56.0.4",
  "remote": "46.18.117.38",
  "hostheader": "35.188.221.11",
  "path": "/"
}
```
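Because the www Deployment runs three replicas behind the LoadBalancer, repeating the request and extracting just the hostname field with jq is a quick way to watch the external load balancing in action. A small sketch, run from Cloud Shell or your laptop:

```
# Send a handful of POST requests and print only the responding Pod's hostname;
# you should eventually see more than one www Pod name in the output.
$ for i in 1 2 3 4 5; do curl -s -X POST http://35.188.221.11 | jq -r .hostname; done
```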
Testing the Deployment
Now, let’s prove that any Pod can communicate with any other Pod and that the SQLi attacks are being received by the internal services. We can do this by opening a shell to one of the www Pods and one of the db Pods.
Open two new tabs in Cloud Shell and find the Pod names from the kubectl get pods command output above.
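If you don’t want to scan the full Pod listing, the app labels from simple.yaml make it easy to list just the Pods you need; a small convenience sketch:

```
# List only the www and db Pods using their app labels from the deployment.
$ kubectl get pods -l app=www
$ kubectl get pods -l app=db
```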
In one tab, run the following to get a shell on the microsimclient container in the www Pod:
```
$ kubectl exec www-5d89bcb54f-bcjm9 -c microsimclient -it sh
/app #
```
In the other tab, run the following to get a shell on the microsimserver container in the db Pod:
```
$ kubectl exec db-578757bf68-dzjdq -c microsimserver -it sh
/app #
```
From the microsimclient shell, run the following curl command to see the application stats. This will show us how many normal and attack requests have been sent:
```
/app # curl http://localhost:5001
{
  "time": "Wed Dec 11 20:21:30 2019",
  "runtime": 1031,
  "hostname": "www-5d89bcb54f-bcjm9",
  "ip": "10.56.1.3",
  "stats": {
    "Requests": 1026,
    "Sent Bytes": 1062936,
    "Received Bytes": 17006053,
    "Internet Requests": 0,
    "Attacks": 9,
    "SQLi": 9,
    "XSS": 0,
    "Directory Traversal": 0,
    "DGA": 0,
    "Malware": 0,
    "Error": 1
  },
  "config": {
    "STATS_PORT": 5001,
    "STATSD_HOST": null,
    "STATSD_PORT": 8125,
    "REQUEST_URLS": "http://auth.default.svc.cluster.local:8080,http://db.default.svc.cluster.local:8080",
    "REQUEST_INTERNET": false,
    "REQUEST_MALWARE": false,
    "SEND_SQLI": true,
    "SEND_DIR_TRAVERSAL": false,
    "SEND_XSS": false,
    "SEND_DGA": false,
    "REQUEST_WAIT_SECONDS": 1.0,
    "REQUEST_BYTES": 1024,
    "STOP_SECONDS": 0,
    "STOP_PADDING": false,
    "TOTAL_STOP_SECONDS": 0,
    "REQUEST_PROBABILITY": 1.0,
    "EGRESS_PROBABILITY": 0.1,
    "ATTACK_PROBABILITY": 0.01
  }
}
```
Run the command a few times until you see that a number of SQLi attacks have been sent. Here we see that this microsimclient instance has sent 9 SQLi attacks over its 1031 seconds of runtime.
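Rather than re-running the command by hand, you can poll the stats endpoint in a simple shell loop from the same container; a minimal sketch (the interval is arbitrary):

```
# Print the client stats every 10 seconds; stop with Ctrl-C.
/app # while true; do curl -s http://localhost:5001; echo; sleep 10; done
```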
From the microsimserver shell, curl the server stats to see if any SQLi attacks have been detected:
```
/app # curl http://localhost:5000
{
  "time": "Wed Dec 11 20:23:52 2019",
  "runtime": 1177,
  "hostname": "db-578757bf68-dzjdq",
  "ip": "10.56.2.11",
  "stats": {
    "Requests": 610,
    "Sent Bytes": 10110236,
    "Received Bytes": 629888,
    "Attacks": 2,
    "SQLi": 2,
    "XSS": 0,
    "Directory Traversal": 0
  },
  "config": {
    "LISTEN_PORT": 8080,
    "STATS_PORT": 5000,
    "STATSD_HOST": null,
    "STATSD_PORT": 8125,
    "RESPOND_BYTES": 16384,
    "STOP_SECONDS": 0,
    "STOP_PADDING": false,
    "TOTAL_STOP_SECONDS": 0
  }
}
```
Here we see that this particular server has detected two SQLi attacks coming from the clients within the cluster (East/West traffic). Remember, there are also five other db and auth Pods that are receiving attacks, so you will see the attack load shared amongst them.
Let’s also demonstrate that the db server can directly communicate with the auth service:
```
/app # curl http://auth:8080
firOXAY4hktZLjHvbs41JhReCWHqs...
<snip>
Wed Dec 11 20:26:38 2019
hostname: auth-5f964774bd-mvtcl
ip: 10.56.1.4
remote: 10.56.2.11
hostheader: auth:8080
path: /
```
Since we get a response, it is clear that there is no micro-segmentation in place between the db and auth Services and Pods.
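As one more spot check (not part of the walkthrough above), you can confirm from the same db shell that even the public-facing www Service is freely reachable from an internal Pod:

```
# From the db Pod: the www Service answers on its Service port (80) with the
# same kind of response we saw externally, confirming unrestricted East/West access.
/app # curl http://www.default.svc.cluster.local
```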
Microservice logging
As with most services in Kubernetes, both microsimclient and microsimserver regularly send logs for each request and response to stdout, which means they can be found with the kubectl logs command. Every 30 seconds a JSON summary will also be logged:
microsimclient logs

```
$ kubectl logs www-5d89bcb54f-bcjm9 microsimclient
2019-12-11T20:04:19 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:20 Request to http://db.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16573
2019-12-11T20:04:21 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:22 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:23 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:23 SQLi sent: http://auth.default.svc.cluster.local:8080/?username=joe%40example.com&password=%3BUNION+SELECT+1%2C+version%28%29+limit+1%2C1--
2019-12-11T20:04:24 Request to http://db.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16574
2019-12-11T20:04:25 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:26 Request to http://db.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16573
2019-12-11T20:04:27 Request to http://db.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16573
2019-12-11T20:04:28 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:29 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:30 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:31 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:32 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:33 Request to http://db.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16573
2019-12-11T20:04:34 Request to http://db.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16573
2019-12-11T20:04:35 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:36 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:37 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:38 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:39 Request to http://db.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16573
2019-12-11T20:04:40 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:41 Request to http://db.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16573
2019-12-11T20:04:42 Request to http://db.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16573
2019-12-11T20:04:43 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:44 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:45 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
2019-12-11T20:04:46 Request to http://db.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16573
2019-12-11T20:04:47 Request to http://db.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16573
2019-12-11T20:04:48 Request to http://auth.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16577
{"Total": {"Requests": 30, "Sent Bytes": 31080, "Received Bytes": 497267, "Internet Requests": 0, "Attacks": 1, "SQLi": 1, "XSS": 0, "Directory Traversal": 0, "DGA": 0, "Malware": 0, "Error": 0}, "Last 30 Seconds": {"Requests": 30, "Sent Bytes": 31080, "Received Bytes": 497267, "Internet Requests": 0, "Attacks": 1, "SQLi": 1, "XSS": 0, "Directory Traversal": 0, "DGA": 0, "Malware": 0, "Error": 0}}
2019-12-11T20:04:49 Request to http://db.default.svc.cluster.local:8080/ Request size: 1036 Response size: 16573
...
```
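The attack entries are easy to isolate from the normal request noise with a simple grep; a small sketch using the same Pod name as above:

```
# Show only the SQLi attack log entries from the microsimclient container.
$ kubectl logs www-5d89bcb54f-bcjm9 microsimclient | grep "SQLi sent"
```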
microsimserver logs

```
$ kubectl logs db-578757bf68-dzjdq microsimserver
10.56.1.5 - - [11/Dec/2019 20:04:22] "POST / HTTP/1.1" 200 -
10.56.0.4 - - [11/Dec/2019 20:04:22] "POST / HTTP/1.1" 200 -
10.56.1.3 - - [11/Dec/2019 20:04:24] "POST / HTTP/1.1" 200 -
10.56.1.5 - - [11/Dec/2019 20:04:25] "POST / HTTP/1.1" 200 -
10.56.0.4 - - [11/Dec/2019 20:04:26] "POST / HTTP/1.1" 200 -
10.56.1.5 - - [11/Dec/2019 20:04:27] "POST / HTTP/1.1" 200 -
10.56.0.4 - - [11/Dec/2019 20:04:33] "POST / HTTP/1.1" 200 -
10.56.0.4 - - [11/Dec/2019 20:04:35] "POST / HTTP/1.1" 200 -
10.56.0.4 - - [11/Dec/2019 20:04:41] "POST / HTTP/1.1" 200 -
10.56.0.4 - - [11/Dec/2019 20:04:43] "POST / HTTP/1.1" 200 -
{"Total": {"Requests": 10, "Sent Bytes": 165740, "Received Bytes": 10360, "Attacks": 0, "SQLi": 0, "XSS": 0, "Directory Traversal": 0}, "Last 30 Seconds": {"Requests": 10, "Sent Bytes": 165740, "Received Bytes": 10360, "Attacks": 0, "SQLi": 0, "XSS": 0, "Directory Traversal": 0}}
10.56.1.5 - - [11/Dec/2019 20:04:47] "POST / HTTP/1.1" 200 -
...
```
You can see how the traffic is automatically being load balanced by the Kubernetes cluster by inspecting the request sources in the microsimserver logs.
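To quantify that, you can count how many requests arrived from each client Pod IP; a small sketch that filters the access-log lines and tallies the source addresses:

```
# Count requests per source IP in the db Pod's access log.
$ kubectl logs db-578757bf68-dzjdq microsimserver | grep 'POST /' | awk '{print $1}' | sort | uniq -c
```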
Adding Micro-segmentation and Application Layer Protection
Stay tuned for the next post, where we will take this simple, insecure deployment and implement a Security Service Layer Pattern. Then we’ll show how the internal application layer attacks are blocked with this approach. Finally, we will demonstrate micro-segmentation, which will restrict access between microservices, for example, traffic between the auth and db services.
Note: Depending on your Google Cloud account status, you may incur charges for the cluster, so remember to delete it from the GKE console when you are done. You may also need to delete any load balancer objects that were created by the deployment within GCP to avoid residual charges to your account.
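A command-line cleanup sketch that should cover both (deleting the www Service removes the load balancer that Kubernetes provisioned in GCP; the cluster name is assumed from the earlier create command):

```
# Delete the deployments and Services (including the www LoadBalancer) first,
# then delete the cluster itself.
$ kubectl delete -f simple.yaml
$ gcloud container clusters delete test
```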
Next in the series: Part 3