The Service Mesh Sidecar-on-Sidecar Pattern
In Part 4 of my series on Microservice Security Patterns for Kubernetes we dove into the Sidecar Security Pattern and configured a working application with micro-segmentation enforcement and deep inspection for application-layer protection. The Sidecar Security Pattern is nice and clean, but what if you are running a Service Mesh like Istio with Envoy?
For a great overview of the state of the art in Service Mesh, see this article by Guillaume Dury. He provides a nice comparison between modern Service Mesh options.
In this post we will take the Sidecar Security Pattern from Part 4 and apply it in an Istio Service Mesh using Envoy sidecars. This is essentially a Sidecar-on-Sidecar Pattern that allows us not only to use the native encryption and segmentation capabilities of the Service Mesh, but also to layer on L7 application security against OWASP Top 10-style attacks on the microservices.
How does the Service Mesh Sidecar-on-Sidecar Pattern work?
It’s Sidecars All The Way Down
As we discussed in Part 4, you can have multiple containers in a Pod. We used the modsecurity container as a sidecar to intercept HTTP requests and inspect them before forwarding them on to the microsimserver container in the same Pod. But with an Istio Service Mesh, there will also be an Envoy container injected into the Pod, and it will do the egress and ingress traffic interception. Can we have two sidecars in a Pod?
The answer is yes. When Envoy is added via the sidecar injection functionality, it configures itself based on the existing Pod spec in the deployment manifest. This means we can use a manifest nearly identical to the one from Part 4, and Envoy will correctly configure itself to send intercepted traffic on to the modsecurity container, which will then send the traffic to the microsimserver container.

In this post we will be demonstrating this in action. There are surprisingly few changes that need to be made to the Security Sidecar Pattern deployment file to make this work. Also, we’ll be able to easily see how this works using the Kiali dashboard which provides visualization for the Istio Service Mesh.
The Sidecar-on-Sidecar Pattern
We’ll be using this deployment manifest that is nearly identical to the Security Sidecar Pattern manifest from Part 4. Here is what the design looks like:

First we’ll enable service-to-service encryption, then strict mutual TLS (mTLS) with RBAC to provide micro-segmentation. Finally, we’ll configure Istio ingress gateway so we can access the app from the public internet.
But first, let’s just deploy the modified Sidecar Pattern manifest with a vanilla Istio configuration.
Spinning up the Cluster in GKE
We’ll spin up a Kubernetes cluster in GKE similar to how we did previously in Part 2, except this time we’ll use 4 nodes of the n1-standard-2 machine type instead of 3. Since we’ll be using Istio to control service-to-service traffic (East/West flows), we no longer need to check the Enable Network Policy box. Instead, we will need to check the Enable Istio (beta) box under Additional Features.

We’ll start with setting Enable mTLS (beta) to Permissive. We will change this later via configuration files as we try out some scenarios.
I’m not going to give a complete tutorial on how to complete the setup of Istio on GKE, but I basically used the instructions documented in the following links to enable Prometheus and Grafana. I used the same idea to enable the Kiali dashboard to visualize the Service Mesh. We’ll be using the Kiali service graphs to verify the status of the application.
Once you have Kiali enabled, you can configure port forwarding on the Service so you can browse to the dashboard using your laptop.
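One way to do this from Cloud Shell is a plain kubectl port-forward, assuming the Kiali Service is named kiali in the istio-system namespace and listens on its default port 20001 (you can confirm both in the service listing later in this post):

$ kubectl -n istio-system port-forward svc/kiali 8080:20001

With that running, the Cloud Shell Web Preview on port 8080 will proxy through to the Kiali dashboard.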

Click the https://ssh.cloud.google.com/devshell/proxy?port=8080 link and then append /kiali at the end of the translated link in your browser. You should see a login screen. Use the default credentials or the ones you specified with a Kubernetes secret during setup. You should see a blank service graph:

Make sure to check the Security checkbox under the Display menu:

Finally, we want to enable automatic sidecar injection for the Envoy proxy by running this command within Cloud Shell:
$ kubectl label namespace default istio-injection=enabled
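To confirm the label took effect before deploying anything, you can list namespaces along with their istio-injection label:

$ kubectl get namespace -L istio-injection

The default namespace should show istio-injection=enabled; any Pods created there from now on will get the Envoy sidecar injected automatically.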
Alright! Now let’s deploy the app.
Deploying the Sidecar-on-Sidecar Manifest
There are only a few minor differences between the sidecar.yaml manifest used in Part 4 and the istio-sidecar.yaml that we will be using for the following examples. Let’s take a look:
Service Accounts
apiVersion: v1
kind: ServiceAccount
metadata:
  name: www
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: db
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: auth
First, we have added these ServiceAccount objects. This is what Istio uses to differentiate services within the mesh and affects how the certificates used in mTLS are generated. You’ll see how we bind these ServiceAccount objects to the Pods next.
Deployments
We’ll just take a look at the www Deployment since the same changes are required for all of the Deployments.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: www
spec:
  replicas: 3
  selector:
    matchLabels:
      app: www
  template:
    metadata:
      labels:
        app: www
        version: v1.0              # add version
    spec:
      serviceAccountName: www      # add serviceAccountName
      containers:
      - name: modsecurity
        image: owasp/modsecurity-crs:v3.2-modsec2-apache
        ports:
        - containerPort: 80
        env:
        - name: SETPROXY
          value: "True"
        - name: PROXYLOCATION
          value: "http://127.0.0.1:8080/"
      - name: microsimserver
        image: kellybrazil/microsimserver
        ports:
        - containerPort: 8080      # add microsimserver port
        env:
        - name: STATS_PORT
          value: "5000"
      - name: microsimclient
        image: kellybrazil/microsimclient
        env:
        - name: STATS_PORT
          value: "5001"
        - name: REQUEST_URLS
          value: "http://auth.default.svc.cluster.local:8080/,http://db.default.svc.cluster.local:8080/"
        - name: SEND_SQLI
          value: "True"
The only differences from the original sidecar.yaml are:
- We have added a version label. Istio requires this label to be included.
- We associated the Pods with the appropriate serviceAccountName. This will be important for micro-segmentation later on.
- We added the containerPort configuration for the microsimserver containers. This is important so the Envoy proxy sidecar can configure itself properly.
Services
Now let’s see the minor changes to the Services. Since they are all very similar, we will just take a look at the www Service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: www
  name: www
spec:
  # externalTrafficPolicy: Local   # remove externalTrafficPolicy
  ports:
  - port: 8080
    targetPort: 80
    name: http                     # add port name
  selector:
    app: www
  sessionAffinity: None
  # type: LoadBalancer             # remove LoadBalancer type
We have removed a couple of items from the www service: externalTrafficPolicy and type. This is because the www service is no longer directly exposed to the public internet. We’ll expose it later using an Istio Ingress Gateway.
Also, we have added the port name field. This is required so Istio can correctly configure Envoy to listen for the correct protocol and produce the correct telemetry for the inter-service traffic.
Deploy the App
Now let’s deploy the application using kubectl. Copy/paste the manifest to a file called istio-sidecar.yaml within Cloud Shell using vi. Then run:
$ kubectl apply -f istio-sidecar.yaml
serviceaccount/www created
serviceaccount/db created
serviceaccount/auth created
deployment.apps/www created
deployment.apps/auth created
deployment.apps/db created
service/www created
service/auth created
service/db created
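Before looking at the dashboard, it’s worth confirming that the Envoy sidecar really was injected next to the application containers. Listing the containers of one of the www Pods (the Pod name below is a placeholder; grab a real one from kubectl get pods) should show the two original sidecar-pattern containers plus the injected istio-proxy:

$ kubectl get pod <www-pod-name> -o jsonpath='{.spec.containers[*].name}'
modsecurity microsimserver microsimclient istio-proxy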
After a couple of minutes you should see this within the Kiali dashboard:

Excellent! You’ll notice the services will alternate between green and orange. This is because the www service is sending SQLi attacks to the db and auth services every so often, and those are being blocked with HTTP 403 errors being returned by the modsecurity WAF container.
Voila! We have application layer security in Istio!
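If you want to double-check that the 403s really are coming from the WAF and not the application, one place to look is the modsecurity container’s logs on one of the target Deployments. This assumes the owasp/modsecurity-crs image writes its Apache/ModSecurity logs to stdout, which may depend on how the image is configured:

$ kubectl logs deploy/db -c modsecurity | grep -i 403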
But you may have noticed that there is no encryption between services enabled yet. Also, all services can talk to each other, so we don’t have proper micro-segmentation. We can illustrate that with a curl from auth to db:
$ kubectl exec auth-cf6f45fb-9k678 -c microsimserver curl http://db:8080
<snip>
sufH1FhoMgvXvbPOkE3O0H3MwNAN
Tue Jan 28 01:16:48 2020
hostname: db-55747d84d8-jlz7z
ip: 10.8.0.13
remote: 127.0.0.1
hostheader: 127.0.0.1:8080
path: /
Let’s fix these issues.
Encrypting the East/West Traffic
It is fairly easy to encrypt East/West traffic using Istio. First we’ll demonstrate permissive mTLS and then we’ll advance to strict mTLS with RBAC to enforce micro-segmentation.
Here’s what the manifest for this configuration looks like:
apiVersion: "authentication.istio.io/v1alpha1" kind: "Policy" metadata: name: "default" namespace: "default" spec: peers: - mtls: {} --- apiVersion: "networking.istio.io/v1alpha3" kind: "DestinationRule" metadata: name: "default" namespace: "default" spec: host: "*.default.svc.cluster.local" trafficPolicy: tls: mode: ISTIO_MUTUAL
The Policy manifest specifies that all Pods in the default namespace will only accept encrypted requests using TLS. The DestinationRule manifest specifies how the client-side outbound connections are handled. Here we see that connections to any service in the default namespace (*.default.svc.cluster.local) will use TLS. This effectively disables plaintext traffic between services in the namespace.
Copy/paste the manifest text to a file called istio-mtls-permissive.yaml. Then apply it with kubectl:
$ kubectl apply -f istio-mtls-permissive.yaml
policy.authentication.istio.io/default created
destinationrule.networking.istio.io/default created
After 30 seconds or so you should start to see the padlocks between the services in the Kiali Dashboard indicating that the communications are encrypted. (Ensure you checked the Security checkbox under the Display drop-down)

Nice! We have successfully encrypted traffic between our services.
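You can also confirm from the command line that the authentication Policy and DestinationRule objects exist in the namespace. (If you have an istioctl binary matching the cluster version, its authn tls-check subcommand in the 1.1 line gives a per-service view of the mTLS status as well; the plain kubectl check below is the simpler option.)

$ kubectl get policies.authentication.istio.io,destinationrules.networking.istio.io -n default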
Enforcing micro-segmentation
Even though the communication between services is now encrypted, we still don’t have effective micro-segmentation between Pods running the Envoy sidecar. We can test this again with a curl from an auth pod to a db pod:
$ kubectl exec auth-cf6f45fb-9k678 -c microsimserver curl http://db:8080
<snip>
2S76Q83lFt3eplRkAHoHkqUl1PhX
Tue Jan 28 03:47:03 2020
hostname: db-55747d84d8-9bhwx
ip: 10.8.1.5
remote: 127.0.0.1
hostheader: 127.0.0.1:8080
path: /
And here is the connection displayed in Kiali:

So the good news is that the connection is encrypted. The bad news is that auth shouldn’t be able to communicate with db. Let’s implement micro-segmentation.
The first step is to enforce strict mTLS and enable Role Based Access Control (RBAC) for the default namespace. First copy/paste the manifest to a file called istio-mtls-strict.yaml with vi. Let’s take a look at the configuration:
apiVersion: "authentication.istio.io/v1alpha1" kind: "Policy" metadata: name: "default" namespace: "default" spec: peers: - mtls: mode: STRICT --- apiVersion: "networking.istio.io/v1alpha3" kind: "DestinationRule" metadata: name: "default" namespace: "default" spec: host: "*.default.svc.cluster.local" trafficPolicy: tls: mode: ISTIO_MUTUAL --- apiVersion: "rbac.istio.io/v1alpha1" kind: ClusterRbacConfig metadata: name: default spec: mode: 'ON_WITH_INCLUSION' inclusion: namespaces: ["default"]
The important bits here are:
- mode: STRICT in the Policy, which disallows any plaintext communications
- mode: 'ON_WITH_INCLUSION' in the ClusterRbacConfig, which requires RBAC policies to be satisfied before allowing connections between services in the included namespaces
- namespaces: ["default"] under inclusion, which lists the namespaces that have the RBAC policies applied
Let’s apply this by deleting the old config and applying the new one:
$ kubectl delete -f istio-mtls-permissive.yaml
policy.authentication.istio.io "default" deleted
destinationrule.networking.istio.io "default" deleted

$ kubectl apply -f istio-mtls-strict.yaml
policy.authentication.istio.io/default created
destinationrule.networking.istio.io/default created
clusterrbacconfig.rbac.istio.io/default created

Hmm… the entire application is broken now. No worries – this is expected! We did this to illustrate that policies need to be explicitly defined to allow any service-to-service (East/West) communications.
Let’s add one service at a time to see these policies in action. Copy/paste this manifest to a file called istio-rbac-policy-test.yaml with vi:
apiVersion: "rbac.istio.io/v1alpha1" kind: ServiceRole metadata: name: www-access-role namespace: default spec: rules: - services: ["db.default.svc.cluster.local"] methods: ["GET", "POST"] paths: ["*"] --- apiVersion: "rbac.istio.io/v1alpha1" kind: ServiceRoleBinding metadata: name: www-to-db namespace: default spec: subjects: - user: "cluster.local/ns/default/sa/www" roleRef: kind: ServiceRole name: "www-access-role"
Remember those ServiceAccounts we created in the beginning? Now we are tying them to an RBAC policy. In this case we are allowing GET and POST requests to db.default.svc.cluster.local from Pods that present client certificates identifying themselves as www.
The user field takes an entry in the form cluster.local/ns/<namespace>/sa/<serviceAccountName>. In this case cluster.local/ns/default/sa/www refers to the www ServiceAccount we created earlier.
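If you’re curious how Envoy proves it is www, you can peek at the workload certificate Citadel issued to the sidecar. This assumes this version of Istio still mounts the certificates at /etc/certs in the istio-proxy container (the pre-SDS default); the Pod name is a placeholder:

$ kubectl exec <www-pod-name> -c istio-proxy -- cat /etc/certs/cert-chain.pem | openssl x509 -noout -text | grep spiffe

The URI SAN should read spiffe://cluster.local/ns/default/sa/www, which is exactly the identity the ServiceRoleBinding matches on.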
Let’s apply this:
$ kubectl apply -f istio-rbac-policy-test.yaml
servicerole.rbac.istio.io/www-access-role created
servicerolebinding.rbac.istio.io/www-to-db created

It worked! www can now talk to db. Now we can fix auth by updating the policy to look like this:
spec:
  rules:
  - services: ["db.default.svc.cluster.local", "auth.default.svc.cluster.local"]
Let’s do that, plus allow the Istio Ingress Gateway service account istio-ingressgateway-service-account to access www. This will allow public access to the service when we configure the Ingress Gateway later. Copy/paste this manifest to a file called istio-rbac-policy-final.yaml and apply it:
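A sketch of what that manifest contains, reconstructed from the resource names in the apply output below (the methods and paths simply mirror the test policy above):

apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: www-access-role
  namespace: default
spec:
  rules:
  - services: ["db.default.svc.cluster.local", "auth.default.svc.cluster.local"]
    methods: ["GET", "POST"]
    paths: ["*"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: www-to-db
  namespace: default
spec:
  subjects:
  - user: "cluster.local/ns/default/sa/www"
  roleRef:
    kind: ServiceRole
    name: "www-access-role"
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRole
metadata:
  name: pub-access-role
  namespace: default
spec:
  rules:
  - services: ["www.default.svc.cluster.local"]
    methods: ["GET", "POST"]
    paths: ["*"]
---
apiVersion: "rbac.istio.io/v1alpha1"
kind: ServiceRoleBinding
metadata:
  name: pub-to-www
  namespace: default
spec:
  subjects:
  - user: "cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"
  roleRef:
    kind: ServiceRole
    name: "pub-access-role"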
$ kubectl delete -f istio-rbac-policy-test.yaml
servicerole.rbac.istio.io "www-access-role" deleted
servicerolebinding.rbac.istio.io "www-to-db" deleted

$ kubectl apply -f istio-rbac-policy-final.yaml
servicerole.rbac.istio.io/www-access-role created
servicerolebinding.rbac.istio.io/www-to-db created
servicerole.rbac.istio.io/pub-access-role created
servicerolebinding.rbac.istio.io/pub-to-www created

Very good! We’re back up and running. Let’s verify that micro-segmentation is in place and that requests cannot get through even by using IP addresses instead of Service names. We’ll try connecting from an auth Pod to a db Pod:
$ kubectl exec auth-cf6f45fb-9k678 -c microsimserver curl http://db:8080
RBAC: access denied

$ kubectl exec auth-cf6f45fb-9k678 -c microsimserver curl 10.4.3.10:8080
upstream connect error or disconnect/reset before headers. reset reason: connection termination
Success!
Exposing the App to the Internet
Now that we have secured the app internally, we can expose it to the internet. If you try to visit the site now it will fail since the Istio Ingress has not been configured to forward traffic to the www service.
In Cloud Shell, copy/paste this manifest to a file called istio-ingress.yaml with vi:
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: www-gateway
spec:
  selector:
    app: istio-ingressgateway
    istio: ingressgateway
    release: istio
  servers:
  - port:
      number: 80
      name: http2
      protocol: HTTP2
    hosts:
    - "*"
---
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: www-vservice
spec:
  hosts:
  - "*"
  gateways:
  - www-gateway
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        port:
          number: 8080
        host: www.default.svc.cluster.local
Here we’re telling the Istio Ingress Gateway to listen on port 80 using the HTTP2 protocol, and then we attach our www service to that gateway. We allowed the Ingress Gateway to communicate with the www service earlier via RBAC policy, so we should be good to apply this:
$ kubectl apply -f istio-ingress.yaml
gateway.networking.istio.io/www-gateway created
virtualservice.networking.istio.io/www-vservice created
Now we should be able to reach the application from the internet:
$ kubectl get services -n istio-system
NAME                     TYPE           CLUSTER-IP     EXTERNAL-IP     PORT(S)                                                                                                                      AGE
grafana                  ClusterIP      10.70.12.231   <none>          3000/TCP                                                                                                                     83m
istio-citadel            ClusterIP      10.70.2.197    <none>          8060/TCP,15014/TCP                                                                                                           87m
istio-galley             ClusterIP      10.70.11.184   <none>          443/TCP,15014/TCP,9901/TCP                                                                                                   87m
istio-ingressgateway     LoadBalancer   10.70.10.196   34.68.212.250   15020:30100/TCP,80:31596/TCP,443:32314/TCP,31400:31500/TCP,15029:32208/TCP,15030:31368/TCP,15031:31242/TCP,15032:31373/TCP,15443:30451/TCP   87m
istio-pilot              ClusterIP      10.70.3.210    <none>          15010/TCP,15011/TCP,8080/TCP,15014/TCP                                                                                       87m
istio-policy             ClusterIP      10.70.4.74     <none>          9091/TCP,15004/TCP,15014/TCP                                                                                                 87m
istio-sidecar-injector   ClusterIP      10.70.3.147    <none>          443/TCP                                                                                                                      87m
istio-telemetry          ClusterIP      10.70.10.55    <none>          9091/TCP,15004/TCP,15014/TCP,42422/TCP                                                                                       87m
kiali                    ClusterIP      10.70.15.2     <none>          20001/TCP                                                                                                                    86m
prometheus               ClusterIP      10.70.7.187    <none>          9090/TCP                                                                                                                     84m
promsd                   ClusterIP      10.70.8.70     <none>          9090/TCP

$ curl 34.68.212.250
<snip>
ja1IO2Hm2GJAqKBPao2YyccDAVrd
Wed Jan 29 01:24:46 2020
hostname: www-74f9dc9df8-j54k4
ip: 10.4.3.9
remote: 127.0.0.1
hostheader: 127.0.0.1:8080
path: /
Excellent! Our simple App is secured internally and exposed to the Internet.
Conclusion
I really enjoyed this challenge and I see great potential in using a Service Mesh along with a security sidecar proxy like modsecurity. Though I have to say that things are changing quickly, including the best practices and configuration syntax.
For example, in this proof of concept I used the default version of Istio that was installed on my GKE cluster (1.1.16), which already seems old since version 1.4 has deprecated the RBAC configuration I used in favor of a new style called AuthorizationPolicy. Unfortunately, this option was not available in my version of Istio, but it does look more straightforward than RBAC.
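For comparison, here is a rough sketch of what the www-to-db rule might look like as an AuthorizationPolicy on a newer Istio release. I haven’t tested this on this cluster since 1.1.16 doesn’t support it, so treat the details as illustrative:

apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: www-to-db
  namespace: default
spec:
  selector:
    matchLabels:
      app: db                  # applies to the db workloads rather than naming the service
  rules:
  - from:
    - source:
        principals: ["cluster.local/ns/default/sa/www"]   # same service account identity as before
    to:
    - operation:
        methods: ["GET", "POST"]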
There is a great deal more complexity in a Service Mesh deployment and troubleshooting connectivity issues can be difficult.
One thing that would probably need to be addressed in a production environment is the Envoy proxy sidecar configuration. In my simple scenario I was getting very strange connectivity results until I exposed port 8080 on the microsimserver container in the Deployment. Without that configuration (which worked fine without Istio), Envoy didn’t properly grab all of the ports, so it was possible to bypass Envoy altogether, which meant broken micro-segmentation and WAF bypass when connecting directly to the Pod IP address.
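One way to see which ports Envoy has actually claimed on a given Pod is istioctl’s proxy-config command, assuming you have an istioctl installed that matches the cluster version (the Pod name is a placeholder):

$ istioctl proxy-config listeners <db-pod-name>

If 8080 doesn’t show up as an inbound listener, traffic to that port is reaching the container directly and bypassing both Envoy and the WAF.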
There is a traffic management configuration called Sidecar that allows you to fine-tune how the Envoy sidecar configures itself. Fortunately, I ended up not needing it in this example, though I did go through some iterations of experimenting with it to get micro-segmentation working without exposing port 8080 on the Pod.
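For completeness, a minimal Sidecar resource that forces Envoy to claim inbound port 8080 on the db workloads might look something like this; the values are illustrative, and as noted above I didn’t end up needing it here:

apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: db-sidecar
  namespace: default
spec:
  workloadSelector:
    labels:
      app: db
  ingress:
  - port:
      number: 8080
      protocol: HTTP
      name: http
    defaultEndpoint: 127.0.0.1:8080   # Envoy captures inbound 8080 so it can no longer be reached directly in plaintext
  egress:
  - hosts:
    - "*/*"                           # leave outbound traffic unrestricted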
So in the end, the Service Mesh Sidecar-on-Sidecar Pattern may work for you, but you might end up tearing out a fair bit of your hair getting it to work in your environment.
I’m looking forward to doing a proof of concept of the Service Mesh Security Plugin Pattern in the future, which will require compiling a custom version of Envoy that automatically filters traffic through modsecurity. I may let the versions of Istio and Envoy mature a bit before attempting that, though.
What do you think about the Sidecar-on-Sidecar Pattern?