Istio — How it solves EiT (Encryption-in-Transit) in a Microservices Architecture (MSA)

Introduction
With the growing demand for MSA due to its various advantages, organizations are containerizing their workloads faster and migrating them to their preferred container orchestration platform, such as Kubernetes (self-managed, EKS, GKE, AKS, IKS, Red Hat OCP and many more). This also brings various challenges, including securing your data in transit. Since none of these Kubernetes platforms provides out-of-the-box TLS/mTLS encryption for pod-to-pod communication, it becomes seriously challenging for organizations to obtain, manage and rotate the certificates, keys etc. used to encrypt communication between services.
Consider the diagram below and treat it as an MSA in its simplest form, with only 2 services in action.

Let’s consider a 2-tier application architecture with a front-end web server (Nginx in this case) and a back-end hello-world-rest-api application. Each has its own pod running as well as a Kubernetes service created and accessible. Now, if we make a curl request from the nginx pod to the service hello-world-rest-api in namespace back-end, the HTTP response code will be 200, which is expected. It must be noted that this communication currently happens in plain text.
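For illustration, such a check could look like the command below (the same command is used throughout the PoC that follows; it assumes the pods and services created later in this article):
kubectl exec $(kubectl get pod -l app=nginx -o jsonpath={.items..metadata.name} -n front-end) -c nginx -n front-end -- curl http://hello-world-rest-api.back-end:8080 -o /dev/null -s -w "From nginx.front-end to hello-world-rest-api.back-end - HTTP Response Code: %{http_code}\n"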
If we want to encrypt this communication (TLS only, not mTLS), the development team needs to copy server.crt and private.key into the hello-world-rest-api pod (the image needs to be rebuilt via its Dockerfile), and public.key must be copied into the nginx pod in a similar way. The communication between the services is then encrypted using TLS but still not authenticated using mTLS (Mutual TLS, where both client and server trust each other). The figure below shows the encryption-enabled state of the 2 microservices.

Let’s see what needs to be done further in order to enable mTLS communication between the services. The development team needs to perform the additional work of copying server.crt and private.key into the nginx pod, as well as copying public.key into the hello-world-rest-api pod. The communication between the services is now both encrypted and authenticated using mTLS. The figure below shows the mTLS-enabled state of the 2 microservices.

The additional steps described above, which the development team must perform, are just the tip of the iceberg when it comes to solving the EiT problem for services in an MSA. There are many hidden challenges as well. Some of them are listed below:
- The organization must have its own Certificate Authority (CA) and store its private key securely
- Optionally, create a sub-CA used only for your microservices
- Generate all certificates from this CA, which involves generating CSRs, signing them, etc. (see the sketch after this list)
- Use trust stores for keeping the public certificates and key stores for storing the private keys
- Microservices architecture principles also recommend not reusing the same certificate for all microservices, which means repeating the above process for every service (imagine an application with 100 or 1,000 microservices)
- Manage and rotate these certificates as per the organization's security policy
- and many more…
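To make the effort concrete, below is a minimal sketch of that per-service certificate cycle using openssl. The file names and the subject are illustrative assumptions only, and the whole cycle has to be repeated for every service and every rotation:
# Assumption: the organization's CA key and certificate (ca.key, ca.crt) already exist
# 1. Generate a private key and a CSR for one service
openssl genrsa -out hello-world-rest-api.key 2048
openssl req -new -key hello-world-rest-api.key \
  -subj "/CN=hello-world-rest-api.back-end.svc.cluster.local" \
  -out hello-world-rest-api.csr
# 2. Sign the CSR with the organization's CA
openssl x509 -req -in hello-world-rest-api.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out hello-world-rest-api.crt
# 3. Repeat per service, then distribute, mount and rotate the files yourself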
Here comes the Istio service mesh to the rescue. Istio takes care of most of the above challenges, as we can see in the demonstration described in the following sections.
As part of creating a PoC, I deployed the Istio service mesh on an AWS EKS cluster and enabled it for one of the namespaces, back-end, used by a sample application, hello-world-rest-api.
The sole purpose of this PoC is to demonstrate how Istio service mesh can be deployed on any EKS cluster and can be enabled for different applications deployed on the cluster.
What is Istio — A short introduction
Before delving deeper into the PoC, here is a quick recap of what the Istio service mesh helps to achieve.
Istio is an open source service mesh that layers transparently onto existing distributed applications. Istio’s powerful features provide a uniform and more efficient way to secure, connect, and monitor services. Istio is the path to load balancing, service-to-service authentication, and monitoring — with few or no service code changes. Its powerful control plane brings vital features, including:
- Secure service-to-service communication in a cluster with TLS encryption, strong identity-based authentication, and authorization
- Automatic load balancing for HTTP, gRPC, WebSocket, and TCP traffic
- Fine-grained control of traffic behaviour with rich routing rules, retries, failovers, and fault injection
- A pluggable policy layer and configuration API supporting access controls, rate limits and quotas
- Automatic metrics, logs, and traces for all traffic within a cluster, including cluster ingress and egress
This PoC demonstration covers only the first bullet point mentioned above: securing service-to-service communication between applications within an EKS cluster by injecting the istio-proxy Envoy sidecar into application pods and enabling mTLS communication between the services. All of this is done without any change required at the application code level.
Pre-requisites
- Access to an EKS (or any Kubernetes/OCP) cluster
- Istio deployed on the Kubernetes cluster (a minimal installation example is shown below)
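Installing Istio itself is outside the scope of this PoC. For reference, one common approach (assuming istioctl has already been downloaded and is on your PATH; the default profile is just one option) looks like this:
$ istioctl install --set profile=default -y
$ kubectl get pods -n istio-system   # istiod (and any gateways) should be Running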
PoC and Demonstration
Deploy Nginx
- Check your access to the Kubernetes cluster using the kubectl CLI (from an EC2 instance, Cloud9, AWS WorkSpaces, etc.), for example with the command shown below.
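Any read-only command is enough to confirm connectivity; listing the worker nodes is one simple option:
$ kubectl get nodes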
- Copy the content of the below YAML into a file 01-nginx-deployment.yaml. This manifest file will create a namespace front-end, a deployment named nginx and a service called nginx within that namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: front-end
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: front-end
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - image: nginx
        imagePullPolicy: IfNotPresent
        name: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: nginx
  name: nginx
  namespace: front-end
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
- Apply the manifest file to create the resources in the EKS Cluster.
$ kubectl apply -f 01-nginx-deployment.yaml
The above command will produce output like the following:
namespace/front-end created
deployment.apps/nginx created
service/nginx created
- Verify that the nginx pod is in the Ready state.
$ kubectl get po -n front-end
The above command will produce output like the following:
NAME READY STATUS RESTARTS AGE
nginx-845d4d9dff-pqjgx 1/1 Running 0 116s
Note — The pod name above will differ in your case, which is expected.
Deploy hello-world-rest-api
- Copy the content of the below YAML into a file 02-helloworld-deployment.yaml. This manifest file will create a namespace back-end, a deployment named hello-world-rest-api and a service called hello-world-rest-api within that namespace.
apiVersion: v1
kind: Namespace
metadata:
  name: back-end
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: hello-world-rest-api
  name: hello-world-rest-api
  namespace: back-end
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world-rest-api
  template:
    metadata:
      labels:
        app: hello-world-rest-api
    spec:
      containers:
      - image: in28min/hello-world-rest-api:0.0.1.RELEASE
        imagePullPolicy: IfNotPresent
        name: hello-world-rest-api
        ports:
        - name: liveness-port
          containerPort: 8080
          hostPort: 8080
        resources:
          requests:
            cpu: 200m
            memory: 512Mi
          limits:
            cpu: 400m
            memory: 1024Mi
        readinessProbe:
          httpGet:
            path: /
            port: liveness-port
          failureThreshold: 1
          periodSeconds: 10
          initialDelaySeconds: 30
        livenessProbe:
          httpGet:
            path: /
            port: liveness-port
          failureThreshold: 3
          periodSeconds: 10
          initialDelaySeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: hello-world-rest-api
  name: hello-world-rest-api
  namespace: back-end
spec:
  #type: LoadBalancer
  ports:
  - port: 8080
    protocol: TCP
  selector:
    app: hello-world-rest-api
- Apply the manifest file to create the resources in the EKS Cluster.
$ kubectl apply -f 02-helloworld-deployment.yaml
The above command will produce output like the following:
namespace/back-end created
deployment.apps/hello-world-rest-api created
service/hello-world-rest-api created
- Verify that the hello-world-rest-api pod is in the Ready state.
$ kubectl get po -n back-end
The above command will produce output like the following:
NAME READY STATUS RESTARTS AGE
hello-world-rest-api-8665c556d8-ghhwd 1/1 Running 0 43s
Note — The pod name above will differ in your case, which is expected.
Test communication from pod to service
- Test the communication from the nginx pod to the hello-world-rest-api pod by making a curl request to the service hello-world-rest-api:
kubectl exec $(kubectl get pod -l app=nginx -o jsonpath={.items..metadata.name} -n front-end) -c nginx -n front-end -- curl http://hello-world-rest-api.back-end:8080 -o /dev/null -s -w "From nginx.front-end to hello-world-rest-api.back-end - HTTP Response Code: %{http_code}\n"
The above command will produce output like the following:
From nginx.front-end to hello-world-rest-api.back-end - HTTP Response Code: 200
Enable Istio injection on the back-end namespace
- To inject the Istio sidecar Envoy proxy into the hello-world-rest-api pod in the back-end namespace, execute the following command to add the label istio-injection=enabled to the back-end namespace:
$ kubectl label ns back-end istio-injection=enabled
The above command will produce output like the following:
namespace/back-end labeled
Important — Adding the label to a namespace does not inject the Istio sidecar Envoy proxy into pods already running inside that namespace. The pod must be deleted and re-created in order for the sidecar to be injected.
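As an alternative to deleting the pod by hand (the approach used in the next step), restarting the deployment also re-creates its pods with the sidecar; this is optional and shown only for reference:
$ kubectl rollout restart deployment hello-world-rest-api -n back-end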
- Delete and re-create the hello-world-rest-api pod:
$ kubectl delete po $(kubectl get pod -l app=hello-world-rest-api -o jsonpath={.items..metadata.name} -n back-end) -n back-end
The above command will produce output like the following:
pod "hello-world-rest-api-8665c556d8-ghhwd" deleted
- Verify that the hello-world-rest-api pod has been injected with the Istio sidecar Envoy proxy container. This can easily be verified by checking the count in the READY column, which must now show 2/2.
$ kubectl get po -n back-end
The above command will produce output like the following:
NAME READY STATUS RESTARTS AGE
hello-world-rest-api-8665c556d8-8cdmq 2/2 Running 0 2m41s
- Verify the logs of the sidecar Envoy proxy container injected into the hello-world-rest-api pod:
kubectl logs $(kubectl get pod -l app=hello-world-rest-api -o jsonpath={.items..metadata.name} -n back-end) -n back-end -c istio-proxy | tail -2
The above command will produce output like the following:
2022-01-31T02:36:55.109980Z info Initialization took 568.138123ms
2022-01-31T02:36:55.110000Z info Envoy proxy is ready
Test communication from pod to service
- Test the communication from the nginx pod to the hello-world-rest-api pod by making a curl request to the service hello-world-rest-api:
kubectl exec $(kubectl get pod -l app=nginx -o jsonpath={.items..metadata.name} -n front-end) -c nginx -n front-end -- curl http://hello-world-rest-api.back-end:8080 -o /dev/null -s -w "From nginx.front-end to hello-world-rest-api.back-end - HTTP Response Code: %{http_code}\n"
The above command will produce output like the following:
From nginx.front-end to hello-world-rest-api.back-end - HTTP Response Code: 200
Important — It must be noted that even after enabling the Istio sidecar Envoy proxy on the hello-world-rest-api pod, the curl request still returns a response code of 200 over plain text. This is because Istio's default mTLS mode is PERMISSIVE, which allows both plain-text and encrypted communication.
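For reference, this PERMISSIVE default could also be declared explicitly with a PeerAuthentication resource such as the sketch below. It is not applied in this PoC and is shown only to contrast with the STRICT policy used in the next section:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: "default"
  namespace: "back-end"
spec:
  mtls:
    mode: PERMISSIVE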
- Verify the logs of the sidecar Envoy proxy container injected into the hello-world-rest-api pod:
kubectl logs $(kubectl get pod -l app=hello-world-rest-api -o jsonpath={.items..metadata.name} -n back-end) -n back-end -c istio-proxy | tail -2
The above command will produce output like the following:
2022-01-31T02:36:55.110000Z info Envoy proxy is ready
{"accessLogFormat":"{\"traceId\":\"bb829957adecffe92ee476a797187ad2\",\"authority\":\"hello-world-rest-api.back-end:8080\",\"bytes_received\":\"0\",\"bytes_sent\":\"14\",\"downstream_local_address\":\"10.40.0.61:8080\",\"downstream_remote_address\":\"10.40.30.123:37352\",\"duration\":\"3\",\"istio_policy_status\":\"-\",\"method\":\"GET\",\"path\":\"/\",\"protocol\":\"HTTP/1.1\",\"request_id\":\"490aa173-a692-9c71-bad0-442275cd35ad\",\"requested_server_name\":\"-\",\"response_code\":\"200\",\"response_flags\":\"-\",\"route_name\":\"default\",\"start_time\":\"2022-01-31T02:42:40.511Z\",\"upstream_cluster\":\"inbound|8080||\",\"upstream_host\":\"10.40.0.61:8080\",\"upstream_local_address\":\"127.0.0.6:40133\",\"upstream_service_time\":\"3\",\"upstream_transport_failure_reason\":\"-\",\"user_agent\":\"curl/7.74.0\",\"x_forwarded_for\":\"-\"}"}
The below picture describes what we have done so far:

Enable mTLS mode STRICT on the back-end namespace
- Copy the content of the below YAML into a file peerAuth_STRICT.yaml. This manifest file will create an Istio Custom Resource (CR) PeerAuthentication named default in the back-end namespace. Notice that the mTLS mode is set to STRICT.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: "default"
  namespace: "back-end"
spec:
  mtls:
    mode: STRICT
- Apply the manifest file to create the custom resource in the EKS Cluster.
$ kubectl apply -f peerAuth_STRICT.yaml
The above command will produce output like the following:
peerauthentication.security.istio.io/default created
Test communication from pod to service
- Test the communication from the nginx pod to the hello-world-rest-api pod by making a curl request to the service hello-world-rest-api:
kubectl exec $(kubectl get pod -l app=nginx -o jsonpath={.items..metadata.name} -n front-end) -c nginx -n front-end -- curl http://hello-world-rest-api.back-end:8080 -o /dev/null -s -w "From nginx.front-end to hello-world-rest-api.back-end - HTTP Response Code: %{http_code}\n"
The above command will produce output like the following:
From nginx.front-end to hello-world-rest-api.back-end - HTTP Response Code: 000
command terminated with exit code 56
Important — After enabling the Istio sidecar Envoy proxy on the hello-world-rest-api pod with mTLS mode STRICT, the plain-text curl request fails and returns a response code of 000. This is because the nginx pod has no sidecar yet and therefore still sends plain text, which the STRICT policy rejects.
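To confirm the policy is in place, the custom resource can be listed:
$ kubectl get peerauthentication -n back-end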
The below picture describes what we have done so far:

Enable Istio injection on the front-end namespace
- To inject the Istio sidecar Envoy proxy into the nginx pod in the front-end namespace, execute the following command to add the label istio-injection=enabled to the front-end namespace:
$ kubectl label ns front-end istio-injection=enabled
The above command will produce output like the following:
namespace/front-end labeled
Important — Adding the label to a namespace does not inject the Istio sidecar Envoy proxy into pods already running inside that namespace. The pod must be deleted and re-created in order for the sidecar to be injected.
- Delete and re-create the nginx pod:
$ kubectl delete po $(kubectl get pod -l app=nginx -o jsonpath={.items..metadata.name} -n front-end) -n front-end
The above command will produce output like the following:
pod "nginx-845d4d9dff-pqjgx" deleted
- Verify that the nginx pod has been injected with the Istio sidecar Envoy proxy container. This can easily be verified by checking the count in the READY column, which must now show 2/2.
$ kubectl get po -n front-end
The above command will produce output like the following:
NAME READY STATUS RESTARTS AGE
nginx-845d4d9dff-nqb9x 2/2 Running 0 59s
Test communication from pod to service
- Test the communication from the nginx pod to the hello-world-rest-api pod by making a curl request to the service hello-world-rest-api:
kubectl exec $(kubectl get pod -l app=nginx -o jsonpath={.items..metadata.name} -n front-end) -c nginx -n front-end -- curl http://hello-world-rest-api.back-end:8080 -o /dev/null -s -w "From nginx.front-end to hello-world-rest-api.back-end - HTTP Response Code: %{http_code}\n"
The above command will produce output like the following:
From nginx.front-end to hello-world-rest-api.back-end - HTTP Response Code: 200
Important — The communication between the nginx pod and the hello-world-rest-api pod is now secured with mTLS.
- Verify the logs of the sidecar Envoy proxy container injected into the hello-world-rest-api pod:
kubectl logs $(kubectl get pod -l app=hello-world-rest-api -o jsonpath={.items..metadata.name} -n back-end) -n back-end -c istio-proxy | tail -2
The above command will produce output like the following:
{"accessLogFormat":"{\"traceId\":\"-\",\"authority\":\"-\",\"bytes_received\":\"0\",\"bytes_sent\":\"0\",\"downstream_local_address\":\"10.40.0.61:8080\",\"downstream_remote_address\":\"10.40.30.123:42792\",\"duration\":\"0\",\"istio_policy_status\":\"-\",\"method\":\"-\",\"path\":\"-\",\"protocol\":\"-\",\"request_id\":\"-\",\"requested_server_name\":\"-\",\"response_code\":\"0\",\"response_flags\":\"NR\",\"route_name\":\"-\",\"start_time\":\"2022-01-31T02:55:10.063Z\",\"upstream_cluster\":\"-\",\"upstream_host\":\"-\",\"upstream_local_address\":\"-\",\"upstream_service_time\":\"-\",\"upstream_transport_failure_reason\":\"-\",\"user_agent\":\"-\",\"x_forwarded_for\":\"-\"}"}
{"accessLogFormat":"{\"traceId\":\"08e4cc523dbdbfb4e0c4db0401657cc0\",\"authority\":\"hello-world-rest-api.back-end:8080\",\"bytes_received\":\"0\",\"bytes_sent\":\"14\",\"downstream_local_address\":\"10.40.0.61:8080\",\"downstream_remote_address\":\"10.40.30.137:46580\",\"duration\":\"1\",\"istio_policy_status\":\"-\",\"method\":\"GET\",\"path\":\"/\",\"protocol\":\"HTTP/1.1\",\"request_id\":\"1f7a3c40-6a2a-9aed-8ba3-25a13e5866aa\",\"requested_server_name\":\"outbound_.8080_._.hello-world-rest-api.back-end.svc.cluster.local\",\"response_code\":\"200\",\"response_flags\":\"-\",\"route_name\":\"default\",\"start_time\":\"2022-01-31T03:00:53.199Z\",\"upstream_cluster\":\"inbound|8080||\",\"upstream_host\":\"10.40.0.61:8080\",\"upstream_local_address\":\"127.0.0.6:53215\",\"upstream_service_time\":\"1\",\"upstream_transport_failure_reason\":\"-\",\"user_agent\":\"curl/7.74.0\",\"x_forwarded_for\":\"-\"}"}
The below picture describes what we have done so far:

Cleaning Up
- Clean up the resources created as part of the PoC and Demonstration, starting with the Istio CR PeerAuthentication named default in the back-end namespace.
$ kubectl delete -f peerAuth_STRICT.yaml
The above command will produce output like the following:
peerauthentication.security.istio.io "default" deleted
- Delete the nginx resources.
$ kubectl delete -f 01-nginx-deployment.yaml
The above command will produce output like the following:
namespace "front-end" deleted
deployment.apps "nginx" deleted
service "nginx" deleted
- Delete the hello-world-rest-api resources.
$ kubectl delete -f 02-helloworld-deployment.yaml
The above command will produce output like the following:
namespace "back-end" deleted
deployment.apps "hello-world-rest-api" deleted
service "hello-world-rest-api" deleted
Congratulations! You have successfully used the out-of-the-box mTLS encryption offered by Istio for your application.
References: https://istio.io/latest/docs/tasks/security/authentication/mtls-migration/