Security
Authentication
Access Control
In Kubernetes terms:
- Subject: The user or service account, a.k.a. the principal
- Action: The verb, e.g. get, list, create, delete
- Object: The resource
Authentication is the act of validating a credential and ensuring that it is both valid and trusted. Once authentication is performed, we have an authenticated principal.
In Kubernetes, each workload is assigned a unique identity in the form of a service account.
Istio uses X.509 certificates to create a new identity for each workload according to the SPIFFE specification.
The identity is encoded in the Subject Alternative Name (SAN) field of the certificate. It looks like this:
spiffe://cluster.local/ns/<pod namespace>/sa/<pod service account>
For example, a workload running in the default namespace under the customers-v1 service account would receive the identity spiffe://cluster.local/ns/default/sa/customers-v1.
During the TLS handshake, the Envoy proxies also perform the SPIFFE validation (checking the SAN field) to obtain a valid SPIFFE identity. After this process, the authenticated principals can be used in policies.
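To see this identity on a running workload, you can dump the certificate that Envoy received over SDS and inspect its SAN field. The sketch below assumes a sidecar-injected Pod in the default namespace; the exact JSON path of the certificate may differ between Istio/Envoy versions.
# List the certificates Envoy holds for the Pod (received via SDS)
istioctl proxy-config secret <pod-name> -n default
# Decode the workload certificate and print its Subject Alternative Name
istioctl proxy-config secret <pod-name> -n default -o json \
  | jq -r '.dynamicActiveSecrets[0].secret.tlsCertificate.certificateChain.inlineBytes' \
  | base64 --decode \
  | openssl x509 -noout -text \
  | grep -A1 "Subject Alternative Name"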
Identity Provisioning Workflow
Istio securely provisions identities to every workload with X.509 certificates.
The diagram below explains the process:
- istiod offers a gRPC service to accept Certificate Signing Requests (CSRs).
- The Istio agent creates the private key and CSR, and sends the CSR with its credentials to istiod for signing.
- The CA in istiod validates the credentials carried in the CSR. Upon successful validation, it signs the CSR to generate the certificate. The Istio agent stores the key and certificate in memory.
- When a workload is started, Envoy requests the certificate and key from the Istio agent in the same container via the Envoy secret discovery service (SDS) API.
- The Istio agent sends the certificates received from istiod and the private key to Envoy via the Envoy SDS API.
- The Istio agent monitors the expiration of the workload certificate. The above process repeats periodically for certificate and key rotation.
Authentication Types
Peer authentication
Peer authentication is used for service-to-service authentication to verify the client that’s making the connection via mutual TLS.
The PeerAuthentication resource provides two modes for mTLS:
- STRICT: enables strict mutual TLS between services
- PERMISSIVE: graceful mode that enables opting into mutual TLS one workload or namespace at a time. Enabled by default.
Mutual TLS
Istio enables secure client-to-server mTLS connections with the help of the sidecar Envoy proxies. When a service tries to communicate with another, the traffic is routed through the Envoy proxies on both sides, and the client and server Envoy proxies perform an mTLS handshake.
mTLS for upstream connections (e.g. connections to an upstream database cluster) can be controlled through the TLS settings in the service's DestinationRule (see the sketch after the list below).
The supported TLS modes are:
- DISABLE: do not set up a TLS connection to the upstream endpoint.
- SIMPLE: originate a TLS connection to the upstream endpoint.
- MUTUAL: secure connections to the upstream using mutual TLS by presenting client certificates for authentication.
- ISTIO_MUTUAL: same as MUTUAL, but uses certificates generated automatically by Istio for mTLS authentication.
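For example, the following DestinationRule sketch (the host name is an assumption for illustration) tells client-side proxies to originate Istio-issued mTLS to the customers service:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: customers
spec:
  host: customers.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: ISTIO_MUTUAL   # use Istio-provisioned certificates for mTLS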
Permissive and Strict mTLS Modes
Permissive mode allows a service to simultaneously accept plaintext traffic and mTLS traffic. This assists gradual onboarding to Istio. This is the default setting.
Istio tracks the workloads that have Envoy proxies and automatically sends mTLS traffic to them. If a workload does not have an Envoy proxy, Istio sends plaintext traffic to it.
Once all workloads have the sidecars installed, we can switch to the strict mTLS mode. This is done using a PeerAuthentication resource.
We can create the PeerAuthentication resource and enforce strict mode in specific namespaces at first, for example the demo namespace:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: demo
  namespace: demo
spec:
  mtls:
    mode: STRICT
To implement STRICT mTLS globally across the mesh, create a policy in the root istio-system namespace as shown below.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
Additionally, we can specify the selector field and apply the policy only to specific workloads in the mesh, as shown below.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: customers
  mtls:
    mode: STRICT
Lab: Mutual TLS
In this lab, we deploy the rest-client application without an Envoy proxy sidecar, whereas the customers application will have the sidecar injected.
This will demonstrate how Istio can send both mTLS and plaintext traffic to a service simultaneously.
Create the gateway resource.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - '*'
kubectl apply -f gateway.yaml
To prevent the automatic sidecar injection for the rest-client service, remove the istio-injection label from the default namespace:
kubectl label namespace default istio-injection-
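You can confirm the state of the label on the namespace; the -L flag prints the label value as an extra column:
kubectl get namespace default -L istio-injection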
Deploy the rest-client application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-client
  labels:
    app: rest-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-client
  template:
    metadata:
      labels:
        app: rest-client
        version: v1
    spec:
      containers:
        - image: adityasamantlearnings/rest-client:1.0
          imagePullPolicy: Always
          name: rest-client
          ports:
            - containerPort: 8082
          env:
            - name: CUSTOMER_SERVICE_URL
              value: 'http://customers:8081'
---
kind: Service
apiVersion: v1
metadata:
  name: rest-client
  labels:
    app: rest-client
spec:
  selector:
    app: rest-client
  ports:
    - port: 8082
      name: http
      targetPort: 8082
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: rest-client
spec:
  hosts:
    - '*'
  gateways:
    - gateway
  http:
    - route:
        - destination:
            host: rest-client.default.svc.cluster.local
            port:
              number: 8082
kubectl apply -f rest-client-with-vs.yaml
Verify that the rest-client Pod does not have the sidecar injected (READY shows 1/1, a single container):
kubectl get pod -l=app=rest-client
NAME                          READY   STATUS    RESTARTS   AGE
rest-client-cd647bdd6-sgfjw   1/1     Running   0          32s
Enable the automatic sidecar injection for the default namespace (only Pods created after the label is applied are injected):
kubectl label namespace default istio-injection=enabled
Deploy the customers application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v1
  labels:
    app: customers
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v1
  template:
    metadata:
      labels:
        app: customers
        version: v1
    spec:
      containers:
        - image: adityasamantlearnings/customers:0.8
          imagePullPolicy: Always
          name: customers-v1
          ports:
            - containerPort: 8081
---
kind: Service
apiVersion: v1
metadata:
  name: customers
  labels:
    app: customers
spec:
  selector:
    app: customers
  ports:
    - port: 8081
      name: http
      targetPort: 8081
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
    - 'customers.default.svc.cluster.local'
  http:
    - route:
        - destination:
            host: customers.default.svc.cluster.local
            port:
              number: 8081
kubectl apply -f customers-v1-with-vs.yaml
Verify that the customers Pod has the sidecar injected (READY shows 2/2: the application container plus the Envoy sidecar):
kubectl get pod -l=app=customers
NAME                            READY   STATUS    RESTARTS   AGE
customers-v1-7478655548-ngbwv   2/2     Running   0          76s
Test the application by invoking the rest-client through the GATEWAY_URL in a browser.
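If the GATEWAY_URL variable is not set in your environment, one common way to derive it is shown below. This is a sketch that assumes the istio-ingressgateway service is exposed via a LoadBalancer; adjust it for NodePort or local clusters.
# External IP of the ingress gateway
export INGRESS_HOST=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
# Port named http2 on the ingress gateway service
export INGRESS_PORT=$(kubectl -n istio-system get service istio-ingressgateway -o jsonpath='{.spec.ports[?(@.name=="http2")].port}')
export GATEWAY_URL=$INGRESS_HOST:$INGRESS_PORT
echo $GATEWAY_URL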
You should see the following output
[{"id":1,"firstName":"John","lastName":"Doe"},{"id":2,"firstName":"Alice","lastName":"Smith"},{"id":3,"firstName":"Bob","lastName":"Stevens"}]
Access the Kiali UI
istioctl dashboard kiali
Go to the Graph section and choose the following values in the dropdowns:
- Namespace: default
- Display: Include Security
- Graph type: Workload graph
Make requests to the rest-client via the gateway in an endless loop:
while true; do curl http://127.0.0.1/api/customers; done
You will see the following graph:
The rest-client application is shown as unknown. Because the rest-client application does not have a sidecar proxy, Istio does not know what that service is.
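One way to cross-check which workloads have Envoy proxies registered with istiod is istioctl proxy-status; the rest-client Pod should be missing from its output:
istioctl proxy-status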
Update the customers VirtualService and attach the gateway to it. Attaching the gateway allows us to make calls directly to the customers service.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
    - 'customers.default.svc.cluster.local'
  gateways:
    - gateway
  http:
    - match:
        - headers:
            Host:
              exact: customers.default.svc.cluster.local
      route:
        - destination:
            host: customers.default.svc.cluster.local
            port:
              number: 8081
kubectl apply -f vs-customers-gateway.yaml
To make calls from the gateway to the customers service, specify the Host header value.
In two separate terminals, generate some traffic to both the rest-client and customers services through the gateway.
Terminal 1:
while true; do curl -H "Host: customers.default.svc.cluster.local" http://$GATEWAY_URL/api/customers; done
Terminal 2:
while true; do curl http://$GATEWAY_URL/api/customers; done
Access the Kiali UI
istioctl dashboard kiali
Go to the Graph section and choose the following values in the dropdowns:
- Namespace: default (also include the istio-system namespace)
- Display: Include Security
- Graph type: Workload graph
You should see a graph similar to the following:
Notice the padlock icon between the istio-ingressgateway and the customers service, which means the traffic is sent using mTLS.
However, there is no padlock between the unknown (rest-client) workload and the customers service, nor between the istio-ingressgateway and rest-client. Istio sends plaintext traffic to and from the services which do not have the sidecar injected.
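To inspect the Istio configuration (including mTLS-related settings) affecting a Pod from the command line, istioctl offers an experimental describe subcommand; a sketch, with the Pod name to be substituted:
istioctl experimental describe pod <customers-pod-name>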
Let’s see what happens if we enable mTLS in STRICT mode.
Create the PeerAuthentication resource as follows:
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: default
spec:
  mtls:
    mode: STRICT
kubectl apply -f peer-authentication-mtls-strict-default-ns.yaml
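To confirm that the policy is in place, list the PeerAuthentication resources:
kubectl get peerauthentication --all-namespaces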
In the request loop that is still running, you will now see an error with code 502 and status BAD_GATEWAY:
"code":502,"status":"BAD_GATEWAY","message":"I/O error on GET request for \"http://customers:8081/api/customers\": Connection reset"
This error indicates that the server side closed the connection. In our case, the server-side Envoy proxy of the customers service expected an mTLS connection but received plaintext.
On the other hand, the requests we are making directly to the customers service continue to work, because the customers service has an Envoy proxy running next to it and can do mutual TLS.
Delete the PeerAuthentication resource deployed earlier
kubectl delete peerauthentication default
Istio returns to its default (PERMISSIVE mode), and the errors will disappear.
Authorization
Authorization is about access control. Is an (authenticated) principal allowed to perform an action on an object?
The AuthorizationPolicy resource makes use of identities extracted by the PeerAuthentication and RequestAuthentication resources.
The three main parts of an AuthorizationPolicy are:
- Selector: the workloads the policy applies to
- Action: the action to take (ALLOW, DENY, or AUDIT)
- Rules: the conditions under which to take the action
Consider the example below:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: customers-deny
  namespace: default
spec:
  selector:
    matchLabels:
      app: customers    # (1)
      version: v2       # (1)
  action: DENY          # (2)
  rules:
    - from:
        - source:
            notNamespaces: ["default"]    # (3)
1. Workloads that the policy applies to
2. Set the action to DENY
3. Apply when the requests come from outside of the default namespace
The example below depicts the implementation of multiple rules:
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: customers-deny
  namespace: default
spec:
  selector:
    matchLabels:
      app: customers    # (1)
      version: v2       # (1)
  action: DENY          # (2)
  rules:
    - from:
        - source:
            notNamespaces: ["default"]    # (3)
    - to:
        - operation:
            methods: ["GET"]              # (4)
    - when:
        - key: request.headers[User-Agent]
          values: ["Mozilla/*"]           # (5)
1. Workloads that the policy applies to
2. Set the action to DENY
3. Apply when the requests come from outside of the default namespace
4. Apply when the request uses the GET HTTP method
5. Apply when the User-Agent header value matches the wildcard pattern Mozilla/*
If there are multiple policies used for a single workload, Istio evaluates the deny policies first. The evaluation follows these rules:
- If there are DENY policies that match the request, deny the request.
- If there are no ALLOW policies for the workload, allow the request.
- If any of the ALLOW policies match the request, allow the request.
- If none of the above hold true, deny the request.
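For contrast with the DENY examples above, the sketch below (workload labels assumed) is an ALLOW policy that admits only GET requests to the customers workload. Once an ALLOW policy selects a workload, any request that matches none of its rules is denied, per the evaluation order above.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: customers-allow-get
  namespace: default
spec:
  selector:
    matchLabels:
      app: customers
  action: ALLOW
  rules:
    - to:
        - operation:
            methods: ["GET"]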
Lab: Authorization
This lab demonstrates the use of an authorization policy to control access between workloads.
Create the gateway resource.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
    - port:
        number: 80
        name: http
        protocol: HTTP
      hosts:
        - '*'
kubectl apply -f gateway.yaml
Create the rest-client deployment similar to the previous lab, but this time configure a dedicated ServiceAccount for the Deployment.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: rest-client
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-client
  labels:
    app: rest-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-client
  template:
    metadata:
      labels:
        app: rest-client
        version: v1
    spec:
      serviceAccountName: rest-client
      containers:
        - image: adityasamantlearnings/rest-client:1.0
          imagePullPolicy: Always
          name: rest-client
          ports:
            - containerPort: 8082
          env:
            - name: CUSTOMER_SERVICE_URL
              value: 'http://customers:8081'
---
kind: Service
apiVersion: v1
metadata:
  name: rest-client
  labels:
    app: rest-client
spec:
  selector:
    app: rest-client
  ports:
    - port: 8082
      name: http
      targetPort: 8082
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: rest-client
spec:
  hosts:
    - '*'
  gateways:
    - gateway
  http:
    - route:
        - destination:
            host: rest-client.default.svc.cluster.local
            port:
              number: 8082
kubectl apply -f rest-client-with-vs-and-sa.yaml
Deploy the customers service, which is also accompanied by its own dedicated ServiceAccount:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: customers-v1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v1
  labels:
    app: customers
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v1
  template:
    metadata:
      labels:
        app: customers
        version: v1
    spec:
      serviceAccountName: customers-v1
      containers:
        - image: adityasamantlearnings/customers:0.8
          imagePullPolicy: Always
          name: customers-v1
          ports:
            - containerPort: 8081
---
kind: Service
apiVersion: v1
metadata:
  name: customers
  labels:
    app: customers
spec:
  selector:
    app: customers
  ports:
    - port: 8081
      name: http
      targetPort: 8081
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
    - 'customers.default.svc.cluster.local'
  http:
    - route:
        - destination:
            host: customers.default.svc.cluster.local
            port:
              number: 8081
kubectl apply -f customers-with-vs-and-sa.yaml
Run cURL against the GATEWAY_URL
curl -v http://$GATEWAY_URL/api/customers
You should see the output with all the customers.
Create an AuthorizationPolicy that denies all requests in the default namespace. The empty spec creates a policy with the default ALLOW action and no rules; since no rule can ever match, every request to workloads in the namespace is denied, per the evaluation order described earlier.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: deny-all
  namespace: default
spec: {}
kubectl apply -f deny-all-default-ns.yaml
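Verify that the policy has been created:
kubectl get authorizationpolicies -n default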
Run cURL against the GATEWAY_URL
curl -v http://$GATEWAY_URL/api/customers
You should see the following response:
RBAC: access denied
Try to access the rest-client and customers services from within a temporary Pod.
kubectl run curl --image=radial/busyboxplus:curl -i --tty
From inside the curl Pod, run:
curl rest-client:8082/api/customers
curl customers:8081/api/customers
You should see the following output for both the commands:
RBAC: access denied
Let's allow requests sent from the ingress gateway to the rest-client application.
To do this, create an AuthorizationPolicy with an ALLOW action:
- selector: should match the labels of the rest-client workload
- from rule: should match the istio-system namespace and the istio-ingressgateway-service-account principal
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-ingress-frontend
  namespace: default
spec:
  selector:
    matchLabels:
      app: rest-client
  action: ALLOW
  rules:
    - from:
        - source:
            namespaces: ["istio-system"]
        - source:
            principals: ["cluster.local/ns/istio-system/sa/istio-ingressgateway-service-account"]
kubectl apply -f allow-gateway-to-rest-client.yaml
Run cURL against the GATEWAY_URL
curl -v http://$GATEWAY_URL/api/customers
You should now see a different error in the output, with code 403 and status FORBIDDEN:
"code":403,"status":"FORBIDDEN","message":"403 Forbidden: \"RBAC: access denied\""
This error comes from the customers service: remember, we only allowed calls to the rest-client. The rest-client still cannot make calls to the customers service.
Try to access the rest-client from within the temporary Pod:
kubectl exec curl -- curl rest-client:8082/api/customers
You should see the following output:
RBAC: access denied
This is because the deny-all policy is still in effect, and so far we have only allowed calls from the ingress gateway to the rest-client.
When we deployed the rest-client, we also created a dedicated service account for the Pod (otherwise, all Pods in the namespace are assigned the default service account). We can now use that service account to specify which callers the customers service accepts.
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: allow-rest-client-customers
  namespace: default
spec:
  selector:
    matchLabels:
      app: customers
      version: v1
  action: ALLOW
  rules:
    - from:
        - source:
            namespaces: ["default"]
        - source:
            principals: ["cluster.local/ns/default/sa/rest-client"]
kubectl apply -f allow-rest-client-to-customers.yaml
Run cURL against the GATEWAY_URL
curl -v http://$GATEWAY_URL/api/customers
You should see a successful response from the customers service.
Cleanup
kubectl delete sa customers-v1 rest-client
kubectl delete deploy rest-client customers-v1
kubectl delete svc customers rest-client
kubectl delete vs customers rest-client
kubectl delete gateway gateway
kubectl delete authorizationpolicy allow-ingress-frontend allow-rest-client-customers deny-all
kubectl delete pod curl --force --grace-period 0