Traffic Management
Gateways
We installed the ingress and egress gateways as part of the Istio installation. Each gateway is an instance of the Envoy proxy that acts as a load balancer at the edge of the mesh.
Gateways are configured using a Gateway resource.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: my-gateway
  namespace: default
spec:
  selector:
    istio: ingressgateway (1)
  servers:
  - port:
      number: 80 (2)
      name: http
      protocol: HTTP
    hosts: (3)
    - dev.example.com (4)
    - test.example.com (4)
1 | applies to the istio-ingressgateway proxy in the istio-system namespace |
2 | exposing port 80 for ingress |
3 | hosts field acts as a filter |
4 | only traffic destined for dev.example.com and test.example.com is allowed |
Simple Routing
We can use the VirtualService resource for traffic routing within the Istio service mesh. With a VirtualService we can define traffic routing rules and apply them when the client tries to connect to the service.
A sample VirtualService is described below.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers-route
spec:
  hosts:
  - customers.default.svc.cluster.local (1)
  http: (2)
  - name: customers-v1-routes
    route:
    - destination:
        host: customers.default.svc.cluster.local (3)
        subset: v1
      weight: 70 (4)
  - name: customers-v2-routes
    route:
    - destination:
        host: customers.default.svc.cluster.local (3)
        subset: v2
      weight: 30 (4)
  gateways:
  - my-gateway (5)
1 | the destination host, e.g. a Kubernetes service |
2 | the http field contains an ordered list of rules |
3 | service in Istio’s service registry to which the request will be sent after processing the routing rule. |
4 | weight defines the percentage of traffic to be sent to this destination |
5 | the Gateway to which we want to bind this VirtualService |
When a VirtualService is attached to a Gateway, only the hosts defined in the Gateway resource will be allowed.
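For example, a VirtualService bound to the my-gateway resource above can only expose hosts that the Gateway permits. A sketch, where dev-route and the backing service hello-world.default.svc.cluster.local are placeholders:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: dev-route
spec:
  hosts:
  - dev.example.com        # allowed, because my-gateway lists this host
  gateways:
  - my-gateway
  http:
  - route:
    - destination:
        host: hello-world.default.svc.cluster.local   # placeholder backing service
A host such as api.example.com would not be exposed through this Gateway, because the Gateway only lists dev.example.com and test.example.com.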
Subsets and DestinationRule
With a DestinationRule we define policies that apply to traffic after routing has occurred. It is also where we group a service's Pods into named subsets based on their labels:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: customers-destination
spec:
  host: customers.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1 (1)
  - name: v2
    labels:
      version: v2 (2)
1 | subset v1 includes all Pods with label version=v1 |
2 | subset v2 includes all Pods with label version=v2 |
A DestinationRule can also carry a trafficPolicy, for example to set the load-balancing algorithm:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: customers-destination
spec:
  host: customers.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      simple: ROUND_ROBIN (1)
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
1 | sets the load balancing algorithm for the destination to round-robin. Allowed values: UNSPECIFIED, RANDOM, PASSTHROUGH, ROUND_ROBIN, LEAST_REQUEST, LEAST_CONN |
For session affinity we can use a consistent-hash load balancer, here based on an HTTP cookie named location:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: customers-destination
spec:
  host: customers.default.svc.cluster.local
  trafficPolicy:
    loadBalancer:
      consistentHash:
        httpCookie:
          name: location
          ttl: 4s
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
Connection pool settings limit the number of requests or connections to the upstream service:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: customers-destination
spec:
  host: customers.default.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        http2MaxRequests: 50 (1)
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
1 | sets a limit of 50 concurrent requests to the service |
Combined with outlier detection, connection pool settings give us circuit-breaker behavior:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: customers-destination
spec:
  host: customers.default.svc.cluster.local
  trafficPolicy:
    connectionPool:
      http:
        http2MaxRequests: 500 (1)
        maxRequestsPerConnection: 10 (2)
    outlierDetection:
      consecutiveErrors: 10 (3)
      interval: 5m (4)
      baseEjectionTime: 10m (5)
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
1 | maximum limit for concurrent HTTP2 requests |
2 | maximum requests per connection |
3 | number of consecutive errors after which the Pod is ejected from the pool |
4 | time interval between sweeps that scan the status of the Pods |
5 | base duration for which Envoy ejects the faulty Pod |
The trafficPolicy can also configure TLS for connections to the upstream service, for example mutual TLS with certificates loaded from files:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: customers-destination
spec:
  host: customers.default.svc.cluster.local
  trafficPolicy:
    tls:
      mode: MUTUAL
      clientCertificate: /etc/certs/cert.pem
      privateKey: /etc/certs/key.pem
      caCertificates: /etc/certs/ca.pem
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
Traffic policies can also be applied per port using portLevelSettings:
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: customers-destination
spec:
  host: customers.default.svc.cluster.local
  trafficPolicy:
    portLevelSettings:
    - port:
        number: 80
      loadBalancer:
        simple: LEAST_CONN
    - port:
        number: 8000
      loadBalancer:
        simple: ROUND_ROBIN
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
Resiliency
Resiliency is the ability to provide and maintain an acceptable level of service in the face of faults and challenges to regular operation.
It is mainly achieved through timeouts and retry policies on the VirtualService resource.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers-route
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - name: customers-v1-routes
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1
      weight: 70
    timeout: 10s (1)
  - name: customers-v2-routes
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
      weight: 30
    timeout: 10s (1)
1 | If the request takes longer than the value specified in the timeout field, the Envoy proxy drops the request and marks it as timed out, returning an HTTP 504 (Gateway Timeout) to the caller. The connections remain open unless outlier detection is triggered. |
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers-route
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - name: customers-v1-routes
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1
      weight: 70
    retries:
      attempts: 10
      perTryTimeout: 2s
      retryOn: connect-failure,reset
  - name: customers-v2-routes
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
      weight: 30
    retries:
      attempts: 10
      perTryTimeout: 2s
      retryOn: connect-failure,reset
The above retry policy will attempt to retry any request that fails with a connect timeout (connect-failure) or if the server does not respond at all (reset). We set the per-try attempt timeout to 2 seconds and the number of attempts to 10. Note that if we set both retries and timeouts, the timeout value will be the maximum the request will wait. If we had a 10-second timeout specified in the above example, we would only ever wait 10 seconds maximum, even if there are still attempts left in the retry policy.
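As a sketch of how the two settings interact, the hypothetical route below retries up to 3 times with a 2-second per-try timeout, while the overall 10-second route timeout still caps the total time spent on the request, retries included:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers-route
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - name: customers-v1-routes
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1
    timeout: 10s            # upper bound for the whole request, including retries
    retries:
      attempts: 3
      perTryTimeout: 2s     # each individual attempt may take at most 2 seconds
      retryOn: connect-failure,reset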
Failure Injection
Failure injection can be used to simulate a faulty upstream service.
Fault injection will not trigger any retry policies we have set on the routes.
abort
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers-route
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - name: customers-v1-routes
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1
      weight: 70
    fault:
      abort:
        percentage:
          value: 30 (1)
        httpStatus: 404 (2)
  - name: customers-v2-routes
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
      weight: 30
    fault:
      abort:
        percentage:
          value: 30
        httpStatus: 404
1 | aborts 30% of the HTTP requests; if the percentage is not specified, all requests are aborted |
2 | with error code 404 |
delay
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers-route
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - name: customers-v1-routes
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1
      weight: 70
    fault:
      delay:
        percentage:
          value: 5 (1)
        fixedDelay: 3s (1)
  - name: customers-v2-routes
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
      weight: 30
    fault:
      delay:
        percentage:
          value: 5
        fixedDelay: 3s
1 | apply 3 seconds of delay to 5% of the incoming requests |
Advanced Routing
Istio allows us to match on parts of incoming requests and route traffic based on the values we define.
Rules can be configured on the following properties:
- uri: match the request URI to the specified value
- scheme: match the request scheme (HTTP, HTTPS, …)
- method: match the request method (GET, POST, …)
- authority: match the request authority header
- headers: match the request headers. If used, other properties get ignored
Each of the above properties can be matched using one of these methods:
- exact: "value" matches the exact string
- prefix: "value" matches the prefix only
- regex: "value" matches based on an ECMAScript-style regex
Matching rules can be configured with both AND and OR semantics.
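A minimal sketch of these semantics, reusing the customers subsets from earlier and hypothetical URI prefixes: conditions inside a single match entry are combined with AND, while separate match entries are combined with OR.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers-route
spec:
  hosts:
  - customers.default.svc.cluster.local
  http:
  - match:
    - uri:                 # AND: URI prefix and header must both match
        prefix: /api
      headers:
        user:
          exact: debug
    - uri:                 # OR: this second entry matches independently
        prefix: /debug
    route:
    - destination:
        host: customers.default.svc.cluster.local
        subset: v2
  - route:                 # everything else falls through to v1
    - destination:
        host: customers.default.svc.cluster.local
        subset: v1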
ServiceEntry
With the ServiceEntry resource, we can add additional entries to Istio's internal service registry, so that external services, or internal services that are not part of our mesh, appear as if they were part of the service mesh.
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc
spec:
  hosts:
  - api.external-svc.com
  ports:
  - number: 443
    name: https
    protocol: TLS
  resolution: DNS
  location: MESH_EXTERNAL
Together with the WorkloadEntry resource, we can handle the migration of VM workloads to Kubernetes. In the WorkloadEntry, we can specify the details of the workload running on a VM (name, address, labels) and then use the workloadSelector field in the ServiceEntry to make the VMs part of Istio’s internal service registry.
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: customers-vm-1
spec:
  serviceAccount: customers
  address: 1.0.0.0
  labels:
    app: customers
    instance-id: vm1
apiVersion: networking.istio.io/v1beta1
kind: WorkloadEntry
metadata:
  name: customers-vm-2
spec:
  serviceAccount: customers
  address: 2.0.0.0
  labels:
    app: customers
    instance-id: vm2
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: customers-svc
spec:
  hosts:
  - customers.com
  location: MESH_INTERNAL
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: STATIC
  workloadSelector:
    labels:
      app: customers
Sidecar Resource
The Sidecar resource describes the configuration of sidecar proxies. By default, all proxies in the mesh have the configuration required to reach every workload in the mesh and accept traffic on all ports.
Three parts make up the Sidecar resource: a workload selector, an ingress listener, and an egress listener.
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default-sidecar
  namespace: default
spec:
  egress:
  - hosts:
    - "default/*"
    - "istio-system/*"
    - "staging/*"
Because there is no workloadSelector defined, the above configuration applies to all proxies inside the default namespace and limits their egress to workloads in the default, istio-system, and staging namespaces.
To apply the resource only to specific workloads, we can use the workloadSelector field. For example, setting the selector to version: v1 will only apply to the workloads with that label set.
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default-sidecar
  namespace: default
spec:
  workloadSelector:
    labels:
      version: v1
  egress:
  - hosts:
    - "default/*"
    - "istio-system/*"
    - "staging/*"
We can also configure the ingress and egress listeners explicitly:
apiVersion: networking.istio.io/v1beta1
kind: Sidecar
metadata:
  name: default-sidecar
  namespace: default
spec:
  workloadSelector:
    labels:
      version: v1
  ingress:
  - port:
      number: 3000 (1)
      protocol: HTTP
      name: somename
    defaultEndpoint: 127.0.0.1:8080 (2)
  egress:
  - port:
      number: 8080 (3)
      protocol: HTTP
    hosts:
    - "staging/*" (4)
1 | ingress listening on port 3000 |
2 | forward traffic to the loopback IP on the port 8080 where your service is listening |
3 | proxy the HTTP traffic bound for port 8080 |
4 | applicable to services running in the staging namespace |
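One way to see the effect of a Sidecar resource is to compare how many clusters a workload's proxy knows about before and after applying it; fewer entries means less configuration is pushed to the sidecar. A sketch, where <pod-name> is a placeholder for a Pod in the default namespace:
istioctl proxy-config clusters <pod-name>.default | wc -l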
Envoy Filter
The EnvoyFilter resource allows you to customize the Envoy configuration generated by Istio's control plane (istiod). Using the resource, you can update values, add specific filters, or even add new listeners, clusters, and so on.
Use this feature with care, as incorrect customization might destabilize the entire mesh.
apiVersion: networking.istio.io/v1alpha3
kind: EnvoyFilter
metadata:
  name: api-header-filter
  namespace: default
spec:
  workloadSelector:
    labels:
      app: web-frontend
  configPatches:
  - applyTo: HTTP_FILTER
    match:
      context: SIDECAR_INBOUND
      listener:
        portNumber: 8080
        filterChain:
          filter:
            name: "envoy.http_connection_manager"
            subFilter:
              name: "envoy.router"
    patch:
      operation: INSERT_BEFORE
      value:
        name: envoy.lua
        typed_config:
          "@type": "type.googleapis.com/envoy.config.filter.http.lua.v2.Lua"
          inlineCode: |
            function envoy_on_response(response_handle)
              response_handle:headers():add("api-version", "v1")
            end
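To check that the patch was applied, one option is to dump the listener configuration of an affected Pod and look for the envoy.lua filter. A sketch, where <web-frontend-pod> is a placeholder for a Pod carrying the app: web-frontend label:
istioctl proxy-config listeners <web-frontend-pod> --port 8080 -o json | grep -A 2 envoy.lua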
Labs
Set the GATEWAY_URL
For all the labs, we use an environment variable called GATEWAY_URL to point to the EXTERNAL_IP of the ingress gateway. For these labs, the ingress-gateway load balancer service is exposed through a minikube tunnel, so GATEWAY_URL is equivalent to 127.0.0.1.
export GATEWAY_URL=127.0.0.1
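On a cluster where the ingress gateway gets a real external IP from a cloud load balancer, you could derive the value instead of hard-coding it; a sketch assuming the default istio-ingressgateway service in the istio-system namespace:
export GATEWAY_URL=$(kubectl -n istio-system get svc istio-ingressgateway \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')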
Lab 1: Creating a deployment and using a Gateway to expose it
In this lab, we will deploy a Hello World application to the cluster. We will then deploy a Gateway resource and a VirtualService that binds to the Gateway to expose the application on the external IP address.
In the Gateway resource, we will set the hosts field to * to access the ingress gateway directly from the external IP address. If we wanted to access the ingress gateway through a domain name, we could set the hosts field to a domain name (e.g. example.com) and add the external IP address to an A record for the domain.
Create the gateway resource.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*'
kubectl apply -f gateway.yaml
Create a simple Hello World Deployment and Service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - image: nginx
        imagePullPolicy: Always
        name: svc
        ports:
        - containerPort: 80
---
kind: Service
apiVersion: v1
metadata:
  name: hello-world
  labels:
    app: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 80
    name: http
    targetPort: 80
kubectl apply -f hello-world.yaml
If we look at the created Pods, we will notice two containers running. One is the Envoy proxy sidecar, and the second one is the application.
kubectl get po,svc -l=app=hello-world
The output is similar to
NAME READY STATUS RESTARTS AGE
pod/hello-world-5bdddc8467-p8m2w 2/2 Running 0 6m15s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hello-world ClusterIP 10.100.73.1 <none> 80/TCP 6m15s
The next step is to create a VirtualService for the hello-world service and bind it to the Gateway resource:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: hello-world
spec:
  hosts:
  - "*"
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: hello-world.default.svc.cluster.local
        port:
          number: 80
kubectl apply -f vs-hello-world.yaml
kubectl get vs
The output is similar to
NAME GATEWAYS HOSTS AGE
hello-world ["gateway"] ["*"] 2m19s
curl -v http://$GATEWAY_URL
You should see a successful response from the nginx server:
* Trying 127.0.0.1:80...
* Connected to 127.0.0.1 (127.0.0.1) port 80
> GET / HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/8.4.0
> Accept: */*
>
< HTTP/1.1 200 OK
< server: istio-envoy
< date: Fri, 19 Jan 2024 06:57:19 GMT
< content-type: text/html
< content-length: 615
< last-modified: Tue, 24 Oct 2023 13:46:47 GMT
< etag: "6537cac7-267"
< accept-ranges: bytes
< x-envoy-upstream-service-time: 2
<
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
html { color-scheme: light dark; }
body { width: 35em; margin: 0 auto;
font-family: Tahoma, Verdana, Arial, sans-serif; }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>
<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>
<p><em>Thank you for using nginx.</em></p>
</body>
</html>
* Connection #0 to host 127.0.0.1 left intact
The server header is set to istio-envoy, telling us that the request went through the Envoy proxy.
Lab 2: Observing failure injection and delays
In this lab, we will deploy the Web Frontend and Customers v1 service. We will then inject a failure, a delay, and observe both in Zipkin, Kiali, and Grafana.
Create the gateway resource.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*'
kubectl apply -f gateway.yaml
Create the rest-client Deployment, Service and VirtualService.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-client
  labels:
    app: rest-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-client
  template:
    metadata:
      labels:
        app: rest-client
        version: v1
    spec:
      containers:
      - image: adityasamantlearnings/rest-client:1.0
        imagePullPolicy: Always
        name: rest-client
        ports:
        - containerPort: 8082
        env:
        - name: CUSTOMER_SERVICE_URL
          value: 'http://customers:8081'
---
kind: Service
apiVersion: v1
metadata:
  name: rest-client
  labels:
    app: rest-client
spec:
  selector:
    app: rest-client
  ports:
  - port: 8082
    name: http
    targetPort: 8082
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: rest-client
spec:
  hosts:
  - '*'
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: rest-client.default.svc.cluster.local
        port:
          number: 8082
kubectl apply -f rest-client-with-vs.yaml
Create the customers Deployment, Service, VirtualService and DestinationRule with one subset.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v1
  labels:
    app: customers
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v1
  template:
    metadata:
      labels:
        app: customers
        version: v1
    spec:
      containers:
      - image: adityasamantlearnings/customers:0.8
        imagePullPolicy: Always
        name: customers-v1
        ports:
        - containerPort: 8081
---
kind: Service
apiVersion: v1
metadata:
  name: customers
  labels:
    app: customers
spec:
  selector:
    app: customers
  ports:
  - port: 8081
    name: http
    targetPort: 8081
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: customers
spec:
  host: customers.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - 'customers.default.svc.cluster.local'
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        port:
          number: 8081
        subset: v1
kubectl apply -f customers-with-vs-and-dr-one-subset.yaml
Test the application by invoking the rest-client through the GATEWAY_URL in a browser.
The output of the web service is a list of 3 customers with their ID, first name and last name.
[{"id":1,"firstName":"John","lastName":"Doe"},{"id":2,"firstName":"Alice","lastName":"Smith"},{"id":3,"firstName":"Bob","lastName":"Stevens"}]
Inject a 5-second delay to the Customers service for 50% of all requests.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - 'customers.default.svc.cluster.local'
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        port:
          number: 8081
        subset: v1
    fault:
      delay:
        percent: 50
        fixedDelay: 5s
kubectl apply -f customers-vs-delay.yaml
Make requests to the rest-client via the gateway in an endless loop
while true; do curl http://127.0.0.1/api/customers; done
Access the Grafana UI
istioctl dashboard grafana
Navigate to Dashboards > istio > Istio Service Dashboard
In the Service dropdown, select customers.default.svc.cluster.local
In the Reporter dropdown, select destination + source
In the Client Workloads panel, observe the response time in the Incoming Request Duration By Source graph
You can switch to the rest-client service and observe a similar delay.
Access the Zipkin UI
istioctl dashboard zipkin
Select the serviceName as rest-client.default, then add the minDuration criterion, enter 5s, and click the search button to find traces.
You’ll notice in the details that the response_flags tag gets set to DI. “DI” stands for “delay injection” and indicates that the request got delayed.
Update the VirtualService again, and this time, we will inject a fault and return HTTP 500 for 50% of the requests.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - 'customers.default.svc.cluster.local'
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        port:
          number: 8081
        subset: v1
    fault:
      abort:
        httpStatus: 500
        percentage:
          value: 50
kubectl apply -f customers-vs-fault.yaml
You will start noticing failures from the request loop we had running.
In Grafana, you will observe a dip in the success rate.
There’s a similar story in Zipkin. If we search for traces again (we can remove the minDuration criterion), we will notice that the traces with errors show up in red.
Access the Kiali UI
istioctl dashboard kiali
Look at the service graph by clicking the Graph item. You will notice that the rest-client service has a red border.
If you click on the rest-client service and look at the sidebar on the right, you will notice the details of the HTTP requests. The graph shows the percentage of successes and failures. Both numbers are around 50%, which corresponds to the percentage value we set in the VirtualService.
Lab 3: Simple Traffic Routing
This lab describes how to split traffic between two versions using subsets.
Create the gateway resource.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*'
kubectl apply -f gateway.yaml
Create the rest-client Deployment and Service.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-client
  labels:
    app: rest-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-client
  template:
    metadata:
      labels:
        app: rest-client
        version: v1
    spec:
      containers:
      - image: adityasamantlearnings/rest-client:1.0
        imagePullPolicy: Always
        name: rest-client
        ports:
        - containerPort: 8082
        env:
        - name: CUSTOMER_SERVICE_URL
          value: 'http://customers:8081'
---
kind: Service
apiVersion: v1
metadata:
  name: rest-client
  labels:
    app: rest-client
spec:
  selector:
    app: rest-client
  ports:
  - port: 8082
    name: http
    targetPort: 8082
kubectl apply -f rest-client.yaml
Now we can deploy v1 of the Customers service. Notice how we set the version: v1 label in the Pod template. However, the Service only uses app: customers in its selector. That’s because we will create the subsets in the DestinationRule, and those will add the version label to the selector, allowing us to reach the Pods running specific versions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v1
  labels:
    app: customers
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v1
  template:
    metadata:
      labels:
        app: customers
        version: v1
    spec:
      containers:
      - image: adityasamantlearnings/customers:0.8
        imagePullPolicy: Always
        name: customers-v1
        ports:
        - containerPort: 8081
---
kind: Service
apiVersion: v1
metadata:
  name: customers
  labels:
    app: customers
spec:
  selector:
    app: customers
  ports:
  - port: 8081
    name: http
    targetPort: 8081
kubectl apply -f customers-v1.yaml
Create a VirtualService for the rest-client and bind it to the Gateway resource
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: rest-client
spec:
  hosts:
  - '*'
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: rest-client.default.svc.cluster.local
        port:
          number: 8082
kubectl apply -f rest-client-vs.yaml
Run cURL against the GATEWAY_URL
curl -v http://$GATEWAY_URL/api/customers
You will see the output as below:
[{"id":1,"firstName":"John","lastName":"Doe"},{"id":2,"firstName":"Alice","lastName":"Smith"},{"id":3,"firstName":"Bob","lastName":"Stevens"}]
Next, we create a DestinationRule for the Customer Service with two subsets for two versions. However, the VirtualService will initially route all the traffic only to v1.
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: customers
spec:
  host: customers.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
kubectl apply -f customers-dr.yaml
Now we configure the VirtualService to direct all the traffic to v1 only.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - 'customers.default.svc.cluster.local'
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        port:
          number: 8081
        subset: v1
kubectl apply -f customers-vs.yaml
We are now ready to deploy the v2 version of the Customers service. The v2 version has a small enhancement: it returns the country of each customer.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v2
  labels:
    app: customers
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v2
  template:
    metadata:
      labels:
        app: customers
        version: v2
    spec:
      containers:
      - image: adityasamantlearnings/customers:0.9
        imagePullPolicy: Always
        name: customers-v2
        ports:
        - containerPort: 8081
kubectl apply -f customers-v2.yaml
Run cURL against the GATEWAY_URL
curl -v http://$GATEWAY_URL/api/customers
Run the above command a few times. You will observe that although the customers-v2 version is deployed, all the traffic is still routed to v1.
[{"id":1,"firstName":"John","lastName":"Doe"},{"id":2,"firstName":"Alice","lastName":"Smith"},{"id":3,"firstName":"Bob","lastName":"Stevens"}]
Let’s use the weight field and modify the VirtualService. We’ll send 50% of the traffic to the v1 subset and the other 50% to the v2 subset.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - 'customers.default.svc.cluster.local'
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        port:
          number: 8081
        subset: v1
      weight: 50
    - destination:
        host: customers.default.svc.cluster.local
        port:
          number: 8081
        subset: v2
      weight: 50
kubectl apply -f customers-50-50.yaml
Test the application by invoking the rest-client through the GATEWAY_URL in a browser. Refresh the browser a couple of times.
You will see that the traffic is routed to both v1 and v2 equally.
[{"id":1,"firstName":"John","lastName":"Doe"},{"id":2,"firstName":"Alice","lastName":"Smith"},{"id":3,"firstName":"Bob","lastName":"Stevens"}]
[{"id":1,"firstName":"John","lastName":"Doe","country":"Australia"},{"id":2,"firstName":"Alice","lastName":"Smith","country":"USA"},{"id":3,"firstName":"Bob","lastName":"Stevens","country":"England"}]
Lab 4: Advanced Routing
In this lab, we will use request properties to route the traffic between multiple service versions.
Create the gateway resource.
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: gateway
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - '*'
kubectl apply -f gateway.yaml
Next, we will create the Deployment, Service and VirtualService for the rest-client
apiVersion: apps/v1
kind: Deployment
metadata:
  name: rest-client
  labels:
    app: rest-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: rest-client
  template:
    metadata:
      labels:
        app: rest-client
        version: v1
    spec:
      containers:
      - image: adityasamantlearnings/rest-client:1.0
        imagePullPolicy: Always
        name: rest-client
        ports:
        - containerPort: 8082
        env:
        - name: CUSTOMER_SERVICE_URL
          value: 'http://customers:8081'
---
kind: Service
apiVersion: v1
metadata:
  name: rest-client
  labels:
    app: rest-client
spec:
  selector:
    app: rest-client
  ports:
  - port: 8082
    name: http
    targetPort: 8082
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: rest-client
spec:
  hosts:
  - '*'
  gateways:
  - gateway
  http:
  - route:
    - destination:
        host: rest-client.default.svc.cluster.local
        port:
          number: 8082
kubectl apply -f rest-client-with-vs.yaml
We then create the Deployments, Service, VirtualService and DestinationRule for the customers application.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v1
  labels:
    app: customers
    version: v1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v1
  template:
    metadata:
      labels:
        app: customers
        version: v1
    spec:
      containers:
      - image: adityasamantlearnings/customers:0.8
        imagePullPolicy: Always
        name: customers-v1
        ports:
        - containerPort: 8081
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: customers-v2
  labels:
    app: customers
    version: v2
spec:
  replicas: 1
  selector:
    matchLabels:
      app: customers
      version: v2
  template:
    metadata:
      labels:
        app: customers
        version: v2
    spec:
      containers:
      - image: adityasamantlearnings/customers:0.9
        imagePullPolicy: Always
        name: customers-v2
        ports:
        - containerPort: 8081
---
kind: Service
apiVersion: v1
metadata:
  name: customers
  labels:
    app: customers
spec:
  selector:
    app: customers
  ports:
  - port: 8081
    name: http
    targetPort: 8081
---
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - 'customers.default.svc.cluster.local'
  http:
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        port:
          number: 8081
        subset: v1
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: customers
spec:
  host: customers.default.svc.cluster.local
  subsets:
  - name: v1
    labels:
      version: v1
  - name: v2
    labels:
      version: v2
kubectl apply -f customers-with-vs-and-dr-two-subsets.yaml
Test the application by invoking the rest-client through the GATEWAY_URL in a browser.
You will observe that all the traffic is directed to customers-v1.
We will now route the traffic based on the value of a header named user. If the value is debug, we route to v2; otherwise we route to v1.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: customers
spec:
  hosts:
  - 'customers.default.svc.cluster.local'
  http:
  - match:
    - headers:
        user:
          exact: debug
    route:
    - destination:
        host: customers.default.svc.cluster.local
        port:
          number: 8081
        subset: v2
  - route:
    - destination:
        host: customers.default.svc.cluster.local
        port:
          number: 8081
        subset: v1
kubectl apply -f customer-vs-route-via-headers.yaml
If we access the GATEWAY_URL, we should still get back the response from customers-v1. If we add the header user: debug to the request, we will notice that the response comes from customers-v2.
Try the two commands below.
curl http://127.0.0.1/api/customers
customers-v1
[{"id":1,"firstName":"John","lastName":"Doe"},{"id":2,"firstName":"Alice","lastName":"Smith"},{"id":3,"firstName":"Bob","lastName":"Stevens"}]
curl -H "user: debug" http://127.0.0.1/api/customers
[{"id":1,"firstName":"John","lastName":"Doe","country":"Australia"},{"id":2,"firstName":"Alice","lastName":"Smith","country":"USA"},{"id":3,"firstName":"Bob","lastName":"Stevens","country":"England"}]