As the complexity of a microservice architecture grows, it becomes important to implement a service mesh for better insight into your cluster and microservices. In this blog, Kristijan explains how Istio can be used as a service mesh, along with detailed installation steps and configuration setup.
Service Mesh? You’ve heard about it, but does it solve something, or is it just another hot buzzword in the industry?
In this article you will learn about the Istio service mesh, along with a full installation guide and configuration setup.
Before moving straight to Istio, it’s worth mentioning that in one of our previous articles - The Age of Service Mesh, Gigi Sayfan explained in detail how service meshes work and what problems they solve.
I highly suggest you give that article a read. Maybe even as a prequel, as it will provide you with great insight into service mesh basics and the general idea behind them.
Right now, there are a plethora of options for service meshes.
To name a few: Istio, Linkerd, Consul Connect, and Kuma.
Each service mesh has its pros and cons, along with specific use cases that you should consider for your cluster and end goal.
You can decide which “brand” of service mesh to install.
Istio is a service mesh designed to enhance and give you better insight into your cluster and microservices.
One of the great things about Istio and service meshes overall is that they require absolutely no code change for them to work.
Istio works by integrating itself as an additional layer inside the Kubernetes cluster and thus provides modern features that you can utilize to your advantage.
Those features can include advanced load balancing, circuit breaking, mTLS traffic encryption, better authentication and authorization options, metrics, telemetry, and overall fine-grain control over the cluster’s traffic going in and out.
Now, Istio isn’t just a single object that you install. It’s a collection of entities that work together to make up the whole service mesh.
Like Kubernetes, Istio has a control plane that manages everything and a data plane that handles the traffic between the services.
There is more to Istio, as it isn’t bound to work only in a Kubernetes cluster. It also works with virtual machines and supports different deployment options for both installing and running.
In the next section, we will explain Istio’s components and architecture.
As the saying goes, a picture is worth a thousand words.
Consider the following diagram:
You can see that traffic in and out of the pods no longer flows directly; instead, it first must pass through the sidecar proxies.
The container sidecars are Envoy proxies that get automatically injected into your pods on startup.
During installation, you instruct Istio which namespace to ‘watch’ and deploy Envoy proxies along with your applications.
You will see how this is done in action when we get to the installing section.
The other part is the control plane, made up of multiple components bundled into one binary - istiod. The control plane manages the proxies, certificates, and service discovery, and applies the configuration you set.
The components that make up Istio are istiod on the control plane - bundling Pilot for service discovery and traffic management, Citadel for certificate management, and Galley for configuration validation - and the Envoy sidecar proxies that form the data plane.
To explain this a bit better, here’s an analogy: consider the service mesh as a telephone network.
The data plane consists of the phones that you and your friends use to communicate with each other.
You could communicate without them, but you would have to yell across the distance. With phones, communication is far more modern, secure, and controllable.
The control plane is the telephone service provider: from there, all the calls get managed, routed, and billed.
Everything you apply is done towards and on the control plane; the control plane will communicate that change to the sidecar proxies.
The traffic traversing the data plane is only visible to the proxies; the other Istio components have no access.
Depending on your setup, Istio offers you different installation and deployment strategies.
Each cloud service provider has its own particularities, so it’s best to go over the platform setup and check whether any prerequisites or dependencies are needed before installing.
The install options range from Helm and istioctl to the Istio operator.
You can look into them at the following link.
For this guide, we will install Istio using the istioctl tool.
First, you’ll need to download the binary:
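The standard download script from the Istio site fetches the latest release into the current directory:

```shell
# Download and extract the latest Istio release
curl -L https://istio.io/downloadIstio | sh -
```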
Navigate into the newly created folder, export the path to the binary, and verify that it works:
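For example (the version number in the folder name depends on the release you downloaded - adjust it accordingly):

```shell
cd istio-1.17.2          # adjust to your downloaded version
export PATH=$PWD/bin:$PATH
istioctl version
```

At this point, istioctl will report the client version and note that no Istio pods are running yet.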
Well, that’s okay - you still haven’t installed Istio.
It is a good idea to run the pre-flight check to verify that your cluster doesn’t have any issues running the Istio service mesh.
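The check is run with istioctl’s experimental precheck command:

```shell
istioctl x precheck
```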
Before moving forward, you should assess which type of profile you want Istio to be installed with.
There are six of them at the time of this writing: default, demo, minimal, external, empty, and preview.
You can view each profile with an extended description here.
We will go with the default profile intended for production environments.
Each profile is just a set of features that Istio will enable when installed.
If you want to test every feature, you can install it using the demo profile.
Note: Installing profiles that include the Ingress or Egress Gateway will automatically spin up an external load balancer.
Istio also offers customizations and custom third-party add-ons you can include in the profile.
If none of the above profiles meets your requirements, you can use istioctl to generate and create custom manifests to fit your needs.
To install using the default profile:
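The install command takes the profile as a flag; `-y` skips the confirmation prompt:

```shell
istioctl install --set profile=default -y
```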
Excellent! You’ve installed Istio successfully!
You are halfway there.
Now you will need to label which namespace Istio will control and inject sidecar proxies in the pods.
For example, to label the default namespace for sidecar injection:
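The label that triggers automatic sidecar injection is `istio-injection=enabled`:

```shell
kubectl label namespace default istio-injection=enabled
```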
You can now verify this with:
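Listing namespaces with the label column shows which ones have injection enabled:

```shell
kubectl get namespace -L istio-injection
```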
With that, you completed the Istio core components installation.
I installed Istio, and now what?
Next comes the observability part.
The Envoy proxies will send off telemetry and other data that you can use to visualize the traffic in the mesh.
As with a typical Prometheus and Grafana setup, you will need to pair Istio with a visualization tool to display the data.
You will use the Kiali dashboard to visualize and see what’s going on in the cluster.
There is one caveat, however. Kiali requires that you have a running Prometheus instance in your cluster.
You can deploy one or supply the address of the existing one if you have it already deployed.
Keep in mind that you shouldn’t rely on this setup for running in production environments!
Further below, there will be an explanation of how to set up Kiali to work with an existing Prometheus instance.
Navigate to the Istio folder and apply the manifests located under samples/addons:
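From the extracted Istio release folder, the addon manifests can be applied in one go:

```shell
kubectl apply -f samples/addons
```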
Applying the above will deploy many objects, so give them a couple of minutes to start.
Check on the Kiali pod if it’s started:
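The addons are deployed into the istio-system namespace, so you can check the Kiali pod there:

```shell
kubectl get pods -n istio-system -l app=kiali
# or wait for the rollout to complete:
kubectl rollout status deployment/kiali -n istio-system
```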
Once it’s running, you can now access the dashboard using kubectl and port-forward.
However, istioctl offers a much simpler way:
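A single command sets up the port-forward for you:

```shell
istioctl dashboard kiali
```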
Then open http://localhost:20001/kiali in your browser.
Since no traffic is flowing in the selected namespace yet, Kiali will show no connections.
If you want to access the other dashboards - Grafana and Jaeger, you can again use istioctl dashboard:
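The same subcommand works for the other addons:

```shell
istioctl dashboard grafana
istioctl dashboard jaeger
```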
There are also other ways to deploy Kiali that are more inclined to production use, where you can customize and set your own parameters.
You can find the GitHub link for both Helm charts here.
As mentioned in the previous section, you can specify external instances of Prometheus and the other tools.
It’s best to install all the tooling Kiali needs so you get the most benefit and the greatest observability in the service mesh.
That tooling is a Prometheus instance, Grafana, and Jaeger for tracing.
Note: Refer to the Jaeger documentation as it requires additional configuration to have full distributed tracing in your apps.
Grafana and Jaeger are optional for Kiali and not required for it to work.
You can specify every connection to the other systems during installation.
For example, to specify an existing Prometheus instance:
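A sketch using the kiali-server Helm chart; the Prometheus URL here is an example and should point at your own instance:

```shell
helm install kiali-server kiali-server \
  --repo https://kiali.org/helm-charts \
  --namespace istio-system \
  --set auth.strategy="anonymous" \
  --set external_services.prometheus.url="http://prometheus.monitoring:9090"
```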
The Kiali authentication options are available here.
The anonymous option used above provides free unauthenticated access to the dashboard.
You’ve deployed Istio, have a running service mesh inside your cluster, and you also installed the Kiali dashboard to observe the traffic.
Let’s now deploy a simple demo application.
You can use the following hello-world web app that will display a simple web page for testing.
Apply the following deployment and service manifests:
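A minimal sketch of such manifests; the `hello-world` name and the `gcr.io/google-samples/hello-app:1.0` image are example choices standing in for any simple web app:

```shell
kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-world
  template:
    metadata:
      labels:
        app: hello-world
    spec:
      containers:
      - name: hello-world
        image: gcr.io/google-samples/hello-app:1.0
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: hello-world
spec:
  selector:
    app: hello-world
  ports:
  - port: 80
    targetPort: 8080
EOF
```

Because the default namespace was labeled earlier, the pod comes up with the Envoy sidecar already injected (2/2 containers ready).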
Verify that the pod is running and the service is deployed:
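Assuming the example names above:

```shell
kubectl get pods
kubectl get svc hello-world
```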
Now, for testing, you can use port-forward to access the application. However, a more permanent solution would be to use a load balancer or an ingress.
Istio has its own ingress controller that you can use to expose and test the application.
The following ingress manifest will expose the application on the `/` path:
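A sketch of such a manifest, assuming the hello-world service from the example deployment above; the `kubernetes.io/ingress.class: istio` annotation hands the object to Istio’s ingress controller:

```shell
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: hello-world
  annotations:
    kubernetes.io/ingress.class: istio
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: hello-world
            port:
              number: 80
EOF
```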
Note: Notice the ingress class annotation - it tells the Istio ingress controller to pick up this object.
You can get the IP address using:
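The ingress gateway is exposed through the istio-ingressgateway service in the istio-system namespace:

```shell
kubectl get svc istio-ingressgateway -n istio-system
# the address is in the EXTERNAL-IP column
```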
Visiting the external IP of the ingress gateway will open up the web application.
To see some activity in the Kiali dashboard, you first need to generate some traffic.
The simplest way is to use curl and a while loop:
From another terminal run:
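A simple sketch that curls the gateway once per second; the jsonpath lookup assumes a cloud load balancer that publishes an IP address (some providers publish a hostname instead):

```shell
GATEWAY_IP=$(kubectl get svc istio-ingressgateway -n istio-system \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
while true; do
  curl -s -o /dev/null "http://$GATEWAY_IP/"
  sleep 1
done
```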
And check the Kiali dashboard:
You can now see that Kiali displays the traffic, and it reaches the web application without any issues.