Modern applications need to be deployed faster, across new versions and new environments; this can be achieved through containerization with Kafka, Kubernetes, and Knative.
Before we start let’s break down what these three are:
Kafka is the leading platform for enterprise-grade event streaming – it is a distributed system consisting of servers and clients that communicate via a high-performance TCP network protocol. It can be deployed on bare-metal hardware, virtual machines, and containers, in on-premises as well as cloud environments.
Kubernetes is an open-source container-orchestration system for automating computer application deployment, scaling, and management.
Knative is an open-source community project that adds components for deploying, running, and managing serverless, cloud-native applications on Kubernetes. In effect, it brings the serverless computing model to Kubernetes.
Using Kafka, Kubernetes and Knative
Modern applications need to be deployed faster, across new versions and new environments, and containerization makes this possible. Containerized applications in turn need a container orchestration tool, of which Kubernetes is the leading option. Kubernetes is an open-source platform, originally from Google, for managing containerized workloads and services; it facilitates both declarative configuration and automation, making it easy to deploy, upgrade, scale, and monitor applications such as Kafka. Alongside Kubernetes, Knative helps increase developer productivity and reduce operational costs.
Kafka on Kubernetes
Kafka enables organizations to establish, maintain, and analyse real-time data flows with internal and external suppliers and customers. As cloud-native development and serverless architectures grow, organizations favour Kafka because it is event-driven, distributed, and highly available. The main benefit of running Kafka on Kubernetes is infrastructure abstraction: Kubernetes abstracts the infrastructure away from higher-level services and applications. Not only does this make your applications much more portable, it also adds flexibility and builds a much-needed, future-ready architecture.
Where does Knative come in?
Knative is a platform that runs serverless workloads in a Kubernetes-native way, giving you control over building, deploying, binding, scaling, and running container-based applications both on premises and in the cloud. It is a middleware layer with pluggable components, so you can bring your own monitoring, networking, and service mesh. Knative's components build on top of Kubernetes, abstracting away the complex details and enabling developers to focus on what matters.
Key components of Knative
Knative contains two major components: Knative Serving and Knative Eventing. Knative Serving is a request-driven model that scales applications deployed in containers. You declare your application and its core logic as a KService in a YAML-based configuration referencing Linux container images; once you apply it, the model takes care of dynamic autoscaling, including scaling down to zero pods. Knative Eventing adds the same autoscaling capability for event-driven workloads, with source connectors that access third-party systems such as Apache Kafka: messages arriving on an Apache Kafka topic can trigger autoscaling of your Kubernetes-based service.
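As a minimal sketch of the KService manifest described above – the service name, container image, and autoscaling target here are illustrative placeholders, not values from a real deployment:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-service                 # hypothetical service name
spec:
  template:
    metadata:
      annotations:
        # Scale out once a pod handles ~50 concurrent requests;
        # Knative scales to zero pods when there is no traffic.
        autoscaling.knative.dev/target: "50"
    spec:
      containers:
        - image: docker.io/example/hello:latest   # placeholder image
          ports:
            - containerPort: 8080
```

Applying this with `kubectl apply -f service.yaml` hands the rest (routing, revisioning, dynamic autoscaling) to Knative Serving.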
Below is a diagram of the key components of Knative with Kubernetes:
Knative Eventing Architecture
Knative Eventing enables us to bind event sources and event consumers in a cloud-native environment and serverless architecture such as containers. Because these services are loosely coupled, we can develop and deploy them across Kubernetes, VMs, and SaaS platforms.
The main components of Knative Eventing are event sources (producers), event consumers, event brokers and triggers, the event registry, and event channels and subscriptions. Event sources generate events and event consumers receive them. Event channels can connect to various backends such as in-memory, Kafka, and GCP Pub/Sub. Event subscribers receive the event messages and process them as needed. Broker triggers support filtering of event messages on defined attributes and help deliver the events to sinks.
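The broker-and-trigger filtering pattern above can be sketched as follows; the trigger name, filter attribute, and subscriber service are hypothetical examples:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: orders-trigger                    # hypothetical trigger name
spec:
  broker: default
  filter:
    attributes:
      # Deliver only events whose CloudEvents "type" matches this value
      type: com.example.order.created     # placeholder event type
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: order-processor               # hypothetical Knative Service acting as the sink
```

The broker receives all events; each trigger filters on attributes and forwards matching events to its subscriber.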
Steps involved in using Kafka with Knative Eventing
- Deploy Kafka cluster
- Source Kafka events with Knative Eventing. Install the Knative dependencies into the Kubernetes cluster, then deploy the Knative Kafka Channel, which connects the Knative Eventing channel to Apache Kafka. Run a Knative Eventing KafkaSource so that Kafka messages flow through the Knative Eventing channels.
- Scale out in the Kubernetes cluster. With the Kafka topic connected to Knative Eventing, we can scale out by setting an autoscaling target while simultaneously pushing enough messages to the topic.
- Monitor the metrics. To gather metrics (memory and CPU from pods) and explore the Knative-based applications, install Prometheus, Grafana, and Jaeger into the Kubernetes cluster.
- Restrict service visibility. We can restrict service visibility to separate the services consumed outside the cluster from those consumed inside it. Knative Services are exposed as public routes by default; by applying a visibility label in the service YAML we generate a cluster-local route instead.
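The KafkaSource step above might look like the following sketch; the topic name, bootstrap server address, and sink service are placeholders:

```yaml
apiVersion: sources.knative.dev/v1beta1
kind: KafkaSource
metadata:
  name: kafka-source                              # hypothetical source name
spec:
  bootstrapServers:
    - my-cluster-kafka-bootstrap.kafka:9092       # placeholder Kafka bootstrap address
  topics:
    - orders                                      # placeholder topic
  sink:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: event-display                         # placeholder consumer service
```

Each message on the topic is delivered to the sink as a CloudEvent, so traffic on the Kafka topic drives the autoscaling of the consuming service.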
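The visibility restriction in the last step can be sketched by labelling the service; the service name and image are hypothetical:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: internal-service                               # hypothetical internal-only service
  labels:
    # Removes the public route; the service is reachable only inside the cluster
    networking.knative.dev/visibility: cluster-local
spec:
  template:
    spec:
      containers:
        - image: docker.io/example/internal:latest     # placeholder image
```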
Key functionality of Knative Eventing:
- Knative has many eventing sources used to access existing third-party systems, for example GitHub, Kafka, and Google Pub/Sub.
- Scaling from just a few events to real-time data-streaming pipelines.
- Event orchestration.
- A pluggable internal transport called a channel, e.g. the in-memory channel (default), the Apache Kafka channel, and Google Pub/Sub.
- Declarative APIs for distributing events from multiple sources to multiple sinks.
Enterprises are adopting serverless architecture in cloud-native applications, which has resulted in resource cost optimisation and faster development and deployment. Knative provides a simpler deployment model for all modern application workloads.
Other useful links: