Introduction

Kubernetes is one of the most widely used tools for managing applications. To run it well, it’s essential to monitor activity within your Kubernetes cluster, and the Go Informers package fulfills that need: it acts as a helpful assistant that keeps you informed about what is happening in your clusters. Let’s get started and see how it works!

Understanding Kubernetes Monitoring

Keeping an eye on your Kubernetes cluster means monitoring its performance: checking that your applications are running smoothly, knowing how many resources they are using and whether they have enough, and being notified when something goes wrong.

Introducing the Go Informers Package

The Go Informers package is like a special toolbox for working with Kubernetes. It helps you get information from the Kubernetes API server quickly and easily, so you can stay up to date about what’s happening in your clusters without writing a lot of complicated code. The package keeps track of changes in your cluster in real time, so you always know what’s happening without having to check constantly. And it’s not just about seeing the current state: Go Informers can also tell you when important events occur in your cluster, like when a new application is deployed or something gets deleted.

How Informers Work

Informers are built on an event-driven architecture. When a Kubernetes resource is created, updated, or deleted, the API server emits an event. By continuously watching for these events and reacting to them, informers keep monitoring applications up to date with the most recent cluster state.

Caching Mechanism

One of the key features of informers is their caching mechanism, which improves performance and reduces the load on the Kubernetes API server. The resource objects that informers have retrieved from the API server are kept in a local cache. To ensure that the client always has access to the most recent data without overloading the server with too many requests, this cache is periodically synchronized with the server to fetch any updates or changes.
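
For example, once the informer’s cache has synced, a monitoring application can read pods straight from that cache through a lister instead of calling the API server. The following is a minimal sketch that assumes a started shared informer factory named factory (like the one created later in this article) and the labels package from k8s.io/apimachinery/pkg/labels:

// Read pods from the informer's local cache via a lister, without hitting the API server
podLister := factory.Core().V1().Pods().Lister()

// List all pods in the "default" namespace from the cache
pods, err := podLister.Pods("default").List(labels.Everything())
if err == nil {
    for _, pod := range pods {
        fmt.Printf("Cached pod: %s/%s\n", pod.Namespace, pod.Name)
    }
}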

Indexing for Efficient Retrieval

Informers employ indexing mechanisms to facilitate efficient data retrieval. They simplify the lookup process by indexing resource objects based on specific fields or labels, providing rapid and focused access to relevant data. This indexing strategy enhances the responsiveness of monitoring applications, particularly when working with large-scale Kubernetes clusters that have many resources.
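
As an illustration, the sketch below registers a custom index on a pod informer so pods can be looked up by the node they run on. It assumes a pod informer like the one created later in this article; the index name "byNode" and the node name "worker-node-1" are arbitrary examples, and depending on the client-go version the indexer may need to be added before the informer is started:

// Register a custom index that maps each pod to the node it is scheduled on
err := podInformer.AddIndexers(cache.Indexers{
    "byNode": func(obj interface{}) ([]string, error) {
        pod, ok := obj.(*corev1.Pod)
        if !ok {
            return nil, nil
        }
        return []string{pod.Spec.NodeName}, nil
    },
})
if err != nil {
    fmt.Printf("Error adding indexer: %v\n", err)
}

// Retrieve only the pods running on a given node directly from the cache
podsOnNode, err := podInformer.GetIndexer().ByIndex("byNode", "worker-node-1")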

Anatomy of Kubernetes Informers

The diagram illustrates the workflow of Kubernetes informers:

  • Initialization: Monitoring applications initialize informers by specifying the types of resources they want to monitor (e.g., pods, services).
  • Watch Mechanism: Informers set up watch connections with the Kubernetes API server so they are notified about resource events.
  • Event Processing: Upon receiving events from the API server, informers process them using predefined event handlers. These handlers define how the monitoring application reacts to different types of events (e.g., creation, update, deletion).
  • Caching Strategy: Informers maintain a local cache of resource objects retrieved from the API server. To guarantee data consistency, this cache is periodically synchronized with the server.
  • Indexing for Efficiency: Informers use indexing techniques to optimize the efficiency of data retrieval. By indexing resource objects based on specific criteria, informers facilitate fast and efficient access to relevant data.
  • Client Interaction: Monitoring applications interact with informers to retrieve real-time information about the Kubernetes cluster state. This interaction allows applications to monitor and respond to changes dynamically.

Access Configuration

There are two ways to access the Kubernetes API. If the monitoring application runs inside the cluster, we use InClusterConfig:

corev1 "k8s.io/api/core/v1"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"

// Get the in-cluster configuration
    config, err := rest.InClusterConfig()

    // Create the Kubernetes clientset
    clientset, err := kubernetes.NewForConfig(config)

If the monitoring application runs outside the cluster and targets it remotely, we can build the configuration from a kubeconfig file instead:

import "k8s.io/client-go/tools/clientcmd"

// Path to the kubeconfig file; clientcmd.RecommendedHomeFile points to ~/.kube/config
kubeconfig := clientcmd.RecommendedHomeFile

// Load the Kubernetes client configuration from the kubeconfig file
config, err := clientcmd.BuildConfigFromFlags("", kubeconfig)

// Create a Kubernetes clientset
clientset, err := kubernetes.NewForConfig(config)
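
When the same binary can run both inside and outside a cluster, a common approach (sketched here, not part of the original example) is to try the in-cluster configuration first and fall back to the kubeconfig file:

// Try the in-cluster config first; fall back to the local kubeconfig if we are not in a cluster
config, err := rest.InClusterConfig()
if err != nil {
    config, err = clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err.Error())
    }
}
clientset, err := kubernetes.NewForConfig(config)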


Practical Usage

Let’s say you want to keep track of whenever a new pod (the basic unit of deployment in Kubernetes) is created or deleted in your cluster. With the clientset configured, we can instantiate a pod informer and monitor pod resources and lifecycle events.

import (
    "fmt"  // used by the event handlers below
    "time"

    k8sruntime "k8s.io/apimachinery/pkg/util/runtime"
    "k8s.io/client-go/informers"
    "k8s.io/client-go/tools/cache" // used by the event handlers below
)

// Stop signal for the informer
stopper := make(chan struct{})
defer close(stopper)

// Set resync interval to 10 minutes
resyncInterval := 10 * time.Minute

// Create shared informers for resources in all known API group versions with a resync period
factory := informers.NewSharedInformerFactoryWithOptions(clientset, resyncInterval)
podInformer := factory.Core().V1().Pods().Informer()

defer k8sruntime.HandleCrash()

// Start the informers (Start is non-blocking) and wait for their caches to sync
factory.Start(stopper)
factory.WaitForCacheSync(stopper)

Now we can create an event handler:

podInformer.AddEventHandler(cache.ResourceEventHandlerFuncs{
    UpdateFunc: func(oldPodObj, newPodObj interface{}) {
        // Old version of the Pod object
        oldPod := oldPodObj.(*corev1.Pod)
        // New version of the Pod object
        newPod := newPodObj.(*corev1.Pod)

        // If the resource version differs, the pod actually changed
        // (periodic resyncs also trigger UpdateFunc with identical objects)
        if oldPod.ResourceVersion != newPod.ResourceVersion {
            // Handle update
        } else {
            fmt.Printf("Nothing to update for pod %s.\n", newPod.Name)
        }
    },
    AddFunc: func(podObj interface{}) {
        pod, ok := podObj.(*corev1.Pod)
        if ok {
            fmt.Printf("Added Pod: %s. Current status: %s\n", pod.GetName(), pod.Status.Phase)
        }
    },
    DeleteFunc: func(podObj interface{}) {
        pod, ok := podObj.(*corev1.Pod)
        if ok {
            fmt.Printf("Deleted Pod: %s\n", pod.GetName())
        }
    },
})

In this example, we’re using Go Informers to track new pods created in the cluster, pods deleted from the cluster, and any changes made to existing pods.

However, it’s important to understand that AddFunc reacts to every pod creation in the cluster, so we get notified as soon as a pod enters the Pending state, while UpdateFunc handles the event when the pod’s status switches to Running. We can therefore track the status change in UpdateFunc to mimic on-event handling. Similarly, DeleteFunc is called when a pod is deleted from the cluster, but there may be a delay between the pod stopping and being deleted, so we can also watch for the status change in UpdateFunc: when the pod’s status is no longer Running, we can conclude that it has entered a terminating state.

UpdateFunc: func(oldPodObj, newPodObj interface{}) {
    // Old version of the Pod object
    oldPod := oldPodObj.(*corev1.Pod)
    // New version of the Pod object
    newPod := newPodObj.(*corev1.Pod)

    // If the resource version differs, the pod actually changed
    if oldPod.ResourceVersion != newPod.ResourceVersion {
        if oldPod.Status.Phase == corev1.PodPending && newPod.Status.Phase == corev1.PodRunning {
            // Handle pod creation (the pod has started running)
        }
        if oldPod.Status.Phase == corev1.PodRunning && newPod.Status.Phase != corev1.PodRunning {
            // Handle pod deletion (the pod is no longer running)
        }
    } else {
        fmt.Printf("Nothing to update for pod %s.\n", newPod.Name)
    }
},
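
Since the informer delivers events on background goroutines, the monitoring process itself has to stay alive. One way to do that, sketched below (assuming the os, os/signal, and syscall packages are imported), is to block the main goroutine until a termination signal arrives:

// Block until the process receives a termination signal,
// so the informer keeps processing events in the background
sigCh := make(chan os.Signal, 1)
signal.Notify(sigCh, syscall.SIGINT, syscall.SIGTERM)
<-sigCh
// Closing the stopper channel (deferred earlier) then shuts the informer down cleanly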


Accessing Resource Data

If you want to know more about how resources are used, you can access information about requests, limits, and allocatable resources.  Understanding these metrics can help you optimize resource utilization, improve application performance, and efficiently allocate resources within your Kubernetes cluster. 

Requests are the minimum resources a pod needs in order to be scheduled, limits are set in the cluster and cap the maximum amount of resources a pod may use, and allocatable is the total amount of resources available on the node. It is important to remember that a pod can consist of several containers, so to get the total resource figures for the pod we must sum the resources of all its containers.

We can obtain information about these resources through informers in the following way:

UpdateFunc: func(oldObj, newObj interface{}) {
    pod := newObj.(*corev1.Pod)
    fmt.Printf("Pod Name: %s\n", pod.Name)
    fmt.Printf("Pod Namespace: %s\n", pod.Namespace)

    var PodCPULimit int64
    var PodCPURequest int64
    var PodMemoryLimit int64
    var PodMemoryRequest int64

    // Sum the resources of the pod's init and regular containers
    for _, container := range append(pod.Spec.InitContainers, pod.Spec.Containers...) {
        PodCPULimit += container.Resources.Limits.Cpu().MilliValue()
        PodCPURequest += container.Resources.Requests.Cpu().MilliValue()
        PodMemoryLimit += container.Resources.Limits.Memory().Value()
        PodMemoryRequest += container.Resources.Requests.Memory().Value()
    }

    // Fetch the node the pod runs on to read its allocatable and capacity resources
    // (requires the "context" package in the import list)
    nodeName := pod.Spec.NodeName
    node, err := clientset.CoreV1().Nodes().Get(context.TODO(), nodeName, metav1.GetOptions{})
    if err != nil {
        fmt.Printf("Error getting node: %v\n", err)
        return
    }

    NodeAllocatableCPU := node.Status.Allocatable.Cpu().MilliValue()
    NodeAllocatableMemory := node.Status.Allocatable.Memory().Value()
    NodeCapacityCPU := node.Status.Capacity.Cpu().MilliValue()
    NodeCapacityMemory := node.Status.Capacity.Memory().Value()

    fmt.Printf("Pod CPU Limit: %d\n", PodCPULimit)
    fmt.Printf("Pod CPU Request: %d\n", PodCPURequest)
    fmt.Printf("Pod Memory Limit: %d\n", PodMemoryLimit)
    fmt.Printf("Pod Memory Request: %d\n", PodMemoryRequest)
    fmt.Printf("Node Allocatable CPU: %d\n", NodeAllocatableCPU)
    fmt.Printf("Node Allocatable Memory: %d\n", NodeAllocatableMemory)
    fmt.Printf("Node Capacity CPU: %d\n", NodeCapacityCPU)
    fmt.Printf("Node Capacity Memory: %d\n", NodeCapacityMemory)
},
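
With these values available, you can derive simple utilization figures, for example how much of the node’s allocatable CPU and memory the pod requests. Here is a small sketch that could sit at the end of the handler above, using its variables (CPU values are in millicores, memory values in bytes):

// Requested resources as a percentage of the node's allocatable capacity
if NodeAllocatableCPU > 0 {
    cpuRequestPct := float64(PodCPURequest) / float64(NodeAllocatableCPU) * 100
    fmt.Printf("Pod requests %.1f%% of the node's allocatable CPU\n", cpuRequestPct)
}
if NodeAllocatableMemory > 0 {
    memRequestPct := float64(PodMemoryRequest) / float64(NodeAllocatableMemory) * 100
    fmt.Printf("Pod requests %.1f%% of the node's allocatable memory\n", memRequestPct)
}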


Pros and Cons of Using Go Informers for a Custom Monitoring Service


Pros:

  1. Flexibility: With Go Informers, you have complete control over the monitoring logic and can tailor it to meet your unique needs. You can customize the service to watch any aspect of your applications or Kubernetes cluster.
  2. Integration: Go Informer enables smooth integration with additional application or infrastructure components. Adding monitoring logic directly to your current Go-based services or applications is a straightforward process.
  3. Scalability: Go Informer can grow with your Kubernetes cluster because it is a component of the Kubernetes client library. It efficiently uses Kubernetes’ scalability features, like horizontal pod autoscaling, to manage demanding monitoring jobs.
  4. Real-time monitoring: Go Informer enables real-time monitoring by reacting to Kubernetes events as they occur. This can be helpful for monitoring important events or performing immediate actions based on changes in the cluster.

Cons:

  1. Development overhead: Building a custom monitoring service with Go Informers requires development knowledge and effort. You need to write and maintain code for event processing, data storage, alerting, and visualization, which can take a considerable amount of time.
  2. Complexity: Custom monitoring systems built with Go Informers tend to grow more complex over time as monitoring requirements evolve, and maintaining such a system increases overhead and cost.
  3. Limited features: Go Informer offers the basic elements needed to monitor Kubernetes clusters. However, it can lack some of the more advanced capabilities and functionalities found in specialized monitoring programs like Prometheus or Grafana. For instance, it does not have built-in support for long-term data retention, complex alerting rules, or sophisticated visualization options.
  4. Resource utilization: Running a custom monitoring service alongside your applications may consume additional resources, such as CPU, memory, and network bandwidth, which could impact the overall performance and scalability of your cluster.

Conclusion

Using the Go Informers package to monitor your Kubernetes cluster is like having a dependable assistant. It simplifies the cluster monitoring process, allowing immediate reactions to any issues or changes and enabling even inexperienced developers to stay informed. With Go Informers, mastering Kubernetes monitoring becomes a breeze!

Also, compared to specialized monitoring tools like Prometheus or Grafana, Go Informers lack some advanced features and may require additional effort, despite their flexibility and real-time monitoring capabilities. Thus, to select the best monitoring solution for your Kubernetes cluster, consider your unique needs, available resources, and level of experience.


“Simplifying Kubernetes Monitoring with Go Informers” Tech Bite was brought to you by Kenan Bajrić, Junior Software Engineer at Atlantbh.

Tech Bites are tips, tricks, snippets or explanations about various programming technologies and paradigms, which can help engineers with their everyday job.
