The Lost Fourth Pillar of Observability - Config Data Monitoring

A lot has been written about logs, metrics, and traces, as they are indeed key components of observability and of application and system monitoring. One thing that is often overlooked, however, is config data and its observability. In this blog, we'll explore what config data is, how it differs from logs, metrics, and traces, what architecture is needed to store this type of data, and in which scenarios it provides value.


https://www.cloudquery.io/blog/fourth-lost-pillar-of-observability-config-data-monitoring
L4-L7 Performance: Comparing LoxiLB, MetalLB, NGINX, HAProxy

As Kubernetes continues to dominate the cloud-native ecosystem, the need for high-performance, scalable, and efficient networking solutions has become paramount. This blog compares LoxiLB with MetalLB as Kubernetes service load balancers and pits LoxiLB against NGINX and HAProxy for Kubernetes ingress. These comparisons mainly focus on performance for modern cloud-native workloads.


https://dev.to/nikhilmalik/l4-l7-performance-comparing-loxilb-metallb-nginx-haproxy-1eh0
Optimising Node.js Application Performance

In this post, I’d like to take you through the journey of optimising Aurora, our high-traffic GraphQL front-end API built on Node.js and running on Google Kubernetes Engine. We’ve managed to reduce our pod count by over 30% without compromising latency, thanks to improvements in resource utilisation and code efficiency.

I’ll share what worked, what didn’t, and why. So whether you’re facing similar challenges or simply curious about real-world Node.js optimisation, you should find practical insights here that you can apply to your own projects.


https://tech.loveholidays.com/optimising-node-js-application-performance-7ba998c15a46
The Karpenter Effect: Redefining Our Kubernetes Operations

A reflection on our journey towards AWS Karpenter, improving our upgrades, flexibility, and cost efficiency in a 2,000+ node fleet


https://medium.com/adevinta-tech-blog/the-karpenter-effect-redefining-our-kubernetes-operations-80c7ba90a599
Replacing StatefulSets With a Custom K8s Operator in Our Postgres Cloud Platform

Over the last year, the platform team here at Timescale has been working hard on improving the stability, reliability and cost efficiency of our infrastructure. Our entire cloud is run on Kubernetes, and we have spent a lot of engineering time working out how best to orchestrate its various parts. We have written many different Kubernetes operators for this purpose, but until this year, we always used StatefulSets to manage customer database pods and their volumes.

StatefulSets are a native Kubernetes workload resource used to manage stateful applications. Unlike Deployments, StatefulSets provide unique, stable network identities and persistent storage for each pod, ensuring ordered and consistent scaling, rolling updates, and maintaining state across restarts, which is essential for stateful applications like databases or distributed systems.
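
As a rough illustration of those guarantees, here is a minimal sketch using the official Kubernetes Python client; the names (a three-replica "pg" StatefulSet with a per-pod data volume) are hypothetical and not Timescale's actual configuration:

from kubernetes import client, config

config.load_kube_config()

# Each replica gets a stable identity (pg-0, pg-1, pg-2) and its own
# PersistentVolumeClaim stamped out from volume_claim_templates -- the
# properties a plain Deployment cannot provide for a database.
sts = client.V1StatefulSet(
    metadata=client.V1ObjectMeta(name="pg"),
    spec=client.V1StatefulSetSpec(
        service_name="pg",  # headless Service that gives each pod a stable DNS name
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "pg"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "pg"}),
            spec=client.V1PodSpec(containers=[client.V1Container(
                name="postgres",
                image="postgres:16",
                volume_mounts=[client.V1VolumeMount(
                    name="data", mount_path="/var/lib/postgresql/data")],
            )]),
        ),
        volume_claim_templates=[client.V1PersistentVolumeClaim(
            metadata=client.V1ObjectMeta(name="data"),
            spec=client.V1PersistentVolumeClaimSpec(
                access_modes=["ReadWriteOnce"],
                # plain dict keeps this compatible across client versions
                resources={"requests": {"storage": "10Gi"}},
            ),
        )],
    ),
)

client.AppsV1Api().create_namespaced_stateful_set(namespace="default", body=sts)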

However, working with StatefulSets was becoming increasingly painful and preventing us from innovating. In this blog post, we’re sharing how we replaced StatefulSets with our own Kubernetes custom resource and operator, which we called PatroniSets, without a single customer noticing the shift. This move has improved our stability considerably, minimized disruptions to the user, and helped us perform maintenance work that would have been impossible previously.


https://www.timescale.com/blog/replacing-statefulsets-with-a-custom-k8s-operator-in-our-postgres-cloud-platform
Kubernetes Authentication - Comparing Solutions

This post is a deep dive comparing different solutions for authenticating into a Kubernetes cluster. The goal is to give you an idea of what each solution provides for a typical cluster deployment using production-capable configurations. We're also going to walk through the deployments to see how long each project takes to set up, and look at common operations tasks for each solution. This blog post is written from the perspective of an enterprise deployment, but if you're looking to run a Kubernetes lab, or use Kubernetes for a service provider, I think you'll still find it useful. We're not going to do a deep dive into how either OpenID Connect or Kubernetes authentication actually works.


https://www.tremolo.io/post/kubernetes-authentication-comparing-solutions
From four to five 9s of uptime by migrating to Kubernetes

When we launched User Management along with a free tier of up to 1 million MAUs, we faced several challenges using Heroku: the lack of an SLA, limited rollout functionality, and inadequate data locality options. To address these, we migrated to Kubernetes on EKS, developing a custom platform called Terrace to streamline deployment, secret management, and automated load balancing.


https://workos.com/blog/from-four-to-five-9s-of-uptime-by-migrating-to-kubernetes
Tackling OOM: Strategies for Reliable ML Training on Kubernetes

Tackle OOMs => reliable training => win!


https://medium.com/better-ml/tackling-oom-strategies-for-reliable-ml-training-on-kubernetes-dcd49a2b83f9
Kubernetes - Network Policies

A NetworkPolicy is a Kubernetes resource that defines rules for controlling the traffic flow to/from pods. It works at layer 3 (IP) and layer 4 (TCP/UDP) of the OSI model. The policies are namespaced and use labels to identify the target pods and define allowed traffic.
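
For concreteness, here is a minimal sketch of such a policy built with the official Kubernetes Python client (the labels, namespace, and port are hypothetical): it allows ingress to pods labelled app=api only from pods labelled app=frontend in the same namespace, on TCP 8080, and implicitly denies all other ingress to those pods.

from kubernetes import client, config

config.load_kube_config()

# pod_selector picks the pods the policy applies to; the single ingress rule
# whitelists traffic from app=frontend pods on TCP 8080. Once the policy is
# in place, all other inbound traffic to app=api pods is denied.
policy = client.V1NetworkPolicy(
    metadata=client.V1ObjectMeta(name="allow-frontend-to-api"),
    spec=client.V1NetworkPolicySpec(
        pod_selector=client.V1LabelSelector(match_labels={"app": "api"}),
        policy_types=["Ingress"],
        ingress=[client.V1NetworkPolicyIngressRule(
            _from=[client.V1NetworkPolicyPeer(
                pod_selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
            )],
            ports=[client.V1NetworkPolicyPort(protocol="TCP", port=8080)],
        )],
    ),
)

client.NetworkingV1Api().create_namespaced_network_policy(namespace="default", body=policy)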


https://medium.com/@umangunadakat/kubernetes-network-policies-41f288fa53fc
wave

Wave watches Deployments, StatefulSets, and DaemonSets within a Kubernetes cluster and ensures that their Pods always have up-to-date configuration.

By monitoring mounted ConfigMaps and Secrets, Wave can trigger a Rolling Update of the Deployment when the mounted configuration is changed.
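
As a rough sketch of how opting in looks (assuming Wave's documented wave.pusher.com/update-on-config-change annotation; the Deployment name and namespace are hypothetical), a Deployment can be patched with the Kubernetes Python client:

from kubernetes import client, config

config.load_kube_config()

# Adding the annotation opts the Deployment into Wave: from then on, Wave
# triggers a rolling update whenever a ConfigMap or Secret it mounts changes.
patch = {"metadata": {"annotations": {"wave.pusher.com/update-on-config-change": "true"}}}
client.AppsV1Api().patch_namespaced_deployment(name="my-app", namespace="default", body=patch)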


https://github.com/wave-k8s/wave
winter-soldier

Winter Soldier can be used to:

- clean up (delete) Kubernetes resources
- scale workload pods down to 0

at user-defined times of day and under user-defined conditions. Winter Soldier is an operator that expects those conditions to be defined using its Hibernator CRD.


https://github.com/devtron-labs/winter-soldier
terraform-mcp-server

A Model Context Protocol (MCP) server that provides tools for interacting with the Terraform Registry API. This server enables AI agents to query provider information, resource details, and module metadata.


https://github.com/thrashr888/terraform-mcp-server
rybbit

Rybbit is the modern, open-source, and privacy-friendly alternative to Google Analytics. It takes only a couple of minutes to set up and is super intuitive to use.


https://github.com/rybbit-io/rybbit
Using MCPs to Run Terraform

We jump into a hands-on exploration of the Model Context Protocol (MCP), sharing our experiment using an MCP client to run terraform init, plan, and apply. We share our take on where agents add value and highlight security considerations when adding MCPs to your workflow.
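
As a loose illustration of the pattern rather than the article's or any particular MCP server's implementation, an agent-facing Terraform tool usually ends up shelling out to the CLI; here is a minimal Python sketch (the function name and the ./infra path are hypothetical) with approval gating on apply as one possible security control:

import subprocess

def run_terraform(command: str, workdir: str, approved: bool = False) -> str:
    """Run a whitelisted terraform command and return its combined output."""
    if command not in {"init", "plan", "apply"}:
        raise ValueError(f"unsupported terraform command: {command}")
    args = ["terraform", command, "-no-color"]
    if command == "apply":
        # Security consideration: an agent should never apply changes
        # without an explicit human approval signal.
        if not approved:
            raise PermissionError("terraform apply requires explicit human approval")
        args.append("-auto-approve")
    result = subprocess.run(args, cwd=workdir, capture_output=True, text=True)
    return result.stdout + result.stderr

# Example: the agent can init and plan freely, but never applies unapproved.
print(run_terraform("init", workdir="./infra"))
print(run_terraform("plan", workdir="./infra"))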


https://masterpoint.io/blog/using-mcps-to-run-terraform
Upgrading ECK Operator: A Side-by-Side Kubernetes Operator Upgrade Approach

To leverage the advancements in recently released ECK operator versions, we embarked on an upgrade project. Operator upgrades are inherently complex and risky, often involving significant changes that can affect system stability.

In this article, I’ll delve into the challenges we encountered and the strategies we employed to manage operator upgrades for stateful workloads like Elasticsearch. Additionally, I’ll detail how we modified the ECK operator to facilitate a more resilient side-by-side upgrade process.


https://engineering.mercari.com/en/blog/entry/20250428-upgrading-eck-operator-a-side-by-side-kubernetes-operator-upgrade-approach/
Zero-Touch Bare Metal at Scale

In this episode, we talk about how we operationalize the hardware once it’s installed.


https://blog.railway.com/p/data-center-build-part-two
A Case Study in Synchronizing Database Schema Updates between Projects and Environments

If you’ve come across this post, you probably know about relational database schema migrations. You’ve likely worked with a relational database like PostgreSQL or MySQL and a tool for managing schema migrations via code.

This post doesn’t walk you through the basics of schema migrations, but rather explains an especially complicated and atypical schema-migration use case. It will also show you the nuances of the custom schema migration solution we created here at DoubleVerify.


https://medium.com/doubleverify-engineering/a-case-study-in-synchronizing-database-schema-updates-between-projects-and-environments-a69a3cc38985