Kubernetes is a container orchestration platform that automates the deployment and scaling of applications. Since it was first released by a team of Google engineers, it has exploded in popularity as microservice architecture became the standard for application deployment on the modern web.
And it’s easy to see why. It doesn’t take much experience in DevOps to understand how powerful Kubernetes can be and why it is ranked right alongside Linux in terms of its impact on the software industry as a whole.
Once configured, Kubernetes will monitor and manage every container that composes your microservices application to make sure it scales with demand, handles errors gracefully, and rolls updates out in a controlled manner across your user base.
By leveraging a containerized microservices architecture with Kubernetes, engineers are able to minimize unnecessary resource consumption and boost efficiency — saving money and the environment.
The Purpose of Kubernetes
You can think of a containerized application like a terrarium. It mimics an environment that its inhabitants can survive and function in, but, if properly packaged, it is portable and can work anywhere with all of its needs met.
In this analogy, Kubernetes would be like the zookeepers, constantly organizing and maintaining the containers and their contents, scheduling when they should be available or taken down for maintenance, and checking in on the status of each container at any point in time.
Admittedly, this is a gross oversimplification. Kubernetes does far more to keep a container environment in an optimally functional state, including scaling application resources to match the strain on the current configuration, allowing for automatic rolling updates over time, and much, much more.
But at the end of the day, its purpose is to monitor the state of your containerized application and constantly compare that to the desired state you have configured. Whenever something is off, like a good zookeeper, Kubernetes takes some action to set it right.
Why We Built ShipShape
A wise uncle once said:
With great power, comes great complexity.
Or something like that.
Our good friend Kubernetes is no exception. Many engineers would describe Kubernetes as a dense and challenging technology. Even the most senior engineers can be heard recounting their war stories with a sigh:
“I opened the Kubernetes docs once. Then I closed my laptop and promised to never go back.”
Its notorious learning curve has surely led to the premature death of many promising DevOps careers. And for managers and startups who need quick insights into their Kubernetes clusters, it’s an impossible wall to scale.
The current open-source visualization tools all require a solid understanding of Kubernetes, or at least the attention span to watch the same 30-minute tutorial a few times until the graphs magically start populating.
What’s more, many of the solutions have unique requirements across the different platforms, such as Amazon, Google, Azure, and Minikube.
We designed ShipShape to be simple to use and easy to set up with minimal Kubernetes experience. If you have a cluster running, we can show you what it’s doing!
When cloned directly from our GitHub repo, ShipShape can automatically locate and import your kubeconfig file to connect to your cluster without any additional steps or complicated IAM security authorization.
As long as you can access your cluster through the command line, you can visualize the metrics with our application!
There are three main views to approach your new cluster with: Pods, Nodes, and Cluster.
The pod view focuses on the software that comprises your application. It lets you see how your containers are organized and what resources are allocated to each.
The node view gives you insight into the physical or virtual hardware that is running your containers. It lets you look at the total allocatable resources in terms of CPU, memory, and disk space available to the nodes, and how hard they are working under the current configuration, to identify potential system strain.
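As an illustration of the arithmetic behind a node view (a sketch, not ShipShape’s actual implementation), the comparison of usage against allocatable capacity involves parsing Kubernetes resource quantities, which use suffixes like `m` for millicores and `Ki`/`Mi`/`Gi` for memory. The helper names here are hypothetical:

```javascript
// CPU is expressed in whole cores ("2") or millicores ("500m").
function parseCpu(q) {
  return q.endsWith('m') ? parseFloat(q) / 1000 : parseFloat(q);
}

// Memory uses binary suffixes: Ki, Mi, Gi.
function parseMemory(q) {
  const units = { Ki: 2 ** 10, Mi: 2 ** 20, Gi: 2 ** 30 };
  const match = q.match(/^(\d+(?:\.\d+)?)(Ki|Mi|Gi)?$/);
  if (!match) throw new Error(`unrecognized quantity: ${q}`);
  return parseFloat(match[1]) * (units[match[2]] || 1);
}

// Utilization as a rounded percentage of allocatable capacity.
function utilizationPercent(usage, allocatable, parse) {
  return Math.round((parse(usage) / parse(allocatable)) * 100);
}

// A node using 500m CPU out of 2 allocatable cores is at 25%.
console.log(utilizationPercent('500m', '2', parseCpu));
// 1536Mi of memory in use out of 4Gi allocatable is about 38%.
console.log(utilizationPercent('1536Mi', '4Gi', parseMemory));
```

A node whose utilization stays persistently near 100% on any one of these dimensions is a candidate for the kind of system strain the node view is meant to surface.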
The cluster view leverages an open-source Prometheus server to get an aggregated view of some very important cluster-wide metrics such as average CPU load, memory consumption, and network traffic.
Prometheus will take a little setup for your cluster, but as long as you’ve deployed the service under a Prometheus namespace, ShipShape will automatically forward its internal IP to a local port on your server and sustain that connection to receive real-time metrics.
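For the curious, cluster-wide aggregates like the ones described above can be expressed as PromQL queries along these lines, assuming a standard node-exporter setup (the exact queries any given dashboard uses may differ):

```promql
# Average fraction of non-idle CPU time across the cluster
avg(rate(node_cpu_seconds_total{mode!="idle"}[5m]))

# Cluster-wide memory utilization as a fraction of total
1 - sum(node_memory_MemAvailable_bytes) / sum(node_memory_MemTotal_bytes)

# Total inbound network traffic in bytes per second
sum(rate(node_network_receive_bytes_total[5m]))
```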
The ability to easily monitor and track your cluster’s performance and health over time is essential for optimizing performance and preventing crashes and errors. ShipShape makes that easy even for those without extensive DevOps experience, so you can start optimizing your microservices application.
The Future of ShipShape
Today the ShipShape alpha launches to the world as part of the OS Labs Accelerator Program. ShipShape is built with a React frontend and a Node.js/Express backend that communicates with the Kubernetes API using your local machine’s cluster access information, so we can get you your metrics with no need to expose them to the world.
To get set up monitoring your cluster, check out the GitHub repo. Don’t forget to click that star button to follow future iterations!
We’re inspired by the world changing impact of free frameworks like Kubernetes and are dedicated to the open source community that makes it possible.
To view a live demo or get in touch to contribute, visit: getinshipshape.io
You can find our product roadmap at the bottom of the readme.
In the meantime, keep those Clusters ShipShape :)
The ShipShape Crew
This article was brought to you by the ShipShape maintainers: