Reducing Kubernetes Cost: Kubecost vs Cast.ai

Lukas Gentele
Tyler Charbonneau
10 min read

Maintaining a microservices ecosystem is undoubtedly challenging. Yet many companies that master those day-to-day operations still struggle with cost, often without fully understanding their operating expenses. Running clusters and horizontally scaled containers at 99.99 percent uptime is expensive.

Reserving memory, compute capacity, storage, networking, and add-on services incurs numerous charges. Organizations that don’t plan effectively often end up overpaying, whether they use pay-as-you-go, term-contract, or volume-based pricing models.

While Kubernetes itself is open source and free, the third-party services that help deploy and manage it are not. Costs also appear to be mounting over time. A recent survey of 178 IT professionals uncovered the following:

  • 68% have seen increases in Kubernetes costs
  • 44% rely on monthly estimates to determine total expenses
  • 24% do nothing to curb or control their costs
  • Just 38% can predict their bills within 10% accuracy

This unpredictability is burdensome for companies of all sizes, and it becomes especially troubling as deployments scale. With the number of pods per organization having doubled between 2019 and 2021, opportunities to trim spending are abundant. Large companies have a particularly strong incentive to cut costs, since they oversee multiple Kubernetes-based applications and manage large fleets of containers.

Self-service users, by comparison, can have an even harder time managing their Kubernetes expenses. Instead of turning to managed offerings like GKE or EKS, these teams deploy manually or piece together various vendors and tools. Manual deployments are harder to manage, and the often piecemeal nature of these solutions, along with mixing and matching compute machines or VMs, can complicate budgeting.

#Tackling Cost in Two Ways

There are two processes behind controlling your Kubernetes expenses:

  • monitoring
  • optimization

Monitoring entails actively tracking and tabulating operating costs on a continual basis; user activity, autoscaling, and even downtime have their monetary impacts. It’s quite common for costs to fluctuate, and doubly so for pay-as-you-go deployments subject to dynamic resource demands.

Monitoring becomes far easier with dedicated tools. These applications excel at tapping into your deployments: they can map your containerized architecture, track scaling behavior, surface real-time cluster costs, and boost overall visibility into your ecosystem. Raw data and dashboard visualizations paint a clearer picture of your expenditures.

Next comes optimization, the continual process of cutting costs based on your monitoring results. Just as security vulnerabilities demand remediation, spending inefficiencies must be eliminated. This is where automation and artificial intelligence often come into play. While manual optimization is possible, algorithms can point you to the most meaningful avenues for cost reduction, uncovering both low-hanging fruit and deeper issues. Some services even generate human-readable reports as the data flows in.

Once you know that you can save, the question quickly becomes, “How much can I save?” The best tools and optimization strategies extract the highest possible savings, and tools like Kubecost and Cast.ai excel at both monitoring and optimization. So, how exactly do we evaluate them?

#Comparison Criteria

There are many facets to every SaaS tool, but this article focuses on the following:

  • Performance: Are there bottlenecks within your ecosystem, or other resource-allocation issues impacting how efficiently your applications run? How a tool measures performance, without adding too much overhead of its own while running alongside your workloads, influences its ability to uncover unused resources and similar problems.
  • Maintainability: How easy does the tool make it to maintain continual oversight over your costs? Does the tool itself require any housekeeping to help it run effectively over extended periods of time?
  • Setup Complexity: Is the application easy to install, configure, and get started with?
  • Ease of use: Is the application user-friendly and approachable enough to harness on a daily basis? Are its core features usable without requiring too much of a learning curve?
  • Savings: How much money, in dollars or as a percentage, can the tool save you on average over a given timeframe?
  • Price: How much does the tool itself cost, and what’s the cost structure?

In the battle of Kubecost vs. Cast.ai, picking a winner is a challenge. If the two end up neck and neck, remember that the best tool is the one tailored to your goals and preferences. This comparison also highlights each tool’s unique features that promote monetary savings.

#Introducing Kubecost

The Kubecost UI

Image courtesy of Kubecost.

Simply put, Kubecost bills itself as “a comprehensive cost monitoring and optimization solution for teams running Kubernetes.” The value proposition is clearly laid out, and Kubecost’s allocation model is open source and hosted on GitHub for public use.

The service bundles a kubectl CLI plugin for cost queries, its cost-allocation model, cost-analysis Helm charts, and a cost analyzer to assist users. Kubecost hit the market in 2019; it’s a relatively young tool that arrived at just the right time amid the modern microservices boom.
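
If you want a feel for that bundled CLI workflow, here’s a minimal sketch using Kubecost’s kubectl-cost plugin, installed via Krew. The subcommands and flags shown follow the plugin’s public documentation, but exact names can vary between versions, so treat them as illustrative.

```bash
# Install the kubectl-cost plugin (assumes Krew is already set up)
kubectl krew install cost

# Rough per-namespace spend over the last seven days
kubectl cost namespace --window 7d

# Break deployment costs down by resource type (CPU, memory, PV, network)
kubectl cost deployment --window 7d --show-all-resources
```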

From a performance standpoint, Kubecost’s mechanisms are designed not to interfere with your deployment components, no matter which vendor they come from. Running the service doesn’t incur a major performance hit.

Maintaining Kubecost and leveraging it to lower costs should be relatively simple. This is reflected in the documentation, which is both straightforward and quick to read; simplicity seems to lie at the core of the Kubecost experience. The same goes for integration: Kubecost lets you bring your own Prometheus or Grafana, which helps immensely with both metrics handling and visualization.

When we look at Kubecost’s API options, they appear to be geared toward “small to medium-sized clusters.” That doesn’t mean Kubecost only suits smaller organizations; cluster size doesn’t necessarily align with company size. Rather, choosing larger or smaller clusters is a deployment strategy that trades off centralization against flexibility, and since flexibility aligns more closely with microservices best practices, Kubecost seems to be hitting the right marks. Options like the Allocation API can also scale to large clusters, giving users more leeway within their own deployments.
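
As a rough illustration of how the Allocation API is typically queried, the sketch below port-forwards the cost-analyzer service and requests a seven-day cost breakdown aggregated by namespace. The service name, port, and query parameters reflect Kubecost’s documented defaults, but verify them against your installed version.

```bash
# In one terminal: expose the Kubecost cost-analyzer service locally
kubectl port-forward -n kubecost svc/kubecost-cost-analyzer 9090:9090

# In another terminal: last 7 days of cost, aggregated per namespace
curl -s "http://localhost:9090/model/allocation?window=7d&aggregate=namespace"
```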

Regarding setup, the application claims to be installable in just five minutes. Installing Kubecost ideally means using the service’s free Helm chart, which includes all the components, dependencies, access privileges, scalability tooling, and flexibility needed to operate Kubecost throughout its lifetime. Alternatively, you can generate YAML via helm template. The last two install options, a flat manifest and a direct-on-pod install, have notable limitations: the former grants less flexibility, while the latter strips away the UI and full enterprise support (SAML included).
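
A minimal install sketch, assuming the Helm-chart route described above; the repository URL and chart name follow Kubecost’s public install docs, and the helm template variant is shown for teams that prefer rendered YAML.

```bash
# Add the Kubecost chart repository and install into its own namespace
helm repo add kubecost https://kubecost.github.io/cost-analyzer/
helm repo update
helm upgrade --install kubecost kubecost/cost-analyzer \
  --namespace kubecost --create-namespace

# Alternative: render plain YAML with helm template and apply it yourself
kubectl create namespace kubecost
helm template kubecost kubecost/cost-analyzer --namespace kubecost > kubecost.yaml
kubectl apply -n kubecost -f kubecost.yaml
```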

Configurations are either basic or advanced. Basic configuration entails the following (a configuration sketch follows the list):

  • Setting up metric storage
  • Setting limits and requests
  • Setting up cloud integrations
  • Using Prometheus, Grafana, or existing node exporters
  • Exposing Kubecost’s APIs and functionality with Ingress
  • Reflecting reserved or committed pricing
  • Deploying without persistent volumes

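To make the basic configuration items above concrete, here’s a hedged sketch of an install that reuses an existing Prometheus and Grafana and skips persistent volumes. The value keys follow the cost-analyzer chart’s documented options, but they can change between chart versions, so double-check them before use.

```bash
# Illustrative only: point Kubecost at existing monitoring components and
# run without a persistent volume (ephemeral metric storage)
helm upgrade --install kubecost kubecost/cost-analyzer \
  --namespace kubecost --create-namespace \
  --set global.prometheus.enabled=false \
  --set global.prometheus.fqdn=http://prometheus-server.monitoring.svc \
  --set global.grafana.enabled=false \
  --set global.grafana.domainName=grafana.monitoring.svc \
  --set persistentVolume.enabled=false
```
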
While there’s plenty you can do with Kubecost, not all of these items are mandatory, and most are simple to complete. Should you need out-of-cluster cost monitoring, the steps are clearly outlined in the online documentation.

Kubecost is also relatively easy to use; you can approach cost monitoring in whatever way suits your level of technical ability. While UI-based options are available, you can also jump into the CLI and run script-based commands as desired. There’s no need to adapt your typical workflows to the tool; quite the opposite. Overall, the service blends easy cost breakdowns with multi-cloud support, real-time alerting, and more to promote savings. Kubecost claims to save teams 30 to 50 percent on their total infrastructure costs.

Finally, Kubecost is priced by service tier. A free option offers limited cluster support, feature availability, and metric retention. The Business tier builds on this, scaling to 200 nodes and beyond (at $449, $799, or more per month), while no limits are specified for Enterprise customers. Kubecost’s Enterprise pricing isn’t standardized and is instead determined on a case-by-case basis.

#Introducing Cast.ai

The Cast.ai UI

Image courtesy of Cast.ai.

Cast.ai’s secret sauce is apparent in its name. The service employs AI algorithms that do much of the heavy lifting for cost optimization and periodic bill analysis. Like Kubecost, the application aims to slash your costs by 50 percent, and it explicitly calls out benefits for Kubernetes DevOps practices. The solution is also built to support spot instances and autoscaling, two features found in Kubecost as well. There’s some expected overlap here and in other areas, as you’ll soon discover.

From a performance standpoint, Cast.ai’s mechanisms are likewise designed to play nicely with your deployment components, regardless of vendor. The developers are particularly keen on preserving full performance and reliability. And should you deactivate Cast.ai, you keep your cluster afterward.

Maintaining Cast.ai to manage your own cost savings seems fairly straightforward. If the documentation is any hint, there may be more “meat” behind the Cast solution: the docs are easy to read, yet the content feels more robust and generally denser than Kubecost’s.

It’s not entirely clear whether that reflects greater complexity in Cast.ai itself. It’s worth noting, however, that the Cloud Cost Optimization platform exists alongside the company’s two other platforms: DevOps Optimization and Disaster Prevention.

The console lies at the heart of Cast.ai’s user experience, which is another plus for ease of use. The GUI is the conduit through which you make configuration changes, carry out administrative decisions, and complete general maintenance tasks as required.

For setup, you can either connect your existing cluster(s) to Cast.ai or deploy an entirely new cluster to leverage the software. The latter is preferable for multi-cloud setups, according to the Cast.ai team, and involves adding cloud credentials, creating a cluster, and deploying your application within it. Overall, Cast.ai and Kubecost are on level terms when it comes to time investment: onboarding an external cluster takes about five minutes, and creating a purpose-built multi-cloud cluster from scratch takes only ten.
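
Connecting an existing cluster is normally a one-liner generated in the Cast.ai console. The Helm-based sketch below shows roughly what that looks like for a read-only agent; the chart repository, chart name, and value keys are assumptions based on Cast.ai’s public Helm charts and may differ from what the console generates for your cloud.

```bash
# Illustrative agent onboarding for an existing cluster (values are assumptions;
# use the exact command generated by the Cast.ai console for your environment)
helm repo add castai-helm https://castai.github.io/helm-charts
helm repo update
helm upgrade --install castai-agent castai-helm/castai-agent \
  --namespace castai-agent --create-namespace \
  --set apiKey=$CASTAI_API_KEY \
  --set provider=eks   # or gke / aks, depending on where the cluster runs
```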

It’s also pretty easy to use Cast.ai, provided you have some prior experience with microservices management. You should know the basics of running Kubernetes in public clouds and have at least intermediate proficiency with kubectl. The console, touched on briefly above, adds point-and-click functionality: you can create, delete, and pause clusters, and even download kubeconfig files, with just a few clicks. This all works in conjunction with Cast’s assessment of your infrastructure spending.

That’s granular control with little effort. Finally, APIs help compile your usage data into dashboards. As with Kubecost, charts and other visualizations are very readable; they help you understand your spending and resource usage without digging through complex logs. That’s not to discount the importance of logs, however: Cast.ai’s Audit Log lays out timestamped operations in an organized format.
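
If you’d rather pull that data programmatically than read it in the console, the REST API can be queried with an API key. The base URL, endpoint path, and header name below are assumptions drawn from Cast.ai’s public API reference, so confirm them against the current documentation.

```bash
# Hypothetical example: list clusters connected to Cast.ai using an API key
# (endpoint path and header name are assumptions; check the current API docs)
curl -s "https://api.cast.ai/v1/kubernetes/external-clusters" \
  -H "X-API-Key: $CASTAI_API_KEY"
```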

So, while you might expect to save roughly 50 percent with Cast.ai, how much does the tool itself cost? There’s a free tier that’s limited in overall scope while still providing AI analysis. The company’s Growth and Growth Pro tiers cost $200 and $1,000 per month, respectively. Enterprise plans start at $5,000 per month, and those costs likely grow alongside infrastructure demands.

#Conclusion

It’s certainly hard to choose poorly between Kubecost and Cast.ai. Both tools are highly capable, approachable, and pleasing to the eye; these are modern solutions built for today’s organizations. Kubecost appears to win on pure simplicity, yet it might offer slightly fewer features. At least from a marketing standpoint, Kubecost doesn’t call out AI nearly as prominently as Cast.ai, if at all, which seems to be a differentiating feature. Otherwise, both are easy to set up, use, and start saving money with.

Comparing the paid tiers is tricky. Kubecost’s middle tier appears cheaper at its higher end, though expanded scaling introduces individualized extra costs. Cast.ai’s enterprise pricing certainly isn’t cheap, while Kubecost’s isn’t easily obtainable without contacting sales.

Also, if you are looking to save money across your Kubernetes ecosystem, Loft Labs has developed a powerful feature called Sleep Mode, which can put your workloads to sleep after specified periods of inactivity. Sleep Mode is a great option for things like dev environments that aren’t needed 24x7. Give Loft a try today, and learn how to cut idle usage costs by up to 76 percent.

Photo by Alexander Schimmeck on Unsplash
