Elasticsearch's total cost of operation is a big pain point for many organizations, from incorrectly sized heap space to the wrong type of hardware. Learn how you can optimize your Elasticsearch costs.
What is Elasticsearch?
Elasticsearch is a search engine built on top of the open-source Apache Lucene library. It is designed to provide fast and scalable full-text search and analytics capabilities. Elasticsearch is commonly used in applications that require real-time search, analysis, and visualization of large and complex datasets. It has become a popular choice for powering search functionality in a wide range of applications, including e-commerce, social media, and enterprise search.
Like anything in the big data space and technical stratosphere in general, there are pros and cons you should be aware of before getting started.
Elasticsearch comes with its own set of unique characteristics, and for some use cases and workloads it is best in class. No wonder it became the industry's de facto standard for search, real-time BI and log analytics:
- Great for scale - Elasticsearch is an excellent option when you need to scale up fast. Its architecture is flexible and robust, built from components like clusters (collections of nodes) and indexes (which organize data into logical namespaces). These abstraction levels simplify management, so everything can be controlled easily as the cluster grows.
- Distributed processing and storage - Elasticsearch is fully capable of processing data that lives on a single node, but it really comes to life when working across multiple nodes. Primary and replica shards are spread across the nodes, and parallel processing enables high throughput: it simply leverages the power and memory of every node for optimal results.
What makes this parallelization even more appealing is that no manual intervention is needed to make it happen. You don't have to configure anything; just sit back and let it work.
- Comprehensive ad hoc querying capabilities - Search performance is another pro, thanks to the full-text search Elasticsearch offers. Unlike relational databases, where everything is rigid and inflexible, ES stores data as flexible JSON documents. This allows complex queries whose results are sorted by their relevance to the text-based search query. Speed is power.
But Elasticsearch needs to be configured for best results. Although ES is great out of the box, you will still need to tune it to get the best performance (more on this in the best practices below). Many of the default settings are appropriate for basic workloads, but as your data scales, these defaults may begin to burn a hole in your pocket. Combine that with the wrong instance types, storage mechanisms, memory management and more, and you could be looking at a very expensive cluster.
Without further ado, let's dive into some best practices and advice that could help reduce your Elasticsearch cluster costs.
#1 - Right-size your Java heap configuration
Elasticsearch is a top contender for document search and log analytics, but as noted above, defaults that suit basic workloads can become expensive as you scale up fast. One of the first configurations to revisit is the Java heap size.
Elasticsearch runs on Java. In the JVM, the heap is the area of memory allocated for loaded data, and Elasticsearch depends heavily on having enough heap to rapidly access the data it is querying. That memory is used for several different purposes.
While many of its important data structures live on disk, Elasticsearch still relies heavily on the heap for index metadata and the various in-memory structures that point at data on disk. This interplay between heap and disk is a fundamental reason why Elasticsearch runs as efficiently as it does.
If you’ve left the heap size at 1GB, it is very likely that your Elasticsearch nodes are not making use of all of the available memory on their virtual machines. This wasted RAM directly translates into wasted money and, worse, poor performance from the cluster.
Choosing the correct heap space is time consuming but worth doing. The performance benefits alone make it a compelling way to improve your cluster, but with better capacity utilization of your virtual machines comes further Elasticsearch cost-optimization.
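As a rough sketch, on a data node with around 32GB of RAM you might pin the heap explicitly by dropping a small file into the jvm.options.d directory (the file name and the 16GB figure below are illustrative, not a recommendation):

```
# config/jvm.options.d/heap.options
# Set minimum and maximum heap to the same value so the JVM never
# resizes the heap at runtime. 16g is an example for a ~32GB node;
# size it for your own hardware and validate with monitoring.
-Xms16g
-Xmx16g
```

Recent Elasticsearch versions can auto-size the heap from node roles and available RAM if you leave it unset, but an explicit value makes capacity planning more predictable.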
A word of warning on the heap size
There are limits to how big you should set your heap. The creators of Elasticsearch recommend allocating no more than 50% of available memory to the heap, and keeping it below the roughly 32GB threshold at which the JVM loses compressed object pointers. This makes sure the rest of the memory stays available for off-heap storage and OS-level caches such as the filesystem cache.
- When you increase the size of your heap, you also increase the impact of garbage collection in the JVM.
- Any adjustment to your heap size should be immediately followed by detailed monitoring and benchmarking of your JVM GC pauses, to better understand the impact on performance.
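A quick, low-effort way to check both is the node stats API, which reports heap usage and cumulative garbage collection times per node (the filter_path parameter below simply trims the response to the relevant fields):

```
GET _nodes/stats/jvm?filter_path=nodes.*.name,nodes.*.jvm.mem.heap_used_percent,nodes.*.jvm.gc.collectors
```

If collection times climb sharply after a heap change, you've traded memory headroom for GC pauses and should dial the heap back down.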
#2 - Avoid Oversized Dedicated Master Nodes
When you’re looking to run a productionised Elasticsearch cluster, you may wish to assign roles to your nodes. These roles give different nodes responsibilities:
- Data nodes store the data, and are in charge of ingesting and optimizing it, as well as satisfying queries.
- Master Nodes are responsible for coordinating cluster-wide activities, like assigning shards to nodes.
- Ingest Nodes process incoming documents before they are indexed.
- Machine Learning Nodes execute machine learning jobs (an X-Pack feature), and so on.
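From Elasticsearch 7.9 onwards, roles are assigned per node via the node.roles setting in elasticsearch.yml. A minimal sketch for a dedicated master and a dedicated data node might look like this:

```
# elasticsearch.yml on a dedicated master-eligible node
node.roles: [ master ]

# elasticsearch.yml on a dedicated data node
node.roles: [ data, ingest ]
```

Leaving node.roles unset gives a node every default role, while an empty list makes it a coordinating-only node, so be deliberate about what each node is allowed to do.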
If you've separated nodes by specialized roles (recommended for any production-grade cluster above roughly 20 nodes), you may have fallen into a common trap. When you set these roles, a step many people miss is choosing a different instance type for each role, so that every node is sized for its specialist responsibility.
This leaves master nodes, which are responsible for cluster-wide administrative tasks, with far more memory than they need, driving up your Elasticsearch costs. While data nodes are memory-heavy, master nodes are CPU-bound and mainly need to be highly available.
In your cloud provider, look for instance types that have more CPU than memory, to ensure that more of your resources are effectively utilized by the cluster. You can avoid attaching costly disks as well. Just make sure the node types you choose for master nodes have enough memory to hold the cluster metadata - the larger the cluster, the more memory you’ll need.
You might be able to allocate more memory to the heap on your master nodes.
If you have small master nodes, because your cluster isn’t particularly large, you can allocate a larger percentage of the physical RAM to your node. Rather than the standard 50%, you can allocate up to 75% in your master nodes. This is because the master node itself isn’t holding any data.
However, as your cluster grows, you may find that the memory requirements of your master nodes increase. In larger clusters, this approach can cause performance and operational complications, so be careful when you turn this dial. Fortunately, thanks to the innate scalability of the cloud, resizing master nodes is trivial.
#3 - Get Data Retention Right
As your data increases, it won’t be long before your cluster is holding onto a great deal of information. A difficult but crucial question to ask is this: how much of this data do we truly need? Holding onto everything ensures you never miss out on data, but after a while, your memory and storage are being consumed by documents that are never queried.
There are multiple things to consider if you’re going to implement retention policies:
- Your indexing strategy needs to make it possible to drop whole indices at a time, rather than deleting individual documents en masse (see the example after this list). Elasticsearch can easily drop a single index, but it may struggle to process a high volume of individual delete requests.
- You need to capture metrics on how often different data is queried over time, so you have a data-driven understanding of which documents are used.
- You need to decide whether to push archived documents out to cheap storage, such as AWS S3, so they're still available if you ever need them.
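To make the first point concrete: if you write logs into time-based indices (one per day, for example), retention becomes a cheap metadata operation rather than millions of per-document deletes. The index names below are hypothetical:

```
# Cheap: drop an entire day of logs in a single request
DELETE /logs-2023.01.01

# Expensive: deleting the same documents individually forces Elasticsearch
# to mark each one as deleted and merge it away later
POST /logs-*/_delete_by_query
{
  "query": {
    "range": { "@timestamp": { "lt": "now-30d" } }
  }
}
```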
These questions are specific to the product you’re building, but either way, there is a simple solution to handle this problem for you.
SLM to the rescue
Snapshot Lifecycle Management (SLM) is an Elasticsearch feature that takes regular snapshots of your cluster on a schedule. Snapshots are written to a snapshot repository, can be restored whenever you need them, and can be deleted automatically once a retention period has passed. This gives you a sliding window of documents in your cluster and keeps its storage footprint from getting out of control.
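As a hedged sketch, an SLM policy that snapshots all indices nightly and expires snapshots after 30 days could look like the following (the policy name, cron schedule, and repository name are placeholders, and the repository must already be registered):

```
PUT _slm/policy/nightly-snapshots
{
  "schedule": "0 30 1 * * ?",
  "name": "<nightly-snap-{now/d}>",
  "repository": "my_backup_repository",
  "config": {
    "indices": ["*"]
  },
  "retention": {
    "expire_after": "30d",
    "min_count": 5,
    "max_count": 50
  }
}
```

The retention block is what gives you the sliding window: snapshots older than 30 days are cleaned up automatically, within the minimum and maximum counts you set.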
And you don’t need to delete either!
If you don't want to delete your snapshots after a certain amount of time, or if you want to move them out of your cluster entirely, you can write them to S3 instead by registering a snapshot repository backed by an S3 bucket. This lets you make use of cheap storage while retaining historical data in the event of an audit or to maintain regulatory compliance.
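Registering an S3 bucket as a snapshot repository is a single call once S3 repository support and the relevant AWS credentials are in place; the bucket and repository names here are placeholders:

```
PUT _snapshot/my_s3_repository
{
  "type": "s3",
  "settings": {
    "bucket": "my-es-snapshots",
    "base_path": "cluster-backups"
  }
}
```

Point your SLM policy's repository field at this repository and your snapshots land directly in S3 rather than consuming cluster disk.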
And the data can still be queried if you need it
If you're running on Amazon OpenSearch Service, you can still read these documents by using UltraWarm, which keeps them in S3 while leaving them queryable. If you're not on AWS, you can make use of Elastic's searchable snapshots instead. Either way, you can export and store documents cost-effectively without introducing complex re-ingestion processes just to use this data again. The drawback is that these queries can be quite slow, but that's to be expected: you're making a conscious choice to trade performance for cost.
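With searchable snapshots (a licensed Elastic feature), an index can be mounted straight from the snapshot repository rather than restored in full; a rough sketch, with placeholder snapshot and index names:

```
POST _snapshot/my_s3_repository/nightly-snap-2023.01.01/_mount?wait_for_completion=true
{
  "index": "logs-2023.01.01"
}
```

Depending on the mount options, the data is either copied locally or fetched on demand from the repository with a local cache, and the index remains queryable like any other.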
#4 - Don't Keep Everything in “Hot” Storage
If your cluster is sufficiently large and all of that data is being queried, even infrequently, you may want to take more fine-grained control of the type of storage you maintain to keep your Elasticsearch costs down. Elasticsearch allows you to set up hot, warm, and cold storage within your cluster. This allows you to keep information readily available, so you don’t need to restore snapshots from external sources, but it also allows you to dedicate resources to only the documents that are in demand.
What does hot, warm, and cold storage look like in practice?
Your hot storage may be virtual machines with huge pools of available memory and local SSD storage. This enables them to index large volumes of information and also interface with the hard disk very quickly. Your warm storage may have a little less memory and not be the fastest of SSDs. Your cold storage could even run on a spinning disk, with smaller pools of memory.
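In recent Elasticsearch versions this maps directly onto data tiers: each node advertises a tier role in elasticsearch.yml, and indices are allocated to nodes in the matching tier. A sketch for the three classes of hardware described above:

```
# elasticsearch.yml on the large, SSD-backed machines
node.roles: [ data_hot, data_content ]

# elasticsearch.yml on the mid-range machines
node.roles: [ data_warm ]

# elasticsearch.yml on the spinning-disk machines
node.roles: [ data_cold ]
```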
Logs Example: If you retain all of your logs for a month, you may find that logs that were written yesterday are queried far more often than logs that were written 3 weeks ago. In this case, it doesn’t make sense to hold them in the same type of storage, but you also don’t want to lose them from the cluster just yet. Not all solutions will favor tiered storage, but you may find that you can change the access patterns for certain data, based on how it’s currently used.
By setting up the appropriate configuration in your cluster, you can have different indexes travel from hot, to warm, to cold storage as they age. This means you don't need to use the most expensive storage and node types for every document. This is a vital Elasticsearch cost optimization step for any large-scale cluster and typically yields significant savings.
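Index Lifecycle Management (ILM) is the mechanism that automates this movement. A hedged sketch of a policy that rolls over daily, moves indices to the warm tier after a week, to the cold tier after a month, and deletes them after ninety days (every threshold here is an example, not a recommendation):

```
PUT _ilm/policy/logs-tiered-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d", "max_primary_shard_size": "50gb" }
        }
      },
      "warm": {
        "min_age": "7d",
        "actions": {
          "shrink": { "number_of_shards": 1 },
          "forcemerge": { "max_num_segments": 1 }
        }
      },
      "cold": {
        "min_age": "30d",
        "actions": {}
      },
      "delete": {
        "min_age": "90d",
        "actions": { "delete": {} }
      }
    }
  }
}
```

With data tiers configured, the built-in migrate action moves each index onto warm and cold nodes as it enters those phases, so the tiering happens without manual shard allocation rules.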
#5 - Right Type of Hardware
Once you've configured your cluster correctly, the next step is to ensure that you've picked the right storage and virtual machines to optimize your Elasticsearch costs. For example, your data nodes need memory-intensive instances, while your master nodes rely mostly on CPU.
Rightsizing your VMs (Virtual Machines)
The exact node type that you have available will depend on your cloud provider, but as a rough guide, here are the common node types you may see in a cluster, and some advice on the instance type you should consider:
Hardware for Master Nodes: Your master nodes should run on CPU-optimized instances. However, any voting-only nodes can run on far less powerful instance types, since they will never be elected master and so never actually run the cluster. Master nodes can typically run on smaller instances than data nodes, since they only coordinate and delegate work to the wider cluster.
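A voting-only node is just a master-eligible node with one extra role flag, so it can live on a very small instance; for example:

```
# elasticsearch.yml on a small tie-breaker node
node.roles: [ master, voting_only ]
```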
For Data Nodes: Data nodes do it all, so they need solid all-round instances that can handle CPU, memory, and I/O load. Focus on memory and I/O first, but CPU can become a bottleneck if you skimp on it.
Other node types, like ingest nodes and machine learning nodes, do most of their work on the CPU: ingest nodes process incoming documents before they are indexed, and ML nodes run analysis jobs. Favor CPU-optimized instances for these node types, and if you use these features heavily, consider running nodes dedicated to those roles, as sketched below.
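Dedicating nodes to these roles is again just a matter of role assignment; a minimal sketch:

```
# elasticsearch.yml on a dedicated ingest node
node.roles: [ ingest ]

# elasticsearch.yml on a dedicated machine learning node
node.roles: [ ml ]
```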
But how do you know the right instance type? It’s often an experimental process to find the perfect instance type. If you want to get an edge on the problem, Pulse offers insights into the hardware on which your cluster depends, and can even recommend more appropriate hardware, so that your cluster can run in the most efficient manner possible.
Choosing the correct storage
Possibly more important than your virtual machine type is your storage mechanism. Depending on the type of data you've got in your cluster, you may find that an I/O bottleneck leaves your powerful servers sitting idle.
The most common mistake people make when choosing their storage type is to go with a remote storage option, like Elastic Block Store (EBS) on AWS. EBS is typically acceptable for smaller clusters but doesn't scale well for larger ones: you end up spending a lot of money on provisioned IOPS for very little performance benefit.
It goes back to hot, warm and cold
Teams often try to save money by choosing magnetic storage over solid-state across the board. While the cost per gigabyte is lower with magnetic disks, the performance requirements of your cluster mean you need more of them: I/O throughput is commonly the bottleneck, and random seek time matters too. Overall, you end up spending more on storage without any performance benefit.
However, if you’re looking at cold storage in your cluster, you might wish to employ magnetic storage for those indices that receive only a few requests. This is something you will learn as your cluster grows and usage patterns present themselves.
This all sounds like a lot of work!
On top of these decisions, you’ve also got other mechanisms for tuning your cluster. For example: taking control of your field mappings, using compression, choosing not to index certain fields and more.
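As one illustration, an index template can switch an index to the more aggressive best_compression codec and skip indexing a field you only ever store, never search (the template, index pattern, and field names below are made up):

```
PUT _index_template/logs-template
{
  "index_patterns": ["logs-*"],
  "template": {
    "settings": {
      "index.codec": "best_compression"
    },
    "mappings": {
      "properties": {
        "message":     { "type": "text" },
        "raw_payload": { "type": "keyword", "index": false, "doc_values": false }
      }
    }
  }
}
```

Disabling both index and doc_values on a field means it can no longer be searched or aggregated on, only retrieved, so apply this only to fields you are sure are write-and-read-only.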
Elasticsearch management is complex, but the benefits of a finely tuned, cost-optimized Elasticsearch cluster are difficult to deny. It can be a long and difficult journey to build the best cluster for your company, and Elasticsearch knowledge is both rare and expensive.
Rather than rolling your own bare Elasticsearch cluster, you can upgrade your cluster experience to get sophisticated functionality out of the box. More than that, if you want to be the consumer of a cluster rather than its architect, consider bringing in some experts to help you avoid the most common mistakes on the way to actionable insights.