A detailed breakdown of AWS OpenSearch Service pricing, OpenSearch Serverless OCU costs, Elastic Cloud subscription tiers, and self-managed Elasticsearch expenses - with concrete numbers and cost optimization strategies for 2026.
Search infrastructure pricing is notoriously opaque. AWS OpenSearch Service bills across multiple dimensions. Elastic Cloud ties pricing to subscription tiers that double costs between Standard and Enterprise. OpenSearch Serverless charges per OCU-hour with a minimum floor that surprises most teams. And self-managed deployments carry hidden costs that regularly exceed the visible infrastructure spend by 2-3x.
This guide breaks down the real costs of running OpenSearch and Elasticsearch in production across all four deployment models - AWS OpenSearch Service (provisioned), OpenSearch Serverless, Elastic Cloud, and self-managed - with concrete pricing figures, a side-by-side comparison, and optimization strategies that can cut your search bill by 30-60%.
AWS OpenSearch Service Pricing
AWS OpenSearch Service uses provisioned pricing, meaning you select instance types, node counts, and storage, then pay hourly for what you've allocated. Pricing has three main components: compute (instance hours), storage (EBS volumes), and data transfer.
Instance costs
Instance pricing varies by family and size. In US East (N. Virginia), on-demand rates for the most commonly used types look like this:
| Instance Type | vCPUs | Memory | On-Demand $/hr | Monthly (730 hrs) |
|---|---|---|---|---|
| t3.medium.search | 2 | 4 GiB | $0.073 | ~$53 |
| m6g.large.search | 2 | 8 GiB | $0.128 | ~$93 |
| c6g.large.search | 2 | 4 GiB | $0.113 | ~$82 |
| r6g.large.search | 2 | 16 GiB | $0.167 | ~$122 |
| r6g.xlarge.search | 4 | 32 GiB | $0.335 | ~$245 |
| r6g.2xlarge.search | 8 | 64 GiB | $0.670 | ~$489 |
Graviton-based instances (the "g" suffix) deliver roughly 15-20% better price-performance than their x86 equivalents. For data nodes, r6g instances are the workhorse choice - memory-optimized for heap and OS caches. Master nodes can run on smaller c6g or m6g types since they handle cluster coordination, not data.
Storage and data transfer
EBS storage adds to the bill. Note that EBS attached to an OpenSearch Service domain is billed at the managed-service rate, which is higher than raw EBS prices: General Purpose SSD (gp3) costs approximately $0.122/GB/month under OpenSearch Service (versus ~$0.08/GB/month for raw EBS in us-east-1), and Provisioned IOPS SSD (io1) is roughly $0.135/GB/month plus $0.072 per provisioned IOPS-month. For warm-tier data, UltraWarm uses S3-backed storage at around $0.024/GB/month - a fraction of hot-tier EBS costs.
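The per-GB gap between tiers compounds quickly at scale. A quick sketch of what 10 TB costs per month at each tier, using the illustrative per-GB rates above:

```python
# Monthly storage cost of 10 TB at each tier, using the per-GB rates above
GB = 10 * 1024

hot_gp3 = GB * 0.122    # gp3 at the OpenSearch Service managed rate
raw_ebs = GB * 0.08     # raw EBS gp3 in us-east-1, for reference
ultrawarm = GB * 0.024  # UltraWarm S3-backed warm tier

print(round(hot_gp3), round(raw_ebs), round(ultrawarm))
```

At 10 TB, moving data from hot gp3 (~$1,249/month) to UltraWarm (~$246/month) cuts storage spend by roughly 80% - the basis for the data-tiering strategy discussed later.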
Data transfer within the same Availability Zone is free. Cross-AZ transfer (required for multi-AZ deployments) costs $0.01/GB in each direction. Internet egress follows standard AWS rates starting at $0.09/GB.
OR1, OR2, OM2, and OI2: OpenSearch-optimized instances
AWS now offers a family of OpenSearch-optimized instance types that decouple durable storage from local volumes, persisting data to Amazon S3 while using local storage as a cache. These typically offer better price-performance for indexing-heavy workloads:
- OR1 (memory-optimized, GA 2023) - the original Optimized family. Uses local EBS for caching and S3 for durable storage; supports a writeable warm tier.
- OR2 / OM2 (March 2025) - successors to OR1 with up to 26% higher indexing throughput than OR1 and ~70% over r7g general-purpose. OR2 is balanced compute/memory; OM2 is memory-optimized.
- OI2 (December 2025) - the newest addition, optimized for indexing-heavy workloads. Uses 3rd-generation AWS Nitro local NVMe SSDs for caching with S3 for durable storage. Up to 9% higher indexing throughput than OR2 and 33% over I8g, with up to 22.5 TB of storage and sizes up to 24xlarge.
For most indexing-heavy or large-storage workloads on AWS, the OR2/OM2/OI2 family will outperform classic r6g/r7g instances on cost and throughput. Reserved Instance pricing is supported across all of them.
Reserved Instance savings
Reserved Instances cut costs substantially for steady-state workloads:
- 1-year No Upfront: 31% discount
- 1-year All Upfront: 35% discount
- 3-year No Upfront: 48% discount
- 3-year All Upfront: 52% discount
A production cluster running 3x r6g.xlarge data nodes with 500 GB gp3 storage each, plus 3x c6g.large dedicated master nodes, costs roughly $1,150/month on-demand. With 1-year Reserved Instances, that drops to about $790/month.
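The arithmetic behind that estimate can be sketched with the on-demand rates from the instance table above. The helper below is illustrative, not an AWS API; exact figures shift a few percent depending on which RI payment option and storage rate you plug in:

```python
HOURS_PER_MONTH = 730

# Illustrative on-demand rates from the table above (us-east-1, $/hr)
RATES = {"r6g.xlarge.search": 0.335, "c6g.large.search": 0.113}
GP3_PER_GB_MONTH = 0.122  # managed-service gp3 rate

def monthly_cost(data_nodes, data_type, master_nodes, master_type,
                 storage_gb_per_node, ri_discount=0.0):
    """Estimate monthly cost; ri_discount applies to compute only."""
    compute = (data_nodes * RATES[data_type] +
               master_nodes * RATES[master_type]) * HOURS_PER_MONTH
    storage = data_nodes * storage_gb_per_node * GP3_PER_GB_MONTH
    return compute * (1 - ri_discount) + storage

on_demand = monthly_cost(3, "r6g.xlarge.search", 3, "c6g.large.search", 500)
one_year_ri = monthly_cost(3, "r6g.xlarge.search", 3, "c6g.large.search", 500,
                           ri_discount=0.31)  # 1-year No Upfront rate
print(round(on_demand), round(one_year_ri))
```

On-demand lands around $1,160/month, matching the rough $1,150 figure; the RI result varies with the discount tier applied, which is why quoted savings are ranges rather than exact numbers.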
OpenSearch Serverless Pricing
OpenSearch Serverless decouples compute from storage, using OpenSearch Compute Units (OCUs) for processing and S3 for data persistence. Each OCU provides a combination of vCPU, memory, and ephemeral storage, billed at $0.24 per OCU-hour for both indexing and search workloads.
The minimum cost floor
Here is the part that catches most teams off guard. A production collection requires a minimum of 2 OCUs - one for indexing (0.5 primary + 0.5 standby) and one for search (0.5 primary + 0.5 replica). That minimum runs continuously, even with zero queries.
Minimum monthly cost: 2 OCUs x $0.24/hr x 730 hours = ~$350/month.
For dev/test workloads, you can disable redundancy and run with just 1 OCU total (0.5 indexing + 0.5 search), cutting the floor to roughly $175/month. Storage is billed separately at $0.024/GB/month on S3.
When Serverless makes sense
OpenSearch Serverless scales OCUs automatically based on load. This works well for bursty workloads - an internal search tool that sees 10x traffic during business hours and near-zero overnight. Where it falls apart: sustained high-throughput workloads. Once you consistently need 8+ OCUs, provisioned instances almost always cost less.
There are also hard functional limitations. No custom plugins, no alerting, no anomaly detection, and limited API coverage. Performance degrades on very large collections - Serverless was originally capped at 1 TB and has since been raised to ~6 TB, but it still trails provisioned at scale. These constraints matter more than the pricing model for many production use cases.
| Scenario | Serverless monthly cost | Provisioned equivalent |
|---|---|---|
| Light search app (2 OCUs avg) | ~$350 | ~$290 (2x m6g.large + storage) |
| Medium workload (4 OCUs avg) | ~$700 | ~$530 (3x r6g.large + storage) |
| Heavy workload (10 OCUs avg) | ~$1,752 | ~$1,150 (3x r6g.xlarge + storage) |
| Bursty (2 OCU base, 10 OCU peaks) | ~$450-600 | ~$1,150 (sized for peak) |
The bursty pattern is the one case where Serverless has a clear cost advantage. For everything else, provisioned wins on both cost and flexibility.
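The crossover point in the table can be derived directly. This sketch compares sustained OCU usage against the ~$1,150/month provisioned cluster from the earlier example; comparing against a smaller provisioned cluster lowers the crossover, which is why the practical rule of thumb sits around 8 OCUs rather than at the exact break-even:

```python
OCU_RATE = 0.24               # $/OCU-hour, indexing and search
HOURS_PER_MONTH = 730
PROVISIONED_MONTHLY = 1150.0  # 3x r6g.xlarge cluster from the earlier example

def serverless_monthly(avg_ocus):
    """Monthly Serverless compute cost at a sustained average OCU count."""
    return avg_ocus * OCU_RATE * HOURS_PER_MONTH

# Average OCU count at which Serverless matches the provisioned cluster
break_even_ocus = PROVISIONED_MONTHLY / (OCU_RATE * HOURS_PER_MONTH)
print(round(break_even_ocus, 1))
```

Sustained usage above roughly 6-7 OCUs already matches a full provisioned production cluster, before counting Serverless's separate S3 storage charges.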
Elastic Cloud Pricing Compared
Elastic Cloud prices differently from AWS OpenSearch. You pay per GB of RAM per hour for each running component (Elasticsearch nodes, Kibana, APM server), and the rate depends on your subscription tier.
Subscription tiers
Elastic offers four tiers. Each progressively adds features and increases the per-resource cost:
| Tier | Starting monthly | What it adds |
|---|---|---|
| Standard | ~$99 | Core search, basic Kibana alerting, observability, basic security (malware prevention, CSPM) |
| Gold | ~$114 | Watcher (advanced alerting and third-party actions), reporting, business-hours support |
| Platinum | ~$131 | ML / anomaly detection, advanced security (SAML, RBAC, field-level security), cross-cluster replication, 24/7 support |
| Enterprise | ~$184+ | Searchable snapshots, GPU inference, AI assistant, fastest support SLA |
Production deployments on Elastic Cloud typically run $500-$2,000+/month depending on cluster size and tier. A comparable cluster to the AWS example above (3 data nodes with 32 GB RAM each, high availability) lands around $1,200-1,800/month on the Gold tier.
The key pricing difference: Elastic bundles proprietary features (ML, advanced security, Canvas, Lens) into higher tiers. Amazon OpenSearch Service includes equivalent features - alerting, anomaly detection, fine-grained access control - at no additional license cost. You're paying for infrastructure, not feature tiers. For teams that only need core search and analytics, this pricing gap is substantial.
Where Elastic Cloud wins on cost
Large storage workloads with infrequently queried data favor Elastic Cloud's frozen tier (searchable snapshots backed by S3). If you're storing hundreds of terabytes and only querying recent data at speed, the frozen tier is cheaper than provisioning UltraWarm nodes on AWS at scale. Elastic Cloud also supports multi-cloud deployments across AWS, GCP, and Azure from a single console - avoiding vendor lock-in carries its own cost value.
Self-Managed OpenSearch and Elasticsearch Costs
Running OpenSearch or Elasticsearch yourself eliminates managed service markups but introduces costs that are easy to underestimate.
Visible infrastructure costs
Compute and storage pricing depends on your cloud provider or bare-metal setup. For an apples-to-apples comparison using AWS EC2 on-demand prices in us-east-1 (most self-managed deployments run in a cloud anyway):
- 3x r6g.xlarge EC2 instances (data nodes): 3 × $0.2016/hr × 730 = ~$441/month
- 3x c6g.large EC2 instances (masters): 3 × $0.068/hr × 730 = ~$149/month
- EBS gp3 storage (1.5 TB total at $0.08/GB/month raw EBS): ~$120/month
- Total infrastructure: ~$710/month on-demand
That is roughly 35-40% cheaper than the equivalent AWS OpenSearch Service provisioned cluster (~$1,150/month). With 1- or 3-year EC2 Reserved Instances or Savings Plans, the gap widens further - 3-year RIs can take that compute cost down by another ~50%, putting raw infrastructure under $400/month for the same hardware. So on visible infrastructure alone, self-managing is materially cheaper. The real cost lives elsewhere.
Hidden costs
Self-managed clusters require ongoing engineering time for upgrades, security patching, monitoring setup, capacity planning, shard rebalancing, and incident response. Industry estimates put these hidden costs at 1.5-3x the visible infrastructure spend, depending on cluster maturity, team experience, and how much tooling you've already built. A cluster costing $700/month in compute may actually cost $1,500-2,500/month when you account for the engineering hours it consumes - which closes most or all of the gap to the managed service.
The economics shift with scale and tooling. At smaller scale (under ~20 nodes), the managed service markup is almost always worth it because operational overhead doesn't shrink linearly with cluster size. At larger scale (100+ nodes), the absolute dollar savings from self-managing can comfortably justify a dedicated platform engineer or two. The interesting middle is what most teams hit - 20 to 100 nodes, where you're paying real money to AWS but you're not large enough to staff a full platform team.
For that middle ground, pairing self-managed OpenSearch or Elasticsearch with expert consulting and support plus a purpose-built monitoring tool like Pulse for Elasticsearch and OpenSearch is the path that actually moves the cost curve. Pulse replaces the operational dashboards and incident response tooling teams typically build internally - it surfaces shard hotspots, slow queries, mapping problems, and capacity issues directly, instead of leaving engineers to assemble that picture from raw cluster APIs. Combined with on-demand expert support, this gives teams the cost advantages of self-managed (often 40-60% lower TCO than fully managed services for sustained workloads) without carrying the full operational burden alone. This is exactly the model BigData Boutique runs with clients across hundreds of production clusters.
Cost Optimization Strategies
Regardless of which deployment model you use, these strategies consistently deliver the largest savings:
Use Graviton/ARM instances. Switching from r5 to r6g or r7g on AWS saves 15-20% with equal or better performance. This is the easiest win available.
Implement data tiering. Move older or infrequently queried data from hot SSD nodes to warm (HDD or smaller SSD) and cold/frozen (S3-backed) tiers. Frozen-tier storage costs $0.024/GB/month versus $0.08+/GB for hot gp3 - a 70% reduction per GB. For time-series workloads, this is transformative.
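On Amazon OpenSearch Service, tiering can be automated with an Index State Management (ISM) policy. The sketch below moves indices to UltraWarm after 7 days and deletes them after 90; the `logs-*` pattern and both timings are placeholder assumptions to adapt to your retention requirements:

```json
{
  "policy": {
    "description": "Move indices to UltraWarm after 7 days, delete after 90",
    "default_state": "hot",
    "states": [
      {
        "name": "hot",
        "actions": [],
        "transitions": [
          { "state_name": "warm", "conditions": { "min_index_age": "7d" } }
        ]
      },
      {
        "name": "warm",
        "actions": [ { "warm_migration": {} } ],
        "transitions": [
          { "state_name": "delete", "conditions": { "min_index_age": "90d" } }
        ]
      },
      {
        "name": "delete",
        "actions": [ { "delete": {} } ]
      }
    ],
    "ism_template": { "index_patterns": ["logs-*"] }
  }
}
```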
Commit to Reserved Instances. If your cluster has been stable for 3+ months, a 1-year RI commitment saves 31-35%. Three-year commitments save up to 52%. AWS Savings Plans offer similar discounts with more flexibility.
Right-size your shards. Oversized shards waste memory. Undersized shards (the more common problem) multiply overhead. Target 10-50 GB per shard for most workloads. Use the _cat/shards API to audit shard sizes regularly.
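A shard audit is easy to script. This hypothetical helper parses the plain-text output of `GET _cat/shards?bytes=gb&h=index,shard,prirep,store` and flags primary shards outside the 10-50 GB target range; the sample data is invented for illustration:

```python
def audit_shard_sizes(cat_shards_text, min_gb=10, max_gb=50):
    """Flag primary shards outside the target size range.

    Expects output of `GET _cat/shards?bytes=gb&h=index,shard,prirep,store`
    (one shard per line: index, shard number, p/r, store size in GB).
    """
    flagged = []
    for line in cat_shards_text.strip().splitlines():
        parts = line.split()
        if len(parts) < 4 or parts[3] == "null":
            continue  # unassigned shards report no store size
        index, shard, prirep, store_gb = parts[0], parts[1], parts[2], float(parts[3])
        if prirep == "p" and not (min_gb <= store_gb <= max_gb):
            flagged.append((index, shard, store_gb))
    return flagged

# Invented sample output for illustration
sample = """\
logs-2026.01 0 p 74
logs-2026.01 0 r 74
logs-2026.02 0 p 31
tiny-index   0 p 0.2
"""
flagged_shards = audit_shard_sizes(sample)
print(flagged_shards)
```

Running this on the sample flags the oversized 74 GB shard and the 0.2 GB sliver, while the 31 GB shard passes. Tiny shards usually point at over-partitioned daily indices that should be rolled over by size instead.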
Tune field mappings and use compression. Disable indexing on fields you never search. Use `zstd` or `zstd_no_dict` (introduced in OpenSearch 2.9) for better compression than the default LZ4 with minimal CPU overhead, or `best_compression` (DEFLATE/zlib) when you want maximum compression at the cost of higher CPU. Drop unnecessary `doc_values` and `norms`. These changes reduce storage 20-40% without altering functionality.
Right-size master nodes. Dedicated masters don't store data. Running them on the same instance type as data nodes wastes memory and budget. A c6g.large ($82/month) handles master duties for clusters up to ~100 nodes - no need for the r6g.xlarge ($245/month) many teams default to.
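Expressed as an index creation body, those mapping and codec tweaks look like the sketch below (OpenSearch 2.9+ for the zstd codecs). The field names `session_id` and `payload` are placeholders, not part of any real schema:

```python
# Sketch of index settings applying the tuning above. Requires OpenSearch 2.9+
# for the zstd codecs; field names are illustrative placeholders.
index_settings = {
    "settings": {
        # or "best_compression" for maximum savings at higher CPU cost
        "index.codec": "zstd_no_dict"
    },
    "mappings": {
        "properties": {
            # never aggregated or sorted on -> doc_values are wasted bytes
            "session_id": {"type": "keyword", "doc_values": False},
            # stored for retrieval but never searched -> skip indexing and norms
            "payload": {"type": "text", "index": False, "norms": False},
        }
    },
}
print(index_settings["settings"]["index.codec"])
```

This body would be sent as the payload of a create-index request (e.g. `PUT /my-index`); existing indices need a reindex to pick up codec and mapping changes.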
Monitor before you scale. Tools like Pulse or OpenSearch's own monitoring reveal whether performance bottlenecks come from undersized hardware or from inefficient queries and mappings. Throwing hardware at a query problem is the most expensive mistake in search infrastructure.
Key Takeaways
- AWS OpenSearch Service (provisioned) is the best starting point for most teams on AWS. A production 3-node cluster runs $800-1,200/month with Reserved Instances. All features included, no license tiers.
- OpenSearch Serverless has a ~$350/month minimum floor and makes financial sense only for bursty, low-volume workloads. Sustained workloads cost more than provisioned instances.
- Elastic Cloud starts low but scales expensively due to tiered licensing. It wins for multi-cloud deployments and very large frozen-tier storage use cases. Expect $1,200-1,800/month for production clusters comparable to a $800-1,150 AWS setup.
- Self-managed can save 35-50% on visible infrastructure versus the managed service, but adds 1.5-3x in hidden operational costs. The economics work best at 100+ node scale, or when paired with a monitoring/operations tool like Pulse and on-demand expert support to keep the operational overhead down.
- The biggest cost levers are data tiering (70% storage savings), Reserved Instances (31-52% compute savings), and Graviton instances (15-20% price-performance gain). Start with these before considering architectural changes.
- For help optimizing your OpenSearch or Elasticsearch costs, BigData Boutique's OpenSearch consulting and Elasticsearch consulting teams work with organizations daily on exactly these decisions.