A detailed breakdown of AWS OpenSearch Service pricing, OpenSearch Serverless OCU costs, Elastic Cloud subscription tiers, and self-managed Elasticsearch expenses - with concrete numbers and cost optimization strategies for 2026.
Search infrastructure pricing is notoriously opaque. AWS OpenSearch Service bills across five dimensions. Elastic Cloud ties pricing to subscription tiers that double costs between Standard and Enterprise. OpenSearch Serverless charges per OCU-hour with a minimum floor that surprises most teams. And self-managed deployments carry hidden costs that regularly exceed the visible infrastructure spend by 2-3x.
This guide breaks down the real costs of running OpenSearch and Elasticsearch in production across all four deployment models - AWS OpenSearch Service (provisioned), OpenSearch Serverless, Elastic Cloud, and self-managed - with concrete pricing figures, a side-by-side comparison, and optimization strategies that can cut your search bill by 30-60%.
AWS OpenSearch Service Pricing
AWS OpenSearch Service uses provisioned pricing, meaning you select instance types, node counts, and storage, then pay hourly for what you've allocated. Pricing has three main components: compute (instance hours), storage (EBS volumes), and data transfer.
Instance costs
Instance pricing varies by family and size. In US East (N. Virginia), on-demand rates for the most commonly used types look like this:
| Instance Type | vCPUs | Memory | On-Demand $/hr | Monthly (730 hrs) |
|---|---|---|---|---|
| t3.medium.search | 2 | 4 GiB | $0.073 | ~$53 |
| m6g.large.search | 2 | 8 GiB | $0.128 | ~$93 |
| c6g.large.search | 2 | 4 GiB | $0.113 | ~$82 |
| r6g.large.search | 2 | 16 GiB | $0.167 | ~$122 |
| r6g.xlarge.search | 4 | 32 GiB | $0.335 | ~$245 |
| r6g.2xlarge.search | 8 | 64 GiB | $0.670 | ~$489 |
Graviton-based instances (the "g" suffix) deliver roughly 15-20% better price-performance than their x86 equivalents. For data nodes, r6g instances are the workhorse choice - memory-optimized for heap and OS caches. Master nodes can run on smaller c6g or m6g types since they handle cluster coordination, not data.
Storage and data transfer
EBS storage adds to the bill. General Purpose SSD (gp3) costs approximately $0.08/GB/month, while Provisioned IOPS SSD (io1) runs $0.125/GB/month plus $0.065 per provisioned IOPS. For warm-tier data, UltraWarm uses S3-backed storage at around $0.024/GB/month - a fraction of hot-tier EBS costs.
Data transfer within the same Availability Zone is free. Cross-AZ transfer (required for multi-AZ deployments) costs $0.01/GB in each direction. Internet egress follows standard AWS rates starting at $0.09/GB.
Reserved Instance savings
Reserved Instances cut costs substantially for steady-state workloads:
- 1-year No Upfront: 31% discount
- 1-year All Upfront: 35% discount
- 3-year No Upfront: 48% discount
- 3-year All Upfront: 52% discount
A production cluster running 3x r6g.xlarge data nodes with 500 GB gp3 storage each, plus 3x c6g.large dedicated master nodes, costs roughly $1,150/month on-demand. With 1-year Reserved Instances, that drops to about $790/month.
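The arithmetic behind that estimate can be sketched with the on-demand rates from the table above. This is a rough model, not a quote - it ignores cross-AZ transfer and snapshot charges, which is why the total lands slightly under the quoted ~$1,150:

```python
HOURS_PER_MONTH = 730

# On-demand rates from the table above (US East, $/hr)
R6G_XLARGE_SEARCH = 0.335   # data node
C6G_LARGE_SEARCH = 0.113    # dedicated master
GP3_PER_GB_MONTH = 0.08     # EBS gp3 storage

def cluster_monthly_cost(data_nodes=3, master_nodes=3, gb_per_data_node=500,
                         ri_discount=0.0):
    """Estimate monthly cost; ri_discount applies to compute only,
    since storage is billed the same regardless of Reserved Instances."""
    compute = (data_nodes * R6G_XLARGE_SEARCH
               + master_nodes * C6G_LARGE_SEARCH) * HOURS_PER_MONTH
    storage = data_nodes * gb_per_data_node * GP3_PER_GB_MONTH
    return compute * (1 - ri_discount) + storage

on_demand = cluster_monthly_cost()                    # ~$1,101/month
one_year_ri = cluster_monthly_cost(ri_discount=0.31)  # ~$797/month (1-yr No Upfront)
```

Plugging in the 31% 1-year No Upfront discount reproduces the ~$790/month figure above; swap in 0.52 to model a 3-year All Upfront commitment.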
OpenSearch Serverless Pricing
OpenSearch Serverless decouples compute from storage, using OpenSearch Compute Units (OCUs) for processing and S3 for data persistence. Each OCU provides a combination of vCPU, memory, and ephemeral storage, billed at $0.24 per OCU-hour for both indexing and search workloads.
The minimum cost floor
Here is the part that catches most teams off guard. A production collection requires a minimum of 2 OCUs - one for indexing (0.5 primary + 0.5 standby) and one for search (0.5 primary + 0.5 replica). That minimum runs continuously, even with zero queries.
Minimum monthly cost: 2 OCUs x $0.24/hr x 730 hours = ~$350/month.
For dev/test workloads, you can disable redundancy and run with just 1 OCU total (0.5 indexing + 0.5 search), cutting the floor to roughly $175/month. Storage is billed separately at $0.024/GB/month on S3.
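The floor math above generalizes to any sustained OCU count. A minimal sketch, assuming the published $0.24/OCU-hour and $0.024/GB/month rates:

```python
OCU_HOURLY = 0.24          # $ per OCU-hour (indexing and search)
S3_PER_GB_MONTH = 0.024    # managed storage, billed separately
HOURS_PER_MONTH = 730

def serverless_monthly_cost(avg_ocus, stored_gb=0):
    """Monthly estimate for an OpenSearch Serverless collection.
    The 2-OCU production floor (1 OCU for dev/test with redundancy
    disabled) applies even with zero query traffic."""
    return avg_ocus * OCU_HOURLY * HOURS_PER_MONTH + stored_gb * S3_PER_GB_MONTH

production_floor = serverless_monthly_cost(avg_ocus=2)  # ~$350/month
dev_floor = serverless_monthly_cost(avg_ocus=1)         # ~$175/month
```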
When Serverless makes sense
OpenSearch Serverless scales OCUs automatically based on load. This works well for bursty workloads - an internal search tool that sees 10x traffic during business hours and near-zero overnight. Where it falls apart: sustained high-throughput workloads. Once you consistently need 8+ OCUs, provisioned instances almost always cost less.
There are also hard functional limitations. No custom plugins, no alerting, no anomaly detection, and limited API coverage. Performance degrades noticeably past ~1 TB of indexed data. These constraints matter more than the pricing model for many production use cases.
| Scenario | Serverless monthly cost | Provisioned equivalent |
|---|---|---|
| Light search app (2 OCUs avg) | ~$350 | ~$290 (2x m6g.large + storage) |
| Medium workload (4 OCUs avg) | ~$700 | ~$530 (3x r6g.large + storage) |
| Heavy workload (10 OCUs avg) | ~$1,752 | ~$1,150 (3x r6g.xlarge + storage) |
| Bursty (2 OCU base, 10 OCU peaks) | ~$450-600 | ~$1,150 (sized for peak) |
The bursty pattern is the one case where Serverless has a clear cost advantage. For everything else, provisioned wins on both cost and flexibility.
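One way to sanity-check the table: divide a provisioned cluster's monthly bill by the cost of one always-on OCU to find the sustained OCU count at which Serverless stops being competitive (storage excluded on both sides). A rough sketch:

```python
def breakeven_ocus(provisioned_monthly, ocu_hourly=0.24, hours_per_month=730):
    """Sustained average OCU count at which Serverless compute
    matches a provisioned cluster's monthly bill."""
    return provisioned_monthly / (ocu_hourly * hours_per_month)

breakeven_ocus(1150)  # ~6.6 OCUs vs the 3x r6g.xlarge cluster
```

At the ~$1,150 provisioned figure, breakeven sits around 6.6 sustained OCUs, consistent with the guidance above that a consistent need for 8+ OCUs almost always favors provisioned.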
Elastic Cloud Pricing Compared
Elastic Cloud prices differently from AWS OpenSearch. You pay per GB of RAM per hour for each running component (Elasticsearch nodes, Kibana, APM server), and the rate depends on your subscription tier.
Subscription tiers
Elastic offers four tiers. Each progressively adds features and increases the per-resource cost:
| Tier | Starting monthly | What it adds |
|---|---|---|
| Standard | ~$95 | Core search, observability, security basics |
| Gold | ~$114 | Enhanced support, expanded analytics |
| Platinum | ~$131 | ML, advanced security (SAML, RBAC), alerting |
| Enterprise | ~$175+ | Searchable snapshots, cross-cluster replication, full support SLA |
Production deployments on Elastic Cloud typically run $500-$2,000+/month depending on cluster size and tier. A cluster comparable to the AWS example above (3 data nodes with 32 GB RAM each, high availability) lands around $1,200-1,800/month on the Gold tier.
The key pricing difference: Elastic bundles proprietary features (ML, advanced security, Canvas, Lens) into higher tiers. Amazon OpenSearch Service includes equivalent features - alerting, anomaly detection, fine-grained access control - at no additional license cost. You're paying for infrastructure, not feature tiers. For teams that only need core search and analytics, this pricing gap is substantial.
Where Elastic Cloud wins on cost
Large storage workloads with infrequently queried data favor Elastic Cloud's frozen tier (searchable snapshots backed by S3). If you're storing hundreds of terabytes and only querying recent data at speed, the frozen tier is cheaper than provisioning UltraWarm nodes on AWS at scale. Elastic Cloud also supports multi-cloud deployments across AWS, GCP, and Azure from a single console - avoiding vendor lock-in carries its own cost value.
Self-Managed OpenSearch and Elasticsearch Costs
Running OpenSearch or Elasticsearch yourself eliminates managed service markups but introduces costs that are easy to underestimate.
Visible infrastructure costs
Compute and storage pricing depends on your cloud provider or bare-metal setup. For an apples-to-apples comparison using AWS EC2 instances (since most self-managed deployments run in a cloud anyway):
- 3x r6g.xlarge EC2 instances (data nodes): ~$725/month on-demand
- 3x c6g.large EC2 instances (masters): ~$247/month
- EBS gp3 storage (1.5 TB total): ~$120/month
- Total infrastructure: ~$1,092/month
That is slightly cheaper than the equivalent AWS OpenSearch Service provisioned cluster (~$1,150/month), but the difference is modest. The real cost lives elsewhere.
Hidden costs
Self-managed clusters require ongoing engineering time for upgrades, security patching, monitoring setup, capacity planning, shard rebalancing, and incident response. Multiple industry analyses put these hidden costs at 2-3x the visible infrastructure spend. A cluster costing $1,100/month in compute may actually cost $2,500-3,500/month when you account for the engineering hours it consumes.
At smaller scale (under ~20 nodes), the managed service markup is almost always worth it. At larger scale (100+ nodes), the economics shift - the absolute dollar savings from self-managing can justify dedicated platform engineers. For teams in between, pairing self-managed OpenSearch with expert consulting and support is often the right middle ground - you keep control without carrying the full operational burden alone.
Cost Optimization Strategies
Regardless of which deployment model you use, these strategies consistently deliver the largest savings:
Use Graviton/ARM instances. Switching from r5 to r6g or r7g on AWS saves 15-20% with equal or better performance. This is the easiest win available.
Implement data tiering. Move older or infrequently queried data from hot SSD nodes to warm (HDD or smaller SSD) and cold/frozen (S3-backed) tiers. Frozen-tier storage costs $0.024/GB/month versus $0.08+/GB for hot gp3 - a 70% reduction per GB. For time-series workloads, this is transformative.
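The blended effect of tiering is easy to model. This sketch uses the gp3 and S3-backed rates quoted above; the 70% figure is the per-GB saving, while the blended saving depends on how much data stays hot:

```python
HOT_GP3 = 0.08       # $/GB/month, hot-tier EBS gp3
FROZEN_S3 = 0.024    # $/GB/month, S3-backed warm/frozen tier

def tiered_storage_cost(total_gb, hot_fraction):
    """Blended monthly storage cost when only hot_fraction of the
    data stays on gp3 and the rest moves to the S3-backed tier."""
    hot = total_gb * hot_fraction * HOT_GP3
    cold = total_gb * (1 - hot_fraction) * FROZEN_S3
    return hot + cold

all_hot = tiered_storage_cost(10_000, hot_fraction=1.0)  # $800/month
tiered = tiered_storage_cost(10_000, hot_fraction=0.2)   # $352/month
```

Keeping only the most recent 20% of a 10 TB dataset hot cuts the storage line item by more than half in this example.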
Commit to Reserved Instances. If your cluster has been stable for 3+ months, a 1-year RI commitment saves 31-35%. Three-year commitments save up to 52%. AWS Savings Plans offer similar discounts with more flexibility.
Right-size your shards. Oversized shards slow recovery and rebalancing. Undersized shards (the more common problem) multiply per-shard memory overhead. Target 10-50 GB per shard for most workloads. Use the `_cat/shards` API to audit shard sizes regularly.
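That audit can be scripted. This sketch assumes the default `_cat/shards` column order (index, shard, prirep, state, docs, store, ip, node) and that the API was called with `?bytes=gb`; adjust the column indices if you request a custom column list:

```python
def audit_shard_sizes(cat_shards_output, min_gb=10, max_gb=50):
    """Parse `_cat/shards?bytes=gb` text output and return shards
    whose store size falls outside the 10-50 GB sweet spot."""
    flagged = []
    for line in cat_shards_output.strip().splitlines():
        parts = line.split()
        # Skip relocating/unassigned shards, which have no stable size
        if len(parts) < 6 or parts[3] != "STARTED":
            continue
        index, shard, store_gb = parts[0], parts[1], float(parts[5])
        if not (min_gb <= store_gb <= max_gb):
            flagged.append((index, shard, store_gb))
    return flagged

# Hypothetical sample output for illustration
sample = """\
logs-2026.01 0 p STARTED 1200000 72 10.0.0.1 node-1
logs-2026.01 1 p STARTED  900000 38 10.0.0.2 node-2
metrics-tiny 0 p STARTED    5000  2 10.0.0.1 node-1
"""
audit_shard_sizes(sample)  # flags the 72 GB and 2 GB shards
```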
Tune field mappings and use compression. Disable indexing on fields you never search. Use `best_compression` (zstd in OpenSearch 2.10+) for indices where query latency is less critical. Drop unnecessary `doc_values` and `norms`. These changes reduce storage 20-40% without altering functionality.
Right-size master nodes. Dedicated masters don't store data. Running them on the same instance type as data nodes wastes memory and budget. A c6g.large ($82/month) handles master duties for clusters up to ~100 nodes - no need for the r6g.xlarge ($245/month) many teams default to.
Monitor before you scale. Tools like Pulse or OpenSearch's own monitoring reveal whether performance bottlenecks come from undersized hardware or from inefficient queries and mappings. Throwing hardware at a query problem is the most expensive mistake in search infrastructure.
Key Takeaways
- AWS OpenSearch Service (provisioned) is the best starting point for most teams on AWS. A production 3-node cluster runs $800-1,200/month with Reserved Instances. All features included, no license tiers.
- OpenSearch Serverless has a ~$350/month minimum floor and makes financial sense only for bursty, low-volume workloads. Sustained workloads cost more than provisioned instances.
- Elastic Cloud starts low but scales expensively due to tiered licensing. It wins for multi-cloud deployments and very large frozen-tier storage use cases. Expect $1,200-1,800/month for production clusters comparable to a $800-1,150 AWS setup.
- Self-managed saves 5-15% on infrastructure but adds 2-3x in hidden operational costs. Only cost-effective at 100+ node scale or when you need capabilities that managed services restrict.
- The biggest cost levers are data tiering (70% storage savings), Reserved Instances (31-52% compute savings), and Graviton instances (15-20% price-performance gain). Start with these before considering architectural changes.
- For help optimizing your OpenSearch or Elasticsearch costs, BigData Boutique's OpenSearch consulting and Elasticsearch consulting teams work with organizations daily on exactly these decisions.