Side-by-side comparison of OpenSearch and Elasticsearch - covering performance benchmarks, vector search, security, licensing (AGPL vs Apache 2.0), managed service pricing, and migration paths. Updated for 2026 with real-world data from 10,000+ cluster deployments.

Both Elasticsearch and OpenSearch shipped major version bumps in 2025 - Elasticsearch 9.0 in April, OpenSearch 3.0 in May. These weren't minor increments. Both upgraded to Lucene 10, introduced breaking changes, and staked out increasingly different positions on AI, vector search, and observability. The two projects share the same DNA from the 2021 fork, but they're diverging faster than ever.

At BigData Boutique, we've worked with this technology family for over 13 years - Elasticsearch since its early days, and OpenSearch since the 2021 fork - and now maintain production clusters for clients across search, analytics, and observability workloads. We recently joined the OpenSearch Software Foundation as a General Member. This comparison reflects what we see across thousands of real-world deployments - not vendor marketing.

Project Status, Governance, and Licensing

Elasticsearch is now triple-licensed under AGPLv3, SSPL 1.0, and Elastic License 2.0 (user's choice). Elastic added AGPL in late 2024, which technically makes the core an OSI-approved open source project again. But there's a catch: AGPL's network-use copyleft clause means anyone offering a modified Elasticsearch over a network must make the source of those modifications available to the users of that service. Binary releases and all features above the "Free" tier remain under the Elastic License 2.0, which prohibits providing Elasticsearch as a managed service. For most organizations consuming Elasticsearch as a backend, this changes nothing. For those embedding it in commercial products or offering it as a service, the licensing remains restrictive.

OpenSearch stays on Apache 2.0 - the most permissive widely-used open source license. In September 2024, AWS transferred governance to the OpenSearch Software Foundation (OSSF) under the Linux Foundation. The foundation has grown since: BigData Boutique, OpenSource Connections, and Resolve Technology joined as General Members in March 2026, alongside 400+ contributing organizations and 3,300+ individual contributors. This is no longer "AWS's project" - it has genuine vendor-neutral governance with a Technical Steering Committee and open roadmap.

On the version front: Elasticsearch is at 9.3 (February 2026), OpenSearch at 3.6 (April 2026). Both are on Lucene 10 and Java 21+.

| | Elasticsearch | OpenSearch |
|---|---|---|
| License | AGPLv3 / SSPL / ELv2 (triple) | Apache 2.0 |
| Governance | Elastic Co | OSSF (Linux Foundation) |
| Latest Version | 9.3 (Feb 2026) | 3.6 (Apr 2026) |
| Lucene Version | 10 | 10 |
| Release Cadence | ~Monthly minors | ~Bimonthly minors |

Performance and Scalability

Both projects share the Lucene core, which means raw text search performance is comparable on equivalent hardware and configurations. The differences show up in the optimizations built on top of Lucene and in specialized workloads.

Elasticsearch 9.x brought ES|QL to full production readiness, with a reported 5x latency reduction on time-series queries in 9.3. LogsDB index mode reached GA with up to 65% storage reduction for log data through synthetic source and doc-value-only fields. The Streams feature (GA in 9.2) adds AI-driven log parsing with automatic field extraction.
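As a sketch of how LogsDB mode is turned on - the index name here is hypothetical, and the synthetic-source and doc-value-only behavior come from the mode itself rather than extra settings:

```json
PUT logs-myapp-prod
{
  "settings": {
    "index.mode": "logsdb"
  }
}
```

In practice this is usually set through an index template so that new backing indices of a log data stream pick it up automatically; the mode must be set at index creation time.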

OpenSearch 3.0 claims a 9.5x performance improvement over OpenSearch 1.3, with range queries 25% faster on the Big5 benchmark suite. Concurrent segment search is now enabled by default, and gRPC transport support offers faster data processing. Pull-based ingestion from Apache Kafka and Amazon Kinesis removes the need for intermediate ingestion pipelines in many architectures.

A word on benchmarks: vendor-published numbers are marketing. Elastic's own benchmarks show 40-140% better performance than OpenSearch on log analytics workloads. An independent Trail of Bits benchmark from March 2026 found OpenSearch faster on Big5 mixed workloads. The truth depends on your workload, hardware, and configuration. For text search, the difference is marginal. For specialized workloads - vector search, time-series, log analytics - the engine-specific optimizations matter more than the shared Lucene foundation.

Query Languages

Both platforms now have dedicated query languages competing for the same space. Elasticsearch has ES|QL, a pipe-based language tightly integrated into Kibana with autocomplete and visualization support. OpenSearch has PPL (Piped Processing Language), which received substantial updates in OpenSearch 3.3 with new commands and functions for log analytics and observability workflows. Both also support SQL. The choice between them is largely an ecosystem decision - if you're on Kibana, ES|QL is the natural fit; if you're on OpenSearch Dashboards, PPL is.
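To make the similarity concrete, here is roughly the same aggregation expressed in both languages - index and field names are hypothetical, and exact function names vary slightly by version:

```
// ES|QL (Elasticsearch)
FROM logs-app-* | WHERE status == 500 | STATS errors = COUNT(*) BY service | SORT errors DESC

// PPL (OpenSearch)
source = logs-app-* | where status = 500 | stats count() as errors by service | sort - errors
```

Both read left to right as a pipeline, which is the main ergonomic win over nested JSON query DSL for ad-hoc log exploration.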

Vector Search and AI Capabilities

This is where the two projects diverge the most, and where the competition has been fiercest in 2025-2026.

Vector Engines and Dimensions

Elasticsearch uses Lucene's native HNSW as its sole vector engine. It compensates with aggressive quantization innovations: BBQ (Better Binary Quantization) became the default for vectors with >384 dimensions in ES 9.1, reducing memory by 95%+ compared to float32. DiskBBQ (GA in 9.2) reads vectors from disk at sub-20ms latency with ~100MB memory regardless of index size - a real option for cost-sensitive deployments with hundreds of millions of vectors. NVIDIA cuVS GPU acceleration is in tech preview in 9.3, delivering up to 12x faster indexing. Max vector dimensions: 4,096.
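A back-of-envelope calculation shows why the quantization numbers matter. The figures below are illustrative assumptions (1024-dimension embeddings, 100 million vectors, ~1 bit per dimension for BBQ), not measured values, and they ignore HNSW graph and metadata overhead:

```python
# Rough memory estimate for raw vector storage - illustrative only.

def vector_index_bytes(num_vectors: int, dims: int, bits_per_dim: float) -> int:
    """Approximate raw vector storage, excluding graph/metadata overhead."""
    return int(num_vectors * dims * bits_per_dim / 8)

num_vectors = 100_000_000
dims = 1024

float32_bytes = vector_index_bytes(num_vectors, dims, 32)  # 4 bytes per dimension
bbq_bytes = vector_index_bytes(num_vectors, dims, 1)       # ~1 bit per dimension

print(f"float32: {float32_bytes / 1e9:.0f} GB")             # 410 GB
print(f"BBQ:     {bbq_bytes / 1e9:.1f} GB")                 # 12.8 GB
print(f"savings: {1 - bbq_bytes / float32_bytes:.1%}")      # 96.9%
```

The ~97% raw-storage reduction is where the "95%+ memory" claim comes from; real clusters land somewhat lower once graph structures and per-vector overhead are included.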

OpenSearch supports two vector engines: Lucene HNSW and Faiss (Facebook AI Similarity Search). Note that nmslib was deprecated in OpenSearch 2.16 and removed for new index creation in 3.0. Faiss provides IVF (Inverted File Index) and product quantization (PQ) as alternatives to HNSW, which matter for workloads where memory is constrained or where approximate search with different recall/latency tradeoffs is needed.

On quantization, OpenSearch supports multiple methods: byte vectors, FP16, product quantization, and binary quantization (since 2.17, via Faiss with 32x compression). It also has its own disk-based vector search mode - a two-phase approach where a compressed binary-quantized index lives in memory and full-precision vectors are rescored from disk, cutting costs by roughly 67%. It doesn't have Elasticsearch's BBQ yet, though there's an active RFC to integrate Lucene's BBQ.

Concurrent segment search for k-NN is enabled by default in 3.0, delivering up to 2.5x faster vector queries. Max vector dimensions: 16,000 - nearly 4x Elasticsearch's limit, relevant for newer embedding models with higher dimensionality.
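A minimal mapping sketch for a disk-mode vector field - index name, field name, dimension, and space type are illustrative, and the exact set of supported options varies by OpenSearch version:

```json
PUT my-vectors
{
  "settings": { "index.knn": true },
  "mappings": {
    "properties": {
      "embedding": {
        "type": "knn_vector",
        "dimension": 768,
        "space_type": "innerproduct",
        "mode": "on_disk",
        "method": {
          "name": "hnsw",
          "engine": "faiss"
        }
      }
    }
  }
}
```

With `mode: on_disk`, OpenSearch picks sensible compression defaults for the two-phase search described above, so most deployments don't need to hand-tune the quantization parameters.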

| | Elasticsearch 9.x | OpenSearch 3.x |
|---|---|---|
| Vector Engines | Lucene HNSW | Lucene HNSW, Faiss |
| Max Dimensions | 4,096 | 16,000 |
| Default Quantization | BBQ (9.1+) | Configurable (byte, FP16, PQ, binary) |
| Disk-based Search | DiskBBQ (GA 9.2) | Faiss with disk mode |
| GPU Acceleration | cuVS (tech preview 9.3) | Via Amazon OpenSearch Service |
| ANN Algorithms | HNSW | HNSW, IVF |

RAG Workflows and AI Agents

Elasticsearch's Retriever framework supports multi-stage retrieval pipelines in a single _search call - combining knn, RRF, text similarity reranking, diversification, and rule-based pinning. The Jina AI acquisition (October 2025) brought three multilingual embedding models directly into the Elastic Inference Service - no external API keys or ML nodes needed. The Elastic Agent Builder (GA January 2026) lets developers build AI agents over Elasticsearch data with natural language, and supports MCP server import/export for integration with Claude Desktop, Cursor, and LangChain.
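As a sketch, a hybrid lexical-plus-vector search combined with RRF looks like this in a single `_search` call - the index, field names, and the query vector values are placeholders (a real query vector must match the field's dimension):

```json
POST products/_search
{
  "retriever": {
    "rrf": {
      "retrievers": [
        {
          "standard": {
            "query": { "match": { "title": "wireless headphones" } }
          }
        },
        {
          "knn": {
            "field": "embedding",
            "query_vector": [0.12, -0.07, 0.33],
            "k": 50,
            "num_candidates": 200
          }
        }
      ]
    }
  }
}
```

Each inner retriever can itself be a pipeline (reranking, rules, diversification), which is what makes the framework composable.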

OpenSearch took a different path. The agentic search capability (GA in 3.3) uses the Flow Framework plugin to orchestrate AI-driven search workflows, achieving 82% query translation accuracy and up to 235% relevance improvements in evaluation benchmarks. OpenSearch 3.4 added a no-code UX for building agents with MCP integration. The Launchpad (April 2026) is an AI-powered tool that generates a running search application from plain-language requirements in minutes - a real time-saver for teams new to search.

Perhaps the most distinctive OpenSearch AI feature is Agent Health - open-source observability and evaluation specifically for AI agents. It provides trace-level visibility into agent execution, automated benchmarking, and LLM-as-judge evaluation. For teams running AI agents in production, this addresses a real gap: agents fail silently, and without dedicated monitoring you won't know until users complain.

Solutions, Ecosystem, and Security

Observability

Elastic's observability stack remains the more mature offering. Full OpenTelemetry support with a managed OTLP endpoint (GA October 2025) means you can ship traces directly from OTel SDKs without running a local collector. Combined with APM, log correlation, and ES|QL, Elastic offers a complete observability platform that competes with Datadog and Splunk.
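Because the endpoint speaks standard OTLP, pointing an application at it is just the usual OpenTelemetry environment variables - the endpoint URL and API key below are placeholders:

```shell
# Standard OTel SDK exporter configuration - no local collector required
export OTEL_EXPORTER_OTLP_ENDPOINT="https://my-deployment.ingest.elastic.cloud:443"
export OTEL_EXPORTER_OTLP_HEADERS="Authorization=ApiKey <your-api-key>"
```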

OpenSearch has been closing the gap. The OpenTelemetry integration in OpenSearch 3.1 provides service maps and trace analytics for distributed microservices. PPL's expanded observability commands in 3.3 streamline log analytics workflows. OpenSearch Dashboards received a redesign unifying log analytics, distributed tracing, and visualizations. It's functional, and improving fast, but still lacks the depth and polish of Elastic's integrated experience.

SIEM and Security Features

Elastic Security is a full-blown SIEM with detection rules, SOAR automation, and threat intelligence integrations. Elastic was named a Leader in the Forrester Wave for Cognitive Search Platforms, Q4 2025. For organizations that need an enterprise-grade SIEM, this is a strong differentiator.

OpenSearch doesn't offer a comparable built-in SIEM, but Wazuh - an open source XDR and SIEM built on OpenSearch - fills this gap for many deployments.

Where OpenSearch wins decisively is on built-in security features. LDAP, Active Directory, SAML, OpenID authentication, role-based access control, field-level security, document-level security, audit logging - all of this is free and open source in OpenSearch. In Elasticsearch, anything beyond basic authentication requires a paid X-Pack subscription. For organizations that need enterprise security controls without additional licensing costs, this is a major advantage.

Client Libraries

OpenSearch's client library situation has come a long way since the early post-fork days. Actively maintained clients now exist for Python, Java, JavaScript, Go, Ruby, PHP, .NET, and Rust. The Python and Java clients are the most mature. Elasticsearch's clients still have a larger community, more Stack Overflow coverage, and better documentation - but the gap has narrowed to where it's no longer a deciding factor for most teams.

Pricing and Cost Efficiency

Both technologies are free to run self-managed. The cost picture changes dramatically once you factor in managed services and licensed features.

Managed service options: Elasticsearch is available only on Elastic Cloud (deployable on AWS, GCP, Azure) and as an Azure native integration. OpenSearch has multiple competing providers: Amazon OpenSearch Service, Aiven, Instaclustr (NetApp), and others. More competition means lower prices.

The searchable snapshots gap remains the single biggest cost differentiator. This feature - serving queries from cheap object storage instead of expensive SSD-backed nodes - is transformative for log analytics and observability workloads where most data is cold. In OpenSearch, searchable snapshots are free. In Elasticsearch, they require an Enterprise subscription. For organizations storing months or years of log data, this can cut infrastructure costs by 50-70%.
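A toy cost model makes the scale of this concrete. The per-GB prices below are assumed round numbers for the sketch (hypothetical $0.10/GB-month for hot SSD-backed storage, $0.023/GB-month for object storage), not quotes from any provider:

```python
# Illustrative storage cost model for a log cluster with mostly-cold data.
# Prices are assumptions for the sketch, not real provider pricing.

def monthly_storage_cost(hot_gb: float, cold_gb: float,
                         hot_price: float = 0.10,
                         cold_price: float = 0.023) -> float:
    """Monthly storage spend given hot and cold data volumes."""
    return hot_gb * hot_price + cold_gb * cold_price

total_gb = 100_000          # 100 TB of retained logs
hot_gb = total_gb * 0.10    # only the last ~10% of data stays hot

all_hot = monthly_storage_cost(total_gb, 0)
tiered = monthly_storage_cost(hot_gb, total_gb - hot_gb)

print(f"all hot: ${all_hot:,.0f}/month")
print(f"tiered:  ${tiered:,.0f}/month")
print(f"savings: {1 - tiered / all_hot:.0%}")
```

Under these assumptions the tiered layout lands at roughly a 69% reduction - consistent with the 50-70% range cited above, with the exact figure driven by how much of your data can go cold.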

Amazon OpenSearch Service continues to add cost-optimization features: GPU-accelerated vector search offloaded to a serverless fleet, and Zstandard (zstd) compression for up to 32% index size reduction. Elastic Cloud updated its serverless pricing model in late 2025, moving to VCU-based compute pricing.
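Enabling zstd is a one-line index setting in OpenSearch - index name is hypothetical, and codec availability (`zstd` vs `zstd_no_dict`) depends on your OpenSearch version and service tier:

```json
PUT compressed-logs
{
  "settings": {
    "index": { "codec": "zstd_no_dict" }
  }
}
```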

On a like-for-like basis, Amazon OpenSearch Service is typically 30-50% cheaper than Elastic Cloud for equivalent workloads. Factor in the licensing cost for features that are free in OpenSearch (security, searchable snapshots, cross-cluster replication) and the gap widens further.

Summary

| Feature | Elasticsearch 9.x | OpenSearch 3.x |
|---|---|---|
| License | AGPLv3 / SSPL / ELv2 | Apache 2.0 |
| Governance | Elastic Co | OSSF (Linux Foundation) |
| Vector Engines | Lucene | Lucene, Faiss |
| Max Vector Dimensions | 4,096 | 16,000 |
| Vector Quantization | BBQ (default), DiskBBQ, int8, bfloat16 | Byte, FP16, PQ, binary (32x) |
| Disk-based Vector Search | DiskBBQ (GA 9.2) | Faiss on_disk mode (2-phase rescore) |
| RAG Framework | ESRE + Retrievers (GA) | Flow Framework + Agentic Search (GA) |
| AI Agent Builder | Elastic Agent Builder + MCP | No-code agent builder + MCP |
| AI Agent Monitoring | Via APM | Agent Health (dedicated) |
| Query Language | ES\|QL | PPL + SQL |
| Security (LDAP, SAML) | Paid (X-Pack) | Free, built-in |
| Searchable Snapshots | Paid (Enterprise) | Free |
| APM / SIEM | Full-featured, mature | Basic (Wazuh for SIEM) |
| Managed Options | Elastic Cloud | Amazon OpenSearch, Aiven, Instaclustr, others |
| Onboarding | Kibana guided setup | Launchpad (AI-powered) |

When to choose Elasticsearch:

  • You need the full Elastic platform - APM, SIEM, Enterprise Search as integrated solutions
  • Your team already invests in the Elastic ecosystem and Kibana
  • You need the most polished, integrated RAG developer experience
  • Licensing terms (AGPL copyleft) are acceptable for your use case

When to choose OpenSearch:

  • Cost efficiency is a priority, especially for observability and log analytics workloads
  • You need permissive licensing (Apache 2.0) for embedding in commercial products or offering as a service
  • Vector search at scale is a primary use case - Faiss support, higher dimensions, and more quantization options give you flexibility
  • You want enterprise security features (LDAP, SAML, field-level security) without additional licensing costs
  • You prefer vendor-neutral governance and a broader choice of managed service providers

When it genuinely doesn't matter: For standard text search, log analytics dashboards, and basic alerting - the core functionality is identical. Pick based on your ecosystem, team expertise, and total cost of ownership.

Need help deciding, or planning a migration between the two? Our team has guided hundreds of companies through this decision. Reach out to us to discuss, or check out our OpenSearch consulting and Elasticsearch support services.