Visit us at the OpenSearch booth at KubeCon EU in Amsterdam and OpenSearchCon EU in Prague. Let's talk about running OpenSearch on Kubernetes, Pulse for OpenSearch, and our enterprise distribution.

This spring, the BigData Boutique team is heading to two of the most important events in the OpenSearch and cloud-native ecosystem: KubeCon + CloudNativeCon EU in Amsterdam (March 23-26) and OpenSearchCon EU in Prague (April 16-17). You'll find us at the OpenSearch booth at both events.

If you're running OpenSearch at scale and self-managed - on-prem, in your own cloud, or on Kubernetes - we want to hear about what you're building and where the pain points are. That's the kind of conversation we enjoy most.

Two Events, One Focus: OpenSearch in Production

OpenSearch has had a remarkable year. The project crossed 1 billion total downloads with 78% year-over-year growth, the OpenSearch Software Foundation celebrated its first anniversary under the Linux Foundation, and the 3.x release line brought serious performance gains - 2.5x faster vector search, gRPC transport, pull-based ingestion, and native AI agent support.

But shipping new versions is the easy part. Running OpenSearch reliably in production, at scale, across upgrade cycles - that's where things get interesting. And that's exactly what we spend our days doing.

Here's what we're bringing to the booth.

Running OpenSearch on Kubernetes - Lessons from Building the Operator

Our team leads the development of the OpenSearch Kubernetes Operator, and we recently shipped the 3.0 Alpha release - a ground-up rewrite focused on production stability. If you've run earlier versions, you know the pain: upgrade deadlocks, split-brain during rolling restarts, unreliable recovery from error states.

Operator 3.0 fixes that. The key changes:

  • Quorum-safe rolling restarts that prevent split-brain scenarios across multi-AZ and multi-tier deployments
  • Multi-namespace and multi-tenant support with namespace-scoped RBAC
  • TLS certificate hot reloading - no more pod restarts for cert rotation
  • Full OpenSearch 3.x support including gRPC port configuration
  • Init-containers and sidecars for both OpenSearch and Dashboard pods

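To make the above concrete, here is a rough sketch of what a cluster spec looks like with the operator. Field names follow the operator's public OpenSearchCluster CRD, but the exact schema varies between operator versions, and the names, sizes, and namespace here are illustrative only:

```yaml
# Hypothetical minimal OpenSearchCluster manifest for the OpenSearch
# Kubernetes Operator; verify field names against your operator version's CRD.
apiVersion: opensearch.opster.io/v1
kind: OpenSearchCluster
metadata:
  name: my-cluster
  namespace: search
spec:
  general:
    serviceName: my-cluster
    version: 3.0.0
  nodePools:
    - component: managers
      replicas: 3              # an odd count preserves quorum during rolling restarts
      roles:
        - cluster_manager
      diskSize: "30Gi"
    - component: data
      replicas: 3
      roles:
        - data
        - ingest
      diskSize: "100Gi"
```

The operator reconciles this spec into StatefulSets, services, and certificates, and handles the quorum-safe rolling restarts described above when you change it.
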
We've deployed this across customer environments at various scales and configurations. If you're running OpenSearch on Kubernetes - or considering it - come talk to us about what production-grade actually looks like. We'll share what we've learned, including the edge cases that don't show up in documentation.

Pulse, the AI SRE for OpenSearch - Stop Firefighting, Start Operating

Operating OpenSearch clusters at scale means dealing with shard allocation issues, slow queries, resource bottlenecks, configuration drift, and the occasional 3 AM alert that turns out to be a missing index setting. Most teams end up in a reactive cycle: something breaks, you investigate, you fix it, you move on until the next thing breaks.

Pulse changes that. It's our always-on AI SRE platform for OpenSearch - continuous monitoring, anomaly detection, and actionable recommendations delivered before problems become incidents. Think of it as having an OpenSearch operations expert watching your clusters around the clock.

If you're managing multiple OpenSearch clusters and spending too much time on operational toil, we'd love to show you what Pulse can do. Stop by the booth for a live demo.

OpenSearch Enterprise - A Distribution Built for Business-Critical Environments

Open-source OpenSearch is powerful. But when your search infrastructure underpins revenue-critical applications, you need more than raw software. You need predictable upgrade paths, rapid security patches, hardened configurations, and someone to call when things go sideways.

That's why we built OpenSearch Enterprise - our enterprise-hardened distribution with long-term support. It includes:

  • Long-term supported releases with predictable, tested upgrade paths
  • Rapid security patching and critical fixes
  • Enterprise-hardened configurations and operational best practices
  • Full support for both OpenSearch and the Kubernetes Operator
  • Pulse included for continuous AI-driven cluster monitoring

For teams running self-managed OpenSearch in environments where stability and safety aren't optional, this is what peace of mind looks like.


Beyond Search: OpenSearch as AI Infrastructure

One conversation we keep having with customers is about OpenSearch's role beyond traditional search. With OpenSearch 3.x delivering 9x faster vector indexing and 3x storage reduction, the platform is now a serious contender for production vector database workloads, and a solid foundation for building Agentic RAG workflows.

We're helping teams go well beyond basic semantic search:

  • Hybrid retrieval combining keyword and vector search for better relevance
  • RAG pipelines backed by OpenSearch as the retrieval layer
  • Agentic AI architectures using OpenSearch's native MCP support for tool integration and context management

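As a sketch of the hybrid retrieval idea above, the following builds an OpenSearch hybrid query body that combines a BM25 keyword leg with a k-NN vector leg. The index field names ("title", "embedding") and the query vector are placeholders; actually running this against a cluster requires a k-NN index and a search pipeline with a normalization processor to blend the two scores:

```python
def hybrid_query(text: str, vector: list[float], k: int = 10) -> dict:
    """Build a hybrid search body: lexical match plus k-NN vector search.

    Scores from the two legs are normalized and combined server-side by
    a search pipeline configured on the cluster.
    """
    return {
        "size": k,
        "query": {
            "hybrid": {
                "queries": [
                    # Keyword leg: classic BM25 full-text match
                    {"match": {"title": {"query": text}}},
                    # Vector leg: approximate nearest-neighbor search
                    {"knn": {"embedding": {"vector": vector, "k": k}}},
                ]
            }
        },
    }

body = hybrid_query("opensearch operator", [0.1, 0.2, 0.3], k=5)
```

The same body can then be passed to an OpenSearch client's search call with the hybrid search pipeline selected, so relevance tuning stays in one place instead of being stitched together client-side.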
If you're already running OpenSearch for search or log analytics, you may not need a separate vector database. We can show you how to consolidate.

Come Find Us

We'll be at the OpenSearch booth at both events:

  • KubeCon + CloudNativeCon EU - Amsterdam, RAI Amsterdam, March 23-26
  • OpenSearchCon EU - Prague, Prague Marriott Hotel, April 16-17

Want to make sure we have time to sit down properly? Schedule a meeting ahead of time. No slide decks, no sales pitch - just a real technical conversation about your OpenSearch challenges.

Whether you want to talk about migrating from Elasticsearch, scaling vector search to billions of documents, tuning cluster performance, or getting OpenSearch production-ready on Kubernetes - we're here for it.

See you in Amsterdam and Prague.