We live and breathe Elasticsearch, the Elastic Stack, and OpenSearch. We are looking to add a team member who is passionate about search, data, and analytics, and who knows Elasticsearch well.
This role is fully remote and highly flexible.
What you will do:
- Help our customers maintain and improve their Elasticsearch and OpenSearch infrastructure.
- Solve complex search, information retrieval and analytics problems on a daily basis.
- Get to the bottom of sophisticated issues.
- Make a difference in mission-critical systems at large organizations and startups alike, worldwide.
What you are:
- A highly experienced, hands-on engineer who loves it.
- A problem solver with a can-do attitude who gets things done.
- At least 3 years of experience with Elasticsearch as an engineer or in DevOps (not just as a user).
- A good learner, and self-motivated.
- Deep Java knowledge - an advantage.
- Open-source experience - a huge advantage!
We are looking for a strong Data Engineer with Apache Spark / Databricks experience to join our growing team of builders. You will be working with Fortune 100 companies and startups alike to build and improve their modern data platforms at huge scales.
What you will do:
- Help our customers grow and scale their Big Data infrastructure.
- Solve complex data and infrastructure problems on a daily basis.
- Innovate and use bleeding-edge technologies and methods in the data space.
- Make a difference in mission-critical systems at large organizations and startups alike, worldwide.
- Be an active part of building innovative solutions involving Big Data technologies, Clouds and complex backend systems.
What you are:
- A highly experienced, hands-on engineer who loves it.
- A problem solver with a can-do attitude who gets things done.
- At least 3 years of experience in Java (Kotlin and Scala count as well).
- Proven experience with Apache Spark and/or Databricks.
- A good learner, and self-motivated.
- Experience with Delta Lake, Iceberg, Apache Flink, AWS - an advantage.
- Open-source experience - a huge advantage!
We are looking for a strong Big Data engineer to join our growing team of doers. You will be working with Fortune 100 companies and startups alike to build and improve their modern data platforms at huge scales.
What you will do:
- Help our customers grow and scale their Big Data infrastructure.
- Solve complex data and infrastructure problems on a daily basis.
- Make a difference in mission-critical systems at large organizations and startups alike, worldwide.
- Be an active part of building innovative solutions involving Big Data technologies, Clouds and complex backend systems.
What you are:
- A highly experienced, hands-on engineer who loves it.
- A problem solver with a can-do attitude who gets things done.
- At least 3 years of experience in Python and Java (Kotlin and Scala count as well).
- Proven experience with at least 2 of the following: Kafka, Elasticsearch, Apache Spark, MongoDB, Flink, Presto/Trino, Kafka Streams, ClickHouse.
- Proven experience with at least one public cloud (Google Cloud or AWS preferred).
- A good learner, and self-motivated.
- Familiarity with Docker and Kubernetes.
- Experience with orchestration tools like Airflow, Dagster, etc. - an advantage.
- Open-source experience - a huge advantage!
We are looking for a backend engineer familiar with the Java / JVM ecosystem to join our growing team of builders of Big Data platforms. You will be working with Fortune 100 companies and startups alike to build and improve their modern data platforms at huge scales.
What you will do:
- Help our customers build and scale their Big Data platforms.
- Solve complex data and infrastructure problems on a daily basis.
- Write interesting and sometimes challenging Spark and Flink jobs in Java, Scala, or Kotlin.
- Make a difference in mission-critical systems at large organizations and startups alike, worldwide.
- Be an active part of building innovative solutions involving Big Data technologies, Clouds and complex backend systems.
What you are:
- A highly experienced, hands-on engineer who loves it.
- A problem solver with a can-do attitude who gets things done.
- At least 3 years of experience working as a backend developer, data engineer, or in a similar role.
- Proficient in JVM-based programming languages such as Java, Kotlin, or Scala.
- A strong understanding of language fundamentals, including class loading, garbage collection, memory management, parallel processing, and multithreading.
- A good learner, and self-motivated.
- Experience working with Linux environments and SQL databases.
- Open-source experience - a huge advantage!
We design and build platforms and products that reduce the friction in various BigData and Search operations.
As a Business Development Representative at BigData Boutique, you will...
- Fall in love with our products, and spread that love forward to potential customers
- Build a targeted list of accounts and leads
- Help create the messaging for different personas on the different platforms
- Generate a pipeline of meetings and opportunities through outbound efforts
- Use an omni-channel approach to generate sales-qualified meetings - LinkedIn, cold calls, emails, etc.
To be a Business Development Representative at BigData Boutique you need...
- A strong desire to grow your career in sales
- To be highly organized and able to work in a fast-paced, quota-driven environment
- At least 3 years of any customer-facing experience
- Fluent English – Mandatory (read/write/speak)
- Excellent communication and interpersonal skills
- To be motivated by both individual and team achievement, and able to operate under minimal supervision
- BA/BS degree or equivalent practical experience
As our business grows, we need a dedicated person to build, grow, and maintain strong relationships with our customers and partners. We are therefore seeking a talented individual who will be responsible for fostering partnerships with both our customers and strategic partners such as AWS, Google, and others.
This role is founded on understanding customers’ needs, the modern data ecosystem, and the solutions that optimize customer value.
The primary accountability is generating business growth by scouting new partners, growing existing partnerships, and being intimately familiar with existing customers and their requirements.
The position is full-time and fully remote.
In this role, you will...
- Maintain and expand current partnerships to grow impact on the business and create additional customer value.
- Work with partner teams from all around the world to generate new business, innovate on lead acquisition, marketing and more.
- Develop and enhance the client relationship with assigned partners, building engagement strategies that increase revenues for both BigData Boutique and the Alliance Partners.
- Manage post-sale activities with existing customers to enrich their experience and deepen the mutual commitment.
- Serve as the focal point for escalation for issue resolution with the Strategic Partners, and work with the internal functions to solve any issues.
Requirements:
- B.A. degree or equivalent – a must.
- Native Hebrew speaker with excellent English.
- Technical background in the software, cloud or data space.
- Entrepreneurship, can-do attitude, and result orientation.
- Attention to detail and problem-solving skills.
- Excellent communication and interpersonal skills.
- Motivated by individual and team achievement as well as able to operate under minimal supervision.
At BigData Boutique, we are at the forefront of modern data infrastructure. We specialize in providing cutting-edge software, architecture solutions, and optimization services for complex data environments. Our mission is to ensure that our clients' data operations are seamless, scalable, and resilient.
Whether it's managing high-throughput analytical engines or fine-tuning traditional relational databases, we are the experts that companies turn to when their data needs to move at the speed of business.
As a Software Engineer, you will be a key contributor to our core engineering team, building and maintaining the tools that power our data operations platform. You won't just be writing scripts; you will be designing robust backend systems that interact with diverse database technologies, often including data platforms powering Fortune 100 success stories.
Key Responsibilities:
- Develop, test, and maintain high-performance Python applications and APIs.
- Monitor, manage, and optimize data flows.
- Harness GenAI to your will - use it smartly to become more efficient, and build tooling and agentic systems that automate workflows.
- Collaborate with DevOps and Data Engineers to troubleshoot complex performance bottlenecks.
- Contribute to the automation of data pipelines and infrastructure management.
- Participate in code reviews and architectural discussions to ensure high code quality and system reliability.
Minimum Requirements:
- Experience: At least 2 years of professional software development experience with Python.
- Database Foundations: Proven understanding and hands-on experience with at least one transactional database system (e.g., PostgreSQL, MySQL, SQL Server).
- Core Skills: Strong command of Pythonic principles, asynchronous programming, and RESTful API design.
- Problem Solving: A methodical approach to debugging and a drive to understand "under the hood" mechanics.
- GenAI Ready: An open-minded mentality, ready to learn new things and leverage GenAI tooling and mindset without compromising quality and accuracy.
Major Advantages:
- OLAP & Search Engines: Experience with ClickHouse or Elasticsearch is highly valued.
- Database Internals: Deep knowledge of database engine internals and query optimization.
Nice-to-Have Skills:
- Expert Tuning: In-depth experience in performance tuning for MySQL or PostgreSQL (e.g., indexing strategies, configuration optimization, execution plan analysis).
- Infrastructure: Familiarity with Docker, Kubernetes, or cloud-native data services.