Imply
At Imply, we are on a mission to help developers become the new heroes of analytics. Our unique database, built on Apache Druid, enables them to develop the next generation of analytics applications. With Imply, developers can build without constraints: our database lets them create interactive data experiences on streaming and batch data, at limitless scale and with the best economics.
Backed by leading investors including a16z and Bessemer Venture Partners, Imply is on a fast growth trajectory - disrupting the $100B database market - with customers including Pepsi, Zillow, Splunk and more. Come join our team of disruptors, pioneers, and innovators!
The Role:
The Druid systems team builds the machinery that allows a Druid cluster to scale to thousands of nodes. The team works on areas such as real-time streaming ingestion, node management, and data-balancing algorithms within Druid. Its work directly impacts cluster availability, ingestion throughput, and query performance. In this role, you will add capabilities to core Druid so that our clusters can scale even further than is possible today. You will also be heavily involved in the development and technical direction of the open-source Apache Druid project.
Responsibilities:
- Build a highly scalable and robust query engine in Druid that can manage thousands of workers
- Build data management capabilities in Druid to help our users reduce cost and improve query performance
- Work with the field engineering team so they can offer the best support to our customers
- Help grow the Apache Druid community through code and design reviews
Requirements:
- Experience developing high-concurrency, performance-oriented distributed Java systems
- 5+ years of experience working as an individual contributor
- Solid grasp of good software engineering practices, such as code reviews, and a deep focus on testability and quality
- Strong communication skills: ability to explain complex technical concepts to designers, support staff, and other engineers
- Bachelor’s degree in computer science, engineering, or a related field (or equivalent experience)
Bonus Points:
- Experience working on the internals of large-scale distributed systems and databases such as Hadoop, Spark, Presto, or Elasticsearch
- A history of open-source contributions is a plus; being a contributor on data-related projects is a big plus
What we offer:
- Provident Fund: employer contribution equivalent to your own contribution
- Private Medical Insurance
- Group Life & Accident Insurance
- Paid Time Off
- Phone/Internet Allowance
- Home Office Equipment Reimbursement
Don’t meet every single requirement? Studies have shown that members of certain minority groups are less likely to apply to jobs unless they meet every qualification. At Imply, we are dedicated to building a diverse, inclusive, and authentic workplace. If you’re excited about this role but your past experience doesn’t align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right candidate for this or other roles in the future.
Imply is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, color, gender identity or expression, marital status, national origin, disability, protected veteran status, race, religion, pregnancy, sexual orientation, or any other characteristic protected by applicable laws, regulations and ordinances.
Please note, applications and candidate submissions are subject to our privacy policy and, for California residents, the CCPA terms available at https://imply.io/privacy.
—
Attention: Imply Applicants
Due to reports of phishing, we’re requesting that all Imply applicants apply through our official Careers page at imply.io/careers. All official communication from Imply will come from email addresses ending with @imply.io.