Pax8
No matter who you are, Pax8 is a place you can call home. We know there’s no such thing as a “perfect candidate,” so we don’t look for the right fit – instead, we look for the add. We encourage you to apply for a role at Pax8 even if you don’t meet 100% of the bullet points. We believe in cultivating an environment with a diversity of perspectives, so that we can all thrive in an inclusive workplace.
We are only as great as our people. And we have great people all over the world. No matter where you live and work, you’re a part of the Pax8 team. This means embracing hybrid and remote work whenever possible.
Position Summary:
As a Data Engineer I, you’ll be diving into the exciting world of building data systems that drive critical insights and decision-making across Pax8. This role is perfect for someone who enjoys working with big datasets, has a passion for problem-solving, and thrives on collaboration. You’ll have the opportunity to interface with stakeholders like Product Managers and business teams to gather requirements, ensuring the data solutions we build are aligned with business needs. Your work will directly impact end users, creating data flows and analytic processes that make a difference.
You’ll be at the heart of our data operations, helping to build and maintain data pipelines that fuel analytic workflows and processes. Your tasks will include developing systems that efficiently collect, transform, store, and manage data using well-documented techniques. You’ll work alongside experienced technical leaders to plan and execute detailed tasks. You’ll also conduct simple investigative analyses and tests to ensure the data is reliable and meets business requirements. Working with stakeholders, you’ll gain a deep understanding of their needs, helping shape visual solutions for tools like Power BI. Your insights will contribute to improving data visualizations and stakeholder reporting. You’ll also contribute to data modeling efforts, ensuring our data infrastructure is optimized for analytics and reporting.
To succeed in this role, you’ll need strong technical chops in SQL and Python, with experience using data frame tools like pandas and NumPy. A solid understanding of object-oriented programming (OOP) principles and a proven ability to apply them in developing clean, efficient, and scalable solutions will be crucial. You’re familiar with large-scale analytical databases like Redshift, BigQuery, Presto, or Hive, and you have some experience with data orchestration tools such as Airflow or AWS Glue. A good grasp of dimensional modeling will set you up for success in designing effective data structures. Strong communication skills will allow you to effectively interface with stakeholders and gather requirements, and stakeholder management will be key as you translate business needs into technical solutions. Experience in the B2B, Retail, or SaaS industries is a plus, giving you the context to better understand the business side of the role.
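To make the day-to-day concrete, here is a minimal sketch of the kind of transform this role involves (not Pax8’s actual code; the tables, column names, and data are hypothetical): using pandas to split raw order records into a customer dimension and a fact table, the basic move in dimensional modeling.

```python
import pandas as pd

# Hypothetical raw order records, as they might arrive from an upstream source.
raw = pd.DataFrame({
    "order_id": [1001, 1002, 1003],
    "customer_name": ["Acme Co", "Globex", "Acme Co"],
    "customer_region": ["West", "East", "West"],
    "amount_usd": [250.0, 125.5, 90.0],
})

# Build a customer dimension: one row per distinct customer, plus a surrogate key.
dim_customer = (
    raw[["customer_name", "customer_region"]]
    .drop_duplicates()
    .reset_index(drop=True)
)
dim_customer["customer_key"] = dim_customer.index + 1

# Build the fact table: measures plus a foreign key into the dimension.
fact_orders = raw.merge(dim_customer, on=["customer_name", "customer_region"])
fact_orders = fact_orders[["order_id", "customer_key", "amount_usd"]]

print(dim_customer)
print(fact_orders)
```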
Essential Responsibilities:
- Learns coding techniques/standards and applies them to their work
- Defines, builds, tests, and implements scalable data pipelines using Python and SQL
- Transforms data to support varied use cases
- Optimizes existing data pipelines and improves existing code quality
- Writes unit and integration tests
- Works collaboratively with peers to solve pressing data issues
- Participates in on-call rotation
Ideal Skills, Experience, and Competencies:
- One (1) to three (3) years of relevant data engineering experience.
- Intermediate experience with the Python programming language.
- Intermediate experience with SQL.
- Experience with Data Modeling.
- Exposure to a JVM language.
- Exposure to Apache Spark or other distributed processing engines.
- Exposure to Apache Kafka or other stream processing frameworks.
- Exposure to Terraform, Docker, Kubernetes, or other similar infrastructure tooling.
- Exposure to job orchestration and/or ETL tools such as Airflow, Prefect, Glue, Talend, or Informatica.
- Exposure to cloud environments such as AWS, Azure, or Google Cloud.
- Exposure to analytical databases such as Redshift, Athena, BigQuery, and Presto.
- Ability to build partnerships and work collaboratively with others to meet shared objectives.
- Ability to actively seek new ways to grow using both formal and informal development challenges.
- Ability to effectively absorb and apply peer feedback.
Required Education & Certifications:
- B.A./B.S. in a related field or equivalent work experience.
Compensation:
- Qualified candidates can expect compensation in the range of $93,000 to $115,000, or more depending on experience.
Expected Closing Date: 11/15/24
Benefits:
- Non-Commissioned Bonus Plans or Variable Commission
- 401(k) plan with employer match
- Medical, Dental & Vision Insurance
- Employee Assistance Program
- Employer Paid Short & Long Term Disability, Life and AD&D Insurance
- Flexible, Open Vacation
- Paid Sick Time Off
- Extended Leave for Life events
- RTD Eco Pass (For local Colorado Employees)
- Career Development Programs
- Stock Option Eligibility
- Employee-led Resource Groups