Tixr
- Deep knowledge and experience with architectures for modern data infrastructure including data lakes, data warehouses, ETL pipelines, physical and logical data formats, data processing systems, data reliability, security, governance, and performance
- Improve and migrate existing code for better performance, making deliberate and thoughtful tradeoffs where necessary
- Extensive knowledge of different kinds of data stores (row-oriented, columnar, key/value, document, graph, etc.) and their use cases and tradeoffs
- Proficiency with various big data technologies including BigQuery, Redshift, Parquet, Spark, and AWS Glue; expertise in Java (Hibernate)
- Independently responsible for the entire lifecycle of projects and systems, including design, development, and deployment
- High standards and expectations for software development; excellent written and verbal communication skills
- Strive to write elegant and maintainable code; comfortable picking up new technologies
- You break down complex projects into simple systems that can be built and maintained by less experienced engineers
- You are proficient in working with distributed systems and have experience with distributed processing frameworks that handle data in both batch and real time
- 5+ years of experience as a Software Engineer (Java preferred) with a focus on data
- 5+ years of experience with distributed ingestion, processing, storage, and access of big data (bonus points for experience with AI/ML)
- 5+ years of experience with MySQL, Couchbase, Kafka, SageMaker
- 7+ years of experience leveraging tools and infrastructure provided by GCP and AWS
- Salary Range: $130,000 - $180,000 + Bonus + Equity
- 100% Remote, with Hybrid Optional
- Paid Health Benefits ($0 Premiums)
- Dental, Vision, Life plans
- Open Vacation
- 401k (50% match up to 3%)
- Paid Equipment
- Education Stipend
- Paid Holidays & Birthdays Off
- Parental Leave
- Team Offsites / Events
- Ticket hookups!