Software Engineer, Perception

$160k - $240k

Agtonomy

San Francisco, CA (Hybrid Remote)
About Us

Agtonomy is pioneering advanced automation and AI solutions to transform agriculture and beyond. Initially focused on specialty crops, our TeleFarmer™ platform addresses labor-intensive needs with automation, turning conventional equipment into autonomous machines. By partnering with leading manufacturers like Doosan Bobcat, we integrate smart technology into tractors and other machinery, enhancing safety and efficiency. As we expand into ground maintenance and other industrial applications, our expert team continues to address key challenges of labor shortages, sustainability, and profitability across various industries.

About the Role

As a Perception / Machine Learning Engineer on the Autonomy Team, you will play a key role in solving challenging perception problems in outdoor vehicle automation. Leveraging your experience, you will implement state-of-the-art ML perception techniques to improve how Agtonomy’s tractors perceive and understand the environments where they operate. You will work closely with embedded, localization, and planning engineers on the team to design and evolve the upstream and downstream interfaces of the perception system. This role is perfect for someone who loves implementing ML to tackle real-world problems and is excited about applying their experience to make robots perceive in rugged agricultural environments.

What You'll Do

  • Applying machine learning to solve challenging perception problems for autonomous systems (e.g. object detection, semantic segmentation, instance segmentation, dense depth, optical flow, and tracking).
  • Driving the architecture, deployment, and performance characterization of our deep learning models.
  • Refining and optimizing models for low-latency inference on embedded hardware.
  • Designing and building cloud-based training and labeling pipelines.
  • Collaborating with the hardware and embedded teams on sensor selection and vehicle packaging, taking safety requirements into account.
  • Writing performant, well-tested software and improving the code quality of the entire Autonomy team through code and design reviews.

What You'll Bring

  • 5+ years of software development experience applying computer vision, machine learning, and robotic perception techniques.
  • Foundational understanding of deep learning: model layer design, loss function intuition, training best practices.
  • Experience handling large datasets efficiently and organizing them for training and evaluation.
  • Experience curating synthetic and real-world image datasets for training.
  • Strong proficiency in modern C++ and Python and experience writing efficient algorithms for resource-constrained embedded systems.
  • Ability to thrive in a fast-moving, collaborative, small team environment with lots of ownership.
  • Excellent analytical, communication, and documentation skills with demonstrated ability to collaborate with interdisciplinary stakeholders outside of Autonomy.
  • An eagerness to get your hands dirty by testing your code on real robots at real customer farms (gives “field testing” a whole new meaning!).

What Makes You a Strong Fit

  • Experience architecting multi-sensor ML systems from scratch.
  • Experience with compute-constrained pipelines: optimizing models to balance the accuracy vs. performance tradeoff, leveraging TensorRT, model quantization, etc.
  • Experience implementing custom operations in CUDA.
  • MS or PhD in Robotics, Computer Science, Computer Engineering, or a related field.
  • Publications at top-tier perception/robotics conferences (e.g. CVPR, ICRA, etc.).
  • Passion for sustainable agriculture and electric vehicles.
Benefits

  • 100% covered medical, dental, and vision for the employee (coverage for a partner, children, or family is available at additional cost)
  • Commuter Benefits
  • Flexible Spending Account (FSA)
  • Life Insurance
  • Short- and Long-Term Disability
  • 401k Plan
  • Stock Options
  • A collaborative work environment alongside passionate, mission-driven folks!

Our interview process is generally conducted in five (5) phases:

1. Phone Screen with People Operations (30 minutes)
2. Video Interview with the Hiring Manager (45 minutes)
3. Coding Challenge and Technical Challenge (1 hour with an Autonomy Engineer)
4. Panel Interview (Video interviews scheduled with key stakeholders, each interview will be 30 to 45 minutes)
5. Final Interviews (CEO, CFO, VP of Engineering, 30 minutes each)