We are seeking an individual to conduct advanced engineering in 3D perception and machine learning to address challenges in automated driving.
The candidate will have the opportunity to:
- Create and curate internal datasets.
- Adapt models trained on public datasets to internally generated data.
- Perform large-scale data processing.
- Perform basic data mining.
- Evaluate and document the performance of different methods.
Required Qualifications:
- M.S. or Ph.D. student in Computer Science or Engineering.
- 1+ years of research and engineering experience with computer vision and machine learning algorithms (e.g., DNNs, RNNs, GNNs, GANs, ViTs).
- 1+ years of experience in computer vision and deep learning, with a focus on at least two of the following areas: object detection/segmentation, 3D scene understanding, autonomous driving, sensor fusion, state estimation, structure from motion.
- Knowledge of automated driving software stack components (e.g., Perception, Localization, Object Fusion, Planning, Motion Control).
- Programming experience in Python and C++, and hands-on experience with libraries such as PyTorch.
- Demonstrated willingness and ability to learn and grasp new concepts quickly.
- Self-starter and self-motivated.
Desired Qualifications:
- Experience working with open-source autonomous driving datasets.
- Experience with geographic data (WGS84/UTM).
- Experience working with various sensor modalities including cameras and LiDAR.
- Experience developing deep learning architectures and performing model optimization.
Additional Information
The U.S. base salary range for this intern position is $26.50 – $68.00 per hour. Within the range, individual pay is determined by several factors, including, but not limited to, type of degree, work experience and job knowledge, complexity of the role, type of position, and job location. Your Hiring Manager can share more details about the specific salary range for this position during the interview process.


