Design and develop large-scale multi-modal models for autonomous driving systems, focusing on the integration of vision, language, and control. Collaborate with researchers and engineers on model deployment and optimization.
XPENG is a leading smart technology company at the forefront of innovation, integrating advanced AI and autonomous driving technologies into its vehicles, including electric vehicles (EVs), electric vertical take-off and landing (eVTOL) aircraft, and robotics. With a strong focus on intelligent mobility, XPENG is dedicated to reshaping the future of transportation through cutting-edge R&D in AI, machine learning, and smart connectivity.
We are looking for a full-time Machine Learning Engineer / Research Scientist to drive the modeling and algorithmic development of XPENG’s next-generation Vision-Language-Action (VLA) Foundation Model — the core brain that powers our end-to-end autonomous driving systems.
You will work closely with world-class researchers, perception and planning engineers, and infrastructure experts to design, train, and deploy large-scale multi-modal models that unify vision, language, and control. Your work will directly shape the intelligence that enables XPENG’s future L3/L4 autonomous driving products.
Key Responsibilities:
- Design and implement large-scale multi-modal architectures (e.g., vision–language–action transformers) for end-to-end autonomous driving.
- Develop pretraining and fine-tuning strategies leveraging massive labeled and unlabeled fleet data (images, video, LiDAR, CAN bus, maps, human driving behaviors, etc.).
- Research and integrate cross-modal alignment (e.g., visual grounding, temporal reasoning, policy distillation, imitation and reinforcement learning) to improve model interpretability and action quality.
- Collaborate with infrastructure engineers to scale training across thousands of GPUs using distributed training frameworks (FSDP, DDP, etc.); a minimal sketch follows this list.
- Conduct systematic ablation, evaluation, and visualization of model behavior across perception, reasoning, and planning tasks.
- Contribute to model deployment optimization, including quantization, export, and latency–accuracy trade-offs for onboard execution.
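To make the distributed-training responsibility above concrete, here is a minimal, hedged sketch of sharding a model with PyTorch FSDP. The placeholder network, dimensions, and training loop are invented for illustration and stand in for the actual VLA architecture and data pipeline; assume a multi-GPU host and a launch via torchrun.

```python
# Minimal FSDP training sketch (illustrative only; not XPENG's training stack).
# Launch with: torchrun --nproc_per_node=8 train_fsdp_sketch.py
import torch
import torch.distributed as dist
from torch.distributed.fsdp import FullyShardedDataParallel as FSDP

def main():
    dist.init_process_group("nccl")                      # one process per GPU
    local_rank = dist.get_rank() % torch.cuda.device_count()
    torch.cuda.set_device(local_rank)

    # Placeholder network standing in for a vision-language-action transformer.
    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096),
        torch.nn.GELU(),
        torch.nn.Linear(4096, 3),                        # e.g., steering / throttle / brake
    ).cuda()

    model = FSDP(model)                                  # shard params, grads, optimizer state
    optim = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(10):                               # stand-in for a real data loader
        x = torch.randn(32, 1024, device="cuda")
        target = torch.randn(32, 3, device="cuda")
        loss = torch.nn.functional.mse_loss(model(x), target)
        optim.zero_grad()
        loss.backward()
        optim.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```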
Minimum Qualifications:
- Master’s degree or higher in Computer Science, Electrical/Computer Engineering, or a related field, with 3+ years of experience in deep learning research or productization.
- Strong proficiency in PyTorch and modern transformer-based model design.
- Experience in large-scale pretraining or multi-modal modeling (vision, language, or planning); a toy example of such a model follows this list.
- Deep understanding of representation learning, temporal modeling, and self-supervised or reinforcement learning techniques.
- Familiarity with distributed training (DDP, FSDP) and large-batch optimization.
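As a toy illustration of the transformer-based multi-modal modeling named in the qualifications above, the sketch below fuses image-patch and text-token embeddings in a shared encoder and decodes a short action vector. All module names, dimensions, and the early-fusion design are assumptions made for the example, not a description of XPENG's model.

```python
# Toy vision-language-action encoder (sketch of the general idea only).
import torch
import torch.nn as nn

class ToyVLAModel(nn.Module):
    def __init__(self, vocab_size=32000, d_model=256, n_actions=3):
        super().__init__()
        # Vision branch: flattened 16x16x3 patches projected to d_model.
        self.patch_embed = nn.Linear(16 * 16 * 3, d_model)
        # Language branch: standard token embeddings.
        self.token_embed = nn.Embedding(vocab_size, d_model)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=4)
        # Action head: pooled representation -> continuous control targets.
        self.action_head = nn.Linear(d_model, n_actions)

    def forward(self, patches, tokens):
        # patches: (B, P, 768); tokens: (B, T) integer ids
        vis = self.patch_embed(patches)
        txt = self.token_embed(tokens)
        fused = torch.cat([vis, txt], dim=1)             # simple early fusion
        hidden = self.encoder(fused)
        return self.action_head(hidden.mean(dim=1))

model = ToyVLAModel()
actions = model(torch.randn(2, 196, 768), torch.randint(0, 32000, (2, 16)))
print(actions.shape)  # torch.Size([2, 3])
```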
Preferred Qualifications:
- PhD in CS/CE/EE or a related field, with 1+ years of relevant industry experience.
- Publication record in top-tier AI conferences (CVPR, ICCV, NeurIPS, ICLR, ICML, ECCV).
- Prior experience building foundation or end-to-end driving models, or LLM/VLM architectures (e.g., ViT, Flamingo, BEVFormer, RT-2, or GRPO-style policies).
- Familiarity with RLHF/DPO/GRPO, trajectory prediction, or policy learning for control tasks (a minimal loss sketch follows this list).
- Proven ability to collaborate cross-functionally with infra, perception, and planning teams to deliver production-ready models.
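For the preference-optimization methods mentioned above (RLHF/DPO/GRPO), here is a minimal sketch of a DPO-style pairwise loss computed from per-trajectory log-probabilities. The function and variable names are illustrative; no specific library or training setup is assumed.

```python
# Sketch of a DPO-style preference loss (illustrative only).
# Inputs are summed log-probabilities of "chosen" vs. "rejected" trajectories
# under the policy being trained and a frozen reference policy.
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logp, policy_rejected_logp,
             ref_chosen_logp, ref_rejected_logp, beta=0.1):
    # Margin: how much more the policy prefers the chosen trajectory over the
    # rejected one, relative to the reference policy's preference.
    chosen_ratio = policy_chosen_logp - ref_chosen_logp
    rejected_ratio = policy_rejected_logp - ref_rejected_logp
    margin = beta * (chosen_ratio - rejected_ratio)
    return -F.logsigmoid(margin).mean()

# Toy usage with random log-probabilities for a batch of 4 preference pairs.
logps = [torch.randn(4) for _ in range(4)]
print(dpo_loss(*logps))
```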
What we provide:
- A collaborative, research-driven environment with access to massive real-world data and industry-scale compute.
- An opportunity to work with top-tier researchers and engineers advancing the frontier of foundation models for autonomous driving.
- Direct impact on the next generation of intelligent mobility systems and on the broader transportation revolution through advances in autonomous driving.
- Competitive compensation package.
- Snacks, lunches, dinners, and fun activities.
The base salary range for this full-time position is $244,140-$413,160, in addition to bonus, equity and benefits. Our salary ranges are determined by role, level, and location. The range displayed on each job posting reflects the minimum and maximum target for new hire salaries for the position across all US locations. Within the range, individual pay is determined by work location and additional factors, including job-related skills, experience, and relevant education or training.
We are an Equal Opportunity Employer. It is our policy to provide equal employment opportunities to all qualified persons without regard to race, age, color, sex, sexual orientation, religion, national origin, disability, veteran status, marital status, or any other protected category set forth in federal or state regulations.
Top Skills
Distributed Training
Large-Scale Pretraining
Multi-Modal Modeling
PyTorch
Reinforcement Learning
Vision-Language-Action Transformers
XPeng Motors Palo Alto, California, USA Office
Palo Alto, CA, United States, 94301


