About Architect
Architect is an AI research and product lab for chip design. We build AI models and systems that can explore, design, optimize, and verify new hardware. Our goal is to reimagine chip design using AI, cut down ASIC design time and cost, and enable a new era of ultra-efficient, domain-specific chips powering the future of computation.
Born out of Stanford, our team blends researchers and engineers from Anthropic, DeepMind, Meta, Apple, Intel, and other frontier labs. Backed by leading VCs and angels, including the Chief Scientist at Google, Stanford professors, and founders of chip companies, Architect operates in stealth, pushing the limits of AI4EDA and building the intelligence layer for the hardware revolution.
What You'll Do
As a Research Intern at Architect, you will spend 3 months working alongside the founding team to push the boundaries of how AI models explore and optimize hardware designs. This is a high-impact role where your experiments will directly influence our core modeling roadmap.
- Co-design and implement reinforcement learning experiments (GRPO/PPO/DPO), including training data mixes and reward-signal explorations.
- Contribute to research on post-training techniques, running ablation studies to improve model reasoning and alignment capabilities.
- Implement and test new algorithms for model fine-tuning and evaluation, helping to translate research papers into working prototypes.
- Analyze experimental results and debug model behavior to help establish best practices for our training recipes.
What We'd Like to See
Qualifications & Skills:
- Education: Currently pursuing a PhD or Master’s degree in Computer Science, Machine Learning, Mathematics, or a related field. Exceptional undergraduates with strong research experience are also encouraged to apply.
- RL Knowledge: Strong academic understanding or project experience with Reinforcement Learning (e.g., PPO, DPO, GRPO). You should be comfortable reading and implementing concepts from recent research papers.
- Coding Proficiency: Strong proficiency in Python and deep learning frameworks (PyTorch). You should be able to write clean, efficient research code.
- Research Mindset: A fast learner who is comfortable navigating ambiguity. You enjoy analyzing complex problems and iterating quickly on experiments.
- LLM Familiarity: Experience with training or fine-tuning Large Language Models (LLMs) or familiarity with the modern NLP stack (Transformers, HuggingFace, etc.).
Bonus:
- Previous internship experience at frontier AI labs or research organizations.
- Publications (or submissions) in top ML venues (NeurIPS, ICLR, ICML) or EDA venues (DAC, ICCAD).
- Familiarity with hardware design concepts (Verilog, RTL, EDA tools).
What We Offer
- Competitive internship stipend
- Mentorship from a team of researchers and engineers from Anthropic, DeepMind, Meta, and Stanford
- Opportunity to work on 0→1 problems in AI-driven chip design