Opportunities at Architect

Founding Member of Technical Staff - ML

About Architect


Architect is an AI research and product lab for chip design. We build AI models and systems that can explore, design, optimize, and verify new hardware. Our goal is to reimagine chip design using AI, cut down ASIC design time and cost, and enable a new era of ultra-efficient, domain-specific chips powering the future of computation.


Born out of Stanford, our team blends researchers and engineers from DeepMind, Meta, Apple, Intel, and other frontier labs. Backed by leading VCs and angels, including the Chief Scientist at Google, Stanford professors, and founders of chip companies, Architect operates in stealth, pushing the limits of AI4EDA and building the intelligence layer for the hardware revolution.


What You’ll Do


As a Founding Member of the Technical Staff (ML) at Architect, you’ll be at the forefront of training AI models for chip-design tasks such as RTL code generation, verification, architectural exploration, multimodal capabilities, and tool use.

You’ll design multi-agent systems that reason about, generate, and verify real silicon, integrating seamlessly into chip engineers’ workflows. This is a hands-on, 0→1 role where you’ll own the entire ML lifecycle: data pipelines, model training, deployment, and iteration, while collaborating closely with hardware and product teams. You’ll push the limits of what’s possible with LLMs, post-training, RL, and graph-based methods for EDA, experimenting at the bleeding edge and helping define how AI truly designs hardware.


What We’d Like to See

  • Degree: PhD in Computer Science, EECS, Mathematics, or a closely related field, preferably with a specialization in Machine Learning, Deep Learning, or Artificial Intelligence; or a BS/MS with a strong research-engineering background from frontier labs, deep-tech AI startups, or similar.
  • Background: We don’t expect candidates to have any chip-design background. Instead, we prefer candidates with a strong ML background and an interest in applying it to hardware design.
  • Hands-On Experience:
    • Strong industry or research background building end-to-end ML pipelines: data curation and preparation, plus modeling, including mid-training and post-training with RL.
    • 2+ years of industry experience; 4+ years strongly preferred.
  • Core Skills:
    • Deep expertise in reinforcement learning and post-training, with a proven track record of taking models from research to real-world deployment—not just publishing papers.
    • Experience training and fine-tuning LLMs and code models for reasoning, tool use, and structured generation tasks (especially RTL and hardware design domains).
    • Strong hands-on background in local LLM/VLM deployment (vLLM, SGLang, LM Studio, Ollama, etc.) and distributed training/serving (QLoRA, ZeRO, PagedAttention, CUDA, PyTorch parallelism).
    • Comfortable owning end-to-end RL post-training workflows—from reward modeling and environment design to test-time optimization and scaling.
    • Fast-moving builder who can prototype, benchmark, and productionize training pipelines with tight feedback loops.
    • Passionate about pushing state-of-the-art performance in code generation and reasoning under real-world engineering constraints.
  • Systems Knowledge: Bonus if comfortable with cloud-native architectures and distributed systems.
  • Bonus:
    • Experience on a post-training team at a frontier lab such as OpenAI, Anthropic, DeepMind, Mistral, MSL, or Cohere.
    • A foundation in Electrical/Computer Engineering and chip-design or verification processes (helpful, but not required).
    • Publications in top ML (NeurIPS, ICLR, ICML) or EDA (DAC, ICCAD, DVCon) venues.
    • Early hire as a founding ML engineer or researcher at an AI deep-tech startup.

What We Offer

  • Competitive salary and meaningful equity stake
  • Fast-paced startup with autonomy and visible impact
  • Cutting-edge AI-driven chip design challenges


Engineering

Palo Alto, CA
