Opportunities at Architect

Founding Machine Learning Engineer

About Architect


Architect is building the intelligence layer for chip companies, cutting time to tapeout from 3 years to 6 months. Our AI amplifies your hardware engineers across DI, DV, and PD workflows, helps build IP from the ground up, and keeps you in the loop at every step.


Founded out of Stanford, we’re one of the fastest-moving Bay Area startups, blending frontier ML with deep chip design expertise to transform a trillion-dollar industry. Backed by top VCs and legendary angels, we’re assembling a world-class team of founding engineers and researchers to architect the next era of silicon.


What You’ll Do

  • Train, fine-tune, and evaluate cutting-edge models for RTL, architecture, and PD tasks.
  • Design and deploy multi-agent systems for RTL code generation, design verification, and PD automation—leveraging MCP servers, toolchains, design space optimization, and real chip workflows.
  • Own the entire ML lifecycle, from raw data pipelines to live deployment inside our production-grade platform for chip engineers.
  • Collaborate cross-functionally with hardware engineers, product, and design to build AI-first chip design workflows.
  • Experiment at the frontier with LLMs, diffusion models, RL (especially test-time scaling), and graph-based approaches for EDA.
  • Prototype new AI-driven features by tracking cutting-edge research and internal benchmarks (throughput, accuracy, PPA).

What We’d Like to See

  • Degree: PhD in Computer Science, EECS, Mathematics, or a closely related field, preferably with a specialization in Machine Learning, Deep Learning, or Artificial Intelligence.
  • Hands-On Experience:
    • Strong industry or research background building end-to-end ML pipelines, training models, and building multi-agent systems.
    • 2+ years of industry experience; 4+ years strongly preferred.
  • Core Skills:
    • Deep expertise in reinforcement learning, multi-agent systems, or large-scale model training, with a track record of shipping production systems, not just papers.
    • Multimodal retrieval-augmented generation (RAG) pipelines, graph-based indexing and retrieval, and NLP techniques for topic modeling and clustering.
    • Local LLM/VLM deployments (vLLM, SGLang, etc.)
    • Large-scale model training and serving (QLoRA, PagedAttention, ZeRO, CUDA, PyTorch parallelism)
    • End-to-end model training, especially owning RL post-training workflows
    • Multi-agent orchestration, context management, memory management, and prompt tuning
    • Ability to move fast, prototype, and scale research into production.
    • Obsession with pushing state-of-the-art performance in real-world constraints.
  • Systems Knowledge: Comfortable with cloud-native architectures and distributed systems.
  • Bonus:
    • Prior AI-for-chip-design experience (Synopsys, Cadence, NVIDIA, DeepMind, Etched, Groq, AMD, etc.)
    • Foundation in Electrical/Computer Engineering and chip-design or verification processes
    • Publications in top ML (NeurIPS, ICLR, ICML) or EDA (DAC, ICCAD, DVCon) venues
    • Founding engineer or early hire at a dev tool company or a fast-moving AI startup

What We Offer

  • Competitive salary and meaningful equity stake
  • Fast-paced startup with autonomy and visible impact
  • Cutting-edge AI-driven chip design challenges


Engineering

Palo Alto, CA
