Opportunities at Architect

Founding Member of Technical Staff - RL Post Training

About Architect


Architect is a frontier AI research and product lab for chip design. We build AI models and systems that can explore, design, optimize, and verify new hardware. Our goal is to reimagine chip design using AI, cut down ASIC design time and cost, and enable a new era of ultra-efficient, domain-specific chips powering the future of computation.

Born out of Stanford, our team blends researchers and engineers from Anthropic, Google DeepMind, NVIDIA, Meta SuperIntelligence Labs, Apple, and Intel. Backed by leading VCs and angels, including the Chief Scientist at Google, Stanford professors, and founders of chip companies, Architect currently operates in stealth, pushing the limits of AI4EDA and building the intelligence layer for the hardware revolution.


What You'll Do

As a Founding Member of the Technical Staff (RL) at Architect, you'll be at the forefront of post-training AI models for chip design tasks such as RTL code generation, verification, and architectural exploration.

  • Co-design and implement reinforcement learning environments and algorithms, train reward models, and run reward-signal experiments.
  • Work at the intersection of cutting-edge research and production engineering for chip design, implementing, scaling, and improving post-training techniques to enhance model capabilities and usability.
  • Design, build, and run robust, efficient pipelines for model fine-tuning and evaluation, ensuring that theoretical performance translates into production-ready implementations.
  • Own the end-to-end RL workflow in this hands-on, 0→1 role, from reward modeling and environment design to test-time optimization and scaling.
  • Collaborate with research teams to translate emerging techniques into production-ready implementations and debug complex issues in training pipelines and model behavior.

What We'd Like to See

Qualifications & Skills:

  • Degree: PhD in Computer Science, EECS, Mathematics, or a closely related field, preferably with a specialization in Machine Learning, Deep Learning, or Artificial Intelligence; or a BS/MS with a strong research engineering background.
  • RL & Post-Training Expertise: Deep expertise in reinforcement learning and post-training, with a proven track record of taking models from research to real-world deployment.
  • Model Training: Strong industry or research background building end-to-end ML pipelines. Experience with RL and fine-tuning of LLMs and code models for reasoning, tool use, and structured coding tasks.
  • Systems Engineering: Strong software engineering skills with experience building complex ML systems. Comfortable working with large-scale distributed systems, high-performance computing, and distributed training tooling (e.g., PyTorch, CUDA, QLoRA, ZeRO).
  • Engineering Rigor: Adept at analyzing and debugging model training processes. Capable of balancing research exploration with engineering rigor and operational reliability.
  • Execution: Fast-moving builder who can prototype, benchmark, and productionize training pipelines with tight feedback loops.

Bonus:

  • Experience on a post-training team at a frontier lab such as OpenAI, Anthropic, DeepMind, Mistral, MSL, or Cohere.
  • Foundation in Electrical/Computer Engineering and familiarity with chip design or verification processes.
  • Publications in top ML (NeurIPS, ICLR, ICML) or EDA (DAC, ICCAD, DVCon) venues.
  • Experience as a Founding ML Engineer/Researcher or early hire at an AI deeptech startup.

What We Offer

  • Competitive salary and meaningful equity stake
  • Fast-paced startup with autonomy and visible impact
  • Cutting-edge AI-driven chip design challenges

Engineering

Palo Alto, CA
