About HerculesAI
HerculesAI helps finance and operations leaders solve problems that are too complex, large-scale, or time-consuming for human teams to manage alone. Its platform automates the validation and verification of data across millions of high-volume, rules-based transactions, improving billing accuracy, reducing costs, and accelerating cash flow. Built on a modular, multi-AI agent architecture, HerculesAI delivers industry-specific solutions for staffing, insurance, government, and financial services. Its accuracy and consistency enable enterprises to achieve levels of precision and speed that were previously out of reach.
Headquartered in the United States, HerculesAI also has offices in the United Kingdom, Armenia, Canada, and Portugal.
About the role
This role operates at the intersection of applied research and engineering: you will build crucial AI capabilities and own their robust implementation. You will have a direct impact on the models served in our products, and on the user experience and business value they drive.
What you'll do
Post-training, distillation and reinforcement learning
- Create training environments that elevate the quality ceiling of synthetic data, and provide high-quality reward signals for both off- and on-policy learning.
- Design model behaviors that optimize for interpretability and an outstanding user experience.
- Train agentic models that autonomously interact with complex structured and unstructured environments, write and prototype code, and navigate multimodal inputs.
- Utilize the best tool for the job – SFT over synthetic data, on-policy distillation, RL, or any combination of these – to drive down the cost of specialized intelligence.
- Own benchmarking pipelines – assemble high-quality datasets and measure the impact of different training parameters and methodologies.
Engineering Principles
- AI engineers are engineers – build production-ready code and own the end-to-end process of data, training, benchmarking, and implementation.
- Architect scalable inference strategies that maximize compute efficiency.
- Maintain solutions to ensure stable, reliable execution for customers.
- Collaborate closely with the engineering team to design key intersections of AI and product services, such as interfaces, inference tracking, checkpointing, data persistence, and caching.
- Contribute to engineering culture; mentor peers, evolve patterns, raise the bar on code quality, testing strategy, and documentation.
Tooling & Resources
- Work with bleeding-edge open-weights models and internal training frameworks.
- Stay at the forefront of open research and tooling – rapidly prototype and validate novel approaches.
- Contribute to internal tooling for inference, training, orchestration, and data generation.
Soft Skills & Mindset
- Communicate ideas and explain solutions – even to a non-technical audience.
- Autonomously build demonstrations and showcase work to the wider company.
- Break down ambiguous problems into a clear technical scope – think beyond immediate business needs and incorporate future scalability into each design.
- Proactively collaborate with team members and seek out all resources relevant to the task at hand.
Qualifications
Our ideal candidate has all of the following:
- Strong background in both ML and engineering
- Great communication skills and rapid understanding of problem descriptions
- High agency; willing to propose, prototype and own solutions
- Experience in post-training large models (SFT, distillation, RL)
- Experience deploying models at production scale
- Previous work with dockerized systems (Docker, Kubernetes, etc.)
- Previous contributions to open research or AI infrastructure
- Strong familiarity with latest AI research and open-source tooling
- Experience shipping complex software and owning production code
Nice to Have
- Experience performance-engineering inference architectures for maximum throughput (vLLM, SGLang, llama.cpp)
- Detailed knowledge of RL frameworks (verifiers, ART, prime-rl, trl)
- Published models or blog posts/publications
Success Indicators
- Benchmarks and user metrics: Model quality does not exist only in a lab – both benchmarks and real user metrics are critical for evaluating it.
- Reliability: Minimizing hallucinations, maximizing inference uptime and consistency, ensuring high accuracy in expected model outputs.
- Interpretability: We do not build black-box AI – all models must be able to communicate their reasoning, break down code execution, reference sources, and reach consensus decisions.
- Impact: Improving internal frameworks and tooling, elevating technical culture, educating teams and sharing learnings.
- Collaboration: Ownership of projects without micro-management, agency in seeking out and offering help wherever possible.