QA Architect - Enterprise Agentic AI

Kore.ai is a pioneering force in enterprise AI transformation, empowering organizations through our comprehensive agentic AI platform. With innovative offerings across "AI for Service," "AI for Work," and "AI for Process," we're enabling more than 400 Global 2000 companies to fundamentally reimagine their operations, customer experiences, and employee productivity.

Our end-to-end platform enables enterprises to build, deploy, manage, monitor, and continuously improve agentic applications at scale. We automate more than 1 billion customer service interactions every year with voice and digital AI, and have transformed the experiences of tens of thousands of employees through AI-driven productivity and workflow automation.

Recognized as a leader by Gartner, Forrester, IDC, ISG, and Everest, Kore.ai has secured Series D funding of $150M, including strategic investment from NVIDIA to drive Enterprise AI innovation. Founded in 2014 and headquartered in Florida, we maintain a global presence with offices in India, the UK, Germany, Korea, and Japan.

You can find full press coverage at https://kore.com/press/

POSITION / TITLE: QA Architect - Enterprise Agentic AI


POSITION SUMMARY: We seek a visionary QA Architect to define and implement quality strategies for our transformative agentic AI platform. This is not a traditional QA role for conventional applications – we're building the foundational technology that enables enterprises to create agentic applications at scale, requiring a research-focused approach to quality that challenges established paradigms.



LOCATION: Hyderabad (Work from Office)


RESPONSIBILITIES 

  Platform Quality Architecture: Define revolutionary quality frameworks for testing the platform that enables building, deploying, and managing agentic applications

  AI-Powered Testing Innovation: Design and implement AI-driven test systems using agentic coding to autonomously identify edge cases and potential failure modes

  Unit Testing Excellence: Create sophisticated frameworks for comprehensive unit testing of platform components, ensuring code quality at all levels

  Scale Testing: Develop methodologies to validate platform performance and reliability at enterprise scale with thousands of agents running simultaneously

  Test Harness Architecture: Design comprehensive test environments that can validate the platform's capabilities across diverse agent creation scenarios

  Cross-Functional Collaboration: Partner closely with platform engineers, UX researchers, and product owners to ensure testability of all platform components

  Quality Strategy Evolution: Continuously evolve QA methodologies as platform capabilities advance, anticipating future challenges before they emerge

  Code Quality Assurance: Implement rigorous code testing practices that ensure the stability and security of the underlying platform

  Integration of Novel Metrics: Develop sophisticated measurements that capture the nuanced performance of the platform beyond traditional pass/fail criteria

 

EXPERIENCE REQUIRED 

  10-15 years of progressive experience in quality assurance, with demonstrated ability to think from first principles

 

MUST HAVE SKILLS    

 

  Experience testing platforms that enable AI/ML applications or agent-based systems

  Strong command of test automation fundamentals, including frameworks, methodologies, and standards

  Proficiency with modern test automation tools beyond Selenium WebDriver, such as Playwright, Cypress, Puppeteer, and k6

  Expertise in agent evaluation frameworks such as TruLens, DeepEval, Ragas, or LangSmith

  Advanced knowledge of unit testing frameworks, code coverage tools, and test-driven development

  Experience implementing automated code quality analysis in CI/CD pipelines

  Solid understanding of REST API automation, GraphQL, and web services testing

  Experience with Java, JavaScript, or Python and modern testing libraries

  Familiarity with agent frameworks such as LangChain, LangGraph, AutoGen, and CrewAI

  Understanding of model-based testing, stochastic output validation, and performance testing at scale

  

OTHER SKILLS WE'D APPRECIATE    

  Intellectual Curiosity: Insatiable desire to understand how things work and question established assumptions

  Research Orientation: Experience applying research findings to create novel testing frameworks for emerging technologies

  First-Principles Thinking: Capability to reduce complex problems to their fundamental elements and build solutions from the ground up

  Agentic Coding Expertise: Experience implementing AI-powered coding solutions for test automation and quality assurance

  Code Quality Focus: Passion for maintaining exceptional code quality through automated testing and analysis

  Scale Testing Mastery: Experience designing and implementing test frameworks that can validate performance at enterprise scale

  Advanced Agent Evaluation: Exposure to agent evaluation approaches beyond traditional testing methodologies

  Analytics Integration: Hands-on experience with post-deployment validation using analytics dashboards

 

EDUCATION QUALIFICATION    

 

  Bachelor’s in Engineering or Master’s in Computer Applications

Why Join Us?

At Kore.ai, you won't be maintaining quality for conventional software—you'll be defining what quality means for an entirely new category of platform technology that enables enterprise-scale agentic applications. Your work will directly influence how the world's leading organizations build, deploy, and trust AI systems, establishing standards that could transform the industry.

 

Join us in building not just a better platform, but the frameworks that ensure enterprise agentic applications deliver on their transformative promise safely, effectively, and responsibly at scale.



