LLMOps Platform Engineer

SumerSports is a leading football intelligence technology company that specializes in providing an innovative suite of products for football fans and NFL clubs. We are a collection of executives, engineers, data scientists, and visionaries from NFL clubs, technology startups, finance, and academia.

Our data-driven platform empowers teams with insights and tools to make informed decisions within salary cap constraints. The platform also serves the NCAA, offering insights around the transfer portal and more.

What sets us apart is our unique blend of big tech talent, data scientists, and former NFL personnel, who have a combined 600+ years of NFL experience. Our domain knowledge is augmented by AI and machine learning technologies to create a unique view into many aspects of football.

As an ML/AI Engineer on the LLMOps Platform team, you’ll build the core infrastructure that powers our AI-first product organization.

You’ll design, implement, and scale the systems that make it possible for product pods to develop, evaluate, and safely deploy LLM-based and multimodal applications — from RAG pipelines and model gateways to eval frameworks and cost-optimized serving.

You’ll work closely with AI app engineers, full-stack engineers, and the Deep Learning Research group to ensure every AI system we ship is fast, grounded, and reliable.


Responsibilities:

  • Build and operate the LLM Platform:
    • Develop model routing, prompt registry, and orchestration services for multi-model workflows.
    • Integrate external LLM APIs (OpenAI, Anthropic, Mistral) and internal fine-tuned models.
  • Enable fast, safe experimentation:
    • Implement automated evaluation pipelines (offline + online) with golden sets, rubrics, and regression detection.
    • Support CI/CD for prompt and model changes, with rollback and approval gates.
  • Collaborate cross-functionally:
    • Partner with product pods to instrument RAG pipelines and prompt versioning.
    • Work with deep learning and data teams to integrate structured and unstructured retrieval into LLM workflows.
  • Optimize performance and cost:
    • Profile latency, token usage, and caching strategies.
    • Build observability and monitoring for LLM calls, embeddings, and agent behaviors.
  • Ensure reliability and safety:
    • Implement guardrails (toxicity, PII filters, jailbreak detection).
    • Maintain policy enforcement and audit logging for AI usage.

Qualifications:

  • 5+ years of experience in applied ML, NLP, or ML infrastructure engineering.
  • Strong coding skills in Python and experience with frameworks like LangChain, LlamaIndex, or Haystack.
  • Solid understanding of retrieval-augmented generation (RAG), embeddings, vector databases, and evaluation methodologies.
  • Experience deploying models or AI systems in production environments (AWS, GCP, or Azure).
  • Familiarity with prompt management, LLM observability, and CI/CD automation for AI workflows.

Nice to Have:

  • Experience with model serving (vLLM, Triton, Ray Serve, KServe).
  • Understanding of LLM evaluation frameworks (OpenAI Evals, Promptfoo, Arize Phoenix, TruLens).
  • Background in sports analytics, data engineering, or multimodal (video/text) systems.
  • Exposure to Responsible AI practices (guardrails, safety evals, fairness testing).

Benefits:

  • Competitive salary and bonus plan
  • Comprehensive health insurance plan
  • Retirement savings plan (401k) with company match
  • Remote working environment
  • A flexible, unlimited time off policy
  • Generous paid holiday schedule - 13 in total, including the Monday after the Super Bowl

Engineering

Remote (United States)
