AI Engineer, LLM Systems & Agentic Workflows

About Command|Link


Command|Link is a global SaaS platform providing network, voice services, and IT security solutions, helping corporations consolidate their core infrastructure with a single vendor and layering on a proprietary single-pane-of-glass platform. Command|Link has revolutionized the IT industry by tackling the problems our competitors create. In recognition of our innovation and dedication, Command|Link has been named SD-WAN Product of the Year, ITSM Visionary Spotlight, UCaaS Product of the Year, NaaS Product of the Year, Supplier of the Year, and an AT&T Strategic Growth Partner. Command|Link has built the only IT platform for scale that solves ISP vendor sprawl and IT headaches. We make it easy for our customers to get more done, maximize uptime, and improve the bottom line.


Learn more about us here!


This is a 100% remote position

About your new role:

As an AI Engineer focused on LLM Systems, your primary mandate is to design, build, and operate the AI layer that powers intelligent automation across the CommandLink platform. You'll be working at the engineering layer of agentic AI: building durable, production-grade LLM workflows on top of Temporal, implementing security and policy controls around LLM execution, and solving hard problems around prompt injection, output trust, and runtime governance in domain-specific contexts.


You'll work closely with Engineering and Product leads to turn context-aware insights, triage, investigations, and remediation into reliable, observable, and policy-compliant agentic workflows. That means designing for failure, latency, and adversarial inputs from day one, not retrofitting safety controls after the fact. The space is moving fast, the problems are genuinely unsolved, and we're looking for someone who has strong opinions about how to build AI systems that are trustworthy in production.


Key Responsibilities:

  • Agentic workflow engineering: design and build multi-step LLM workflows using Temporal as the durable orchestration backbone, handling retries, state, parallelism, human-in-the-loop steps, and long-running agent execution
  • Domain-specific automation: work with subject matter experts to identify, scope, and implement AI-driven automation for specific business and operational domains; own the full delivery from prototype to production
  • LLM security and policy enforcement: implement runtime policy controls around LLM execution, including prompt injection mitigation, output validation, privilege separation (dual-LLM / quarantined execution patterns), and integration with policy engines
  • Parallel and live evaluation: build evaluation frameworks to assess LLM output quality in parallel with production traffic; implement continuous evals, regression detection, and automated quality gates
  • Prompt injection defense: apply and adapt state-of-the-art design patterns including the Dual LLM, Plan-Then-Execute, and Code-Then-Execute patterns to harden agent pipelines against adversarial inputs
  • Policy engine integration: integrate tools such as Sequrity.ai to define, enforce, and audit natural-language security policies over LLM tool use and execution paths
  • Observability and auditability: instrument AI workflows with full event history, structured logging of prompts and completions, cost tracking, and latency profiling, making the behavior of AI systems traceable and debuggable
  • LLM steering and control: implement output steering strategies, structured generation, constrained decoding, and fallback routing to ensure models behave within defined operational envelopes
  • Collaborate on architecture: work across the engineering team to define standards for how AI capabilities are integrated into the product, setting patterns others will follow


What You'll Need for Success

Essential:

  • Experience working with large, complex datasets
  • 2+ years building production LLM-powered applications beyond RAG prototypes; real systems handling real failure modes
  • Hands-on experience with Temporal (or equivalent durable execution platforms such as Cadence or Conductor) for orchestrating multi-step, long-running AI workflows
  • Deep understanding of prompt injection attack vectors, mitigation strategies, and the trade-offs between defense patterns (Dual LLM, CaMeL / Code-Then-Execute, Action-Selector, context minimization)
  • Experience implementing policy controls and guardrails around LLM execution: RBAC/PBAC for agents, output filtering, semantic validation, and tool-use restrictions
  • Practical experience building parallel evaluation pipelines for LLM outputs: live evals, shadow scoring, regression suites, and automated quality gates
  • Strong software engineering fundamentals. You write maintainable, testable code; experience in Python and/or Go preferred
  • Familiarity with LLM APIs and inference providers (OpenAI, Anthropic, Mistral, or open-weight models via vLLM / Ollama)
  • Understanding of agentic architecture patterns: tool use, multi-agent delegation, structured outputs, memory and context management
  • Experience integrating LLM systems with external tools and APIs in a secure, auditable way
  • Experience with LangChain or other agentic frameworks


Nice to Have:

  • Experience with dedicated policy engines for LLM security such as Sequrity.ai, LLM Guard, or equivalent TOML/rules-based policy frameworks
  • Familiarity with OWASP LLM Top 10 and NIST AI RMF compliance requirements
  • Experience with structured generation frameworks (Outlines, Instructor, Guidance) for constrained LLM outputs
  • Knowledge of chaos and adversarial testing for AI systems; red-teaming, jailbreak evaluation, and automated adversarial prompt suites
  • Experience with open-weight model deployment (vLLM, TGI, Ollama) and inference optimization
  • Familiarity with MCP (Model Context Protocol) and other protocols for standardized agent tool integration
  • Background in security engineering, particularly application-layer threat modeling, and/or networking and device management
  • Willingness to take on additional responsibilities and projects as needed to support the success of the team and organization


Why you'll love life at Command|Link

Join us at CommandLink, where you'll have the opportunity to shape the future of business communication. We value an innovative spirit and seek individuals ready to bring their unique vision and expertise to a team that values bold ideas and strategic thinking. Are you ready to make an impact? Apply now and be the architect of your career as well as our clients' success.

  • Room to grow at a high-growth company
  • An environment that celebrates ideas and innovation
  • Your work will have a tangible impact
  • Flexible time off  
  • Fun events at cool locations
  • Employee referral bonuses to encourage the addition of great new people to the team


At CommandLink, we’re committed to creating a fair, consistent, and efficient hiring experience. As part of our process, we use AI-assisted tools to help review and analyze applications. These tools support our recruiting team by identifying qualifications and experience that align with the requirements of each role.


AI tools are used only to assist in the evaluation process — they do not make final hiring decisions. Every application is reviewed by a member of our recruiting or hiring team before any decisions are made.

Department: Software Engineering

Locations: Argentina, Brazil, Chile, Colombia, Costa Rica, India, Mexico, Philippines, United Kingdom
