Junior AI/ML Engineer – LLM-Based Content Moderation

About TrustLab

Online misinformation, hate speech, child endangerment, and extreme violence are some of the world’s most critical and complex problems. TrustLab is a fast-growing, VC-backed startup founded by ex-Google, TikTok, and Reddit executives who are determined to use software engineering, ML, and data science to tackle these challenges and make the internet healthier and safer for everyone. If you’re interested in working with the world’s largest social media companies and online platforms, and in building technologies to mitigate these issues, you’ve come to the right place.

About the role

We are seeking an AI/ML Engineer with expertise in Large Language Models (LLMs) to enhance the precision and recall of classification systems that detect content abuse, including hate speech, sexual content, misinformation, and other policy-violating material. You will work with cutting-edge AI models to refine detection mechanisms, improve accuracy, and minimize false positives and negatives.

Responsibilities

  • Design, develop, and optimize AI models for content moderation, focusing on precision and recall improvements.
  • Fine-tune LLMs for classification tasks related to abuse detection, leveraging supervised and reinforcement learning techniques.
  • Develop scalable pipelines for dataset collection, annotation, and training with diverse and representative content samples.
  • Implement adversarial testing and red-teaming approaches to identify model vulnerabilities and biases.
  • Optimize model performance through advanced techniques such as active learning, self-supervision, and domain adaptation.
  • Deploy and monitor content moderation models in production, iterating based on real-world performance metrics and feedback loops.
  • Stay up-to-date with advancements in NLP, LLM architectures, and AI safety to ensure best-in-class content moderation capabilities.
  • Collaborate with policy, trust & safety, and engineering teams to align AI models with customer needs.

Minimum qualifications

  • Bachelor’s or Master’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field.
  • 1+ years of experience in AI/ML, with a focus on NLP, deep learning, and LLMs.
  • Proficiency in Python and deep learning frameworks such as TensorFlow, PyTorch, or JAX.
  • Experience in fine-tuning and deploying transformer-based models like GPT, BERT, T5, or similar.
  • Familiarity with evaluation metrics for classification tasks (e.g., F1-score, precision-recall curves) and best practices for handling imbalanced datasets.

Preferred skills

  • Experience working with large-scale, real-world content moderation datasets.
  • Knowledge of regulatory frameworks related to content moderation (e.g., GDPR, DSA, Section 230).
  • Familiarity with knowledge distillation and model compression techniques for efficient deployment.
  • Experience with reinforcement learning (e.g., RLHF) for AI safety applications.

Opportunities and perks

  • Work on cutting-edge AI technologies shaping the future of online safety.
  • Collaborate with a multidisciplinary team tackling some of the most challenging problems in content moderation.
  • Competitive compensation, comprehensive benefits, and opportunities for professional growth.

The pay range for this role is:

$100,000 – $130,000 USD per year (Palo Alto)

Department: Engineering
Location: Palo Alto, CA
