About Quartermaster
At Quartermaster AI, we believe the ocean should be a safe and sustainably managed resource for all. By leveraging cutting-edge AI and robotics, we unlock capabilities that were, until recently, impossible. Our distributed open-ocean systems enable every vessel to sense, compute, and communicate, enhancing maritime domain awareness for those who need it most.
Equal Employment Opportunity (EEO) Statement
Quartermaster AI is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We do not discriminate based on race, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, veteran status, genetic information, or any other protected status under applicable federal, state, or local laws.
We encourage individuals of all backgrounds to apply and join us in shaping the future of defense technology. If you require accommodations during the application or interview process, please let us know.
We are looking for an Artificial Intelligence Engineer with an emphasis on Multi-Modal Systems to design, develop, and deploy machine learning systems that utilize diverse sensor data for real-time maritime intelligence. You’ll work at the intersection of vision, RF, acoustic, and natural language signals, building models that fuse these modalities to provide a robust and contextual understanding of vessel activity. This role is ideal for someone who thrives on ambiguity, bridges theory and implementation, and is excited by the challenge of building AI systems that work in dynamic, constrained, and remote environments.
Key Responsibilities:
- Research, design, and implement advanced machine learning models that combine vision, RF, and acoustic signals for detection, classification, and tracking tasks
- Architect sensor fusion pipelines that support robust, redundant, and context-aware perception in dynamic environments
- Collaborate closely with domain experts and systems engineers to translate raw sensor data into actionable model inputs
- Design and oversee data pipelines for multi-modal learning, including data alignment, augmentation, and pre-processing across modalities
- Optimize models and inference workflows for low-latency execution on embedded and edge compute platforms
- Lead performance analysis across individual and fused modalities, and drive strategies for improving robustness and generalization
- Prototype and operationalize novel research in sensor fusion, uncertainty modeling, and representation learning
- Contribute to long-term architectural decisions around multi-modal AI infrastructure, tooling, and evaluation frameworks
- Document model design, training methodology, and validation processes with rigor and clarity
Qualifications (Preferred):
- PhD or Master’s degree in Machine Learning, Computer Vision, Signal Processing, or a closely related field
- 7+ years of experience building and deploying machine learning systems, with a focus on multi-modal or sensor fusion applications
- Proficiency in Python and deep learning frameworks such as PyTorch or TensorFlow
- Demonstrated experience working with camera imagery, RF signals, and/or acoustic data
- Deep understanding of signal alignment, temporal/spatial synchronization, and feature extraction across diverse data types
- Proven ability to bridge research and application, delivering high-performance models in production contexts
- Excellent communication and collaboration skills in cross-functional, interdisciplinary teams
- Experience in maritime, aerospace, or other sensor-rich environments is a significant plus
Work Environment:
- This is a remote position with collaboration via online tools.
- Flexible working hours, with occasional deadlines requiring high availability.
- Opportunity to work on innovative projects with a global impact.
Benefits:
- Competitive salary
- Flexible work hours and the option for remote work
- Opportunities for professional development and continued education
Engineering
Remote (United States)