Torch is a passionate team of innovators, engineers, and creators dedicated to empowering individuals and businesses with cutting-edge technology solutions. The company was born from the belief that powerful tools should be accessible, intuitive, and scalable for everyone, from startups to enterprises.
About The Role
We are seeking a highly skilled and motivated Data Engineer to join our Operations/Processing department. This is a full-time, hourly position in which you will play a pivotal role in designing, building, and maintaining the data pipelines and systems that support efficient processing and analysis of critical operational data. As a Data Engineer, you will collaborate with cross-functional teams to ensure data integrity, scalability, and accessibility across the organization.
What You'll Do
- Design, develop, and maintain scalable data pipelines and ETL processes to support operational workflows.
- Collaborate with cross-functional teams to collect, clean, and structure raw data from various sources to ensure data accuracy and consistency.
- Optimize data systems and architecture for performance and scalability to handle large datasets efficiently.
- Monitor and troubleshoot data processing issues to ensure uninterrupted operational workflows.
- Implement and maintain security best practices to ensure data privacy and compliance with company and regulatory requirements.
- Document systems, processes, and data flow to ensure knowledge sharing and alignment across teams.
- Stay updated on industry best practices, emerging tools, and technologies to continuously drive improvements in data engineering processes.
Qualifications
- Proficiency in English with excellent communication skills to collaborate effectively with team members.
- Experience with programming languages commonly used in data engineering, such as Python, Java, or Scala.
- Strong understanding of relational (SQL) and non-relational (NoSQL) databases.
- Hands-on experience with cloud platforms such as AWS, Azure, or Google Cloud, and related data tools.
- Proven ability to design and implement scalable ETL frameworks and data integration platforms.
- Familiarity with data pipeline and workflow management tools, such as Apache Airflow or similar frameworks.
- Strong problem-solving skills and a detail-oriented mindset to ensure data accuracy and reliability.
- Bachelor’s degree in Computer Science, Data Science, or a related field, or equivalent practical experience.
- Prior experience working in operations/processing environments is a plus.