Ascentt is transforming the future of manufacturing through advanced Data Analytics, AI/ML, and Generative AI solutions. We partner with global manufacturing enterprises to convert complex industrial data into actionable, real-time business insights. Our teams work on scalable, high-impact engineering challenges across cloud, data, and intelligent automation ecosystems. If you are passionate about innovating, solving complex problems, and building next-generation data platforms, Ascentt offers an exciting opportunity to create real-world impact at scale.
We are looking for a highly motivated Data Engineer to join our growing data team. In this role, you will build scalable data platforms, optimize large-scale data pipelines, and enable data-driven decision-making across the organization. You will collaborate closely with Data Scientists, Analysts, and business stakeholders to develop modern cloud-based data solutions using technologies such as Databricks, Snowflake, PySpark, SQL, and Python. If you enjoy solving complex data challenges and working in a fast-paced, innovative environment, we’d love to connect with you.
Key Responsibilities:
- Design, build, and maintain scalable ETL/ELT pipelines for processing large volumes of structured and unstructured data
- Develop high-performance data processing solutions using PySpark and distributed computing frameworks
- Build, optimize, and manage data platforms on Databricks and/or Snowflake
- Write clean, efficient, and production-ready SQL queries and Python code for data transformation, automation, and analytics
- Collaborate with cross-functional partners, including Data Analysts, Data Scientists, Product teams, and business stakeholders, to deliver data-driven solutions
- Ensure data quality, integrity, scalability, and reliability across enterprise data systems, in line with data governance standards
- Monitor, troubleshoot, and optimize existing pipelines, workflows, and database performance
- Implement best practices around coding standards, testing, CI/CD, version control, and technical documentation
Required Skills & Qualifications:
- 2–5 years of experience in Data Engineering or related roles
- Strong hands-on experience with Databricks and/or Snowflake
- Proficiency in SQL and Python programming
- Practical experience with PySpark and distributed data processing
- Solid understanding of Data Warehousing, ETL/ELT concepts, and Data Modeling
- Experience working with large-scale datasets in cloud-based environments
- Bachelor’s degree in Computer Science, Engineering, Information Systems, or a related technical field
Preferred Skills:
- Experience with cloud platforms such as AWS, Azure, or GCP
- Familiarity with orchestration and transformation tools such as Airflow, dbt, or Azure Data Factory (ADF)
- Knowledge of Git, CI/CD pipelines, and DevOps best practices
- Exposure to Delta Lake, Lakehouse architecture, Kafka, Spark Streaming, or real-time data processing
- Experience working in Agile/Scrum environments