About Camlin Group:
Camlin is a global technology leader with a vision of bringing revolutionary products to life for a wide range of industries, including power and rail, alongside interests in a number of R&D projects across a variety of scientific sectors.
At Camlin we believe in high-quality engineering and design, allowing us to develop market-leading products and services. In short, we love creating value for our customers by solving difficult problems. Today, Camlin's operations span more than 20 countries across the globe.
As a Data Engineer at Camlin, you will be instrumental in shaping the future of energy through data innovation. Collaborating closely with our dynamic team of data engineers, machine learning experts, and data scientists, you'll play a pivotal role in identifying the data-related needs of our organization and contributing to the execution of strategic plans to fulfil them.
Your responsibilities will span a diverse range of tasks, from architecting software solutions to designing data models and implementing cutting-edge data science and machine learning algorithms. You'll be dedicated to enhancing our tools and applications by proactively addressing bugs, performing code refactoring, and ensuring top-notch quality.
We are looking for candidates at a range of experience levels, from mid-level to senior positions.
Key Responsibilities:
- Collaborate with cross-functional teams to identify and address data-related requirements, contributing to strategic planning and execution.
- Architect and develop data solutions, spanning from backend infrastructure to data model design and the implementation of data science and machine learning algorithms.
- Lead development activities, mentor team members, and share your knowledge in a culture of continuous improvement.
- Continuously enhance the quality of our tools and applications through bug fixes and code refactoring.
- Leverage the latest data technologies and programming languages, including Python, Scala, and Java, along with systems like Spark, Kafka, and Airflow, within cloud services such as AWS.
- Stay abreast of emerging technologies through research and testing, unlocking productivity improvements and ensuring we continue to deliver market-leading services to our customers.
What You'll Need:
- Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
- Proficiency in programming languages such as Python, Scala, or Java.
- SQL knowledge for database querying and management.
- Strong knowledge of relational and NoSQL databases (e.g., PostgreSQL, MongoDB) and data modeling principles.
- Proven ability to design, build, and maintain scalable data pipelines and workflows using tools like Apache Airflow or similar.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
Nice to have:
- Hands-on experience with data warehouse and lakehouse architectures (e.g., Databricks, Snowflake, or similar).
- Experience with big data frameworks (e.g., Apache Spark, Hadoop) and cloud platforms (e.g., AWS, Azure, or GCP).
Benefits:
- 25 Days of Annual Leave
- FitPass membership
- Private Health Insurance
- Internal Reward & Recognition Tool (Kudos)