Data Engineer

About Sterling Brokers

Sterling Brokers is disrupting the Group Benefits and Retirement landscape in Canada through our innovative application, which integrates with the largest insurance carriers in Canada and with global HRIS platforms to administer a streamlined solution. We are the largest independently owned and operated Third Party Administrator (TPA) and Broker in Canada, and the 4th largest overall. Established in 2014, Sterling Brokers is growing rapidly and looking for talented individuals to join our team.



About the role

The Data Engineer is a software development role focused on data engineering, data pipelines, and business analytics, with opportunities to work on statistical analysis, machine learning, and data presentation/visualization projects. In this role, you will be responsible for ingesting data (either machine readable or via OCR-type technologies), then storing, analyzing, and reducing it to actionable information. The team currently comprises one other full-stack data engineer, a data engineer focused on presentation (Tableau/Power BI), and a program manager, so you will be involved in all aspects of our data platform.

 

Sterling Brokers' headquarters is in Toronto. The broader team is largely in Canada, with North American time zones comprising our primary working hours. You can live and work from anywhere in Canada, and while not required, many find it helpful to align their working hours with those of their immediate teams.


As we operate across Canada, this is a fully remote position.



Responsibilities will include:

  • Maintain, augment, and establish data pipelines from multiple sources into our Databricks data warehouse/data lake
  • Play a strong role in the design and architecture of the data infrastructure
  • Ensure performance, scalability, and reliability of our data environment
  • Interact with business and other development teams
  • Debug and fix issues across the entire application stack and underlying systems
  • Set technical standards, and collaborate on technical direction
  • Establish ownership of one or more areas of our data infrastructure, including documentation, operational monitoring, and servicing our customers
  • Help answer questions from the business, as well as formulate and pose your own questions, such as "how do we predict customers likely to churn," or "where do our best customers come from?"
  • Lead and teach others about good data model practices
  • Continuously learn new technologies


Requirements (what you need to be successful in the role):


  • 3+ years of data engineering experience
  • Strong written and verbal communication skills
  • Familiar with Apache Spark via Databricks, Amazon EMR, AWS Glue, or their equivalent services on Azure and GCP
  • Able to program in Python using Software Development best practices
  • Strong SQL skills: querying, creating views, and generating reusable data models from often messy sources
  • Solid understanding of data modeling, ETL/ELT processes, and data pipeline development, preferably with experience in dbt Cloud
  • Interest in data analysis, presentation, and visualization
  • Able to work independently and asynchronously
  • Ability to deal with ambiguous problems, manage priorities, and help set the pace for a team to execute successfully on a shared vision
  • Self-starter, passionate about data analysis using both traditional and advanced ML tools
  • Experience developing in Jupyter notebooks (or commercial equivalents such as Databricks or Mode)
  • Demonstrated experience integrating third-party services via API calls, such as AWS Textract
  • Statistics knowledge, experience with R
  • Experience with reporting tools and presentation layers (such as Power BI or Tableau)
  • Familiarity with tools such as AutoML
  • Experience with "real money," or mission critical applications
  • Comfortable with multidimensional data
  • Experience with data privacy, security, and anonymization techniques (up to and including differential privacy)
  • Experience with pipeline orchestrators (we use Databricks Workflows and Apache Airflow)


Why Work for Sterling?

  • Rapidly growing business nationally, with career progression opportunities
  • Small office culture amidst a larger organization
  • Competitive salaries and benefits



Technology Operations

Remote (Canada)
