Data Engineer

At Advisor360°, we build technology that transforms how wealth management firms operate, scale, and serve their clients. As a leading SaaS platform in the fintech space, we’re trusted by some of the largest independent broker-dealers and RIAs to power the full advisor and client experience—from portfolio construction and reporting to compliance and client engagement.

What sets us apart? It's not just the tech (though it's best-in-class). It's the people, the purpose, and the passion behind everything we do. We’re a team of builders, thinkers, and doers who believe that great companies are defined by the stories they tell and the experiences they create—internally and externally. We bring deep industry expertise, a collaborative spirit, and a commitment to innovation as we reshape what’s possible in wealth management. 

As we grow, we’re looking for teammates who are ready to roll up their sleeves, think big, and help elevate our brand in a way that reflects the bold ambitions we have for our company and the clients we serve. 

Join us, and be part of a company that's not only moving fast—but making it count. 

We hire people with all kinds of awesome experiences, backgrounds, and perspectives. We like it that way. So even if you don’t meet every single requirement, please consider applying if you like what you see.
Here’s What You’ll Do:  

  • Act as a senior individual contributor on the data engineering team, owning the design and delivery of mid- to large-scale data initiatives from discovery through production.
  • Partner closely with product managers, analytics, and downstream consumers to translate business requirements into well-defined data solutions, including ingestion, transformation, and serving layers.
  • Plan, write, and groom Jira stories with clear problem statements, acceptance criteria, dependencies, and estimates; actively participate in sprint planning and backlog refinement.
  • Design and build reliable, scalable, cloud-native data pipelines using Snowflake, Azure services, Python, SQL, and modern orchestration patterns.
  • Develop and maintain ELT pipelines and data models using dbt, following best practices for modularity, documentation, testing, and performance.
  • Apply strong data modeling principles (dimensional, normalized, and analytical models) to support reporting, analytics, and downstream applications.
  • Ensure high standards of data quality, observability, and reliability by implementing automated data tests, monitoring, and alerting.
  • Contribute to and evolve coding standards, CI/CD pipelines, and data engineering best practices, identifying opportunities for simplification, automation, and improvement.
  • Leverage AI-assisted development tools (e.g., Cursor, Augment, Claude, LLMs) to accelerate development, improve code quality, and assist with documentation and testing.
  • Share knowledge through design reviews, documentation, and mentoring, helping raise the overall data engineering maturity of the team.


What You Bring to the Table:  

  • 5+ years of experience in data engineering, analytics engineering, or closely related software engineering roles.
  • Proven ability to analyze ambiguous problems, ask the right questions, and decompose work into well-scoped, deliverable stories.
  • Strong hands-on experience building and maintaining production-grade data pipelines and analytical data models.
  • Solid understanding of data engineering SDLC, including development, testing, deployment, and ongoing support.
  • Experience participating in or leading technical design discussions and making pragmatic trade-off decisions.
  • A quality-first mindset with experience implementing data validation, testing frameworks, and QA automation for data.
  • Ability to work independently while also collaborating effectively within a cross-functional, Agile team.
  • Willingness to learn and grow within the Wealth Management domain (prior experience is a plus, but not required).


Preferred Strengths:

  • Strong experience with SQL and relational databases, with a focus on analytical workloads and performance tuning (Snowflake preferred).
  • Proficiency with Python for data engineering use cases (data pipelines, validation, automation).
  • Hands-on experience with Azure data services such as Azure Data Factory, Azure SQL, and Azure Storage.
  • Experience working with modern data tooling, such as dbt, data catalogs, and data governance frameworks.
  • Familiarity with CI/CD practices for data platforms using GitHub, YAML-based pipelines, and automated testing.
  • Experience with containerization (Docker) and an understanding of cloud-native deployment patterns.
  • Exposure to streaming or incremental data ingestion patterns (e.g., CDC, Kafka).
  • Comfortable using Agile tooling (Jira) to manage work, communicate progress, and collaborate across teams.
  • Experience with Power BI or similar BI and reporting tools.


Why You’ll Love Working Here: 

It’s not just about work—it’s about building a career and enjoying the ride! Here’s what you can expect:

We believe in recognizing and rewarding performance. Our compensation package includes competitive base salaries, annual performance-based bonuses, and the chance to share in the equity value you and your colleagues create during your time with the company. We offer comprehensive health benefits, along with dental, life, and disability insurance. We also trust our employees to manage their time effectively, which is why we offer an unlimited paid time off program to help you perform at your best every day.

Join us on this journey. Advisor360° is an equal opportunity employer committed to a diverse workforce. We believe diversity drives innovation and are therefore building a company where people of all backgrounds are truly welcome and included. Everyone is encouraged to bring their unique, authentic selves to work each and every day. The way we see it, we are here to learn from each other.    

Product/Engineering

Bangalore, India
