Senior Data Engineer

Join the revolution in hospitality tech!

Liven is a leading global data, technology, and customer experience provider for the hospitality industry. From humble beginnings, we have grown to serve over 6,000 venues and millions of diners across Australia, the USA, and Southeast Asia, processing over 120 million transactions worth more than $3 billion (AUD) annually.

Our platform is designed to help hospitality businesses save more and work smarter by integrating all operational aspects—from ordering to back-of-house operations to payments. Our passion for hospitality drives us to continually innovate and enhance the industry with AI-enriched data insights and automated process management.

Key Milestones:

  • Expansion: Acquired OrderUp, Abacus, Zeemart, and Copper, forming Asia Pacific’s largest end-to-end group in hospitality technology.

  • Global Reach: Offices across Melbourne, Brisbane, Sydney, Singapore, Bali, Jakarta, New York, and India.

Join us in revolutionizing hospitality with best-in-class software, services, and hardware to maximize profitability and operational efficiency.


About the role

Are you a data engineering expert with a passion for designing high-performance, scalable data systems? Liven is looking for a Senior Data Engineer to join our team and take the lead in managing and optimizing our data infrastructure within Databricks and AWS environments. This role is perfect for someone who thrives in dynamic, fast-paced settings, loves solving complex challenges, and is eager to make a real impact.

What you'll do

  • Lead the management of our Databricks platform, including cluster setup, Spark job execution, and performance optimization. Expertise in Databricks is essential.
  • Architect and automate ETL/ELT pipelines using best practices in CI/CD, ensuring they are efficient, scalable, and capable of handling both real-time and batch data processing.
  • Oversee the use of AWS services such as S3, EC2, RDS, Redshift, and Glue to build and maintain a secure, scalable, and cost-effective data infrastructure.
  • Implement Lakehouse and Medallion architectures (Bronze, Silver, Gold layers) to optimize data storage, retrieval, and analytics workflows.
  • Establish data governance frameworks, ensuring data quality, security, and compliance across all systems.
  • Partner with data science and analytics teams to design data models and deliver clean, actionable datasets that drive business insights. Experience with BI tools like Looker is highly desirable.
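For candidates less familiar with the Medallion pattern named above, here is a minimal, hedged sketch of the Bronze → Silver → Gold flow using plain Python dicts in place of Spark DataFrames (on Databricks this would typically be implemented with Spark and Delta tables). The record fields, venue names, and aggregation are hypothetical examples, not Liven's actual schema.

```python
# Bronze: raw events landed as-is; may contain malformed or voided rows.
bronze = [
    {"venue": "cafe_a", "amount": "12.50", "status": "ok"},
    {"venue": "cafe_b", "amount": "oops", "status": "ok"},    # malformed amount
    {"venue": "cafe_a", "amount": "7.00", "status": "void"},  # voided sale
]

def to_silver(rows):
    """Silver: validate and type-cast, dropping rows that fail checks."""
    silver = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # a real pipeline would quarantine these for review
        if row["status"] != "ok":
            continue
        silver.append({"venue": row["venue"], "amount": amount})
    return silver

def to_gold(rows):
    """Gold: business-level aggregate (revenue per venue)."""
    totals = {}
    for row in rows:
        totals[row["venue"]] = totals.get(row["venue"], 0.0) + row["amount"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'cafe_a': 12.5}
```

The key idea each layer illustrates: Bronze preserves raw inputs, Silver enforces schema and quality rules, and Gold serves curated, business-ready aggregates to analytics and BI tools.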

Qualifications

  • 5+ years in data engineering with demonstrated expertise in Databricks (mandatory) and AWS services.
  • Advanced skills in Python, Scala, SQL, and big data tools like Spark and Kafka. Strong understanding of CI/CD practices for data pipelines.
  • Proven experience with Lakehouse and Medallion architectures, supporting machine learning, analytics, and reporting needs.
  • Skilled in transforming raw data into actionable datasets, with experience using BI tools such as Looker, PowerBI, or Tableau.
  • Strong leadership skills, with the ability to lead projects from concept to delivery, mentor junior engineers, and collaborate across teams in an agile environment.

Good to Have

  • Familiarity with data specific to the food and beverage or hospitality industries, including experience integrating and optimizing POS and transactional data for analytics and reporting.

  • Basic understanding of incorporating data pipelines into machine learning workflows.

  • Familiarity with infrastructure as code tools like Terraform for managing cloud resources.

Engineering

Remote (India)

