Senior Data Platform Engineer

Join the revolution in hospitality tech!


Liven is a leading global data, technology, and customer experience provider for the hospitality industry. From humble beginnings, we have grown to serve over 6,000 venues and millions of diners across Australia, the USA, and Southeast Asia, processing over 120 million transactions worth more than $3 billion (AUD) annually.


At Liven, our platform is built to help hospitality businesses save more and work smarter by seamlessly integrating every aspect of their operations — from ordering and payments to back-of-house management.

Driven by a deep passion for the hospitality industry, we continuously innovate to elevate the experience for both venues and their guests. Our solutions are powered by AI-enriched insights and automated workflows, enabling smarter decision-making and smoother operations at scale.

We’re proud to be an AI-first organisation. By automating repetitive tasks, we free up space for our teams — and our customers — to focus on what truly matters: solving complex problems, delighting guests, and driving meaningful growth.

Key Milestones:

  • Expansion: Acquired OrderUp, Abacus, Zeemart, Copper, and Nomnie, forming Asia Pacific’s largest end-to-end hospitality technology group.
  • Global Reach: Teams based across Melbourne, Brisbane, Sydney, Singapore, Bali, Jakarta, New York, and India.

If you're someone who thrives on creativity, bold thinking, and using technology to make things better, faster, and smarter — you’ll feel right at home here.



About the role

As a Senior Data Platform Engineer, you will be the technical owner of Liven’s data platform infrastructure—driving automation, scalability, and resilience across our data workflows. Sitting at the intersection of DevOps and Data Engineering, this role is critical in enabling our teams to ship faster, fail less, and derive more value from our data ecosystem.

You’ll work closely with data engineers, analysts, product managers, and software engineers to build and operate a modern, secure, and scalable data platform that supports everything from machine learning pipelines to real-time analytics.

What you'll do

  • Own and operate the end-to-end data infrastructure, ensuring performance, reliability, and scalability.
  • Design and implement CI/CD pipelines specifically for data workflows and tooling.
  • Deploy and manage tools like Airbyte, Prefect, and Superset using Docker and Kubernetes (a short workflow sketch follows this list).
  • Set up and maintain monitoring, secrets management, and alerting systems to ensure platform health and security.
  • Apply GitOps practices or tools like Argo Workflows for streamlined infrastructure deployments.
  • Manage and scale Kafka, Spark, or DuckDB clusters to support real-time and batch data workloads.
  • Explore and maintain self-hosted data tooling (e.g., dbt Core rather than dbt Cloud), ensuring smooth integration and performance.
  • Use Infrastructure-as-Code tools like Terraform or Helm to automate provisioning and configuration.
  • Administer observability stacks such as Grafana and Prometheus for infrastructure visibility.
  • Implement secure access control, role-based permissions, and ensure compliance with GDPR, HIPAA, and internal data governance standards.
  • Collaborate across teams to support data engineers, analysts, and developers with reliable infrastructure and workflow tooling.
  • Steer clear of proprietary infrastructure platforms like AWS Glue or Azure Synapse (we’re staying open-source/cloud-native for now).
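For a flavour of the workflow orchestration mentioned above, here is a minimal sketch of a Prefect 2.x flow in Python. The pipeline name, steps, and retry settings are illustrative assumptions only, not a description of Liven’s actual pipelines:

    # Minimal Prefect 2.x sketch; step names and data are illustrative only.
    from prefect import flow, task

    @task(retries=3, retry_delay_seconds=60)
    def extract_orders() -> list[dict]:
        # Stand-in for an Airbyte sync or API extract step.
        return [{"venue_id": 1, "total_aud": 42.0}]

    @task
    def load_to_warehouse(rows: list[dict]) -> int:
        # Stand-in for a warehouse load (e.g. a dbt run or bulk copy).
        return len(rows)

    @flow(log_prints=True)
    def nightly_orders_pipeline():
        rows = extract_orders()
        print(f"Loaded {load_to_warehouse(rows)} rows")

    if __name__ == "__main__":
        nightly_orders_pipeline()

In practice a flow like this would be containerised and scheduled on Kubernetes, with CI/CD promoting changes through environments.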

Qualifications

  • 5–8 years of experience in DataOps, DevOps, or Platform Engineering roles.
  • Proficiency with modern data stack components (e.g., Airflow, dbt, Kafka, Databricks, Redshift).
  • Solid understanding of cloud platforms (AWS or GCP).
  • Strong communication skills to collaborate across product, data science, and engineering teams.
  • Bias for ownership, automation, and proactive resolution.

Good to Have

  • Experience with Infrastructure-as-Code tools like Terraform or Helm for managing Kubernetes and cloud resources.
  • Familiarity with administering Grafana, Prometheus, or similar observability stacks (a small instrumentation sketch follows this list).
  • Exposure to GitOps methodologies and tools like Argo CD or Flux.
  • Hands-on experience with self-hosted or hybrid setups of tools such as dbt (dbt Core alongside or instead of dbt Cloud).
  • Understanding of auto-scaling strategies for distributed systems (Kafka, Spark, DuckDB).
  • Experience contributing to platform or DevOps initiatives in a data-heavy environment.
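To illustrate the application side of the Grafana/Prometheus stack mentioned above, here is a minimal sketch that exposes pipeline metrics with the Python prometheus_client library. The metric names, labels, and port are assumptions for illustration, not an existing convention at Liven:

    # Minimal prometheus_client sketch; metric names, labels, and port are assumed.
    import random
    import time

    from prometheus_client import Counter, Histogram, start_http_server

    ROWS_LOADED = Counter("pipeline_rows_loaded_total", "Rows loaded per run", ["pipeline"])
    RUN_SECONDS = Histogram("pipeline_run_seconds", "Pipeline run duration in seconds", ["pipeline"])

    def run_pipeline(name: str) -> None:
        with RUN_SECONDS.labels(pipeline=name).time():
            time.sleep(random.uniform(0.1, 0.5))  # stand-in for real pipeline work
            ROWS_LOADED.labels(pipeline=name).inc(100)

    if __name__ == "__main__":
        start_http_server(9108)  # Prometheus scrapes /metrics on this port
        while True:
            run_pipeline("nightly_orders")
            time.sleep(30)

Metrics exposed this way can be scraped by Prometheus and charted in Grafana dashboards alongside infrastructure-level metrics.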


Department: Engineering

Locations: Jakarta, Indonesia · Chennai, India · Remote (India)
