Data Engineer - Betnacional, Hybrid & Remote

Flutter International
Full-time
Remote friendly (Cluj-Napoca, Romania)
Data Analytics

About Betfair Romania Development:

Betfair Romania Development is the largest technology hub of Flutter Entertainment, with over 2,000 people powering the world’s leading sports betting and iGaming brands. From our office in Cluj-Napoca, we deliver exciting, immersive, and safe experiences to over 18 million customers worldwide. Driven by relentless innovation and a commitment to excellence, we operate our own unbeatable portfolio of diverse proprietary brands such as FanDuel, PokerStars, SportsBet, Betfair, Paddy Power, and Sky Betting & Gaming.

Our Values:

The values we share at Betfair Romania Development define what makes us unique as a team. They empower us by giving meaning to our contributions, and they ensure that we consistently strive for excellence in everything we do. We are looking for passionate individuals who align with our values and are committed to making a difference.

Win together | Raise the bar | Got your back | Own it | Positive impact

About Betnacional:

Founded in 2021, Betnacional was among the first betting platforms to secure a definitive operating license in Brazil. Known for its speed, innovation, and strong connection with Brazilian players, Betnacional combines cutting-edge technology with a deep understanding of the national market. In 2025, Betnacional and Betfair Brazil together established Flutter International’s fifth region: Flutter Brazil.

Role Overview:

As a Data Engineer on the Data team, your mission is to build and maintain scalable, high-performance data pipelines that power our analytics and products. You’ll transform raw data into trusted, high-quality datasets, working closely with cross-functional teams to meet business and technical needs.

You’ll design batch and streaming workflows, implement ETL/ELT processes, and support the growth of our data platform with a focus on performance, automation, and reliability. Your work will enable better decision-making through clean, accessible, and well-modeled data.
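For a flavor of the day-to-day work, here is a minimal PySpark sketch of the kind of batch ETL step described above. It is purely illustrative: the table names (raw.bets, curated.daily_bet_summary) and columns are hypothetical examples, not part of any actual codebase for this role.

    # Minimal illustrative PySpark batch job: raw events -> curated daily summary.
    # All table and column names are hypothetical examples.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily_bet_summary").getOrCreate()

    # Read a raw landing table (assumed columns: bet_id, user_id, stake, placed_at).
    raw = spark.read.table("raw.bets")

    # Clean and aggregate into a trusted, analytics-ready dataset.
    summary = (
        raw.filter(F.col("stake") > 0)                      # drop invalid rows
           .withColumn("bet_date", F.to_date("placed_at"))  # derive a partition key
           .groupBy("bet_date", "user_id")
           .agg(
               F.count("bet_id").alias("bet_count"),
               F.sum("stake").alias("total_stake"),
           )
    )

    # Persist as a managed table partitioned by date (rewritten on each rerun).
    summary.write.mode("overwrite").partitionBy("bet_date").saveAsTable("curated.daily_bet_summary")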

Key Accountabilities & Responsibilities:

  • Assist in building and maintaining batch and real-time data pipelines using modern data tools and cloud platforms.

  • Monitor internal dashboards and communication channels to help identify and escalate production issues.

  • Respond to support requests from data users related to data access, anomalies, and performance questions.

  • Support the resolution of data quality issues, working with senior engineers to identify root causes.

  • Help create and maintain documentation for pipelines, systems, and troubleshooting steps.

  • Collaborate with engineers, analysts, and technical project managers to ensure smooth data operations and successful delivery of features.

  • Ensure the reliability of data ingestion processes, respecting the governance principles defined by the organization.

  • Contribute to the quality and technological innovation of the product and the work environment.

  • Participate in team meetings and code reviews to learn engineering best practices and contribute to technical quality.

  • Share in-depth technical knowledge of Big Data and help train users in data solutions.

  • Promote data quality and monitoring in the information environment.

What You’ll Learn and Be Exposed To:

  • Building, running, and maintaining large-scale data pipelines

  • Working with APIs and real-time data ingestion

  • Using tools like Airflow, Databricks, Great Expectations, and Alation (an Airflow example follows this list)

  • Best practices in data quality, governance, and monitoring

  • Collaborating closely with senior data engineers and learning from experienced mentors

  • Continuous integration and delivery of data solutions in a production environment
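As a hedged illustration of the orchestration work mentioned above, the sketch below wires a daily ingest step ahead of a transform step in Airflow (assuming Airflow 2.4 or later). The DAG id, task ids, and callables are invented for this example.

    # Minimal illustrative Airflow DAG (Airflow 2.4+): daily ingest -> transform.
    # The dag_id, task ids, and callables are hypothetical examples.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def ingest():
        print("pull raw data from the source API")  # placeholder for real ingestion

    def transform():
        print("build curated tables from raw data")  # placeholder for real transforms

    with DAG(
        dag_id="daily_betting_pipeline",
        start_date=datetime(2025, 1, 1),
        schedule="@daily",  # run once per day
        catchup=False,      # do not backfill past runs
    ) as dag:
        ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
        transform_task = PythonOperator(task_id="transform", python_callable=transform)

        ingest_task >> transform_task  # transform runs only after ingest succeeds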

Skills, Capabilities & Experience Required:

  • A passion for working with data and solving technical challenges.

  • Solid foundation in SQL, experience with at least one programming language (e.g., Python, Java, Scala), and familiarity with data transformation workflows using dbt.

  • Familiarity with data warehouses and/or cloud platforms such as AWS (bonus if you've used S3, Redshift, or similar tools).

  • Experience with ETL/ELT processes and data ingestion techniques.

  • Experience with Databricks, including Delta Live Tables (DLT) for building reliable and maintainable data pipelines.

  • Hands-on knowledge of Apache Spark and/or Spark Streaming for processing large-scale batch and real-time data (see the streaming sketch after this list).

  • Exposure to data orchestration tools (e.g., Airflow) or monitoring tools (e.g., Datadog).

  • Good communication and collaboration skills.

  • Curiosity and a growth mindset – you’re eager to develop your skills and try new tools.

  • Understanding of dimensional modeling or data testing frameworks.

  • Familiarity with Git or CI/CD pipelines (e.g., GitHub Actions).
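To ground the Spark Streaming item above, here is a minimal Spark Structured Streaming sketch that reads from Kafka and appends to a Delta table. It assumes the Kafka and Delta Lake connectors are available on the cluster (as on Databricks); the broker address, topic, and paths are hypothetical.

    # Minimal illustrative Structured Streaming job: Kafka -> Delta table.
    # Broker, topic, and paths are hypothetical; Kafka/Delta connectors assumed.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("bets_stream").getOrCreate()

    # Subscribe to a stream of raw events from Kafka.
    events = (
        spark.readStream.format("kafka")
             .option("kafka.bootstrap.servers", "broker:9092")
             .option("subscribe", "bets")
             .load()
    )

    # Kafka delivers values as bytes; cast to string for downstream parsing.
    parsed = events.select(F.col("value").cast("string").alias("raw_json"))

    # Append to a Delta table; the checkpoint enables recovery after restarts.
    query = (
        parsed.writeStream.format("delta")
              .option("checkpointLocation", "/chk/bets_stream")
              .start("/delta/raw_bets")
    )
    query.awaitTermination()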

Nice to Have:

  • Experience with observability and performance tools (e.g., Datadog, Dynatrace).

  • Knowledge of containers and container orchestration (Docker and/or Kubernetes).

  • Experience with Domain-Driven Design (DDD).

  • Knowledge of data governance, including GDPR and data privacy.

  • Experience with high-volume, high-performance production environments.

  • Experience with data products in the financial sector and/or fintechs.

Benefits:

  • Hybrid & remote working options

  • €1,000 per year for self-development

  • Company share scheme

  • 25 days of annual leave

  • 20 days per year to work abroad

  • 5 personal days/year

  • Flexible benefits: travel, sports, hobbies

  • Extended health, dental, and travel insurance

  • Customized well-being programmes

  • Career growth sessions

  • Thousands of online courses through Udemy

  • A variety of engaging office events

Disclaimer:

We are an inclusive employer. By embracing diverse experiences and perspectives, we create a lasting, positive impact for our employees, customers, and the communities we’re part of. You don't have to meet all the requirements listed to apply for this role. If you need any adjustments to make this role work for you, let us know, and we’ll see how we can accommodate them.

We thank all applicants for their interest; however, only the candidates who best meet the job requirements will be contacted for an interview.

By submitting your application online, you agree that your details will be used to progress your application for employment. If your application is successful, your details will be used to administer your personnel record. If your application is unsuccessful, we will retain your details for no longer than three years to consider you for prospective roles within the company.