We are working with a leading organisation in the energy and commodities sector, supporting the transition to a more sustainable and increasingly complex energy landscape.
As part of a growing Data Platform team, you will play a key role in building modern, scalable data infrastructure that supports analytics, trading insights, and operational decision-making.
This role offers the opportunity to work on a cloud-native data platform using AWS and Databricks, handling large-scale datasets across trading, finance, and operational domains.
Key Responsibilities
- Build and maintain scalable ETL and ELT pipelines using Python and PySpark
- Ingest data from multiple source systems including trading, finance, and operational platforms
- Design and implement data transformations within a Databricks Lakehouse environment
- Work with AWS services such as S3, Kinesis, IAM, and Lambda to support data ingestion and processing
- Optimise Spark jobs for performance and cost efficiency
- Support the full data lifecycle from ingestion through to delivery
- Implement data quality checks to ensure the accuracy and reliability of data
- Contribute to data governance standards and best practices
- Support the development of a secure and scalable data platform
- Work closely with analytics and BI teams to deliver high-quality datasets
- Support onboarding of new data sources and business areas
- Collaborate with engineering, operations, and business stakeholders
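To give a flavour of the data-quality checks mentioned above, here is a minimal, illustrative Python sketch. It is not taken from the role itself; the row shape, column names, and threshold are hypothetical assumptions, and in practice such checks would typically run inside a PySpark or Databricks pipeline.

```python
# Illustrative only: a minimal data-quality check of the kind described
# in the responsibilities above. Column names ("trade_id", "price") and
# the null-fraction threshold are hypothetical assumptions.

def check_quality(rows, required_columns, max_null_fraction=0.1):
    """Return a dict mapping column name to an issue description for any
    required column whose null fraction exceeds the threshold."""
    if not rows:
        return {"empty": True}
    issues = {}
    total = len(rows)
    for col in required_columns:
        nulls = sum(1 for row in rows if row.get(col) is None)
        if nulls / total > max_null_fraction:
            issues[col] = f"{nulls}/{total} null values exceed threshold"
    return issues

# Example usage with two hypothetical trade records:
trades = [
    {"trade_id": 1, "price": 42.0},
    {"trade_id": 2, "price": None},
]
print(check_quality(trades, ["trade_id", "price"]))
# → {'price': '1/2 null values exceed threshold'}
```

In a production pipeline the same idea would usually be expressed as Spark aggregations or declarative expectations rather than row-by-row Python, but the shape of the check is the same.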
Requirements
- 3 to 6 years of experience in data engineering or analytics engineering
- Strong Python skills with experience using PySpark or Apache Spark
- Experience building and maintaining production-grade data pipelines
- Hands‑on experience with Databricks, including Delta Lake and Lakehouse architecture
- Strong knowledge of AWS services such as S3, Kinesis, Lambda, and IAM
- Proficiency in SQL and data modelling
- Experience within energy, utilities, or commodities trading environments
- Exposure to streaming or real‑time data pipelines
- Familiarity with CI/CD practices for data engineering workflows
This is an opportunity to work on high-impact data systems in a rapidly evolving sector, contributing to the build-out of a modern, cloud-based data platform within a collaborative and technically strong team.
