We are seeking a highly skilled Databricks Data Engineer to join our Data & AI practice. The successful candidate will have deep expertise in building scalable data pipelines, optimising Lakehouse architectures, and enabling advanced analytics and AI use cases on the Databricks platform. This role is critical to delivering modern data ecosystems that enable data-driven decision making, advanced analytics, and AI capabilities for our clients.
What you’ll be doing
Primary Responsibilities
- Client Engagement & Delivery
- Data Pipeline Development (Batch and Streaming)
- Databricks & Lakehouse Architectures
- Quality, Governance & Security
Business Relationships
- Solution Architects
- Data Engineers, Developers, ML Engineers, and Analysts
- Client stakeholders up to Head of Data Engineering, Chief Data Architect, and Analytics leadership
What experience you’ll bring
- Proven experience in data engineering and pipeline development on Databricks and cloud-native platforms.
- Strong consulting skills and the ability to collaborate effectively in client-facing environments.
- Hands‑on expertise across the data lifecycle: ingestion, transformation, modelling, governance, and consumption.
- Strong problem‑solving, analytical, and communication skills.
- Experience leading or mentoring teams of engineers to deliver high‑quality, scalable data solutions.
Technical Expertise
- Deep expertise with the Databricks platform (Spark/PySpark/Scala, Delta Lake, Unity Catalog, MLflow).
- Proficiency in ETL/ELT tools such as dbt, Matillion, Talend, or equivalent.
- Strong SQL and Python (or equivalent language) skills for data manipulation and automation.
- Hands‑on experience with cloud platforms (AWS, Azure, GCP).
- Familiarity with Databricks Workflows and other orchestration tools.
- Knowledge of data modelling methodologies (star schemas, Data Vault, Kimball, Inmon).
- Familiarity with medallion architectures, data lakehouse principles, and distributed data processing.
- Experience with version control tools (GitHub, Bitbucket) and CI/CD pipelines.
- Understanding of data governance, security, and compliance frameworks.
- Exposure to AI/ML workloads desirable.
Qualifications and Education
- Experience: 5–8 years in data engineering, data warehousing, or data architecture roles, including at least 3 years working with Databricks.
- Education: University degree required (BSc/MSc in Computer Science, Data Engineering, or related field preferred).
- Databricks certification (Databricks Certified Data Engineer Professional) highly desirable.
Measures of Success
- Delivery of high‑performing, scalable, and secure data pipelines aligned to client requirements.
- High client satisfaction and successful adoption of Databricks‑based solutions.
- Demonstrated ability to innovate and improve data engineering practices.
- Contribution to the growth of the practice through reusable assets, accelerators, and technical leadership.
What we’ll offer you
We offer a range of tailored benefits that support your physical, emotional, and financial wellbeing. Our Learning and Development team ensures continuous growth and development opportunities for our people. We also offer flexible work options.
We are an equal opportunities employer. We believe in the fair treatment of all our employees and commit to promoting equity and diversity in our employment practices. We are a proud Disability Confident & Committed Employer and commit to creating a diverse and inclusive workforce.