At Jacobs, we’re challenging today to reinvent tomorrow by solving the world’s most critical problems for thriving cities, resilient environments, mission‑critical outcomes, operational advancement, scientific discovery and cutting‑edge manufacturing, turning abstract ideas into realities that transform the world for good.
Your impact
If you enjoy designing elegant data systems, shipping production‑grade code, and seeing your work make a measurable difference, this role is for you.
Our team builds data platforms and AI solutions that power critical infrastructure, transform operations, and move entire industries.
In this role you will work with a broad set of clients on high‑scale problems, backed by a global organisation investing heavily in Azure, Databricks, and applied AI. You'll:
- Work primarily on Azure and Databricks (Spark, Delta Lake, Unity Catalog).
- Ship modern ELT/ETL, streaming and batch data products, and ML/AI pipelines.
- Operate at serious scale across water, transport, energy, and more.
- Join a collaborative, engineering‑led culture with real investment in platforms and tooling.
You'll utilise a stack such as the following (a brief illustrative code sketch follows the list):
- Azure: ADLS Gen2, Event Hubs, Azure Data Factory (ADF) or Synapse pipelines, Functions, Key Vault, VNets
- Databricks: Spark, Delta Lake, Unity Catalog, Workflows, MLflow (experiments, model registry)
- Languages: Python (PySpark), SQL (Delta SQL), optional Scala
- Engineering: Git, pull requests, code review, unit/integration tests, dbx, notebooks as code
- Platform & Ops: Azure DevOps/GitHub, CI/CD, Terraform or Bicep, monitoring/alerting
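To give a flavour of the day‑to‑day work, here's a minimal, hypothetical sketch of a batch ELT step in PySpark with Delta Lake on Databricks. The storage path and Unity Catalog table name are illustrative placeholders, not real project assets.

```python
# Minimal batch ELT sketch on Databricks (PySpark + Delta Lake).
# Paths and table names below are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in Databricks notebooks

# Read raw landing data from ADLS Gen2 (hypothetical container and path)
raw = spark.read.format("json").load(
    "abfss://landing@examplelake.dfs.core.windows.net/sensors/"
)

# Light transformation: typed columns plus an ingestion timestamp
curated = (
    raw.select(
        F.col("site_id").cast("string"),
        F.col("reading").cast("double"),
        F.to_timestamp("event_time").alias("event_time"),
    )
    .withColumn("ingested_at", F.current_timestamp())
)

# Append to a governed Delta table (three-part Unity Catalog name is illustrative)
curated.write.format("delta").mode("append").saveAsTable("main.curated.sensor_readings")
```

In practice a step like this would sit inside a Databricks Workflow or ADF pipeline, with tests, monitoring, and lineage around it.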
Your remit and responsibilities
- Design & build robust data platforms and pipelines on Azure and Databricks (batch + streaming) using Python/SQL, Spark, Delta Lake, and Data Lakehouse patterns.
- Develop AI‑enabling foundations: feature stores, ML‑ready datasets, and automated model‑serving pathways (MLflow, model registries, CI/CD) – see the MLflow sketch after this list.
- Own quality & reliability: testing (dbx/pytest), observability (metrics, logging, lineage), and cost/performance optimisation.
- Harden for enterprise: security‑by‑design, access patterns with Unity Catalog, data governance, and reproducible environments.
- Automate the boring stuff: IaC (Terraform/Bicep), CI/CD (Azure DevOps/GitHub Actions), and templated project scaffolding.
- Partner with clients: translate business problems into technical plans, run workshops, and present trade‑offs with clarity.
- Ship value continuously: iterate, review, and release frequently; measure outcomes, not just outputs.
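To illustrate the AI‑enabling side of the role, here's a minimal sketch of MLflow experiment tracking and model registration. The experiment path and registry name are hypothetical, and the toy scikit‑learn model simply stands in for whatever you'd actually train.

```python
# Minimal MLflow sketch: log params/metrics and register a model.
# Experiment path and registered model name are hypothetical.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

X, y = make_regression(n_samples=500, n_features=8, noise=0.1, random_state=42)

mlflow.set_experiment("/Shared/demand-forecasting-demo")

with mlflow.start_run():
    model = Ridge(alpha=1.0)
    model.fit(X, y)

    mlflow.log_param("alpha", 1.0)
    mlflow.log_metric("train_mae", mean_absolute_error(y, model.predict(X)))

    # Log the model artifact and register it in the model registry in one call
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demand_forecaster_demo",
    )
```

From there, CI/CD would typically promote registered model versions towards production environments.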
Here’s what you’ll need
- Strong SQL and Python skills for building reliable data pipelines.
- Hands‑on with Spark (preferably Databricks) and modern data modelling (e.g., Kimball/Inmon/Data Vault, lakehouse).
- Experience running on a cloud data platform (ideally Azure); sound software delivery practices: Git, CI/CD, testing, Agile ways of working.
- Streaming/event‑driven designs (Event Hubs, Kafka, Structured Streaming) – see the streaming sketch after this list.
- MPP/Data Warehouses (Synapse, Snowflake, Redshift) and NoSQL (Cosmos DB).
- ML enablement: feature engineering at scale, MLflow, basic model lifecycle know‑how.
- Infrastructure‑as‑code (Terraform/Bicep) and platform hardening.
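As a flavour of the streaming work mentioned above, here's a minimal sketch of a Structured Streaming job reading from Azure Event Hubs through its Kafka‑compatible endpoint and appending to a Delta table. The namespace, secret scope, checkpoint path, and table name are hypothetical placeholders.

```python
# Minimal Structured Streaming sketch: Event Hubs (Kafka endpoint) -> Delta table.
# Namespace, secret scope, paths, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

EH_NAMESPACE = "example-ns"   # hypothetical Event Hubs namespace
EH_NAME = "telemetry"         # hypothetical event hub (acts as the Kafka topic)
# Connection string pulled from a Key Vault-backed Databricks secret scope
# (dbutils is available in Databricks notebooks)
EH_CONN_STR = dbutils.secrets.get("example-scope", "eh-connection-string")

kafka_options = {
    "kafka.bootstrap.servers": f"{EH_NAMESPACE}.servicebus.windows.net:9093",
    "subscribe": EH_NAME,
    "kafka.security.protocol": "SASL_SSL",
    "kafka.sasl.mechanism": "PLAIN",
    # Databricks ships a shaded Kafka client, hence the kafkashaded prefix
    "kafka.sasl.jaas.config": (
        "kafkashaded.org.apache.kafka.common.security.plain.PlainLoginModule required "
        f'username="$ConnectionString" password="{EH_CONN_STR}";'
    ),
}

events = (
    spark.readStream.format("kafka")
    .options(**kafka_options)
    .load()
    .select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp").alias("event_time"),
    )
)

# Append continuously to a Delta table, with checkpointing for fault tolerance
(
    events.writeStream.format("delta")
    .option("checkpointLocation", "abfss://checkpoints@examplelake.dfs.core.windows.net/telemetry/")
    .toTable("main.raw.telemetry_events")
)
```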
Don’t meet every single bullet? We’d still love to hear from you. We hire for mindset and potential as much as current skills.
Benefits: With safety and flexibility always top of mind, we offer flexible working arrangements and a range of benefits and opportunities, from well‑being support to our global giving and volunteering program, to exploring new and inventive ways to help our clients make the world a better place.
As a disability confident employer, we will interview disabled candidates who best meet the criteria. We welcome applications from candidates who are seeking flexible working and from those who may not meet all the listed requirements for a role.
