Job Title: DataOps Engineer
Location: Birmingham – Hybrid (3 days onsite per week)
Duration: 6-month contract
Pay Rate: £375 per day through an FCSA-accredited umbrella company
Role Description:
- The DataOps Engineer will assist the Senior DataOps Engineer, who is responsible for inspiring and supporting the Data Engineering team in designing, developing, and testing quality data engineering solutions, delivered through domain-oriented, multidisciplinary data product teams.
- This role will act as a Continuous Integration/Continuous Delivery (CI/CD) expert for the Data Office, helping Data Engineering teams automate as much of their work as possible to reduce waste and improve quality.
- Continually challenging and improving our processes, tools, and methodologies; undertaking review and assurance activity; and providing other team members with guidance on design, build, and test activity.
Requirements:
- Data Engineering or DevOps-related qualification and/or extensive Data/DataOps/DevOps development experience in a commercial, Agile environment.
- What we’d like to see: strong multi-project experience in several of the following (or similar) in a Data Engineering context:
- Strong experience developing and automating scalable data pipelines in a finance-related data context with a DataOps/DevOps mindset, having evolved your expertise toward operational excellence and automation in data environments. In addition to a solid foundation in data engineering, you demonstrate expertise in automation, CI/CD pipelines, Infrastructure as Code (IaC), and monitoring systems to ensure scalable, reliable data workflows.
You bring professional experience with the following tools:
- AWS data tooling such as S3/Glue/Redshift/SageMaker.
- Familiarity with containerization (e.g., Docker/EC2), orchestration in an enterprise environment (Airflow), infrastructure automation (Terraform), CI/CD platforms (GitHub Actions & administration), and password/secret management (HashiCorp Vault).
- Strong data-related programming skills: SQL/Python/Spark/Scala.
- Database technologies relating to Data Warehouse / Data Lake / Lakehouse patterns, and relevant experience handling structured and unstructured data.
- (Information Modeler) Experience in data modelling techniques and tooling.
- (Test) Quality Assurance and Test Automation experience in a Data Pipeline.
- (Machine Learning) Experience of industrialising and scaling machine learning models.
- (Machine Learning) Experience using machine learning frameworks such as TensorFlow / PyTorch.
What would be nice to have:
- Experience working in an Agile team, preferably SAFe.
- Experience with specific tooling: Qlik Replicate / Qlik Compose / Databricks / Informatica / SAS.
- An understanding of data modelling methodologies (Kimball, Data Vault, Lakehouse).
- An understanding of Data Science, AI, and Machine Learning ways of working.
- Experience with testing and testing standards.
…
