AWS Data Migration & Data Engineering Consultant
Experience Level: 8-10+ years
Work Location: United Kingdom (Remote)
Job Overview
We are seeking a highly skilled AWS Data Migration & Data Engineering Consultant to lead end-to-end data migration initiatives across on-prem, cloud, and multi-cloud environments into AWS.
This role focuses on defining and executing data migration strategies, including assessment, wave planning, migration execution, and post-migration stabilization, while ensuring data integrity, minimal downtime, and alignment with business SLAs.
Key Responsibilities
- Lead end-to-end database migrations to AWS from diverse sources, including on-premises systems (e.g., Oracle, SQL Server, MySQL) and other cloud platforms (GCP, Azure).
- Define and execute the overall data migration strategy, including assessment, dependency mapping, and wave planning.
- Design and implement migration frameworks using AWS DMS, SCT, Glue, and Redshift.
- Define and execute data validation, reconciliation, and cutover strategies ensuring zero or minimal data loss.
- Enable CDC-based replication and minimal-downtime migrations across heterogeneous systems.
- Design and optimize ETL/data pipelines using AWS Glue, PySpark, or EMR.
- Architect and implement scalable, secure, and highly available data platforms on AWS.
- Drive schema conversion and modernization strategies (heterogeneous migrations where applicable).
- Optimize performance of source and target systems pre- and post-migration.
- Manage and govern S3-based data lakes and metadata using Glue Data Catalog.
- Collaborate with application, infrastructure, and analytics teams for seamless migration and stabilization.
- Implement monitoring, rollback, and failure recovery mechanisms.
- Ensure adherence to security, governance, and compliance standards.
Required Skills
- 8+ years of experience in database migration and data engineering.
- Proven experience in cross-platform data migrations (on-prem to cloud or cloud to cloud).
- Strong hands-on expertise with AWS DMS, AWS SCT, AWS Glue, and Amazon Redshift.
- Experience designing and implementing data validation and reconciliation frameworks.
- Strong experience in ETL/data pipeline development (Glue, PySpark, EMR, or equivalent).
- Deep proficiency in SQL and data modeling concepts.
- Experience working with S3-based data lakes and Glue Data Catalog.
- Strong understanding of CDC, transaction consistency, and migration cutover strategies.
- Hands-on experience with performance tuning, schema design, and query optimization.
- Proficiency in Python/PySpark and scripting (Shell/Python).
- Experience with Oracle, SQL Server, MySQL, PostgreSQL, or cloud-native databases.
Nice to Have
- Exposure to real-time replication tools (e.g., GoldenGate or equivalent).
- Database Administration (DBA) expertise, including backup & recovery strategies, HA/DR design, database patching and upgrades, lifecycle management, capacity planning, performance monitoring, and security hardening.
- Experience with Lake Formation and data governance frameworks.
- Familiarity with Data Mesh and cross-account data sharing architectures.
- Experience with CI/CD for data pipelines (CodePipeline, Jenkins, etc.).
- Knowledge of Infrastructure as Code (Terraform / CloudFormation).
- AWS Certifications (Solutions Architect / Data Engineer).
- Experience mentoring teams and leading large-scale migration programs.
Benefits
- Make an impact at one of the world’s fastest-growing AI-first digital engineering companies.
- Upskill and discover your potential by solving complex challenges in cutting-edge areas of technology alongside passionate, talented colleagues.
- Work where innovation happens – collaborate with disruptive innovators in a research-focused organization with 60+ patents filed across various disciplines.
- Stay ahead of the curve – immerse yourself in breakthrough AI, ML, data, and cloud technologies and gain exposure working with Fortune 500 companies.