Work Mode: Hybrid (2-3 days a week in the London office)
Key Skills: Databricks (Spark), SQL, Python, and AWS services
We at Coforge are hiring a Data Engineer with the following skill set:
Key Responsibilities
- Develop, maintain, and optimize data pipelines and ETL workflows using Databricks (Spark).
- Bring 3+ years of hands-on experience on Databricks projects.
- Write efficient, optimized SQL queries for data extraction, transformation, and validation.
- Develop Python‑based scripts for automation, data processing, and integration tasks.
- Work with AWS Cloud services (S3, Glue, EC2, Lambda, IAM, Athena, EMR) to build scalable data solutions.
- Collaborate with business and technical teams to understand data requirements and translate them into technical specifications.
- Perform data validation, quality checks, and root‑cause analysis for data‑related issues.
- Ensure performance tuning, reliability, and scalability of data pipelines.
- Contribute to design reviews, architecture discussions, and best‑practice implementations.
- Prepare and maintain technical documentation.
Technical Skills
- Strong SQL: complex queries, performance tuning, stored procedures, indexing.
- Python: experience with data processing (Pandas, PySpark), scripting, and automation.
- Databricks: notebooks, job orchestration.
- AWS Cloud: working knowledge of at least three services.
Additional Skills (Good to Have)
- Experience with Git and CI/CD pipelines.
- Knowledge of data modeling concepts.
- Exposure to Agile methodologies.
- Experience with job-scheduling tools (Airflow, cron, etc.).
