Cloud Data Engineer – Outside IR35
Venesky-Brown’s client, a public sector organisation in Edinburgh, is currently looking to recruit a Cloud Data Engineer for an initial 6-month contract, with the potential to extend, at a rate of £450 – £500/day (Outside IR35). This role will be hybrid, with a mix of home and office working.
Responsibilities:
– Implement the data pipelines that handle the ETL processes in the AWS environment.
– Automate the nightly data transfer from on-premises Oracle databases to the AWS environment.
– Collaborate with the ETL Engineer to ensure seamless integration between the extraction and transformation steps.
– Set up monitoring and logging to track the performance and reliability of the data pipelines.
– Optimise the data ingestion process for performance and scalability.
– Troubleshoot and resolve issues related to data pipelines and ETL processes.
– Document data engineering processes and ensure alignment with best practices.
Essential Skills:
– Bachelor’s degree in Computer Science, Data Engineering, or a related field.
– 3+ years of experience in data engineering, with a focus on ETL pipeline development.
– Proficiency in AWS services such as Glue, Lambda, S3, and Redshift.
– Strong programming skills in Python, SQL, or other relevant languages.
– Experience with data pipeline monitoring and performance optimisation.
– Knowledge of Oracle database structures and data migration strategies.
Desirable Skills:
– AWS Certified Data Analytics or AWS Certified Solutions Architect certification.
– Experience working with large-scale data warehouses in a cloud environment.
– Familiarity with DevOps practices for data engineering workflows.
If you would like to hear more about this opportunity, please get in touch.