Job Location: London
Education: Not Mentioned
Salary: £70,000 - £80,000 per annum
Industry: Not Mentioned
Functional Area: Not Mentioned
Job Type: Permanent, full-time
SENIOR DATA ENGINEER - Python, Kafka, AWS/GCP/Azure
CENTRAL LONDON (2 Days / Week)
£70,000 - £80,000

As a Senior Data Engineer, you will implement a data streaming ecosystem and build the data pipelines that help a new subscription-based service provider grow and process data more accurately.

THE COMPANY:
This company is a new subscription-based service provider for small businesses. It uses machine learning algorithms to calculate the best plans and policies for its clients, and it has recently received millions in funding from well-known multinational corporations. Its aim is to revolutionise the way policy rates are calculated and offered by completely automating the process, backed by heavy investment in the data and tech teams that supply the data for the machine learning algorithms.

THE ROLE:
* You will work in a cross-functional environment alongside data engineers, analysts and scientists.
* You will handle the migration from batch data processing to real-time streaming.
* You will help build ETL pipelines to serve the three new machine learning models being constructed.
* You will use Snowflake, Spark, Python, Kafka and AWS while deploying, managing and maintaining frameworks and CI/CD.

YOUR SKILLS AND EXPERIENCE:
The ideal Data Engineer will have:
* Strong coding skills with Python/Scala
* Experience building platforms on cloud services (AWS/GCP/Azure)
* Extensive understanding of Snowflake and Kafka (Kinesis and Pub/Sub accepted)
* Solid software engineering best practices (CI/CD)

HOW TO APPLY:
Please register your interest by sending your CV via the Apply link on this page.
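For context on the batch-to-streaming migration mentioned in the role: the core shift is from recomputing over a full dataset each run to updating state incrementally per event, the way a Kafka consumer processes a topic. A minimal pure-Python sketch of that contrast (no broker involved; the `events` list stands in for records from a topic, and all names here are illustrative, not from the advert):

```python
# Illustrative sketch: batch vs. incremental (streaming-style) aggregation.
# The `events` list stands in for records consumed from a Kafka topic.

def batch_average(events):
    """Batch mode: recompute over the full dataset on every run."""
    return sum(events) / len(events)

class StreamingAverage:
    """Streaming mode: update running state one event at a time,
    as a stream consumer would."""

    def __init__(self):
        self.count = 0
        self.total = 0.0

    def consume(self, value):
        self.count += 1
        self.total += value
        return self.total / self.count  # running average after this event

events = [10.0, 20.0, 30.0, 40.0]

stream = StreamingAverage()
running = [stream.consume(v) for v in events]

print(batch_average(events))  # 25.0
print(running[-1])            # 25.0 -- same answer, computed incrementally
```

The streaming version never holds the whole dataset, which is what makes it suitable for real-time pipelines where results must stay current as events arrive.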