Senior Data Engineer - Amsterdam
|Duration:||2 months / possibility of extension|
|Posted on:||11 September 2019 at 18:54|
|Rate indication:||To be discussed / TBD|
Senior Data Engineer
Hiring: "Due to changed laws and regulations, this assignment is unfortunately less suitable for self-employed contractors (ZZP). If you have any questions or anything is unclear, we will of course be happy to clarify."
Most important skills:
- At least 5 years of experience as Data Engineer
- Experience with Python and Scala.
- Experience with Machine Learning and Continuous Delivery
- Experience working with data scientists and BI modelling
Interviews will be scheduled next week.
ING is looking for a passionate Senior Data Engineer
Think Forward! Our purpose is to empower people to stay a step ahead in life and in business. We are an industry-recognized, strong brand with positive recognition from customers in many countries, a strong financial position, an omni-channel distribution strategy and an international network. If you want to work at a place with lots of freedom to innovate, where we believe that you can live by the Agile manifesto without jeopardizing the necessary continuity, compliance and QA measures, where we are committed to delivering stable and secure services to end users, and where we have a 'no nonsense' getting-things-done mentality, please read on!
An experienced Data Engineer who enjoys working with the latest technology to build data products and enable further data science within ING. ING works with large amounts of data, both batch and streaming. The engineers in the 1:1 Analytics area work together with data scientists, data analysts and customer journey experts to utilize this data for the benefit of our customers. Our squads are multi-disciplinary and work agile.
Your work environment
You will be part of the ML Engineering chapter, which consists of data engineers who develop software (APIs and end-user applications), data pipelines and tools that embed and empower sophisticated analytical algorithms, such as machine learning models. Our goal is to be the best team of people who know how to create and operationalize ML models by applying software engineering.
The 1:1 Analytics IT Area is a team of 40 people that has the mission to make the customer's interaction extremely personal and relevant. We do this by combining Big Data technology with Data Science to deliver high-value solutions and products for our organization. We work in a fun and creative environment and we’re dedicated to bringing out the best in both each other and our projects.
You are an ambitious, enthusiastic software engineer. You enjoy designing, developing, testing and maintaining complex data-driven systems. You have worked with low-latency distributed systems before, or you have read about them and tried some things out for yourself.
You are a good programmer with a strong theoretical basis; nobody has to explain the details of KISS, DRY, YAGNI or the GoF OOP design patterns to you. You have some experience in functional programming and know its core principles. You are able to write clean, correct, efficient and maintainable code.
5 years of experience transforming data science models into solid, scalable products running in production
Build and maintain our building-block applications (Python, Scala) in order to continuously improve them and integrate new requirements from ML models.
Solid knowledge and experience of machine learning models
Experience with Python and Scala
Hands-on with Big Data Tooling - Hadoop, Yarn, Spark, Kafka, Cassandra, NiFi, Airflow
Solid experience with CI/CD automation in the field of machine learning and software engineering
Solid experience with containerization (Docker, Kubernetes)
Plus: contributions to open-source projects in the field
Do you recognize yourself in this profile?
Bachelor/Master's degree in Computer Science or related subject
Knowledge of data manipulation and transformation, e.g. SQL
Strong programming skills in Scala with passion for FP. Thrive on challenges around microservices, performance, scalability and concurrency
Hands-on experience managing and further developing distributed systems and clusters for both batch and streaming data (Hadoop/Spark and/or Kafka/Flink)
Experience in setting up both SQL and NoSQL databases
Deployment and provisioning automation tools, e.g. Docker, Kubernetes, OpenShift, CI/CD
Bash scripting and Linux systems administration
Affinity with Predictive Analytics and Machine Learning
For additional details regarding submission eligibility and payment terms, please refer to the contract. Only submissions from agencies with a current service contract in place will be considered.
End: 2 months (extension possible)
Closing date for responses: 15 September, 15:00
If you meet the above requirements, we would like to receive the following from you before the closing date indicated above:
- A chronological CV (Word format)
- A motivation letter tailored to this project (maximum 2,000 characters)
- Your hourly rate (including travel and accommodation expenses, excluding VAT)
- Your possible start date and any holiday plans
We expect exclusivity from you, to prevent you from being offered to our client for this project more than once. Should we not make use of your offer, we will of course inform you.