For one of our clients in the financial services sector, we are currently looking for an experienced Hadoop Engineer (m/f) in Munich.
Your responsibilities:
- Work closely with architecture teams to gather requirements and drive improvements in system performance.
- Administer a large-scale Hadoop infrastructure.
- Manage the full lifecycle of the Hadoop cluster: provide architectural guidance, analyse cluster capacity, and build roadmaps for cluster deployments.
- Design and implement enterprise-level security.
- Develop Spark and Kafka applications in Scala.
- Implement machine learning algorithms within Spark applications.
- Provide DevOps support for business applications and use cases.
Your profile:
- Completed degree in Computer Science, Mathematics, Business Informatics, or another relevant field.
- Very good knowledge of distributed computing and distributed system performance.
- Deep expertise in implementing large-scale Hadoop clusters.
- 2 years of relevant industry experience with Spark, Spark Streaming, Kafka, ZooKeeper, MapReduce, Flume, and Hive, as well as Oracle and MySQL.
- Programming and scripting skills in Scala, Java, Ruby, Python, or R.
- Proficiency with continuous-integration tools.
- Linux skills.
- Fluent English skills; German would be a bonus.
For more information, please call Tom Fernandez-Buckley on +49 89 2109 3904 or send your CV to firstname.lastname@example.org for consideration.