For a global client in Munich we are currently searching for a (Senior) Big Data Engineer - Hadoop / Spark.
Your responsibilities:
- Manage the Hadoop infrastructure, create roadmaps for Hadoop cluster deployment, and troubleshoot Hadoop-related applications
- Gather requirements from IT architects to optimize system performance and advance its technological foundation
- Develop applications in Spark, Spark Streaming & Kafka using functional programming methods in Scala
- Implement statistical methods and machine learning algorithms to be executed in Spark applications
- Work hand in hand with the data science and infrastructure departments
Your profile:
- Completed degree in a relevant field
- 2+ years of hands-on experience with the Hadoop ecosystem (Apache Spark, Spark Streaming, Kafka, MapReduce, Impala, Hive, etc.), including implementing Hadoop clusters
- Programming experience in at least one of the following languages: Scala, Python, Java, R
- Good Linux knowledge
- Experience working in an international environment
- Fluency in English; German is a plus
For consideration, please send us your CV along with your earliest possible start date and salary expectations.
Feel free to call 089 2109 3906 for more information.