· Experience in building and optimizing big data pipelines, architectures, and data sets
· Experience with big data tools: Hadoop, Spark, Hive, Kafka, HBase, etc.
· Experience with stream-processing systems: Spark Streaming, etc.
· Advanced working