Understanding Big Data Architecture E2E (Use case including Cassandra + Kafka + Spark + Zeppelin)  

Open Course: Understanding Big Data Architecture E2E (Use case including Cassandra + Kafka + Spark + Zeppelin)  
Timeline & Duration: July 27th – August 14th, 6 online sessions of 4 hours each, over 3 weeks (2 sessions/week, Monday + Thursday). An online setup will be available for exercises/hands-on sessions for the duration of the course.
Main trainer: Valentina Crisan
Location: Online (Zoom)
Price: 250 EUR 
Pre-requisites: knowledge of distributed systems and the Hadoop ecosystem (HDFS, MapReduce), plus some basic SQL.

More details and registration here.

Big Data Learning – Druid working group

Learning a new solution or building an architecture for a specific use case is never easy, especially when you try to embark on such an endeavour alone – thus in 2020 bigdata.ro started a new way of learning specific big data solutions/use cases: working groups. With the first working group (centered around Spark Structured Streaming + NLP) on its way to completion in July, we are now opening registration for a new working group – this time centered around Apache Druid: Building live dashboards with Apache Druid + Superset. The working group is planned to run from end of July to October and will bring together a team of 5-6 participants who will define the scope, select the data (open data), install the needed components, and implement the required flow. Besides the participants, the group will have a team of advisors (with experience in Druid and big data in general) who will guide the participants in solving the different issues that arise during the project.

Find more details of the working group here.