Data ingest using Apache NiFi

Date & duration: August 8th, 2020, 9:30 – 14:00 (30 min break included)
Trainer: Lucian Neghina
Location: Online (using Zoom)
Price: 150 RON (including VAT)
Number of places: 10 (3 places left)
When you design a data solution, one of the earliest questions is where your data comes from and how you will make it available to the solution(s) that process or store it. This is especially true with IoT data, which arrives from various sources and is typically processed and stored by several components of your solution. Moreover, since nowadays we work mainly with streams rather than static data, a system that can design and run the flow of events from the sources to the processing/storage stages is extremely important. Apache NiFi was built to automate that data flow from one system to another. It is a data flow management system that comes with a web UI for building data flows in real time, and it supports flow-based programming.
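To give a feel for the flow-based programming model that NiFi implements, here is a minimal conceptual sketch in Python: independent processors connected by queues, each transforming data and passing it downstream. This is only an illustration of the model under assumed names (`Processor`, `connect`), not NiFi's actual API — real NiFi flows are built in the web UI or via its REST API.

```python
from queue import Queue

class Processor:
    """One node in a flow: reads from an input queue, applies a
    transform, and writes the result to an output queue."""
    def __init__(self, transform):
        self.transform = transform
        self.inbox = Queue()
        self.outbox = Queue()

    def run_once(self):
        # Pull one item, transform it, push it downstream.
        item = self.inbox.get()
        self.outbox.put(self.transform(item))

def connect(upstream, downstream):
    # Route the upstream processor's output into the downstream's input.
    downstream.inbox = upstream.outbox

# A tiny two-step flow: parse a raw CSV line, then build a record.
parse = Processor(lambda line: line.strip().split(","))
to_record = Processor(lambda fields: {"sensor": fields[0],
                                      "value": float(fields[1])})
connect(parse, to_record)

parse.inbox.put("sensor-1,23.5\n")
parse.run_once()
to_record.run_once()
record = to_record.outbox.get()
print(record)  # {'sensor': 'sensor-1', 'value': 23.5}
```

In NiFi the same idea appears as processors connected by queued relationships on the canvas, with the data items ("flow files") buffered between steps so each processor can run independently.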

You can check out the agenda and register here.

Understanding Big Data Architecture E2E (Use case including Cassandra + Kafka + Spark + Zeppelin)  

Timeline & Duration: July 27th – August 14th, 6 × 4-hour online sessions over 3 weeks (2 sessions/week, Monday + Thursday). An online setup will be available for exercises/hands-on sessions for the duration of the course.
Main trainer: Valentina Crisan
Location: Online (Zoom)
Price: 250 EUR 
Pre-requisites: knowledge of distributed systems and the Hadoop ecosystem (HDFS, MapReduce), plus some basic SQL.

More details and registration here.

Big Data Learning – Druid working group

Learning a new solution or building an architecture for a specific use case is never easy, especially when you try to embark on such an endeavour alone. That is why in 2020 we started a new way of learning specific big data solutions/use cases: working groups. With the first working group (centered around Spark Structured Streaming + NLP) on its way to completion in July, we are now opening registration for a new working group, this time centered around Apache Druid: Building live dashboards with Apache Druid + Superset. The working group aims to take place end of July – October and will bring together a team of 5-6 participants who will define the scope, select the data (open data), install the needed components, and implement the needed flow. Besides the participants, the group will have a team of advisors (with experience in Druid and big data in general) who will help the participants solve the different issues that arise during the project.

Find more details of the working group here.