Working group: Stream processing with Apache Flink

Learning a new solution or building an architecture for a specific use case is never easy, especially when you embark on such an endeavour alone. That is why in 2020 bigdata.ro started a new way of learning specific big data solutions and use cases: working groups. We kicked off three of them:

  • Spark Structured Streaming + NLP
  • Building live dashboards with Druid + Superset
  • Understanding Decision Trees (running until December)

With two of these groups completed and the Decision Trees one wrapping up soon, we are now opening registration for a new working group, this time focused on Apache Flink: how to process streaming data with Apache Flink and Apache Pulsar/Apache Kafka. The working group is planned to run from December to February and will bring together a team of 5-6 participants who will define the scope (Kafka or Pulsar and the exact use case), select the data (open data), install the needed components, and implement the resulting flow.
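To give a flavour of the kind of flow the group could end up building, here is a minimal sketch of a Flink streaming job that reads records from Kafka and keeps a running word count, using Flink's DataStream API and the Kafka connector. The broker address, consumer group and topic name ("events") are placeholders; the actual use case, source system (Kafka or Pulsar) and data will be decided by the participants.

```java
import java.util.Properties;

import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
import org.apache.flink.util.Collector;

public class KafkaWordCount {

    public static void main(String[] args) throws Exception {
        // Streaming execution environment.
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Kafka connection settings; broker, group id and topic are placeholders.
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        props.setProperty("group.id", "flink-working-group");

        // Consume the hypothetical "events" topic as a stream of strings.
        env.addSource(new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props))
           // Split each record into words and emit (word, 1) pairs.
           .flatMap((String line, Collector<Tuple2<String, Integer>> out) -> {
               for (String word : line.toLowerCase().split("\\W+")) {
                   if (!word.isEmpty()) {
                       out.collect(Tuple2.of(word, 1));
                   }
               }
           })
           // Lambdas lose generic type information, so declare the output type explicitly.
           .returns(Types.TUPLE(Types.STRING, Types.INT))
           // Group by word and print the running counts.
           .keyBy(t -> t.f0)
           .sum(1)
           .print();

        env.execute("Kafka word count");
    }
}
```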

More details and registration here.
