A Big Data Analysis of Meetup Events using Spark NLP, Kafka and Vegas Visualization
Finding trending Meetup topics using Streaming Data, Named Entity Recognition and Zeppelin Notebooks – a tale of a super enthusiastic working group during the pandemic times.
Author : Andrei Deusteanu
Project Team: Valentina Crisan, Ovidiu Podariu, Maria Catana, Cristian Stanciulescu, Edwin Brinza, Andrei Deusteanu
We started out as a working group from bigdata.ro. Our main purpose was to learn and practice Spark Structured Streaming, Machine Learning and Kafka. We built the entire use case and then the architecture from scratch.
This is a learning case story. We did not really know from the beginning what would be possible or not. For sure, looking back, some of the steps could have been optimized. But, hey, that’s how life works in general.
Since Meetup.com provides data through a real-time API, we used it as our main data source. We did not use the data for commercial purposes, just for testing.
The problems we tried to solve:
- Allow meetup organizers to identify trending topics related to their meetup. We computed Trending Topics based on the descriptions of the events matching the tags of interest to us, using the John Snow Labs Spark NLP library to extract entities.
- See which Meetup events attract the most responses within our region. For this, we monitored the RSVPs for meetups using certain tags related to our domain of interest – Big Data.
For these we developed 2 sets of visualizations:
- Trending Keywords
- RSVPs Distribution
Architecture
Trending Keywords
- The Stream Reader script fetches data on Yes RSVPs, filtered by certain tags, from the Meetup Stream API. It then selects only the columns we need and saves this data into the rsvps_filtered_stream Kafka topic.
- For each RSVP, the Stream Reader script then fetches the event data, but only if the event_id does not exist in the events.idx file. This way we make sure that we read event data only once. The setup for the Stream Reader script can be found here: Install Kafka and fetch RSVPs
- The Spark ML – NER Annotator reads data from the Kafka topic events and then applies a Named Entity Recognition Pipeline with Spark NLP. Finally it saves the annotated data in the Kafka topic TOPIC_KEYWORDS. The Notebook with the code can be found here.
- Using KSQL we create 2 subsequent streams to transform the data and, finally, 1 table that Spark uses for the visualization. In Big Data architectures, SQL engines only build logical objects that assign metadata to the physical-layer objects; in our case these were the streams we built on top of the topics. To detail a bit: we map the data from TOPIC_KEYWORDS to a new KSQL stream called KEYWORDS. Then, using a CREATE AS SELECT, we create a new stream, EXPLODED_KEYWORDS, that explodes the data, since all of the keywords arrive in an array; now we have 1 row for each keyword. Next, we count the occurrences of each keyword and save them into a table, KEYWORDS_COUNTED. The steps to set up the streams and the tables, together with the KSQL code, can be found here: Kafka – Detailed Architecture.
- Finally, we use the Vegas library to produce the visualizations on Trending Keywords; a minimal sketch of such a chart is shown right after this list. The Notebook describing all steps can be found here.
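For illustration, here is a minimal sketch of the kind of Vegas chart we build at this step, assuming the keyword counts have already been spilled to disk (see the Issues chapter) and are read back into a Spark DataFrame. The file path, column names and the Zeppelin displayer setup are assumptions for illustration, not the exact notebook code.

```scala
import org.apache.spark.sql.functions.desc
import vegas._
import vegas.sparkExt._

// In Zeppelin, Vegas renders through an implicit displayer that wraps the chart HTML in %html.
implicit val displayer: String => Unit = { s => println("%html " + s) }

// Keyword counts spilled to disk from KEYWORDS_COUNTED and read back (assumed path and columns).
val keywordsDf = spark.read.parquet("/data/keywords_counted")

Vegas("Trending Keywords")
  .withDataFrame(keywordsDf.orderBy(desc("COUNT")).limit(20))
  .encodeX("KEYWORD", Nom)
  .encodeY("COUNT", Quant)
  .mark(Bar)
  .show
```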
Detailed Explanation of the NER Pipeline
In order to annotate the data, we need to transform it into a certain format, from text to numbers, and then back to text.
- We first use a DocumentAssembler to turn the text into a Document type.
- Then, we break the document into sentences using a SentenceDetector.
- After this we separate the text into smaller units by finding the boundaries of words using a Tokenizer.
- Next we remove HTML tags and numerical tokens from the text using a Normalizer.
- After the preparation and cleaning of the text we need to transform it into a numerical format, vectors. We use an English pre-trained WordEmbeddingsModel.
- Next comes the actual keyword extraction using an English NerDLModel Annotator. NerDL stands for Named Entity Recognition Deep Learning.
- Further on we need to transform the numbers back into a human readable format, a text. For this we use a NerConverter and save the results in a new column called entities.
- Before applying the model to our data, we need to run an empty training step. We use the fit method on an empty dataframe because the model is pretrained.
- Then we apply the pipeline to our data and select only the fields that we’re interested in.
- Finally, we write the data to the Kafka topic TOPIC_KEYWORDS. A sketch of the full pipeline is shown below.
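Below is a minimal Scala sketch of such a pipeline, assuming the events have already been read from the events topic into a DataFrame called eventsDf with event_id and description columns. The model names, cleanup patterns, column names and broker address are illustrative assumptions rather than the exact notebook code.

```scala
import com.johnsnowlabs.nlp.DocumentAssembler
import com.johnsnowlabs.nlp.annotator._
import org.apache.spark.ml.Pipeline
import org.apache.spark.sql.functions.{col, struct, to_json}

// Text -> Document
val documentAssembler = new DocumentAssembler()
  .setInputCol("description")                 // event description text (assumed column name)
  .setOutputCol("document")

// Document -> sentences
val sentenceDetector = new SentenceDetector()
  .setInputCols(Array("document"))
  .setOutputCol("sentence")

// Sentences -> tokens
val tokenizer = new Tokenizer()
  .setInputCols(Array("sentence"))
  .setOutputCol("token")

// Drop HTML tags and numeric tokens (patterns are illustrative)
val normalizer = new Normalizer()
  .setInputCols(Array("token"))
  .setOutputCol("normalized")
  .setCleanupPatterns(Array("<[^>]*>", "[0-9]"))

// Pre-trained English word embeddings: tokens -> vectors
val embeddings = WordEmbeddingsModel.pretrained("glove_100d", "en")
  .setInputCols(Array("sentence", "normalized"))
  .setOutputCol("embeddings")

// Pre-trained NER Deep Learning model
val nerModel = NerDLModel.pretrained("ner_dl", "en")
  .setInputCols(Array("sentence", "normalized", "embeddings"))
  .setOutputCol("ner")

// Back to a human-readable format
val nerConverter = new NerConverter()
  .setInputCols(Array("sentence", "normalized", "ner"))
  .setOutputCol("entities")

val pipeline = new Pipeline().setStages(Array(
  documentAssembler, sentenceDetector, tokenizer, normalizer,
  embeddings, nerModel, nerConverter))

// All stages are pre-trained, so fitting on an empty DataFrame only wires the pipeline together.
import spark.implicits._
val pipelineModel = pipeline.fit(Seq.empty[String].toDF("description"))

// Annotate the events, keep only the fields we care about, and write them to TOPIC_KEYWORDS.
val annotated = pipelineModel.transform(eventsDf)
  .select(col("event_id"), col("entities.result").as("keywords"))

annotated
  .select(to_json(struct(col("*"))).as("value"))
  .write
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")  // assumed broker address
  .option("topic", "TOPIC_KEYWORDS")
  .save()
```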
RSVPs Distribution
- The Stream Reader script fetches data on Yes RSVPs, filtered by certain tags, from the Meetup Stream API. It then selects only the columns we need and saves this data into the rsvps_filtered_stream Kafka topic.
- For each RSVP, the Stream Reader then reads the event data, but only if the event_id does not exist in the events.idx file. This way we make sure that we read event data only once. The setup for the Stream Reader script can be found here: Install Kafka and fetch RSVPs
- Using KSQL we aggregate and join data from the 2 topics to create 1 Stream, RSVPS_JOINED_DATA, and subsequently 1 Table, RSVPS_FINAL_TABLE, containing all RSVP counts. The KSQL operations and their code can be found here: Kafka – Detailed Architecture
- Finally, we use the Vegas library to produce visualizations on the distribution of RSVPs around the world and in Romania; a sketch of how the aggregated data can be pulled back into Spark is shown after this list. The Zeppelin notebook can be found here.
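As a rough illustration of this last step, here is a sketch that batch-reads the Kafka topic backing RSVPS_FINAL_TABLE into Spark and spills it to disk before charting. The record schema, topic name, broker address and output path are assumptions; the real schema may differ.

```scala
import org.apache.spark.sql.functions.{col, from_json}
import org.apache.spark.sql.types.{LongType, StringType, StructField, StructType}

// Assumed shape of the changelog records behind RSVPS_FINAL_TABLE.
val rsvpSchema = StructType(Seq(
  StructField("COUNTRY", StringType),
  StructField("CITY", StringType),
  StructField("EVENT_NAME", StringType),
  StructField("RSVP_COUNT", LongType)
))

// Batch-read the Kafka topic that backs the KSQL table and parse the JSON payload.
val rsvps = spark.read
  .format("kafka")
  .option("kafka.bootstrap.servers", "localhost:9092")  // assumed broker address
  .option("subscribe", "RSVPS_FINAL_TABLE")
  .option("startingOffsets", "earliest")
  .load()
  .select(from_json(col("value").cast("string"), rsvpSchema).as("r"))
  .select("r.*")

// Spill to disk first (see the Issues chapter); the Vegas and Grafana charts are built on these files.
rsvps.write.mode("overwrite").parquet("/data/rsvps_final")
```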
Infrastructure
We used a machine from Hetzner Cloud with the following specs:
- CPU: Intel Xeon E3-1275v5 (4 cores/8 threads)
- Storage: 2×480 GB SSD (RAID 0)
- RAM: 64GB
Visualizations
RSVPs Distribution
These visualizations are done on data collected between May 8th, 22:15 UTC and June 4th, 11:23 UTC.
Worldwide – Top Countries by Number of RSVPs
Worldwide – Top Cities by Number of RSVPs
As you can see, most of the RSVPs occur in the United States, but the city with the highest number of RSVPs is London.
Worldwide – Top Events by Number of RSVPs
Romania – Top Cities in Romania by Number of RSVPs
As you can see, most of the RSVPs are in the largest cities of the country. This is probably because companies tend to establish their offices in these cities and therefore attract talent there.
Romania – Top Meetup Events
Romania – RSVPs Distribution
* This was produced with Grafana using RSVP data processed in Spark and saved locally.
Europe – RSVPs Distribution
* This was produced with Grafana using RSVP data processed in Spark and saved locally.
Trending Keywords
Worldwide
This visualization is done on data from July.
Romania
This visualization is done on almost 1 week of data from the start of August. See the Issues discovered along the way chapter, point 5.
Issues discovered along the way
All of these are mentioned in the published Notebooks as well.
- Visualizing data directly from the stream using the Helium Zeppelin add-on and the Vegas library did not work. We had to spill the data to disk, then build DataFrames on top of the files and finally do the visualizations.
- Spark NLP did not work for us in a Spark standalone local cluster installation (with a local file system). Standalone local cluster means that the cluster – Spark Cluster Manager & Workers – runs on the same physical machine; such a setup does not need distributed storage such as HDFS. The workaround for us was to configure Zeppelin to use local Spark, local[*], meaning the non-distributed, single-JVM deployment mode available in Zeppelin.
- The Vegas plug-in could not be enabled initially. Running the recommendation from GitHub – %dep z.load("org.vegas-viz:vegas_2.11:{vegas-version}") – always raised an error. The workaround was to add all the dependencies manually in /opt/spark/jars. These dependencies can be found when deploying the Spark shell with the Vegas library: /opt/spark/bin/spark-shell --packages org.vegas-viz:vegas-spark_2.11:0.3.11
- The Helium Zeppelin add-on did not work / could not be enabled. It raised an error when we tried to enable it from the Zeppelin GUI in our configuration, and we did not manage to solve this issue. That is why we used only Vegas, although it does not support map visualizations. In the end we got a bit creative: we exported the data and loaded it into Grafana for the map visualizations.
- The default retention policy for Kafka is 7 days, which means that data older than 1 week is deleted. For some of the topics we changed this setting, but for some we forgot to do it and therefore lost the data. This affected our visualization of the Trending Keywords in Romania. A sketch of how a topic's retention can be changed is shown after this list.
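For reference, here is a minimal Scala sketch of how a topic's retention could be raised programmatically with the Kafka AdminClient. The broker address, topic name and retention value are assumptions, it requires a reasonably recent Kafka client, and the same change can also be made with the kafka-configs command-line tool.

```scala
import java.util.Properties
import scala.collection.JavaConverters._
import org.apache.kafka.clients.admin.{AdminClient, AdminClientConfig, AlterConfigOp, ConfigEntry}
import org.apache.kafka.common.config.ConfigResource

val props = new Properties()
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")  // assumed broker address
val admin = AdminClient.create(props)

// Keep data on TOPIC_KEYWORDS for ~90 days instead of the 7-day default (values are assumptions).
val topic = new ConfigResource(ConfigResource.Type.TOPIC, "TOPIC_KEYWORDS")
val retention = new ConfigEntry("retention.ms", (90L * 24 * 60 * 60 * 1000).toString)
val ops: java.util.Collection[AlterConfigOp] =
  List(new AlterConfigOp(retention, AlterConfigOp.OpType.SET)).asJava

admin.incrementalAlterConfigs(Map(topic -> ops).asJava).all().get()
admin.close()
```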
Conclusions & Learning Points
- In the world of Big Data you need clarity about the questions you're trying to answer before building the data architecture, and then you need to follow through on the plan to make sure you're still working according to those questions. Otherwise, you might end up with something that can't do what you actually need. It sounds like a pretty general and obvious statement, but once we saw the visualizations we realized that we had not created the Kafka objects according to our initial per-country keyword distribution visualization – e.g. in the KEYWORDS_COUNTED table we aggregated the counts across all countries. Combine this with the mistake of forgetting to change the Kafka retention period from the default 7 days, and by the time we realized the mistake we had lost the historical data as well. Major learning point.
- Data should be filtered in advance of the ML/NLP process – we should have removed some keywords that don't really make sense, such as “de” and “da”. To get more relevant insights, several rounds of data cleaning and keyword extraction might be needed.
- After seeing the final visualizations, we should probably have filtered some of the obvious words a bit more. For example, Zoom was of course the highest-scoring keyword, since by June everybody was running online-only meetups, mainly on Zoom.
- Staying focused in yet another online meeting after a long day of remote work is hard. When we meet in person we use a lot of non-verbal cues to express ourselves and understand others. In online calls we need to devote an extra level of attention because these elements are missing, and with time this gets really tiring. But, hey, this is what working groups look like in COVID times.
- Working groups are a great way of learning :-). Just see below the feedback from one of our project members after we – finally – managed to finish the project.
Ovidiu -> Double Combo: Meet great people and learn about Big Data
This study group was a great way for me to learn about an end-to-end solution that uses Kafka to ingest streaming data, Spark to process it and Zeppelin to build visualizations on it.
It was great that we all had different backgrounds and we had interesting debates about how to approach problems and how to solve them. Besides working with and getting to know these nice people, I got to learn the fundamentals of Kafka, how to use Spark Streaming to consume Kafka events, how a Named Entity Recognition system works, and how to use Spark NLP to train a NER model and make predictions with it. I’ve especially enjoyed one of our last meetings, where we worked together for a couple of hours and managed to build some great visualizations for meetup keywords and RSVPs using Zeppelin and Grafana.
On the downside, the project took almost twice as long to complete as we originally planned – 18 weeks instead of 10. Towards the end this made it hard for me to work on it as much as I would have wanted to, because it overlapped with another project that I had already planned for the summer.
All in all, I would recommend this experience for anyone interested in learning Big Data technologies together with other passionate people, in a casual and friendly environment.