(For better viewing, you can visit: https://github.com/aitanagoca/Mastodon-Dynamo-App)
👥 Group: (P102, grup 05)
Aitana González (U186651)
Jordi Alfonso (U111792)
Arnau Royo (U172499)
1️⃣ Mvn: mvn clean
2️⃣ Mvn: mvn validate
3️⃣ Mvn: mvn compile
4️⃣ Mvn: mvn package
5️⃣ Spark: spark-submit --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=file:///log4j.properties --class edu.upf.MastodonStreamingExample target/lab3-mastodon-1.0-SNAPSHOT.jar src/main/resources/map.tsv
1️⃣ Mvn: mvn clean
2️⃣ Mvn: mvn validate
3️⃣ Mvn: mvn compile
4️⃣ Mvn: mvn package
5️⃣ Spark: spark-submit --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=file:///log4j.properties --class edu.upf.MastodonStateless target/lab3-mastodon-1.0-SNAPSHOT.jar src/main/resources/map.tsv
1️⃣ Mvn: mvn clean
2️⃣ Mvn: mvn validate
3️⃣ Mvn: mvn compile
4️⃣ Mvn: mvn package
5️⃣ Spark: spark-submit --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=file:///log4j.properties --class edu.upf.MastodonWindows target/lab3-mastodon-1.0-SNAPSHOT.jar src/main/resources/map.tsv
1️⃣ Mvn: mvn clean
2️⃣ Mvn: mvn validate
3️⃣ Mvn: mvn compile
4️⃣ Mvn: mvn package
5️⃣ Spark: spark-submit --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=file:///log4j.properties --class edu.upf.MastodonWithState target/lab3-mastodon-1.0-SNAPSHOT.jar en
1️⃣ Mvn: mvn clean
2️⃣ Mvn: mvn validate
3️⃣ Mvn: mvn compile
4️⃣ Mvn: mvn package
5️⃣ Spark: spark-submit --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=file:///log4j.properties --class edu.upf.MastodonHashtags target/lab3-mastodon-1.0-SNAPSHOT.jar en
1️⃣ Mvn: mvn clean
2️⃣ Mvn: mvn validate
3️⃣ Mvn: mvn compile
4️⃣ Mvn: mvn package
5️⃣ Spark: spark-submit --conf spark.driver.extraJavaOptions=-Dlog4j.configuration=file:///log4j.properties --class edu.upf.MastodonHashtagsReader target/lab3-mastodon-1.0-SNAPSHOT.jar en
From the output, we can conclude that the application is functioning correctly: it captures toots and their associated data in real time, including each toot's content, the user who posted it, and any hashtags used.
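For reference, the core of a streaming job like this could look roughly as follows. This is a sketch, not the project's actual code: it assumes toots arrive as text lines on a local socket, which stands in for the lab's Mastodon receiver, and simply prints each micro-batch.

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

// Minimal sketch: print the incoming toots of every micro-batch.
// The socket source is a hypothetical stand-in for the lab's Mastodon receiver.
public class StreamingPrintSketch {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setAppName("MastodonStreamingSketch").setMaster("local[2]");
    JavaStreamingContext jsc = new JavaStreamingContext(conf, Durations.seconds(10));

    // Assumed source: one toot per line.
    JavaDStream<String> toots = jsc.socketTextStream("localhost", 9999);

    toots.print(); // prints the first elements of every micro-batch
    jsc.start();
    jsc.awaitTermination();
  }
}
```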
From the output, we can conclude that the application is functioning correctly: it captures toots and their languages in real time, reporting each language together with the number of toots in that language. English has the highest count in both time intervals displayed.
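A minimal sketch of how such a stateless per-batch language count can be expressed in Spark Streaming (the socket source and input layout are assumptions, not the lab's actual receiver):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;
import scala.Tuple2;

// Sketch: count toots per language in each micro-batch, most frequent language first.
public class LanguageCountSketch {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setAppName("MastodonStatelessSketch").setMaster("local[2]");
    JavaStreamingContext jsc = new JavaStreamingContext(conf, Durations.seconds(20));

    // Assumed source: each line is the language code of one toot.
    JavaDStream<String> languages = jsc.socketTextStream("localhost", 9999);

    JavaPairDStream<String, Integer> counts = languages
        .mapToPair(lang -> new Tuple2<>(lang, 1))
        .reduceByKey(Integer::sum);                      // count per language, per micro-batch

    counts.mapToPair(Tuple2::swap)                       // (count, lang) so we can sort by key
          .transformToPair(rdd -> rdd.sortByKey(false))  // most frequent language first
          .print();

    jsc.start();
    jsc.awaitTermination();
  }
}
```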
From the output, we can conclude that the application is functioning correctly: it captures toots and their languages in real time, reporting each language together with the number of toots in that language. English has the highest count in both the micro-batch and the 60-second window.
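A sketch of how the per-batch count can be combined with a 60-second sliding window via reduceByKeyAndWindow (again with a stand-in socket source; the 20-second batch interval is an assumption):

```java
import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;
import scala.Tuple2;

// Sketch: language counts for the current micro-batch and for a sliding 60-second window.
public class WindowedLanguageCountSketch {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setAppName("MastodonWindowsSketch").setMaster("local[2]");
    JavaStreamingContext jsc = new JavaStreamingContext(conf, Durations.seconds(20));

    // Assumed source: each line is the language code of one toot.
    JavaDStream<String> languages = jsc.socketTextStream("localhost", 9999);

    JavaPairDStream<String, Integer> pairs = languages.mapToPair(lang -> new Tuple2<>(lang, 1));

    // Counts for the current micro-batch only.
    pairs.reduceByKey(Integer::sum).print();

    // Counts over the last 60 seconds, recomputed every batch interval.
    pairs.reduceByKeyAndWindow(Integer::sum, Durations.seconds(60), Durations.seconds(20))
         .print();

    jsc.start();
    jsc.awaitTermination();
  }
}
```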
From the output, we can conclude that the application is functioning correctly: it captures users and their number of toots in real time, reporting each user together with a running count of their toots, sorted in descending order so the most active user appears first.
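A sketch of how such a running per-user total can be kept with updateStateByKey (the socket source stands in for the real receiver; the checkpoint path is an arbitrary assumption, and checkpointing is required for stateful transformations):

```java
import java.util.List;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.Optional;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.*;
import scala.Tuple2;

// Sketch: keep a running toot count per user across micro-batches.
public class UserTootCountSketch {
  public static void main(String[] args) throws InterruptedException {
    SparkConf conf = new SparkConf().setAppName("MastodonWithStateSketch").setMaster("local[2]");
    JavaStreamingContext jsc = new JavaStreamingContext(conf, Durations.seconds(20));
    jsc.checkpoint("/tmp/mastodon-checkpoint"); // required for stateful transformations

    // Assumed source: each line is the username of one toot's author.
    JavaDStream<String> users = jsc.socketTextStream("localhost", 9999);

    JavaPairDStream<String, Integer> totals = users
        .mapToPair(u -> new Tuple2<>(u, 1))
        .updateStateByKey((List<Integer> batchValues, Optional<Integer> state) -> {
          int sum = state.orElse(0);           // previous total, 0 if the user is new
          for (Integer v : batchValues) sum += v;
          return Optional.of(sum);             // updated running total for this user
        });

    totals.mapToPair(Tuple2::swap)
          .transformToPair(rdd -> rdd.sortByKey(false)) // most active users first
          .print();

    jsc.start();
    jsc.awaitTermination();
  }
}
```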
Partial example after writing to DynamoDB table "LsdsTwitterHashtags":
From the output, we can conclude that the Spark streaming application is successfully extracting hashtags from toots and storing the data in DynamoDB. The data includes the frequency of each hashtag, the language of the toot, and the toot IDs where the hashtag appears.
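A sketch of how per-hashtag counters could be pushed to DynamoDB from a DStream with the AWS SDK document API. The region, the key name "Hashtag", and the counter attribute "Count" are assumptions, not necessarily the table's real schema, and the sketch only keeps the counter (the project also stores the language and the toot IDs, as described above):

```java
import java.util.Iterator;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Table;
import com.amazonaws.services.dynamodbv2.document.spec.UpdateItemSpec;
import com.amazonaws.services.dynamodbv2.document.utils.NameMap;
import com.amazonaws.services.dynamodbv2.document.utils.ValueMap;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import scala.Tuple2;

// Sketch: increment the stored counter of every hashtag seen in the current micro-batch.
public class HashtagDynamoWriterSketch {
  public static void writeCounts(JavaPairDStream<String, Integer> hashtagCounts) {
    hashtagCounts.foreachRDD(rdd ->
        rdd.foreachPartition((Iterator<Tuple2<String, Integer>> partition) -> {
          // One client per partition, created on the executor (clients are not serializable);
          // credentials come from the default provider chain.
          AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
              .withRegion("us-east-1").build();
          Table table = new DynamoDB(client).getTable("LsdsTwitterHashtags");
          while (partition.hasNext()) {
            Tuple2<String, Integer> entry = partition.next();
            table.updateItem(new UpdateItemSpec()
                .withPrimaryKey("Hashtag", entry._1())
                .withUpdateExpression("ADD #c :inc")           // atomic counter increment
                .withNameMap(new NameMap().with("#c", "Count")) // COUNT is a reserved word
                .withValueMap(new ValueMap().withNumber(":inc", entry._2())));
          }
        }));
  }
}
```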
Example of obtained top 10 after reading from DynamoDB table "LsdsTwitterHashtags":
From the output, we can conclude that the MastodonHashtagsReader class is successfully retrieving the top 10 hashtags from the DynamoDB table. The hashtags are sorted in descending order based on their frequency of occurrence.
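A sketch of how the top 10 could be obtained by scanning the table and sorting client-side; the attribute names "Hashtag" and "Count" are again assumptions about the schema:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDB;
import com.amazonaws.services.dynamodbv2.AmazonDynamoDBClientBuilder;
import com.amazonaws.services.dynamodbv2.document.DynamoDB;
import com.amazonaws.services.dynamodbv2.document.Item;
import com.amazonaws.services.dynamodbv2.document.Table;

// Sketch: full scan of the (small) hashtag table, then sort by counter and print the top 10.
public class HashtagTopTenSketch {
  public static void main(String[] args) {
    AmazonDynamoDB client = AmazonDynamoDBClientBuilder.standard()
        .withRegion("us-east-1").build();
    Table table = new DynamoDB(client).getTable("LsdsTwitterHashtags");

    List<Item> items = new ArrayList<>();
    for (Item item : table.scan()) {
      items.add(item);
    }

    items.sort(Comparator.comparingLong((Item i) -> i.getLong("Count")).reversed());
    items.stream().limit(10).forEach(i ->
        System.out.println(i.getString("Hashtag") + "\t" + i.getLong("Count")));
  }
}
```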