Artificial Intelligence and Big Data
The motivation behind the project is to work as a team and bring together everything we've seen, in other words:
Being able to design, research, develop and deploy a Data Science idea, designing a Big Data architecture from which to train a model with a conclusion in mind, all while being ethical and complying with EU law.
For reference about the changes, please, check out our CHANGELOG.
To be graded
- Title
- Description
- Objectives
- Ethics
- Design
- Product
- Methodology
- Tech Stack
- Usage
- Team
- License
- Legal Notice
- Credits
- Gratitude
"Hype" is all you need
This is research into what defines the success of a film, and whether that success can be predicted (proportionally) from the hype (expectation) generated around it; the approach should be expandable to series, anime, video games, or any other type of multimedia content.
As possible definitions of a film's success, we intend to predict:
- The profit generated by a film, based on its initial investment and how well it will be received
- The acceptance/acclamation of a film with respect to the initial "hype"
- The rating on IMDB one week after release; and where we say IMDB, read any other platform (Rotten Tomatoes, Metacritic)
- Its success (as previously defined) one week after its release
For this, various data sources will be used, such as Twitter, Reddit, YouTube, IMDB, and any others we discover as the investigation progresses. One of the main, central components of the application is sentiment analysis, which becomes the main focus of the prediction.
For the official documentation visit the /docs folder
In no specific order.
- Work as a team of Data Scientists with (almost) no experience in the data field.
- Use knowledge from every subject seen in the degree.
- Develop all the required components and integrate them.
- Design a Data Infrastructure.
- Research the relationship between a movie's hype, its success, and its total box office.
- Manage and develop an E2E (end-to-end) Big Data project, from idea to analysis/visualizations.
- Apply AI Engineering techniques to deliver a product that showcases our conclusion.
- Develop the (A.I. and machine learning) models required for the desired outcome.
- Use Cloud Computing Services where needed and learn to work with them.
- Fulfill a Data Science project's requirements with a Data Team.
- Try to understand and predict the box office of (mainly) blockbuster movies, whether standalone or from a franchise.
Our idea is to have an unbiased model that is not swayed by people's opinions; rather, one that can tell the difference between the general sentiment and how well that sentiment reflects the movie's success.
Regarding ethics, our goal wouldn't be to force-feed certain movies, nor to dictate what people should do/watch; it'd be to offer just another tool to help decide what you may want to see.
- Node-RED sniffs the data and sends it to
- Kafka, which in turn distributes it to
- Spark, where it is transformed and stored in
- MongoDB, to be later retrieved with
- Google Colab/Python,
- to be trained with Spark, saving the predictions in
- MongoDB, so they can be accessed from
- PowerBI/Tableau and displayed in
- an Azure Web Service with a simple front end and an even simpler interaction
All the data will carry an origin tag/field to better identify its provenance; the sketch below illustrates the Kafka → Spark → MongoDB leg of this flow.
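A minimal sketch of that middle leg, assuming a local Kafka broker, a hypothetical tweets topic, and the MongoDB Spark connector (10.x); every name here is an illustrative assumption, not the final configuration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("hype-pipeline").getOrCreate()

# Read the raw events that Node-RED pushed into Kafka
# (requires the spark-sql-kafka package on the classpath)
raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "tweets")  # hypothetical topic name
    .load()
)

# Every record keeps its origin tag so the source stays identifiable
events = raw.select(
    F.col("value").cast("string").alias("payload"),
    F.lit("twitter").alias("origin"),
    F.current_timestamp().alias("ingested_at"),
)

# Persist into MongoDB for later retrieval from Google Colab/Python
query = (
    events.writeStream.format("mongodb")
    .option("spark.mongodb.connection.uri", "mongodb://localhost:27017")
    .option("spark.mongodb.database", "hype")
    .option("spark.mongodb.collection", "raw_events")
    .option("checkpointLocation", "/tmp/checkpoints/raw_events")
    .start()
)
query.awaitTermination()
```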
Instead of following the classic ETL paradigm (first extract the data, then transform it BEFORE loading it), Data Lakes strive for ELT: extract the data, load it FIRST, then transform it when you need to use it.
And we'll be using it to store all the (raw) data we collect over the span of the project. We'll have Diogenes syndrome towards the data: we'd rather have to delete data later than not have enough.
From this point forward we should have quality data, data that is "clean". Following the aforementioned ELT paradigm, a Data Warehouse is where the information will be loaded ONCE transformed.
It will serve as the main storage for our models; all the data that reaches this point should and must be clean, standardized, normalized and regularized. It should be as ready as possible for the model.
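A minimal sketch of this ELT split, with a local MongoDB standing in for both the lake and the warehouse; collection and field names are illustrative assumptions.

```python
from datetime import datetime, timezone
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
lake = client["hype"]["raw_events"]         # Data Lake: everything, as-is
warehouse = client["hype"]["clean_events"]  # Data Warehouse: transformed

# Extract + Load: dump the payload untouched, only tagging its origin
def load_raw(payload: dict, origin: str) -> None:
    lake.insert_one({
        "origin": origin,
        "payload": payload,
        "ingested_at": datetime.now(timezone.utc),
    })

# Transform (later, on demand): clean only what a model actually needs
def transform_for_warehouse(origin: str) -> None:
    for doc in lake.find({"origin": origin}):
        text = (doc["payload"].get("text") or "").strip().lower()
        if text:  # discard empty records, keep the rest standardized
            warehouse.insert_one({
                "origin": origin,
                "text": text,
                "ingested_at": doc["ingested_at"],
            })
```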
- IMDB
- YouTube
- Google Trends
We're not going to sell anything, but our product idea is to have a model that retrains with different sources of information, displaying the outcome on the web along with some storytelling around the conclusion.
This is the initial estimate; it should be updated with the real roadmap at the end.
Initial product roadmap
The project is not yet finished.
We've split the product into different phases: the traditional product phases, expanded with the Data Science development ones:
- Product Identification
- Product Planification
- Product Development
- Product Control
- Product Closure
- Infrastructure
- Data Extraction
- Data Normalization
- Data Storage/Loading
- Data Cleansing
- Data Science/Modeling
- Data Visualization
- Deploy
- Documentation Draft
- Validation
SCRUM
- Kanban Board
- Planning Poker
Pepe
Pepe
Our teachers
- Trello
- Python
An easy-to-learn language, chosen mainly because it's what the team is most comfortable with for Big Data and A.I. technologies and their usage. Alternatives were Scala, C++ and Java.
- Node-RED
A lightweight graph/node-based npm package for flow development, used to connect services such as APIs and IoT devices.
- Kafka
A data broker, one of the most used ones, if not the most used. It is meant to be used with Java or Scala, but it can be interacted with through plugins, add-ons, shell scripts and Python clients (see the sketch after this list).
- Spark
A highly efficient cluster computation and parallelization engine. Its API supports Python (PySpark), Java, Scala, R and SQL, which makes it a perfect fit for our team. It is in high demand nowadays.
- MongoDB
An open-source NoSQL document-based database with a great community and multiple implementations and integrations.
- AWS or Azure
Both are great cloud providers that offer similar services, each with their own pros and cons, and both are top-notch in the world of cloud computing, data science and DaaS (Data as a Service).
- Terraform (and maybe AWS CloudFormation)
IaC (Infrastructure as Code) is the way to go. CloudFormation forces/restricts us to one provider, but however we develop and deploy our cloud infrastructure, if ever, it should be cloud-agnostic if possible, easily replicable, and highly reliable: it should always produce the same output, the same outcome, with as little room for human mistake as possible.
- Docker
An open-source software container service that adds an extra layer of abstraction for packaging software solutions.
- Compose
A cloud-agnostic standard for container orchestration, maintained by Docker and supported by Docker Swarm, AWS ECS, Azure Container Instances, and many more.
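As a taste of how the stack can be driven from Python, here is a minimal sketch of publishing one event to Kafka with the kafka-python client (one option among several); the broker address and topic are illustrative assumptions.

```python
import json
from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# Publish one event to the (hypothetical) topic Spark subscribes to
producer.send("tweets", {"origin": "twitter", "text": "Can't wait for this movie!"})
producer.flush()
```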
- Docker
  - Engine version 20.10
  - Compose version 1.29.2
- Python
  - >= 3.6.x
- Node
  - >= v15.14.0
All image versions will be provided in each Dockerfile pinned to an exact version; we avoid latest
for security reasons, and upgrades will be manual.
Execute the following commands in the folder you want to store the project in:
git clone https://github.com/jofaval/tfm-iabd.git
cd tfm-iabd
And now configure the project's branches with Git flow
For Windows
cd tools/windows/git/
git-flow.bat
For Linux
cd tools/linux/git/
./git-flow.sh
Execute the tools/windows/infra/start.bat
or the tools/linux/infra/start.sh
file
or execute the following commands on the shell
cd app/infra
docker-compose up -d
Execute the tools/windows/infra/stop.bat
or the tools/linux/infra/stop.sh
file
or execute the following commands on the shell
cd app/infra
docker-compose down
Handled by the Github Actions workflow
| Name | Role |
|---|---|
| Diego del Caño | Data Scientist / Data Analyst |
| Juan Crespin Valero | Data Analyst / SysAdmin |
| Nerea Gluskova | Data Engineer / SysAdmin |
| Pepe Fabra Valverde | Data Architect / Data Engineer / Data Scientist |
Table generated with: https://www.tablesgenerator.com/markdown_tables
I (Pepe) will be supervising each task, but we're all out here to help each other.
Defined as: preparation of Docker images, ready and interconnected to support the architecture.
Docker (Docker Compose), Linux and, if cloud computing were to be required, AWS, Azure or Google Cloud.
The information regarding the infrastructure is in the Infrastructure section.
- Nerea
- Juan
- Pepe (only if cloud computing is required)
Defined as: retrieving all the necessary data for the project (JUST retrieving data). A Python sketch of this kind of retrieval follows the source list below.
Node-RED
- Nerea
- Pepe
- Everyone to search for Data Sources
- Twitter Developer API
- IMDB API
- YouTube API
- Reddit API
- Google Trends
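Since Node-RED flows don't translate directly to text, here is a minimal Python sketch of the same kind of retrieval against Reddit's public JSON endpoint; the subreddit, query and chosen fields are illustrative assumptions, and the other sources (Twitter, YouTube, IMDB) require their own API keys.

```python
import requests

resp = requests.get(
    "https://www.reddit.com/r/movies/search.json",
    params={"q": "Dune", "sort": "new", "restrict_sr": 1},
    headers={"User-Agent": "tfm-iabd/0.1"},  # Reddit rejects the default UA
    timeout=10,
)
resp.raise_for_status()

# Keep only the raw fields we care about; everything else stays in the lake
posts = [
    {
        "origin": "reddit",
        "title": child["data"]["title"],
        "score": child["data"]["score"],
        "created_utc": child["data"]["created_utc"],
    }
    for child in resp.json()["data"]["children"]
]
```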
Defined as: after the data has been retrieved, create a middle ground with the common data that may be needed so that all sources end up with the same data model; in other words, standardizing the sources (see the sketch after the list below).
Node-RED
- Diego
- Nerea
- Juan
- Pepe
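A minimal sketch of what the shared data model could look like; every field mapping here is an illustrative assumption about each source's payload, not a final schema.

```python
def normalize(origin: str, payload: dict) -> dict:
    """Map a source-specific record onto the common schema."""
    if origin == "twitter":
        return {"origin": origin,
                "text": payload["text"],
                "created_at": payload["created_at"]}
    if origin == "reddit":
        return {"origin": origin,
                "text": payload["title"],
                "created_at": payload["created_utc"]}
    if origin == "youtube":
        snippet = payload["snippet"]
        return {"origin": origin,
                "text": snippet["title"],
                "created_at": snippet["publishedAt"]}
    raise ValueError(f"Unknown origin: {origin}")
```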
Defined as: storing the normalized data into the NoSQL DB (most likely MongoDB); see the sketch below.
Node-RED
- Nerea
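A minimal sketch of the loading step with pymongo, assuming a local MongoDB and the illustrative normalized schema above; the unique index makes re-runs idempotent.

```python
from pymongo import ASCENDING, MongoClient

collection = MongoClient("mongodb://localhost:27017")["hype"]["normalized"]

# One record per (origin, text, created_at): re-runs won't duplicate data
collection.create_index(
    [("origin", ASCENDING), ("text", ASCENDING), ("created_at", ASCENDING)],
    unique=True,
)

def store(doc: dict) -> None:
    # Upsert so loading is safe to repeat
    collection.update_one(doc, {"$set": doc}, upsert=True)
```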
Defined as: at this point the data has been normalized, but not cleaned; after this phase it should be ready for the model to train with (see the sketch after the list below).
Python (Google Colab?)
- Diego
- Juan
- Pepe
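A minimal cleansing sketch with pandas, assuming the illustrative schema above; the exact rules will depend on what the models need.

```python
import pandas as pd

def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    df = df.drop_duplicates(subset=["origin", "text", "created_at"])
    df = df.dropna(subset=["text"])
    df["text"] = (
        df["text"]
        .str.strip()
        .str.replace(r"https?://\S+", "", regex=True)  # strip URLs
        .str.replace(r"\s+", " ", regex=True)          # collapse whitespace
    )
    return df[df["text"].str.len() > 0]  # drop now-empty rows
```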
Defined as: developing and implementing the required model(s) for the desired performance and outcome.
Artificial Intelligence and/or Machine Learning.
Python (Google Colab?)
- Diego
- Pepe
Defined as: designing and developing the story (storytelling) and all the required/desired visualizations for whatever outcome(s) we want.
PowerBI or Tableau, up to taste.
- Juan
- Nerea
- Diego
Defined as: preparing the connections and the proper usage of the model via endpoints and utilities.
Cloud Platform (if used), Git (Github)
- Diego
- Pepe
The license used (MIT License) can be seen here or you can read it locally by downloading the LICENSE file
All the data is used and stored in compliance with the European Union's legislation, more precisely with Spain's laws, which comply with the E.U.'s GDPR (General Data Protection Regulation), and following the standards described in the Charter of European Digital Rights (EDRi, EDR initiative) surrounding the usage of A.I. for sentiment analysis and, overall, the possible bias it may pass on to the user. All of this in order to be ethical and to prepare the model for the coming years.
For more information about the ethics of our model, please refer to the Ethics section.
We plan to use the extracted data to better analyze the sentiment of users all around the world about the hype generated by a movie, whether it's the announcement, a trailer, or some celebrity talking about it.
By analyzing the general feeling, whether positive, negative, or neutral, we can determine, one user at a time, whether they had a good or bad experience, whether they were hyped or not, so that we can later steer our model with the idea people have/had of the movie.
We'll collect the raw text data (if it's a thread, all the more information to collect) so we can tokenize, lemmatize, preprocess and prepare the text. Our methodology is to preprocess and clean the data, tokenize it into a word embedding, and use Transformers, maybe Siamese Neural Networks, but surely an mT5 or BERT model from HuggingFace, to derive a logical consequence with NLI so that we can "classify the data" (see the sketch below).
We may even use reviews or the general feeling; in the case of adaptations we'd have even more information.
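A minimal sketch of the NLI-based classification idea, using a HuggingFace zero-shot pipeline; the checkpoint and candidate labels are illustrative choices, not our final design.

```python
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",  # an NLI-finetuned checkpoint
)

result = classifier(
    "The new trailer looks incredible, I already bought tickets.",
    candidate_labels=["hyped", "indifferent", "disappointed"],
)
print(result["labels"][0], result["scores"][0])  # most likely label
```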
To display the conclusions drawn from the insights in the extracted data, we'll use personal websites, GitHub of course, and a Medium article. We'd also like to research and write a paper so we can more clearly present, document and explain the results obtained and their conclusions.
As for the tools: Tableau, though maybe we could get PowerBI through a student license; it's unclear at the moment.
- Ismael, for the idea
TODO