Before running the notebooks, we first need to download all the data we will be using.
As always, the first step is to clone the repository:
>> git clone https://github.com/JackShen1/sentimento.git
The training data includes a large dataset of Amazon reviews, which you can download here.
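Once the dataset is downloaded, it can be loaded with Pandas for a quick look. Below is a minimal sketch; the file name amazon_reviews.csv is an assumption, so adjust it to match the file you actually downloaded.

import pandas as pd

# Hypothetical file name -- replace with the path of the downloaded Amazon reviews file
reviews = pd.read_csv("amazon_reviews.csv")

print(reviews.shape)   # number of rows and columns
print(reviews.head())  # first few reviews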
In order to run the IPython notebooks, you'll need Python (v3.6+) and the following libraries:
- Keras (v2.4+)
- Gensim (v3.8+)
- Pandas (v1.2+)
- NumPy (v1.19.5+)
- NLTK (v3.5+)
- python-decouple (v3.4+)
- scikit-learn (v0.24.1)
- SciPy (v0.19.1+)
- Matplotlib (v2.1.1+)
- Jupyter
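Before installing anything, you can quickly check that your interpreter meets the Python requirement. A minimal sketch (only the Python 3.6+ requirement is checked here):

import sys

# Abort early if the interpreter is older than the required Python 3.6
assert sys.version_info >= (3, 6), "Python 3.6+ required, found " + sys.version
print(sys.version)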
The commands for installing these libraries will follow. First, let's create a virtual environment.
The easiest way to install Keras, Gensim, NumPy, Jupyter, Matplotlib and our other libraries is to start with the Anaconda Python distribution.
- Select your OS and follow the installation instructions for Anaconda Python. We recommend using Python 3.6+ (64-bit).
- Install the Python development environment on your system:
>> pip install -U pip virtualenv
- If you haven't done so already, download and unzip this entire repository from GitHub:
>> git clone https://github.com/JackShen1/sentimento.git
- Use cd to navigate into the top directory of the repo on your machine.
- Open Anaconda Prompt, install JupyterLab, and enter the following commands:
>> conda install -c conda-forge jupyterlab # install JupyterLab
>> conda create -n sentimento pip python=3.7 # choose the Python version
>> conda activate sentimento # activate the virtual environment
Alternatively, you can install Jupyter with pip:
>> pip install jupyterlab
- Now we can install all the libraries we need:
>> pip install Keras gensim pandas numpy nltk python-decouple scikit-learn scipy matplotlib
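To confirm that the installation succeeded, you can import each library and print its version. A minimal sketch (note that importing Keras 2.4+ also requires its TensorFlow backend to be installed):

# python-decouple is imported as `decouple`; it exposes no version attribute, so it is only imported
import keras, gensim, pandas, numpy, nltk, decouple, sklearn, scipy, matplotlib

# Print the installed version of each package to compare against the requirements above
for pkg in (keras, gensim, pandas, numpy, nltk, sklearn, scipy, matplotlib):
    print(pkg.__name__, pkg.__version__)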
- Launch Jupyter by entering:
>> jupyter notebook
Once you have everything installed, the next time you want to activate the environment and get back to work, do the following:
- Open Anaconda Prompt and enter the project folder with the cd command. Now enter the following commands:
>> conda activate sentimento
>> jupyter notebook
This project is split into 3 parts that describe the whole process of data preparation and model training, and carry out a comparative analysis of classifiers and of various models. Each stage is accompanied by data visualization.