Bhav website - a searchable interface for the bhavcopy data generated daily.
Every day the BSE India website publishes a bhavcopy zip file. Bhav downloads and parses this data automatically and provides an interface where a user can search different equities by name. A user can also export the search results for future reference.
Implemented with Django/Vue/Redis/Celery.
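To make the data flow concrete, here is a minimal sketch of parsing bhavcopy rows into a searchable structure. The column names (SC_CODE, SC_NAME, OPEN, HIGH, LOW, CLOSE) are assumptions about the BSE bhavcopy CSV format and the function name is hypothetical, not taken from this repo.

```python
# Hypothetical sketch of parsing bhavcopy CSV rows; column names are
# assumptions about the BSE bhavcopy format, not taken from this repo.
import csv
import io

SAMPLE = """SC_CODE,SC_NAME,OPEN,HIGH,LOW,CLOSE
500325,RELIANCE,2400.0,2450.5,2390.0,2441.2
"""

def parse_bhavcopy(text):
    """Return a dict keyed by equity name for quick search lookups."""
    rows = {}
    for row in csv.DictReader(io.StringIO(text)):
        rows[row["SC_NAME"].strip()] = {
            "code": row["SC_CODE"],
            "open": float(row["OPEN"]),
            "high": float(row["HIGH"]),
            "low": float(row["LOW"]),
            "close": float(row["CLOSE"]),
        }
    return rows

equities = parse_bhavcopy(SAMPLE)
```

A structure like this keyed by equity name makes the name-based search described above a simple lookup or substring scan.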
-
Install the system dependencies required for a local installation, such as Python 3 and the Redis server.
$ sudo apt install redis-server python3-pip python3-dev
$ sudo pip3 install virtualenv
-
Clone the Github repo and move to the directory.
$ git clone https://github.com/krishnanunnir/bhavcopy.git && cd bhavcopy
-
Create a virtual environment to isolate the project's dependencies.
$ python3 -m venv .envs/bhavcopyenv
-
Activate the virtual environment and install the dependencies listed in the requirements file.
$ source .envs/bhavcopyenv/bin/activate
$ pip install -r requirements.txt
-
Create a file with your environment settings in the nested bhavcopy folder, based on the .ex-env file.
$ cd bhavcopy && cp .ex-env .env
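The real variable names come from the .ex-env template in the repo. Purely as an illustration, a development .env might look like the following; the names below are guesses based on the 'host', 'debug' and 'secret' keys mentioned in the production note further down.

```shell
# Illustrative only -- copy .ex-env and fill in the real keys it defines.
DEBUG=True
SECRET=replace-with-a-long-random-string
HOST=localhost
```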
-
Run the redis-server.
$ redis-server
-
Start the Celery beat service for scheduling the daily data retrieval.
$ celery -A bhavcopy beat
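The schedule itself lives in the project's Celery configuration. A minimal sketch of what such a beat schedule could look like follows; the entry name and task path are hypothetical, and a real setup would assign this dict to the Celery app's beat_schedule setting.

```python
# Hypothetical Celery beat schedule (entry name and task path are
# illustrative, not taken from the repo).
from datetime import timedelta

beat_schedule = {
    "fetch-bhavcopy-daily": {
        "task": "bhavcopy.tasks.fetch_bhavcopy",  # hypothetical task path
        "schedule": timedelta(days=1),            # run once every 24 hours
    },
}
```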
-
Run the database migrations for the apps bundled with the project.
$ python manage.py migrate
-
Start the Django development server.
$ python manage.py runserver
For production, use a fully-fledged WSGI server such as Gunicorn and daemonize Celery beat with a tool like Supervisor. Also edit the .env file to change 'host', 'debug' and 'secret' for production.
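As an illustration, a Gunicorn invocation for a production deployment might look like this; the bhavcopy.wsgi module path is an assumption based on the project name and may differ in the actual repo.

```shell
# Hypothetical production command; "bhavcopy.wsgi" is assumed from the
# project layout and may differ in the actual repo.
gunicorn bhavcopy.wsgi:application --bind 0.0.0.0:8000 --workers 3
```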
- Schedule data retrieval from the BSE India website and store it in Redis.
- Create APIs to retrieve the entire dataset and search results.
- Create a front end based on Vue to display the data from the API.
- Provide an input field for the user to search for an equity by name.
- Export the dataset as CSV.
- Add pagination so only a chunk of data is shown on each page.
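The pagination feature above amounts to serving one fixed-size slice of the dataset per page. A minimal sketch of the idea, with hypothetical names:

```python
# Illustrative pagination sketch: return the slice of items for a
# 1-indexed page number (function and parameter names are hypothetical).
def paginate(items, page, per_page=10):
    """Return the chunk of items belonging to the given page."""
    start = (page - 1) * per_page
    return items[start:start + per_page]

rows = list(range(25))  # stand-in for the parsed bhavcopy rows
```

In the real app the equivalent slicing would typically be done by Django's Paginator or at the API layer, so the front end only ever receives one page of data.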