📔 In Arabic 📔
Warning: This project is for study purposes only. Please don't re-share these articles under your own name; all of the articles belong to Mawdoo3.
- First, create an object from the class mawdoo3.
- Second, call the function save_all_articles_title_into_file.
- Finally, take the name of the file generated in the previous step and pass it to save_all_articles_into_file (see the sketch below).
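A minimal usage sketch, assuming the class can be imported from a module named mawdoo3 and that save_all_articles_title_into_file returns the name of the file it writes; check the repository's source for the exact import path and return value.

```python
# Minimal usage sketch; the import path and the return value of
# save_all_articles_title_into_file are assumptions, not guaranteed by the repo.
from mawdoo3 import mawdoo3  # hypothetical import path

scrapper = mawdoo3()  # 1. create an object from class mawdoo3

# 2. save all article titles into a file (assumed to return the file name)
titles_file = scrapper.save_all_articles_title_into_file()

# 3. pass the generated file name to save_all_articles_into_file
scrapper.save_all_articles_into_file(titles_file)
```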
- On GitHub.com, navigate to the main page of the repository.
- Above the list of files, click Code.
- Copy the URL for the repository.
- Open Terminal.
- Change the current working directory to the location where you want the cloned directory.
- Type git clone, and then paste the URL you copied earlier.
git clone https://github.com/Faris-abukhader/mawdoo3-scrapper
Press Enter to create your local clone.
git clone https://github.com/Faris-abukhader/mawdoo3-scrapper
> Cloning into `mawdoo3-scrapper`...
> remote: Counting objects: 10, done.
> remote: Compressing objects: 100% (8/8), done.
> remote: Total 10 (delta 1), reused 10 (delta 1)
> Unpacking objects: 100% (10/10), done.
To set up this project you need to install Python on your machine, or if you already have it, make sure you have the latest version.
python3 -V
python --version
For Windows
Download the Windows installer from the official Python website and make sure you download the latest version of Python.
For Mac
- You can install Python using the Homebrew CLI:
brew install python
- You can also download the macOS version of Python from the official website.
Go to the project directory where requirements.txt exists and type in the terminal:
pip install -r requirements.txt
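For reference, the file is expected to list the libraries described in the table below. The exact package names here are an assumption, so defer to the requirements.txt shipped with the repository (note that asyncio is part of the Python 3 standard library and usually does not need to be installed separately).

```text
# Illustrative requirements.txt contents (assumed, not copied from the repository)
beautifulsoup4
selenium
aiohttp
```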
Name | Description |
---|---|
BeautifulSoup | Beautiful Soup is a Python library for pulling data out of HTML and XML files. |
selenium | The selenium package is used to automate web browser interaction from Python. |
aiohttp | Asynchronous HTTP Client/Server for asyncio and Python. |
asyncio | asyncio is a library to write concurrent code using the async/await syntax. |
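The snippet below is a rough sketch of how these dependencies fit together, not the project's actual scraping code; the URL is a placeholder and the parsing is deliberately trivial. aiohttp and asyncio download a page without blocking, and BeautifulSoup extracts data from the returned HTML. selenium, which drives a real browser for pages that need JavaScript, is omitted here.

```python
import asyncio

import aiohttp
from bs4 import BeautifulSoup


async def fetch_title(url: str) -> str:
    # Download the page asynchronously with aiohttp ...
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            html = await response.text()
    # ... then parse the HTML with BeautifulSoup and pull out the <title> tag.
    soup = BeautifulSoup(html, "html.parser")
    return soup.title.string if soup.title else ""


if __name__ == "__main__":
    # Placeholder URL for demonstration only.
    print(asyncio.run(fetch_title("https://mawdoo3.com")))
```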