


Reason Rust Scraper

This repository shows how to scrape and crawl websites using Rust and ReasonML. I originally built it to be a tool for the company I work at, but after a sudden change of ideas and concepts it ended up in its current form. It is also my first project while learning and using ReasonML.

If you have never heard of scraping, read this web scraping article for more information.
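For a rough idea of what the scraping step looks like on the Rust side, here is a minimal, self-contained sketch that fetches a page and extracts its links, assuming the reqwest (with the blocking feature) and scraper crates. It is only an illustration, not this project's actual implementation.

```rust
// Illustrative scraping sketch only; not taken from this project's code.
// Assumes the `reqwest` crate (blocking feature) and the `scraper` crate.
use scraper::{Html, Selector};

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Fetch a page and parse its HTML.
    let body = reqwest::blocking::get("https://example.com")?.text()?;
    let document = Html::parse_document(&body);

    // Extract the href attribute of every link on the page.
    let selector = Selector::parse("a").unwrap();
    for element in document.select(&selector) {
        if let Some(href) = element.value().attr("href") {
            println!("{}", href);
        }
    }
    Ok(())
}
```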


Requirements

Server

  1. Rust
  2. Rustup
  3. Cargo
  4. Apache & MySQL

Client

  1. Node
  2. BuckleScript

Getting started

  1. Clone this repo
  2. Start the Apache & MySQL server
  3. Create a new database and a .env file, then set the configuration, like this .env (see the sketch after this list)
  4. Run npm install to install all required dependencies
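As a rough sketch only, a .env for a Rust + MySQL setup typically holds the database connection string and server settings. The keys below (DATABASE_URL, PORT) are hypothetical placeholders; use whatever the repository's example .env actually defines.

```
# Hypothetical keys for illustration; the repository's example .env is the source of truth.
DATABASE_URL=mysql://username:password@localhost:3306/your_database
PORT=8000
```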

Running

To run this project locally:

  1. On macOS with iTerm, you can run everything with one command:
 $ sudo chmod +x ./run.sh && ./run.sh
  2. Otherwise, run each part manually. For the client side:
  $ npm start
  3. Open a new terminal tab/window, and type this command:
 $ npm server
  4. Open another terminal tab/window to run the server side:
 $ cargo run
  5. Open http://localhost:8000/ in your browser

Testing

$ sudo chmod +x ./test.sh && ./test.sh

Build

Client:

$ npm run build

Server:

$ cargo build

Contributing

Pull requests are welcome. For major changes, please:

  1. Open issues and PRs for bugs, missing documentation, typos, unreadable code, and so on
  2. Make sure to update tests as appropriate
