DataQA is a tool for labelling and exploring unstructured documents. It uses rule-based weak supervision to significantly reduce the number of labels needed compared to other tools. Here are a few things you can do with it:
- Search your documents using Elasticsearch's powerful text search engine,
- Classify your documents,
- Extract entities from your own data or from Wikipedia,
- Link mentions of entities to your own ontology.
... and it's all available with a simple pip command!
- Python 3.6, 3.7, 3.8 or 3.9
- (Recommended) start a new Python virtual environment
- Update your pip: `pip install -U pip`
- Tested on backend: macOS, Ubuntu. Tested on browsers: Chrome, Firefox.

To install, run: `pip install dataqa`
- The first time it is run: `docker run -d -p 5000:5000 dataqa/dataqa`
- In order to keep the data between runs, use `docker start [container-id]` and `docker stop [container-id]`
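For example, giving the container a name makes it easier to stop and restart later while keeping your data. This is just a sketch; the container name below is illustrative:

```
docker run -d -p 5000:5000 --name my-dataqa dataqa/dataqa
docker stop my-dataqa    # data is preserved inside the stopped container
docker start my-dataqa   # resume with the same data
```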
In the terminal, type `dataqa run`. The first start can take a few minutes, as it needs to set everything up. This runs a server locally and opens a browser window at port 5000. If the application does not open the browser automatically, go to `localhost:5000` in your browser. You need to keep the terminal open. To quit the application, simply press Ctrl-C in the terminal. To resume it, type `dataqa run` again. Running the application creates a folder at `$HOME/.dataqa_data`.
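A typical session, assuming the defaults above, looks like this (the `$` prompt and comments are just illustrative):

```
$ dataqa run     # the first start can take a few minutes
# ... label your documents at http://localhost:5000 ...
# press Ctrl-C to stop the server; projects are kept in $HOME/.dataqa_data
$ dataqa run     # resumes with the same projects
```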
The input file needs to be a CSV file in UTF-8 encoding, up to 30MB in size, with a column named "text" that contains the main text. All other columns are ignored.
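If your documents are not in this format yet, a minimal sketch using pandas could look like the following (the filename and documents are placeholders):

```python
import pandas as pd

# DataQA reads the "text" column; any other columns are ignored.
docs = ["First document ...", "Second document ..."]
pd.DataFrame({"text": docs}).to_csv("my_docs.csv", index=False, encoding="utf-8")
```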
This step runs some analysis on your text and might take up to 5 minutes.
In the terminal:
- `dataqa uninstall`: deletes your local application data, stored in the `.dataqa_data` folder in your home directory. You will be prompted before anything is deleted.
- `pip uninstall dataqa`: removes the package itself.
Nope. No data will ever leave your local machine.
If the project data does not load, try going back to the homepage at http://localhost:5000 and navigating to the project from there.
Try running `dataqa test` to get more information about the error. Bug reports are very welcome!
To test the application, you can upload a file that contains a column named "__LABEL__". The ground-truth labels will then be displayed during labelling, and the real performance will be shown in brackets in the performance table.
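As an illustration, a tiny test file could be generated like this (the texts and labels are made up):

```python
import pandas as pd

# "__LABEL__" holds the ground truth used to show the real performance.
pd.DataFrame({
    "text": ["The match went to extra time", "Shares dropped 3% on Monday"],
    "__LABEL__": ["sports", "finance"],
}).to_csv("labelled_test.csv", index=False, encoding="utf-8")
```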
Documentation at: https://dataqa.ai/docs/.
- To get started with a multi-class classification problem, go here.
- To get started with a named entity recognition problem, go here.
- To get started with a named entity linking problem, go here.
Weak supervision is a set of techniques for producing noisy labels for large quantities of data. It has gained popularity in recent years due to the large amounts of labelled data typically needed for ML systems. Annotators can encode any prior domain knowledge they have in the form of rules. Even though these rules can be noisy, the algorithm learns how to weigh them accordingly and uses them as signals to extract patterns from the data.
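To make the idea concrete, here is a toy majority-vote sketch with made-up keyword rules. DataQA's actual algorithm weighs rules by their estimated quality rather than counting them equally; this is only an illustration of the general technique:

```python
from collections import Counter

ABSTAIN = None  # a rule may abstain when it has no opinion

# Each rule encodes a small piece of domain knowledge and may be noisy.
def rule_sports(text):
    return "sports" if "match" in text.lower() else ABSTAIN

def rule_finance(text):
    return "finance" if "shares" in text.lower() else ABSTAIN

RULES = [rule_sports, rule_finance]

def weak_label(text):
    # Combine the rules' votes by simple majority.
    votes = [rule(text) for rule in RULES]
    votes = [v for v in votes if v is not ABSTAIN]
    if not votes:
        return ABSTAIN  # no rule fired: leave the document unlabelled
    return Counter(votes).most_common(1)[0][0]

print(weak_label("The match went to penalties"))  # -> "sports"
```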
For any feedback, please contact us at contact@dataqa.ai. Also follow us for more updates and content around ML and labelling.