
How to make a data source


Data sources

4CAT is a modular tool. Its modules come in two varieties: data sources and processors. This article covers the former.

Data sources are a collection of workers, processors and interface elements that extend 4CAT to allow scraping, processing and/or retrieving data for a given platform (such as Instagram, Reddit or Telegram). 4CAT has APIs that can do most of the scaffolding around this for you, so a data source can be quite lightweight and mostly focus on retrieving the actual data, while 4CAT's back-end takes care of the scheduling, determining where the output should go, et cetera.

Data sources are defined as an arbitrarily-named folder in the datasources folder in the 4CAT root. It is recommended to use the data source ID (see below) as the folder name. However, since Python files included in the folder will be imported as modules by 4CAT, folder names must also be valid Python module names. Concretely this means (among other things) that data source folder names cannot start with a number (hence the fourchan data source rather than 4chan).

WARNING: Data sources can, in multiple ways, define arbitrary code that will be run by either the 4CAT server or client-side browsers. Be careful when running a data source supplied by someone else.

A data source will at least contain the following:

  • An __init__.py containing data source metadata and initialisation code
  • A search worker, which can collect data according to provided parameters and format it as a CSV or NDJSON file that 4CAT can work with.

It may contain additional components:

  • Any processors that are specific to datasets created by this data source
  • Views for the web app that allow more advanced behaviour of the web tool interface
  • Database or Sphinx index definitions

The instructions below describe how to format and create these components (work in progress!)

Initialisation code

The data source root should contain a file __init__.py which in turn defines the following:

DATASOURCE = "datasource-identifier"

This constant defines the data source ID. It is used internally by 4CAT to determine, among other things, which data source a given dataset belongs to.

def init_datasource(database, logger, queue, name):
    pass

This function is called when 4CAT starts, if the data source is enabled, and should set up anything the data source needs to function (e.g. queueing any recurring workers). A default implementation of this function can be used instead (and when defining your own, it is advised to still call it as part of your own implementation):

from backend.lib.helpers import init_datasource
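
Putting both together, a minimal __init__.py might look as follows. This is a sketch assuming a hypothetical data source with ID example; the alias base_init_datasource is only used here for illustration, and the only 4CAT-specific parts are the DATASOURCE constant and the default init_datasource helper mentioned above.

from backend.lib.helpers import init_datasource as base_init_datasource

DATASOURCE = "example"

def init_datasource(database, logger, queue, name):
    # data source-specific setup (e.g. queueing recurring workers) would go here;
    # it is advised to still call the default implementation as part of your own
    base_init_datasource(database, logger, queue, name)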

Search workers

The search worker is run when someone creates a dataset, and collects the data for that dataset (i.e. the posts from the platform matching the given dataset parameters), writing it to the dataset result file. It is contained in an arbitrarily named Python file in the data source root (we recommend search_[datasource].py). The file should define a class that extends backend.abstract.search.Search. This class should define the following attributes and methods (a minimal skeleton is sketched after the method list below):

Attributes

  • str type: Identifier used by the scheduler to know what code to run for jobs for this data source. Should be [datasource-id]-search, datasource-id being equal to the ID defined in __init__.py.
  • str extension: Optional. The extension (format) of the output data file. If omitted, csv is assumed; the other format currently supported is ndjson. Using any other extension will result in a NotImplementedError being raised.
  • int max_workers: Optional, default 1. The number of search workers that may run in parallel for this data source. Usually, you want to keep this at 1, unless you are confident your server can handle multiple parallel workers of this type.
  • dict options: Optional, default empty. Defines parameters that can be configured when querying this data source. These can be defined via a dictionary here, or via the get_options() method: see Input fields for data sources and processors for more information.

Methods

  • validate_query(dict query, Request request, User user) -> dict: Called statically by the web tool whenever a new dataset is created by someone. query contains the form fields as set in the web interface; this method should return a sanitised version of that query, containing only fields and values relevant to this search worker. On invalid input, a common.lib.exceptions.QueryParametersException should be raised, which will prompt the person creating the dataset to change their input and resubmit. You can also raise a common.lib.exceptions.QueryNeedsFurtherInputException(config), where config is a definition of further form fields that need to be completed; these will be shown in the interface while asking the user to submit again. The form fields can be defined in the same format as the 'normal' search parameter options (see get_options()).
  • get_items(self, dict query) -> generator: Yields items matching the query parameters. These are the 'search results' that will comprise the dataset.
  • import_from_file(self, str path) -> generator: Similar to get_items(), but takes a file path as parameter and yields items from that path as items to save in the dataset. Support for this is currently limited but it will serve as the basis for a generic 'import' feature for 4CAT in the future. A generic version of this method is part of the abstract class but it will usually require overriding to fit the nuances of the data source.
  • after_search(self, list items) -> list: Optional. If defined, this will be called after all posts have been retrieved with the methods listed above and, if applicable, after any sampling has been applied. This method should yield items, like get_items(). You can use it to e.g. perform additional item filtering or processing should your data source require it.
  • get_options(cls, parent_dataset, user) -> dict: Optional. If defined, this will be called to determine the options displayed in the 4CAT web interface when querying the data source, analogous to the options class property (this method overrides that property, if present). See Input fields for data sources and processors for more information.
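
As an illustration, a minimal search worker skeleton might look like the following. This is a sketch rather than a definitive implementation: the class name, the example query field and the items yielded are made up, and only the attribute names, method signatures and exception listed above are taken from 4CAT itself.

from backend.abstract.search import Search
from common.lib.exceptions import QueryParametersException

class SearchExample(Search):
    type = "example-search"  # [datasource-id]-search
    extension = "csv"  # or "ndjson"
    max_workers = 1

    @staticmethod
    def validate_query(query, request, user):
        # keep only the fields this worker understands, and reject invalid
        # input by raising QueryParametersException
        if not query.get("query", "").strip():
            raise QueryParametersException("Please provide a search query.")

        return {"query": query["query"].strip()}

    def get_items(self, query):
        # yield one dictionary per matching item; 4CAT writes these to the
        # dataset result file
        for i in range(10):
            yield {
                "id": i,
                "thread_id": i,
                "body": "Example post matching '%s'" % query["query"],
            }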

You can also descend from SearchWithScope, which has built-in support for a couple of more advanced modes of querying data. This is particularly useful if data is available locally and does not require round-trips to a remote server. Specifically, SearchWithScope has functionality to do "full-thread" querying (i.e. retrieving all posts in threads that contain a minimum number of matching posts). To this end, it requires the definition of the following additional methods (a partial sketch follows the list):

  • get_search_mode(self, dict query) -> str: Return simple or complex. If simple, get_items_simple() is used to retrieve posts; else, get_items_complex() is used. This can be used to define 'fast lane' search methods for simpler queries if shortcuts can be taken. Of course, you can also always return one of the two if the distinction is not relevant to your data source.
  • get_items_simple(self, dict query) -> generator: Get posts via the 'simple' path.
  • get_items_complex(self, dict query) -> generator: Get posts via the 'complex' path.
  • fetch_posts(self, list post_ids) -> list: Should be used by get_items_*() to retrieve the actual item data. Takes a list of post_ids (as determined by the get_items_* method) and retrieves data for those post IDs, e.g. via an API or a local database.
  • fetch_threads(self, list thread_ids) -> list: Retrieves all posts for the given thread_ids.
  • get_thread_lengths(self, list thread_ids, int min_length) -> dict: Should return a dictionary with thread IDs as keys and amount of posts per thread as values, for all threads with at least min_length posts.
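
For illustration, a partial sketch of what these methods could look like for a data source backed by a local database. It assumes SearchWithScope can be imported from the same module as Search and that the worker has a database connection available as self.db (as processors do); the table and column names are purely illustrative.

from backend.abstract.search import SearchWithScope

class SearchLocalExample(SearchWithScope):
    type = "example-search"
    extension = "csv"

    def get_search_mode(self, query):
        # only take the 'complex' path when full threads were requested
        return "complex" if query.get("scope") == "full-threads" else "simple"

    def get_items_simple(self, query):
        matching_ids = self.get_matching_post_ids(query)  # hypothetical helper
        return self.fetch_posts(matching_ids)

    def get_items_complex(self, query):
        # no shortcuts to take in this sketch
        return self.get_items_simple(query)

    def fetch_posts(self, post_ids):
        return self.db.fetchall(
            "SELECT * FROM posts_example WHERE id IN %s", (tuple(post_ids),))

    def fetch_threads(self, thread_ids):
        return self.db.fetchall(
            "SELECT * FROM posts_example WHERE thread_id IN %s", (tuple(thread_ids),))

    def get_thread_lengths(self, thread_ids, min_length):
        # count posts per thread and keep only sufficiently long threads
        lengths = {}
        for post in self.fetch_threads(thread_ids):
            lengths[post["thread_id"]] = lengths.get(post["thread_id"], 0) + 1
        return {tid: length for tid, length in lengths.items() if length >= min_length}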

Additionally, because search workers are (after a fashion) architecturally equivalent to processors, they have access to all the attributes a processor has access to, e.g. dataset. See the page for processors for more information on these. In particular, the map_item method can be useful to define for data sources that return complex (e.g. multi-dimensional) data.
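
For example, a data source that stores nested JSON objects in an NDJSON result file could define a map_item method along these lines. This is a hedged sketch: the field names are made up, and it assumes map_item receives one item as stored in the result file and returns a flat dictionary (see the processors page for details).

    @staticmethod
    def map_item(item):
        # flatten a nested item into the columns processors and CSV exports expect
        return {
            "id": item.get("id", ""),
            "thread_id": item.get("thread", {}).get("id", ""),
            "author": item.get("user", {}).get("name", ""),
            "body": item.get("text", ""),
        }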

Web Tool Interface

People can use 4CAT to create new datasets with your data source. To this end, the data source should define an interface through which dataset parameters may be set via the options property or get_options() method (see above). Data sources can additionally contain a folder webtool with the following files:

  • views.py: Optional. This can define additional views for the 4CAT Flask app. Any function defined in this file will be available as a view via /api/datasource-call/[datasource-id]/[function name]/. Functions should have the signature function(request, user, **kwargs): request and user are objects supplied by Flask, and **kwargs contains all HTTP GET parameters as keyword arguments. The function should return an object (remember that in Python everything is an object), which will be serialised as JSON as the view output. A brief sketch follows.
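
A minimal views.py could look as follows. The function name is arbitrary (and hypothetical here); the returned dictionary is serialised as JSON by 4CAT.

def example_echo(request, user, **kwargs):
    # available at /api/datasource-call/[datasource-id]/example_echo/
    # simply echo the GET parameters back to the caller
    return {"parameters": kwargs}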