This is the repository of algorithms for the MIP (Medical Informatics Platform).
Algorithms, written in their native language (R, MATLAB, Python, Java, ...), are encapsulated in a Docker container that provides the runtime environment they need to run.
The environment variables provided to the Docker container are used as parameters to the function or algorithm to execute.
Currently, we expect the Docker containers to be autonomous:
- they should connect to a database and retrieve the dataset to process
- they should process the data, taking into account the parameters given as environment variables to the Docker container
- they should store the results into the results database.
The results should be stored in a format that is easy to share (a minimal sketch of an autonomous algorithm container following this contract is given after the list below):
- For algorithms providing statistical analysis or machine learning, we require the results to be in PFA format in its YAML or JSON form.
- For algorithms providing visualisations, we support different formats, including Highcharts, Vis.js, PNG and SVG.
- For algorithms providing tabular data, we expect a JSON output following the Tabular Data Resource format.
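As an illustration of this contract, here is a minimal sketch of such an autonomous algorithm script. The environment variable names (`IN_JDBC_URL`, `OUT_JDBC_URL`, `PARAM_variables`, `PARAM_covariables`), the table names (`features`, `job_result`) and the output columns are assumptions made for this example; the real images follow the conventions expected by the MIP runtime (Woken).

```python
# Minimal sketch of an "autonomous" algorithm container entry point.
# Environment variable names, table names and output columns are hypothetical.
import json
import os

import pandas as pd
import sqlalchemy


def main():
    # 1. Read parameters passed as environment variables.
    in_url = os.environ["IN_JDBC_URL"]        # input database (hypothetical name)
    out_url = os.environ["OUT_JDBC_URL"]      # results database (hypothetical name)
    variable = os.environ["PARAM_variables"]  # dependent variable
    covariables = [c for c in os.environ.get("PARAM_covariables", "").split(",") if c]

    # 2. Retrieve the dataset to process.
    columns = [variable] + covariables
    query = f'SELECT {", ".join(columns)} FROM features'
    data = pd.read_sql(query, sqlalchemy.create_engine(in_url))

    # 3. Process the data (here: a trivial summary instead of a real algorithm).
    result = {"mean": float(data[variable].mean()),
              "count": int(data[variable].count())}

    # 4. Store the result as shareable JSON in the results database.
    out = pd.DataFrame([{"node": os.environ.get("NODE", "local"),
                         "data": json.dumps(result),
                         "shape": "application/json"}])
    out.to_sql("job_result", sqlalchemy.create_engine(out_url),
               if_exists="append", index=False)


if __name__ == "__main__":
    main()
```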
hbpmip/python-anova: Anova algorithm
This is a Python implementation of Anova.
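For orientation, a one-way ANOVA of this kind can be reproduced with statsmodels; this is only an illustration, not the image's code, and the column names are invented.

```python
# Illustrative one-way ANOVA with statsmodels; column names are made up.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.DataFrame({
    "score": [4.1, 3.9, 5.2, 5.0, 6.3, 6.1],
    "agegroup": ["a", "a", "b", "b", "c", "c"],
})

model = ols("score ~ C(agegroup)", data=df).fit()
anova_table = sm.stats.anova_lm(model, typ=2)  # sums of squares, F statistic, p-value
print(anova_table)
```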
hbpmip/python-correlation-heatmap: Correlation heatmap
Calculates a correlation heatmap; it only works for real variables. Run it on a single node or in distributed mode: the intermediate mode calculates the covariance matrix on a single node, then the aggregate mode combines the statistics from multiple jobs and produces the final graph.
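The split between the two modes can be illustrated with a small sketch: each node computes sufficient statistics (count, sum, cross-product matrix) and the aggregation step pools them into a single covariance/correlation matrix. This mirrors the idea only; the image's actual intermediate payloads may differ.

```python
# Sketch of pooling per-node statistics into one correlation matrix.
import numpy as np

def intermediate(x):
    """Per-node statistics for a (n_samples, n_features) array."""
    return x.shape[0], x.sum(axis=0), x.T @ x

def aggregate(stats):
    """Combine per-node statistics into a correlation matrix."""
    n = sum(s[0] for s in stats)
    total = sum(s[1] for s in stats)
    cross = sum(s[2] for s in stats)
    mean = total / n
    cov = (cross - n * np.outer(mean, mean)) / (n - 1)
    std = np.sqrt(np.diag(cov))
    return cov / np.outer(std, std)

rng = np.random.default_rng(0)
nodes = [rng.normal(size=(50, 3)), rng.normal(size=(80, 3))]  # data held on two nodes
print(aggregate([intermediate(x) for x in nodes]))
```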
hbpmip/python-distributed-pca: PCA - principal components analysis
Calculates a PCA; it only works for real variables. Run it on a single node or in distributed mode: the intermediate mode calculates the covariance matrix on a single node, then the aggregate mode combines the statistics from multiple jobs and produces the final graph.
The code is shared with hbpmip/python-correlation-heatmap.
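Given a pooled covariance matrix like the one produced in the correlation-heatmap sketch above, the PCA itself reduces to an eigendecomposition; again, this is only a sketch of the idea, not the image's implementation.

```python
# Sketch: PCA from an aggregated covariance matrix via eigendecomposition.
import numpy as np

def pca_from_covariance(cov, n_components=2):
    eigenvalues, eigenvectors = np.linalg.eigh(cov)       # ascending order
    order = np.argsort(eigenvalues)[::-1][:n_components]  # largest first
    explained = eigenvalues[order] / eigenvalues.sum()
    return eigenvectors[:, order], explained

cov = np.array([[2.0, 0.8, 0.1],
                [0.8, 1.0, 0.2],
                [0.1, 0.2, 0.5]])
components, explained_variance = pca_from_covariance(cov)
print(components, explained_variance)
```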
hbpmip/python-distributed-kmeans: K-means
Implementation of distributed k-means clustering (https://github.com/MRN-Code/dkmeans) in Python. It uses Single-Shot Decentralized Lloyd (https://github.com/MRN-Code/dkmeans#single-shot-decentralized-lloyd).
The intermediate mode calculates clusters on a single node, while the aggregate mode merges the clusters according to the least merging error (e.g. the smallest distance between centroids).
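One possible reading of that aggregation step is sketched below: local centroids, weighted by their cluster sizes, are merged pairwise by smallest distance until k centroids remain. This illustrates the "least merging error" idea; it is not the dkmeans code.

```python
# Sketch of the aggregate step: merge local centroids pairwise by smallest distance.
import numpy as np

def merge_centroids(centroids, counts, k):
    centroids = [np.asarray(c, dtype=float) for c in centroids]
    counts = list(counts)
    while len(centroids) > k:
        # find the closest pair of centroids
        best, best_d = None, np.inf
        for i in range(len(centroids)):
            for j in range(i + 1, len(centroids)):
                d = np.linalg.norm(centroids[i] - centroids[j])
                if d < best_d:
                    best, best_d = (i, j), d
        i, j = best
        n = counts[i] + counts[j]
        merged = (counts[i] * centroids[i] + counts[j] * centroids[j]) / n
        # replace the pair with its size-weighted average
        for idx in sorted((i, j), reverse=True):
            del centroids[idx], counts[idx]
        centroids.append(merged)
        counts.append(n)
    return np.array(centroids)

local = [np.array([[0.0, 0.0], [5.0, 5.0]]), np.array([[0.2, 0.1], [5.1, 4.9]])]
sizes = [[10, 12], [8, 9]]
print(merge_centroids([c for node in local for c in node],
                      [s for node in sizes for s in node], k=2))
```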
hbpmip/python-histograms: Histograms
Calculates a histogram of a nominal or real dependent variable, grouped by the nominal variables given as independent variables. Histogram edges are taken from the `minValue` and `maxValue` properties of the dependent variable; if these are not available, the edges are calculated dynamically from the dependent values (this does not work in distributed mode, though).
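The following sketch shows the idea with made-up column names and metadata: fixed histogram edges derived from `minValue`/`maxValue`, grouped by a nominal covariate.

```python
# Sketch: histogram of a real variable with metadata-driven edges, per group.
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "lefthippocampus": [2.8, 3.1, 3.3, 2.9, 3.6, 3.0, 3.4, 3.2],
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
})
min_value, max_value = 2.5, 4.0          # would come from the variable's metadata
edges = np.linspace(min_value, max_value, num=6)

for group, values in df.groupby("gender")["lefthippocampus"]:
    counts, _ = np.histogram(values, bins=edges)
    print(group, counts.tolist())
```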
hbpmip/python-jsi-hedwig: Hedwig method
Hedwig method for semantic subgroup discovery (https://github.com/anzev/hedwig).
hbpmip/python-jsi-hinmine: HINMINE
The HINMINE algorithm for network-based propositionalization is an algorithm for data analysis based on network analysis methods.
The input for the algorithm is a data set containing instances with real-valued features. The purpose of the algorithm is to construct a new set of features for further analysis by other data mining algorithms. The algorithm outputs a data set with features, generated for each data instance in the input data set. The features represent how close a given instance is to the other instances in the data set. The closeness of instances is measured using the PageRank algorithm, calculated on a network constructed from instance similarities.
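A rough sketch of the idea (not the HINMINE implementation): build a similarity graph over the instances and use each instance's personalized PageRank vector as its new feature vector.

```python
# Rough sketch of PageRank-based propositionalization on an instance-similarity graph.
import networkx as nx
import numpy as np

X = np.random.default_rng(1).normal(size=(6, 4))   # instances with real-valued features

# similarity graph: edge weight = RBF similarity between instances
graph = nx.Graph()
for i in range(len(X)):
    for j in range(i + 1, len(X)):
        weight = np.exp(-np.linalg.norm(X[i] - X[j]) ** 2)
        graph.add_edge(i, j, weight=weight)

# one personalized PageRank vector per instance = its new features
features = np.array([
    [nx.pagerank(graph, personalization={i: 1.0}, weight="weight")[j]
     for j in range(len(X))]
    for i in range(len(X))
])
print(features.shape)  # (6, 6): closeness of each instance to every other instance
```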
hbpmip/python-knn: k-nearest neighbors
Implementation of k-nearest neighbors algorithm (https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm) in Python.
Run it on a single node or in distributed mode.
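For orientation, a single-node k-NN fit looks like the scikit-learn snippet below; the image adds its own parameter handling, distribution logic and I/O around an estimator of this kind.

```python
# Minimal scikit-learn k-NN example (illustrative only).
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
print("accuracy:", knn.score(X_test, y_test))
```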
hbpmip/python-linear-regression: Linear and logistic regression
Python implementation of multivariate linear regression. It supports both continuous and categorical independent variables, and it runs on a single node or in distributed mode. The image also provides a Python implementation of logistic regression, fitted as one class versus the others; only single-node mode is supported for logistic regression.
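As an illustration (not the image's code, with invented column names), a multivariate linear regression mixing continuous and categorical independent variables can be expressed with statsmodels formulas:

```python
# Sketch: linear regression with continuous and categorical covariates.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "lefthippocampus": [2.8, 3.1, 3.3, 2.9, 3.6, 3.0, 3.4, 3.2],
    "subjectage": [61, 72, 58, 66, 49, 70, 55, 63],
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
})

# C(...) one-hot encodes the categorical covariate
model = smf.ols("lefthippocampus ~ subjectage + C(gender)", data=df).fit()
print(model.params)
print(model.pvalues)
```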
hbpmip/python-sgd-regression: SGD family of regressions
This is a Python implementation of scikit-learn estimators (http://scikit-learn.org/stable/modules/scaling_strategies.html) using Stochastic Gradient Descent and the `partial_fit` method for distributed learning (a minimal `partial_fit` sketch is shown after the list of methods).
Implemented methods:
- `linear_model` - calls `SGDRegressor` or `SGDClassifier`
- `neural_network` - calls `MLPRegressor` or `MLPClassifier`
- `naive_bayes` - calls `MixedNB` (a mix of `GaussianNB` and `MultinomialNB`), only works for classification tasks
- `gradient_boosting` - calls `GradientBoostingRegressor` or `GradientBoostingClassifier`, does not support distributed training.
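The sketch below shows the `partial_fit` pattern that makes this family suitable for distributed or out-of-core learning: the estimator is updated chunk by chunk instead of seeing all the data at once. It is illustrative only; the image adds its own parameter handling and I/O.

```python
# Sketch of chunk-by-chunk learning with partial_fit.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])
model = SGDClassifier(random_state=0)

for chunk in range(5):  # each chunk could come from a different node or query
    X = rng.normal(size=(100, 3))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    # classes must be passed on the first call to partial_fit
    model.partial_fit(X, y, classes=classes)

print(model.coef_)
```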
hbpmip/python-summary-statistics: Summary statistics
It calculates various summary statistics for the entire dataset, as well as for all subgroups created by combining the possible values of the nominal covariates. Run it on a single node or in distributed mode.
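With pandas, the idea looks roughly as follows (invented column names; not the image's code): describe the whole dataset and then every subgroup obtained by grouping on the nominal covariates.

```python
# Sketch: summary statistics for the whole dataset and for every subgroup.
import pandas as pd

df = pd.DataFrame({
    "lefthippocampus": [2.8, 3.1, 3.3, 2.9, 3.6, 3.0, 3.4, 3.2],
    "gender": ["F", "M", "F", "M", "F", "M", "F", "M"],
    "agegroup": ["50-59", "70+", "50-59", "60-69", "40-49", "70+", "50-59", "60-69"],
})

print(df["lefthippocampus"].describe())                                   # whole dataset
print(df.groupby(["gender", "agegroup"])["lefthippocampus"].describe())   # all subgroups
```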
hbpmip/python-tsne: t-SNE
python-tsne is a wrapper for the A-tSNE algorithm developed by N. Pezzotti. The underlying algorithm is an improvement on Barnes-Hut t-SNE (http://lvdmaaten.github.io/publications/papers/JMLR_2014.pdf) using an approximated k-nearest neighbor calculation.
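For orientation only, the snippet below produces a plain Barnes-Hut t-SNE embedding with scikit-learn; the image itself wraps the separate A-tSNE implementation, which additionally relies on approximated k-nearest neighbors.

```python
# Plain Barnes-Hut t-SNE embedding with scikit-learn (illustrative only).
from sklearn.datasets import load_digits
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(embedding.shape)  # (n_samples, 2) coordinates for a scatter plot
```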
hbpmip/java-jsi-clus-fire: Rule ensembles for multi-target regression (CLUS FIRE)
hbpmip/java-jsi-clus-fr: Feature ranking with ensembles of predictive clustering trees (CLUS)
hbpmip/java-jsi-clus-pct: Predictive clustering trees (CLUS PCT)
hbpmip/java-jsi-clus-pct-ts: Predictive clustering trees for time series (CLUS PCT)
hbpmip/java-jsi-clus-rm: Redescription mining (CLUS-RM)
hbpmip/java-jsi-streams-modeltree: Model trees for data streams
hbpmip/java-jsi-streams-regressiontree: Regression trees for data streams
hbpmip/python-longitudinal: Longitudinal
hbpmip/r-3c: 3C
hbpmip/r-ggparci: ggParci
hbpmip/r-heatmaply: Heatmaply
hbpmip/java-rapidminer-knn: k-NN
k-NN implemented with RapidMiner. Deprecated, replaced by hbpmip/python-knn.
hbpmip/java-rapidminer-naivebayes: Naive Bayes
Naive Bayes implemented with RapidMiner. Deprecated, replaced by hbpmip/python-naivebayes.
hbpmip/r-linear-regression: Linear regression
Linear regression implemented in R, with support for federated results. Deprecated, replaced by hbpmip/python-linear-regression.
Algorithm | Description | Predictive | Federated results | In production | Used for | Runtime engine |
---|---|---|---|---|---|---|
hbpmip/python-anova | Anova | ✔️ | ✔️ | | Regression | Woken |
hbpmip/python-correlation-heatmap | Correlation heatmap | ❌ | ✔️ | | Visualisation | Woken |
hbpmip/python-distributed-pca | PCA | ✔️ | ✔️ | | Visualisation | Woken |
hbpmip/python-distributed-kmeans | K-means | ✔️ | ✔️ | | Clustering | Woken |
hbpmip/python-histograms | Histograms | ✔️ | ✔️ | | Visualisation | Woken |
hbpmip/python-jsi-hedwig | Hedwig | ❌ | ✔️ | | | Woken |
hbpmip/python-jsi-hinmine | HINMINE | ❌ | ✔️ | | | Woken |
hbpmip/python-knn | k-NN | ✔️ | ✔️ | ✔️ | Clustering | Woken |
hbpmip/python-linear-regression | Linear regression | ✔️ | ✔️ | ✔️ | Regression | Woken |
hbpmip/python-linear-regression | Logistic regression | ✔️ | ❌ | ✔️ | Regression, Classification | Woken |
hbpmip/python-sgd-regression | SGD Linear model | ✔️ | ✔️ | ✔️ | Classification | Woken |
hbpmip/python-sgd-regression | SGD Neural Network | ✔️ | ❌ | ✔️ | Classification | Woken |
hbpmip/python-sgd-regression | SGD Naive Bayes | ✔️ | ❌ | ✔️ | Classification | Woken |
hbpmip/python-sgd-regression | SGD Gradient Boosting | ✔️ | ❌ | ✔️ | Classification | Woken |
hbpmip/python-summary-statistics | Summary statistics | ✔️ | ✔️ | | Data exploration | Woken |
hbpmip/python-tsne | t-SNE | ❌ | ✔️ | | Visualisation | Woken |
This work has been funded by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 604102 (HBP).
This work is part of SP8 of the Human Brain Project (SGA1).