This setup guide walks you through getting the CloudCompute UI running for development or testing.
Dependencies: Node.js, npm, MongoDB, and Bower
apt-get update
apt-get install nodejs
apt-get install npm
npm install -g bower
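After installing, a quick sanity check that each tool is on your PATH can save debugging later. The tool names below come from the steps above; note that on Ubuntu 16.04 the Node binary installed by apt is typically named `nodejs` rather than `node`.

```shell
# Report any required tool that is not on PATH.
for tool in nodejs npm bower; do
  command -v "$tool" >/dev/null 2>&1 || echo "missing: $tool"
done
```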
To install MongoDB, refer to the official documentation: Installing Mongo on Ubuntu 16.04
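For reference, a minimal MongoDB configuration file looks like the sketch below. The paths and values shown are the common Ubuntu defaults, not CloudCompute-specific settings; follow the official guide above for your system.

```yaml
# /etc/mongod.conf (sketch; default values assumed)
storage:
  dbPath: /var/lib/mongodb
net:
  bindIp: 127.0.0.1   # local-only, since the CloudCompute UI connects from the same host
  port: 27017
```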
Clone the CloudCompute UI repository to your instance
git clone https://github.com/BU-CS-CE-528-2017/CloudDataverse
Before continuing, ensure that MongoDB is running on your instance (see the installation instructions above if necessary) and that you are in the CloudDataverse directory.
cd CloudDataverse/mean
npm install
node server.js
By default, this runs CloudCompute locally on port 3000:
http://localhost:3000/
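If you want to script the startup check, a small retry helper (illustrative only, not part of the repository) can poll the URL above until the server answers:

```shell
# Retry a command up to N times, one second apart; succeed as soon as it does.
wait_for() {
  attempts=$1; shift
  while [ "$attempts" -gt 0 ]; do
    "$@" && return 0
    attempts=$((attempts - 1))
    sleep 1
  done
  return 1
}

# Usage once `node server.js` is running:
#   wait_for 30 curl -sf http://localhost:3000/ >/dev/null && echo "CloudCompute UI is up"
```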
Pull the Cloud Dataverse fork of Dataverse into your repository:
git pull https://github.com/Allegro-Leon-Li/dataverse.git
Use the 4.5-export-harvest-swift-update branch.
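If you do not already have a Dataverse working tree, an equivalent sketch is to clone the branch directly into a fresh checkout (commands shown for reference; the URL and branch name are the ones from this guide):

```shell
# git clone -b 4.5-export-harvest-swift-update https://github.com/Allegro-Leon-Li/dataverse.git
# cd dataverse
# git rev-parse --abbrev-ref HEAD   # should print 4.5-export-harvest-swift-update
```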
To complete the Dataverse installation, refer to swift_setup.txt in the Dataverse repository above and the official documentation: Dataverse Installation.
Assuming your Dataverse instance is correctly installed, you can begin uploading datasets for testing. Files are deposited to the Swift object store in your OpenStack project. Container names in Swift will match the dataset name in Dataverse.
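To confirm a deposit, you can list the containers with the `swift` command-line client, which reads OpenStack credentials from the environment. All values below are placeholders for your own project:

```shell
# Placeholder credentials; substitute your OpenStack project's values.
export OS_AUTH_URL=https://your-openstack-host:5000/v2.0
export OS_TENANT_NAME=your-project
export OS_USERNAME=your-username
export OS_PASSWORD=your-password
# swift list                 # container names should match your Dataverse dataset names
# swift list <container>     # files deposited for that dataset
```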
Selecting the 'Add to Compute Batch' option on a Dataverse dataset adds it to your 'compute batch'; multiple datasets can be added to the batch. When ready, click the 'Compute Batch' button at the top of the page; this redirects you to CloudCompute. Log in with your OpenStack credentials and begin selecting your desired datasets.
At this time, CloudCompute successfully completes jobs only with Hadoop MapReduce. Additional work on our end is required before Spark and Storm are operational with CloudCompute.
- Benjamin Corn - bencorn
- Will Norman - willnorman
- Sneha Pradhan - snehha
- Ang Li - Allegro-Leon-Li