Figure out how to automate test suite & allocate storage/devices/etc. #122
Comments
Data dump incoming:
Some of those are probably out of scope; we have to decide what is required and what can be deferred for now.
@mejackreed I've looked at the partial metadata myself, but it is quite a complex format. Would it be possible to get the sizes of the files in the dataset as newline-delimited values, in bytes, from a fragment of the dataset? It doesn't need to be the whole thing; a few percent would probably be enough for us to observe the distribution and create a test case based on it.
Note: The data.gov data is aggregated from many federal agencies, so the content, its structure, and the formats might be widely divergent.
Yeah, it would be good to get a cross-section sample, but any sample is better than none.
I'll work on putting some samples together. What metrics would you prefer? Number of files and file size?
I am mainly looking for file sizes, but the number of files per 'publication' could be useful too.
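To make the request above concrete, here is a minimal sketch of how such a newline-delimited size dump could be summarized to observe the distribution. The function name and the summary fields are my own choices, not anything agreed on in the thread, and the sample data is a toy stand-in for the real data.gov dump.

```python
import statistics

def size_distribution(raw: str) -> dict:
    """Summarize a newline-delimited dump of file sizes given in bytes."""
    sizes = sorted(int(line) for line in raw.splitlines() if line.strip())
    n = len(sizes)
    return {
        "count": n,
        "total": sum(sizes),
        "median": statistics.median(sizes) if sizes else 0,
        "p90": sizes[int(n * 0.9)] if sizes else 0,  # crude 90th percentile
        "max": sizes[-1] if sizes else 0,
    }

# Toy sample; a real dump would be one size per line for a few percent of the dataset
dump = "1024\n2048\n512\n1048576\n4096\n"
print(size_distribution(dump))
```

A summary like this would be enough to pick realistic file-size buckets for a generated test fixture.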
@gsf @hsanjuan @victorbjelkholm can any of you help @Kubuxu land this? He needs help getting kubernetes set up. |
It is mostly about integrating this feature/test set: #102 (comment) into https://github.com/ipfs/kubernetes-ipfs. I was able to set up kubernetes on my machine, but the real problem is implementing the tests. A few preliminary questions:
My idea right now is to write a simple wrapper/RPC server that would manage the ipfs instance inside the pod: it would prep the repo and allow choosing the ipfs version, its command arguments, config variables, and so on. It would also include all required tools, or call out to them. This means the wrapper/RPC server has to be created, along with a Docker image containing it. I am not 100% sure it is the right path forward, but it seems like a good one to me.
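As a rough illustration of the "prep the repo, choose the version and arguments" part of such a wrapper, here is a sketch of how it might assemble the commands to run. Every name here (the function, the config keys, the binary path) is hypothetical; only the `ipfs init` / `ipfs config` / `ipfs daemon` subcommands themselves are real.

```python
def build_daemon_cmds(ipfs_bin, args=None, config=None):
    """Assemble the command lines a wrapper would run to prep and start a node.

    `ipfs_bin` selects the go-ipfs version (a path to a binary), `config` is a
    dict of config keys applied via `ipfs config`, and `args` are extra daemon
    flags. This is a sketch, not an API from kubernetes-ipfs.
    """
    cmds = [[ipfs_bin, "init"]]
    for key, value in (config or {}).items():
        cmds.append([ipfs_bin, "config", key, str(value)])
    cmds.append([ipfs_bin, "daemon"] + list(args or []))
    return cmds

# Hypothetical usage: a versioned binary with one config override and one flag
cmds = build_daemon_cmds("/usr/local/bin/ipfs-v0.4.5",
                         args=["--enable-pubsub-experiment"],
                         config={"Datastore.StorageMax": "10GB"})
for cmd in cmds:
    print(" ".join(cmd))
```

The wrapper's RPC layer would then just execute these command lists (e.g. via `subprocess`) in order.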
@Kubuxu that was the direction I was planning on going with in ipfs/notes#191. The client would be a program that manages an ipfs node, as you say, though yours would be a server (we make requests against it); in my testbed doc there's a central server that all the clients connect to, which coordinates the tests and sends out commands.
Take a look at volumes: https://kubernetes.io/docs/user-guide/volumes/. They are similar to Docker volumes, but a bit different. Regarding binaries: what kind of binaries, and for what purpose? Normally you would wrap the binary in a container that you can deploy as usual, but it depends on what you're trying to achieve. If you need binaries inside the go-ipfs container, just create a new go-ipfs-dev image based on the ipfs/go-ipfs one and include the necessary binaries in the build step.
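To make the volumes suggestion concrete, a pod spec along these lines would give the ipfs container a scratch volume for its repo. This is a hedged sketch based on the linked Kubernetes docs; the pod name, env value, and mount path are illustrative, not anything from kubernetes-ipfs.

```yaml
# Hypothetical pod spec: mount an emptyDir volume as the IPFS repo
apiVersion: v1
kind: Pod
metadata:
  name: go-ipfs-test
spec:
  containers:
  - name: ipfs
    image: ipfs/go-ipfs   # or a go-ipfs-dev image with extra binaries baked in
    env:
    - name: IPFS_PATH
      value: /data/ipfs
    volumeMounts:
    - name: repo
      mountPath: /data/ipfs
  volumes:
  - name: repo
    emptyDir: {}
```

An `emptyDir` volume lives only as long as the pod, which is usually what you want for throwaway test runs; a persistent volume claim would be the alternative for data that must survive restarts.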
I'm guessing this is so we can test different configurations. I think the simplest route would be the one @jbenet explained on the call: have one base deployment with the basic stuff for running a go-ipfs daemon, have a yml/json file listing the different configs/arguments, and write a small tool that takes the base plus the configuration files and generates every possible version of it.
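The "generate every possible version" step described above is essentially a Cartesian product over the configuration axes. A minimal sketch, with made-up config keys for illustration:

```python
import itertools

def expand_matrix(base, matrix):
    """Yield one variant per combination of the values in `matrix`,
    each merged onto a copy of the `base` deployment settings."""
    keys = sorted(matrix)
    for combo in itertools.product(*(matrix[k] for k in keys)):
        variant = dict(base)
        variant.update(zip(keys, combo))
        yield variant

# Illustrative axes; the real keys would come from the yml/json file
base = {"image": "ipfs/go-ipfs"}
matrix = {
    "Routing.Type": ["dht", "dhtclient"],
    "Swarm.DisableRelay": [True, False],
}
variants = list(expand_matrix(base, matrix))
print(len(variants))  # 2 x 2 = 4 variants
```

Each generated variant could then be rendered into a full deployment manifest for a separate test run.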
I had great success using Prometheus, and it was relatively easy to set up. One thing to keep in mind: log the test run's ID, so we can have one report per test run.
Prerequisite: describe the tests that we're aiming for -- see the clarification in #102 (comment)
Tasks: