This repository has been archived by the owner on Feb 8, 2023. It is now read-only.

Leveling up testing infrastructure #202

jbenet opened this issue Dec 16, 2016 · 1 comment


@jbenet
Member

jbenet commented Dec 16, 2016

(Notes from a call)

Increase improvement rate of IPFS projects

Goals

Goal of the topic:

  • Create a feedback cycle for the development of go-ipfs, js-ipfs and orbit
  • Enable developers to be proactive about adding tests that touch and demonstrate where the hot paths are

Goal of call:

  • Figure out the path to getting the tester that @dignifiedquire described
    • Doesn't have to have all the bells and whistles
    • Start writing the tests in priority order of what they would give us
    • Pick 10 clear performance tests that would solve these problems

Testing Concerns:

  • Does this work well under various network conditions?
  • Connect tests to user stories

Clear Performance Tests:

  1. Simple end-to-end acceptance tests
  2. js-ipfs-bitswap in isolation: test building networks with N=(10, 100, 1000, ...) nodes transferring k=(10, 100, 1000, ...) files, of sizes S=(10, 100, 1000, ...) (see the sketch after this list)
  3. js-ipfs & js-ipfs-bitswap: test generating x files on n nodes, then requesting all files on all nodes, such that at the end each node has n * x files.
  4. Fetch sites (iterative / lock-stepped requests)
  5. Test a simulated load of orbit-db (NOT using orbit-db, just same data pattern / sizes) over js-ipfs and go-ipfs.
  6. Test a network of orbit-db logs with js-ipfs and go-ipfs, N=(10, 100, 1000, ...)
  7. Test Orbit itself (final test?) -- automated chat bots N=(10, 100, 1000, ...), messages M=(10, 100, 1000, ...), on channels C=(1, 10, 100, 1000, ...)
  8. npm testing (all of the existing tools should have benchmark tests that spawn simulated networks whenever things are cloned)
  9. peer discovery and connectivity tests (test all the different discovery protocols in isolation, and with time bounds ("should get connected in t ms"))
  10. test moving files with ipfs networks N=(10, 100, 1000, ...), num-files=(10, 100, 1000, ...), size=(10, 100, 1000, ...)
  11. iptb working with ipfs, good baseline
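
To make test 2 concrete, here is a minimal sketch of the shape such a benchmark could take. `spawnNetwork` and the `add`/`get` node methods are hypothetical placeholders for whatever harness we end up with, not a real API:

```js
const assert = require('assert')
const crypto = require('crypto')

// `spawnNetwork(n)` is a hypothetical helper: it should return n connected
// bitswap-capable nodes exposing `add(buffer) -> hash` and
// `get(hash) -> buffer`; the real harness would supply it.
async function benchTransfer (spawnNetwork, { n, k, s }) {
  const nodes = await spawnNetwork(n)
  const files = Array.from({ length: k }, () => crypto.randomBytes(s))

  // seed every file on node 0
  const hashes = []
  for (const file of files) hashes.push(await nodes[0].add(file))

  // time how long every other node takes to fetch all k files
  const start = Date.now()
  await Promise.all(nodes.slice(1).map(async (node) => {
    for (const hash of hashes) {
      const data = await node.get(hash)
      assert.strictEqual(data.length, s) // sanity check: the full file arrived
    }
  }))
  console.log(`N=${n} k=${k} S=${s}: ${Date.now() - start}ms`)
}
```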

Make a list of modules to cross-reference tests. Each of these should be tested in these cases: {in isolation, w/ go-ipfs, w/ js-ipfs-node, w/ js-ipfs-browser, w/ all of go-ipfs & js-ipfs-node & js-ipfs-browser} -- a sketch of the resulting matrix follows this list:

  • bitswap
  • pubsub
  • dht
  • ipfs-log
  • orbit-db
  • orbit
  • each of the peer-discovery protocols
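
To keep that cross-reference honest, the matrix can be generated mechanically. This sketch just enumerates the combinations; the names match the list above, but the data structure itself is an assumption about a runner we have not built:

```js
const modules = [
  'bitswap', 'pubsub', 'dht', 'ipfs-log', 'orbit-db', 'orbit'
  // ...plus each of the peer-discovery protocols
]

const environments = [
  'isolation',
  'go-ipfs',
  'js-ipfs-node',
  'js-ipfs-browser',
  'go-ipfs + js-ipfs-node + js-ipfs-browser'
]

// every module must be exercised in every environment
const matrix = []
for (const mod of modules) {
  for (const env of environments) {
    matrix.push({ module: mod, environment: env })
  }
}

console.log(`${matrix.length} module/environment combinations to cover`)
```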

Make orbit, orbit-db and ipfs-log easily testable, ideally:

> git clone git@github.com:ipfs/{orbit,orbit-db,ipfs-log} && \
> npm install && \
> npm link {ipfs,ipfs-api} && \
> npm run {test,benchmark}:{electron,browser,node}:{go-ipfs,js-ipfs}
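
For illustration, the `scripts` section of each package.json could expose that grid along these lines; the env-var name and script bodies here are placeholders, not any project's actual configuration:

```json
{
  "scripts": {
    "test:node:js-ipfs": "IPFS_IMPL=js-ipfs mocha test/",
    "test:node:go-ipfs": "IPFS_IMPL=go-ipfs mocha test/",
    "test:browser:js-ipfs": "IPFS_IMPL=js-ipfs karma start",
    "benchmark:node:js-ipfs": "IPFS_IMPL=js-ipfs node benchmarks/index.js"
  }
}
```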

orbitdb-archive/orbit#206 (comment)

How to administer tests/Where are the tests administered:

  • Cluster manager tools to ship tests
  • There are a few products that are out there that are meant for this sort of administration
  • If we have no way to benchmark the product, these tests will be useless.
  • We need to have something that can execute the tests before the tests themselves are worth writing.
  • It would be useful to outline what we want our testing environment to look like.
  • We want a network that we can make private, and we want runners that control the size of the files generated before sending and the number of files, and that hash the files so nodes can communicate with one another and test the distribution (a sketch of such a run descriptor follows this list).
  • Does writing the test lab make sense? Should we try testing one of these tools? What would be the best option?
    • @why believes that, for the purpose of the tests we want to build right now, we can get our own testing environment up quickly.
    • What are the differences between building our own and using a tool?
      • being able to program our own disturbances in the environment
      • we should be able to generate scheduled disturbances
  • Get really good reporting for failures, e.g. a tool within a container that can gather statistics on how long things took and general progress, and that will give us the ability to dig into the performance data
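
As a strawman, the runner's inputs could be captured in a declarative descriptor like this sketch; every field name here is an assumption about a tool we have not built yet, not a real API:

```js
// Hypothetical test-run descriptor: network shape, generated workload,
// scheduled disturbances, and what to report. Nothing here is a real API.
const run = {
  network: { nodes: 100, private: true, topology: 'random' },
  workload: { files: 1000, fileSizeBytes: 4096 }, // generated, hashed, distributed
  disturbances: [
    { at: '2m', kind: 'partition', nodes: 10 },          // scheduled disturbance
    { at: '5m', kind: 'added-latency', ms: 500, nodes: 50 }
  ],
  report: {
    stats: ['time-to-fetch', 'progress'], // how long things took, general progress
    onFailure: 'collect-logs'
  }
}

module.exports = run
```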

IPFS Interop Tests

https://github.com/ipfs/ipfs-interop-stress-tests

IPFS Interface Tests

  • Run interface-ipfs-core tests over ipfs-api+js-ipfs, ipfs-api+go-ipfs, and js-ipfs
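
A rough sketch of how those runs could be wired up; the setup/teardown factory shape below is an approximation of the interface-ipfs-core pattern, not a verified signature, and `DaemonFactory` is a hypothetical stand-in for whatever spawns each combination:

```js
const tests = require('interface-ipfs-core')

// hypothetical: swaps between ipfs-api+js-ipfs, ipfs-api+go-ipfs, and js-ipfs
const DaemonFactory = require('./daemon-factory')

const common = {
  setup (callback) {
    // hand the suite a factory it can use to spawn nodes
    callback(null, DaemonFactory)
  },
  teardown (callback) {
    callback()
  }
}

// the same shared suites run unchanged against each combination;
// only the factory behind `common` differs
tests.files(common)
tests.object(common)
```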

IPFS Benchmarks for different combinations of implementations

ipfs/js-ipfs#556
ipfs/js-ipfsd-ctl#136

  • js-ipfs
  • ipfs-api + js-ipfs
  • ipfs-api + go-ipfs
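
A minimal sketch of a benchmark loop over those three combinations. `spawn(combo)` is a hypothetical factory (in practice something like js-ipfsd-ctl, per ipfs/js-ipfsd-ctl#136, would fill this role), and the node API calls are assumptions about whatever interface it hands back:

```js
const combinations = ['js-ipfs', 'ipfs-api+js-ipfs', 'ipfs-api+go-ipfs']

async function bench (spawn, combo) {
  const node = await spawn(combo)           // hypothetical factory
  const payload = Buffer.alloc(1024 * 1024) // 1 MiB of zeroes

  const start = Date.now()
  const { hash } = await node.add(payload)  // assumed return shape
  await node.cat(hash)
  console.log(`${combo}: add+cat 1 MiB in ${Date.now() - start}ms`)

  await node.stop()
}

async function main (spawn) {
  for (const combo of combinations) await bench(spawn, combo)
}
```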

Test Tooling

Goal: Figure out what tooling we want to use

  • we don't have enough experience with the tooling within the team to know which one we want to use.
  • we may need to spend some time as a team evaluating these tools that are more network oriented.
  • before we jump into building anything, we need to be more familiar with the tool we want to build
  • We should create a timeline for this (i.e. we have played with these tools by a certain date). Create a set of criteria that these tools will have to meet, based on the tests that we've outlined above.
  • the tools don't distinguish between js-ipfs and go-ipfs

List of state-of-the-art tools to evaluate

  • cluster orchestrators:
    • kubernetes
    • otto / nomad / the hashicorp stuff?
    • swarmKit
  • golang/build
  • https://saltstack.com (has capabilities for orchestrating containers + full machines and "real time" capabilities)

List of criteria to evaluate the tools

Main concern: Can we execute our tests with this tool?

  • Interoperability between go-ipfs and js-ipfs

  • Capabilities:

    • can it build real networks (across machines over the internet)?
    • can it run containers (process bundles / pods) in isolation?
    • can we manipulate network connectivity?
    • what network topologies can we make?
    • can it run browser nodes? (phantom? electron? actual browsers?)
    • can it collect stats for us easily? if not, what?
    • does it have a visual dashboard?
    • can it run multiple different tests at the same time?
    • can we ssh in?
    • can it be completely automated?
  • Reputation

    • what do people say about it?
    • what good things do people highlight?
    • what bad things do people highlight?
    • are there red flags people point out?
    • are there public comparisons?
  • Ease of use

    • ease of install
    • how easy is it to get statistics on clients?
    • how easy is it to start a daemon on a client?
    • orchestration ease (rebooting, running arbitrary commands, etc)
    • how easy is it to ssh-in / use the command line?
    • how easy is it to manage? what needs to be kept up?
    • how easy is it to automate?
@ianopolous
Member

As a data point, at my work we use kubernetes for testing, CI, and manual deployment, and also prometheus. Both make the whole process totally pain-free and we get great metrics.
