Better project structure #1

Open
HaoLiHaiO opened this issue Jul 24, 2024 · 0 comments

@HaoLiHaiO
Hi, I like this project idea and it could become a nice little collection of chaos monkey programs. I think that to be scalable, it needs to be better structured. I was thinking of something like:

```
chaos-marmosets-main/
│
├── .gitignore
├── LICENSE
├── Makefile
├── README.md
│
├── docs/
│   ├── NEWS.md
│   ├── divide-by-zero-python.1.md
│   ├── divide-by-zero.1.md
│   ├── leak-memory.1.md
│   └── seg-fault.1.md
│
├── src/
│   ├── divide-by-zero/
│   │   ├── main.c
│   │   ├── main.py
│   │   └── README.md
│   │
│   ├── leak-memory/
│   │   ├── main.c
│   │   └── README.md
│   │
│   └── seg-fault/
│       ├── main.c
│       └── README.md
│
└── include/
    └── noinline.h
```

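With that layout, the top-level build boils down to looping over `src/` and compiling each `main.c` into `bin/`. Here is a minimal sketch of that idea in shell (the compiler flags, the `bin/` output directory, and the shared `include/` for `noinline.h` are my assumptions; the actual Makefile would express the same thing with proper targets):

```sh
#!/bin/sh
# Hypothetical build sketch for the proposed layout: compile each
# src/<name>/main.c into bin/<name>, using the shared include/ directory.
set -e
mkdir -p bin
for dir in src/*/; do
    name=$(basename "$dir")
    if [ -f "$dir/main.c" ]; then
        gcc -Wall -O2 -Iinclude -o "bin/$name" "$dir/main.c"
    fi
done
```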
Adding individual READMEs to the different directories would keep the main README from getting bloated. The main README could cover why the project exists (what problem it solves), warnings, and how to use the tools, e.g. do you run all the programs at the same time? Do we create a random program-execution script that runs divide-by-zero, seg-fault, etc. at random? (A rough sketch of that idea follows the run script further down.) You mention creating releases, which is very specific. I think something like this could be interesting:

# Chaos Marmosets

This project contains small programs that behave badly. They can be used for 
[chaos engineering](https://en.wikipedia.org/wiki/Chaos_engineering) to test system behavior and 
infrastructure setup for those cases.

## Prerequisites

### Linux
- GCC
- Make
- Python 3

### macOS
- Xcode Command Line Tools
- Python 3

### Windows
- MinGW or Visual Studio
- Python 3

## Installation

### Linux and macOS

1. Clone the repository:
    ```sh
    git clone https://github.com/yourusername/chaos-marmosets.git
    cd chaos-marmosets
    ```

2. Build the C programs:
    ```sh
    make
    ```

3. (Optional) Create a release tarball:
    ```sh
    make dist
    ```

### Windows

1. Clone the repository:
    ```sh
    git clone https://github.com/bdrung/chaos-marmosets.git
    cd chaos-marmosets
    ```

etc.

For execution, we should be able to specify what we want to run: one program, two programs, or all at the same time. I am thinking of something like this:

```sh
#!/bin/bash

# Show how to use the program
usage() {
    echo "Usage: $0 --run=<program1>[,<program2>,<program3>,...|all]"
    exit 1
}

# Check if --run is provided, else show how to use
if [[ $# -ne 1 || $1 != --run=* ]]; then
    usage
fi

# Parse --run
programs=${1#--run=}

# Execute the programs from ./bin
run_program() {
    case $1 in
        divide-by-zero)
            ./bin/divide-by-zero &
            ;;
        divide-by-zero-python)
            python3 src/divide-by-zero/main.py &
            ;;
        leak-memory)
            ./bin/leak-memory &
            ;;
        seg-fault)
            ./bin/seg-fault &
            ;;
        *)
            echo "Unknown program: $1"
            usage
            ;;
    esac
}

# Run all programs or only the requested ones
if [[ $programs == "all" ]]; then
    run_program divide-by-zero
    run_program divide-by-zero-python
    run_program leak-memory
    run_program seg-fault
else
    IFS=',' read -r -a program_array <<< "$programs"
    for program in "${program_array[@]}"; do
        run_program "$program"
    done
fi

# Wait until complete
wait
```

This script is just an example and would need some improvement. The same can easily be done in PowerShell for Windows users.
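Picking up the random-execution question from above, a random mode could be a standalone helper or folded into the same script as something like `--run=random`. A rough sketch, assuming the same `./bin` layout as the script above (the program list and selection logic are illustrative only):

```sh
#!/bin/bash
# Hypothetical random mode: pick one of the known programs at random
# and run it from ./bin (same layout the script above assumes).
all_programs=(divide-by-zero leak-memory seg-fault)
pick=${all_programs[RANDOM % ${#all_programs[@]}]}
echo "Randomly selected: $pick"
exec "./bin/$pick"
```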

I am done talking about the structure. Finally, if you're interested in collaborating on this project, I would be interested in adding new features like CPU exhaustion, disk I/O stress, network floods and packet-loss scenarios, descriptor leaks, etc.
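For example, a CPU-exhaustion marmoset could start out as little more than one busy loop per core. A rough shell sketch to show the idea (using `nproc` and one background loop per core is my assumption, not an implementation from this repo):

```sh
#!/bin/sh
# Hypothetical CPU-exhaustion marmoset: spin one busy loop per CPU core
# and keep them running until the script is killed.
cores=$(nproc 2>/dev/null || echo 1)
i=0
while [ "$i" -lt "$cores" ]; do
    while :; do :; done &
    i=$((i + 1))
done
wait
```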

Let me know what you think!
