
Virtual Router Development

This section serves as an example of how you can develop one-apps appliances; we present a concrete example based on the VRouter appliance.

Unit Testing

Each feature in VRouter (like HAProxy) may contain RSpec tests, for example here. Those tests should check that ONEAPP_* and VROUTER_* environment variables are parsed correctly in the Ruby code and that the resulting configuration files are rendered correctly; they also check that various API payloads are processed successfully.

Important

The idea behind those tests is to reduce the time spent on manual integration testing (which requires image builds and VM deployments). If you can verify that your config rendering or API handling code works correctly even before you build and deploy VMs, just go for it!

Warning

Please make sure all the existing tests pass on your PR's branch. This is checked automatically by GitHub Actions for your specific PR. You can read all the currently implemented GitHub Actions workflows here.
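
To get the same signal locally before pushing, you can run both test suites from the root of your one-apps checkout (a minimal sketch; it assumes Ruby and RSpec are installed and the scripts are executable):

$ ./appliances/lib/tests.sh
$ ./appliances/VRouter/tests.sh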

Manual Testing

Please consider the following architecture diagram.

        public network
      ┌───────────────────────┬──────────────────┬──────────────   Users
      │                       │                  │
   ┌──┴─┐ VIP: 10.2.11.200 ┌──┴─┐             ┌──┴─┐  10.2.11.203
┌──┤eth0├──┐            ┌──┤eth0├──┐       ┌──┤eth0├──┐
│  └────┘  │            │  └────┘  │       │  └────┘  │
│ VR1      │            │ VR2      │       │ VNF      │
│          │            │          │       │          │   ETH0_EP:5432 ┌────► 172.20.0.104:2345
│ HAProxy  │            │ HAProxy  │       │ HAProxy  │                │
│  ┌────┐  │            │  ┌────┐  │       │  ┌────┐  │                └────► 172.20.0.105:2345
└──┤eth1├──┘            └──┤eth1├──┘       └──┤eth1├──┘
   └─┬──┘                  └─┬──┘             └─┬──┘
     │    VIP: 172.20.0.100  │                  │
     │                       │                  │
     │   private network     │                  │
     └──────────────┬────────┴────────┬─────────┴───────────────
               172.20.0.104      172.20.0.105
              ┌─────┴────┐      ┌─────┴────┐
              │ BACKEND1 │      │ BACKEND2 │
              └──────────┘      └──────────┘
  • Two types of VR/VNF will be deployed: a regular VR and a VNF inside a OneFlow service.
  • Two backends will be deployed and used by the HAProxy instances of both the VR and the VNF.
  • VR instances will be running in HA mode, hence the two VIPs.
  • The VNF instance will be running in SOLO mode; no VIPs are required.

This can be considered a basic environment for testing dynamic load-balancing: both scenarios, the VR and the VNF (inside OneFlow), can be tested manually at once. To make the process a little more convenient, you can use the OpenNebula Terraform Provider to deploy, and later destroy, everything in your local OpenNebula cluster. Here's an example:

Warning

Please customize all IP addresses to match your actual OpenNebula cluster.

terraform {
  required_providers {
    opennebula = {
      source = "OpenNebula/opennebula"
      version = ">= 1.4.0"
    }
  }
}

provider "opennebula" {
  endpoint      = "http://10.2.11.40:2633/RPC2"
  flow_endpoint = "http://10.2.11.40:2474"
  username      = "oneadmin"
  password      = "asd"
}

data "opennebula_virtual_network" "service" {
  name = "service"
}

data "opennebula_virtual_network" "private" {
  name = "private"
}

resource "opennebula_image" "router" {
  name         = "router"
  datastore_id = "1"
  persistent   = false
  permissions  = "642"
  dev_prefix   = "vd"
  driver       = "qcow2"
  path         = "http://10.2.11.40/images/service_VRouter.qcow2"
}

resource "opennebula_image" "backend" {
  name         = "backend"
  datastore_id = "1"
  persistent   = false
  permissions  = "642"
  dev_prefix   = "vd"
  driver       = "qcow2"
  path         = "http://10.2.11.40/images/alpine318.qcow2"
}

resource "opennebula_virtual_router_instance_template" "router" {
  name        = "router"
  permissions = "642"
  cpu         = "0.5"
  vcpu        = "1"
  memory      = "512"

  context = {
    SET_HOSTNAME = "$NAME"
    NETWORK      = "YES"
    TOKEN        = "YES"
    REPORT_READY = "NO"

    SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]"
    PASSWORD       = "asd"

    # NAT4

    ONEAPP_VNF_NAT4_ENABLED        = "YES"
    ONEAPP_VNF_NAT4_INTERFACES_OUT = "eth0"

    # DNS

    ONEAPP_VNF_DNS_ENABLED    = "YES"
    ONEAPP_VNF_DNS_INTERFACES = "eth1"

    # HAPROXY
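    # NOTE: <ETH0_EP0> below is a placeholder for the load balancer's public
    # (eth0) endpoint address; replace it to match your cluster.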

    ONEAPP_VNF_HAPROXY_ENABLED         = "YES"
    ONEAPP_VNF_HAPROXY_ONEGATE_ENABLED = "YES"

    ONEAPP_VNF_HAPROXY_LB0_IP   = "<ETH0_EP0>"
    ONEAPP_VNF_HAPROXY_LB0_PORT = "5432"
  }

  os {
    arch = "x86_64"
    boot = ""
  }

  disk {
    image_id = opennebula_image.router.id
  }

  graphics {
    keymap = "en-us"
    listen = "0.0.0.0"
    type   = "VNC"
  }
}

resource "opennebula_virtual_router" "router" {
  name        = "router"
  permissions = "642"

  instance_template_id = opennebula_virtual_router_instance_template.router.id
}

resource "opennebula_virtual_router_instance" "router" {
  count       = 2
  name        = "router_${count.index}"
  permissions = "642"
  memory      = "512"
  cpu         = "0.5"

  virtual_router_id = opennebula_virtual_router.router.id
}

resource "opennebula_virtual_router_nic" "eth0" {
  depends_on = [
    opennebula_virtual_router_instance.router,
  ]

  model       = "virtio"
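  # floating_ip allocates a floating (virtual) IP shared by both router
  # instances; this is the eth0 VIP shown in the diagram above.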
  floating_ip = true

  virtual_router_id = opennebula_virtual_router.router.id
  network_id        = data.opennebula_virtual_network.service.id
}

resource "opennebula_virtual_router_nic" "eth1" {
  depends_on = [
    opennebula_virtual_router_instance.router,
    opennebula_virtual_router_nic.eth0,
  ]

  model       = "virtio"
  floating_ip = true

  virtual_router_id = opennebula_virtual_router.router.id
  network_id        = data.opennebula_virtual_network.private.id
}

resource "opennebula_template" "vnf" {
  name        = "vnf"
  permissions = "642"
  cpu         = "0.5"
  vcpu        = "1"
  memory      = "512"

  context = {
    SET_HOSTNAME = "$NAME"
    NETWORK      = "YES"
    TOKEN        = "YES"
    REPORT_READY = "YES"

    SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]"
    PASSWORD       = "asd"

    # ROUTER4

    ONEAPP_VNF_ROUTER4_ENABLED = "YES"

    # NAT4

    ONEAPP_VNF_NAT4_ENABLED        = "YES"
    ONEAPP_VNF_NAT4_INTERFACES_OUT = "eth0"

    # DNS

    ONEAPP_VNF_DNS_ENABLED    = "YES"
    ONEAPP_VNF_DNS_INTERFACES = "eth1"

    # HAPROXY

    ONEAPP_VNF_HAPROXY_ENABLED         = "YES"
    ONEAPP_VNF_HAPROXY_ONEGATE_ENABLED = "YES"

    ONEAPP_VNF_HAPROXY_LB0_IP   = "<ETH0_EP0>"
    ONEAPP_VNF_HAPROXY_LB0_PORT = "5432"
  }

  os {
    arch = "x86_64"
    boot = ""
  }

  disk {
    image_id = opennebula_image.router.id
  }

  graphics {
    keymap = "en-us"
    listen = "0.0.0.0"
    type   = "VNC"
  }
}

resource "opennebula_template" "backend" {
  name        = "backend"
  permissions = "642"
  cpu         = "0.5"
  vcpu        = "1"
  memory      = "512"

  context = {
    SET_HOSTNAME = "$NAME"
    NETWORK      = "YES"
    TOKEN        = "YES"
    REPORT_READY = "YES"
    BACKEND      = "YES"

    SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]"

    START_SCRIPT_BASE64 = base64encode(
      <<-SHELL
          #!/bin/sh
          set -e
          apk --no-cache add iproute2 jq nginx
          LOCAL_IP=$(ip -j a s dev eth0 | jq -r '.[0].addr_info | map(select(.family == "inet"))[0].local')
          echo "$LOCAL_IP" > /var/lib/nginx/html/index.html
          cat > /etc/nginx/http.d/default.conf <<'EOT'
          server {
            listen 2345 default_server;
            location / {
              root /var/lib/nginx/html/;
            }
          }
          EOT
          rc-update add nginx default
          # HAPROXY
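          # Publish this backend's address and port via OneGate; since
          # ONEAPP_VNF_HAPROXY_ONEGATE_ENABLED is YES, HAProxy on the VR/VNF picks
          # these attributes up and adds the backend to lb0 dynamically.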
          onegate vm update --data "ONEGATE_HAPROXY_LB0_IP=<ETH0_EP0>"
          onegate vm update --data "ONEGATE_HAPROXY_LB0_PORT=5432"
          onegate vm update --data "ONEGATE_HAPROXY_LB0_SERVER_HOST=$LOCAL_IP"
          onegate vm update --data "ONEGATE_HAPROXY_LB0_SERVER_PORT=2345"
      SHELL
    )
  }

  os {
    arch = "x86_64"
    boot = ""
  }

  disk {
    image_id = opennebula_image.backend.id
  }

  graphics {
    keymap = "en-us"
    listen = "0.0.0.0"
    type   = "VNC"
  }
}

resource "opennebula_service_template" "service" {
  name        = "service"
  permissions = "642"

  template = jsonencode({
    TEMPLATE = {
      BODY = {
        name       = "service"
        deployment = "straight"
        roles = [
          {
            name                 = "vnf"
            cardinality          = 1
            min_vms              = 1
            cooldown             = 5
            elasticity_policies  = []
            scheduled_policies   = []
            vm_template          = tonumber(opennebula_template.vnf.id)
            vm_template_contents = <<-TEMPLATE
              NIC = [
                NAME       = "_NIC0",
                NETWORK_ID = "$service" ]
              NIC = [
                NAME       = "_NIC1",
                NETWORK_ID = "$private" ]
            TEMPLATE
          },
          {
            name                 = "backend"
            parents              = ["vnf"]
            cardinality          = 2
            min_vms              = 1
            cooldown             = 5
            elasticity_policies  = []
            scheduled_policies   = []
            vm_template          = tonumber(opennebula_template.backend.id)
            vm_template_contents = <<-TEMPLATE
              NIC = [
                NAME       = "_NIC0",
                DNS        = "$${vnf.TEMPLATE.CONTEXT.ETH1_IP}",
                GATEWAY    = "$${vnf.TEMPLATE.CONTEXT.ETH1_IP}",
                NETWORK_ID = "$private" ]
            TEMPLATE
          },
        ]
        networks = {
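          # OneFlow network inputs; the $service and $private references in the
          # vm_template_contents above resolve to these networks at deploy time.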
          service = "M|network|service||id:${data.opennebula_virtual_network.service.id}"
          private = "M|network|private||id:${data.opennebula_virtual_network.private.id}"
        }
      }
    }
  })
}

resource "opennebula_service" "service" {
  depends_on = [
    opennebula_virtual_router_nic.eth0,
    opennebula_virtual_router_nic.eth1,
  ]

  name = "service"

  template_id = opennebula_service_template.service.id

  timeouts {
    create = "15m"
    delete = "5m"
  }
}

Note

You can use one-apps to build service_VRouter.qcow2 and alpine318.qcow2, then serve them with NGINX or any other HTTP server (one-liners like python3 -m http.server 8080 should do just fine).
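
A minimal sketch of that build-and-serve step (it assumes a one-apps checkout, that alpine318 is the Make target producing alpine318.qcow2, and that built images land in the repository's export/ directory):

$ make service_VRouter          # builds the VRouter appliance image
$ make alpine318                # builds the plain Alpine image for the backends
$ cd export
$ python3 -m http.server 8080   # point the opennebula_image "path" URLs at this server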

Your final workflow would look like:

  1. Make a change in the code, somewhere here or maybe here.
  2. Make sure unit tests are passing: run appliances/lib/tests.sh and appliances/VRouter/tests.sh.
  3. Add more unit tests specific to your change or feature.
  4. Build the VRouter image: make service_VRouter (and the other one if you don't have it already).
  5. Serve the images from a local HTTP server.
  6. (Re)Create the environment using terraform commands: terraform init, terraform destroy, terraform apply.
  7. Connect to both VR/VNF instances (ssh root@10.2.11.200 and ssh root@10.2.11.203) and examine them, as sketched below.
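
Steps 6 and 7 boil down to something like this (a sketch; it assumes the Terraform configuration above is saved in the current directory and that the IP addresses match your cluster):

$ terraform init
$ terraform destroy -auto-approve   # tear down the previous iteration, if any
$ terraform apply -auto-approve
$ ssh root@10.2.11.200 cat /etc/haproxy/servers.cfg   # VR (floating IP)
$ ssh root@10.2.11.203 cat /etc/haproxy/servers.cfg   # VNF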

In this particular case you should see two identical dynamic load-balancers health-checking the same two backends:

$ cat /etc/haproxy/servers.cfg
frontend lb0_5432
    mode tcp
    bind 10.2.11.200:5432
    default_backend lb0_5432

backend lb0_5432
    mode tcp
    balance roundrobin
    option tcp-check
    server lb0_172.20.0.104_2345 172.20.0.104:2345 check observe layer4 error-limit 50 on-error mark-down
    server lb0_172.20.0.105_2345 172.20.0.105:2345 check observe layer4 error-limit 50 on-error mark-down
$ cat /etc/haproxy/servers.cfg
frontend lb0_5432
    mode tcp
    bind 10.2.11.203:5432
    default_backend lb0_5432

backend lb0_5432
    mode tcp
    balance roundrobin
    option tcp-check
    server lb0_172.20.0.104_2345 172.20.0.104:2345 check observe layer4 error-limit 50 on-error mark-down
    server lb0_172.20.0.105_2345 172.20.0.105:2345 check observe layer4 error-limit 50 on-error mark-down

You should be able to connect to both backends through both load-balancers:

$ curl http://10.2.11.200:5432
172.20.0.104
$ curl http://10.2.11.200:5432
172.20.0.105
$ curl http://10.2.11.203:5432
172.20.0.104
$ curl http://10.2.11.203:5432
172.20.0.105