
request help: config_etcd.lua has been consistently and frequently reporting errors #2695

Closed
Applenice opened this issue Nov 10, 2020 · 68 comments
Labels
checking (check first if this issue occurred), discuss

Comments

@Applenice
Contributor

Issue description

After installing APISIX 2.0, the apisix/logs/error.log file shows that config_etcd.lua is consistently and frequently reporting errors. The problem persists after reinstalling APISIX and etcd, with no configuration modified in the meantime. Nearly 2,200 lines of error logs were written in about 20 minutes, similar to the following:

2020/11/10 20:18:30 [error] 31545#31545: *7224 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/plugin_metadata, context: ngx.timer
2020/11/10 20:18:30 [error] 31545#31545: *7226 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/proto, context: ngx.timer
2020/11/10 20:18:30 [error] 31545#31545: *7225 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/ssl, context: ngx.timer
2020/11/10 20:18:30 [error] 31545#31545: *7227 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/consumers, context: ngx.timer
2020/11/10 20:18:30 [error] 31545#31545: *7228 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/upstreams, context: ngx.timer
2020/11/10 20:18:30 [error] 31550#31550: *7229 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/services, context: ngx.timer
2020/11/10 20:18:30 [error] 31550#31550: *7232 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/proto, context: ngx.timer
2020/11/10 20:18:30 [error] 31550#31550: *7233 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/global_rules, context: ngx.timer
2020/11/10 20:18:30 [error] 31550#31550: *7231 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/upstreams, context: ngx.timer
2020/11/10 20:18:30 [error] 31550#31550: *7230 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/plugin_metadata, context: ngx.timer

Installation method

yum install -y apisix-2.0-0.el7.noarch.rpm

Environment

  • apisix version (cmd: apisix version): 2.0
  • OS: CentOS Linux release 7.8.2003 (Core)
$ apisix version
2.0
$ etcd --version
etcd Version: 3.4.13
Git SHA: ae9734ed2
Go Version: go1.12.17
Go OS/Arch: linux/amd64

What should I do?

@nic-chen
Member

nic-chen commented Nov 10, 2020

Hi, does your etcd have auth enabled?

@idbeta
Contributor

idbeta commented Nov 11, 2020

Can you try to check the etcd data like this?

etcdctl get --prefix "/apisix"

@Applenice
Contributor Author

Hi, does your etcd have auth enabled?

No configuration was changed; etcd is in a freshly installed state.

@Applenice
Contributor Author

Can you try to check the etcd data like this?

etcdctl get --prefix "/apisix"

No information was returned after execution. 😐😐

@idbeta
Contributor

idbeta commented Nov 11, 2020

Can you try to run make init in the APISIX directory?

@Applenice
Contributor Author

Can you try to run make init in the APISIX directory?

There is no make command available.

$ pwd
/usr/local/apisix
$ ls
apisix  client_body_temp  conf  deps  fastcgi_temp  logs  proxy_temp  scgi_temp  uwsgi_temp
$ cd apisix/
$ pwd
/usr/local/apisix/apisix
$ ls
admin  api_router.lua  balancer  balancer.lua  consumer.lua  core  core.lua  debug.lua  discovery  http  init.lua  plugin.lua  plugins  router.lua  schema_def.lua  script.lua  ssl  stream  upstream.lua  utils

No change after trying to execute apisix init and apisix init_etcd😔

$ apisix
Usage: apisix [action] <argument>

help:       show this message, then exit
init:       initialize the local nginx.conf
init_etcd:  initialize the data of etcd
start:      start the apisix server
stop:       stop the apisix server
restart:    restart the apisix server
reload:     reload the apisix server
version:    print the version of apisix

@souzens

souzens commented Nov 11, 2020

Same problem here. APISIX 2.0 does not work, running in k8s with the apache/apisix:latest Docker image.

But apisix-dashboard 2.0rc3 works well.

./etcd --version
etcd Version: 3.4.13
Git SHA: ae9734ed2
Go Version: go1.12.17
Go OS/Arch: linux/amd64

./etcdctl --endpoints=10.111.9.154:2379 get --prefix "/apisix"
/apisix/routes/328088132001988967
{"id":"328088132001988967","create_time":1605085365,"update_time":1605086330,"uris":["/*"],"name":"test-pre","methods":["GET","HEAD","POST","PUT","DELETE","OPTIONS","PATCH"],"hosts":["venice.test-pre.com"],"vars":[],"upstream":{"nodes":[{"host":"venice.test-pre.svc.cluster.local","port":80,"weight":1}],"timeout":{"connect":6000,"read":6000,"send":6000},"type":"roundrobin"}}

/usr/local/apisix $ curl http://127.0.0.1:9080/apisix/admin/routes/328088132001988967 -H 'X-API-KEY: edd1c9f034335f136f87ad84b625c8f1'
<html>
<head><title>500 Internal Server Error</title></head>
<body>
<center><h1>500 Internal Server Error</h1></center>
<hr><center>openresty</center>
</body>
</html>

/usr/local/apisix/logs $ tail -n 10 error.log 
2020/11/11 17:57:00 [error] 26#26: *223005 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/routes, context: ngx.timer
2020/11/11 17:57:07 [error] 32#32: *226586 lua entry thread aborted: runtime error: /usr/local/apisix/apisix/core/etcd.lua:80: attempt to index field 'body' (a nil value)
stack traceback:
coroutine 0:
        /usr/local/apisix/apisix/core/etcd.lua: in function 'get'
        /usr/local/apisix/apisix/admin/routes.lua:166: in function </usr/local/apisix/apisix/admin/routes.lua:160>
        /usr/local/apisix/apisix/admin/init.lua:146: in function 'handler'
        /usr/local/apisix//deps/share/lua/5.1/resty/radixtree.lua:730: in function 'dispatch'
        /usr/local/apisix/apisix/init.lua:754: in function 'http_admin'
        content_by_lua(nginx.conf:148):2: in main chunk, client: 127.0.0.1, server: , request: "GET /apisix/admin/routes/328088132001988967 HTTP/1.1", host: "127.0.0.1:9080"

@idbeta
Contributor

idbeta commented Nov 11, 2020

cc @gxthrj Do you have any idea?

@souzens

souzens commented Nov 11, 2020

Just tested: APISIX 2.0 reports the error above when running with etcd 3.4.13, but it runs OK with etcd 3.4.9.

@moonming
Member

Just tested: APISIX 2.0 reports the error above when running with etcd 3.4.13, but it runs OK with etcd 3.4.9.

@nic-chen please take a look

@nic-chen
Member

Just tested: APISIX 2.0 reports the error above when running with etcd 3.4.13, but it runs OK with etcd 3.4.9.

@nic-chen please take a look

working on it.

@nic-chen
Member

@souzens

Thanks for the feedback.

But it works fine in my environment using etcd 3.4.13. Could you please provide more details? Thanks.

@idbeta please help check. thanks

@ziyou434

I also have this problem, and both etcd 3.4.13 and etcd 3.4.9 report errors.

@nic-chen
Member

I also have this problem, and both etcd 3.4.13 and etcd 3.4.9 report errors.

Thanks for the feedback.

Could you provide the steps and config details? Thanks.

@ziyou434

ziyou434 commented Nov 12, 2020

I also have this problem, and both etcd 3.4.13 and etcd 3.4.9 report errors.

Thanks for the feedback.

Could you provide the steps and config details? Thanks.

apisix2.0-alpine
helm install etcd bitnami/etcd -n api-gateway --set auth.rbac.enabled=false --set image.tag=3.4.9

I have no name!@etcd-0:/opt/bitnami/etcd$ etcdctl get --prefix "/apisix"
/apisix/consumers/
init_dir
/apisix/global_rules/
init_dir
/apisix/node_status/
init_dir
/apisix/plugin_metadata/
init_dir
/apisix/plugins/
init_dir
/apisix/proto/
init_dir
/apisix/routes/
init_dir
/apisix/services/
init_dir
/apisix/ssl/
init_dir
/apisix/stream_routes/
init_dir
/apisix/upstreams/
init_dir

@tokers
Contributor

tokers commented Nov 12, 2020

@ziyou434 Could you provide the options used to start etcd?

@ziyou434

ziyou434 commented Nov 12, 2020

@ziyou434 Could you provide the options used to start etcd?

I used the bitnami/etcd chart with --set auth.rbac.enabled=false.
The chart uses setup.sh to start etcd:

setup.sh

#!/bin/bash

set -o errexit
set -o pipefail
set -o nounset

# Debug section
exec 3>&1
exec 4>&2

if [[ "${BITNAMI_DEBUG:-false}" = true ]]; then
    echo "==> Bash debug is on"
else
    echo "==> Bash debug is off"
    exec 1>/dev/null
    exec 2>/dev/null
fi

# Constants
HOSTNAME="$(hostname -s)"
AUTH_OPTIONS=""
export ETCDCTL_ENDPOINTS="etcd-0.etcd-headless.api-gateway.svc.cluster.local:2380"
export ROOT_PASSWORD="${ETCD_ROOT_PASSWORD:-}"
if [[ -n "${ETCD_ROOT_PASSWORD:-}" ]]; then
  unset ETCD_ROOT_PASSWORD
fi
# Functions
## Store member id for later member replacement
store_member_id() {
    while ! etcdctl $AUTH_OPTIONS member list; do sleep 1; done
    etcdctl $AUTH_OPTIONS member list | grep -w "$HOSTNAME" | awk '{ print $1}' | awk -F "," '{ print $1}' > "$ETCD_DATA_DIR/member_id"
    echo "==> Stored member id: $(cat ${ETCD_DATA_DIR}/member_id)" 1>&3 2>&4
    exit 0
}
## Configure RBAC
configure_rbac() {
    # When there's more than one replica, we can assume the 1st member
    # to be created is "etcd-0" since a statefulset is used
    if [[ -n "${ROOT_PASSWORD:-}" ]] && [[ "$HOSTNAME" == "etcd-0" ]]; then
        echo "==> Configuring RBAC authentication!" 1>&3 2>&4
        etcd &
        ETCD_PID=$!
        while ! etcdctl $AUTH_OPTIONS member list; do sleep 1; done
        echo "$ROOT_PASSWORD" | etcdctl $AUTH_OPTIONS user add root --interactive=false
        etcdctl $AUTH_OPTIONS auth enable
        kill "$ETCD_PID"
        sleep 5
    fi
}
## Checks whether there was a disaster or not
is_disastrous_failure() {
    local endpoints_array=(${ETCDCTL_ENDPOINTS//,/ })
    local active_endpoints=0
    local -r min_endpoints=$(((1 + 1)/2))

    for e in "${endpoints_array[@]}"; do
        if [[ "$e" != "$ETCD_ADVERTISE_CLIENT_URLS" ]] && (unset -v ETCDCTL_ENDPOINTS; etcdctl $AUTH_OPTIONS  endpoint health --endpoints="$e"); then
            active_endpoints=$((active_endpoints + 1))
        fi
    done
    if [[ $active_endpoints -lt $min_endpoints ]]; then
        true
    else
        false
    fi
}

## Check whether the member was successfully removed from the cluster
should_add_new_member() {
    return_value=0
    if (grep -E "^Member[[:space:]]+[a-z0-9]+\s+removed\s+from\s+cluster\s+[a-z0-9]+$" "$(dirname "$ETCD_DATA_DIR")/member_removal.log") || \
       ! ([[ -d "$ETCD_DATA_DIR/member/snap" ]] && [[ -f "$ETCD_DATA_DIR/member_id" ]]); then
        rm -rf $ETCD_DATA_DIR/* 1>&3 2>&4
    else
        return_value=1
    fi
    rm -f "$(dirname "$ETCD_DATA_DIR")/member_removal.log" 1>&3 2>&4
    return $return_value
}

if [[ ! -d "$ETCD_DATA_DIR" ]]; then
    echo "==> Creating data dir..." 1>&3 2>&4
    echo "==> There is no data at all. Initializing a new member of the cluster..." 1>&3 2>&4
    store_member_id & 1>&3 2>&4
    configure_rbac
else
    echo "==> Detected data from previous deployments..." 1>&3 2>&4
    if [[ $(stat -c "%a" "$ETCD_DATA_DIR") != *700 ]]; then
        echo "==> Setting data directory permissions to 700 in a recursive way (required in etcd >=3.4.10)" 1>&3 2>&4
        chmod -R 700 $ETCD_DATA_DIR
    else
        echo "==> The data directory is already configured with the proper permissions" 1>&3 2>&4
    fi
    if [[ 1 -eq 1 ]]; then
        echo "==> Single node cluster detected!!" 1>&3 2>&4
    elif is_disastrous_failure; then
        echo "==> Cluster not responding!!" 1>&3 2>&4
        echo "==> Disaster recovery is disabled, the cluster will try to recover on it's own..." 1>&3 2>&4
    elif should_add_new_member; then
        echo "==> Adding new member to existing cluster..." 1>&3 2>&4
        etcdctl $AUTH_OPTIONS member add "$HOSTNAME" --peer-urls="http://${HOSTNAME}.etcd-headless.api-gateway.svc.cluster.local:2380" | grep "^ETCD_" > "$ETCD_DATA_DIR/new_member_envs"
        sed -ie "s/^/export /" "$ETCD_DATA_DIR/new_member_envs"
        echo "==> Loading env vars of existing cluster..." 1>&3 2>&4
        source "$ETCD_DATA_DIR/new_member_envs" 1>&3 2>&4
        store_member_id & 1>&3 2>&4
    else
        echo "==> Updating member in existing cluster..." 1>&3 2>&4
        etcdctl $AUTH_OPTIONS member update "$(cat "$ETCD_DATA_DIR/member_id")" --peer-urls="http://${HOSTNAME}.etcd-headless.api-gateway.svc.cluster.local:2380" 1>&3 2>&4
    fi
fi
exec etcd 1>&3 2>&4

@tokers
Contributor

tokers commented Nov 12, 2020

@ziyou434 Could you provide the options used to start etcd?

I used the bitnami/etcd chart with --set auth.rbac.enabled=false. The chart uses setup.sh to start etcd (script quoted above).

The startup script seems normal. Could you paste some etcd logs?

@ziyou434

@ziyou434 Could you provide the options used to start etcd?

I used the bitnami/etcd chart with --set auth.rbac.enabled=false. The chart uses setup.sh to start etcd (script quoted above).

The startup script seems normal. Could you paste some etcd logs?

Sure.

2020-11-12 03:02:10.270824 I | pkg/flags: recognized and used environment variable ETCD_ADVERTISE_CLIENT_URLS=http://etcd-0.etcd-headless.api-gateway.svc.cluster.local:2379
2020-11-12 03:02:10.270943 I | pkg/flags: recognized and used environment variable ETCD_DATA_DIR=/bitnami/etcd/data
2020-11-12 03:02:10.271003 I | pkg/flags: recognized and used environment variable ETCD_INITIAL_ADVERTISE_PEER_URLS=http://etcd-0.etcd-headless.api-gateway.svc.cluster.local:2380
2020-11-12 03:02:10.271030 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
2020-11-12 03:02:10.271049 I | pkg/flags: recognized and used environment variable ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
2020-11-12 03:02:10.271073 I | pkg/flags: recognized and used environment variable ETCD_NAME=etcd-0
2020-11-12 03:02:10.271194 W | pkg/flags: unrecognized environment variable ETCD_SERVICE_HOST=172.20.199.58
2020-11-12 03:02:10.271210 W | pkg/flags: unrecognized environment variable ETCD_PORT_2380_TCP_ADDR=172.20.199.58
2020-11-12 03:02:10.271222 W | pkg/flags: unrecognized environment variable ETCD_PORT_2379_TCP=tcp://172.20.199.58:2379
2020-11-12 03:02:10.271241 W | pkg/flags: unrecognized environment variable ETCD_PORT_2380_TCP_PROTO=tcp
2020-11-12 03:02:10.271254 W | pkg/flags: unrecognized environment variable ETCD_PORT_2379_TCP_PORT=2379
2020-11-12 03:02:10.271262 W | pkg/flags: unrecognized environment variable ETCD_PORT_2379_TCP_ADDR=172.20.199.58
2020-11-12 03:02:10.271273 W | pkg/flags: unrecognized environment variable ETCD_PORT_2380_TCP_PORT=2380
2020-11-12 03:02:10.271287 W | pkg/flags: unrecognized environment variable ETCD_PORT_2380_TCP=tcp://172.20.199.58:2380
2020-11-12 03:02:10.271309 W | pkg/flags: unrecognized environment variable ETCD_SERVICE_PORT_CLIENT=2379
2020-11-12 03:02:10.271323 W | pkg/flags: unrecognized environment variable ETCD_SERVICE_PORT_PEER=2380
2020-11-12 03:02:10.271335 W | pkg/flags: unrecognized environment variable ETCD_PORT_2379_TCP_PROTO=tcp
2020-11-12 03:02:10.271350 W | pkg/flags: unrecognized environment variable ETCD_PORT=tcp://172.20.199.58:2379
2020-11-12 03:02:10.271361 W | pkg/flags: unrecognized environment variable ETCD_SERVICE_PORT=2379
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-11-12 03:02:10.271400 I | etcdmain: etcd Version: 3.4.9
2020-11-12 03:02:10.271413 I | etcdmain: Git SHA: 54ba95891
2020-11-12 03:02:10.271423 I | etcdmain: Go Version: go1.12.17
2020-11-12 03:02:10.271429 I | etcdmain: Go OS/Arch: linux/amd64
2020-11-12 03:02:10.271439 I | etcdmain: setting maximum number of CPUs to 2, total number of available CPUs is 2
2020-11-12 03:02:10.271522 W | etcdmain: found invalid file/dir member_id under data dir /bitnami/etcd/data (Ignore this if you are upgrading etcd)
2020-11-12 03:02:10.271540 N | etcdmain: the server is already initialized as member before, starting as etcd member...
[WARNING] Deprecated '--logger=capnslog' flag is set; use '--logger=zap' flag instead
2020-11-12 03:02:10.271813 I | embed: name = etcd-0
2020-11-12 03:02:10.271836 I | embed: data dir = /bitnami/etcd/data
2020-11-12 03:02:10.271847 I | embed: member dir = /bitnami/etcd/data/member
2020-11-12 03:02:10.271859 I | embed: heartbeat = 100ms
2020-11-12 03:02:10.271865 I | embed: election = 1000ms
2020-11-12 03:02:10.271881 I | embed: snapshot count = 100000
2020-11-12 03:02:10.271895 I | embed: advertise client URLs = http://etcd-0.etcd-headless.api-gateway.svc.cluster.local:2379
2020-11-12 03:02:10.271908 I | embed: initial advertise peer URLs = http://etcd-0.etcd-headless.api-gateway.svc.cluster.local:2380
2020-11-12 03:02:10.271920 I | embed: initial cluster =
2020-11-12 03:02:10.276156 I | etcdserver: restarting member 8ecb0b7cde5e4235 in cluster 2b0eb2956f410bc1 at commit index 107
raft2020/11/12 03:02:10 INFO: 8ecb0b7cde5e4235 switched to configuration voters=()
raft2020/11/12 03:02:10 INFO: 8ecb0b7cde5e4235 became follower at term 4
raft2020/11/12 03:02:10 INFO: newRaft 8ecb0b7cde5e4235 [peers: [], term: 4, commit: 107, applied: 0, lastindex: 107, lastterm: 4]
2020-11-12 03:02:10.279984 W | auth: simple token is not cryptographically signed
2020-11-12 03:02:10.283770 I | etcdserver: starting server... [version: 3.4.9, cluster version: to_be_decided]
raft2020/11/12 03:02:10 INFO: 8ecb0b7cde5e4235 switched to configuration voters=(10289330404592599605)
2020-11-12 03:02:10.284749 I | etcdserver/membership: added member 8ecb0b7cde5e4235 [http://etcd-0.etcd-headless.api-gateway.svc.cluster.local:2380] to cluster 2b0eb2956f410bc1
2020-11-12 03:02:10.285151 N | etcdserver/membership: set the initial cluster version to 3.4
2020-11-12 03:02:10.285318 I | etcdserver/api: enabled capabilities for version 3.4
2020-11-12 03:02:10.287927 I | embed: listening for peers on [::]:2380
raft2020/11/12 03:02:11 INFO: 8ecb0b7cde5e4235 is starting a new election at term 4
raft2020/11/12 03:02:11 INFO: 8ecb0b7cde5e4235 became candidate at term 5
raft2020/11/12 03:02:11 INFO: 8ecb0b7cde5e4235 received MsgVoteResp from 8ecb0b7cde5e4235 at term 5
raft2020/11/12 03:02:11 INFO: 8ecb0b7cde5e4235 became leader at term 5
raft2020/11/12 03:02:11 INFO: raft.node: 8ecb0b7cde5e4235 elected leader 8ecb0b7cde5e4235 at term 5
2020-11-12 03:02:11.477491 I | etcdserver: published {Name:etcd-0 ClientURLs:[http://etcd-0.etcd-headless.api-gateway.svc.cluster.local:2379]} to cluster 2b0eb2956f410bc1
2020-11-12 03:02:11.477626 I | embed: ready to serve client requests
2020-11-12 03:02:11.478714 N | embed: serving insecure client requests on [::]:2379, this is strongly discouraged!

@souzens

souzens commented Nov 12, 2020

@souzens

Thanks for feedback.

but it works fine on my env using etcd-3.4.13. could you please provide more details ? thanks.

@idbeta please help check. thanks

Did you deploy etcd in single mode or cluster mode?

I tested that APISIX 2.0 works OK with etcd in single mode:

nohup /home/admin/etcd-v3.4.9-linux-amd64/etcd --data-dir /home/admin/etcd/data --listen-client-urls http://10.111.9.155:2379 --advertise-client-urls http://10.111.9.155:2379 >> /home/admin/etcd.log 2>&1 &

but in cluster mode it reports errors.

etcd.conf

name: etcd@10.111.9.155
data-dir: /home/admin/etcd/data
listen-peer-urls: http://10.111.9.155:2380
listen-client-urls: http://10.111.9.155:2379
advertise-client-urls: http://10.111.9.155:2379
listen-peer-urls: http://10.111.9.155:2380
initial-advertise-peer-urls: http://10.111.9.155:2380
initial-cluster-token: etcd-cluster-token
initial-cluster-state: new
initial-cluster: etcd@10.111.9.154=http://10.111.9.154:2380,etcd@10.111.21.245=http://10.111.21.245:2380,etcd@10.111.9.155=http://10.111.9.155:2380
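When pointing APISIX at a three-node cluster like this, all client endpoints would normally be listed in the etcd section of conf/config.yaml. A sketch only, with the addresses taken from the etcd.conf above; verify the exact field layout against the config-default.yaml shipped with your version:

```yaml
etcd:
  host:                              # list every client endpoint of the cluster
    - "http://10.111.9.154:2379"
    - "http://10.111.21.245:2379"
    - "http://10.111.9.155:2379"
  prefix: "/apisix"                  # key namespace APISIX reads and writes
  timeout: 30
```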

@nic-chen
Member

@souzens
Thanks for feedback.
but it works fine on my env using etcd-3.4.13. could you please provide more details ? thanks.
@idbeta please help check. thanks

Did you deploy etcd in single mode or cluster mode?

I tested that APISIX 2.0 works OK with etcd in single mode:

nohup /home/admin/etcd-v3.4.9-linux-amd64/etcd --data-dir /home/admin/etcd/data --listen-client-urls http://10.111.9.155:2379 --advertise-client-urls http://10.111.9.155:2379 >> /home/admin/etcd.log 2>&1 &

but in cluster mode it reports errors.

etcd.conf

name: etcd@10.111.9.155
data-dir: /home/admin/etcd/data
listen-peer-urls: http://10.111.9.155:2380
listen-client-urls: http://10.111.9.155:2379
advertise-client-urls: http://10.111.9.155:2379
listen-peer-urls: http://10.111.9.155:2380
initial-advertise-peer-urls: http://10.111.9.155:2380
initial-cluster-token: etcd-cluster-token
initial-cluster-state: new
initial-cluster: etcd@10.111.9.154=http://10.111.9.154:2380,etcd@10.111.21.245=http://10.111.21.245:2380,etcd@10.111.9.155=http://10.111.9.155:2380

I tried an etcd cluster; it works fine too.

Please confirm that your etcd cluster is fully ready and that all nodes are working well.

@yankunsam

Just tested: APISIX 2.0 reports the error above when running with etcd 3.4.13, but it runs OK with etcd 3.4.9.

docker.io/bitnami/etcd:3.4.9-debian-10-r34? The same errors.

@tokers
Contributor

tokers commented Nov 12, 2020

That's strange. Could you use tcpdump in your environment? Let's capture some HTTP packets between APISIX and etcd to see whether the body is abnormal.

@yankunsam

How do I tell APISIX the password of etcd?

@yankunsam

etcdserver: failed to apply request "header:<ID:14393352380727342208 > put:<key:"/apisix/proto/" value_size:8 >" with response "" took (1.669µs) to execute, err is auth: user name is empty
2020-11-12 06:44:28.061760 W | etcdserver: failed to apply request "header:<ID:14393352380727342209 > put:<key:"/apisix/plugin_metadata/" value_size:8 >" with response "" took (1.479µs) to execute, err is auth: user name is empty

@tokers
Contributor

tokers commented Nov 12, 2020

How do I tell APISIX the password of etcd?

See https://github.com/apache/apisix/blob/master/conf/config-default.yaml for the details.
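For illustration, enabling etcd auth on the APISIX side amounts to filling in the etcd section of conf/config.yaml. A minimal sketch, assuming the user/password field names from config-default.yaml; since 2.0 was new at the time, verify these fields exist in your installed version:

```yaml
etcd:
  host:
    - "http://127.0.0.1:2379"       # your etcd endpoint
  prefix: "/apisix"
  timeout: 30
  user: root                        # etcd account created before running `auth enable`
  password: "your-etcd-password"    # placeholder; substitute your real password
```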

@csh995426531

Me too, with etcd version 3.4.13.

@ziyou434

I find that etcdctl get /apisix/services returns null: the key exists but the value is null. /apisix/routes and the others are the same.
I guess errors may have occurred in admin_init, but I haven't found the cause.

@tokers
Contributor

tokers commented Nov 12, 2020

That's strange. Could you use tcpdump in your environment? Let's capture some HTTP packets between APISIX and etcd to see whether the body is abnormal.

@ziyou434 Could you also provide the config.yaml of APISIX.

@ziyou434

sudo tcpdump -Ans0 'tcp and port 2379' -iany

I find that APISIX's etcd key is /apisix/routes, but in etcd it is /apisix/routes/ with an extra "/". The key in etcd is different from the one APISIX uses; maybe this is the error.

@ziyou434

That's really strange.

@ziyou434 Could you help to log in to the etcd container or the APISIX container to capture some packets?

sudo tcpdump -Ans0 'tcp and port 2379' -iany

We need to observe the data on the wire (HTTP request/response).

PS: you may need to install tcpdump.

There is no yum in the container's command line, so I cannot install tcpdump.

@ziyou434

ziyou434 commented Nov 12, 2020

I could not find the source of the problem, but I found a workaround: when I changed the etcd host from "http://my-etcd-headless.{namespace}.svc.cluster.local:2379" to "http://{ip address}:2379", no error was reported.
There seems to be a problem with address resolution.
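A sketch of that workaround in conf/config.yaml (the IP below is a placeholder for the resolved pod or service address):

```yaml
# conf/config.yaml -- workaround sketch: point APISIX at a concrete address
# instead of the headless-service DNS name (IP below is a placeholder)
etcd:
  host:
    - "http://10.128.8.88:2379"
```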

@nic-chen
Member

I could not find the source of the problem, but I found a workaround: when I changed the etcd host from "http://my-etcd-headless.{namespace}.svc.cluster.local:2379" to "http://{ip address}:2379", no error was reported.
There seems to be a problem with address resolution.

I think this is the reason.

@nic-chen
Member

@souzens @yankunsam do you use a domain name as the etcd host too?

@csh995426531

Maybe it's because of this problem: https://stackoverflow.com/questions/54788528/etcd-v3-api-unavailable/56800553#56800553

@juzhiyuan juzhiyuan added checking check first if this issue occurred discuss labels Nov 12, 2020
@souzens

souzens commented Nov 13, 2020

@souzens @yankunsam do you use a domain name as the etcd host too?

Case 1: etcd deployed in cluster mode in k8s, using the domain name "http://apisix-etcd-0.apisix-etcd.apisix.svc.cluster.local:2379"

REPORT ERROR

2020/11/13 10:35:25 [error] 54#54: *5375 [lua] config_etcd.lua:448: failed to fetch data from etcd: /usr/local/apisix//deps/share/lua/5.1/resty/etcd/v3.lua:149: attempt to concatenate local 'err' (a nil value)
stack traceback:
        /usr/local/apisix//deps/share/lua/5.1/resty/etcd/v3.lua:149: in function 'new'
        /usr/local/apisix/apisix/core/config_etcd.lua:416: in function </usr/local/apisix/apisix/core/config_etcd.lua:414>
        [C]: in function 'xpcall'
        /usr/local/apisix/apisix/core/config_etcd.lua:414: in function </usr/local/apisix/apisix/core/config_etcd.lua:405>,  etcd key: /apisix/consumers, context: ngx.timer
2020/11/13 10:35:25 [error] 54#54: *5376 [lua] config_etcd.lua:448: failed to fetch data from etcd: /usr/local/apisix//deps/share/lua/5.1/resty/etcd/v3.lua:149: attempt to concatenate local 'err' (a nil value)
stack traceback:
        /usr/local/apisix//deps/share/lua/5.1/resty/etcd/v3.lua:149: in function 'new'
        /usr/local/apisix/apisix/core/config_etcd.lua:416: in function </usr/local/apisix/apisix/core/config_etcd.lua:414>
        [C]: in function 'xpcall'
        /usr/local/apisix/apisix/core/config_etcd.lua:414: in function </usr/local/apisix/apisix/core/config_etcd.lua:405>,  etcd key: /apisix/upstreams, context: ngx.timer

Case 2: etcd deployed in cluster mode in k8s, using the pod IP "http://10.128.8.88:2379"
RUN OK

Case 3: etcd deployed in cluster mode in a VM, using the server IP "http://10.111.9.155:2379"

REPORT ERROR

2020/11/10 20:18:30 [error] 31545#31545: *7224 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/plugin_metadata, context: ngx.timer
2020/11/10 20:18:30 [error] 31545#31545: *7226 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/proto, context: ngx.timer
2020/11/10 20:18:30 [error] 31545#31545: *7225 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/ssl, context: ngx.timer
2020/11/10 20:18:30 [error] 31545#31545: *7227 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/consumers, context: ngx.timer
2020/11/10 20:18:30 [error] 31545#31545: *7228 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/upstreams, context: ngx.timer
2020/11/10 20:18:30 [error] 31550#31550: *7229 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/services, context: ngx.timer
2020/11/10 20:18:30 [error] 31550#31550: *7232 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/proto, context: ngx.timer
2020/11/10 20:18:30 [error] 31550#31550: *7233 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/global_rules, context: ngx.timer
2020/11/10 20:18:30 [error] 31550#31550: *7231 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/upstreams, context: ngx.timer
2020/11/10 20:18:30 [error] 31550#31550: *7230 [lua] config_etcd.lua:428: failed to fetch data from etcd: failed to read etcd dir,  etcd key: /apisix/plugin_metadata, context: ngx.timer

Case 4: etcd deployed in single-node mode in a VM, using the server IP "http://10.111.9.155:2379"

RUN OK

@idbeta
Contributor

idbeta commented Nov 13, 2020

@souzens I really want to give you a like, but I don’t have permission.

@tokers
Contributor

tokers commented Nov 13, 2020

Maybe it's because of this problem: https://stackoverflow.com/questions/54788528/etcd-v3-api-unavailable/56800553#56800553

Indeed. I have reproduced this problem after disabling the gRPC gateway.

@souzens @ziyou434 you may add enable-grpc-gateway explicitly to solve this problem.

@membphis @spacewander I think some notes should be added to the documentation to remind users about the deployment of etcd.
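For reference, a sketch of turning the gateway on (per the linked Stack Overflow answer, etcd 3.4 started from a config file can end up with the gateway disabled unless it is set explicitly):

```yaml
# etcd.conf.yml -- sketch; set the gateway explicitly when starting etcd 3.4
# from a config file (see the linked issue for the background)
enable-grpc-gateway: true
listen-client-urls: "http://0.0.0.0:2379"
```

On the command line the equivalent is etcd --enable-grpc-gateway. APISIX talks to etcd's v3 API over this HTTP/JSON gateway, so with the gateway off, every read of /apisix/* fails as shown above.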

@souzens

souzens commented Nov 13, 2020

Maybe it's because of this problem: https://stackoverflow.com/questions/54788528/etcd-v3-api-unavailable/56800553#56800553

Indeed. I have reproduced this problem after disabling the gRPC gateway.

@souzens @ziyou434 you may add enable-grpc-gateway explicitly to solve this problem.

@membphis @spacewander I think some notes should be added to the documentation to remind users about the deployment of etcd.

Indeed, it was caused by this. APISIX runs well when I enable enable-grpc-gateway. Thank you all!

@yankunsam

@souzens @yankunsam do you use a domain name as the etcd host too?

It works now. Thanks

@Applenice
Contributor Author

Maybe it's because of this problem: https://stackoverflow.com/questions/54788528/etcd-v3-api-unavailable/56800553#56800553

Indeed, it works fine after adding it, thanks!

@Applenice
Contributor Author

After adding enable-grpc-gateway: true, it worked for a while and then started reporting errors at a high frequency. The error.log file reached 24 MB in 40 minutes. 😳

2020/11/13 16:20:18 [error] 8944#8944: *831322 [lua] config_etcd.lua:448: failed to fetch data from etcd: /usr/local/apisix/apisix/core/etcd.lua:115: bad argument #1 to 'ipairs' (table expected, got nil)
stack traceback:
	[C]: in function 'ipairs'
	/usr/local/apisix/apisix/core/etcd.lua:115: in function 'waitdir'
	/usr/local/apisix/apisix/core/config_etcd.lua:255: in function 'sync_data'
	/usr/local/apisix/apisix/core/config_etcd.lua:424: in function </usr/local/apisix/apisix/core/config_etcd.lua:414>
	[C]: in function 'xpcall'
	/usr/local/apisix/apisix/core/config_etcd.lua:414: in function </usr/local/apisix/apisix/core/config_etcd.lua:405>,  etcd key: /apisix/global_rules, context: ngx.timer

@nic-chen
Member

After adding enable-grpc-gateway: true, it worked for a while and then started reporting errors at a high frequency. The error.log file reached 24 MB in 40 minutes. 😳

2020/11/13 16:20:18 [error] 8944#8944: *831322 [lua] config_etcd.lua:448: failed to fetch data from etcd: /usr/local/apisix/apisix/core/etcd.lua:115: bad argument #1 to 'ipairs' (table expected, got nil)
stack traceback:
	[C]: in function 'ipairs'
	/usr/local/apisix/apisix/core/etcd.lua:115: in function 'waitdir'
	/usr/local/apisix/apisix/core/config_etcd.lua:255: in function 'sync_data'
	/usr/local/apisix/apisix/core/config_etcd.lua:424: in function </usr/local/apisix/apisix/core/config_etcd.lua:414>
	[C]: in function 'xpcall'
	/usr/local/apisix/apisix/core/config_etcd.lua:414: in function </usr/local/apisix/apisix/core/config_etcd.lua:405>,  etcd key: /apisix/global_rules, context: ngx.timer

You could try to run 'apisix init'.

@Applenice
Contributor Author

After adding enable-grpc-gateway: true, it worked for a while and then started reporting errors at a high frequency. The error.log file reached 24 MB in 40 minutes. 😳

2020/11/13 16:20:18 [error] 8944#8944: *831322 [lua] config_etcd.lua:448: failed to fetch data from etcd: /usr/local/apisix/apisix/core/etcd.lua:115: bad argument #1 to 'ipairs' (table expected, got nil)
stack traceback:
	[C]: in function 'ipairs'
	/usr/local/apisix/apisix/core/etcd.lua:115: in function 'waitdir'
	/usr/local/apisix/apisix/core/config_etcd.lua:255: in function 'sync_data'
	/usr/local/apisix/apisix/core/config_etcd.lua:424: in function </usr/local/apisix/apisix/core/config_etcd.lua:414>
	[C]: in function 'xpcall'
	/usr/local/apisix/apisix/core/config_etcd.lua:414: in function </usr/local/apisix/apisix/core/config_etcd.lua:405>,  etcd key: /apisix/global_rules, context: ngx.timer

You could try to run 'apisix init'.

No improvement

@spacewander
Member

@Applenice
Can it be solved by: #2687?

@Applenice
Contributor Author

@Applenice
Can it be solved by: #2687?

This needs to be tested; I am currently using apisix-2.0-0.el7.noarch.rpm.

@Applenice
Contributor Author

@Applenice
Can it be solved by: #2687?

I've deployed master (3ff46e2) on Ubuntu and it's working fine, thanks!

@tokers
Contributor

tokers commented Nov 14, 2020

I think it's time to close this thread.

@ziyou434

Maybe it's because of this problem: https://stackoverflow.com/questions/54788528/etcd-v3-api-unavailable/56800553#56800553

Indeed. I have reproduced this problem after disabling the gRPC gateway.

@souzens @ziyou434 you may add enable-grpc-gateway explicitly to solve this problem.

@membphis @spacewander I think some notes should be added to the documentation to remind users about the deployment of etcd.

I don't know where to set "enable-grpc-gateway". I start etcd with etcd grpc-proxy start, but APISIX still has the error.

@tokers
Contributor

tokers commented Nov 27, 2020

That's really awkward, since etcd omits this option from its command-line help message; just use the option --enable-grpc-gateway.

@idbeta
Contributor

idbeta commented Nov 28, 2020

That's really awkward, since etcd omits this option from its command-line help message; just use the option --enable-grpc-gateway.

Could it be added to the APISIX documentation?

@tokers
Contributor

tokers commented Nov 29, 2020

That's really awkward, since etcd omits this option from its command-line help message; just use the option --enable-grpc-gateway.

Could it be added to the APISIX documentation?

Already added, see https://github.com/apache/apisix/blob/master/doc/install-dependencies.md for the details.

@kanghuzai

Could you have a bit more confidence in your own language? Does skipping your mother tongue to show off English really make internationalization a success?

@apache apache locked and limited conversation to collaborators Mar 16, 2021