Caught Segmentation fault #1064
Comments
@hzariv this config file loads OK for me. Please get a stack trace with symbols, either using GDB or the referenced stack decode Python script (tools/stack_decode.py).
Also @hzariv, can you try on a non-CE version of Docker? I have had seg faults on the CE version.
I can repro, looking.
Likely a regression from #932.
@hzariv the main issue right now is that you have tracing configured on the HTTP listeners, but no tracing driver specified. I'm not sure whether the crash is a regression or not. https://github.com/lyft/envoy/pull/1029/files will actually fix this crash. @goaway is on vacation right now and will finish when he gets back. @RomanDzhabarov can you potentially take a look at this just to make sure there is no larger issue here? I think we can wait for #1029 for the fix.
Yup, the issue is that startSpan will currently return nullptr when the global (server) tracer is not configured. Mike's PR was pretty close to done; I'll take a look/fix/merge it. The problem is well scoped and does not affect anything under a proper configuration.
@mattklein123 is there a workaround to unblock me? If not, what is the ETA for merging the PR fix?
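The failure mode described above can be sketched in a few lines of C++. This is a minimal illustration, not Envoy's actual tracing interfaces; the type names and the finishIfPresent helper are hypothetical, but the shape matches the report: startSpan yields nullptr when no tracing driver is configured, and dereferencing that null span is the segfault.

```cpp
#include <cassert>
#include <iostream>
#include <memory>

// Hypothetical stand-ins for Envoy's tracing types (illustrative only).
struct Span {
  void finish() { std::cout << "span finished\n"; }
};

struct Tracer {
  bool driver_configured = false;  // listener has "tracing", but no driver
  // Mirrors the reported bug: returns nullptr when no global tracing
  // driver is configured.
  std::unique_ptr<Span> startSpan() {
    if (!driver_configured) {
      return nullptr;
    }
    return std::make_unique<Span>();
  }
};

// The kind of guard a #1029-style fix adds: check the span before use.
// Calling span->finish() on a null span is the crash seen in the report.
bool finishIfPresent(Span* span) {
  if (span == nullptr) {
    return false;  // tracing unconfigured; skip instead of segfaulting
  }
  span->finish();
  return true;
}
```

In the crashing configuration, the first branch is the one that matters: every request path that touched the null span faulted, which is why the workaround of deleting the tracing stanzas (so no span is ever started) also avoids the crash.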
Delete all tracing config blocks; without a driver they are a NOP anyway.
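Concretely, the workaround is to remove the following block (taken from the config posted below) from each listener's http_connection_manager config:

```json
"tracing": {
  "operation_name": "ingress"
}
```

With no tracing stanza present, Envoy never attempts to start a span, so the null-span code path is never reached.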
Thanks, that fixed the crash, but now I am getting an upstream connect error. Any hints on how to debug this?
Never mind the upstream connect error above. It is a Docker networking issue :-(
Should be fixed by #1029 with the original config.
I am running the docker image lyft/envoy:latest as a front proxy and getting the following crash with the config file below: docker run -p 8080:8080 -p 8001:8001 envoy-test
[2017-06-08 16:25:44.200][1][warning][main] initializing epoch 0 (hot restart version=8.2490552)
[2017-06-08 16:25:44.204][1][warning][main] all clusters initialized. initializing init manager
[2017-06-08 16:25:44.204][1][warning][main] all dependencies initialized. starting workers
[2017-06-08 16:25:44.204][1][warning][main] starting main dispatch loop
[2017-06-08 16:27:02.711][6][critical][backtrace] Caught Segmentation fault, suspect faulting address 0x0
[2017-06-08 16:27:02.714][6][critical][backtrace] Backtrace obj</usr/local/bin/envoy> thr<6> (use tools/stack_decode.py):
[2017-06-08 16:27:02.714][6][critical][backtrace] thr<6> #0 0x4b47a5
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #1 0x5f4c99
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #2 0x5f476d
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #3 0x5f94c7
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #4 0x5f5ff0
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #5 0x5f61f3
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #6 0x4aea77
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #7 0x708278
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #8 0x7064c8
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #9 0x706632
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #10 0x49a577
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #11 0x73a4a1
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #12 0x73abfe
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #13 0x494ac0
[2017-06-08 16:27:02.715][6][critical][backtrace] thr<6> #14 0x74434d
[2017-06-08 16:27:02.716][6][critical][backtrace] thr<6> obj</lib/x86_64-linux-gnu/libpthread.so.0>
[2017-06-08 16:27:02.716][6][critical][backtrace] thr<6> #15 0x7f8865b0b183
[2017-06-08 16:27:02.716][6][critical][backtrace] thr<6> obj</lib/x86_64-linux-gnu/libc.so.6>
[2017-06-08 16:27:02.716][6][critical][backtrace] thr<6> #16 0x7f8865838bec
[2017-06-08 16:27:02.716][6][critical][backtrace] end backtrace thread 6
This is on macOS Sierra with the following docker version output:
Client:
Version: 17.03.1-ce
API version: 1.24 (downgraded from 1.27)
Go version: go1.7.5
Git commit: c6d412e
Built: Tue Mar 28 00:40:02 2017
OS/Arch: darwin/amd64
Server:
Version: 1.12.0
API version: 1.24 (minimum version)
Go version: go1.6.3
Git commit: 8eab29e
Built: Thu Jul 28 23:54:00 2016
OS/Arch: linux/amd64
Experimental: false
Dockerfile:
FROM lyft/envoy:latest
ADD ./service-envoy.json /etc/service-envoy.json
ENTRYPOINT ["/usr/local/bin/envoy","-c", "/etc/service-envoy.json"]
Config file:
{
"listeners": [
{
"address": "tcp://0.0.0.0:8080",
"filters": [
{
"type": "read",
"name": "http_connection_manager",
"config": {
"tracing": {
"operation_name": "ingress"
},
"codec_type": "auto",
"stat_prefix": "ingress_http",
"route_config": {
"virtual_hosts": [
{
"name": "service80",
"domains": [""],
"routes": [
{
"timeout_ms": 0,
"prefix": "/",
"cluster": "local_service80"
}
]
}
]
},
"filters": [
{
"type" : "decoder",
"name" : "fault",
"config" : {
"abort" :
{
"abort_percent" : 100,
"http_status" : 403
},
"headers" : [{"name" : "x-ebay-pes"}]
}
},
{
"type": "decoder",
"name": "router",
"config": {}
}
]
}
}
]
},
{
"address": "tcp://0.0.0.0:8081",
"filters": [
{
"type": "read",
"name": "http_connection_manager",
"config": {
"tracing": {
"operation_name": "ingress"
},
"codec_type": "auto",
"stat_prefix": "ingress_http",
"route_config": {
"virtual_hosts": [
{
"name": "service81",
"domains": [""],
"routes": [
{
"timeout_ms": 0,
"prefix": "/",
"cluster": "local_service81"
}
]
}
]
},
"filters": [
{
"type": "decoder",
"name": "router",
"config": {}
}
]
}
}
]
},
{
"address": "tcp://0.0.0.0:8082",
"filters": [
{
"type": "read",
"name": "http_connection_manager",
"config": {
"tracing": {
"operation_name": "ingress"
},
"codec_type": "auto",
"stat_prefix": "ingress_http",
"route_config": {
"virtual_hosts": [
{
"name": "service82",
"domains": [""],
"routes": [
{
"timeout_ms": 0,
"prefix": "/",
"cluster": "local_service82"
}
]
}
]
},
"filters": [
{
"type": "decoder",
"name": "router",
"config": {}
}
]
}
}
]
},
{
"address": "tcp://0.0.0.0:8083",
"filters": [
{
"type": "read",
"name": "http_connection_manager",
"config": {
"tracing": {
"operation_name": "ingress"
},
"codec_type": "auto",
"stat_prefix": "ingress_http",
"route_config": {
"virtual_hosts": [
{
"name": "service83",
"domains": [""],
"routes": [
{
"timeout_ms": 0,
"prefix": "/",
"cluster": "local_service83"
}
]
}
]
},
"filters": [
{
"type": "decoder",
"name": "router",
"config": {}
}
]
}
}
]
}
],
"admin": {
"access_log_path": "/dev/null",
"address": "tcp://0.0.0.0:8001"
},
"cluster_manager": {
"clusters": [
{
"name": "local_service80",
"connect_timeout_ms": 250,
"type": "strict_dns",
"lb_type": "round_robin",
"hosts": [
{
"url": "tcp://127.0.0.1:9080"
}
]
},
{
"name": "local_service81",
"connect_timeout_ms": 250,
"type": "strict_dns",
"lb_type": "round_robin",
"hosts": [
{
"url": "tcp://127.0.0.1:9081"
}
]
},
{
"name": "local_service82",
"connect_timeout_ms": 250,
"type": "strict_dns",
"lb_type": "round_robin",
"hosts": [
{
"url": "tcp://127.0.0.1:9082"
}
]
},
{
"name": "local_service83",
"connect_timeout_ms": 250,
"type": "strict_dns",
"lb_type": "round_robin",
"hosts": [
{
"url": "tcp://127.0.0.1:9083"
}
]
}
]
}
}