kafka.common.KafkaTimeoutError: ('Failed to update metadata after %s secs.', 60.0) #607
Comments
Resolved: you must set all brokers in `bootstrap_servers`.
@archiechen I am getting the same error too, what did you do?
So what is the fix here?
Also wondering about the fix. I see this error when running Kafka via Docker but not when installing from a binary. EDIT: I'm wondering what the root cause may be, to help me pinpoint the difference between environments.
Anyone get an answer to this?
Docker binds to IPv6 only by default, but usually the problem is that you haven't configured its listeners.
I am re-opening; I also saw this issue in production. The error showed up after rolling the Kafka cluster. I watched the TCP stream and the producer never even tried to talk to Kafka, so it wasn't timing out cluster-side. When I restarted the service, it immediately started working.
I too saw this exact issue. My producer is running inside a Docker container, but both Kafka brokers are running on Linux via supervisor, and both of my brokers are included in the bootstrap list.
Another cause for this issue is using a
Steps to reproduce the issue:
Here are the logs right before the timeout:
I've also noticed that using kafka-python in the interpreter works just fine, with the exact same bootstrap_servers and everything else. The bug only surfaces when executed as part of a Python program.
If this happens before any message is able to be sent then it typically indicates a low-level connection error. Take a look at the kafka.conn.BrokerConnection debug logs. I've made several changes to the network connection code on master to handle various edge cases that you may be hitting. If you are able to test with master, can you check whether this issue is fixed now?
I am having the same issue with 1.4.2 and 1.4.3-dev0. I do not understand where I can find the debug logs you are talking about. Would they be somewhere in the Docker volume?
To enable Python logging in its simplest form:
I am having the same issue while using minikube and https://github.com/Yolean/kubernetes-kafka (Kubernetes version 1.9.0, Kafka 1.0).
What's the fix for this issue, guys?
If running the Kafka brokers inside a container, make sure they advertise the correct hostnames, ones that are accessible to the clients. If not specified, a broker will use the canonical hostname of its container, which may be an internal name that cannot be resolved outside of the container. You can set the advertised hosts with
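The exact setting was cut off above; on Apache Kafka the broker property is `advertised.listeners` (exposed as `KAFKA_ADVERTISED_LISTENERS` in most Docker images). As an illustrative sketch only, where the service name `kafka` and the port numbers are my own assumptions and not taken from this thread, a docker-compose setup might look like:

```yaml
services:
  kafka:
    environment:
      # INTERNAL is what other containers use; EXTERNAL is what clients
      # outside the Docker network (e.g. kafka-python on the host) must
      # be able to resolve and reach.
      KAFKA_LISTENERS: INTERNAL://0.0.0.0:9092,EXTERNAL://0.0.0.0:29092
      KAFKA_ADVERTISED_LISTENERS: INTERNAL://kafka:9092,EXTERNAL://localhost:29092
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INTERNAL:PLAINTEXT,EXTERNAL:PLAINTEXT
      KAFKA_INTER_BROKER_LISTENER_NAME: INTERNAL
```

Clients on the host would then use `bootstrap_servers='localhost:29092'`.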
I had the same problem.
Had the same issue.
You will also see this if you disallow topic creation on publish on the broker (`auto.create.topics.enable=false`) and then try to produce to a topic that hasn't been created yet.
Hi, how do I fix this problem?
I have encountered this problem when connecting to Kafka deployed with Docker. How can I solve it?
@jeffwidman |
For others in the thread: this error means that the client could not connect to a node. Most likely the node is not visible/reachable from the client. Please make sure the
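Before digging into kafka-python itself, a stdlib-only reachability check against each bootstrap address can confirm the "could not connect" theory (the function and host names below are illustrative, not from this thread):

```python
import socket

def can_connect(host, port, timeout=5.0):
    """Return True if a plain TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # covers refused, timed out, and DNS failures
        return False

# Example: check every entry you would pass as bootstrap_servers.
for entry in ['broker1:9092', 'broker2:9092']:
    host, _, port = entry.rpartition(':')
    print(entry, can_connect(host, int(port), timeout=1.0))
```

If any advertised broker fails this check, the problem is networking or advertised listeners, not kafka-python.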
Right, but I saw it in a non-Kubernetes environment.
I haven't examined the actual code path, but it would be nice if we threw a more obvious error for that scenario.
Hi, I have the same problem with Kafka inside Docker.
Versions: Python 2.7, kafka-python 1.4.3, and the spotify/kafka Docker image.
I solved this problem after starting ZooKeeper on my macOS machine. By the way, you can enable debug logging for more information; in my case the producer complained like this:
Has anyone found the concrete fix for this issue?
@parasjain can you try that and see if it helps?
Also encountered this problem in version 1.4.5, because my topic name contains a comma (`,`).
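Kafka topic names may only contain ASCII alphanumerics, `.`, `_`, and `-`, and are limited to 249 characters, so a comma produces a topic the broker never creates and the producer times out waiting for its metadata. A small stdlib check (the helper name is my own) can catch this before sending:

```python
import re

# Kafka's legal topic-name pattern: [a-zA-Z0-9._-], max 249 characters.
_LEGAL_TOPIC = re.compile(r'^[a-zA-Z0-9._-]{1,249}$')

def is_valid_topic(name):
    """True if name satisfies Kafka's topic-name rules."""
    # '.' and '..' are additionally reserved and rejected by the broker.
    return name not in ('.', '..') and bool(_LEGAL_TOPIC.match(name))

print(is_valid_topic('my-topic.v1'))   # True  (legal characters only)
print(is_valid_topic('events,2019'))   # False (comma is not allowed)
```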
I solved the problem by setting
Update: I found that using the container name everywhere, instead of 192.168.99.100, works pretty well.
Ran into this issue in a Kubernetes environment (incubator/kafka Helm chart). The issue was that the number of replicas was less than the value of `default.replication.factor`.
THANK YOU!
I met this problem too. My env is
I had this issue when porting the code from Python 2 to 3. Solved by changing

```python
kafkaValue = json.dumps(someDict, separators=(',', ':'))
producer.send(
    topic=bytes(topicNameAsString),
    key=bytes(someString),
    value=kafkaValue)
```

to

```python
kafkaValue = json.dumps(someDict, separators=(',', ':'))
producer.send(
    topic=topicNameAsString,                    # <-- just a string
    key=bytes(someString, encoding='utf8'),     # <-- encoding='utf8'
    value=bytes(kafkaValue, encoding='utf8'))   # <-- bytes
```
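The change above comes down to Python 3 `bytes` semantics: `bytes('text')` without an encoding raises `TypeError` on Python 3, and kafka-python expects `bytes` for keys and values (unless a serializer is configured) while the topic stays a plain `str`. A stdlib-only illustration:

```python
import json

some_dict = {'id': 1, 'name': 'test'}

# On Python 3, bytes(str) without an encoding is an error:
try:
    bytes('my-key')
except TypeError:
    pass  # this is what the Python 2 style code runs into

# Encoding explicitly produces the bytes the producer expects.
key = bytes('my-key', encoding='utf8')  # equivalently: 'my-key'.encode('utf8')
payload = json.dumps(some_dict, separators=(',', ':')).encode('utf8')

print(type(key).__name__, type(payload).__name__)   # bytes bytes
print(payload)                                      # b'{"id":1,"name":"test"}'
```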
Can't believe this is still open. Either the topic is automatically created in both cases or in neither; this is puzzling!
I don't think this issue is terribly useful to keep open. There can be several causes for a timeout when fetching metadata. It could be networking, topic creation problems, topic naming problems, etc -- note all of the different "me too" explanations so far. I'm going to close this because it does not appear to be a specific bug in kafka-python that could be fixed with a PR.
Ran into the same error message, but the root cause in my case was an outdated, invalid intermediate cert in the CA bundle; it was fixed once I updated the intermediate cert.
I'm still facing the problem in Kubernetes with kafka-python 1.4.7.
If useful for anyone, I started seeing this error after
I used docker-compose; the advertised listeners there expose port 29029. I changed the port in my Python code to 29029 and that solved the problem.
Very confusing how everyone has had this problem for years and no fixes have been put in, even after moving to 2.0.2 (which I've been testing on).
Not for me. It's the same in cmd as well.
I use Docker in an Ubuntu 20.04 LTS VM with a bridged network interface, and my host is Windows 10. When I try to consume or produce topics from Windows I always receive timeout errors. To resolve this error (kafka.common.KafkaTimeoutError: ('Failed to update metadata after %s secs.', 60.0)), in docker-compose I set
I had the same problem, using Kafka in a Docker container and kafka-python in a conda environment. UPDATE: the command executed, but the message was not sent. I tested directly inside the Docker container, and it worked.
I was facing the same issue. I was using docker-compose and realized the problem was with the Kafka listeners, so I updated the Kafka environment variables. (I'm not sure which line fixed it, but it's working.)
Kafka version: 0.8.2.0-1.kafka1.3.2.p0.15 (Cloudera release). But it's OK on 2.0.0-1.kafka2.0.0.p0.12.