This repository has been archived by the owner on Apr 17, 2019. It is now read-only.

Write etcd config file fails: no ipv4 address for eth1 #1573

Closed
valentin-krasontovitsch opened this issue Aug 19, 2016 · 20 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed.

Comments

@valentin-krasontovitsch

Using contrib/ansible, trying to set up a k8s cluster

  • with Vagrant (1.8.5),
  • using centos7 as OS,
  • VirtualBox (Manager 5.0.26) as provider and
  • running on OS X El Capitan,

running vagrant up produces the following error message:

TASK [etcd : Write etcd config file] *******************************************
fatal: [kube-master-1]: FAILED! => {
    "changed": false, 
    "failed": true, 
    "msg": "AnsibleUndefinedVariable: {{ etcd_peer_url_scheme }}:// \
        {{ etcd_machine_address }}:{{ etcd_peer_port }}: \
        {{ hostvars[inventory_hostname]['ansible_' + etcd_interface].ipv4.address }}: \
        'dict object' has no attribute 'ipv4'"}

Upon closer inspection of the task, it turns out that a template file is used which tries to access hostvars[kube-master-1]['ansible_eth1'].ipv4.address. Debugging the corresponding hostvars, I observed that that interface does not have an ipv4 section.
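For reference, the lookup that fails is the one quoted in the error message; a debug task like the following (a sketch, not part of the actual playbook) can be used to inspect the gathered facts for the interface:

```yaml
# Sketch: dump the facts Ansible gathered for the etcd interface on each host.
# When eth1 came up without an IPv4 address, the printed dict has no 'ipv4' key,
# which is exactly what makes the template lookup
#   {{ hostvars[inventory_hostname]['ansible_' + etcd_interface].ipv4.address }}
# fail with "'dict object' has no attribute 'ipv4'".
- name: Inspect facts for the etcd interface
  debug:
    var: hostvars[inventory_hostname]['ansible_' + etcd_interface]
```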

To get this far I had to do the following edits of contrib/ansible/vagrant/Vagrantfile:

  • comment out # require 'vagrant-aws'
  • add source_type: 'packageManager', in ansible.extra_vars = { ... }
  • configure virtual machine to use local proxy by adding line config.proxy.http = ENV['http_proxy'] (using vagrant-proxyconf plugin)

Hence, in order to reproduce, the first and last of these steps might be unnecessary.
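For clarity, the edits above amount to something like the following in contrib/ansible/vagrant/Vagrantfile (a sketch only; the surrounding structure is abbreviated, and the proxy line assumes the vagrant-proxyconf plugin is installed):

```ruby
# require 'vagrant-aws'            # edit 1: commented out

Vagrant.configure(2) do |config|
  # edit 3: route the VM through a local proxy (needs vagrant-proxyconf)
  config.proxy.http = ENV['http_proxy'] if ENV['http_proxy']

  config.vm.provision "ansible" do |ansible|
    ansible.extra_vars = {
      source_type: 'packageManager',  # edit 2: added
      # ... existing vars ...
    }
  end
end
```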

@the0rem

the0rem commented Aug 28, 2016

This seems to be an issue where the centos7 image does not have an IPv4 address for eth1. The box needs to be fixed to fit the deployment, but in the meantime fedora/coreos seems to work OK.

@ingvagabund
Contributor

ingvagabund commented Aug 29, 2016

Hi @valentin-krasontovitsch, I just deployed the cluster with Vagrant and centos7, and it works. I'm running Vagrant 1.8.1 on Fedora 24, but with libvirt instead of VirtualBox.

This could be a VirtualBox-specific configuration issue. @the0rem, any advice on how to configure it?

@the0rem

the0rem commented Aug 29, 2016

Hi @ingvagabund,

Unfortunately I'm not able to figure that out. For reference I am running:

  • El Capitan 10.11.5
  • Vagrant 1.8.5
  • Virtualbox 5.0.4

I am leaning towards https://github.com/kubernetes/minikube for dev related work. Xhyve wins me over in that respect considering the latest movements of docker on Mac OSX.

@valentin-krasontovitsch
Author

Thanks @ingvagabund and @the0rem for taking time to look into this. VirtualBox may very well be the culprit here. As a workaround, restarting the virtual machine fixed this problem for me: After the restart, the interface in question had an IP address assigned.

We're trying to figure out whether kubernetes fits our needs as an orchestration tool for a multi-node environment, so minikube is unfortunately not able to deliver a proof of concept: the features we're looking to test are precisely the ones minikube (single-node clusters only) does not support.

@the0rem , what do you mean by

Xhyve wins me over in that respect considering the latest movements of docker on Mac OSX.

Would you care to elaborate?

@ingvagabund
Contributor

Closing the issue then as this is related to VirtualBox.

@valentin-krasontovitsch
Author

@ingvagabund, is it obvious that the issue cannot be with the configuration of VirtualBox (using centos) in the provided Vagrantfile?

@ingvagabund
Contributor

@valentin-krasontovitsch I misinterpreted your previous comment (reading it as restarting VirtualBox instead of the provisioned VM).

So the issue is reproducible every time you vagrant up with VirtualBox? And after restarting the provisioned VM(s), IP addresses are assigned?

@ingvagabund ingvagabund reopened this Aug 29, 2016
@valentin-krasontovitsch
Author

Yes, I just pulled a clean repo, applied the changes mentioned at the bottom of the original post, and got the same error message.

And yes, after SSHing into the machine and running sudo reboot, then SSHing in again and running ip addr, eth1 does have an IP address.

Also, running vagrant provision after this no longer fails on the above-mentioned task.
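Put together, the reproduction and workaround sequence described above looks like this (the commands assume the Vagrant environment from the original post and are not runnable outside it):

```shell
vagrant up                                   # fails on "Write etcd config file"
vagrant ssh kube-master-1 -c 'sudo reboot'   # reboot the guest
# ...wait for the VM to come back up, then verify eth1 got an address:
vagrant ssh kube-master-1 -c 'ip addr show eth1'
vagrant provision                            # no longer fails on the etcd task
```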

@valentin-krasontovitsch
Author

By the by, I ran into more issues after doing the above. Should I post them as well, or wait for this to get resolved / at least reproduced first?

@ingvagabund
Contributor

If it is related to the IPv4 problem, you can post it here. Otherwise, if it is Vagrant-deployment related, open a new issue for it. Thanks.

@nanne007

nanne007 commented Sep 1, 2016

Came across the same problem here: eth1 does not have an IPv4 address.

@imanis

imanis commented Feb 7, 2017

Same problem here, has anyone found a solution?

  • Vagrant (1.9.1),
  • using centos7 as OS,
  • VirtualBox (Manager 5.1.14) as provider and
  • running on OS X Sierra

@harobed
Contributor

harobed commented Feb 22, 2017

Same problem here.

$ vagrant version
Installed Version: 1.9.1
Latest Version: 1.9.1
$ VBoxManage --version
5.1.10r112026
$ vagrant status
Current machine states:

kube-master-1             running (virtualbox)
kube-node-1               running (virtualbox)
kube-node-2               running (virtualbox)
kube-node-3               running (virtualbox)
$ vagrant ssh kube-master-1
Last login: Wed Feb 22 17:18:52 2017 from 10.0.2.2
[vagrant@kube-master-1 ~]$ sudo su
[root@kube-master-1 vagrant]# cat /etc/centos-release
CentOS Linux release 7.3.1611 (Core)
[root@kube-master-1 vagrant]# ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:22:5b:53 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global dynamic eth0
       valid_lft 84232sec preferred_lft 84232sec
    inet6 fe80::5054:ff:fe22:5b53/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST> mtu 1500 qdisc pfifo_fast state DOWN qlen 1000
    link/ether 08:00:27:e5:1a:80 brd ff:ff:ff:ff:ff:ff

I can see no IP on eth1 and this interface is down.
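A possible in-guest alternative to a full reboot (an assumption based on the output above; whether it is sufficient was not confirmed in this thread) is to bring the interface up by hand:

```shell
# Inside the guest (CentOS 7): bring eth1 up and restart the network service
# so the interface gets its address assigned, then check for an inet line.
sudo ip link set eth1 up
sudo systemctl restart network
ip addr show eth1
```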

@harobed
Contributor

harobed commented Feb 22, 2017

Same issue: hashicorp/vagrant#8250 ?

@harobed
Contributor

harobed commented Feb 22, 2017

@jasonbrooks or @ingvagabund, can you check this pull request?

@valentin-krasontovitsch, does this patch fix your problem?

harobed pushed a commit to harobed/contrib that referenced this issue Feb 22, 2017
@valentin-krasontovitsch
Author

@harobed thanks for the patch! I'm not working on kubernetes anymore at the moment, so please don't wait for me for an answer - I'm not going to make any promises that I'll try to reproduce this. But I'll probably try tomorrow : )

@harobed
Contributor

harobed commented Mar 1, 2017

Fixed for me with Vagrant 1.9.2 with CentOS Linux release 7.3.1611 (Core).

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

Prevent issues from auto-closing with a /lifecycle frozen comment.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Dec 21, 2017
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/lifecycle rotten
/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Jan 20, 2018
@fejta-bot

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/close
