Minimal proof of concept:

```ruby
Vagrant.configure(2) do |vagrant|
  vagrant.vm.box = 'debian/jessie64'
  vagrant.vm.hostname = 'test.landrush.dev'

  # This adds eth1, which will get a 172.xxx.xxx.xxx IP address
  vagrant.vm.network 'private_network', type: 'dhcp'

  # Landrush configuration
  vagrant.landrush.enabled = true
  vagrant.landrush.tld = 'landrush.dev'
  vagrant.landrush.interface = 'eth0'

  # VirtualBox settings (1 GB RAM @ 2 cores)
  vagrant.vm.provider :virtualbox do |vb|
    vb.customize ['modifyvm', :id, '--memory', '1024', '--cpus', '2']
  end
end
```

Result (some bits cut out for brevity):

```
$ vagrant up
...
==> default: Machine booted and ready!
==> default: [landrush] setting up machine's DNS to point to our server
==> default: [landrush] network: :private_network, {:type=>"dhcp", :protocol=>"tcp", :id=>"1a0b587c-df14-423c-8959-2b87b8ed8283"}
==> default: [landrush] network: :forwarded_port, {:guest=>22, :host=>2222, :host_ip=>"127.0.0.1", :id=>"ssh", :auto_correct=>true, :protocol=>"tcp"}
==> default: Checking for guest additions in VM...
==> default: Setting hostname...
==> default: Configuring and enabling network interfaces...
[landrush-ip] Platform: linux/amd64
[landrush-ip] Using https://github.com/Werelds/landrush-ip/releases/download/0.1.1/linux_amd64_landrush-ip
==> default: [landrush] adding machine entry: test.landrush.dev => 10.0.2.15
```

SSH into the box to check that we indeed got eth0:

```
$ vagrant ssh
The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Sat Aug 15 15:03:29 2015 from 10.0.2.2

vagrant@test:~$ hostname -I
10.0.2.15 172.28.128.3

vagrant@test:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:fe:b8:21 brd ff:ff:ff:ff:ff:ff
    inet 10.0.2.15/24 brd 10.0.2.255 scope global eth0
       valid_lft forever preferred_lft forever
    inet6 fe80::a00:27ff:fefe:b821/64 scope link
       valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
    link/ether 08:00:27:62:7b:15 brd ff:ff:ff:ff:ff:ff
    inet 172.28.128.3/24 brd 172.28.128.255 scope global eth1
       valid_lft forever preferred_lft forever
```

Last but not least, let's ping the bastard:

```
$ ping test.landrush.dev
PING test.landrush.dev (10.0.2.15): 56 data bytes
Request timeout for icmp_seq 0
```
Further thoughts from my end:
👍

So sorry for the lack of activity on this, I have been ridiculously busy. I did get a few more things implemented a couple of weeks ago (changed my logging slightly and added the exclude option), just haven't had time to 100% test it yet. I'll see if I can wrap it up over the next couple of weeks.

@Werelds can you rebase this patch?

+1

Will do as soon as I get a chance.

+1
TBH, I am a bit skeptical about this. I don't like the idea of including a non-standard binary. How do we make enhancements and bug fixes to it? We are having a hard time keeping this project alive, and this would basically take on another dependent project. I agree that not all guests will have ifconfig or ip installed, but it can be provisioned. It is for sure a bit more effort, but there are examples out there which already do it. You just need to figure out which package manager to use and then install the right package. That said, why not skip all the tools and go to the proc filesystem directly? Just catting something like /proc/net/tcp or /proc/net/arp should give us the same information we would get via this little Go program. We also can use all of Ruby's string manipulation capabilities to extract the information we need. Thoughts?

Agree with your concerns.
Right, so the same approach could be taken to get ip or ifconfig installed.

I am not an expert either, but I had a quick look at the files and they contain the information we need. Some files encode the IP in reverse order in HEX, but that's easy to decode. ATM, /proc/net/arp seems to be the easiest to use. One thing which is not clear to me (in neither solution) is how to make a decision about which interface to use if there are multiple. In the most common case you will have two interfaces: the first, the default Vagrant interface, and the second, the private network interface, either configured explicitly by the user or added by Landrush. But what if the user adds multiple network configurations? This will result in multiple routable interfaces. Either one needs to choose one "randomly" or based on some heuristics (10.x.x.x addresses before 192.x.x.x etc.), or one lets the user interactively choose the IP to bind to. But as said, this is really an orthogonal issue.
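The "reverse order in HEX" point is easy to show in Ruby. The sample value below is made up for illustration and is not Landrush code:

```ruby
# Files such as /proc/net/route store IPv4 addresses as little-endian
# hex strings; decoding is just splitting into bytes and reversing.
def decode_hex_ip(hex)
  hex.scan(/../).reverse.map { |byte| byte.to_i(16) }.join('.')
end

puts decode_hex_ip('0F02000A') # => "10.0.2.15"
```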
If anything, the [...]. I only did it in a separate project because it seemed the best way at the time, but the 97 lines in main.go can easily be put inside this project and compiled on Travis, as well as pushed with every release. That said, even [...]. Do you have an example of how to retrieve it all from [...]?
Here's some sample data for you from one of my VMs. You're underestimating the complexity ;) Interfaces, their IPs and MAC addresses:

Now, let's take a look at [...]:

There's nothing to be gained here. These are neighbours. None of this can be resolved back to local interfaces without odd tools. Next up, [...]:

local_address can be resolved to an IP address, but then you still have absolutely no idea which interface it belongs to. Let's try [...]:

So now what we need to do is take the destinations and the masks and match them up with the active connections. Let's convert some of the data first (won't filter duplicates). And I'll even use Ruby for this :P

Destination in [...]:

Mask in [...]:

So far so good; looks like we can get somewhere now, right? By using these destinations and their masks we should be able to figure out which local addresses belong to which interface. local_address in [...]:

Well... crap. eth1 apparently has no active connection and thus we can't figure out what its IP is. And that's the interface I'd be interested in with this particular VM.

Edit: just FYI, in case you didn't read the original issue, the source code for the binary is at https://github.com/Werelds/landrush-ip. As I said, it can easily be put in this repository and automatically compiled with Travis. The current releases were built and pushed from Travis with zero interaction from my end ;)
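The conversion step being described can be sketched in Ruby. The route table below is fabricated for illustration (real /proc/net/route carries more columns, such as Gateway and Flags), so treat this as a sketch rather than Landrush code:

```ruby
# Hex fields in /proc/net/route are little-endian IPv4 addresses.
def hex_to_ip(hex)
  hex.scan(/../).reverse.map { |b| b.to_i(16) }.join('.')
end

# Fabricated sample resembling /proc/net/route, columns trimmed.
sample = <<~ROUTE
  Iface Destination Mask
  eth0  00000000    00000000
  eth0  0002000A    00FFFFFF
  eth1  00801CAC    00FFFFFF
ROUTE

routes = sample.lines.drop(1).map do |line|
  iface, dest, mask = line.split
  { iface: iface, network: hex_to_ip(dest), mask: hex_to_ip(mask) }
end

routes.each { |r| puts "#{r[:iface]}: #{r[:network]}/#{r[:mask]}" }
# eth0: 0.0.0.0/0.0.0.0
# eth0: 10.0.2.0/255.255.255.0
# eth1: 172.28.128.0/255.255.255.0
```

This recovers the per-interface networks, but as noted above it still needs an entry in /proc/net/tcp to tie a local address to a network, which is exactly where the eth1 case falls apart.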
Maybe. I have to admit that I have not really tried (I still would like to do so). There is also nothing directly coming up with Google :-(
I would assume that there are not so many variants on how to install it. Also, I would just start with the most common ones. In most cases either ifconfig or ip should already be installed, right? So one would start with checking whether the binary exists in the first place.
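That existence check could be sketched roughly as follows, assuming Vagrant's guest communicator (`machine.communicate.test` runs a command on the guest and reports whether it exited 0). The helper name and fallback behaviour are made up for illustration:

```ruby
# Hypothetical helper: prefer `ip`, fall back to `ifconfig`, and only
# then consider installing a package on the guest.
def ip_tool_for(machine)
  return 'ip' if machine.communicate.test('which ip')
  return 'ifconfig' if machine.communicate.test('which ifconfig')
  nil # neither present; a provisioning step would be needed here
end
```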
So you would make the Go code part of Landrush and release it together with each Landrush release? How so? The release process would need to build the Go binaries, put them on GitHub (provided that's where you want to host them), and build the gem for upload to Rubygems, right? And would you align versions? Otherwise, how do you match which binary to download within the Ruby code? You are saying this can all be easily done via Travis. Would you create the setup for this? What if I want to release locally? In this case I would need to install Golang, right? What else would I need to cross-compile? Look, I am actually quite impressed by what your 97 lines of Go can do, and I get your point that you can actually build binaries for all platforms. That's nice. I just don't see the whole picture yet on how to manage this from a project point of view. This complexity you won't have if you stick to the standard Vagrant capabilities approach. IMO it is not as elegant and potentially does not cover all OS types and flavors, but it is straightforward and uses a "standard" approach.
Again, I think you underestimate it. If I look at Debian, Wheezy has ifconfig/ip in freebsd-net-tools/iproute respectively. For Jessie, both change: net-tools and iproute2. Alpine 3.2 doesn't have ifconfig and ip is in iproute2; on 3.3 you do get ifconfig. SUSE doesn't have ifconfig either; Arch does. And that's just 4 of the open distributions. I haven't even touched on RedHat, CentOS, or derivatives of the above, nor have I considered the different repositories that they might be in. Package names do tend to be [...]. And no, they're definitely not installed by default. There are many boxes out there that are very stripped down.
Entirely up to the maintainers. I'm happy for anyone to take my code and use it as they see fit, move the repository to someone else's ownership. It could be reduced from the 15 binaries it produces now, to only produce what Landrush needs. Considering the size of the binaries, they could even be included in the gem, although I personally think that would be a waste. As for versioning, again depends on what the maintainers want. You could include it in this repo and simply produce a matching build for every version of Landrush. Or you could do what I did now, which is to ensure backwards compatibility.
Happy to help; it can also just be taken from my repository. The build steps are extremely minimal:

- https://github.com/Werelds/landrush-ip/blob/master/.travis.yml -> installs gox (a Go cross-compilation tool) and ghr (a tool to push to GitHub Releases), and that's it. Go projects on Travis default to running [...].
- https://github.com/Werelds/landrush-ip/blob/master/Makefile -> the default build just builds all possible targets (15 of them currently; all 15 combined are about 45 megs) and, if it's a tag, pushes a new release (so not every repository push gets released).
Depends on how it's set up; it's currently set up to always download the binary from my repo, which means you don't need Go installed.
I only offered this as an alternative after getting in touch with @phinze at the time. And that's still what it is: an alternative way. I agree that it probably would be nicer to do it without an extra binary, but both approaches have their drawbacks. I personally don't work with RedHat or CentOS anymore, for example, so I have no idea where they keep [...]
Is the solution, going forward, to have the Go tool built from GitHub using TravisCI, and packaged and deployed as a Vagrant capability plugin, so that it is trivially consumable with Landrush?
```ruby
http.use_ssl = true
http.verify_mode = OpenSSL::SSL::VERIFY_PEER

response = http.request(Net::HTTP::Get.new('/repos/werelds/landrush-ip/releases/latest'))
```
do we really want to use latest? Would it not be better to use an explicit tag? Otherwise, the behavior of a given Landrush version might change if landrush-ip gets a new release, which can lead to hard-to-track problems. WDYT?
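For illustration, the GitHub releases API also exposes a by-tag endpoint (`/releases/tags/:tag`) alongside `/releases/latest`, so pinning could look roughly like this. 0.1.1 is the tag that appears in the download log earlier in the thread; hardcoding it here is just an assumption for the sketch:

```ruby
require 'net/http'

# Pin a known-good landrush-ip release instead of following 'latest'.
PINNED_TAG = '0.1.1'.freeze

request = Net::HTTP::Get.new("/repos/werelds/landrush-ip/releases/tags/#{PINNED_TAG}")

# Dispatching it would mirror the existing download code:
#   http = Net::HTTP.new('api.github.com', 443)
#   http.use_ssl = true
#   http.verify_mode = OpenSSL::SSL::VERIFY_PEER
#   response = http.request(request)
```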
After looking more into this, I think you might be right.
Sure, the uniformity of the solution is very appealing.
Right. For me these are the open questions right now: [...]

At the moment I am tempted to go for the former. If need be, we can decide to integrate the Go code later. BTW, should we really just download the latest landrush-ip version, or is it not better to download a fixed version? I really would like to resolve this asap, since I think we should get a release out. Unfortunately, I am traveling next week. I might be able to do some work on Landrush, but otherwise the week after next. Maybe we can get to an agreement on how to proceed until then, so that we can cut a 1.0.0 release. WDYT?
I think @joshrivers may have a point, that could actually be a nice way of doing it. The code to download and run the tool can then be 100% encapsulated and the binary would follow the cap plugin's versioning. It basically means taking my landrush-ip repository, wrapping it in the structure for a capability plugin and adjusting the Travis integration to not only build the binary, but also build and publish the gem. So we would have:
One of the decisions to make then would be whether to include the binary in the gem or not; in my opinion, fetching it from GitHub the way I do now saves a lot of space, it's only done once, and it's pretty quick. The only change necessary would be a version check to make sure the binary is updated inside a guest if the plugin is updated. I'll try to do a PoC of this today if I have the time; it should be relatively quick.

+1 Sounds like a good plan

@Werelds any updates on this? Did you have any success getting it to work as a capability plugin?

Really sorry about the lack of activity, but work has been incredibly hectic. So no, not yet @hferentschik. Will see if I can do it over the weekend.
Quick update: I think I've got the cap plugin structure sorted (sorry, as I said, I'm not a Ruby developer, so there's a lot I gotta figure out how to do exactly). Now I need to adjust my fork's code to rely on this gem and these caps and test that. Standalone it works. I'm changing the approach a bit: the binary itself is now much more minimal; it only dumps out interfaces in a plain-text, JSON or YAML format. The cap plugin also just returns this data (obviously I parse the YAML and return a hash from the capability :P). The actual NIC selection, exclusion logic and such I'll do in Landrush. This way most of the real logic is still within Landrush (making it easier for contributors here to do something with the data) and it opens the cap plugin up to potentially be used in other plugins too. I'm just not sure how to effectively test Landrush + IP yet, as I don't want to push the cap plugin until I've done a test. I think using a local path in my Gemfile might do the trick (as I do this for the cap plugin itself too); I just need to test that.
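A rough sketch of that split, with a made-up YAML dump (the real landrush-ip output format may differ): the capability only parses the dump into a hash, while the selection and exclusion logic lives in Landrush.

```ruby
require 'yaml'

# Made-up example of what the binary's YAML dump might look like.
sample_output = <<~YAML
  eth0: 10.0.2.15
  eth1: 172.28.128.3
  docker0: 172.17.0.1
YAML

# The capability's job ends here: return the parsed hash.
interfaces = YAML.safe_load(sample_output)

# Selection/exclusion logic stays in Landrush, e.g. dropping tunnels.
candidates = interfaces.reject { |name, _ip| name.start_with?('docker') }
puts candidates.keys.join(', ') # => eth0, eth1
```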
Nice. I think this is turning out nicely now.

+1 Very good. I like this approach a lot. FYI, I am working on #171, which will configure host visibility on Windows automatically, similar to the OS X /etc/resolver approach. I also have some work in progress for #189. My idea is that in the case of a statically defined private network IP, we should be able to extract the IP from the Vagrant config. In this case one does not even need to run anything on the guest. All this work together could then be a nice package for a 1.1.0 release. I am hoping to work on the issues I mentioned some time next week.
Nice! Windows support is on my list after I put this initial version up for your review. I only wrote the caps explicitly for Linux; doing them for BSD, Windows and so on will be trivial. I've now tested it with Landrush and adjusted my code in this PR accordingly. I've got a day off tomorrow, so before I push I want to write the documentation and some tests.

Just pushed a commit with the necessary changes; some of it probably needs to be reverted though. I had to lock rake to 0.10, it wouldn't work for me otherwise (fresh install of El Capitan, haven't set rvm up yet). Also included a simple Vagrantfile just for demo purposes for you guys. I did my best with the actual code and tests, but I would appreciate an actual Ruby developer doing a review of my shit on both ends :p https://github.com/Werelds/landrush-ip just for reference.

The pull request still contains 5 commits; are they all needed, or just the last one ("This should be it.")? I am asking since I am getting merge conflicts when I am trying to pull your changes. Can you please make sure this applies to master and potentially drop unnecessary commits? Maybe we should even just close this pull request and create a new one. WDYT?
Or, if you know which interface you need the IP of, you can specify that too (default is none):

```ruby
config.landrush.interface = 'eth0'
```
+1 good idea
Overall I like the approach. Let's try to get this into a shape where I can apply it cleanly onto master. I added some comments for now.

ok

Odd. I can take a look at it. Not quite sure atm what you want to achieve with the changes to the Rakefile.

Right, I guess we would somehow use this Vagrantfile as part of an integration test via the new harness using Cucumber + Aruba. Check the features directory if you are interested.

got it
I'm happy to do a new PR with only the necessary changes; I just continued on the existing ones now as they contained the relevant configuration changes. I thought about using the Vagrantfile for tests somehow, but I had no idea how that would work. Seems that is the solution.
cool
sure
I think the Cucumber framework we have now is a good fit here.
@hferentschik I've brought my fork's master up to date and created a new branch. Changes are in and the current tests work (it also works with rake as normal now). I'll now see if I can add Cucumber tests before I do a new PR; I don't think it's a good idea for me to do a force push to the branch I used here. How do you want to proceed with expanding OS support; shall we first get the Linux implementation going fully (as that's the most common anyway)?

@hferentschik I've created the new PR at #193, so I'll close this one as it is now defunct.

Sweet. Checking the new pull request.
As discussed in #114, this is an attempt at making the IP selection more flexible on guests that have multiple NICs defined.

These may be actual NICs, but may also just be tunnels defined by a number of tools. A very common use case is a guest with Docker installed; currently, Landrush will pick up the IP address of the `docker0` interface rather than `ethX`.

This pull request is not yet complete; as further discussion is required regarding the options, it serves as a proof of concept.
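For reference, a Vagrantfile along the lines this PR is aiming at might look as follows. The `exclude` option is the one mentioned earlier in the thread, but its exact name and syntax are assumptions here, since the options are still under discussion:

```ruby
Vagrant.configure(2) do |config|
  config.landrush.enabled = true
  config.landrush.tld = 'landrush.dev'

  # Either pin the interface whose IP Landrush should publish...
  config.landrush.interface = 'eth1'

  # ...or (option name assumed, not final) exclude virtual interfaces
  # such as docker0 so they never win the IP selection:
  # config.landrush.exclude = [/^docker\d+$/, /^tun\d+$/]
end
```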