
Support new host_type=rdma option #41

Closed · ppf2 opened this issue Oct 29, 2014 · 4 comments

ppf2 (Member) commented Oct 29, 2014

Would like to see support for a host_type setting similar to what we have for the AWS plugin. For Azure, it could be one of the following types:

private_ip
public_ip
RDMA (http://blogs.technet.com/b/windowshpc/archive/2014/01/30/new-high-performance-capabilities-for-windows-azure.aspx)

It would also be nice for it to default to the better-performing RDMA connection and fall back to private_ip if none is available.
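
For illustration only, here is a minimal elasticsearch.yml sketch of how such a setting might look, modeled on the AWS plugin's discovery.ec2.host_type. The key name discovery.azure.host.type and the rdma value are assumptions for the sake of the example, not options the plugin ships today:

```yaml
# Hypothetical sketch: the `rdma` value does not exist yet, and the exact key
# name may differ from what the plugin actually reads.
discovery:
  type: azure
  azure:
    host:
      # proposed: prefer the RDMA adapter when present, otherwise fall back to private_ip
      type: rdma
      # type: private_ip   # use the VM's internal address (current behavior)
      # type: public_ip    # use the cloud service's public address
```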

dadoonet added the new label Oct 30, 2014
dadoonet changed the title from "Support new host_type option" to "Support new host_type=rdma option" Feb 13, 2015
dadoonet (Member) commented

Waiting for #63 to be merged first so we can use the Azure API for this.

dadoonet (Member) commented

I did some tests today and launched an A8 Windows instance.
It looks like, by default, the Windows machine has two "Microsoft Hyper-V Network Adapter" network cards.
The documentation says:

We have virtualized RDMA through Hyper-V with near bare metal performance of less than 3 microsecond latency and greater than 3.5 gigabytes per second bandwidth.

So I wonder whether we really need to change anything in the plugin code, since by default we use the private_ip, which corresponds to the Microsoft Hyper-V Network Adapter. Is my understanding correct?

Is there any other setting I missed, or any configuration you have to do on the Windows machine to explicitly select an RDMA card?
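
Not plugin code, but for anyone reproducing this: one generic way to check on the VM itself which adapters report RDMA capability is the NetAdapter PowerShell module (assuming Windows Server 2012 or later, where these cmdlets are available):

```powershell
# List all network adapters on the VM (name, driver description, status).
Get-NetAdapter | Format-Table Name, InterfaceDescription, Status

# Show which adapters are RDMA-capable and whether RDMA is enabled on them.
Get-NetAdapterRdma | Format-Table Name, Enabled
```

If Get-NetAdapterRdma returns nothing, the guest likely does not see an RDMA-capable interface at all.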

johnarnold commented

RDMA connectivity is not supported on Linux VMs, FYI.

On Windows machines, you can only leverage it via Network Direct and MPI. In other words, I don't think you can use it today unless you move off the regular network stack for inter-node I/O.

I'd love to see faster shard recoveries and such. With Premium Storage, we're seeing pretty good write performance, but it still pretty much sucks when you have to reboot machines or a node dies.

dadoonet (Member) commented

Closing as now opened at elastic/elasticsearch#12449
