Least Square Solution Layer (for ELM) #2565
Conversation
@bhack : I am aware of that discussion. Here is the reply of the inventor of ELM: http://www.ntu.edu.sg/home/egbhuang/pdf/ELM-Rosenblatt-Neumann.pdf Moreover, whoever is correct, you cannot ignore the good performance of ELMs, or whatever they may be called, and that, I think, is the focus of Caffe. Maybe we can change the name of the method in the comments (the only place where I mentioned ELM).
Following up on bhack's comment, regarding the ELM method itself, take into account also this: And especially this: I'm not sure if this method should be integrated in Caffe yet...
@lunzueta
Here are more comments about this discussion, including Yann LeCun's recent comments about ELM: http://www.reddit.com/r/MachineLearning/comments/34u0go/yann_lecun_whats_so_great_about_extreme_learning/ They also discuss the performance of ELM and so on. This is a quite recent discussion, so I think we should be careful with this issue before integrating this method into Caffe.
@lunzueta : yes, we should be careful; that's why we are discussing it, right :D ? In the comments they discuss the performance of ELM, and most of them agree that it performs well considering its simplicity and low training time. I haven't gone much deeper into those conversations, but I want to raise some points:
Moreover, the complaint was anonymous, but the paper I mentioned in the comments above is published, and I am sure it went through rigorous evaluation before acceptance, given the delicacy of the issue it addresses.
…tion to math_functions.cpp and header file
High-performance implementation of Extreme Learning Machines (fast randomized neural networks): https://github.com/akusok/hpelm , good code and paper.
Hello guys, I am an author of HPELM. I would be glad to integrate my toolbox with your code if you tell me what you want :-) About ELM in general:
Closing as out-of-scope.
Currently this pull request is in progress. I need help deciding which library should be used.
Reference: http://www.extreme-learning-machines.org
This PR includes two layers:
Issues left:
Libraries I am considering: CULA or MAGMA for the GPU. Please suggest!
Future plans: implementing other variants of ELM, such as Sparse ELM or Kernel ELM.
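For context on what a least-squares solution layer computes: a basic ELM fixes random input weights and solves only for the output weights in closed form, beta = pinv(H) @ T, where H holds the hidden activations and T the targets. Below is a minimal NumPy sketch of that training step, not code from this PR or from Caffe; the function and variable names are my own, and a real layer would use CULA/MAGMA on the GPU as discussed above.

```python
import numpy as np

def elm_train(X, T, n_hidden=64, seed=0):
    """Basic ELM training: random fixed hidden layer + least-squares output weights.

    X: (n_samples, n_features) inputs; T: (n_samples, n_outputs) targets.
    Returns the random input weights/biases and the solved output weights.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    # Input weights and biases are drawn randomly and never trained.
    W = rng.standard_normal((n_features, n_hidden))
    b = rng.standard_normal(n_hidden)
    H = np.tanh(X @ W + b)  # hidden-layer activation matrix
    # The "least-squares solution" step: beta minimizing ||H @ beta - T||.
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)
    return W, b, beta

def elm_predict(X, W, b, beta):
    """Forward pass: hidden activations times the solved output weights."""
    return np.tanh(X @ W + b) @ beta
```

The single `lstsq` call is why ELM training is so fast compared to backpropagation: there is no iterative optimization, only one linear solve.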