Least Square Solution Layer (for ELM) #2565

Closed
wants to merge 60 commits
Commits (60)
33a0c28
Adding ls_layer.cpp declarations only
Macbull Jun 6, 2015
98ae3b6
Adding ls_layer.cpp LayerSetUp definition
Macbull Jun 6, 2015
bbdbd53
Adding ls_layer.cpp forward definition
Macbull Jun 6, 2015
0b14dcf
Adding LSLayer to common_layers.hpp
Macbull Jun 6, 2015
64b1ac9
Adding LSLayer parameters to caffe.proto
Macbull Jun 6, 2015
2cfa399
Adding Transpose Layer declarations
Macbull Jun 6, 2015
66daaed
Adding Transpose Layer definitions
Macbull Jun 6, 2015
cc9d1df
Adding transpose layer to common_layers.hpp
Macbull Jun 6, 2015
5ff5757
Merge branch 'master' into ELM
Macbull Jun 6, 2015
aefb08f
Fixing a error in caffe.proto(parameter was not named)
Macbull Jun 8, 2015
946d1ed
removing gpu function declarations
Macbull Jun 9, 2015
1157590
fixed missing semicolon
Macbull Jun 9, 2015
785a03f
removing gpu functions
Macbull Jun 9, 2015
af7d1a7
Adding omatcopy and dgels function to math_functions.hpp
Macbull Jun 10, 2015
9dd1458
Adding definitions of omatcopy and dgels(sgels) to math_functions.cpp
Macbull Jun 10, 2015
0d93392
Adding omatcopy for double
Macbull Jun 10, 2015
a1206c3
replacing float with Dtype for omatcopy
Macbull Jun 10, 2015
10172bf
Transpose layer completed
Macbull Jun 10, 2015
c91ce87
added call to lapack dgels function in LS_layes
Macbull Jun 10, 2015
9f31110
changing name of cafde_cpu_dgels to more generic caffe_cpu_gels
Macbull Jun 10, 2015
e4a2bf4
Updating name of dgels in function call
Macbull Jun 10, 2015
7134040
Fixing some annoying bugs
Macbull Jun 12, 2015
a59f045
Added new type of weight sharing, COPY, where the target layer does n…
Macbull Jun 18, 2015
a904863
removing share_mode fron proto, explicitly added condition for transp…
Macbull Jun 18, 2015
334f54f
Mentioning purpose of this repo in ReadMe
Macbull Jun 16, 2015
82085aa
Update README.md
Macbull Jun 19, 2015
575402c
Update README.md
Macbull Jun 19, 2015
9467929
Update README.md
Macbull Jun 19, 2015
976b1e8
Update README.md
Macbull Jun 19, 2015
954c94f
Update README.md
Macbull Jun 19, 2015
b4d22e2
Update README.md
Macbull Jun 19, 2015
3862e3a
Update README.md
Macbull Jun 19, 2015
35207c3
Update README.md
Macbull Jun 20, 2015
9f15f5b
reading remote sensing dataset from CSV files
Macbull Jun 20, 2015
e80043d
fixing ls_layer
Macbull Jun 20, 2015
068fd9c
fixing label reading
Macbull Jun 20, 2015
b372d0c
fixing ls layer
Macbull Jun 20, 2015
bccc29f
Merge branch 'ELM' of https://github.com/Macbull/ELM-Caffe into ELM
Macbull Jun 20, 2015
8b13040
csv file was space seperated, so made the change in functions accordi…
Macbull Jun 20, 2015
51d1193
adding ?gelss to math_functions.hpp
Macbull Jun 20, 2015
267f114
adding ?gelss definations to cpp
Macbull Jun 20, 2015
4b8e3f3
using gelss in ls_layer
Macbull Jun 21, 2015
d8f2b6e
fixing gelss (float)
Macbull Jun 21, 2015
fd88c82
fixing gelss for double and showing status of gelss in LOG
Macbull Jun 21, 2015
1c0a068
no change at all
Macbull Jun 22, 2015
07ba7bc
adding gelsd(min-norm solution using svd and divide and conquer) func…
Macbull Jun 22, 2015
9a9d557
using gelsd in LS layer
Macbull Jun 22, 2015
c7efd9a
Merge branch 'ELM' into remotesensing
Macbull Jun 22, 2015
32938f8
fixing capital letters and overwriting problem of y and beta
Macbull Jun 22, 2015
9e03394
Deleting Rank after computation in ?gels? functions
Macbull Jun 22, 2015
dbc6c94
removing remotesensing from ELM branch
Macbull Jun 22, 2015
efc20cb
cpp file for training or testing ELM Classification
Macbull Jun 24, 2015
3a55ebd
adding protofile and model and training dataset for ELM CLassification
Macbull Jun 24, 2015
c465f2f
solve test output[0] and out[1] problem
Macbull Jun 24, 2015
177f889
making seperate prototxt for train and test , due to input of net(lac…
Macbull Jun 24, 2015
da357e3
Uniform filler and bias added to inner product layer
Macbull Jun 24, 2015
3a6e261
Adding train prototxt (missed in last commit
Macbull Jun 24, 2015
eff989f
fixing alignment of tr.prototxt
Macbull Jun 24, 2015
a98656e
fixing alignment of ts.prototxt
Macbull Jun 24, 2015
bf0c857
fixing alignment of ts.prototxt again
Macbull Jun 24, 2015
27 changes: 27 additions & 0 deletions README.md
@@ -1,3 +1,30 @@
Note: This repo is not intended to be merged into Caffe, and so it is maintained as a separate repo rather than as a fork.

This repo aims to extend Caffe with the layers required to construct an Extreme Learning Machine (ELM). Currently, only the Least Squares layer is available for constructing an ELM (by combining it with the Inner Product and Sigmoid layers). Iterative least-squares support is under development to make the ELM Online Sequential.
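
For reference, the computation described above can be written compactly as follows (the symbols X, W, b, H, Y and β are chosen here for exposition and do not appear in the code):

```latex
% Hidden layer with randomly initialized weights W and bias b
% (the Inner Product + Sigmoid pair):
H = \sigma(X W + b)
% Output weights computed by the LS layer as the least-squares solution:
\beta = \arg\min_{\beta'} \lVert H \beta' - Y \rVert_F^2 = H^{\dagger} Y
```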

Additionally, a Transpose layer is provided with this repo to make the construction of stacked ELM auto-encoders possible.

#### LS Layer
- Bottom: "data"
- Bottom: "labels"
- Param{ name: "beta" }
- // no top
- beta (β) is the weight matrix computed as the least-squares solution of Hβ = Y, where H is "data" (bottom[0]) and Y is "labels" (bottom[1]); see the sketch below.
- Requires the Intel MKL library.
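
A minimal sketch of the underlying solve, assuming MKL's LAPACKE driver `?gelsd` (which the commit history above adds and uses in the LS layer). The function name `solve_beta` and its signature are illustrative, not the PR's `caffe_cpu_gels`:

```cpp
#include <mkl.h>       // LAPACKE_sgelsd (requires MKL, as noted above)
#include <algorithm>
#include <vector>

// Illustrative sketch only: least-squares solve of H * beta = Y with the
// SVD / divide-and-conquer driver (?gelsd). H is num_samples x num_hidden and
// Y is num_samples x num_outputs, both row-major; on success the first
// num_hidden rows of Y are overwritten with beta (num_hidden x num_outputs).
// Assumes num_samples >= num_hidden; otherwise Y must have num_hidden rows.
lapack_int solve_beta(int num_samples, int num_hidden, int num_outputs,
                      float* H, float* Y) {
  std::vector<float> singular_values(std::min(num_samples, num_hidden));
  lapack_int rank = 0;
  return LAPACKE_sgelsd(LAPACK_ROW_MAJOR,
                        num_samples, num_hidden, num_outputs,
                        H, num_hidden,            // lda = cols of H (row-major)
                        Y, num_outputs,           // ldb = cols of Y (row-major)
                        singular_values.data(),
                        -1.0f,                    // rcond < 0: machine precision
                        &rank);                   // effective rank of H
}
```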


#### Transpose Layer
- // no bottom
- Param{ name: "beta" }
- Param{ name: "transposed_beta" }
- // no top
- Writes the transpose of "beta" into "transposed_beta", so that another layer (e.g. an Inner Product layer) can share it as its weights; see the sketch below.
- Currently requires the Intel MKL library, but will soon be updated to work without it.
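
A minimal sketch of the transpose step, assuming MKL's out-of-place `?omatcopy` extension (which the commits above add to math_functions). The function name `transpose_blob` is illustrative, not the layer's actual code:

```cpp
#include <mkl.h>  // mkl_somatcopy, an MKL BLAS-like extension

// Illustrative sketch only: out-of-place transpose of a row-major rows x cols
// matrix `beta` into `beta_t` (cols x rows), e.g. param "beta" -> "transposed_beta".
void transpose_blob(int rows, int cols, const float* beta, float* beta_t) {
  mkl_somatcopy('R', 'T',       // row-major ordering, transpose operation
                rows, cols,     // dimensions of the source matrix
                1.0f,           // alpha scale factor (1.0 = plain copy)
                beta, cols,     // source and its leading dimension
                beta_t, rows);  // destination and its leading dimension
}
```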


#### Other changes include:
- Addition of some functions (?omatcopy transpose and ?gels/?gelss/?gelsd least-squares wrappers) to math_functions.cpp and math_functions.hpp.
- Some changes to net.cpp so that the Transpose layer can be set up to share the weights of any layer without knowing the size of the blob.

# Caffe

Caffe is a deep learning framework made with expression, speed, and modularity in mind.
81 changes: 81 additions & 0 deletions examples/elm/elm_classification_tr.prototxt
@@ -0,0 +1,81 @@
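# ELM classification, training phase — roughly: "inner1" projects the data with
# random uniform weights and bias, "sig1" gives the hidden activations H, the
# "LS" layer solves H * beta = labels and stores beta in the shared param
# "shared", "Transpose" writes beta^T into "transposed", and "inner2" reuses
# "transposed" as its weights so its output is H * beta, squashed by "sig2".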
name: "asr"
input: "data"
input_dim: 800
input_dim: 65
input_dim: 1
input_dim: 1
input: "labels"
input_dim: 800
input_dim: 8
input_dim: 1
input_dim: 1
layer{
name: "inner1"
type: "InnerProduct"
bottom: "data"
top: "inner1"
inner_product_param: {
num_output: 300
weight_filler: {
type: "uniform"
min: -1
max: 1
}
bias_filler: {
type: "uniform"
min: -1
max: 1
}
}
}
layer {
name: "sig1"
type: "Sigmoid"
bottom: "inner1"
top: "sig1"
}

layer {
name: "ls"
type: "LS"
bottom: "sig1"
bottom: "labels"
include {
phase: TRAIN
}
param: {
name: "shared"
}

}
layer {
name: "tr"
type: "Transpose"
include {
phase: TRAIN
}
param: {
name: "shared"
}
param: {
name: "transposed"
}
}
layer {
name: "inner2"
type: "InnerProduct"
bottom: "sig1"
top: "inner2"
param: {
name: "transposed"
}
inner_product_param: {
num_output: 8
}
}
layer {
name: "sig2"
type: "Sigmoid"
bottom: "inner2"
top: "out"
}
76 changes: 76 additions & 0 deletions examples/elm/elm_classification_ts.prototxt
@@ -0,0 +1,76 @@
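# ELM classification, test phase — roughly: same structure as the training net,
# but with no "labels" input; the "LS" and "Transpose" layers are restricted to
# phase: TRAIN, so only inner1 -> sig1 -> inner2 -> sig2 is active here.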
name: "asr"
input: "data"
input_dim: 3800
input_dim: 65
input_dim: 1
input_dim: 1
layer{
name: "inner1"
type: "InnerProduct"
bottom: "data"
top: "inner1"
inner_product_param: {
num_output: 300
weight_filler: {
type: "uniform"
min: -1
max: 1
}
bias_filler: {
type: "uniform"
min: -1
max: 1
}
}
}
layer {
name: "sig1"
type: "Sigmoid"
bottom: "inner1"
top: "sig1"
}

layer {
name: "ls"
type: "LS"
bottom: "sig1"
bottom: "labels"
include {
phase: TRAIN
}
param: {
name: "shared"
}

}
layer {
name: "tr"
type: "Transpose"
include {
phase: TRAIN
}
param: {
name: "shared"
}
param: {
name: "transposed"
}
}
layer {
name: "inner2"
type: "InnerProduct"
bottom: "sig1"
top: "inner2"
param: {
name: "transposed"
}
inner_product_param: {
num_output: 8
}
}
layer {
name: "sig2"
type: "Sigmoid"
bottom: "inner2"
top: "out"
}
801 changes: 801 additions & 0 deletions examples/elm/tr_hyp

Large diffs are not rendered by default.
