This fork of Keras offers the following contributions:
- Caffe to Keras conversion module
- Layer-specific learning rates
- New layers for multimodal data
Contact email: marc.bolanos@ub.edu
GitHub page: https://github.com/MarcBS
MarcBS/keras has been tested with Python 2.7 and Python 3.6, and with the Theano and TensorFlow backends.
This module allows converting Caffe models to Keras for later training or testing. See this README for further information.
Please be aware that this feature is not regularly maintained. Some layers or parameter definitions introduced in newer versions of either Keras or Caffe might therefore not be compatible with the converter.
For this reason, any pull requests with updated versions of the caffe2keras converter are highly welcome!
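At its core, the conversion walks the Caffe model definition and maps each Caffe layer type onto a Keras counterpart. A toy sketch of that mapping idea (hypothetical helper names, not the converter's actual API):

```python
# Toy illustration of the Caffe-to-Keras mapping idea. The dictionary and
# translate() helper are hypothetical; the real converter also transfers
# weights and layer parameters.
CAFFE_TO_KERAS = {
    "Convolution": "Conv2D",
    "InnerProduct": "Dense",
    "Pooling": "MaxPooling2D",
    "ReLU": "Activation('relu')",
    "Softmax": "Activation('softmax')",
}

def translate(caffe_layer_types):
    """Return the Keras layer name for each Caffe layer type."""
    return [CAFFE_TO_KERAS.get(t, "Unsupported") for t in caffe_layer_types]

print(translate(["Convolution", "ReLU", "InnerProduct"]))
```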
This functionality allows adding learning rate multipliers to each learnable layer in a network. During training, each layer's multiplier scales the global learning rate, so the size of the weight updates can be tuned for each layer independently. Here is a simple example of usage:
x = Dense(100, W_learning_rate_multiplier=10.0, b_learning_rate_multiplier=10.0)(x)
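Conceptually, the multiplier simply scales the global learning rate for that layer's parameters during the update. A minimal numpy sketch of plain SGD with per-layer multipliers (illustrative only, not the fork's optimizer code):

```python
import numpy as np

# Illustrative SGD step with per-layer learning-rate multipliers.
# Layer names and the 10x/1x split mirror the Dense example above.
global_lr = 0.01
layers = {
    "dense_1": {"grad": np.ones(3), "W": np.zeros(3), "lr_multiplier": 10.0},
    "dense_2": {"grad": np.ones(3), "W": np.zeros(3), "lr_multiplier": 1.0},
}

for name, layer in layers.items():
    # Effective rate for this layer = global learning rate * its multiplier.
    layer["W"] -= global_lr * layer["lr_multiplier"] * layer["grad"]

print(layers["dense_1"]["W"])  # updated 10x faster than dense_2
```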
LSTM layers:
- LSTMCond: LSTM conditioned on the previously generated word (it receives the previous word as an additional input).
- AttLSTM: LSTM with an attention mechanism.
- AttLSTMCond: LSTM with an attention mechanism, conditioned on the previously generated word.
- AttConditionalLSTMCond: conditional LSTM, similar to Nematus, with an attention mechanism and conditioned on the previously generated word.
- AttLSTMCond2Inputs: LSTM with a double attention mechanism (one per input), conditioned on the previously generated word.
- AttLSTMCond3Inputs: LSTM with a triple attention mechanism (one per input), conditioned on the previously generated word.
- Other variants.
And their corresponding GRU versions:
- GRUCond: GRU conditioned on the previously generated word (it receives the previous word as an additional input).
- AttGRUCond: GRU with an attention mechanism, conditioned on the previously generated word.
- AttConditionalGRUCond: conditional GRU, as in Nematus, with an attention mechanism and conditioned on the previously generated word.
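The Att* layers above share the same core idea: score each encoder annotation against the current decoder state, normalize the scores, and feed the resulting weighted sum (the context vector) into the recurrent unit. A minimal numpy sketch of that idea (the shapes and dot-product scoring are illustrative, not these layers' exact equations):

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Toy attention step: 5 encoder annotations of dimension 8, plus the
# current decoder state. A real Att* layer learns a scoring function;
# here a plain dot product stands in for it.
rng = np.random.default_rng(0)
annotations = rng.normal(size=(5, 8))   # one row per source position
state = rng.normal(size=(8,))           # current decoder state

scores = annotations @ state            # one relevance score per position
alphas = softmax(scores)                # attention weights, sum to 1
context = alphas @ annotations          # weighted sum fed to the LSTM/GRU

print(context.shape)  # (8,)
```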
- ClassActivationMapping: computes Class Activation Maps, as used in GAP networks.
- CompactBilinearPooling: a compact version of bilinear pooling for merging multimodal data.
- MultiHeadAttention: multi-head attention layer, consisting of h attention layers running in parallel; the basis of the Transformer model.
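The multi-head idea can be sketched in a few lines: split the model dimension into h heads, run scaled dot-product attention in each head, and concatenate the results (the learned input/output projections are omitted; this is an illustration, not the layer's implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(q, k, v, h):
    """Toy multi-head attention: slice the feature dimension into h heads."""
    d = q.shape[-1] // h                             # per-head dimension
    outs = []
    for i in range(h):
        s = slice(i * d, (i + 1) * d)
        scores = q[:, s] @ k[:, s].T / np.sqrt(d)    # scaled dot-product
        outs.append(softmax(scores) @ v[:, s])       # per-head context
    return np.concatenate(outs, axis=-1)             # back to model dim

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 16))                         # 4 tokens, dim 16
out = multi_head_attention(x, x, x, h=4)             # self-attention
print(out.shape)  # (4, 16)
```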
You can see more practical examples in projects that use this library:
ABiViRNet for Video Description
Egocentric Video Description based on Temporally-Linked Sequences
NMT-Keras: Neural Machine Translation.
To install the library, follow these steps:
- Clone this repository:
git clone https://github.com/MarcBS/keras.git
- Add the repository path to your PYTHONPATH:
export PYTHONPATH=$PYTHONPATH:/path/to/keras
For additional information on the Deep Learning library, visit the official website www.keras.io or the GitHub repository https://github.com/keras-team/keras.