LLLNet

About

This is a Keras implementation of the "Look, Listen and Learn" model proposed by R. Arandjelovic and A. Zisserman at DeepMind. The model learns cross-modal features between audio and images.

Core Concept

Audio-visual correspondence (AVC) task: given an image frame and a short audio clip, the network predicts whether the two come from the same video at the same moment. Solving this task forces the visual and audio subnetworks to learn aligned cross-modal features without manual labels.
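A minimal sketch of how AVC training pairs could be assembled, assuming frames and 1-second audio clips have already been extracted and aligned per video. The function names and the `videos` data layout are illustrative, not part of this repository:

```python
import random

import numpy as np


def sample_avc_pair(videos, positive):
    """Sample one (frame, audio, label) example for the AVC task.

    `videos` is assumed to be a list of dicts whose `frames` and
    `audio_clips` entries are index-aligned (hypothetical layout).
    """
    vid = random.choice(videos)
    t = random.randrange(len(vid["frames"]))
    frame = vid["frames"][t]

    if positive:
        # Positive pair: audio from the same video at the same time step.
        audio = vid["audio_clips"][t]
        label = 1
    else:
        # Negative pair: audio from a different, randomly chosen video.
        other = random.choice([v for v in videos if v is not vid])
        audio = random.choice(other["audio_clips"])
        label = 0

    return frame, audio, label


def avc_batch(videos, batch_size=64):
    """Build a balanced mini-batch of positive and negative AVC pairs."""
    frames, audios, labels = [], [], []
    for i in range(batch_size):
        f, a, y = sample_avc_pair(videos, positive=(i % 2 == 0))
        frames.append(f)
        audios.append(a)
        labels.append(y)
    return np.stack(frames), np.stack(audios), np.array(labels)
```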

Differences from the Original Model

  • SqueezeNet is used for the visual CNN instead of the VGG-style network of the original paper; see the architecture sketch below and the Model Figure.
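A minimal Keras sketch of the two-stream architecture described above: a SqueezeNet-style visual subnetwork, a convolutional audio subnetwork over a log-spectrogram, and a fusion head that predicts correspondence. The input shapes, layer sizes, and the `fire_module` helper are assumptions for illustration and are not taken from this repository:

```python
from keras.layers import (Input, Conv2D, MaxPooling2D, Concatenate,
                          GlobalAveragePooling2D, Dense)
from keras.models import Model


def fire_module(x, squeeze, expand):
    """SqueezeNet-style fire module: 1x1 squeeze, then 1x1/3x3 expand."""
    s = Conv2D(squeeze, 1, activation="relu", padding="same")(x)
    e1 = Conv2D(expand, 1, activation="relu", padding="same")(s)
    e3 = Conv2D(expand, 3, activation="relu", padding="same")(s)
    return Concatenate()([e1, e3])


# Visual subnetwork (SqueezeNet-style); the 224x224 input size is an assumption.
img_in = Input(shape=(224, 224, 3), name="image")
v = Conv2D(64, 3, strides=2, activation="relu", padding="same")(img_in)
v = MaxPooling2D(3, strides=2)(v)
v = fire_module(v, 16, 64)
v = fire_module(v, 32, 128)
v = MaxPooling2D(3, strides=2)(v)
v = fire_module(v, 48, 192)
v = GlobalAveragePooling2D()(v)
v = Dense(128, activation="relu")(v)

# Audio subnetwork over a 1-second log-spectrogram; the shape is an assumption.
aud_in = Input(shape=(257, 199, 1), name="audio")
a = Conv2D(64, 3, activation="relu", padding="same")(aud_in)
a = MaxPooling2D(2)(a)
a = Conv2D(128, 3, activation="relu", padding="same")(a)
a = MaxPooling2D(2)(a)
a = Conv2D(256, 3, activation="relu", padding="same")(a)
a = GlobalAveragePooling2D()(a)
a = Dense(128, activation="relu")(a)

# Fusion head: concatenate both embeddings and classify correspondence (2-way).
h = Concatenate()([v, a])
h = Dense(128, activation="relu")(h)
out = Dense(2, activation="softmax", name="corresponds")(h)

model = Model(inputs=[img_in, aud_in], outputs=out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()
```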
