
READING NOTE:LCNN: Lookup-based Convolutional Neural Network

2016-11-24 08:20
TITLE: LCNN: Lookup-based Convolutional Neural Network

AUTHOR: Hessam Bagherinezhad, Mohammad Rastegari, Ali Farhadi

ASSOCIATION: University of Washington, Allen Institute for AI

FROM: arXiv:1611.06473

CONTRIBUTIONS

LCNN, a lookup-based convolutional neural network, is introduced. It encodes each convolution as a few lookups into a dictionary that is trained to cover the space of weights in CNNs.

METHOD

The main idea of the work is to decode the weights of a convolutional layer using a dictionary D and two tensors, I and C, as the following figure illustrates.



where k is the size of the dictionary D and m is the number of input channels. The weight tensor can be constructed as a linear combination of S words in dictionary D as follows:

W[:,r,c] = ∑_{t=1}^{S} C[t,r,c] · D[I[t,r,c], :]   ∀ r, c
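The weight-construction formula above can be sketched in numpy. All sizes here (k, m, S, filter size) are illustrative assumptions, not values from the paper:

```python
import numpy as np

# Hypothetical sizes (assumptions for illustration): dictionary size k,
# input channels m, S lookups per spatial position, and a 3x3 filter.
k, m, S = 8, 16, 3
r_dim, c_dim = 3, 3

rng = np.random.default_rng(0)
D = rng.normal(size=(k, m))                      # dictionary: k vectors of length m
I = rng.integers(0, k, size=(S, r_dim, c_dim))   # lookup indices into D
C = rng.normal(size=(S, r_dim, c_dim))           # lookup coefficients

# W[:,r,c] = sum_t C[t,r,c] * D[I[t,r,c], :]
W = np.zeros((m, r_dim, c_dim))
for r in range(r_dim):
    for c in range(c_dim):
        for t in range(S):
            W[:, r, c] += C[t, r, c] * D[I[t, r, c], :]
```

The loop makes the indexing explicit; the same construction can be written in one vectorized line as `np.einsum('trc,trcm->mrc', C, D[I])`.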

where S is the number of components in the linear combination. The convolution can then be computed quickly using the shared dictionary: we convolve the input with all of the dictionary vectors, and then compute the output according to I and C. Since the dictionary D is shared among all weight filters in a layer, we can precompute the convolution between the input tensor X and all the dictionary vectors. The tensor S is defined as:

S[i,:,:] = X ∗ D[i,:]   ∀ 1 ≤ i ≤ k

the convolution operation can be computed as

X ∗ W = S ∗ P

where P can be expressed by I and C :

P[j,r,c] = C[t,r,c]  if ∃ t : I[t,r,c] = j
P[j,r,c] = 0         otherwise
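The identity X ∗ W = S ∗ P can be checked numerically. For brevity this sketch assumes a 1×1 filter (so each convolution reduces to a channel-wise dot product at every pixel); the paper handles general r × c filters, and all sizes below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
k, m, S_comp = 8, 16, 3   # dictionary size, channels, lookups per filter
h, w = 5, 5               # spatial size of the input

X = rng.normal(size=(m, h, w))       # input tensor
D = rng.normal(size=(k, m))          # shared dictionary
I = rng.integers(0, k, size=S_comp)  # lookup indices for one 1x1 filter
C = rng.normal(size=S_comp)          # lookup coefficients

# Direct path: build the filter weights W from D, I, C, then convolve
# (a 1x1 convolution is a per-pixel dot product over channels).
W = (C[:, None] * D[I]).sum(axis=0)            # shape (m,)
direct = np.einsum('c,chw->hw', W, X)

# Lookup path: precompute S[i] = X * D[i,:] once per layer, then combine
# only the S_comp looked-up maps for this filter (this is S * P, with P
# nonzero only at the indices in I).
S_tensor = np.einsum('kc,chw->khw', D, X)      # k precomputed feature maps
lookup = np.einsum('t,thw->hw', C, S_tensor[I])
```

The precomputation of S_tensor is shared by every filter in the layer, which is where the speedup comes from: each filter then touches only S_comp of the k maps.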

The idea is illustrated in the following figure:



Thus the dictionary and the lookup parameters can be trained jointly.

ADVANTAGES

It speeds up inference.

Few-shot learning: the shared dictionary in LCNN allows a neural network to learn from very few training examples on novel categories.

LCNN needs fewer iterations to train.

DISADVANTAGES

Accuracy is hurt because the weights are approximated.