A Tutorial on Deep Learning Part 2: Autoencoders, Convolutional Neural Networks and Recurrent Neural Networks
2017-10-27 00:19
In the previous tutorial, I discussed the use of deep networks to classify nonlinear data. In addition to their ability to handle nonlinear data, deep networks also have a special strength in their flexibility, which sets them apart from other traditional machine learning models: we can modify them in many ways to suit our tasks. In the following, I will discuss the three most common modifications:
- Unsupervised learning and data compression via autoencoders, which require modifications in the loss function,
- Translational invariance via convolutional neural networks, which require modifications in the network architecture,
- Variable-sized sequence prediction via recurrent neural networks, which require modifications in the network architecture.
The flexibility of neural networks is a very powerful property. In many cases, these changes lead to great improvements in accuracy compared to the basic models that we discussed in the previous tutorial. In the last part of the tutorial, I will also explain how to parallelize the training of neural networks. This is an important topic because parallelizing neural networks has played an important role in the current deep learning movement.
2 Autoencoders
One of the first important results in Deep Learning since the early 2000s was the use of Deep Belief Networks [15] to pretrain deep networks. This approach is based on the observation that random initialization is a bad idea, and that pretraining each layer with an unsupervised learning algorithm can allow for better initial weights. Examples of such unsupervised algorithms are Deep Belief Networks, which are based on Restricted Boltzmann Machines, and Deep Autoencoders, which are based on Autoencoders. Although the first breakthrough result is related to Deep Belief Networks, similar gains were later also obtained by Autoencoders [4]. In the following section, I will only describe the Autoencoder algorithm because it is simpler to understand.
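To make the idea concrete, here is a minimal NumPy sketch (not from the original tutorial) of a single-hidden-layer autoencoder: it compresses the input into a smaller code and reconstructs it, minimizing squared reconstruction error. The layer sizes, activation, learning rate, and toy data are illustrative assumptions.

```python
import numpy as np

# Minimal single-hidden-layer autoencoder (illustrative sketch; sizes and
# hyperparameters are assumptions). It encodes x into a smaller code h and
# decodes back to a reconstruction x_hat, minimizing squared reconstruction
# error -- the loss-function modification discussed above.

rng = np.random.default_rng(0)
n_inputs, n_hidden = 20, 5                       # compress 20-dim input to a 5-dim code
W1 = rng.normal(0, 0.1, (n_hidden, n_inputs))    # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.1, (n_inputs, n_hidden))    # decoder weights
b2 = np.zeros(n_inputs)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

X = rng.normal(size=(100, n_inputs))             # toy unlabeled data
lr = 0.1
for step in range(500):
    # Forward pass: encode, then decode.
    H = sigmoid(X @ W1.T + b1)                   # codes, shape (100, n_hidden)
    X_hat = H @ W2.T + b2                        # linear reconstruction
    err = X_hat - X                              # reconstruction error
    loss = 0.5 * np.mean(np.sum(err ** 2, axis=1))

    # Backward pass: gradients of the mean squared reconstruction loss.
    n = X.shape[0]
    dW2 = err.T @ H / n
    db2 = err.mean(axis=0)
    dH = (err @ W2) * H * (1 - H)                # backprop through the sigmoid
    dW1 = dH.T @ X / n
    db1 = dH.mean(axis=0)

    # Gradient-descent update.
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

# After training, W1 and b1 could serve as the initial weights of the first
# layer of a supervised network -- the layer-wise pretraining idea described above.
```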