[Repost] Project on Learning Deep Belief Nets
2015-04-20 19:04
Project on Learning Deep Belief Nets
Deep Belief Nets (DBN's) will be explained in the lecture on Oct 29. Instead of learning layers of features by backpropagating errors, they learn one layer at a time by trying to build a generative model of the data or of the activities of the feature detectors in the layer below. After they have learned features in this way, they can be fine-tuned with backpropagation. Their main advantage is that they can learn the layers of features from large sets of unlabelled data.

The data and the code
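The one-layer-at-a-time scheme above can be sketched in a few lines of numpy (a simplified CD-1 illustration, not the course's Matlab code; the layer sizes, learning rate, and epoch counts are placeholders you would tune):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_rbm(data, n_hidden, epochs=3, lr=0.1, seed=0):
    """Train one RBM layer with single-step contrastive divergence (CD-1)."""
    rng = np.random.default_rng(seed)
    n_visible = data.shape[1]
    W = 0.01 * rng.standard_normal((n_visible, n_hidden))
    b_vis = np.zeros(n_visible)
    b_hid = np.zeros(n_hidden)
    for _ in range(epochs):
        # Positive phase: hidden probabilities given the data.
        h_prob = sigmoid(data @ W + b_hid)
        h_sample = (rng.random(h_prob.shape) < h_prob).astype(float)
        # Negative phase: one Gibbs step back down and up again.
        v_recon = sigmoid(h_sample @ W.T + b_vis)
        h_recon = sigmoid(v_recon @ W + b_hid)
        # CD-1 update: difference of pairwise correlations, batch-averaged.
        n = len(data)
        W += lr * (data.T @ h_prob - v_recon.T @ h_recon) / n
        b_vis += lr * (data - v_recon).mean(axis=0)
        b_hid += lr * (h_prob - h_recon).mean(axis=0)
    return W, b_hid

def pretrain_dbn(data, layer_sizes):
    """Greedy layer-wise pretraining: each RBM models the feature
    activities of the layer below it."""
    layers, x = [], data
    for n_hidden in layer_sizes:
        W, b = train_rbm(x, n_hidden)
        layers.append((W, b))
        x = sigmoid(x @ W + b)   # hidden probabilities feed the next layer
    return layers

# Toy usage: random binary "images" the size of MNIST digits (28x28 = 784).
rng = np.random.default_rng(0)
toy = (rng.random((100, 784)) < 0.1).astype(float)
dbn = pretrain_dbn(toy, [500, 500, 2000])
```

The final hidden probabilities (`x` inside `pretrain_dbn`) are the learned features that the fine-tuning stage, or an SVM, would consume.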
For this project you should use the MNIST dataset. This data and the code for training DBN's are available here. You want the code for classifying images, not the code for deep autoencoders.

The main point of the project
You should investigate how the relative performance of three learning methods changes as you change the relative amounts of labelled and unlabelled data. The three methods are DBN's, SVMlight, and SVMlight applied to the features learned by DBN's. SVMlight is explained in several places on the web. We will add more information soon on the easiest way to use SVMlight with Matlab for multiclass classification (as opposed to 2-class classification, which is easier). Support vector machines will be explained in the lecture on Nov 12, but you don't need to understand much about them to run SVMlight.

Choosing the data
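Whichever features you feed it, SVMlight reads its training examples from plain text files in a sparse `label index:value` format, with 1-based feature indices and, for the multiclass version, class labels running from 1 to k. A small exporter along these lines (a numpy-based sketch, not part of the provided code) works for raw pixels or for DBN hidden activities:

```python
import numpy as np

def write_svmlight(X, y, path):
    """Write a dense feature matrix in SVMlight's sparse text format:
    one '<label> <index>:<value> ...' line per example, indices 1-based,
    zero-valued features omitted."""
    with open(path, "w") as f:
        for xi, yi in zip(X, y):
            feats = " ".join(f"{j + 1}:{v:.6g}"
                             for j, v in enumerate(xi) if v != 0)
            f.write(f"{int(yi)} {feats}\n")

# Example: two 4-pixel "images" with class labels 1 and 2.
write_svmlight(np.array([[0.0, 0.5, 0.0, 1.0],
                         [0.25, 0.0, 0.0, 0.0]]),
               np.array([1, 2]), "train.dat")
```

Omitting zero-valued features keeps the files small, which matters for MNIST since most pixels in each digit image are blank.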
You have to decide how much labelled and unlabelled data to use. Starting with small amounts of labelled data is a good idea because it makes the supervised learning fast and gives a high error rate, which makes comparisons easier.

Designing the Deep Belief Network
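Both this choice and the design decisions below are easier to explore if the split is a parameter of your experiments. A minimal sketch (a hypothetical helper, numpy only) that carves a dataset into a small labelled subset and a large unlabelled remainder:

```python
import numpy as np

def split_labelled(X, y, n_labelled, seed=0):
    """Randomly hold out n_labelled examples with their labels;
    the rest are returned without labels, for unsupervised pre-training."""
    idx = np.random.default_rng(seed).permutation(len(X))
    lab, unlab = idx[:n_labelled], idx[n_labelled:]
    return X[lab], y[lab], X[unlab]

# Example: keep 100 labelled cases out of 1000 (placeholder arrays).
X = np.zeros((1000, 784))
y = np.zeros(1000, dtype=int)
X_lab, y_lab, X_unlab = split_labelled(X, y, n_labelled=100)
```

Fixing the random seed makes the same split reusable across all three methods, so their error rates are compared on identical data.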
You have to decide how many hidden layers the DBN should have, how many units each layer should have, and how long the pre-training should last. This project does not involve writing your own code from scratch, so we expect you to run sensible experiments to choose these numbers (and the numbers of labelled and unlabelled examples), and your work will be evaluated on how well you do this.

Extra work for teams
If you are working as a pair, you should also compare the three methods above with a single feedforward network trained with backpropagation on the labelled data.
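This baseline can also be sketched in numpy (a simplified tanh/softmax network trained by full-batch gradient descent; the hidden-layer size, learning rate, and epoch count are placeholders you would tune):

```python
import numpy as np

def train_mlp(X, y, n_hidden=100, n_classes=10, epochs=50, lr=0.1, seed=0):
    """One-hidden-layer tanh network with a softmax output,
    trained by backpropagation on labelled data only."""
    rng = np.random.default_rng(seed)
    W1 = 0.01 * rng.standard_normal((X.shape[1], n_hidden))
    b1 = np.zeros(n_hidden)
    W2 = 0.01 * rng.standard_normal((n_hidden, n_classes))
    b2 = np.zeros(n_classes)
    Y = np.eye(n_classes)[y]                       # one-hot targets
    for _ in range(epochs):
        h = np.tanh(X @ W1 + b1)                   # forward pass
        logits = h @ W2 + b2
        p = np.exp(logits - logits.max(axis=1, keepdims=True))
        p /= p.sum(axis=1, keepdims=True)          # softmax probabilities
        d_logits = (p - Y) / len(X)                # cross-entropy gradient
        d_h = (d_logits @ W2.T) * (1.0 - h ** 2)   # backprop through tanh
        W2 -= lr * (h.T @ d_logits)
        b2 -= lr * d_logits.sum(axis=0)
        W1 -= lr * (X.T @ d_h)
        b1 -= lr * d_h.sum(axis=0)
    return lambda Xn: np.argmax(np.tanh(Xn @ W1 + b1) @ W2 + b2, axis=1)

# Toy usage on random data standing in for MNIST pixels.
rng = np.random.default_rng(1)
X_toy = rng.random((60, 784))
y_toy = rng.integers(0, 10, size=60)
predict = train_mlp(X_toy, y_toy, n_hidden=20, epochs=20)
preds = predict(X_toy)
```

Because this network sees only the labelled examples, its performance relative to the DBN should change as you shrink the labelled set, which is exactly the comparison the project asks for.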