Neural Network Compression and Acceleration
2016-12-06 22:11
- A Survey of Deep Learning Model Compression Methods (Part 1)
- Deep Network Model Compression - CNN Compression
- Neural Network Compression: Deep Compression
- Neural Network Compression (1): Deep Compression
- Deep Learning (6): [Deep Neural Network Compression] Deep Compression (ICLR 2016 Best Paper)
- A Survey of CNN Model Compression and Acceleration Algorithms
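The Deep Compression papers listed above combine pruning, trained quantization (weight sharing), and Huffman coding. As a rough illustration of the weight-sharing stage only, here is a minimal 1-D k-means sketch that replaces each weight with its cluster centroid, so a layer can be stored as a small codebook plus per-weight indices. The function name and the toy weight matrix are illustrative, not from the paper.

```python
import numpy as np

def share_weights(weights, n_clusters=4, n_iter=20):
    """Cluster weights with 1-D k-means and replace each weight by its
    cluster centroid (Deep Compression stores only the centroid codebook
    and a small index per weight)."""
    w = weights.ravel()
    # Linear initialization over the weight range, as in the paper's best variant.
    centroids = np.linspace(w.min(), w.max(), n_clusters)
    for _ in range(n_iter):
        # Assign each weight to its nearest centroid, then update centroids.
        idx = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
        for c in range(n_clusters):
            if (idx == c).any():
                centroids[c] = w[idx == c].mean()
    idx = np.abs(w[:, None] - centroids[None, :]).argmin(axis=1)
    return centroids[idx].reshape(weights.shape), centroids, idx

w = np.array([[0.1, 0.12, -0.5],
              [-0.48, 0.9, 0.88]])
shared, centroids, idx = share_weights(w, n_clusters=3)
```

In the full pipeline the shared centroids are also fine-tuned with gradient descent; this sketch only shows the clustering step.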
Pruning
- Pruning Convolutional Neural Networks for Resource Efficient Inference
- Neural Network Compression (3): Learning both Weights and Connections for Efficient Neural Network
- Dynamic Network Surgery for Efficient DNNs
- Channel Pruning of Convolutional Layers in PyTorch
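The simplest technique behind several of the pruning entries above is magnitude pruning: zero out the smallest-magnitude weights of a layer. A minimal numpy sketch (the function name, threshold rule, and toy matrix are illustrative assumptions, not taken from any one of the listed papers):

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of smallest-magnitude weights.

    Returns the pruned weights and the boolean keep-mask; in iterative
    pruning the mask is kept fixed while the surviving weights are retrained.
    """
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights.copy(), np.ones_like(weights, dtype=bool)
    # Threshold = magnitude of the k-th smallest weight; ties at the
    # threshold are pruned as well.
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) > threshold
    return weights * mask, mask

w = np.array([[0.9, -0.05, 0.4],
              [-0.01, 0.7, -0.3]])
pruned, mask = magnitude_prune(w, 0.5)
```

Channel pruning (as in ThiNet or the PyTorch channel-pruning repos below) removes whole filters instead of individual weights, which shrinks the dense tensors and needs no sparse kernels.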
Quantization
- Neural Network Compression (8): Incremental Network Quantization: Towards Lossless CNNs with Low-precision Weights
- https://github.com/Zhouaojun/Incremental-Network-Quantization
- Network Compression: Comparing Quantization Methods
- CNN Quantization - Quantized Convolutional Neural Networks for Mobile Devices
- TensorFlow Model Quantization
- Fixed-Point Conversion of TensorFlow Models
- Why are Eight Bits Enough for Deep Neural Networks?
- Resources on TensorFlow Quantization and Pruning
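The core idea shared by the 8-bit quantization entries above is an affine mapping from floats to integers: store a scale and zero point per tensor, and round each value to the nearest of 256 levels. A minimal sketch in the same spirit (function names are illustrative; real TensorFlow quantization also handles per-channel scales and quantized arithmetic):

```python
import numpy as np

def quantize_uint8(x):
    """Asymmetric 8-bit quantization: map [min, max] onto [0, 255]."""
    lo, hi = float(x.min()), float(x.max())
    scale = (hi - lo) / 255.0 if hi > lo else 1.0
    q = np.round((x - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate floats from the stored integers."""
    return q.astype(np.float32) * scale + lo

x = np.linspace(-1.0, 1.0, 11).astype(np.float32)
q, scale, lo = quantize_uint8(x)
x_hat = dequantize(q, scale, lo)
max_err = np.abs(x - x_hat).max()
```

The rounding error is bounded by half a quantization step (scale / 2), which is why, per the "Eight Bits" article above, 8 bits are usually enough for inference: the error stays well below the noise a trained network already tolerates.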
Binary Network
- [Paper Notes] Binarized Neural Networks
- CNN Binarization - XNOR-Net: ImageNet Classification Using Binary Convolutional Neural Networks
- Deep Learning - Accelerating Network Training with Reduction and Recall
- [Network Optimization] SqueezeNet: An Ultra-Lightweight Network Explained
- BinaryConnect: Training Deep Neural Networks with binary weights during propagations
- CNN Architecture - Refining Architectures of Deep Convolutional Neural Networks
- CNN Decomposition - Factorized Convolutional Neural Networks
- Paper Notes: DeepRebirth - Model Compression via Non-weight Layers
- Paper Notes: ThiNet - A Filter-Level Model Pruning Algorithm
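The binarization idea underlying BinaryConnect and XNOR-Net can be sketched in a few lines: replace a weight tensor by its sign, and (in XNOR-Net) rescale by the mean absolute value so the binary tensor approximates the original in an L2 sense. A minimal illustration (the toy vector is an assumption; the real methods binarize during training and keep full-precision weights for the update):

```python
import numpy as np

def binarize(weights):
    """XNOR-Net-style binarization: sign(W) scaled by alpha = mean(|W|).

    BinaryConnect is the alpha = 1 special case.
    """
    alpha = np.abs(weights).mean()
    b = np.where(weights >= 0, 1.0, -1.0)  # ties at 0 map to +1
    return alpha * b, alpha

w = np.array([0.5, -0.25, 0.75, -1.0])
wb, alpha = binarize(w)
```

With binary weights, the multiply-accumulates in a convolution reduce to sign flips and additions (or XNOR/popcount when activations are also binarized), which is where the claimed speedups come from.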
http://www.jianshu.com/u/f5c90c3856bb
https://github.com/yihui-he/channel-pruning
https://github.com/jacobgil/pytorch-pruning
https://github.com/eeric/channel_prune
https://github.com/tostq/Caffe-Python-Tutorial/blob/master/prune.py
https://github.com/tostq/Caffe-Python-Tutorial/blob/master/quantize.py
https://github.com/IntelLabs/SkimCaffe
https://github.com/WWLoveBasketball/PRUNE-BASED-ON-TENSOTFLOW
https://github.com/Aaron-Zhao123/LeNet5-testing
https://github.com/garion9013/impl-pruning-TF
https://github.com/garion9013/impl-pruning-caffemodel
https://github.com/shekkizh/TensorflowProjects/tree/master/Model_Pruning
https://github.com/ex4sperans/pruning_with_tensorflow
https://github.com/DNNToolBox/Net-Trim-v1
https://github.com/DAVIDNEWGATE/Project
https://github.com/yiwenguo/Dynamic-Network-Surgery
https://github.com/ZhouYuSong/caffe-pruned
https://github.com/sh0416/pruning
https://github.com/liuzhuang13/slimming
- Network Slimming
- Iterative Pruning