Reproducing the Demo of Real-World Multiobject, Multigrasp Detection
The paper's resources and what they are for
https://github.com/ivalab  # main page for the paper's resources; the grasp_multiObject_multiGrasp repo there is the guide for reproducing the demo
Machine environment
Python 3.5 + TensorFlow + CUDA 9.1 + cuDNN 7
Problems encountered during reproduction
Category 1: missing packages such as easydict, numpy, shapely, etc.
Fix: a plain pip install is enough; if that fails with a permissions error, try sudo pip install.
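As a concrete sketch (the package names are taken from the import errors above; adjust the list to whatever your own traceback reports missing):

```shell
# Install the missing Python packages reported by the import errors
pip install easydict numpy shapely
# If that fails with a permissions error, retry with elevated rights:
# sudo pip install easydict numpy shapely
```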
Category 2: the ROS and TensorFlow Python environments conflict
```
demo_graspRGD.py --net res50 --dataset grasp
Traceback (most recent call last):
  File "./tools/demo_graspRGD.py", line 20, in <module>
    from model.test import im_detect
  File "/home/david/grasping_program/grasp_multiObject_multiGrasp-master/tools/../lib/model/test.py", line 10, in <module>
    import cv2
ImportError: /opt/ros/kinetic/lib/python2.7/dist-packages/cv2.so: undefined symbol: PyCObject_Type
```
The detailed fix is described here:
https://blog.csdn.net/qq_34544129/article/details/81946494
In the Python file being run (i.e. the one that actually does `import cv2` — find that file first), add the following at the top:

```python
import sys
sys.path.remove('/opt/ros/kinetic/lib/python2.7/dist-packages')
```

These two lines drop the path that ROS injected into sys.path, so that the cv2 package from Anaconda can be imported instead.
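A slightly more defensive variant of the same fix (the ROS path matches the error above; guarding the removal lets the script also run on machines without ROS):

```python
import sys

# ROS Kinetic prepends its Python 2.7 packages (including an old cv2.so) to sys.path
ros_path = '/opt/ros/kinetic/lib/python2.7/dist-packages'

# Remove the ROS entry only if present, so `import cv2` resolves to the
# Anaconda/pip OpenCV instead of the ROS build
if ros_path in sys.path:
    sys.path.remove(ros_path)

# import cv2  # now safe: the ROS cv2.so no longer shadows the real package
```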
Category 3: incompatibilities during the run
1. Directory not found
Just create the missing directory under the project root. The author seems to have uploaded only part of the package, so for any path that is missing, create the folders yourself and put the needed files inside.
A missing file is handled the same way: create the directory, then drop the downloaded file into it.
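For example (the path below is a made-up placeholder; substitute the one named in your actual error message):

```shell
# The traceback names the path it cannot find; recreate it under the project root
missing_dir="output/res50/train"   # hypothetical: use the path from the error
mkdir -p "$missing_dir"
# then move the separately downloaded file into it, e.g.:
# mv ~/Downloads/<downloaded-file> "$missing_dir"/
```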
2. TypeError: bottleneck() argument after ** must be a mapping, not tuple
Some debug prints showed:

```python
print(type(unit))  # prints <class 'tuple'>
print(unit)        # prints (256, 64, 1), a tuple
```

So the fix should be to turn the value from a tuple into a mapping, and it has to be a conversion of the variable itself.
After looking up what the fields of unit mean (see https://blog.csdn.net/stesha_chen/article/details/81870591), I added one conversion line:

```python
unit = {'depth': unit[0], 'depth_bottleneck': unit[1], 'stride': unit[2]}  # force the tuple into a mapping
```

If you instead convert with hard-coded constants ('depth': 256), tensor shapes fail to line up later on.
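The failure and the fix can be reproduced with a toy stand-in for `bottleneck` (the real one lives in the repo's resnet code; this stub only mirrors its keyword signature):

```python
def bottleneck(depth, depth_bottleneck, stride):
    """Toy stand-in for the resnet bottleneck unit; it just echoes its arguments."""
    return (depth, depth_bottleneck, stride)

unit = (256, 64, 1)   # what the debug prints showed: a plain tuple

try:
    bottleneck(**unit)           # ** needs a mapping, not a tuple
except TypeError as e:
    print(e)                     # "... argument after ** must be a mapping, not tuple"

# One-line fix: name the fields so ** can unpack them as keyword arguments
unit = {'depth': unit[0], 'depth_bottleneck': unit[1], 'stride': unit[2]}
result = bottleneck(**unit)
print(result)                    # (256, 64, 1)
```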
Summary
Next time, running a demo should go more smoothly. Missing directories, missing packages, and mismatched formats all turn out to be normal, and knowing they will show up means there is no need to panic next time.
Debugging means learning to read the error and work out what it actually says; just pasting it into Baidu every time and hunting for a one-to-one answer is mechanical. Learn to reason it through.
Follow-up work:
- Read the paper further, learn to build my own dataset, and run a training pass with TensorFlow
- Understand the TensorFlow framework in depth and then modify it