
Installing the TensorFlow Object Detection API

http://blog.csdn.net/u010122972/article/details/77385793

In a terminal, cd into the models root directory ==>>  models/research

object_detection can train ssd_mobilenets; to try it out, I installed object_detection as follows.

1. Install dependencies

I am on Ubuntu 14.04, so I ran the following commands in a terminal:

sudo pip install pillow
sudo pip install lxml
sudo pip install jupyter
sudo pip install matplotlib

On Ubuntu 16.04, you can run instead:

sudo apt-get install protobuf-compiler python-pil python-lxml
sudo pip install jupyter
sudo pip install matplotlib

2. Compile the protobuf files

In a terminal, cd into the models root directory and run:

protoc object_detection/protos/*.proto --python_out=.

(This assumes protobuf is already installed.)
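If protoc is not already installed, it is available via sudo apt-get install protobuf-compiler; you can confirm the compiler is on your PATH with:

protoc --version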

3. Add the library paths

In a terminal, cd into the models root directory and run:

export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

This has to be entered again in every new terminal session; see the tip below.
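If you do not want to retype it, one option (assuming you use bash) is to append the same export, with the repository's absolute path filled in, to ~/.bashrc; /path/to/models/research below is a placeholder for wherever your checkout actually lives:

echo 'export PYTHONPATH=$PYTHONPATH:/path/to/models/research:/path/to/models/research/slim' >> ~/.bashrc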

4. Verify the installation

In a terminal, cd into the models root directory and run:

python object_detection/builders/model_builder_test.py

If it prints OK, the installation succeeded.
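To get a quick feel for the API with a pretrained model, the repository also ships a demo notebook (assuming the file is present at this path in your checkout):

jupyter notebook object_detection/object_detection_tutorial.ipynb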

===============================update===================================
http://blog.csdn.net/u010302327/article/details/78248394
Training your own pb:

1. python object_detection/train.py --train_dir object_detection/train --pipeline_config_path object_detection/VOC2012/ssd_mobilenet_v1_voc2012.config

The run failed with the following error:
https://github.com/tensorflow/models/issues/1817
2017-06-29 17:24:13.193833: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1030] Creating TensorFlow device (/gpu:7) -> (device: 7, name: Tesla K80, pci bus id: 0000:00:0b.0)
2017-06-29 17:24:15.414228: I tensorflow/core/common_runtime/simple_placer.cc:675] Ignoring device specification /device:GPU:0 for node 'prefetch_queue_Dequeue' because the input edge from 'prefetch_queue' is a reference connection and already has a device field set to /device:CPU:0
INFO:tensorflow:Restoring parameters from /home/ubuntu/models/data_xxxx/model.ckpt
INFO:tensorflow:Starting Session.
INFO:tensorflow:Saving checkpoint to path data_doliprane/model.ckpt
INFO:tensorflow:Starting Queues.
INFO:tensorflow:global_step/sec: 0
[1]    4359 killed     python object_detection/train.py --train_dir=data_xxxx


1.1 Made the following change, but the error persisted:
https://stackoverflow.com/questions/45150773/tensorflow-object-detection-training-killed-resource-starvation
To quote from the issue, with my comments:

The section in your new config will look like this:

train_input_reader: {
  tf_record_input_reader {
    input_path: "PATH_TO_BE_CONFIGURED/pet_train.record"
  }
  label_map_path: "PATH_TO_BE_CONFIGURED/pet_label_map.pbtxt"
  queue_capacity: 100      # change this number
  min_after_dequeue: 10    # change this number (strictly less than the above)
}

You can also set these for eval_input_reader. For that one I am using 20, 10 and for train I use 100, 10, although I think I could go lower. My training takes less than 8Gb of RAM.
1.2 With the following change, training finally runs:
Hi again guys, we have found a solution: changing the batch_size to one. By default this parameter is set to 32, so probably this needs too much RAM. I don't understand why this is consuming such an extreme amount of RAM, but you can change this and train a model in a normal environment.
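For reference, batch_size lives in the train_config block of the pipeline config; a minimal fragment with the change described above (all other fields left as in the sample config) looks like:

train_config: {
  batch_size: 1
  # optimizer, num_steps, data augmentation options, etc. unchanged
}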

2. Export the trained checkpoint to a pb file:

$ python object_detection/export_inference_graph.py --input_type image_tensor --pipeline_config_path object_detection/VOC2012/ssd_mobilenet_v1_voc2012.config --trained_checkpoint_prefix object_detection/train/model.ckpt-200 --output_directory object_detection/VOC2012/model/
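If the export succeeds, the output directory should contain (among the other files export_inference_graph.py writes) a frozen_inference_graph.pb and a saved_model/ directory; a quick way to confirm:

ls object_detection/VOC2012/model/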

3. TensorBoard

jiao@jiao-linux:~/code/source/tensorflow/models/research$ tensorboard --logdir='/home/jiao/code/source/tensorflow/models/research/object_detection/VOC2012/ssd_mobilenet_train_logs'

TensorBoard 0.4.0rc3 at http://jiao-linux:6006 (Press CTRL+C to quit)
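Note that TensorBoard only has events to display when --logdir points at the directory that was passed as --train_dir during training; if you trained with --train_dir object_detection/train as in step 1 above, point it there instead:

tensorboard --logdir=object_detection/train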

4. Test images with the trained model

(1) Download the label_image source:

curl -O https://raw.githubusercontent.com/tensorflow/tensorflow/r1.3/tensorflow/examples/label_image/label_image.py
(2) Read its README.md.
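Note that label_image.py is written for classification graphs. To run the detection graph exported in step 2 on a single image, a minimal Python sketch along the following lines can be used instead (the pb path, the test image name and the 0.5 score threshold are assumptions to adapt; the tensor names are the standard ones for an image_tensor export):

import numpy as np
import tensorflow as tf
from PIL import Image

# Assumed paths: adjust to your own export directory and test image
PB_PATH = 'object_detection/VOC2012/model/frozen_inference_graph.pb'
IMAGE_PATH = 'test.jpg'

# Load the frozen detection graph
graph = tf.Graph()
with graph.as_default():
    graph_def = tf.GraphDef()
    with tf.gfile.GFile(PB_PATH, 'rb') as f:
        graph_def.ParseFromString(f.read())
    tf.import_graph_def(graph_def, name='')

# The exported graph expects a uint8 batch of shape [1, height, width, 3]
image = np.expand_dims(np.array(Image.open(IMAGE_PATH).convert('RGB')), 0)

with tf.Session(graph=graph) as sess:
    boxes, scores, classes, num = sess.run(
        ['detection_boxes:0', 'detection_scores:0',
         'detection_classes:0', 'num_detections:0'],
        feed_dict={'image_tensor:0': image})

# Print detections above the (assumed) 0.5 score threshold;
# boxes are [ymin, xmin, ymax, xmax] in normalized coordinates
for box, score, cls in zip(boxes[0], scores[0], classes[0]):
    if score > 0.5:
        print('class %d  score %.2f  box %s' % (int(cls), score, box))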