
【Deep Learning】【Caffe Utility Tool 1】Note 23: Usage of the Caffe utility tool convert_cifar_data on Windows

2017-08-09 16:24
/*********************************************************************************************************************************
File description:
    【1】This script converts the CIFAR dataset to the leveldb format used by caffe to perform classification.
    【2】This program converts the CIFAR-10 dataset into the LEVELDB (or LMDB) database format used for the classification task.
Usage:
    convert_cifar_data input_folder output_db_file db_type
    【1】input_folder: the folder that contains the image/binary data files, for example:
        E://caffeInstall2013CUDAVersion//caffe-master//data//cifar10//cifar-10-binary//cifar-10-batches-bin//
    【2】output_db_file: the folder in which the generated LEVELDB or LMDB database is placed, for example:
        E://caffeInstall2013CUDAVersion//caffe-master//examples//cifar10//
    【3】db_type: the type of database to generate [leveldb/lmdb]
【Note 0】:
    The comment in the original source is slightly inaccurate: it mentions only leveldb, while the tool can also write lmdb.
【Note 1】:
    *.bat command-line format: convert_cifar_data input_folder output_db_file db_type
【Note 2】:
    Instead of passing the arguments on the command line, the three parameters can be hard-coded directly in the program,
    as in main() below:
        input_folder  = "E://caffeInstall2013CUDAVersion//caffe-master//data//cifar10//cifar-10-binary//cifar-10-batches-bin//";
        output_folder = "E://caffeInstall2013CUDAVersion//caffe-master//examples//cifar10//";
        db_type       = "lmdb";
Download addresses:
    【1】Official site: The CIFAR dataset can be downloaded at http://www.cs.toronto.edu/~kriz/cifar.html
    【2】The binary version downloaded from that page is the same data that is obtained by running the shell script
        .\caffe-master\data\cifar10\get_cifar10.sh
**********************************************************************************************************************************/
#include <fstream>                             //【1】STL file stream header
#include <string>                              //【2】STL string container header
#include <cstdlib>                             //【3】for std::system("pause") at the end of main()

#include "boost/scoped_ptr.hpp"
#include "glog/logging.h"
#include "google/protobuf/text_format.h"
#include "stdint.h"

#include "caffe/proto/caffe.pb.h"
#include "caffe/util/db.hpp"
#include "caffe/util/format.hpp"

using caffe::Datum;
using boost::scoped_ptr;
using std::string;
namespace db = caffe::db;

/***********************************************************************************************************************
File description:
    Definition of the global constants
************************************************************************************************************************/
const int kCIFARSize = 32;
const int kCIFARImageNBytes = 3072;
const int kCIFARBatchSize = 10000;
const int kCIFARTrainBatches = 5;
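// Background on these constants: in the CIFAR-10 binary version, each record is 1 label byte (0-9) followed by
// 3072 pixel bytes (3 channels x 32 x 32, stored as 1024 red, then 1024 green, then 1024 blue values in row-major
// order). Each batch file (data_batch_1.bin ... data_batch_5.bin and test_batch.bin) holds 10000 such records,
// i.e. exactly 10000 * 3073 = 30,730,000 bytes. read_image() below consumes exactly one record per call.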

void read_image(std::ifstream* file, int* label, char* buffer)
{
    char label_char;
    file->read(&label_char, 1);                //【1】Read the leading label byte of the record
    *label = label_char;                       //【2】Promote the label byte to an int (0-9)
    file->read(buffer, kCIFARImageNBytes);     //【3】Read the 3072 bytes of pixel data
    return;
}

void convert_dataset(const string& input_folder, const string& output_folder, const string& db_type)
{
    scoped_ptr<db::DB> train_db(db::GetDB(db_type));
    train_db->Open(output_folder + "/cifar10_train_" + db_type, db::NEW);
    scoped_ptr<db::Transaction> txn(train_db->NewTransaction());
    // Data buffer
    int   label;
    char  str_buffer[kCIFARImageNBytes];
    Datum datum;
    datum.set_channels(3);
    datum.set_height(kCIFARSize);
    datum.set_width(kCIFARSize);

    LOG(INFO) << "Writing Training data";
    for (int fileid = 0; fileid < kCIFARTrainBatches; ++fileid)
    {
        // Open files
        LOG(INFO) << "Training Batch " << fileid + 1;
        std::string   batchFileName = input_folder + "/data_batch_" + caffe::format_int(fileid + 1) + ".bin";
        std::ifstream data_file(batchFileName.c_str(), std::ios::in | std::ios::binary);
        CHECK(data_file) << "Unable to open train file #" << fileid + 1;
        for (int itemid = 0; itemid < kCIFARBatchSize; ++itemid)
        {
            read_image(&data_file, &label, str_buffer);
            datum.set_label(label);
            datum.set_data(str_buffer, kCIFARImageNBytes);
            string out;
            CHECK(datum.SerializeToString(&out));
            txn->Put(caffe::format_int(fileid * kCIFARBatchSize + itemid, 5), out);
        }
    }
    txn->Commit();
    train_db->Close();

    LOG(INFO) << "Writing Testing data";
    scoped_ptr<db::DB> test_db(db::GetDB(db_type));
    test_db->Open(output_folder + "/cifar10_test_" + db_type, db::NEW);
    txn.reset(test_db->NewTransaction());
    // Open files
    std::ifstream data_file((input_folder + "/test_batch.bin").c_str(), std::ios::in | std::ios::binary);
    CHECK(data_file) << "Unable to open test file.";
    for (int itemid = 0; itemid < kCIFARBatchSize; ++itemid)
    {
        read_image(&data_file, &label, str_buffer);
        datum.set_label(label);
        datum.set_data(str_buffer, kCIFARImageNBytes);
        string out;
        CHECK(datum.SerializeToString(&out));
        txn->Put(caffe::format_int(itemid, 5), out);
    }
    txn->Commit();
    test_db->Close();
}
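
/***********************************************************************************************************************
Output description:
    convert_dataset() creates two databases under output_folder:
        cifar10_train_<db_type>  --  50000 training records (5 batches x 10000)
        cifar10_test_<db_type>   --  10000 test records
    Each record is a serialized caffe::Datum, and its key is the 5-digit, zero-padded running index produced by
    caffe::format_int(..., 5) ("00000", "00001", ...), so the records keep their original order when the database
    is iterated in key order.
************************************************************************************************************************/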

int main(int argc, char** argv)
{
    FLAGS_alsologtostderr = 1;

    //【1】Hard-coded arguments instead of command-line parameters (see Note 2 in the header comment)
    const string input_folder  = "E://caffeInstall2013CUDAVersion//caffe-master//data//cifar10//cifar-10-binary//cifar-10-batches-bin//";
    const string output_folder = "E://caffeInstall2013CUDAVersion//caffe-master//examples//cifar10//";
    const string db_type       = "lmdb";

    google::InitGoogleLogging(argv[0]);
    convert_dataset(input_folder, output_folder, db_type);

    std::system("pause");
    return 0;
}
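The program above assumes the CIFAR-10 binaries have already been downloaded. Before running it, a quick way to check that a downloaded batch file is complete is to verify its size: each record is 1 label byte plus 3072 pixel bytes and each batch holds 10000 records, so a valid batch file is exactly 10000 * 3073 = 30,730,000 bytes. A minimal, self-contained sketch (the path is just the example path used in this post, adjust it to your own installation):

#include <fstream>
#include <iostream>

int main()
{
    //【1】Path of one downloaded batch file (example path, adjust as needed)
    const char* path = "E://caffeInstall2013CUDAVersion//caffe-master//data//cifar10//"
                       "cifar-10-binary//cifar-10-batches-bin//data_batch_1.bin";

    //【2】Open at the end of the file so tellg() directly gives the file size
    std::ifstream file(path, std::ios::in | std::ios::binary | std::ios::ate);
    if (!file)
    {
        std::cerr << "Cannot open " << path << std::endl;
        return 1;
    }
    const std::streamoff size     = file.tellg();
    const std::streamoff expected = 10000LL * (1 + 3072);   // 30,730,000 bytes per batch file
    std::cout << "size = " << size << ", expected = " << expected
              << (size == expected ? "  (OK)" : "  (incomplete download?)") << std::endl;
    return 0;
}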
The prepared data: the downloaded CIFAR-10 binary batch files (data_batch_1.bin ... data_batch_5.bin and test_batch.bin) located in the cifar-10-batches-bin folder referenced above.

The final output is the converted data in LMDB database format: the cifar10_train_lmdb and cifar10_test_lmdb folders created under examples//cifar10.
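
To confirm that the conversion succeeded, the generated database can be read back with the same caffe::db wrapper used above. The following is a minimal sketch (it assumes the program is built against the same Caffe source tree and that the output path is the example path used in this post); it prints the key, label and shape of the first few training records:

#include <iostream>
#include "boost/scoped_ptr.hpp"
#include "caffe/proto/caffe.pb.h"
#include "caffe/util/db.hpp"

int main()
{
    //【1】Open the training database that convert_cifar_data generated (read-only)
    boost::scoped_ptr<caffe::db::DB> db(caffe::db::GetDB("lmdb"));
    db->Open("E://caffeInstall2013CUDAVersion//caffe-master//examples//cifar10//cifar10_train_lmdb", caffe::db::READ);
    boost::scoped_ptr<caffe::db::Cursor> cursor(db->NewCursor());

    //【2】Walk the first five records and decode each serialized Datum
    int count = 0;
    for (cursor->SeekToFirst(); cursor->valid() && count < 5; cursor->Next(), ++count)
    {
        caffe::Datum datum;
        datum.ParseFromString(cursor->value());
        std::cout << "key = "     << cursor->key()
                  << ", label = " << datum.label()
                  << ", shape = " << datum.channels() << "x" << datum.height() << "x" << datum.width()
                  << std::endl;
    }
    db->Close();
    return 0;
}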
