
Dog Breed Identification: Fine-Tuning GoogLeNet on the Kaggle Competition (with a focus on image-label preprocessing and saving the .csv submission file)

2020-07-26 20:10

Contents

  • 1. Preprocessing the Downloaded Dataset (key section)
  • 2. Fine-Tuning a Pretrained GoogLeNet Model
  • 3. Classifying the Test Set and Saving Results in the Required Format
  • 4. Full Source Code (with detailed comments)

    In Kaggle's Dog Breed Identification competition, downloading the data from the "Data" tab on the official site yields a folder named dog-breed-identification, which contains the following.

    The train folder holds 10222 images in JPG format, each named with a random, unique id (for example 000bec180eb18c7604dcecc8fe0dba07.jpg; the files are not numbered sequentially from 1). The test folder holds 10357 images with the same format and naming scheme as train.

         

    The labels.csv file contains the labels of the training images: 10222 rows, each with two columns, the first being the image id and the second the dog's breed. There are 120 breeds in total in this dataset.

         

    The sample_submission file is a template showing the required csv format for the final submission over the test set.

    After downloading, the dataset must be preprocessed so that during training every image can be matched with its correct label. The main idea: first split a validation set off the training set for tuning hyperparameters. After the split, the data falls into four parts: the reduced training set, the validation set, the full training set, and the full test set.

    One folder is created for each of the four parts: train, valid, train_valid, and test. Inside each of these, one subfolder is created per class, named after that class and holding the images that belong to it. The labels of the first three parts are known, so each of them has 120 subfolders; the test labels are unknown, so test gets a single subfolder named unknown that holds all test images.

    1. Preprocessing the Downloaded Dataset (key section)

    1 Creating directories: os.makedirs(path)

    Checks whether a folder exists under the current working directory and creates it if it does not.

    def mkdir_if_not_exist(path):
        if not os.path.exists(os.path.join(*path)):
            os.makedirs(os.path.join(*path))
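Since Python 3.2, the same effect can be had in a single call via the exist_ok flag, which avoids the race between the existence check and the creation. A minimal sketch (the temporary directory and the 'beagle' class name are made up for illustration):

```python
import os
import tempfile

# work inside a throwaway directory so the sketch has no side effects
base = tempfile.mkdtemp()

def mkdir_if_not_exist(path):
    # equivalent to the check-then-create version above:
    # exist_ok=True makes makedirs a no-op when the directory already exists
    os.makedirs(os.path.join(*path), exist_ok=True)

mkdir_if_not_exist([base, 'train', 'beagle'])
mkdir_if_not_exist([base, 'train', 'beagle'])  # calling twice is safe

print(os.path.isdir(os.path.join(base, 'train', 'beagle')))  # True
```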

    2 os.listdir(): returns a list of the names of the files and folders in a directory

    train_files = os.listdir(os.path.join(data_dir, train_dir))

         

    train_files is the list of all image filenames in the train folder.

    3 random.shuffle(train_files)

         

    The shuffle() method randomly reorders all elements of a list in place, i.e. it shuffles the order of the images in the train folder.
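A small self-contained illustration of the in-place behavior (the filenames here are made up):

```python
import random

files = ['00a.jpg', '00b.jpg', '00c.jpg', '00d.jpg']
random.seed(0)         # fix the seed so the shuffle is reproducible
random.shuffle(files)  # reorders the list in place and returns None

print(files)           # the same four names, possibly reordered
print(sorted(files) == ['00a.jpg', '00b.jpg', '00c.jpg', '00d.jpg'])  # True
```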

    4 file.split('.')[0]

    for i, file in enumerate(train_files):
        img_id = file.split('.')[0]
        img_label = id2label[img_id]  # look up the image's class via its id
        # id2label is a dict whose key-value pairs are id -> class

         

    file is a string of the form id.jpg, so file.split('.') returns the list ['id', 'jpg'] and file.split('.')[0] yields the image's id.
    The str.split(sep, maxsplit) method slices a string on the given separator; if maxsplit is specified, the result contains at most maxsplit+1 substrings.
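The behavior can be checked directly in an interpreter:

```python
file = '000bec180eb18c7604dcecc8fe0dba07.jpg'
print(file.split('.'))     # ['000bec180eb18c7604dcecc8fe0dba07', 'jpg']
print(file.split('.')[0])  # the image id, without the extension

# maxsplit limits the number of splits: at most maxsplit+1 pieces result
print('a.b.c'.split('.', 1))  # ['a', 'b.c']
```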

    5 shutil.copy

    shutil.copy(source, destination)
    source and destination are both paths given as strings; destination may be:
    1. a file name, in which case source is copied to a file with that new name;
    2. a folder, in which case source is copied into that folder.
    shutil.copy(os.path.join(data_dir, train_dir, file), os.path.join(new_data_dir, 'train', img_label))
    # For every image in the downloaded dataset, copy it into the folder of its class.
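Both destination forms can be demonstrated with throwaway files (the paths and file contents below are fabricated, not the dataset's):

```python
import os
import shutil
import tempfile

base = tempfile.mkdtemp()
src = os.path.join(base, 'photo.jpg')
with open(src, 'w') as f:
    f.write('fake image bytes')

# destination is a file name: the copy gets the new name
renamed = shutil.copy(src, os.path.join(base, 'copy.jpg'))

# destination is a folder: the copy keeps the original file name
dst_dir = os.path.join(base, 'beagle')
os.makedirs(dst_dir)
in_folder = shutil.copy(src, dst_dir)

print(os.path.basename(renamed))    # copy.jpg
print(os.path.basename(in_folder))  # photo.jpg
```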

    2. Fine-Tuning a Pretrained GoogLeNet Model

         

    The googlenet model from torchvision.models is structured as follows (feel free to scroll straight to the bottom of this block):

    GoogLeNet(
    (conv1): BasicConv2d(
    (conv): Conv2d(3, 64, kernel_size=(7, 7), stride=(2, 2), padding=(3, 3), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (maxpool1): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
    (conv2): BasicConv2d(
    (conv): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (conv3): BasicConv2d(
    (conv): Conv2d(64, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (maxpool2): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
    (inception3a): Inception(
    (branch1): BasicConv2d(
    (conv): Conv2d(192, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch2): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(192, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(96, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch3): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(192, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(16, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(16, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch4): Sequential(
    (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
    (1): BasicConv2d(
    (conv): Conv2d(192, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    )
    (inception3b): Inception(
    (branch1): BasicConv2d(
    (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch2): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(128, 192, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch3): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(256, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(32, 96, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch4): Sequential(
    (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
    (1): BasicConv2d(
    (conv): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    )
    (maxpool3): MaxPool2d(kernel_size=3, stride=2, padding=0, dilation=1, ceil_mode=True)
    (inception4a): Inception(
    (branch1): BasicConv2d(
    (conv): Conv2d(480, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch2): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(480, 96, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(96, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(96, 208, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(208, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch3): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(480, 16, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(16, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(16, 48, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch4): Sequential(
    (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
    (1): BasicConv2d(
    (conv): Conv2d(480, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    )
    (inception4b): Inception(
    (branch1): BasicConv2d(
    (conv): Conv2d(512, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch2): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(512, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(112, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(112, 224, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(224, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch3): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(512, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(24, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch4): Sequential(
    (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
    (1): BasicConv2d(
    (conv): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    )
    (inception4c): Inception(
    (branch1): BasicConv2d(
    (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch2): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(128, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch3): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(512, 24, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(24, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(24, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch4): Sequential(
    (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
    (1): BasicConv2d(
    (conv): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    )
    (inception4d): Inception(
    (branch1): BasicConv2d(
    (conv): Conv2d(512, 112, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(112, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch2): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(512, 144, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(144, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(144, 288, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(288, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch3): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(512, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch4): Sequential(
    (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
    (1): BasicConv2d(
    (conv): Conv2d(512, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(64, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    )
    (inception4e): Inception(
    (branch1): BasicConv2d(
    (conv): Conv2d(528, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch2): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(528, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(160, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(320, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch3): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(528, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch4): Sequential(
    (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
    (1): BasicConv2d(
    (conv): Conv2d(528, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    )
    (maxpool4): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=True)
    (inception5a): Inception(
    (branch1): BasicConv2d(
    (conv): Conv2d(832, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(256, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch2): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(832, 160, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(160, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(160, 320, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(320, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch3): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(832, 32, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(32, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(32, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch4): Sequential(
    (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
    (1): BasicConv2d(
    (conv): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    )
    (inception5b): Inception(
    (branch1): BasicConv2d(
    (conv): Conv2d(832, 384, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (branch2): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(832, 192, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(192, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(192, 384, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(384, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch3): Sequential(
    (0): BasicConv2d(
    (conv): Conv2d(832, 48, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(48, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (1): BasicConv2d(
    (conv): Conv2d(48, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    (branch4): Sequential(
    (0): MaxPool2d(kernel_size=3, stride=1, padding=1, dilation=1, ceil_mode=True)
    (1): BasicConv2d(
    (conv): Conv2d(832, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    )
    )
    (aux1): InceptionAux(
    (conv): BasicConv2d(
    (conv): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (fc1): Linear(in_features=2048, out_features=1024, bias=True)
    (fc2): Linear(in_features=1024, out_features=1000, bias=True)
    )
    (aux2): InceptionAux(
    (conv): BasicConv2d(
    (conv): Conv2d(528, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
    (bn): BatchNorm2d(128, eps=0.001, momentum=0.1, affine=True, track_running_stats=True)
    )
    (fc1): Linear(in_features=2048, out_features=1024, bias=True)
    (fc2): Linear(in_features=1024, out_features=1000, bias=True)
    )
    (avgpool): AdaptiveAvgPool2d(output_size=(1, 1))
    (dropout): Dropout(p=0.2, inplace=False)
    (fc): Linear(in_features=1024, out_features=1000, bias=True)
    )

    As shown above, the output layer of models.googlenet is (fc): Linear(in_features=1024, out_features=1000, bias=True), while this dataset has 120 classes. The output layer is therefore replaced, and the point of training the network is to train the parameters of this new Linear layer:

    finetune_net.fc = nn.Sequential(nn.Linear(in_features=1024, out_features=120, bias=True))

    3. Classifying the Test Set and Saving Results in the Required Format

    3.1 Python's tolist() method

    output = torch.softmax(output, dim=1)
    preds += output.tolist()

         

    tolist() converts an array or matrix into a (nested) list. softmax() returns a [batch_size, 120] matrix, and preds accumulates the predictions for the whole test set as a list, in preparation for mapping class indices to class names and saving the .csv file.
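What torch.softmax computes per row can be reproduced in plain Python: exponentiate each score, then normalize so the row sums to 1. A sketch with made-up scores (3 classes instead of 120, for brevity):

```python
import math

def softmax(row):
    # subtract the max for numerical stability before exponentiating
    m = max(row)
    exps = [math.exp(v - m) for v in row]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 1.0, 0.1]  # raw scores for one image (fabricated)
probs = softmax(logits)

print(abs(sum(probs) - 1.0) < 1e-9)  # True: probabilities sum to 1
print(probs.index(max(probs)))       # 0: the largest logit wins
```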

    3.2 Python's sorted() function

    The built-in sorted() function sorts any iterable.
    Differences between sort and sorted:

    1. sort is a method of list objects, whereas sorted works on any iterable.
    2. list.sort modifies the existing list in place, while the built-in sorted returns a new sorted list and leaves the original unchanged.
    ids = sorted(os.listdir(os.path.join(new_data_dir, 'test/unknown')))

    This line sorts the filenames of the test images in the test folder in ascending order.
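The sort/sorted distinction in a few lines (filenames fabricated):

```python
names = ['b.jpg', 'a.jpg', 'c.jpg']

new = sorted(names)    # returns a new sorted list
print(new)             # ['a.jpg', 'b.jpg', 'c.jpg']
print(names)           # ['b.jpg', 'a.jpg', 'c.jpg'] -- unchanged

result = names.sort()  # sorts in place and returns None
print(names)           # ['a.jpg', 'b.jpg', 'c.jpg']
print(result)          # None
```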

    3.3 Writing the results to a .csv file for submission

    with open('submission.csv', 'w') as f:
        f.write('id,' + ','.join(train_valid_ds.classes) + '\n')
        for i, output in zip(ids, preds):
            f.write(i.split('.')[0] + ',' + ','.join(
                [str(num) for num in output]) + '\n')

    train_valid_ds = datasets.ImageFolder(root=os.path.join(new_data_dir, 'train_valid'), transform=transform_train)

    Note that ImageFolder treats each subfolder under root as one class of the dataset; the subfolder names can be retrieved via .classes, which yields the dataset's 120 class names.

    As noted above, ids holds the filenames of the test images, each of the form 'id.jpg', so .split('.') is used to extract the image id. Because the results are saved as a .csv (comma-separated values) file, ',' separates the fields within each line.
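The submission loop above can be tried on fabricated data (the class names, ids, and probabilities below are made up, and an in-memory buffer stands in for submission.csv) to check the resulting format:

```python
import io

classes = ['beagle', 'pug']          # stand-ins for the 120 breed names
ids = ['00a.jpg', '00b.jpg']         # fake test file names
preds = [[0.9, 0.1], [0.25, 0.75]]   # fake per-class probabilities

f = io.StringIO()                    # in-memory file instead of submission.csv
f.write('id,' + ','.join(classes) + '\n')
for i, output in zip(ids, preds):
    f.write(i.split('.')[0] + ',' + ','.join(str(num) for num in output) + '\n')

print(f.getvalue().splitlines()[0])  # id,beagle,pug
print(f.getvalue().splitlines()[1])  # 00a,0.9,0.1
```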

    4. Full Source Code (with detailed comments)

    import torch
    from torch import utils, nn
    from torch.utils import data
    import torchvision
    from torchvision import transforms, datasets, models
    import pandas as pd
    import os
    import random
    import time
    import shutil

    device = ('cuda' if torch.cuda.is_available() else 'cpu')

    data_dir = './dog-breed-identification'  # directory of the downloaded dataset
    label_file, train_dir, test_dir = 'labels.csv', 'train', 'test'  # files and subfolders inside data_dir
    new_data_dir = './train_valid_test'  # directory for the reorganized data
    valid_ratio = 0.1  # fraction of the training set used for validation

    def mkdir_if_not_exist(path):
        if not os.path.exists(os.path.join(*path)):
            os.makedirs(os.path.join(*path))

    def reorg_dog_data(data_dir, label_file, train_dir, test_dir, new_data_dir, valid_ratio):
        # read the training labels
        labels = pd.read_csv(os.path.join(data_dir, label_file))  # DataFrame of shape [10222, 2]
        id2label = {id: value for id, value in labels.values}  # dict whose key-value pairs are id -> class

        # randomly shuffle the training images
        train_files = os.listdir(os.path.join(data_dir, train_dir))  # list of filenames
        random.shuffle(train_files)

        # original training set
        valid_size = int(len(train_files) * valid_ratio)
        for i, file in enumerate(train_files):
            img_id = file.split('.')[0]  # file is a string of the form id.jpg
            img_label = id2label[img_id]
            if i < valid_size:
                mkdir_if_not_exist([new_data_dir, 'valid', img_label])
                shutil.copy(os.path.join(data_dir, train_dir, file), os.path.join(new_data_dir, 'valid', img_label))
            else:
                mkdir_if_not_exist([new_data_dir, 'train', img_label])
                shutil.copy(os.path.join(data_dir, train_dir, file), os.path.join(new_data_dir, 'train', img_label))
            mkdir_if_not_exist([new_data_dir, 'train_valid', img_label])
            shutil.copy(os.path.join(data_dir, train_dir, file), os.path.join(new_data_dir, 'train_valid', img_label))

        # test set
        mkdir_if_not_exist([new_data_dir, 'test', 'unknown'])
        for test_file in os.listdir(os.path.join(data_dir, test_dir)):
            shutil.copy(os.path.join(data_dir, test_dir, test_file), os.path.join(new_data_dir, 'test', 'unknown'))

    # preprocess the data (this takes a long time on an ordinary laptop; I gave up after running it a whole afternoon...)
    reorg_dog_data(data_dir, label_file, train_dir, test_dir, new_data_dir, valid_ratio)

    # image augmentation
    transform_train = transforms.Compose([
        transforms.RandomResizedCrop(224, scale=(0.1, 1.0)),
        transforms.RandomHorizontalFlip(0.5),
        transforms.ColorJitter(0.5, 0.5, 0.5),
        transforms.ToTensor(),
        # normalize each channel; (0.485, 0.456, 0.406) and (0.229, 0.224, 0.225)
        # are the per-channel means and standard deviations computed on ImageNet
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])
    transform_test = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
    ])

    # load the datasets
    # new_data_dir contains four directories: train, valid, train_valid, test
    # in each of them, every subdirectory is one class and holds all images of that class
    train_ds = datasets.ImageFolder(root=os.path.join(new_data_dir, 'train'), transform=transform_train)
    valid_ds = datasets.ImageFolder(root=os.path.join(new_data_dir, 'valid'), transform=transform_test)
    train_valid_ds = datasets.ImageFolder(root=os.path.join(new_data_dir, 'train_valid'), transform=transform_train)
    test_ds = datasets.ImageFolder(root=os.path.join(new_data_dir, 'test'), transform=transform_test)

    batch_size = 128
    train_iter = torch.utils.data.DataLoader(train_ds, batch_size=batch_size, shuffle=True)
    valid_iter = torch.utils.data.DataLoader(valid_ds, batch_size=batch_size, shuffle=True)
    train_valid_iter = torch.utils.data.DataLoader(train_valid_ds, batch_size=batch_size, shuffle=True)
    test_iter = torch.utils.data.DataLoader(test_ds, batch_size=batch_size, shuffle=False)

    # fine-tuned GoogLeNet model
    def get_net(device):
        # pretrained=True downloads and loads the pretrained parameters
        # (an internet connection is needed on first use)
        finetune_net = models.googlenet(pretrained=True)
        finetune_net.fc = nn.Sequential(nn.Linear(in_features=1024, out_features=120, bias=True))
        return finetune_net

    # evaluation helper
    def evaluate_loss_acc(data_iter, net, device):
        # compute the average loss and accuracy over data_iter
        loss = nn.CrossEntropyLoss()
        is_training = net.training
        net.eval()
        l_sum, acc_sum, n = 0.0, 0.0, 0
        with torch.no_grad():
            for X, y in data_iter:
                X, y = X.to(device), y.to(device)
                y_hat = net(X)
                l = loss(y_hat, y)
                l_sum += l.item()
                _, predicted = torch.max(y_hat.data, dim=1)
                acc_sum += predicted.eq(y.data).sum().item()
                n += y.shape[0]
        net.train(is_training)  # restore the net's train/eval state
        return l_sum / n, acc_sum / n

    def train(net, train_iter, valid_iter, num_epochs, lr, wd, device, lr_period, lr_decay):
        loss = nn.CrossEntropyLoss()
        optimizer = torch.optim.SGD(net.fc.parameters(), lr=lr, momentum=0.9, weight_decay=wd)
        net = net.to(device)
        for epoch in range(num_epochs):
            train_l_sum, n, start = 0.0, 0, time.time()

            # learning-rate decay: every lr_period epochs, multiply lr by lr_decay
            if epoch > 0 and epoch % lr_period == 0:
                lr = lr * lr_decay
                for param_group in optimizer.param_groups:
                    param_group['lr'] = lr

            for X, y in train_iter:
                X, y = X.to(device), y.to(device)
                optimizer.zero_grad()
                y_hat = net(X)
                l = loss(y_hat, y)
                l.backward()
                optimizer.step()
                train_l_sum += l.item()
                n += y.shape[0]
            time_s = ('time %.2f sec' % (time.time() - start))
            if valid_iter is not None:
                valid_loss, valid_acc = evaluate_loss_acc(valid_iter, net, device)
                epoch_s = ("epoch %d, train loss %f, valid loss %f, valid acc %f, " % (epoch + 1, train_l_sum / n, valid_loss, valid_acc))
            else:
                epoch_s = ('epoch %d, train loss %f, ' % (epoch + 1, train_l_sum / n))
            print(epoch_s + time_s + ', lr ' + str(lr))

    if __name__ == '__main__':
        # hyperparameter settings
        num_epochs, lr_period, lr_decay = 20, 10, 0.1
        lr, wd = 0.03, 1e-4
        net = get_net(device)

        train(net, train_iter, valid_iter, num_epochs, lr, wd, device, lr_period, lr_decay)

        # after tuning hyperparameters with valid_iter: train the model on the full training set
        train(net, train_valid_iter, None, num_epochs, lr, wd, device, lr_period, lr_decay)

        # predict on the test set and save the results in the required format
        net.eval()  # switch to evaluation mode so dropout is disabled during inference
        preds = []
        for X, _ in test_iter:
            X = X.to(device)
            output = net(X)
            output = torch.softmax(output, dim=1)
            preds += output.tolist()
        ids = sorted(os.listdir(os.path.join(new_data_dir, 'test/unknown')))
        with open('submission.csv', 'w') as f:
            f.write('id,' + ','.join(train_valid_ds.classes) + '\n')
            for i, output in zip(ids, preds):
                f.write(i.split('.')[0] + ',' + ','.join(
                    [str(num) for num in output]) + '\n')

    Reference code: Learning PyTorch from Scratch (Part 19): Dog Breed Identification on Kaggle

