
Implementing Neural Style Transfer with PyTorch


Neural Style Transfer



Style transfer takes two images, a content image and a style image, and generates a new image that keeps the content of the first while adopting the style of the second. In the example figure from the original tutorial, the turtle image on the right takes on the style of the wave painting in the middle.

Mathematical Background

Basic Idea

The core idea is simple. We define two distances: a content distance $D_C$ and a style distance $D_S$. $D_C$ measures how different two images are in content, while $D_S$ measures how different they are in style. We then take an input image (typically noise) and iteratively transform it so as to minimize both its content distance to the content image and its style distance to the style image.

Mathematical Derivation

First, let $C_{nn}$ be a pre-trained deep neural network and $X$ any image; $C_{nn}(X)$ denotes the network's response to input $X$, which contains many feature maps. Let $F_{XL} \in C_{nn}(X)$ be the feature maps at layer $L$, with each feature map flattened into a single vector. We call $F_{XL}$ the content of $X$ at layer $L$. If $Y$ is another image of the same size as $X$, the content distance at layer $L$ is defined as

$$D_C^L(X, Y) = \|F_{XL} - F_{YL}\|^2 = \sum_i \big(F_{XL}(i) - F_{YL}(i)\big)^2$$

where $F_{XL}(i)$ is the $i$-th element of $F_{XL}$. Next, let $F_{XL}^k$ denote the $k$-th feature vector of image $X$ at layer $L$, with $k \le K$, where $K$ is the number of feature maps at layer $L$. The style of $X$ at layer $L$ is encoded by the Gram matrix $G_{XL}$, computed from the Gram products of all pairs of feature vectors $F_{XL}^k$, $k \le K$. In other words, $G_{XL}$ is a $K \times K$ matrix whose entry $G_{XL}(k, l)$ in row $k$ and column $l$ is the vector dot product of $F_{XL}^k$ and $F_{XL}^l$:

$$G_{XL}(k, l) = \langle F_{XL}^k, F_{XL}^l \rangle = \sum_i F_{XL}^k(i) \cdot F_{XL}^l(i)$$

where $F_{XL}^k(i)$ is the $i$-th element of $F_{XL}^k$. $G_{XL}(k, l)$ measures the correlation between feature maps $k$ and $l$, so $G_{XL}$ acts as a correlation matrix of $X$ at layer $L$. Note that the size of $G_{XL}$ depends only on the number of feature maps, not on the size of $X$. If $Y$ is another image of any size, the style distance at layer $L$ is defined as:

$$D_S^L(X, Y) = \|G_{XL} - G_{YL}\|^2 = \sum_{k, l} \big(G_{XL}(k, l) - G_{YL}(k, l)\big)^2$$

To minimize $D_C(X, C)$ between a variable image $X$ and the target content image $C$, and $D_S(X, S)$ between $X$ and the target style image $S$, both computed at several layers, we compute and sum the gradients (derivatives with respect to $X$) of each distance at each desired layer:

$$\nabla_{\mathrm{total}}(X, S, C) = \sum_{L_C} w_{C L_C} \cdot \nabla_{L_C}^{\mathrm{content}}(X, C) + \sum_{L_S} w_{S L_S} \cdot \nabla_{L_S}^{\mathrm{style}}(X, S)$$

where $L_C$ and $L_S$ are respectively the desired layers (arbitrarily chosen) for content and style, and $w_{C L_C}$ and $w_{S L_S}$ are the weights (also arbitrarily chosen) associated with the content or style at each of those layers. Then we run a gradient descent over $X$:

$$X \leftarrow X - \alpha \, \nabla_{\mathrm{total}}(X, S, C)$$

Evaluating these gradients at the chosen feature layers and iterating this update yields the final result.
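As a quick sanity check of these definitions (with made-up numbers): suppose layer $L$ has $K = 2$ flattened feature vectors $F_{XL}^1 = (1, 0)$ and $F_{XL}^2 = (0, 2)$. Then

$$G_{XL} = \begin{pmatrix} \langle F^1, F^1 \rangle & \langle F^1, F^2 \rangle \\ \langle F^2, F^1 \rangle & \langle F^2, F^2 \rangle \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 4 \end{pmatrix}$$

which is $2 \times 2$ no matter how long the feature vectors are; this is exactly why the size of $G_{XL}$ depends only on the number of feature maps. The style distance then compares these matrices entry by entry.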

Main Code

Loading Images

The images to load can be of any size; they are eventually scaled to the same dimensions, because the network was designed for a particular input size: the content image and the style image must end up the same size.

# imports used throughout this post
import copy
import time

import torch
import torch.nn as nn
import torch.optim as optim
import torchvision.transforms as transforms
from torch.autograd import Variable
from PIL import Image

use_cuda = torch.cuda.is_available()
dtype = torch.cuda.FloatTensor if use_cuda else torch.FloatTensor

# desired size of the output image
imsize = 512 if use_cuda else 128  # use small size if no gpu

loader = transforms.Compose([
    transforms.Scale(imsize),  # scale imported image
    transforms.ToTensor()])    # transform it into a torch tensor

loader_new = transforms.Compose([  # loader_new crops images of any size to the same dimensions
    transforms.Scale(imsize),
    transforms.RandomCrop(imsize),
    transforms.ToTensor()])

def image_loader(image_name):
    image = Image.open(image_name)
    image = Variable(loader(image))
    # fake batch dimension required to fit network's input dimensions
    image = image.unsqueeze(0)
    return image

style_img = image_loader("images/picasso.jpg").type(dtype)
content_img = image_loader("images/dancing.jpg").type(dtype)

assert style_img.size() == content_img.size(), \
    "we need to import style and content images of the same size"


A PIL image has pixel values in the range 0-255; converted to a torch tensor, its values lie in 0-1. Note that networks pre-trained in PyTorch expect 0-1 tensors: if you feed a 0-255 image into a PyTorch pre-trained network, the activated feature maps will be meaningless. Caffe pre-trained networks, by contrast, work with 0-255 images.
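Later in this post, an imshow helper is used to display image tensors with matplotlib, but it is never defined in the snippets above. Here is a minimal sketch modeled on the original tutorial (it assumes square imsize x imsize images, as in the tutorial's examples):

import matplotlib.pyplot as plt

unloader = transforms.ToPILImage()  # reconvert a tensor into a PIL image

plt.ion()  # interactive mode so plots refresh without blocking

def imshow(tensor, title=None):
    image = tensor.clone().cpu()           # clone so the displayed tensor is not modified
    image = image.view(3, imsize, imsize)  # remove the fake batch dimension
    image = unloader(image)
    plt.imshow(image)
    if title is not None:
        plt.title(title)
    plt.pause(0.001)  # pause briefly so the plot is updated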

Content Loss

The content loss is a function that takes as input the feature maps $F_{XL}$ at a layer $L$ in a network fed by $X$, and returns the weighted content distance $w_C^L \cdot D_C^L(X, C)$ between this image and the content image. Hence the weight $w_C^L$ and the target content $F_{CL}$ are parameters of the function. We implement this function as a torch module with a constructor that takes these parameters as input. The distance $\|F_{XL} - F_{CL}\|^2$ is the mean square error between the two sets of feature maps, which can be computed with an nn.MSELoss criterion, stated as a third parameter.

We will add our content losses at each desired layer as additive modules of the neural network. That way, each time we feed the network an input image $X$, all the content losses are computed at the desired layers and, thanks to autograd, all the gradients are computed as well. For that, we just make the forward method of our module return the input: the module becomes a "transparent layer" of the neural network. The computed loss is saved as an attribute of the module.

Finally, we define a fake backward method that just calls the backward method of nn.MSELoss in order to reconstruct the gradient. This method returns the computed loss, which is useful when running the gradient descent in order to display the evolution of the style and content losses.

To summarize: we define our own "content loss" module, which will later be inserted into a model built from the pre-trained VGG network. Note that this layer is "transparent" within the model: its output is identical to its input, and unlike an ordinary layer it updates no parameters of its own. The content loss module only computes the loss between its input and the content image; when we later update the input image, it is this loss that gets minimized.

class ContentLoss(nn.Module):

    def __init__(self, target, weight):
        super(ContentLoss, self).__init__()
        # we 'detach' the target content from the tree used
        # to dynamically compute the gradient: this is a stated value,
        # not a variable. Otherwise the forward method of the criterion
        # will throw an error.
        self.target = target.detach() * weight
        self.weight = weight
        self.criterion = nn.MSELoss()

    def forward(self, input):
        self.loss = self.criterion(input * self.weight, self.target)
        self.output = input
        return self.output

    def backward(self, retain_graph=True):
        self.loss.backward(retain_graph=retain_graph)
        return self.loss
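To see the "transparent layer" behaviour concretely, a small check with hypothetical tensors (not part of the original post): the module passes its input through unchanged while recording the loss as a side effect.

target = Variable(torch.randn(1, 64, 32, 32))  # fake target feature maps
cl = ContentLoss(target, weight=1)
x = Variable(torch.randn(1, 64, 32, 32))       # fake input feature maps
out = cl(x)
print(torch.equal(out.data, x.data))  # True: the input passes through untouched
print(cl.loss.data[0])                # the content loss recorded on the side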


The GramMatrix Function

GramMatrix is a compact implementation of the correlation-matrix computation described above, written for faster and more convenient computation.

class GramMatrix(nn.Module):

    def forward(self, input):
        a, b, c, d = input.size()  # a = batch size (=1)
        # b = number of feature maps
        # (c, d) = dimensions of a feature map (N = c*d)

        features = input.view(a * b, c * d)  # resize F_XL into \hat F_XL

        G = torch.mm(features, features.t())  # compute the gram product

        # we 'normalize' the values of the gram matrix
        # by dividing by the number of elements in each feature map
        return G.div(a * b * c * d)
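A quick sanity check with hypothetical shapes (not from the original post), echoing the note in the math section that the Gram matrix size depends only on the number of feature maps:

gram = GramMatrix()
fmap = Variable(torch.randn(1, 64, 32, 32))  # fake layer activations: 64 maps of 32x32
G = gram(fmap)
print(G.size())  # torch.Size([64, 64]): one row/column per feature map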


Defining the Style Loss

######################################################################
#
# The larger the feature map dimension :math:`N`, the bigger the values
# of the gram matrix. Therefore, if we don't normalize by :math:`N`, the
# loss computed at the first layers (before the pooling layers) will have
# much more importance during the gradient descent. We don't want that,
# since the most interesting style features are in the deepest layers!
#
# The style loss module is implemented almost exactly the same way as the
# content loss module, but we have to add the ``GramMatrix`` as a
# parameter:
#

class StyleLoss(nn.Module):

    def __init__(self, target, weight):
        super(StyleLoss, self).__init__()
        self.target = target.detach() * weight
        self.weight = weight
        self.gram = GramMatrix()
        self.criterion = nn.MSELoss()

    def forward(self, input):
        self.output = input.clone()
        self.G = self.gram(input)
        self.G.mul_(self.weight)
        self.loss = self.criterion(self.G, self.target)
        return self.output

    def backward(self, retain_graph=True):
        self.loss.backward(retain_graph=retain_graph)
        return self.loss


Defining the Neural Network

######################################################################
# A ``Sequential`` module contains an ordered list of child modules. For
# instance, ``vgg19.features`` contains a sequence (Conv2d, ReLU,
# MaxPool2d, Conv2d, ReLU...) aligned in the right order of depth. As we
# said in the *Content loss* section, we want to add our style and content
# loss modules as additive 'transparent' layers in our network, at the
# desired depths. For that, we construct a new ``Sequential`` module, in
# which we add modules from ``vgg19`` and our loss modules in the right
# order.
# Based on VGG19, this builds a network with a similar structure that
# includes the content loss and style loss layers designed above. These
# two layers contribute nothing to the network's computation itself; what
# we need are the loss values they record as an image passes through.
#

# desired depth layers to compute style/content losses :
content_layers_default = ['conv_4']
style_layers_default = ['conv_1', 'conv_2', 'conv_3', 'conv_4', 'conv_5']

def get_style_model_and_losses(cnn, style_img, content_img,
                               style_weight=1000, content_weight=1,
                               content_layers=content_layers_default,
                               style_layers=style_layers_default):
    cnn = copy.deepcopy(cnn)

    # just in order to have iterable access to the lists of content/style
    # losses
    content_losses = []
    style_losses = []

    model = nn.Sequential()  # the new Sequential module network
    gram = GramMatrix()  # we need a gram module in order to compute style targets

    # move these modules to the GPU if possible:
    if use_cuda:
        model = model.cuda()
        gram = gram.cuda()

    i = 1
    for layer in list(cnn):
        if isinstance(layer, nn.Conv2d):
            name = "conv_" + str(i)
            model.add_module(name, layer)

            if name in content_layers:
                # add content loss:
                target = model(content_img).clone()
                content_loss = ContentLoss(target, content_weight)
                model.add_module("content_loss_" + str(i), content_loss)
                content_losses.append(content_loss)

            if name in style_layers:
                # add style loss:
                target_feature = model(style_img).clone()
                target_feature_gram = gram(target_feature)
                style_loss = StyleLoss(target_feature_gram, style_weight)
                model.add_module("style_loss_" + str(i), style_loss)
                style_losses.append(style_loss)

        if isinstance(layer, nn.ReLU):
            name = "relu_" + str(i)
            model.add_module(name, layer)

            if name in content_layers:
                # add content loss:
                target = model(content_img).clone()
                content_loss = ContentLoss(target, content_weight)
                model.add_module("content_loss_" + str(i), content_loss)
                content_losses.append(content_loss)

            if name in style_layers:
                # add style loss:
                target_feature = model(style_img).clone()
                target_feature_gram = gram(target_feature)
                style_loss = StyleLoss(target_feature_gram, style_weight)
                model.add_module("style_loss_" + str(i), style_loss)
                style_losses.append(style_loss)

            i += 1

        if isinstance(layer, nn.MaxPool2d):
            name = "pool_" + str(i)
            model.add_module(name, layer)  # ***

    return model, style_losses, content_losses
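The function above takes a pre-trained network cnn as its first argument, which none of the snippets so far define. Loading it follows the original tutorial (torchvision downloads the VGG19 weights on first use):

import torchvision.models as models

cnn = models.vgg19(pretrained=True).features  # only the convolutional part is needed

# move it to the GPU if possible:
if use_cuda:
    cnn = cnn.cuda()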


Input Image

######################################################################
# Input image
# ~~~~~~~~~~~
# For convenience, the input image is a copy of the content image; you
# could also create a white-noise image instead.

input_img = content_img.clone()
# if you want to use a white noise instead uncomment the below line:
# input_img = Variable(torch.randn(content_img.data.size())).type(dtype)

# add the original input image to the figure:
plt.figure()
imshow(input_img.data, title='Input Image')


Defining the Optimizer

######################################################################
# Gradient descent
# ~~~~~~~~~~~~~~~~
#
# Here we use the L-BFGS algorithm to run the gradient descent. Unlike
# training a network, we want to "train" the input image in order to
# minimize the content/style losses. We simply create a Python L-BFGS
# optimizer, passing our input image as the variable to optimize. But
# optim.LBFGS takes as its first argument a list of PyTorch Variables
# that require gradient computation. Our input image is a Variable, yet
# it is not a leaf of the computation tree that requires gradients. To
# make the optimizer aware that the image needs gradients, one possible
# way is to construct a Parameter object from the input image. Then we
# just pass it to the optimizer's constructor.

def get_input_param_optimizer(input_img):
    # this line to show that input is a parameter that requires a gradient
    input_param = nn.Parameter(input_img.data)
    optimizer = optim.LBFGS([input_param])
    return input_param, optimizer
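A quick illustrative check (not in the original post) of what the Parameter wrapping buys us:

input_param, optimizer = get_input_param_optimizer(input_img)
print(input_param.requires_grad)  # True: L-BFGS can now update the image itself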


Defining the Run Function

######################################################################
# **Last step**: the loop of gradient descent. At each step, we must feed
# the network with the updated input in order to compute the new losses;
# we must run the ``backward`` methods of each loss to dynamically compute
# their gradients and perform the step of gradient descent. The optimizer
# requires as argument a "closure": a function that reevaluates the model
# and returns the loss.
#
# However, there's a small catch. The optimized image may take its values
# between :math:`-\infty` and :math:`+\infty` instead of staying between 0
# and 1. In other words, the image might be well optimized and yet have
# absurd values. In fact, we must perform the optimization under
# constraints in order to keep the right values in our input image. There
# is a simple solution: at each step, correct the image to keep its
# values in the 0-1 interval.
#
#

def run_style_transfer(cnn, content_img, style_img, input_img, num_steps=300,
                       style_weight=1000, content_weight=1):
    """Run the style transfer."""
    print('Building the style transfer model..')
    model, style_losses, content_losses = get_style_model_and_losses(cnn,
        style_img, content_img, style_weight, content_weight)
    input_param, optimizer = get_input_param_optimizer(input_img)

    print('Optimizing..')
    run = [0]
    since = time.time()
    while run[0] <= num_steps:

        def closure():
            # correct the values of the updated input image
            input_param.data.clamp_(0, 1)

            optimizer.zero_grad()
            model(input_param)
            style_score = 0
            content_score = 0

            for sl in style_losses:
                style_score += sl.backward()
            for cl in content_losses:
                content_score += cl.backward()

            run[0] += 1
            if run[0] % 50 == 0:
                print("run {}:".format(run))
                print('Style Loss : {:4f} Content Loss: {:4f}'.format(
                    style_score.data[0], content_score.data[0]))
                print()

            return style_score + content_score

        optimizer.step(closure)

    time_elapsed = time.time() - since
    print('Training complete in {:.0f}m {:.0f}s'.format(
        time_elapsed // 60, time_elapsed % 60))

    # a last correction...
    input_param.data.clamp_(0, 1)

    return input_param.data
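Finally, everything ties together as in the original tutorial: run the transfer on our images and display the result (this reuses the imshow helper sketched earlier).

output = run_style_transfer(cnn, content_img, style_img, input_img)

plt.figure()
imshow(output, title='Output Image')

plt.ioff()
plt.show()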


References:

1. Leon A. Gatys, Alexander S. Ecker, Matthias Bethge. "A Neural Algorithm of Artistic Style". https://arxiv.org/abs/1508.06576
2. PyTorch tutorial "Neural Transfer with PyTorch": http://pytorch.org/tutorials/advanced/neural_style_tutorial.html