
TensorFlow: cost-sensitive factor, adding a regularization term, and learning-rate decay

2018-01-01 21:50
1. Cost-sensitive loss:

 
outputs, end_points = vgg.all_cnn(Xinputs,
                                  num_classes=num_classes,
                                  is_training=True,
                                  dropout_keep_prob=0.5,
                                  spatial_squeeze=True,
                                  scope='all_cnn')

cross_entrys = tf.nn.softmax_cross_entropy_with_logits(logits=outputs, labels=Yinputs)  # per-example cross-entropy, shape (batch,)
# Cost-sensitive factor: w_ls = tf.Variable(np.array(w, dtype='float32'), name="w_ls", trainable=False), where w is the list of per-class cost weights
# w_temp = tf.matmul(Yinputs, w_ls)  # each example's class weight, selected by the one-hot labels
# loss = tf.reduce_mean(tf.multiply(cross_entrys, w_temp))  # cost-sensitive cross-entropy loss
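
For completeness, here is a minimal self-contained sketch of the cost-sensitive weighting above; the class count, the weight list w, and the placeholder shapes are assumptions for illustration, and a squeeze is added so the per-example weights broadcast correctly against the per-example losses:

import numpy as np
import tensorflow as tf

num_classes = 3
w = [1.0, 5.0, 2.0]  # assumed per-class cost weights, e.g. up-weighting rare classes

Yinputs = tf.placeholder(tf.float32, [None, num_classes])  # one-hot labels
outputs = tf.placeholder(tf.float32, [None, num_classes])  # network logits

cross_entrys = tf.nn.softmax_cross_entropy_with_logits(logits=outputs, labels=Yinputs)
# Non-trainable weight column vector, shape (num_classes, 1), so that matmul with
# the one-hot labels picks out each example's class weight
w_ls = tf.Variable(np.array(w, dtype='float32').reshape(num_classes, 1),
                   name="w_ls", trainable=False)
w_temp = tf.squeeze(tf.matmul(Yinputs, w_ls), axis=1)    # shape (batch,)
loss = tf.reduce_mean(tf.multiply(cross_entrys, w_temp))  # cost-sensitive loss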


2. Regularization term:

weights_norm = tf.reduce_sum(
    input_tensor=weight_decay * tf.stack([tf.nn.l2_loss(w) for w in tf.get_collection('weights')]),
    name='weights_norm')
# Loss including the regularization term. weight_decay corresponds to Caffe's weight-decay
# factor λ: the gradient of the L2 term (λ/2)*||W||^2 is λ*W, so each update becomes
# W - Δw = W - (Δw_cls + λ*W) = (1 - λ)*W - Δw_cls, i.e. the weights themselves decay.
# Typically λ = 0.0005 ~ 0.001.
loss = tf.add(tf.reduce_mean(cross_entrys), weights_norm)
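
The snippet above assumes every weight tensor was registered into a custom 'weights' collection when the model was built; a short sketch of that registration follows (the variable name and shape are placeholders):

import tensorflow as tf

weight_decay = 0.0005  # λ, in the 0.0005 ~ 0.001 range mentioned above

# When each layer is created, add its weight tensor to the custom collection
# that the weights_norm expression above iterates over
W_fc = tf.get_variable('W_fc', shape=[128, 10])  # placeholder layer weights
tf.add_to_collection('weights', W_fc)

Note that tf.nn.l2_loss(W) already computes ||W||^2 / 2, so the 1/2 factor in the derivation above is built in.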


3. Learning-rate decay:

global_step = tf.Variable(0, trainable=False)
add_g = global_step.assign_add(1)
starter_learning_rate = 0.001
decay_steps = 10
# Several decay schedules are available under tf.train
learning_rate = tf.train.exponential_decay(starter_learning_rate, global_step, decay_steps, decay_rate=0.01)
# train_op = tf.train.MomentumOptimizer(learning_rate, 0.9).minimize(loss)  # used to optimize the loss
# decayed_learning_rate = starter_learning_rate * decay_rate ^ (global_step / decay_steps)
init = tf.global_variables_initializer()  # tf.initialize_all_variables() is deprecated
# Launch the graph and watch the learning rate decay
with tf.Session() as sess:
    sess.run(init)
    for i in range(15):
        step, lr = sess.run([add_g, learning_rate])
        print(step, "=", lr)
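
As the comment notes, tf.train provides several other schedules besides exponential_decay; a brief sketch of a few of them, reusing the variables above (the boundary and value choices are arbitrary examples):

# Staircase variant: decays in discrete jumps every decay_steps instead of smoothly
lr_staircase = tf.train.exponential_decay(starter_learning_rate, global_step,
                                          decay_steps, decay_rate=0.01, staircase=True)
# Polynomial (linear by default) decay from the starter rate down to end_learning_rate
lr_poly = tf.train.polynomial_decay(starter_learning_rate, global_step,
                                    decay_steps=100, end_learning_rate=1e-5)
# Piecewise-constant schedule: 0.001 until step 5, then 0.0005 until step 10, then 0.0001
lr_piecewise = tf.train.piecewise_constant(global_step, boundaries=[5, 10],
                                           values=[0.001, 0.0005, 0.0001])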