
Coursera | Andrew Ng (02-week-3-3.3) — Hyperparameter Tuning in Practice: Pandas vs. Caviar

2018-01-21 10:02 · 597 views
This series only adds personal study notes and supplementary derivations on top of the original course content; if you find mistakes, corrections are welcome. After taking Andrew Ng's course, I organized it into text to make it easier to look up and review. Since I have been studying English, the series is mainly in English, and I suggest readers also read mainly in English, using the Chinese as a supplement, to lay the groundwork for reading academic papers in related fields later on. - ZJ

Coursera course | deeplearning.ai | NetEase Cloud Classroom

Please credit the author and source when reposting: ZJ, WeChat official account 「SelfImprovementLab」

Zhihu: https://zhuanlan.zhihu.com/c_147249273

CSDN: http://blog.csdn.net/junjun_zhao/article/details/79118185

3.3 Hyperparameter Tuning in Practice: Pandas vs. Caviar

(Subtitle source: NetEase Cloud Classroom)



You have now heard a lot about how to search for good hyperparameters. Before wrapping up our discussion on hyperparameter search, I want to share with you just a couple of final tips and tricks for how to organize your hyperparameter search process. Deep learning today is applied to many different application areas, and intuitions about hyperparameter settings from one application area may or may not transfer to a different one. There is a lot of cross-fertilization among different application domains, so for example, I've seen ideas developed in the computer vision community, such as ConvNets or ResNets, which we'll talk about in a later course, successfully applied to speech. I've seen ideas that were first developed in speech successfully applied in NLP, and so on. So one nice development in deep learning is that people from different application domains increasingly read research papers from other application domains to look for inspiration for cross-fertilization. In terms of your settings for the hyperparameters, though, I've seen that intuitions do get stale. So even if you work on just one problem, say logistics, you might have found a good setting for the hyperparameters and kept on developing your algorithm, or maybe seen your data gradually change over the course of several months, or maybe just upgraded servers in your data center. And because of those changes, the best setting of your hyperparameters can get stale. So I recommend maybe just retesting or reevaluating your hyperparameters at least once every several months to make sure that you're still happy with the values you have.




Finally, in terms of how people go about searching for hyperparameters, I see maybe two major schools of thought, or maybe two major different ways in which people go about it. One way is if you babysit one model. And usually you do this if you have maybe a huge data set but not a lot of computational resources, not a lot of CPUs and GPUs, so you can basically afford to train only one model or a very small number of models at a time. In that case you might gradually babysit that model even as it's training. So, for example, on Day 0 you might initialize your parameters randomly and then start training. And you gradually watch your learning curve, maybe the cost function J or your data set error or something else, gradually decrease over the first day. Then at the end of day one, you might say, gee, it looks like it's learning quite well, I'm going to try increasing the learning rate a little bit and see how it does. And then maybe it does better. And then that's your Day 2 performance. And after two days you say, okay, it's still doing quite well. Maybe I'll fiddle with the momentum term a bit or decrease the learning rate a bit, and then you're now into Day 3. And every day you kind of look at it and try nudging your parameters up and down. And maybe on one day you find your learning rate was too big. So you might go back to the previous day's model, and so on. But you're kind of babysitting the model one day at a time, even as it's training over the course of many days or over the course of several different weeks. So that's one approach, where people babysit one model, watching its performance and patiently nudging the learning rate up or down. But that's usually what happens if you don't have enough computational capacity to train a lot of models at the same time.
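To make this workflow concrete, here is a minimal, self-contained Python sketch of the babysitting routine. It is not from the lecture: the toy logistic-regression model, the synthetic data, and helpers such as train_one_day are illustrative assumptions. The point is the loop structure: train for a "day", checkpoint the model, look at the cost, then nudge the learning rate (or roll back to yesterday's model) before starting the next day.

```python
import copy
import numpy as np

# Synthetic binary-classification data standing in for "a huge data set" (assumption).
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X @ rng.normal(size=20) + 0.5 * rng.normal(size=1000) > 0).astype(float)

def cost_and_grads(w, b):
    """Logistic-regression cost J and its gradients."""
    z = np.clip(X @ w + b, -30, 30)           # clip to avoid overflow in exp
    a = 1.0 / (1.0 + np.exp(-z))
    cost = -np.mean(y * np.log(a + 1e-12) + (1 - y) * np.log(1 - a + 1e-12))
    dz = (a - y) / len(y)
    return cost, X.T @ dz, dz.sum()

def train_one_day(state, lr, beta, steps=200):
    """One 'day' of gradient descent with momentum; returns the new state and cost."""
    w, b, vw, vb = state
    for _ in range(steps):
        _, dw, db = cost_and_grads(w, b)
        vw, vb = beta * vw + (1 - beta) * dw, beta * vb + (1 - beta) * db
        w, b = w - lr * vw, b - lr * vb
    cost, _, _ = cost_and_grads(w, b)
    return (w, b, vw, vb), cost

# Day 0: random initialization and an initial guess for the hyperparameters.
state = (0.01 * rng.normal(size=20), 0.0, np.zeros(20), 0.0)
lr, beta = 0.1, 0.9
prev_cost = np.inf

for day in range(1, 8):
    checkpoint = copy.deepcopy(state)         # keep yesterday's model around
    state, cost = train_one_day(state, lr, beta)
    print(f"day {day}: lr={lr:.3f} beta={beta:.2f} cost={cost:.4f}")
    if cost > prev_cost:                      # the last nudge hurt: roll back, shrink lr
        state, lr = checkpoint, lr * 0.5
    elif prev_cost - cost < 1e-3:             # progress has stalled: nudge lr up a bit
        lr *= 1.2
    prev_cost = min(prev_cost, cost)
```

In practice the "nudges" are made by a human looking at the learning curve rather than by fixed rules like these; the rules here just stand in for that judgment so the sketch runs end to end.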




The other approach would be if you train many models in parallel. So you might have some setting of the hyperparameters and just let it run by itself, either for a day or even for multiple days, and then you get some learning curve like that; and this could be a plot of the cost function J or the cost on your training error or the cost on your data set error, but some metric that you're tracking. And then at the same time you might start up a different model with a different setting of the hyperparameters. And so, your second model might generate a different learning curve, maybe one that looks like that. I will say that one looks better. And at the same time, you might train a third model, which might generate a learning curve that looks like that, and another one that, maybe this one diverges so it looks like that, and so on. Or you might train many different models in parallel, where these orange lines are different models, right, and so this way you can try a lot of different hyperparameter settings and then just maybe quickly at the end pick the one that works best. Looks like in this example it was maybe this curve that looked best.
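Below is a matching sketch of the parallel workflow, again a hypothetical illustration rather than anything from the lecture: sample a batch of hyperparameter settings at random (the learning rate on a log scale, as in the earlier videos), train one model per setting independently, and keep whichever run ends up with the lowest cost.

```python
import numpy as np
from concurrent.futures import ProcessPoolExecutor

# The same kind of synthetic data and toy logistic-regression model as in the sketch above.
rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 20))
y = (X @ rng.normal(size=20) > 0).astype(float)

def train(config):
    """Train one model end to end with the given (lr, beta); return (final cost, config)."""
    lr, beta = config
    w, b, vw, vb = np.zeros(20), 0.0, np.zeros(20), 0.0
    for _ in range(2000):
        a = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
        dz = (a - y) / len(y)
        vw, vb = beta * vw + (1 - beta) * (X.T @ dz), beta * vb + (1 - beta) * dz.sum()
        w, b = w - lr * vw, b - lr * vb
    a = 1.0 / (1.0 + np.exp(-np.clip(X @ w + b, -30, 30)))
    cost = -np.mean(y * np.log(a + 1e-12) + (1 - y) * np.log(1 - a + 1e-12))
    return cost, config

if __name__ == "__main__":
    # Sample 16 settings at random: lr on a log scale in [1e-4, 1], beta in [0.9, 0.999].
    configs = [(10 ** rng.uniform(-4, 0), 1 - 10 ** rng.uniform(-3, -1))
               for _ in range(16)]
    with ProcessPoolExecutor() as pool:       # each "orange line" trains independently
        results = list(pool.map(train, configs))
    best_cost, (best_lr, best_beta) = min(results)
    print(f"best cost {best_cost:.4f} with lr={best_lr:.5f}, beta={best_beta:.4f}")
```

On a real problem each worker would be a separate GPU or machine and you would compare whole learning curves, not just the final cost, but the structure is the same: many independent runs, one cheap selection step at the end.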




So to make an analogy, I'm going to call the approach on the left the panda approach. When pandas have children, they have very few children, usually one child at a time, and then they really put a lot of effort into making sure that the baby panda survives. So that's really babysitting. One model for one baby panda. Whereas the approach on the right is more like what fish do. I'm going to call this the caviar strategy. There are some fish that lay over 100 million eggs in one mating season. But the way fish reproduce is they lay a lot of eggs and don't pay too much attention to any one of them, but just see that hopefully one of them, or maybe a bunch of them, will do well. So I guess this is really the difference between how mammals reproduce versus how fish and a lot of reptiles reproduce. But I'm going to call it the panda approach versus the caviar approach, since that's more fun and memorable. So the way to choose between these two approaches is really a function of how much computational resource you have. If you have enough computers to train a lot of models in parallel, then by all means take the caviar approach and try a lot of different hyperparameters and see what works. But in some application domains, I see this in some online advertising settings as well as in some computer vision applications, there's just so much data and the models you want to train are so big that it's difficult to train a lot of models at the same time. It's really application dependent of course, but I've seen those communities use the panda approach a little bit more, where you are kind of babying a single model along and nudging the parameters up and down and trying to make this one model work. Although, of course, even with the panda approach, having trained one model and then seen whether it works or not, maybe in the second week or the third week, maybe I should initialize a different model and then baby that one along, just like even pandas, I guess, can have multiple children in their lifetime, even if they have only one, or a very small number of children, at any one time.




So hopefully this gives you a good sense of how to go about the hyperparameter search process. Now, it turns out that there's one other technique that can make your neural network much more robust to the choice of hyperparameters. It doesn't work for all neural networks, but when it does, it can make the hyperparameter search much easier and also make training go much faster. Let's talk about this technique in the next video.


Key points:

Hyperparameter tuning in practice: Pandas vs. Caviar

In practice, the way we tune hyperparameters, and thus improve the model, should be chosen according to the computational resources we have available. The two approaches below correspond to the two situations:



With limited computational resources, use the first approach: tune only a single model and keep improving it day by day;

With ample computational resources, use the second approach: tune many models in parallel at the same time and pick the best of them.

References:

[1]. 大树先生. Distilled notes on Andrew Ng's Coursera Deep Learning course DeepLearning.ai (2-3): Hyperparameter Tuning and Batch Norm

PS: You are welcome to scan the QR code and follow the official account 「SelfImprovementLab」, which focuses on deep learning, machine learning, and artificial intelligence, and occasionally organizes group check-in activities for early rising, reading, exercise, English, and more.
