Google advances AI with ‘One Model to Learn Them All’
2017-06-21 19:13
Google quietly released an academic paper that could provide a blueprint for
the future of machine learning. Called “One Model to Learn Them All,” it lays out a template for how to create a single machine learning model that can address multiple tasks well.
The MultiModel, as the Google researchers call it, was trained on a variety of tasks, including translation, language parsing, speech recognition, image recognition, and object detection. While its results don’t show radical improvements over existing approaches,
they illustrate that training a machine learning system on a variety of tasks could help boost its overall performance.
For example, the MultiModel improved its accuracy on machine translation, speech, and parsing tasks when trained on all of the tasks it was capable of, compared to when it was trained on a single task alone.
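The idea of one model handling several tasks can be sketched in miniature: per-modality encoders map heterogeneous inputs (text, images) into a common representation that a single shared body then processes. This is a toy illustration of the pattern, not the paper's architecture; every function name and the arithmetic inside are assumptions made up for the example.

```python
def text_encoder(tokens):
    # Hypothetical text encoder: bucket token ids into a fixed 4-dim vector.
    vec = [0.0] * 4
    for t in tokens:
        vec[t % 4] += 1.0
    return vec

def image_encoder(pixels):
    # Hypothetical image encoder: pool pixel stats into the same 4-dim space.
    mean = sum(pixels) / len(pixels)
    return [mean, max(pixels), min(pixels), float(len(pixels))]

def shared_body(vec):
    # One shared transformation applied regardless of input modality
    # (here, a simple normalization standing in for the real network).
    scale = sum(vec) or 1.0
    return [v / scale for v in vec]

def multimodel(task, raw_input):
    # Route raw input through the encoder for its modality, then the shared body.
    encoders = {"translate": text_encoder, "classify_image": image_encoder}
    return shared_body(encoders[task](raw_input))
```

The key property the paper exploits is that the shared body sees examples from every task, so scarce-data tasks benefit from gradients produced by data-rich ones.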
Google’s paper could provide a template for the development of future machine learning systems that are more broadly applicable, and potentially more accurate, than the narrow solutions that populate much of the market today. What’s more, these techniques (or
those they spawn) could help reduce the amount of training data needed to create a viable machine learning algorithm.
That’s because the team’s results show that when the MultiModel is trained on all the tasks it’s capable of, its accuracy improves on tasks with less training data. That’s important, since it can be difficult to accumulate a sizable enough set of training data
in some domains.
However, Google doesn’t claim to have a master algorithm that can learn everything at once. As its name implies, the MultiModel network includes systems that are tailor-made to address different challenges, along with systems that help direct input to those
expert algorithms. This research does show that the approach Google took could be useful for future development of similar systems that address different domains.
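The "systems that help direct input to expert algorithms" resemble a mixture-of-experts gate: a small scorer decides which expert subnetwork should handle a given input. The sketch below uses hard (argmax) routing for clarity; the gate weights and the two toy experts are illustrative assumptions, not Google's implementation.

```python
def gate(x, gate_weights):
    # Score each expert against the input and pick the highest scorer.
    scores = [sum(w * xi for w, xi in zip(ws, x)) for ws in gate_weights]
    return max(range(len(scores)), key=scores.__getitem__)

def moe_forward(x, experts, gate_weights):
    # Dispatch the input to whichever expert the gate selects.
    return experts[gate(x, gate_weights)](x)

# Two placeholder "experts" and a gate that keys on input dimensions.
experts = [lambda x: [v * 2 for v in x],   # expert 0: doubles its input
           lambda x: [v + 1 for v in x]]   # expert 1: shifts its input
gate_weights = [[1.0, 0.0], [0.0, 1.0]]
```

In practice such gates are learned jointly with the experts, so routing improves as training progresses.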
It’s also worth noting that there’s plenty more testing to be done. Google’s results haven’t been verified, and it’s hard to know how well this research generalizes to other fields. The Google Brain team has released
the MultiModel code as part of the TensorFlow open source project, so other people can experiment with it and find out.
Google also has some clear paths to improvement. The team pointed out that they didn’t spend much time tuning some of the system’s settings that are fixed before training begins (known as “hyperparameters” in machine learning parlance), and more extensive tuning could help improve accuracy in the future.
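The "more extensive tweaking" the team alludes to is typically automated as a search over hyperparameter combinations. Below is a minimal grid-search sketch; `train_and_eval` is a stand-in for a real training run, and its scoring formula is an invented placeholder, not a real result.

```python
import itertools

def train_and_eval(lr, batch_size):
    # Placeholder objective: pretends a moderate learning rate and a
    # larger batch size score best. A real version would train the model
    # and return validation accuracy.
    return -abs(lr - 0.01) * 100 + batch_size / 256

def grid_search(lrs, batch_sizes):
    # Evaluate every (learning rate, batch size) pair; keep the best.
    return max(itertools.product(lrs, batch_sizes),
               key=lambda p: train_and_eval(*p))

best = grid_search([0.001, 0.01, 0.1], [64, 256])
```

Grid search is the simplest option; random search or Bayesian optimization usually finds good settings with fewer training runs when the search space is large.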