
How To Do Research In the MIT AI Lab: Reading AI


Many researchers spend more than half their time reading. You can learn a lot more quickly from other people's work than from doing your own. This section talks about reading within AI; another section covers reading about other subjects.

The time to start reading is now. Once you start seriously working on your thesis you'll have less time, and your reading will have to be more focused on the topic area. During your first two years, you'll mostly be doing class work and getting up to speed on AI in general. For this it suffices to read textbooks and published journal articles. (Later, you may read mostly drafts; see a later section.)

The amount of stuff you need to have read to have a solid grounding in the field may seem intimidating, but since AI is still a small field, you can in a couple of years read a substantial fraction of the significant papers that have been published. What's a little tricky is figuring out which ones those are. There are some bibliographies that are useful: for example, the syllabi of the graduate AI courses. The reading lists for the AI qualifying exams at other universities, particularly Stanford, are also useful and give you a less parochial outlook. If you are interested in a specific subfield, go to a senior grad student in that subfield, ask him what the ten most important papers are, and see if he'll lend you copies to Xerox. Recently a lot of good edited collections of papers from particular subfields have been appearing, published especially by Morgan Kaufmann.

The AI lab has three internal publication series, the Working Papers, Memos, and Technical Reports, in increasing order of formality. They are available on racks in the eighth floor play room. Go back through the last couple years of them and snag copies of any that look remotely interesting. Besides the fact that a lot of them are significant papers, it’s politically very important to be current on what people in your lab are doing.

There's a whole bunch of journals about AI, and you could spend all your time reading them. Fortunately, only a few are worth looking at. The principal journal for central-systems stuff is Artificial Intelligence, also referred to as "the Journal of Artificial Intelligence" or "AIJ". Most of the really important papers in AI eventually make it into AIJ, so it's worth scanning through back issues every year or so; but a lot of what it prints is really boring. Computational Intelligence is a new competitor that's worth checking out. Cognitive Science also prints a fair number of significant AI papers. Machine Learning is the main source on what it says. IEEE PAMI is probably the best-established vision journal; two or three interesting papers per issue. The International Journal of Computer Vision (IJCV) is new and so far has been interesting. Papers in Robotics Research are mostly on dynamics; sometimes it also has a landmark AIish robotics paper. IEEE Robotics and Automation has occasional good papers.

It’s worth going to your computer science library (MIT’s is on the first floor of Tech Square) every year or so and flipping through the last year’s worth of AI technical reports from other universities and reading the ones that look interesting.

Reading papers is a skill that takes practice. You can’t afford to read in full all the papers that come to you. There are three phases to reading one. The first is to see if there’s anything of interest in it at all. AI papers have abstracts, which are supposed to tell you what’s in them, but frequently don’t; so you have to jump about, reading a bit here or there, to find out what the authors actually did. The table of contents, conclusion section, and introduction are good places to look. If all else fails, you may have to actually flip through the whole thing. Once you’ve figured out what in general the paper is about and what the claimed contribution is, you can decide whether or not to go on to the second phase, which is to find the part of the paper that has the good stuff. Most fifteen page papers could profitably be rewritten as one-page papers; you need to look for the page that has the exciting stuff. Often this is hidden somewhere unlikely. What the author finds interesting about his work may not be interesting to you, and vice versa. Finally, you may go back and read the whole paper through if it seems worthwhile.

Read with a question in mind. "How can I use this?" "Does this really do what the author claims?" "What if...?" Understanding what result has been presented is not the same as understanding the paper. Most of the understanding is in figuring out the motivations, the choices the authors made (many of them implicit), whether the assumptions and formalizations are realistic, what directions the work suggests, the problems lying just over the horizon, the patterns of difficulty that keep coming up in the author's research program, the political points the paper may be aimed at, and so forth.

It’s a good idea to tie your reading and programming together. If you are interested in an area and read a few papers about it, try implementing toy versions of the programs being described. This gives you a more concrete understanding.
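For instance, after reading a few of the classic search papers you might bang out a throwaway A* on a toy problem just to watch the ideas work. The sketch below is one such toy in Python; the grid, goal, and Manhattan-distance heuristic are invented for illustration, not drawn from any particular paper.

import heapq

def a_star(start, goal, neighbors, heuristic):
    """Return a path from start to goal as a list of states, or None if unreachable."""
    counter = 0  # tie-breaker so the heap never has to compare states directly
    frontier = [(heuristic(start), 0, counter, start, [start])]
    seen = set()
    while frontier:
        _, cost, _, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in seen:
            continue
        seen.add(state)
        for nxt in neighbors(state):
            if nxt in seen:
                continue
            counter += 1
            new_cost = cost + 1  # assume unit step cost
            heapq.heappush(frontier,
                           (new_cost + heuristic(nxt), new_cost, counter, nxt, path + [nxt]))
    return None

# Toy problem: shortest path across a 5x5 grid.
def grid_neighbors(p):
    x, y = p
    steps = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in steps if 0 <= a < 5 and 0 <= b < 5]

goal = (4, 4)
manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
print(a_star((0, 0), goal, grid_neighbors, manhattan))

Even a toy this small forces you to confront choices a paper tends to gloss over, such as how ties are broken and what counts as a step cost, which is exactly the concrete understanding the exercise is meant to give you.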

Most AI labs are sadly inbred and insular; people often mostly read and cite work done only at their own school. Other institutions have different ways of thinking about problems, and it is worth reading, taking seriously, and referencing their work, even if you think you know what’s wrong with them.

Often someone will hand you a book or paper and exclaim that you should read it because it’s (a) the most brilliant thing ever written and/or (b) precisely applicable to your own research. Usually when you actually read it, you will find it not particularly brilliant and only vaguely applicable. This can be perplexing. “Is there something wrong with me? Am I missing something?” The truth, most often, is that reading the book or paper in question has, more or less by chance, made your friend think something useful about your research topic by catalyzing a line of thought that was already forming in their head.