Nutch Search Engine Introduction (1): Using Nutch
2004-12-27 14:31
Requirements
Java 1.4.x; on Linux, either Sun's or IBM's JVM is preferred. Set NUTCH_JAVA_HOME to the root of your JVM installation.
Apache's Tomcat 4.x.
On Win32, cygwin, for shell support. (If you plan to use CVS on Win32, be sure to select the cvs and openssh packages when you install, in the "Devel" and "Net" categories, respectively.)
Up to a gigabyte of free disk space, a high-speed connection, and an hour or so.
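Before running any Nutch command, make sure NUTCH_JAVA_HOME is exported in your shell. A minimal sketch (the JVM path below is hypothetical; point it at your actual 1.4.x installation):

```shell
# Hypothetical JVM install path -- adjust to your own system
export NUTCH_JAVA_HOME=/usr/java/j2sdk1.4.2
echo "NUTCH_JAVA_HOME is set to: $NUTCH_JAVA_HOME"
```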
Getting Started
First, you need to get a copy of the Nutch code. You can download a release from http://www.nutch.org/release/. Unpack the release and connect to its top-level directory. Or, check out the latest source code from CVS and build it with Ant. Try the following command:
bin/nutch
This will display the documentation for the Nutch command script.
Now we're ready to crawl. There are two approaches to crawling:
- Intranet crawling, with the crawl command.
- Whole-web crawling, with much greater control, using the lower-level inject, generate, fetch and updatedb commands.
Intranet Crawling
Intranet crawling is more appropriate when you intend to crawl up to around one million pages on a handful of web servers.
Intranet: Configuration
To configure things for intranet crawling you must:
Create a flat file of root urls. For example, to crawl the nutch.org site you might start with a file named urls containing just the Nutch home page. All other Nutch pages should be reachable from this page. The urls file would thus look like:
http://www.nutch.org/
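Creating this seed file is a one-liner, for example:

```shell
# Write the single root url into a flat file named "urls"
echo "http://www.nutch.org/" > urls
cat urls
```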
Edit the file conf/crawl-urlfilter.txt and replace MY.DOMAIN.NAME with the name of the domain you wish to crawl. For example, if you wished to limit the crawl to the nutch.org domain, the line should read:
+^http://([a-z0-9]*\.)*nutch.org/
This will include any url in the domain nutch.org.
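You can sanity-check the url pattern before starting a crawl by feeding sample urls through grep. This is only a sketch of the regex's behaviour; the real filtering is done by Nutch itself from conf/crawl-urlfilter.txt:

```shell
# Dots escaped here for strictness when checking with grep -E
pattern='^http://([a-z0-9]*\.)*nutch\.org/'

# A url inside the domain should match...
echo "http://www.nutch.org/docs/" | grep -qE "$pattern" && echo "matched"

# ...while an unrelated domain should not
echo "http://www.example.com/nutch/" | grep -qE "$pattern" || echo "rejected"
```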
Intranet: Running the Crawl
Once things are configured, running the crawl is easy. Just use the crawl command. Its options include:
-dir dir names the directory to put the crawl in.
-depth depth indicates the link depth from the root page that should be crawled.
-delay delay determines the number of seconds between accesses to each host.
-threads threads determines the number of threads that will fetch in parallel.
For example, a typical call might be:
bin/nutch crawl urls -dir crawl.test -depth 3 >& crawl.log
Typically one starts by testing one's configuration with a crawl at low depth, watching the output to check that the desired pages are found. Once one is more confident of the configuration, an appropriate depth for a full crawl is around 10.
Once crawling has completed, one can skip to the Searching section below.
Whole-web Crawling
Whole-web crawling is designed to handle very large crawls which may take weeks to complete, running on multiple machines.
Whole-web: Concepts
Nutch data is of two types:
The web database. This contains information about every page known to Nutch, and about links between those pages.
A set of segments. Each segment is a set of pages that are fetched and indexed as a unit. Segment data consists of the following types:
a fetchlist is a file that names a set of pages to be fetched
the fetcher output is a set of files containing the fetched pages
the index is a Lucene-format index of the fetcher output.
In the following examples we will keep our web database in a directory named db and our segments in a directory named segments:
mkdir db
mkdir segments
Whole-web: Bootstrapping the Web Database
The admin tool is used to create a new, empty database:
bin/nutch admin db -create
The injector adds urls into the database. Let's inject URLs from the DMOZ Open Directory. First we must download and uncompress the file listing all of the DMOZ pages. (This is a 200+MB file, so this will take a few minutes.)
wget http://rdf.dmoz.org/rdf/content.rdf.u8.gz
gunzip content.rdf.u8.gz
Next we inject a random subset of these pages into the web database. (We use a random subset so that everyone who runs this tutorial doesn't hammer the same sites.) DMOZ contains around three million URLs. We inject one out of every 3000, so that we end up with around 1000 URLs:
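To get an intuition for what a one-in-3000 subset means, you can sample lines the same way with awk. This is only an illustration of the sampling ratio; Nutch's actual subsetting is internal to the injector:

```shell
# From 9000 numbered lines, keep every 3000th -- three survivors
seq 1 9000 | awk 'NR % 3000 == 0'
```

This prints 3000, 6000, and 9000: three urls kept out of nine thousand, the same 1/3000 ratio the injector applies to DMOZ's ~3 million urls to yield ~1000 seeds.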
bin/nutch inject db -dmozfile content.rdf.u8 -subset 3000
This also takes a few minutes, as it must parse the full file.
Now we have a web database with around 1000 as-yet unfetched URLs in it.
Whole-web: Fetching
To fetch, we first generate a fetchlist from the database:
bin/nutch generate db segments
This generates a fetchlist for all of the pages due to be fetched. The fetchlist is placed in a newly created segment directory. The segment directory is named by the time it's created. We save the name of this segment in the shell variable s1:
s1=`ls -d segments/2* | tail -1`
echo $s1
Now we run the fetcher on this segment with:
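The capture works because segment directories are timestamped, so a plain sort puts the newest one last. A self-contained sketch with dummy directory names (the timestamps below are made up for demonstration):

```shell
# Two fake timestamped segment dirs; the later timestamp sorts last
mkdir -p segments/20041227100000 segments/20041227110000

# ls sorts lexicographically, so tail -1 picks the newest segment
s1=`ls -d segments/2* | tail -1`
echo $s1
```

This prints segments/20041227110000, the most recently created segment.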
bin/nutch fetch $s1
When this is complete, we update the database with the results of the fetch:
bin/nutch updatedb db $s1
Now the database has entries for all of the pages referenced by the initial set.
Next we run five iterations of link analysis on the database in order to prioritize which pages to next fetch:
bin/nutch analyze db 5
Now we fetch a new segment with the top-scoring 1000 pages:
bin/nutch generate db segments -topN 1000
s2=`ls -d segments/2* | tail -1`
echo $s2
bin/nutch fetch $s2
bin/nutch updatedb db $s2
bin/nutch analyze db 2
Let's fetch one more round:
bin/nutch generate db segments -topN 1000
s3=`ls -d segments/2* | tail -1`
echo $s3
bin/nutch fetch $s3
bin/nutch updatedb db $s3
bin/nutch analyze db 2
By this point we've fetched a few thousand pages. Let's index them!
Whole-web: Indexing
To index each segment we use the index command, as follows:
bin/nutch index $s1
bin/nutch index $s2
bin/nutch index $s3
Then, before we can search a set of segments, we need to delete duplicate pages. This is done with:
bin/nutch dedup segments dedup.tmp
Now we're ready to search!
Searching
To search you need to put the Nutch war file into your servlet container. (If instead of downloading a Nutch release you checked the sources out of CVS, then you'll first need to build the war file, with the command ant war.) Assuming you've unpacked Tomcat as ~/local/tomcat, the Nutch war file may be installed with the commands:
rm -rf ~/local/tomcat/webapps/ROOT*
cp nutch*.war ~/local/tomcat/webapps/ROOT.war
The webapp finds its indexes in ./segments, relative to where you start Tomcat. So, if you've done intranet crawling, connect to your crawl directory first; if you've done whole-web crawling, don't change directories. Then give the command:
~/local/tomcat/bin/catalina.sh start
Then visit http://localhost:8080/ and have fun!