
Lucene Learning Summary, Part 7: An Analysis of the Lucene Search Process (2)

2010-12-23 13:57

2. The Lucene Search Process in Detail

To walk through the process by which Lucene searches the index files, the following files were indexed in advance (a sketch of how such an index could be built follows the list):

file01.txt: apple apples cat dog

file02.txt: apple boy cat category

file03.txt: apply dog eat etc

file04.txt: apply cat foods
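The article does not show how these files were indexed; the following is a minimal sketch of how such an index could be built with the Lucene 3.0 API, assuming each file becomes one document with an analyzed, stored "contents" field and a stored "path" field (both field names and the class are illustrative assumptions):

import java.io.File;
import org.apache.lucene.analysis.standard.StandardAnalyzer;
import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

public class BuildTestIndex {
  public static void main(String[] args) throws Exception {
    String[][] files = {
        {"file01.txt", "apple apples cat dog"},
        {"file02.txt", "apple boy cat category"},
        {"file03.txt", "apply dog eat etc"},
        {"file04.txt", "apply cat foods"}};
    IndexWriter writer = new IndexWriter(FSDirectory.open(new File("index")),
        new StandardAnalyzer(Version.LUCENE_30), true, IndexWriter.MaxFieldLength.UNLIMITED);
    for (String[] f : files) {
      Document doc = new Document();
      doc.add(new Field("path", f[0], Field.Store.YES, Field.Index.NOT_ANALYZED)); // stable identifier
      doc.add(new Field("contents", f[1], Field.Store.YES, Field.Index.ANALYZED)); // searched text
      writer.addDocument(doc);
    }
    writer.close(); // commits the segments_N file that the reader below will open
  }
}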

2.1 Opening an IndexReader that points to the index directory

The code is:

IndexReader reader = IndexReader.open(FSDirectory.open(indexDir));

This in fact calls DirectoryReader.open(Directory, IndexDeletionPolicy, IndexCommit, boolean, int), whose main job is to create a SegmentInfos.FindSegmentsFile object and use it to find all the segments in the index and open them.

SegmentInfos.FindSegmentsFile.run(IndexCommit commit) mainly does the following:

2.1.1 Find the latest segments_N file

Since segments_N contains the overall metadata for the whole index, it is especially important to pick the correct segments_N.

However, to let the index live on a different storage system, it is sometimes necessary to mount a remote disk over NFS and keep the index there. For performance, NFS caches data locally, so the index being opened may not reflect the latest information written by another writer. Lucene therefore uses a double check here.

On the one hand, it lists all the segments_N files and takes the largest N among them, call it genA:

String[] files = directory.listAll();

long genA = getCurrentSegmentGeneration(files);

long getCurrentSegmentGeneration(String[] files) {

long max = -1;

for (int i = 0; i < files.length; i++) {

String file = files[i];

if (file.startsWith(IndexFileNames.SEGMENTS) //"segments_N"

&& !file.equals(IndexFileNames.SEGMENTS_GEN)) { //"segments.gen"

long gen = generationFromSegmentsFileName(file);

if (gen > max) {

max = gen;

}

}

}

return max;

}

On the other hand, it opens the segments.gen file and reads N from it, call it genB:

IndexInput genInput = directory.openInput(IndexFileNames.SEGMENTS_GEN);

int version = genInput.readInt();

long gen0 = genInput.readLong();

long gen1 = genInput.readLong();

if (gen0 == gen1) {

genB = gen0;

}

The larger of genA and genB is taken as gen, and this gen is used to build the name of the segments_N file to open:

if (genA > genB)

gen = genA;

else

gen = genB;

String segmentFileName = IndexFileNames.fileNameFromGeneration(IndexFileNames.SEGMENTS, "", gen);//segmentFileName "segments_4"

2.1.2 Open each segment using the per-segment information stored in segments_N

Read the segments' metadata from segments_N and build a SegmentInfos:

SegmentInfos infos = new SegmentInfos();

infos.read(directory, segmentFileName);

The code of SegmentInfos.read(Directory, String) is as follows:

int format = input.readInt();

version = input.readLong();

counter = input.readInt();

for (int i = input.readInt(); i > 0; i--) {

//read each segment and construct a SegmentInfo object

add(new SegmentInfo(directory, format, input));

}

The SegmentInfo(Directory dir, int format, IndexInput input) constructor is as follows:

name = input.readString();

docCount = input.readInt();

delGen = input.readLong();

docStoreOffset = input.readInt();

if (docStoreOffset != -1) {

docStoreSegment = input.readString();

docStoreIsCompoundFile = (1 == input.readByte());

} else {

docStoreSegment = name;

docStoreIsCompoundFile = false;

}

hasSingleNormFile = (1 == input.readByte());

int numNormGen = input.readInt();

normGen = new long[numNormGen];

for(int j=0;j<numNormGen;j++) {

normGen[j] = input.readLong();

}

isCompoundFile = input.readByte();

delCount = input.readInt();

hasProx = input.readByte() == 1;

This needs little further explanation; after reading Part 3 of this series, "Lucene's Index File Format (2)", it is easy to follow.

Using the resulting SegmentInfos, open each segment and create a ReadOnlyDirectoryReader:

SegmentReader[] readers = new SegmentReader[sis.size()];

for (int i = sis.size()-1; i >= 0; i--) {

//open each segment

readers[i] = SegmentReader.get(readOnly, sis.info(i), termInfosIndexDivisor);

}

The code of SegmentReader.get(boolean, Directory, SegmentInfo, int, boolean, int) is as follows:

instance.core = new CoreReaders(dir, si, readBufferSize, termInfosIndexDivisor);

instance.core.openDocStores(si); //create the objects used to read stored fields and term vectors

instance.loadDeletedDocs(); //read the deleted-documents (.del) file

instance.openNorms(instance.core.cfsDir, readBufferSize); //read the normalization factors (.nrm)

The CoreReaders(Directory dir, SegmentInfo si, int readBufferSize, int termsIndexDivisor) constructor is as follows:

cfsReader = new CompoundFileReader(dir, segment + "." + IndexFileNames.COMPOUND_FILE_EXTENSION, readBufferSize); //reader for the compound file (.cfs)

fieldInfos = new FieldInfos(cfsDir, segment + "." + IndexFileNames.FIELD_INFOS_EXTENSION); //read the field metadata (.fnm)

TermInfosReader reader = new TermInfosReader(cfsDir, segment, fieldInfos, readBufferSize, termsIndexDivisor); //used to read the term dictionary (.tii, .tis)

freqStream = cfsDir.openInput(segment + "." + IndexFileNames.FREQ_EXTENSION, readBufferSize); //used to read term frequencies (.frq)

proxStream = cfsDir.openInput(segment + "." + IndexFileNames.PROX_EXTENSION, readBufferSize); //used to read positions (.prx)

The FieldInfos(Directory d, String name) constructor is as follows:

IndexInput input = d.openInput(name);

int firstInt = input.readVInt();

size = input.readVInt();

for (int i = 0; i < size; i++) {

//read the field name

String name = StringHelper.intern(input.readString());

//read the field's flag bits

byte bits = input.readByte();

boolean isIndexed = (bits & IS_INDEXED) != 0;

boolean storeTermVector = (bits & STORE_TERMVECTOR) != 0;

boolean storePositionsWithTermVector = (bits & STORE_POSITIONS_WITH_TERMVECTOR) != 0;

boolean storeOffsetWithTermVector = (bits & STORE_OFFSET_WITH_TERMVECTOR) != 0;

boolean omitNorms = (bits & OMIT_NORMS) != 0;

boolean storePayloads = (bits & STORE_PAYLOADS) != 0;

boolean omitTermFreqAndPositions = (bits & OMIT_TERM_FREQ_AND_POSITIONS) != 0;

//create a FieldInfo object for the field just read and add it to the FieldInfos for management

addInternal(name, isIndexed, storeTermVector, storePositionsWithTermVector, storeOffsetWithTermVector, omitNorms, storePayloads, omitTermFreqAndPositions);

}

The main code of CoreReaders.openDocStores(SegmentInfo) is as follows:

fieldsReaderOrig = new FieldsReader(storeDir, storesSegment, fieldInfos, readBufferSize, si.getDocStoreOffset(), si.docCount); //used to read stored fields (.fdx, .fdt)

termVectorsReaderOrig = new TermVectorsReader(storeDir, storesSegment, fieldInfos, readBufferSize, si.getDocStoreOffset(), si.docCount); //used to read term vectors (.tvx, .tvd, .tvf)

Initialize the newly created ReadOnlyDirectoryReader, renumbering the documents across the opened SegmentReaders.

In Lucene, document numbers within each segment start from 0, but an index consists of multiple segments, so the documents have to be renumbered globally. For this, an array starts[] is maintained that holds the document-number offset of each segment; the i-th segment's documents therefore occupy the range from starts[i] up to starts[i] plus the number of documents in that segment (a worked example follows the code below).

private void initialize(SegmentReader[] subReaders) {

this.subReaders = subReaders;

starts = new int[subReaders.length + 1];

for (int i = 0; i < subReaders.length; i++) {

starts[i] = maxDoc;

maxDoc += subReaders[i].maxDoc();

if (subReaders[i].hasDeletions())

hasDeletions = true;

}

starts[subReaders.length] = maxDoc;

}
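To make the offsets concrete: with the three four-document segments of this example, starts becomes {0, 4, 8, 12}. The helper below is purely illustrative (it is not Lucene's code, although DirectoryReader resolves documents in essentially the same way, with a binary search over starts[]):

// Illustrative only: map a global doc number to (segment index, segment-local doc number)
// using the starts[] offsets, e.g. starts = {0, 4, 8, 12} for three 4-doc segments.
static int[] toSegmentAndLocalDoc(int[] starts, int globalDoc) {
  int lo = 0, hi = starts.length - 2;          // the last entry of starts[] is maxDoc
  while (lo <= hi) {                           // binary search over the offsets
    int mid = (lo + hi) >>> 1;
    if (globalDoc < starts[mid]) hi = mid - 1;
    else if (globalDoc >= starts[mid + 1]) lo = mid + 1;
    else return new int[] { mid, globalDoc - starts[mid] };
  }
  throw new IllegalArgumentException("doc " + globalDoc + " out of range");
}
// Example: with starts = {0, 4, 8, 12}, global doc 9 maps to segment 2, local doc 1.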

2.1.3 The resulting IndexReader object looks like this

reader ReadOnlyDirectoryReader (id=466)
closed false
deletionPolicy null

//the index directory
directory SimpleFSDirectory (id=31)
checked false
chunkSize 104857600
directory File (id=487)
path "D://lucene-3.0.0//TestSearch//index"
prefixLength 3
isOpen true
lockFactory NativeFSLockFactory (id=488)
hasChanges false
hasDeletions false
maxDoc 12
normsCache HashMap<K,V> (id=483)
numDocs -1
readOnly true
refCount 1
rollbackHasChanges false
rollbackSegmentInfos null

//segment metadata
segmentInfos SegmentInfos (id=457)
elementCount 3
elementData Object[10] (id=532)
[0] SegmentInfo (id=464)
delCount 0
delGen -1
diagnostics HashMap<K,V> (id=537)
dir SimpleFSDirectory (id=31)
docCount 4
docStoreIsCompoundFile false
docStoreOffset -1
docStoreSegment "_0"
files null
hasProx true
hasSingleNormFile true
isCompoundFile 1
name "_0"
normGen null
preLockless false
sizeInBytes -1
[1] SegmentInfo (id=517)
delCount 0
delGen -1
diagnostics HashMap<K,V> (id=542)
dir SimpleFSDirectory (id=31)
docCount 4
docStoreIsCompoundFile false
docStoreOffset -1
docStoreSegment "_1"
files null
hasProx true
hasSingleNormFile true
isCompoundFile 1
name "_1"
normGen null
preLockless false
sizeInBytes -1
[2] SegmentInfo (id=470)
delCount 0
delGen -1
diagnostics HashMap<K,V> (id=547)
dir SimpleFSDirectory (id=31)
docCount 4
docStoreIsCompoundFile false
docStoreOffset -1
docStoreSegment "_2"
files null
hasProx true
hasSingleNormFile true
isCompoundFile 1
name "_2"
normGen null
preLockless false
sizeInBytes -1
generation 4
lastGeneration 4
modCount 4
pendingSegnOutput null
userData HashMap<K,V> (id=533)
version 1268193441675
segmentInfosStart null
stale false
starts int[4] (id=484)

//one reader per segment
subReaders SegmentReader[3] (id=467)
[0] ReadOnlySegmentReader (id=492)
closed false
core SegmentReader$CoreReaders (id=495)
cfsDir CompoundFileReader (id=552)
cfsReader CompoundFileReader (id=552)
dir SimpleFSDirectory (id=31)
fieldInfos FieldInfos (id=553)
fieldsReaderOrig FieldsReader (id=554)
freqStream CompoundFileReader$CSIndexInput (id=555)
proxStream CompoundFileReader$CSIndexInput (id=556)
readBufferSize 1024
ref SegmentReader$Ref (id=557)
segment "_0"
storeCFSReader null
termsIndexDivisor 1
termVectorsReaderOrig null
tis TermInfosReader (id=558)
tisNoIndex null
deletedDocs null
deletedDocsDirty false
deletedDocsRef null
fieldsReaderLocal SegmentReader$FieldsReaderLocal (id=496)
hasChanges false
norms HashMap<K,V> (id=500)
normsDirty false
pendingDeleteCount 0
readBufferSize 1024
readOnly true
refCount 1
rollbackDeletedDocsDirty false
rollbackHasChanges false
rollbackNormsDirty false
rollbackPendingDeleteCount 0
si SegmentInfo (id=464)
singleNormRef SegmentReader$Ref (id=504)
singleNormStream CompoundFileReader$CSIndexInput (id=506)
termVectorsLocal CloseableThreadLocal<T> (id=508)
[1] ReadOnlySegmentReader (id=493)
closed false
core SegmentReader$CoreReaders (id=511)
cfsDir CompoundFileReader (id=561)
cfsReader CompoundFileReader (id=561)
dir SimpleFSDirectory (id=31)
fieldInfos FieldInfos (id=562)
fieldsReaderOrig FieldsReader (id=563)
freqStream CompoundFileReader$CSIndexInput (id=564)
proxStream CompoundFileReader$CSIndexInput (id=565)
readBufferSize 1024
ref SegmentReader$Ref (id=566)
segment "_1"
storeCFSReader null
termsIndexDivisor 1
termVectorsReaderOrig null
tis TermInfosReader (id=567)
tisNoIndex null
deletedDocs null
deletedDocsDirty false
deletedDocsRef null
fieldsReaderLocal SegmentReader$FieldsReaderLocal (id=512)
hasChanges false
norms HashMap<K,V> (id=514)
normsDirty false
pendingDeleteCount 0
readBufferSize 1024
readOnly true
refCount 1
rollbackDeletedDocsDirty false
rollbackHasChanges false
rollbackNormsDirty false
rollbackPendingDeleteCount 0
si SegmentInfo (id=517)
singleNormRef SegmentReader$Ref (id=519)
singleNormStream CompoundFileReader$CSIndexInput (id=520)
termVectorsLocal CloseableThreadLocal<T> (id=521)
[2] ReadOnlySegmentReader (id=471)
closed false
core SegmentReader$CoreReaders (id=475)
cfsDir CompoundFileReader (id=476)
cfsReader CompoundFileReader (id=476)
dir SimpleFSDirectory (id=31)
fieldInfos FieldInfos (id=480)
fieldsReaderOrig FieldsReader (id=570)
freqStream CompoundFileReader$CSIndexInput (id=571)
proxStream CompoundFileReader$CSIndexInput (id=572)
readBufferSize 1024
ref SegmentReader$Ref (id=573)
segment "_2"
storeCFSReader null
termsIndexDivisor 1
termVectorsReaderOrig null
tis TermInfosReader (id=574)
tisNoIndex null
deletedDocs null
deletedDocsDirty false
deletedDocsRef null
fieldsReaderLocal SegmentReader$FieldsReaderLocal (id=524)
hasChanges false
norms HashMap<K,V> (id=525)
normsDirty false
pendingDeleteCount 0
readBufferSize 1024
readOnly true
refCount 1
rollbackDeletedDocsDirty false
rollbackHasChanges false
rollbackNormsDirty false
rollbackPendingDeleteCount 0
si SegmentInfo (id=470)
singleNormRef SegmentReader$Ref (id=527)
singleNormStream CompoundFileReader$CSIndexInput (id=528)
termVectorsLocal CloseableThreadLocal<T> (id=530)
synced HashSet<E> (id=485)
termInfosIndexDivisor 1
writeLock null
writer null

From the process above, an IndexReader has the following characteristics:

The segment metadata has already been read into memory, so segments newly created in the index directory by newly added documents are invisible to an already opened reader.

The .del file has already been read into memory, so documents deleted through other readers or writers are likewise invisible to the open reader.

The open reader already holds input streams on the cfs files. From the segment-merging process we know that a segment file never changes after it is created: newly added documents go into new segments, deleted documents are recorded in .del files, and merging produces new segments instead of modifying old ones. Old segment files are merely deleted during a merge, and that causes no problem: from the operating system's point of view, once a file has been opened through an input stream, a file descriptor exists and the kernel keeps a reference count for it. As long as the reader is not closed, the file descriptor remains and the file is not actually removed; the deletion only decrements the reference count.

These three points guarantee the snapshot nature of an IndexReader: opening an index with an IndexReader is like taking a photograph of it, and no matter how the underlying index changes afterwards, the IndexReader sees the same information until it is reopened.
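Consequently, to see changes committed by another writer, the reader has to be reopened. A minimal sketch of the usual IndexReader.reopen() idiom (illustrative, not taken from the original article):

// If the index has changed, reopen() returns a new reader; otherwise it returns the same instance.
IndexReader newReader = reader.reopen();
if (newReader != reader) {
  reader.close();      // release the old snapshot (and its file descriptors)
  reader = newReader;  // the new reader sees the newly committed segments and deletions
}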

Strictly speaking, Lucene's document numbers are only valid for the particular reader that is open. Once the index has changed and another reader is opened, document 0 of the earlier reader is not necessarily document 0 of the later one. Therefore, when you obtain document numbers from search results, you must use them before the reader is closed to read, from the stored fields, whatever uniquely identifies the document in your own application, such as a URL or an MD5 hash. Once the reader is closed, the document numbers lose their meaning, and looking them up through another reader may return documents you did not expect.
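A minimal sketch of this pattern, assuming the documents carry a stored "path" field as in the indexing sketch above (the field name is an assumption):

// Resolve document numbers into stable identifiers while the reader is still open.
IndexSearcher searcher = new IndexSearcher(reader);
TopDocs topDocs = searcher.search(new TermQuery(new Term("contents", "apple")), 10);
for (ScoreDoc sd : topDocs.scoreDocs) {
  Document doc = searcher.doc(sd.doc);   // sd.doc is only meaningful while this reader is open
  System.out.println(doc.get("path"));   // stable identifier read from a stored field
}
searcher.close();
reader.close();                          // after this, the document numbers in topDocs are meaningless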

2.2 Opening an IndexSearcher

The code is:

IndexSearcher searcher = new IndexSearcher(reader);

The process is very simple:

private IndexSearcher(IndexReader r, boolean closeReader) {

reader = r;

//whether to close the reader when the searcher is closed

this.closeReader = closeReader;

//compute the document-number offsets of the sub-readers

List<IndexReader> subReadersList = new ArrayList<IndexReader>();

gatherSubReaders(subReadersList, reader);

subReaders = subReadersList.toArray(new IndexReader[subReadersList.size()]);

docStarts = new int[subReaders.length];

int maxDoc = 0;

for (int i = 0; i < subReaders.length; i++) {

docStarts[i] = maxDoc;

maxDoc += subReaders[i].maxDoc();

}

}

On the surface, IndexSearcher looks like little more than a wrapper around the reader: many of its methods, such as int docFreq(Term term), Document doc(int i) and int maxDoc(), simply delegate to the corresponding reader methods. It does, however, provide two very important things:

void setSimilarity(Similarity similarity): users can supply their own Similarity object and thereby influence how documents are scored during search (a sketch follows after these two points); see also "Questions about Lucene (4): Four Ways to Influence Lucene's Document Scoring".

A family of search methods, which are the heart of the search process; they are mainly responsible for computing scores and merging posting lists.
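As an illustration of the first point, here is a minimal sketch (not from the original article) of a custom Similarity that flattens the term-frequency contribution by extending DefaultSimilarity:

// A custom Similarity that ignores how often a term occurs in a document:
// any tf > 0 contributes the same weight to the score.
class BinaryTfSimilarity extends DefaultSimilarity {
  @Override
  public float tf(float freq) {
    return freq > 0 ? 1.0f : 0.0f;
  }
}
// Install it before searching; all subsequent searches on this searcher use it.
searcher.setSimilarity(new BinaryTfSimilarity());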

Because the search methods always compute scores, in applications where you only want the posting list of a term it is better not to go through IndexSearcher but to call IndexReader.termDocs(Term term) directly, which avoids the score computation.
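For example, a minimal sketch of walking a term's posting list directly through TermDocs:

// Walk the posting list of the term contents:apple without any scoring.
TermDocs termDocs = reader.termDocs(new Term("contents", "apple"));
try {
  while (termDocs.next()) {
    int docId = termDocs.doc();    // document number within this reader
    int freq = termDocs.freq();    // how often the term occurs in that document
    System.out.println("doc=" + docId + " freq=" + freq);
  }
} finally {
  termDocs.close();
}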