
Hadoop Source Code Analysis Notes (14): The Name Node -- The ClientProtocol Remote Interface

2013-06-26 23:16

The ClientProtocol Remote Interface

The methods of ClientProtocol fall into two groups: those that implement the Hadoop file system's remote operations, and those that implement the remote operations behind the multi-purpose dfsadmin tool. The file-system methods can be subdivided further, separating the reading and writing of file data from the remaining operations.

Operations on Files and Directories

In Java, file data is read and written through input/output streams, while the other file-management functions are encapsulated by java.io.File. In the Hadoop file system this functionality is defined by the abstract methods of FileSystem and supplied by the concrete file-system implementations; a minimal client-side sketch of that API is shown below.
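For orientation only, here is a small, hedged sketch of the client-side view: it uses the public FileSystem API, whose calls are translated internally (by DFSClient) into the ClientProtocol RPCs discussed in this section. The class name FsOpsSketch and the path /tmp/demo are made up for illustration.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class FsOpsSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();   // picks up core-site.xml / hdfs-site.xml
    FileSystem fs = FileSystem.get(conf);       // DistributedFileSystem when the default FS is hdfs://
    Path dir = new Path("/tmp/demo");           // arbitrary example path

    fs.mkdirs(dir);                             // ends up in NameNode.mkdirs() -> FSNamesystem.mkdirs()
    FileStatus st = fs.getFileStatus(dir);      // ClientProtocol.getFileInfo() on the name node
    System.out.println(st.getPath() + " owner=" + st.getOwner());

    fs.delete(dir, true);                       // ClientProtocol.delete() with recursive = true
    fs.close();
  }
}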
      1. A few representative methods
      ClientProtocol.mkdirs() creates a subdirectory in the directory tree. NameNode.mkdirs() implements the remote interface method, but the actual creation work is done by FSNamesystem.mkdirs() and FSNamesystem.mkdirsInternal(). Before mkdirsInternal() performs the creation on the directory tree, it runs the following checks:
     1) The user has permission to traverse the path (checkTraverse()).
     2) The directory does not yet exist, determined with dir.isDir(); if it already exists, the method simply returns true.
     3) The name node is not in safe mode, checked with isInSafeMode().
     4) DFSUtil.isValidName() confirms that the supplied directory name is legal.
     5) The user has write permission on the parent directory of the directory to be created, checked by checkAncestorAccess().
     6) checkFsObjectLimit() verifies that creating the directory will not exceed the system's capacity limit. The limit is kept in FSNamesystem.maxFsObjects and can be configured through ${dfs.max.objects}; the default of 0 means no limit. This check guarantees that the combined number of INodes and data blocks in the HDFS directory tree stays below the limit.
    Only after these six checks pass does mkdirsInternal() call FSDirectory.mkdirs(), which adds a new INodeDirectory object at the corresponding position in the directory tree. Once mkdirsInternal() returns successfully, mkdirs() calls FSEditLog.logSync() to make sure the edit-log record of the directory creation has been persisted before it returns. The code is as follows:
      
/**
 * Create all the necessary directories
 */
public boolean mkdirs(String src, PermissionStatus permissions
    ) throws IOException {
  boolean status = mkdirsInternal(src, permissions);
  getEditLog().logSync();
  if (status && auditLog.isInfoEnabled() && isExternalInvocation()) {
    final HdfsFileStatus stat = dir.getFileInfo(src);
    logAuditEvent(UserGroupInformation.getCurrentUser(),
                  Server.getRemoteIp(),
                  "mkdirs", src, null, stat);
  }
  return status;
}

/**
 * Create all the necessary directories
 */
private synchronized boolean mkdirsInternal(String src,
    PermissionStatus permissions) throws IOException {
  NameNode.stateChangeLog.debug("DIR* NameSystem.mkdirs: " + src);
  if (isPermissionEnabled) {
    checkTraverse(src);
  }
  if (dir.isDir(src)) {
    // all the users of mkdirs() are used to expect 'true' even if
    // a new directory is not created.
    return true;
  }
  if (isInSafeMode())
    throw new SafeModeException("Cannot create directory " + src, safeMode);
  if (!DFSUtil.isValidName(src)) {
    throw new IOException("Invalid directory name: " + src);
  }
  if (isPermissionEnabled) {
    checkAncestorAccess(src, FsAction.WRITE);
  }

  // validate that we have enough inodes. This is, at best, a
  // heuristic because the mkdirs() operation might need to
  // create multiple inodes.
  checkFsObjectLimit();

  if (!dir.mkdirs(src, permissions, false, now())) {
    throw new IOException("Invalid directory name: " + src);
  }
  return true;
}
       We saw when analyzing FSDirectory how delete() removes entries from the directory tree. After a series of checks, FSNamesystem.deleteInternal() calls FSDirectory.delete() to remove the subtree rooted at the path src. As with FSNamesystem.mkdirs(), the operation modifies the directory tree, so the method must wait for the edit log to be persisted.
       If the deleted subtree contains files, the data blocks they owned have already been moved into the invalidated-replica set (via FSNamesystem.removePathAndBlocks(), called from within FSDirectory.delete()), so deleteInternal() does not need to deal with those replicas itself. The code is as follows:
       
/**
 * Remove the indicated filename from namespace. If the filename
 * is a directory (non empty) and recursive is set to false then throw exception.
 */
public boolean delete(String src, boolean recursive) throws IOException {
  if ((!recursive) && (!dir.isDirEmpty(src))) {
    throw new IOException(src + " is non empty");
  }
  boolean status = deleteInternal(src, true);
  getEditLog().logSync();
  if (status && auditLog.isInfoEnabled() && isExternalInvocation()) {
    logAuditEvent(UserGroupInformation.getCurrentUser(),
                  Server.getRemoteIp(),
                  "delete", src, null, null);
  }
  return status;
}

/**
 * Remove the indicated filename from the namespace.  This may
 * invalidate some blocks that make up the file.
 */
synchronized boolean deleteInternal(String src,
    boolean enforcePermission) throws IOException {
  if (NameNode.stateChangeLog.isDebugEnabled()) {
    NameNode.stateChangeLog.debug("DIR* NameSystem.delete: " + src);
  }
  if (isInSafeMode())
    throw new SafeModeException("Cannot delete " + src, safeMode);
  if (enforcePermission && isPermissionEnabled) {
    checkPermission(src, false, null, FsAction.WRITE, null, FsAction.ALL);
  }

  return dir.delete(src);
}
       The remote method getFileInfo() retrieves the attributes of a file or directory. Since it does not modify the directory tree, after the permission check it simply calls the FSDirectory method of the same name and returns its result. The code is as follows:
      
/** Get the file info for a specific file.
 * @param src The string representation of the path to the file
 * @throws IOException if permission to access file is denied by the system
 * @return object containing information regarding the file
 *         or null if file not found
 */
HdfsFileStatus getFileInfo(String src) throws IOException {
  if (isPermissionEnabled) {
    checkTraverse(src);
  }
  return dir.getFileInfo(src);
}
        Changing the owner and group of a file or directory is done with setOwner(); its implementation is as follows:
      
/**
 * Set owner for an existing file.
 * @throws IOException
 */
public void setOwner(String src, String username, String group
    ) throws IOException {
  synchronized (this) {
    if (isInSafeMode())
      throw new SafeModeException("Cannot set owner for " + src, safeMode);
    FSPermissionChecker pc = checkOwner(src);
    if (!pc.isSuper) {
      if (username != null && !pc.user.equals(username)) {
        throw new AccessControlException("Non-super user cannot change owner.");
      }
      if (group != null && !pc.containsGroup(group)) {
        throw new AccessControlException("User does not belong to " + group
            + " .");
      }
    }
    dir.setOwner(src, username, group);
  }
  getEditLog().logSync();
  if (auditLog.isInfoEnabled() && isExternalInvocation()) {
    final HdfsFileStatus stat = dir.getFileInfo(src);
    logAuditEvent(UserGroupInformation.getCurrentUser(),
                  Server.getRemoteIp(),
                  "setOwner", src, null, stat);
  }
}

Methods Used When Reading Data

      Unlike the remote methods discussed above, the name node provides several remote interfaces specifically for file reads and writes; when a client reads a file, for example, it uses getBlockLocations() and reportBadBlocks().
      Before the client establishes the streaming-interface TCP connection to a data node and reads file data, it must locate that data, and for this it uses the remote method getBlockLocations(). The return value is a LocatedBlocks object containing a series of LocatedBlock instances; from this information the client knows which data nodes to go to for the data.
        
/** {@inheritDoc} */
public LocatedBlocks getBlockLocations(String src,
                                       long offset,
                                       long length) throws IOException {
  myMetrics.incrNumGetBlockLocations();
  return namesystem.getBlockLocations(getClientMachine(),
                                      src, offset, length);
}
       NameNode.getBlockLocations() in turn calls the FSNamesystem method of the same name, but with a different parameter list: the NameNode method calls getClientMachine() to obtain the client's address, and the subsequent logic uses that address to sort the data-node list inside the LocatedBlocks object so that the client reads from the block replica closest to it.
       FSNamesystem.getBlockLocations() has three overloads.
       The first overload reorders the data-node list about to be returned, trying to put a node that is close to the client (in network distance) at the front of the array; it uses the NetworkTopology.pseudoSortByDistance() method analyzed earlier. The second overload adds permission and parameter checks.
      
/**
 * Get block locations within the specified range.
 *
 * @see #getBlockLocations(String, long, long)
 */
LocatedBlocks getBlockLocations(String clientMachine, String src,
    long offset, long length) throws IOException {
  LocatedBlocks blocks = getBlockLocations(src, offset, length, true, true);
  if (blocks != null) {
    //sort the blocks
    DatanodeDescriptor client = host2DataNodeMap.getDatanodeByHost(
        clientMachine);
    for (LocatedBlock b : blocks.getLocatedBlocks()) {
      clusterMap.pseudoSortByDistance(client, b.getLocations());
    }
  }
  return blocks;
}

/**
 * Get block locations within the specified range.
 * @see ClientProtocol#getBlockLocations(String, long, long)
 */
public LocatedBlocks getBlockLocations(String src, long offset, long length,
    boolean doAccessTime, boolean needBlockToken) throws IOException {
  if (isPermissionEnabled) {
    checkPathAccess(src, FsAction.READ);
  }

  if (offset < 0) {
    throw new IOException("Negative offset is not supported. File: " + src);
  }
  if (length < 0) {
    throw new IOException("Negative length is not supported. File: " + src);
  }
  final LocatedBlocks ret = getBlockLocationsInternal(src,
      offset, length, Integer.MAX_VALUE, doAccessTime, needBlockToken);
  if (auditLog.isInfoEnabled() && isExternalInvocation()) {
    logAuditEvent(UserGroupInformation.getCurrentUser(),
                  Server.getRemoteIp(),
                  "open", src, null, null);
  }
  return ret;
}
      FSNamesystem.getBlockLocationsInternal() is where the returned LocatedBlocks object is actually produced. The method is fairly involved, so we go through it in pieces.
      
private synchronized LocatedBlocks getBlockLocationsInternal(String src,
                                                             long offset,
                                                             long length,
                                                             int nrBlocksToReturn,
                                                             boolean doAccessTime,
                                                             boolean needBlockToken)
                                                             throws IOException {
  INodeFile inode = dir.getFileINode(src);
  if (inode == null) {
    return null;
  }
  if (doAccessTime && isAccessTimeSupported()) {
    dir.setTimes(src, inode, -1, now(), false);
  }
  Block[] blocks = inode.getBlocks();
  if (blocks == null) {
    return null;
  }
  if (blocks.length == 0) {
    return inode.createLocatedBlocks(new ArrayList<LocatedBlock>(blocks.length));
  }
  List<LocatedBlock> results;
  results = new ArrayList<LocatedBlock>(blocks.length);
        As usual, the entry of the method handles the special cases, such as the file to be read not existing or being empty; the code above also calls FSDirectory.setTimes() to update the file's access time.
        The next piece of code (below) uses the read offset to work out which data block the read starts in, stored in the variable curBlk. If the start position lies beyond the end of the file, the method simply returns null.
    
  int curBlk = 0;
  long curPos = 0, blkSize = 0;
  int nrBlocks = (blocks[0].getNumBytes() == 0) ? 0 : blocks.length;
  for (curBlk = 0; curBlk < nrBlocks; curBlk++) {
    blkSize = blocks[curBlk].getNumBytes();
    assert blkSize > 0 : "Block of size 0";
    if (curPos + blkSize > offset) {
      break;
    }
    curPos += blkSize;
  }

  if (nrBlocks > 0 && curBlk == nrBlocks)   // offset >= end of file
    return null;
        The most complex logic of FSNamesystem.getBlockLocationsInternal() comes next; its goal is to find all usable block replicas within the requested range and build the LocatedBlocks object. If a block is the last block of a file under construction, all data nodes currently participating in its data pipeline are treated as usable; otherwise the method consults the name node's "second relationship" (the block-to-data-node mapping) for the number of nodes holding the block, the number of corrupt replicas and so on, to decide whether any healthy replicas exist. If there are, the data nodes holding them are collected into machineSet; if not, machineSet contains every node that holds the block and the blockCorrupt flag of the LocatedBlock is set to true. The code is as follows:
     
  long endOff = offset + length;

  do {
    // get block locations
    int numNodes = blocksMap.numNodes(blocks[curBlk]);
    int numCorruptNodes = countNodes(blocks[curBlk]).corruptReplicas();
    int numCorruptReplicas = corruptReplicas.numCorruptReplicas(blocks[curBlk]);
    if (numCorruptNodes != numCorruptReplicas) {
      LOG.warn("Inconsistent number of corrupt replicas for " +
          blocks[curBlk] + "blockMap has " + numCorruptNodes +
          " but corrupt replicas map has " + numCorruptReplicas);
    }
    DatanodeDescriptor[] machineSet = null;
    boolean blockCorrupt = false;
    if (inode.isUnderConstruction() && curBlk == blocks.length - 1
        && blocksMap.numNodes(blocks[curBlk]) == 0) {
      // get unfinished block locations
      INodeFileUnderConstruction cons = (INodeFileUnderConstruction)inode;
      machineSet = cons.getTargets();
      blockCorrupt = false;
    } else {
      blockCorrupt = (numCorruptNodes == numNodes);
      int numMachineSet = blockCorrupt ? numNodes :
          (numNodes - numCorruptNodes);
      machineSet = new DatanodeDescriptor[numMachineSet];
      if (numMachineSet > 0) {
        numNodes = 0;
        for (Iterator<DatanodeDescriptor> it =
            blocksMap.nodeIterator(blocks[curBlk]); it.hasNext();) {
          DatanodeDescriptor dn = it.next();
          boolean replicaCorrupt = corruptReplicas.isReplicaCorrupt(blocks[curBlk], dn);
          if (blockCorrupt || (!blockCorrupt && !replicaCorrupt))
            machineSet[numNodes++] = dn;
        }
      }
    }
    LocatedBlock b = new LocatedBlock(blocks[curBlk], machineSet, curPos,
        blockCorrupt);
    if (isAccessTokenEnabled && needBlockToken) {
      b.setBlockToken(accessTokenHandler.generateToken(b.getBlock(),
          EnumSet.of(BlockTokenSecretManager.AccessMode.READ)));
    }

    results.add(b);
    curPos += blocks[curBlk].getNumBytes();
    curBlk++;
  } while (curPos < endOff
      && curBlk < blocks.length
      && results.size() < nrBlocksToReturn);

  return inode.createLocatedBlocks(results);
}
      With the return value of getBlockLocations(), the client can then contact the data nodes and read the file contents through their streaming interface.
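To make the read path concrete, here is a rough, illustrative sketch (not the actual DFSClient code) of how a client holding a ClientProtocol proxy might use getBlockLocations() to decide which data node to contact for each block. The class name ReadPlanSketch and the method planRead are made up; error handling and the streaming read itself are omitted.

import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.DatanodeInfo;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;
import org.apache.hadoop.hdfs.protocol.LocatedBlocks;

public class ReadPlanSketch {
  static void planRead(ClientProtocol namenode, String src, long offset, long length)
      throws java.io.IOException {
    LocatedBlocks blocks = namenode.getBlockLocations(src, offset, length);
    if (blocks == null) {
      return;                                        // file does not exist or range is past EOF
    }
    for (LocatedBlock lb : blocks.getLocatedBlocks()) {
      DatanodeInfo[] locations = lb.getLocations();  // already sorted by network distance
      if (lb.isCorrupt() || locations.length == 0) {
        continue;                                    // all known replicas corrupt; a real client would report this
      }
      // the first entry is the replica the name node considers closest to this client
      System.out.println("block " + lb.getBlock() + " @" + lb.getStartOffset()
          + " -> read from " + locations[0].getName());
    }
  }
}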
      The other method related to client reads is ClientProtocol.reportBadBlocks(): when the client discovers bad data while reading, i.e. checksum verification fails, it uses this method to report the problem to the name node. The same method also exists in the DatanodeProtocol remote interface with an identical signature, so when the IPC request reaches the name node, both invoke the same NameNode.reportBadBlocks() method; there is no need to distinguish whether the call arrived over ClientProtocol or DatanodeProtocol. The code is as follows:
       
/**
 * The client has detected an error on the specified located blocks
 * and is reporting them to the server.  For now, the namenode will
 * mark the block as corrupt.  In the future we might
 * check the blocks are actually corrupt.
 */
public void reportBadBlocks(LocatedBlock[] blocks) throws IOException {
  stateChangeLog.info("*DIR* NameNode.reportBadBlocks");
  for (int i = 0; i < blocks.length; i++) {
    Block blk = blocks[i].getBlock();
    DatanodeInfo[] nodes = blocks[i].getLocations();
    for (int j = 0; j < nodes.length; j++) {
      DatanodeInfo dn = nodes[j];
      namesystem.markBlockAsCorrupt(blk, dn);
    }
  }
}
       NameNode.reportBadBlocks() calls markBlockAsCorrupt() to mark the damaged replicas.

Methods Used When Writing Data

      Writing data is far more complicated than reading it. The name node must handle not only requests arriving over ClientProtocol but also remote calls on DatanodeProtocol, while at the same time maintaining the clients' write leases; it is the most complex flow in the Hadoop file system.
      1. Writing a file: opening the file
       The ClientProtocol remote methods involved in writing a file are create(), append(), addBlock(), abandonBlock(), fsync(), complete(), renewLease() and recoverLease(). On the DatanodeProtocol interface between the data nodes and the name node, blocksBeingWrittenReport(), blockReceived(), nextGenerationStamp() and commitBlockSynchronization() also take part in the write process. A rough client-side call sequence is sketched below.
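The following is a deliberately simplified, illustrative sketch of the order in which a client typically drives these ClientProtocol calls when writing a new file; in reality DFSClient does this work, and the streaming writes to the data nodes (and all error handling) are omitted. The class WriteFlowSketch, the 3-way replication, the 64 MB block size and the retry sleep are all assumptions made up for the example.

import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.hdfs.protocol.ClientProtocol;
import org.apache.hadoop.hdfs.protocol.LocatedBlock;

public class WriteFlowSketch {
  static void writeFlow(ClientProtocol namenode, String src, String clientName)
      throws java.io.IOException {
    // 1. open the file for writing: creates an INodeFileUnderConstruction and a lease
    namenode.create(src, FsPermission.getDefault(), clientName,
        true /* overwrite */, true /* createParent */, (short) 3, 64 * 1024 * 1024L);

    boolean moreData = true;
    while (moreData) {
      // 2. ask for a new block and the data nodes that should hold its replicas
      LocatedBlock blk = namenode.addBlock(src, clientName);
      // ... set up the pipeline to blk.getLocations() and stream the data ...
      // on pipeline failure the client calls abandonBlock() and retries addBlock()
      moreData = false;                      // sketch: pretend one block is enough
    }

    // 3. close the file; may need retries while the last blocks reach minReplication
    while (!namenode.complete(src, clientName)) {
      try { Thread.sleep(400); } catch (InterruptedException ignored) { }
    }
  }
}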
      NameNode.create() actually calls FSNamesystem.startFile() to create the file in the directory tree. The code is as follows:
      
/** {@inheritDoc} */
public void create(String src,
                   FsPermission masked,
                   String clientName,
                   boolean overwrite,
                   boolean createParent,
                   short replication,
                   long blockSize
                   ) throws IOException {
  String clientMachine = getClientMachine();
  if (stateChangeLog.isDebugEnabled()) {
    stateChangeLog.debug("*DIR* NameNode.create: file "
        + src + " for " + clientName + " at " + clientMachine);
  }
  if (!checkPathLength(src)) {
    throw new IOException("create: Pathname too long.  Limit "
        + MAX_PATH_LENGTH + " characters, " + MAX_PATH_DEPTH + " levels.");
  }
  namesystem.startFile(src,
      new PermissionStatus(UserGroupInformation.getCurrentUser().getShortUserName(),
          null, masked),
      clientName, clientMachine, overwrite, createParent, replication, blockSize);
  myMetrics.incrNumFilesCreated();
  myMetrics.incrNumCreateFileOps();
}
      As the following code shows, the main work of startFile() is done by startFileInternal(); like the other FSNamesystem methods that modify the directory tree, it must synchronize the edit log with logSync() after the call.
      
/**
 * Create a new file entry in the namespace.
 *
 * @see ClientProtocol#create(String, FsPermission, String, boolean, short, long)
 *
 * @throws IOException if file name is invalid
 *         {@link FSDirectory#isValidToCreate(String)}.
 */
void startFile(String src, PermissionStatus permissions,
               String holder, String clientMachine,
               boolean overwrite, boolean createParent, short replication, long blockSize
               ) throws IOException {
  startFileInternal(src, permissions, holder, clientMachine, overwrite, false,
      createParent, replication, blockSize);
  getEditLog().logSync();
  if (auditLog.isInfoEnabled() && isExternalInvocation()) {
    final HdfsFileStatus stat = dir.getFileInfo(src);
    logAuditEvent(UserGroupInformation.getCurrentUser(),
                  Server.getRemoteIp(),
                  "create", src, null, stat);
  }
}
          The key logic of startFileInternal() is to insert a new INodeFileUnderConstruction node at the specified position in the directory tree (file creation), or to convert an existing INodeFile in the tree into an under-construction node, and then to add a record to the lease manager.
          Before performing that logic, however, startFileInternal() runs a long series of checks:
          1) The file system must not be in safe mode; safe mode provides only a read-only view of HDFS.
          2) DFSUtil.isValidName() is used to verify that the file name is legal.
          3) If the target src of the create/open already exists in the directory tree, it must not be a directory.
          4) Permission checks: for a create, the user must have write permission on the parent directory; for an append, or an overwriting create when the file already exists, the user must have write permission on the file itself.
          5) When the createParent flag is false, the parent directory of the file to be created must already exist.
          6) FSNamesystem.recoverLeaseInternal() is called to determine whether the file is already open by another client, preventing multiple clients from writing the same file at the same time.
          7) verifyReplication() checks that the replication factor of the file being opened falls within the valid range.
          8) If the file is being opened through append(), it must already exist and must not be a directory.
          9) For a file creation, the file must not already exist; if overwriting is allowed (the overwrite flag is true), the existing file is deleted first.
           The code is as follows:
           
private synchronized void startFileInternal(String src,
                                            PermissionStatus permissions,
                                            String holder,
                                            String clientMachine,
                                            boolean overwrite,
                                            boolean append,
                                            boolean createParent,
                                            short replication,
                                            long blockSize
                                            ) throws IOException {
  if (NameNode.stateChangeLog.isDebugEnabled()) {
    NameNode.stateChangeLog.debug("DIR* NameSystem.startFile: src=" + src
        + ", holder=" + holder
        + ", clientMachine=" + clientMachine
        + ", createParent=" + createParent
        + ", replication=" + replication
        + ", overwrite=" + overwrite
        + ", append=" + append);
  }

  if (isInSafeMode())
    throw new SafeModeException("Cannot create file" + src, safeMode);
  if (!DFSUtil.isValidName(src)) {
    throw new IOException("Invalid file name: " + src);
  }

  // Verify that the destination does not exist as a directory already.
  boolean pathExists = dir.exists(src);
  if (pathExists && dir.isDir(src)) {
    throw new IOException("Cannot create file " + src + "; already exists as a directory.");
  }

  if (isPermissionEnabled) {
    if (append || (overwrite && pathExists)) {
      checkPathAccess(src, FsAction.WRITE);
    } else {
      checkAncestorAccess(src, FsAction.WRITE);
    }
  }

  if (!createParent) {
    verifyParentDir(src);
  }

  try {
    INode myFile = dir.getFileINode(src);
    recoverLeaseInternal(myFile, src, holder, clientMachine, false);

    try {
      verifyReplication(src, replication, clientMachine);
    } catch (IOException e) {
      throw new IOException("failed to create " + e.getMessage());
    }
    if (append) {
      if (myFile == null) {
        throw new FileNotFoundException("failed to append to non-existent file "
            + src + " on client " + clientMachine);
      } else if (myFile.isDirectory()) {
        throw new IOException("failed to append to directory " + src
            + " on client " + clientMachine);
      }
    } else if (!dir.isValidToCreate(src)) {
      if (overwrite) {
        delete(src, true);
      } else {
        throw new IOException("failed to create file " + src
            + " on client " + clientMachine
            + " either because the filename is invalid or the file exists");
      }
    }

    DatanodeDescriptor clientNode =
        host2DataNodeMap.getDatanodeByHost(clientMachine);

    if (append) {
      //
      // Replace current node with a INodeUnderConstruction.
      // Recreate in-memory lease record.
      //
      INodeFile node = (INodeFile) myFile;
      INodeFileUnderConstruction cons = new INodeFileUnderConstruction(
          node.getLocalNameBytes(),
          node.getReplication(),
          node.getModificationTime(),
          node.getPreferredBlockSize(),
          node.getBlocks(),
          node.getPermissionStatus(),
          holder,
          clientMachine,
          clientNode);
      dir.replaceNode(src, node, cons);
      leaseManager.addLease(cons.clientName, src);

    } else {
      // Now we can add the name to the filesystem. This file has no
      // blocks associated with it.
      //
      checkFsObjectLimit();

      // increment global generation stamp
      long genstamp = nextGenerationStamp();
      INodeFileUnderConstruction newNode = dir.addFile(src, permissions,
          replication, blockSize, holder, clientMachine, clientNode, genstamp);
      if (newNode == null) {
        throw new IOException("DIR* NameSystem.startFile: " +
            "Unable to add file to namespace.");
      }
      leaseManager.addLease(newNode.clientName, src);
      if (NameNode.stateChangeLog.isDebugEnabled()) {
        NameNode.stateChangeLog.debug("DIR* NameSystem.startFile: "
            + "add " + src + " to namespace for " + holder);
      }
    }
  } catch (IOException ie) {
    NameNode.stateChangeLog.warn("DIR* NameSystem.startFile: "
        + ie.getMessage());
    throw ie;
  }
}
        After all these checks have passed, an append() replaces the original INodeFile object in the directory tree with an INodeFileUnderConstruction object and then adds a record to the lease manager through LeaseManager.addLease().
        For a create, FSDirectory.addFile() adds the new file's INodeFileUnderConstruction object to the directory tree and records the file-open operation in the edit log; a record is of course also added to the lease manager.
         The FSDirectory.addFile() method called by create() is as follows:

         
/**
 * Add the given filename to the fs.
 */
INodeFileUnderConstruction addFile(String path,
                                   PermissionStatus permissions,
                                   short replication,
                                   long preferredBlockSize,
                                   String clientName,
                                   String clientMachine,
                                   DatanodeDescriptor clientNode,
                                   long generationStamp)
                                   throws IOException {
  waitForReady();

  // Always do an implicit mkdirs for parent directory tree.
  long modTime = FSNamesystem.now();
  if (!mkdirs(new Path(path).getParent().toString(), permissions, true,
      modTime)) {
    return null;
  }
  INodeFileUnderConstruction newNode = new INodeFileUnderConstruction(
      permissions, replication,
      preferredBlockSize, modTime, clientName,
      clientMachine, clientNode);
  synchronized (rootDir) {
    newNode = addNode(path, newNode, -1, false);
  }
  if (newNode == null) {
    NameNode.stateChangeLog.info("DIR* FSDirectory.addFile: "
        + "failed to add " + path
        + " to the file system");
    return null;
  }
  // add create file record to log, record new generation stamp
  fsImage.getEditLog().logOpenFile(path, newNode);

  NameNode.stateChangeLog.debug("DIR* FSDirectory.addFile: "
      + path + " is added to the file system");
  return newNode;
}
       ClientProtocol.append() also goes through startFileInternal(), but unlike create(), an append must return a LocatedBlock object, which the client uses to set up the data pipeline and append data to the block. So after appendFile() has called startFileInternal(), its remaining work is to construct the LocatedBlock to be returned.
       This process touches the name node's second relationship and is therefore somewhat tedious. The first step is to check the lease again, guarding against the special case where the file was deleted after startFileInternal() returned. If the file's last block is already full, append() returns null; otherwise the data nodes currently holding that block are looked up in blocksMap and kept in the array targets. The block's information is then removed from those data-node descriptors, and the targets array is stored in the open file's tree node, i.e. the INodeFileUnderConstruction object. This is also why, as noted when discussing getBlockLocationsInternal(), the location of the last block of a file under construction cannot be obtained from blocksMap and must come from the pipeline members kept in the INodeFileUnderConstruction instance.
       From this information and the remaining writable space in the current block, the LocatedBlock to be returned can be built.
      The code of appendFile() is as follows:
       
/**
 * Append to an existing file in the namespace.
 */
LocatedBlock appendFile(String src, String holder, String clientMachine
    ) throws IOException {
  if (supportAppends == false) {
    throw new IOException("Append to hdfs not supported." +
        " Please refer to dfs.support.append configuration parameter.");
  }
  startFileInternal(src, null, holder, clientMachine, false, true,
      false, (short)maxReplication, (long)0);
  getEditLog().logSync();

  //
  // Create a LocatedBlock object for the last block of the file
  // to be returned to the client. Return null if the file does not
  // have a partial block at the end.
  //
  LocatedBlock lb = null;
  synchronized (this) {
    // Need to re-check existence here, since the file may have been deleted
    // in between the synchronized blocks
    INodeFileUnderConstruction file = checkLease(src, holder);

    Block[] blocks = file.getBlocks();
    if (blocks != null && blocks.length > 0) {
      Block last = blocks[blocks.length-1];
      BlockInfo storedBlock = blocksMap.getStoredBlock(last);
      if (file.getPreferredBlockSize() > storedBlock.getNumBytes()) {
        long fileLength = file.computeContentSummary().getLength();
        DatanodeDescriptor[] targets = new DatanodeDescriptor[blocksMap.numNodes(last)];
        Iterator<DatanodeDescriptor> it = blocksMap.nodeIterator(last);
        for (int i = 0; it != null && it.hasNext(); i++) {
          targets[i] = it.next();
        }
        // remove the replica locations of this block from the blocksMap
        for (int i = 0; i < targets.length; i++) {
          targets[i].removeBlock(storedBlock);
        }
        // set the locations of the last block in the lease record
        file.setLastBlock(storedBlock, targets);

        lb = new LocatedBlock(last, targets,
            fileLength - storedBlock.getNumBytes());
        if (isAccessTokenEnabled) {
          lb.setBlockToken(accessTokenHandler.generateToken(lb.getBlock(),
              EnumSet.of(BlockTokenSecretManager.AccessMode.WRITE)));
        }

        // Remove block from replication queue.
        updateNeededReplications(last, 0, 0);

        // remove this block from the list of pending blocks to be deleted.
        // This reduces the possibility of triggering HADOOP-1349.
        //
        for (DatanodeDescriptor dd : targets) {
          String datanodeId = dd.getStorageID();
          Collection<Block> v = recentInvalidateSets.get(datanodeId);
          if (v != null && v.remove(last)) {
            if (v.isEmpty()) {
              recentInvalidateSets.remove(datanodeId);
            }
            pendingDeletionBlocksCount--;
          }
        }
      }
    }
  }
  if (lb != null) {
    if (NameNode.stateChangeLog.isDebugEnabled()) {
      NameNode.stateChangeLog.debug("DIR* NameSystem.appendFile: file "
          + src + " for " + holder + " at " + clientMachine
          + " block " + lb.getBlock()
          + " block size " + lb.getBlock().getNumBytes());
    }
  }

  if (auditLog.isInfoEnabled() && isExternalInvocation()) {
    logAuditEvent(UserGroupInformation.getCurrentUser(),
                  Server.getRemoteIp(),
                  "append", src, null, null);
  }
  return lb;
}
      The second half of appendFile() deals with the effect of the append on the name node's second relationship. Any pending replication of the reopened block is stopped, even if the block currently has too few replicas; likewise, any pending request to delete a replica of that block is cancelled.
        From the logic of appendFile() we can see that while an open file is being written, its current block is tracked in the INodeFileUnderConstruction object. Before the block is committed, the data-node descriptors, the invalidated-replica sets and the pending-replication queue in the name node's second relationship hold no information about its replicas; the block map still records the block itself, but the replica locations cannot be obtained through it.

Writing a File: Committing Blocks, Adding/Abandoning Blocks

        After obtaining a non-null LocatedBlock from ClientProtocol.append(), the client can use the fields of that object to write data through the data nodes' streaming interface. Once a data node has successfully received a block, it must commit the block to the name node using the remote method DatanodeProtocol.blockReceived(). To reduce the number of requests sent to the name node, a data node batches several commits into one request; NameNode.blockReceived() loops over them and calls the FSNamesystem method of the same name once per commit. That method is really a method of the name node's second relationship; because it is also used when handling block replication, it never touches the open file's INodeFileUnderConstruction object and never accesses the lease manager.
        Since all the logic needed to add a block is implemented in addStoredBlock(), blockReceived() itself is short: it calls addStoredBlock() to complete the commit. The method also shows the typical traits of a second-relationship method: it examines the data node's current state and updates the information in pendingReplications. The code is as follows:
       
/**
 * The given node is reporting that it received a certain block.
 */
public synchronized void blockReceived(DatanodeID nodeID,
                                       Block block,
                                       String delHint
                                       ) throws IOException {
  DatanodeDescriptor node = getDatanode(nodeID);
  if (node == null || !node.isAlive) {
    NameNode.stateChangeLog.warn("BLOCK* NameSystem.blockReceived: " + block
        + " is received from dead or unregistered node " + nodeID.getName());
    throw new IOException(
        "Got blockReceived message from unregistered or dead node " + block);
  }

  if (NameNode.stateChangeLog.isDebugEnabled()) {
    NameNode.stateChangeLog.debug("BLOCK* NameSystem.blockReceived: "
        + block + " is received from " + nodeID.getName());
  }

  // Check if this datanode should actually be shutdown instead.
  if (shouldNodeShutdown(node)) {
    setDatanodeDead(node);
    throw new DisallowedDatanodeException(node);
  }

  // get the deletion hint node
  DatanodeDescriptor delHintNode = null;
  if (delHint != null && delHint.length() != 0) {
    delHintNode = datanodeMap.get(delHint);
    if (delHintNode == null) {
      NameNode.stateChangeLog.warn("BLOCK* NameSystem.blockReceived: "
          + block
          + " is expected to be removed from an unrecorded node "
          + delHint);
    }
  }

  //
  // Modify the blocks->datanode map and node's map.
  //
  pendingReplications.remove(block);
  addStoredBlock(block, node, delHintNode);

  // decrement number of blocks scheduled to this datanode.
  node.decBlocksScheduled();
}
        Committing a block means that writing to one replica of that block has finished and the replica's location has entered the name node's second relationship. If the client wants to keep writing to the file, it must call ClientProtocol.addBlock() to allocate a new block. After creating a new file with create(), the client also uses this method to obtain the file's first block once it has data to write.
      getAdditionalBlock() implements the main functionality of addBlock(), and the parameter lists are the same: the file for which a new block is requested (src), the client clientName, and excludedNodes, a set of data nodes to be excluded when allocating the block. The method needs few parameters because the values its logic requires, such as the block size and the client that opened the file, can all be found in the INodeFileUnderConstruction object.
      Compared with opening a file, getAdditionalBlock() performs far fewer checks: whether the system object limit has been reached, whether the file to which the block is being added is still open, the file's write progress, and whether the system is in safe mode.
     The write-progress check checkFileProgress() is a mechanism introduced for data safety: before a new block is allocated, the replicas of the file's second-to-last block must have reached the system minimum minReplication (default 1).
         The code of getAdditionalBlock() and checkFileProgress() is as follows:
     
public LocatedBlock getAdditionalBlock(String src,
                                       String clientName,
                                       List<Node> excludedNodes
                                       ) throws IOException {
  long fileLength, blockSize;
  int replication;
  DatanodeDescriptor clientNode = null;
  Block newBlock = null;

  NameNode.stateChangeLog.debug("BLOCK* NameSystem.getAdditionalBlock: file "
      + src + " for " + clientName);

  synchronized (this) {
    // have we exceeded the configured limit of fs objects.
    checkFsObjectLimit();

    INodeFileUnderConstruction pendingFile = checkLease(src, clientName);

    //
    // If we fail this, bad things happen!
    //
    if (!checkFileProgress(pendingFile, false)) {
      throw new NotReplicatedYetException("Not replicated yet:" + src);
    }
    fileLength = pendingFile.computeContentSummary().getLength();
    blockSize = pendingFile.getPreferredBlockSize();
    clientNode = pendingFile.getClientNode();
    replication = (int)pendingFile.getReplication();
  }

  // choose targets for the new block to be allocated.
  DatanodeDescriptor targets[] = replicator.chooseTarget(replication,
                                                         clientNode,
                                                         excludedNodes,
                                                         blockSize);
  if (targets.length < this.minReplication) {
    throw new IOException("File " + src + " could only be replicated to " +
        targets.length + " nodes, instead of " +
        minReplication);
  }

  // Allocate a new block and record it in the INode.
  synchronized (this) {
    if (isInSafeMode()) {
      throw new SafeModeException("Cannot add block to " + src, safeMode);
    }
    INode[] pathINodes = dir.getExistingPathINodes(src);
    int inodesLen = pathINodes.length;
    checkLease(src, clientName, pathINodes[inodesLen-1]);
    INodeFileUnderConstruction pendingFile = (INodeFileUnderConstruction)
        pathINodes[inodesLen - 1];

    if (!checkFileProgress(pendingFile, false)) {
      throw new NotReplicatedYetException("Not replicated yet:" + src);
    }

    // allocate new block record block locations in INode.
    newBlock = allocateBlock(src, pathINodes);
    pendingFile.setTargets(targets);

    for (DatanodeDescriptor dn : targets) {
      dn.incBlocksScheduled();
    }
  }

  // Create next block
  LocatedBlock b = new LocatedBlock(newBlock, targets, fileLength);
  if (isAccessTokenEnabled) {
    b.setBlockToken(accessTokenHandler.generateToken(b.getBlock(),
        EnumSet.of(BlockTokenSecretManager.AccessMode.WRITE)));
  }
  return b;
}

/**
 * Check that the indicated file's blocks are present and
 * replicated.  If not, return false. If checkall is true, then check
 * all blocks, otherwise check only penultimate block.
 */
synchronized boolean checkFileProgress(INodeFile v, boolean checkall) {
  if (checkall) {
    //
    // check all blocks of the file.
    //
    for (Block block: v.getBlocks()) {
      if (blocksMap.numNodes(block) < this.minReplication) {
        return false;
      }
    }
  } else {
    //
    // check the penultimate block of this file
    //
    Block b = v.getPenultimateBlock();
    if (b != null) {
      if (blocksMap.numNodes(b) < this.minReplication) {
        return false;
      }
    }
  }
  return true;
}
        Leaving aside the parameter gathering and the necessary checks, the block-allocation part of getAdditionalBlock() is not complicated: it first chooses where the block's replicas will be stored with ReplicationTargetChooser's chooseTarget() method, then allocates a block and adds it, together with the chosen locations, to the INodeFileUnderConstruction object.
       How is a block's identifier determined in HDFS? allocateBlock(), which allocates the new block, generates the block ID at random; if the randomly generated identifier collides with one already in the system, it simply generates another, until a valid identifier is obtained. The code is as follows:

     
/**
 * Allocate a block at the given pending filename
 *
 * @param src path to the file
 * @param inodes INode representing each of the components of src.
 *        <code>inodes[inodes.length-1]</code> is the INode for the file.
 */
private Block allocateBlock(String src, INode[] inodes) throws IOException {
  Block b = new Block(FSNamesystem.randBlockId.nextLong(), 0, 0);
  while (isValidBlock(b)) {
    b.setBlockId(FSNamesystem.randBlockId.nextLong());
  }
  b.setGenerationStamp(getGenerationStamp());
  b = dir.addBlock(src, inodes, b);
  NameNode.stateChangeLog.info("BLOCK* NameSystem.allocateBlock: "
      + src + ". " + b);
  return b;
}
       The newly allocated block is added to the INodeFileUnderConstruction object through FSDirectory.addBlock(), and also to the name node's blocksMap. In other words, as soon as a block is allocated it appears in blocksMap. The code is as follows:

/**
 * Add a block to the file. Returns a reference to the added block.
 */
Block addBlock(String path, INode[] inodes, Block block) throws IOException {
  waitForReady();

  synchronized (rootDir) {
    INodeFile fileNode = (INodeFile) inodes[inodes.length-1];

    // check quota limits and updated space consumed
    updateCount(inodes, inodes.length-1, 0,
        fileNode.getPreferredBlockSize()*fileNode.getReplication(), true);

    // associate the new list of blocks with this file
    namesystem.blocksMap.addINode(block, fileNode);
    BlockInfo blockInfo = namesystem.blocksMap.getStoredBlock(block);
    fileNode.addBlock(blockInfo);

    NameNode.stateChangeLog.debug("DIR* FSDirectory.addFile: "
        + path + " with " + block
        + " block is added to the in-memory "
        + "file system");
  }
  return block;
}


        The last method to introduce is abandonBlock(). When the client fails to set up the data pipeline for the LocatedBlock returned by addBlock(), it uses this method to give up the allocated block and then requests a new one. When re-requesting, the information about the failing data node is put into the excludedNodes parameter of the request so that no replica of the new block is placed on that node again.
       Because no replica of the block exists on any data node at this point, the implementation of FSNamesystem.abandonBlock() is quite simple: it uses FSDirectory's removeBlock() to remove the block's information from the file and from blocksMap. The code is as follows:
     
/**
 * The client would like to let go of the given block
 */
public synchronized boolean abandonBlock(Block b, String src, String holder
    ) throws IOException {
  //
  // Remove the block from the pending creates list
  //
  NameNode.stateChangeLog.debug("BLOCK* NameSystem.abandonBlock: "
      + b + "of file " + src);
  if (isInSafeMode()) {
    throw new SafeModeException("Cannot abandon block " + b +
        " for fle" + src, safeMode);
  }
  INodeFileUnderConstruction file = checkLease(src, holder);
  dir.removeBlock(src, file, b);
  NameNode.stateChangeLog.debug("BLOCK* NameSystem.abandonBlock: "
      + b
      + " is removed from pendingCreates");
  return true;
}

Writing a File: Closing the File

        Normally, after the client has created a file with create() and allocated the first block with addBlock(), or opened an existing file with append() (possibly also allocating a block), it opens the data pipeline over the data nodes' streaming interface and writes the file data. Whenever a block has been written, the data nodes commit it with blockReceived(), and the client either calls addBlock() for a new block and keeps writing, or closes the file through the ClientProtocol remote method complete(). We now look at the implementation of ClientProtocol.complete().
        As with most ClientProtocol methods, closing a file is mainly implemented in FSNamesystem's completeFile() and completeFileInternal(). The return value of completeFile() is an enum with the following meanings:
            OPERATION_FAILED: the operation failed, e.g. the file has already been deleted by another client.
            STILL_WAITING: the operation succeeded, but the name node is still waiting for block commits from the data nodes.
            COMPLETE_SUCCESS: the operation succeeded and the file has been closed.
            When the method returns STILL_WAITING, some of the file's blocks have not yet reached the system's minimum replication, so it cannot yet be decided whether the file can be closed, and the client must keep waiting. The check uses checkFileProgress(), analyzed earlier. If that method returns true, the file can be closed, and completeFileInternal() hands the remaining work to finalizeINodeFileUnderConstruction(): release the file's lease in the lease manager; replace the file's INodeFileUnderConstruction object in the directory tree with an INodeFile object and update the information in the block map blocksMap; call FSDirectory.closeFile() to close the file; and use checkReplicationFactor(), analyzed earlier, to see whether any of the file's blocks need replication to bring them up to the file's replication factor. Once all this is done, the method returns COMPLETE_SUCCESS.
       The code is as follows:
       
/**
 * The FSNamesystem will already know the blocks that make up the file.
 * Before we return, we make sure that all the file's blocks have
 * been reported by datanodes and are replicated correctly.
 */

enum CompleteFileStatus {
  OPERATION_FAILED,
  STILL_WAITING,
  COMPLETE_SUCCESS
}

public CompleteFileStatus completeFile(String src, String holder) throws IOException {
  CompleteFileStatus status = completeFileInternal(src, holder);
  getEditLog().logSync();
  return status;
}

private synchronized CompleteFileStatus completeFileInternal(String src,
    String holder) throws IOException {
  NameNode.stateChangeLog.debug("DIR* NameSystem.completeFile: " + src + " for " + holder);
  if (isInSafeMode())
    throw new SafeModeException("Cannot complete file " + src, safeMode);

  INodeFileUnderConstruction pendingFile = checkLease(src, holder);
  Block[] fileBlocks = dir.getFileBlocks(src);

  if (fileBlocks == null) {
    NameNode.stateChangeLog.warn("DIR* NameSystem.completeFile: "
        + "failed to complete " + src
        + " because dir.getFileBlocks() is null,"
        + " pending from " + pendingFile.getClientMachine());
    return CompleteFileStatus.OPERATION_FAILED;
  }

  if (!checkFileProgress(pendingFile, true)) {
    return CompleteFileStatus.STILL_WAITING;
  }

  finalizeINodeFileUnderConstruction(src, pendingFile);

  NameNode.stateChangeLog.info("DIR* NameSystem.completeFile: file " + src
      + " is closed by " + holder);
  return CompleteFileStatus.COMPLETE_SUCCESS;
}

private void finalizeINodeFileUnderConstruction(String src,
    INodeFileUnderConstruction pendingFile) throws IOException {
  NameNode.stateChangeLog.info("Removing lease on  file " + src +
      " from client " + pendingFile.clientName);
  leaseManager.removeLease(pendingFile.clientName, src);

  // The file is no longer pending.
  // Create permanent INode, update blockmap
  INodeFile newFile = pendingFile.convertToInodeFile();
  dir.replaceNode(src, pendingFile, newFile);

  // close file and persist block allocations for this file
  dir.closeFile(src, newFile);

  checkReplicationFactor(newFile);
}


        FSDirectory.closeFile() records the file close in the edit log, the exact counterpart of the file-open record written through FSDirectory.addFile() in startFileInternal(). The code is as follows:
      
/**
 * Close file.
 */
void closeFile(String path, INodeFile file) throws IOException {
  waitForReady();
  synchronized (rootDir) {
    // file is closed
    fsImage.getEditLog().logCloseFile(path, file);
    if (NameNode.stateChangeLog.isDebugEnabled()) {
      NameNode.stateChangeLog.debug("DIR* FSDirectory.closeFile: "
          + path + " with " + file.getBlocks().length
          + " blocks is persisted to the file system");
    }
  }
}

The Lease Manager

      The lease is an important concept in the HDFS implementation. For HDFS, a lease is a contract in which the name node grants the lease holder, usually a client, certain rights (writing a file) for a limited period of time. Leases are widely used in distributed systems. The Dynamic Host Configuration Protocol (DHCP), for example, supplies hosts with network configuration parameters, its most important function being the dynamic assignment of IP addresses; there, the lease is the period for which the server's configuration assignment to a client remains valid, after which the server can reclaim the configuration and hand it to another host. When 50% of the lease period has elapsed, the DHCP client tries to renew the lease, and the server may confirm it or even cancel it. Version 4 of NFS provides a lease-based locking mechanism: when an NFS client wants to lock a file or part of a file, it sends a request to the network lock manager, which returns a lease to the client, and the lock on the file is then maintained through explicit or implicit lease renewals.
        HDFS applies a similar mechanism to files opened for writing, implemented as the lease manager.
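        As a taste of the idea before the lease manager is examined in detail, here is a heavily simplified, conceptual sketch (not the actual org.apache.hadoop.hdfs LeaseManager code; the class name LeaseSketch is made up) of what a lease record and its expiry checks look like: a holder, the paths it has open for writing, a last-renewed timestamp, and soft/hard limits, for which HDFS uses one minute and one hour respectively.

import java.util.HashSet;
import java.util.Set;

class LeaseSketch {
  static final long SOFT_LIMIT = 60 * 1000L;         // HDFS soft limit: 1 minute
  static final long HARD_LIMIT = 60 * 60 * 1000L;    // HDFS hard limit: 1 hour

  final String holder;                                // client name that owns the lease
  final Set<String> paths = new HashSet<String>();    // files this client has open for write
  long lastUpdate;                                     // last time the client renewed the lease

  LeaseSketch(String holder) { this.holder = holder; renew(); }

  void renew()          { lastUpdate = System.currentTimeMillis(); }  // what renewLease() does
  boolean softExpired() { return System.currentTimeMillis() - lastUpdate > SOFT_LIMIT; }
  boolean hardExpired() { return System.currentTimeMillis() - lastUpdate > HARD_LIMIT; }
  // After the soft limit another client may trigger lease recovery on the file;
  // after the hard limit the name node itself recovers the lease and closes the file.
}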

        Copyright notice: parts of this post are excerpted from the book 《Hadoop技术内幕:深入解析Hadoop Common和HDFS架构设计与实现原理》 by Cai Bin and Chen Xiangping. These are study notes intended for technical exchange only; the commercial copyright remains with the original authors. Readers are encouraged to buy the book for further study, and reposts should retain the attribution. Thanks!

       