public class INodeDirectory
extends org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields
implements org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes
| Modifier and Type | Class and Description |
|---|---|
| static class | INodeDirectory.SnapshotAndINode: A pair of Snapshot and INode objects. |
Nested classes/interfaces inherited from class org.apache.hadoop.hdfs.server.namenode.INode:
INode.BlocksMapUpdateInfo, INode.Feature, INode.QuotaDelta, INode.ReclaimContext

Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes:
INodeDirectoryAttributes.CopyWithQuota, INodeDirectoryAttributes.SnapshotCopy

| Modifier and Type | Field and Description |
|---|---|
| static int | DEFAULT_FILES_PER_DIRECTORY |
| Constructor and Description |
|---|
| INodeDirectory(INodeDirectory other, boolean adopt, INode.Feature... featuresToCopy): Copy constructor |
| INodeDirectory(long id, byte[] name, org.apache.hadoop.fs.permission.PermissionStatus permissions, long mtime): Constructor |
| Modifier and Type | Method and Description |
|---|---|
| boolean | addChild(org.apache.hadoop.hdfs.server.namenode.INode node) |
| boolean | addChild(org.apache.hadoop.hdfs.server.namenode.INode node, boolean setModTime, int latestSnapshotId): Add a child inode to the directory. |
| boolean | addChildAtLoading(org.apache.hadoop.hdfs.server.namenode.INode node): During image loading, the search is unnecessary since the insert position should always be at the end of the map, given the sequence in which inodes are serialized on disk. |
| org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot | addSnapshot(int id, String name, org.apache.hadoop.hdfs.server.namenode.LeaseManager leaseManager, boolean captureOpenFiles, int maxSnapshotLimit, long mtime): Add a snapshot. |
| org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature | addSnapshotFeature(DirectoryWithSnapshotFeature.DirectoryDiffList diffs) |
| void | addSnapshottableFeature(): Add DirectorySnapshottableFeature. |
| void | addSpaceConsumed(QuotaCounts counts): Check and add namespace/storagespace/storagetype consumed to itself and the ancestors. |
| INodeDirectory | asDirectory(): Cast this inode to an INodeDirectory. |
| void | cleanSubtree(INode.ReclaimContext reclaimContext, int snapshotId, int priorSnapshotId): Clean the subtree under this inode and collect the blocks from the descendants for further block deletion/update. |
| void | cleanSubtreeRecursively(INode.ReclaimContext reclaimContext, int snapshot, int prior, Map<org.apache.hadoop.hdfs.server.namenode.INode,org.apache.hadoop.hdfs.server.namenode.INode> excludedNodes): Call cleanSubtree(..) recursively down the subtree. |
| void | clear(): Clear references to other objects. |
| void | clearChildren(): Set the children list to null. |
| org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext | computeContentSummary(int snapshotId, org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext summary): Count subtree content summary with a ContentCounts. |
| protected org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext | computeDirectoryContentSummary(org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext summary, int snapshotId) |
| QuotaCounts | computeQuotaUsage(BlockStoragePolicySuite bsps, byte blockStoragePolicyId, boolean useCache, int lastSnapshotId): Count subtree Quota.NAMESPACE and Quota.STORAGESPACE usages. |
| QuotaCounts | computeQuotaUsage4CurrentDirectory(BlockStoragePolicySuite bsps, byte storagePolicyId, QuotaCounts counts): Add quota usage for this inode excluding children. |
| void | destroyAndCollectBlocks(INode.ReclaimContext reclaimContext): Destroy self and clear everything! If the INode is a file, this method collects its blocks for further block deletion. |
| void | dumpTreeRecursively(PrintWriter out, StringBuilder prefix, int snapshot): Dump tree recursively. |
| static void | dumpTreeRecursively(PrintWriter out, StringBuilder prefix, Iterable<INodeDirectory.SnapshotAndINode> subs): Dump the given subtrees. |
| org.apache.hadoop.hdfs.server.namenode.INode | getChild(byte[] name, int snapshotId) |
| org.apache.hadoop.hdfs.util.ReadOnlyList<org.apache.hadoop.hdfs.server.namenode.INode> | getChildrenList(int snapshotId) |
| int | getChildrenNum(int snapshotId) |
| DirectoryWithSnapshotFeature.DirectoryDiffList | getDiffs() |
| org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature | getDirectorySnapshottableFeature() |
| DirectoryWithQuotaFeature | getDirectoryWithQuotaFeature(): If the directory contains a DirectoryWithQuotaFeature, return it; otherwise, return null. |
| org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature | getDirectoryWithSnapshotFeature(): If the feature list contains a DirectoryWithSnapshotFeature, return it; otherwise, return null. |
| byte | getLocalStoragePolicyID() |
| QuotaCounts | getQuotaCounts(): Get the quota set for this inode. |
| org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot | getSnapshot(byte[] snapshotName) |
| org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes | getSnapshotINode(int snapshotId) |
| byte | getStoragePolicyID() |
| boolean | isDescendantOfSnapshotRoot(INodeDirectory snapshotRootDir): Check if this directory is a descendant of a snapshot root directory. |
| boolean | isDirectory(): Check whether it's a directory. |
| boolean | isSnapshottable() |
| boolean | isWithSnapshot(): Does this directory have the snapshot feature? |
| boolean | metadataEquals(org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes other): Compare the metadata with another INodeDirectory. |
| void | recordModification(int latestSnapshotId): This inode is being modified. |
| boolean | removeChild(org.apache.hadoop.hdfs.server.namenode.INode child): Remove the specified child from this directory. |
| boolean | removeChild(org.apache.hadoop.hdfs.server.namenode.INode child, int latestSnapshotId): Remove the specified child from this directory. |
| org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot | removeSnapshot(INode.ReclaimContext reclaimContext, String snapshotName, long mtime): Delete a snapshot. |
| void | removeSnapshottableFeature(): Remove DirectorySnapshottableFeature. |
| void | renameSnapshot(String path, String oldName, String newName, long mtime): Rename a snapshot. |
| void | replaceChild(org.apache.hadoop.hdfs.server.namenode.INode oldChild, org.apache.hadoop.hdfs.server.namenode.INode newChild, INodeMap inodeMap): Replace the given child with a new child. |
| org.apache.hadoop.hdfs.server.namenode.INode | saveChild2Snapshot(org.apache.hadoop.hdfs.server.namenode.INode child, int latestSnapshotId, org.apache.hadoop.hdfs.server.namenode.INode snapshotCopy): Save the child to the latest snapshot. |
| int | searchChild(org.apache.hadoop.hdfs.server.namenode.INode inode): Search for the given INode in the children list and the deleted lists of snapshots. |
| void | setSnapshotQuota(int snapshotQuota) |
| String | toDetailString() |
| void | undoRename4DstParent(BlockStoragePolicySuite bsps, org.apache.hadoop.hdfs.server.namenode.INode deletedChild, int latestSnapshotId): Undo the rename operation for the dst tree, i.e., if the rename operation (with the OVERWRITE option) removed a file/dir from the dst tree, add it back and delete the possible record in the deleted list. |
| void | undoRename4ScrParent(INodeReference oldChild, org.apache.hadoop.hdfs.server.namenode.INode newChild): This method is usually called by the undo section of rename. |
| static INodeDirectory | valueOf(org.apache.hadoop.hdfs.server.namenode.INode inode, Object path): Cast INode to INodeDirectory. |
Methods inherited from class org.apache.hadoop.hdfs.server.namenode.INodeWithAdditionalFields:
addAclFeature, addFeature, addXAttrFeature, getAclFeature, getFeature, getFeatures, getFsPermissionShort, getId, getLocalNameBytes, getNext, getPermissionLong, removeAclFeature, removeFeature, removeXAttrFeature, setAccessTime, setLocalName, setModificationTime, setNext, updateModificationTime

Methods inherited from class org.apache.hadoop.hdfs.server.namenode.INode:
asFile, asReference, asSymlink, compareTo, computeAndConvertContentSummary, computeContentSummary, computeQuotaUsage, computeQuotaUsage, dumpTreeRecursively, dumpTreeRecursively, equals, getAccessTime, getAclFeature, getFsPermission, getFullPathName, getGroupName, getKey, getLocalName, getModificationTime, getObjectString, getParent, getParentReference, getParentString, getPathComponents, getPathComponents, getPathNames, getStoragePolicyIDForQuota, getUserName, getXAttrFeature, hashCode, isAncestorDirectory, isDeleted, isFile, isInCurrentState, isInLatestSnapshot, isLastReference, isQuotaSet, isReference, isSetStoragePolicy, isSymlink, setAccessTime, setModificationTime, setParent, setParentReference, shouldRecordInSrcSnapshot, toString

public static final int DEFAULT_FILES_PER_DIRECTORY

public INodeDirectory(long id,
byte[] name,
org.apache.hadoop.fs.permission.PermissionStatus permissions,
long mtime)

public INodeDirectory(INodeDirectory other, boolean adopt, INode.Feature... featuresToCopy)
Copy constructor.
Parameters:
other - The INodeDirectory to be copied
adopt - Indicates whether the parent field of the child INodes needs to be set to the new node
featuresToCopy - any number of features to copy to the new node. The method does a reference copy, not a deep copy.

public static INodeDirectory valueOf(org.apache.hadoop.hdfs.server.namenode.INode inode, Object path) throws FileNotFoundException, org.apache.hadoop.fs.PathIsNotDirectoryException
Cast INode to INodeDirectory.
Throws:
FileNotFoundException
org.apache.hadoop.fs.PathIsNotDirectoryException

public final boolean isDirectory()
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Check whether it's a directory.
Specified by: isDirectory in interface org.apache.hadoop.hdfs.server.namenode.INodeAttributes
Overrides: isDirectory in class org.apache.hadoop.hdfs.server.namenode.INode

public final INodeDirectory asDirectory()
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Cast this inode to an INodeDirectory.
Overrides: asDirectory in class org.apache.hadoop.hdfs.server.namenode.INode

public byte getLocalStoragePolicyID()
Overrides: getLocalStoragePolicyID in class org.apache.hadoop.hdfs.server.namenode.INode
Returns: the storage policy id set directly on this inode, or HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED if no policy has been specified.

public byte getStoragePolicyID()
Overrides: getStoragePolicyID in class org.apache.hadoop.hdfs.server.namenode.INode

public QuotaCounts getQuotaCounts()
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Get the quota set for this inode.
Specified by: getQuotaCounts in interface org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes
Overrides: getQuotaCounts in class org.apache.hadoop.hdfs.server.namenode.INode

public void addSpaceConsumed(QuotaCounts counts)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Check and add namespace/storagespace/storagetype consumed to itself and the ancestors.
Overrides: addSpaceConsumed in class org.apache.hadoop.hdfs.server.namenode.INode

public final DirectoryWithQuotaFeature getDirectoryWithQuotaFeature()
If the directory contains a DirectoryWithQuotaFeature, return it; otherwise, return null.

public org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature addSnapshotFeature(DirectoryWithSnapshotFeature.DirectoryDiffList diffs)
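
Tying the constructor and cast methods above together, here is a minimal, hypothetical sketch; it assumes the NameNode classes are on the classpath, and the id, name, owner, and mtime values are placeholders that a real NameNode would allocate itself:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;

import org.apache.hadoop.fs.permission.FsPermission;
import org.apache.hadoop.fs.permission.PermissionStatus;
import org.apache.hadoop.hdfs.server.namenode.INode;
import org.apache.hadoop.hdfs.server.namenode.INodeDirectory;

public class INodeDirectoryCastSketch {
  public static void main(String[] args) throws IOException {
    // Build a directory inode directly; all values are illustrative only.
    PermissionStatus perm = new PermissionStatus(
        "hdfs", "supergroup", FsPermission.createImmutable((short) 0755));
    INodeDirectory dir = new INodeDirectory(
        16385L, "data".getBytes(StandardCharsets.UTF_8), perm, System.currentTimeMillis());

    // valueOf(..) is the checked cast: it throws FileNotFoundException or
    // PathIsNotDirectoryException (both IOExceptions) on a bad input.
    INode generic = dir; // e.g. the result of a path lookup
    INodeDirectory checked = INodeDirectory.valueOf(generic, "/data");

    // asDirectory() is the unchecked variant, used when the caller has
    // already verified isDirectory().
    if (generic.isDirectory()) {
      INodeDirectory unchecked = generic.asDirectory();
      System.out.println(unchecked.toDetailString());
    }
  }
}
```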

public final org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature getDirectoryWithSnapshotFeature()
If the feature list contains a DirectoryWithSnapshotFeature, return it; otherwise, return null.

public final boolean isWithSnapshot()
Does this directory have the snapshot feature?

public DirectoryWithSnapshotFeature.DirectoryDiffList getDiffs()
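
Since the feature getter returns null when the feature is absent, isWithSnapshot() can be understood as a simple null check, as in this sketch:

```java
import org.apache.hadoop.hdfs.server.namenode.INodeDirectory;
import org.apache.hadoop.hdfs.server.namenode.snapshot.DirectoryWithSnapshotFeature;

public class SnapshotFeatureCheckSketch {
  // Mirrors isWithSnapshot(): the directory is "with snapshot" exactly
  // when a DirectoryWithSnapshotFeature is present in its feature list.
  static boolean isWithSnapshot(INodeDirectory dir) {
    DirectoryWithSnapshotFeature sf = dir.getDirectoryWithSnapshotFeature();
    return sf != null;
  }
}
```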

public org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes getSnapshotINode(int snapshotId)
If snapshotId is Snapshot.CURRENT_STATE_ID, return this; otherwise return the corresponding snapshot inode.
Overrides: getSnapshotINode in class org.apache.hadoop.hdfs.server.namenode.INode

public String toDetailString()
Overrides: toDetailString in class org.apache.hadoop.hdfs.server.namenode.INode

public org.apache.hadoop.hdfs.server.namenode.snapshot.DirectorySnapshottableFeature getDirectorySnapshottableFeature()

public boolean isSnapshottable()

public boolean isDescendantOfSnapshotRoot(INodeDirectory snapshotRootDir)
Check if this directory is a descendant of a snapshot root directory.
Parameters:
snapshotRootDir - the snapshot root directory

public org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot getSnapshot(byte[] snapshotName)

public void setSnapshotQuota(int snapshotQuota)

public org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot addSnapshot(int id, String name, org.apache.hadoop.hdfs.server.namenode.LeaseManager leaseManager, boolean captureOpenFiles, int maxSnapshotLimit, long mtime) throws SnapshotException
Add a snapshot.
Parameters:
name - Name of the snapshot.
mtime - The snapshot creation time set by Time.now().
Throws:
SnapshotException

public org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot removeSnapshot(INode.ReclaimContext reclaimContext, String snapshotName, long mtime) throws SnapshotException
Delete a snapshot.
Parameters:
snapshotName - Name of the snapshot.
mtime - The snapshot deletion time set by Time.now().
Throws:
SnapshotException

public void renameSnapshot(String path, String oldName, String newName, long mtime) throws SnapshotException
Rename a snapshot.
Parameters:
path - The directory path where the snapshot was taken.
oldName - Old name of the snapshot
newName - New name the snapshot will be renamed to
mtime - The snapshot modification time set by Time.now().
Throws:
SnapshotException

public void addSnapshottableFeature()
Add DirectorySnapshottableFeature.

public void removeSnapshottableFeature()
Remove DirectorySnapshottableFeature.
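
The snapshot methods above work together over a directory's lifecycle. The following is an illustrative sketch, not canonical usage: it assumes dir, leaseManager, and reclaimContext are obtained from a running FSNamesystem, and the snapshot id and limit values are made up:

```java
import org.apache.hadoop.hdfs.protocol.SnapshotException;
import org.apache.hadoop.hdfs.server.namenode.INode;
import org.apache.hadoop.hdfs.server.namenode.INodeDirectory;
import org.apache.hadoop.hdfs.server.namenode.LeaseManager;
import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;
import org.apache.hadoop.util.Time;

public class SnapshotLifecycleSketch {
  static void lifecycle(INodeDirectory dir, LeaseManager leaseManager,
                        INode.ReclaimContext reclaimContext) throws SnapshotException {
    dir.addSnapshottableFeature();           // mark the directory snapshottable

    // Create a snapshot; the doc says mtime is set by Time.now().
    Snapshot s1 = dir.addSnapshot(0, "s1", leaseManager,
        /* captureOpenFiles */ true, /* maxSnapshotLimit */ 65536, Time.now());

    dir.renameSnapshot("/data", "s1", "s1-renamed", Time.now());
    dir.removeSnapshot(reclaimContext, "s1-renamed", Time.now());

    dir.removeSnapshottableFeature();        // valid once no snapshots remain
  }
}
```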

public void replaceChild(org.apache.hadoop.hdfs.server.namenode.INode oldChild, org.apache.hadoop.hdfs.server.namenode.INode newChild, INodeMap inodeMap)
Replace the given child with a new child.

public void recordModification(int latestSnapshotId)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
This inode is being modified.
Overrides: recordModification in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
latestSnapshotId - The id of the latest snapshot that has been taken. Note that it is Snapshot.CURRENT_STATE_ID if no snapshots have been taken.

public org.apache.hadoop.hdfs.server.namenode.INode saveChild2Snapshot(org.apache.hadoop.hdfs.server.namenode.INode child, int latestSnapshotId, org.apache.hadoop.hdfs.server.namenode.INode snapshotCopy)
Save the child to the latest snapshot.

public org.apache.hadoop.hdfs.server.namenode.INode getChild(byte[] name, int snapshotId)
Parameters:
name - the name of the child
snapshotId - if it is not Snapshot.CURRENT_STATE_ID, get the result from the corresponding snapshot; otherwise, get the result from the current directory.

public int searchChild(org.apache.hadoop.hdfs.server.namenode.INode inode)
Search for the given INode in the children list and the deleted lists of snapshots.
Returns: Snapshot.CURRENT_STATE_ID if the inode is in the children list; Snapshot.NO_SNAPSHOT_ID if the inode is neither in the children list nor in any snapshot; otherwise the snapshot id of the corresponding snapshot diff list.

public org.apache.hadoop.hdfs.util.ReadOnlyList<org.apache.hadoop.hdfs.server.namenode.INode> getChildrenList(int snapshotId)
Parameters:
snapshotId - if it is not Snapshot.CURRENT_STATE_ID, get the result from the corresponding snapshot; otherwise, get the result from the current directory.

public boolean removeChild(org.apache.hadoop.hdfs.server.namenode.INode child, int latestSnapshotId)
Remove the specified child from this directory.

public boolean removeChild(org.apache.hadoop.hdfs.server.namenode.INode child)
Remove the specified child from this directory.
Parameters:
child - the child inode to be removed

public boolean addChild(org.apache.hadoop.hdfs.server.namenode.INode node, boolean setModTime, int latestSnapshotId)
Add a child inode to the directory.
Parameters:
node - INode to insert
setModTime - set modification time for the parent node; not needed when replaying the addition and the parent already has the proper mod time

public boolean addChild(org.apache.hadoop.hdfs.server.namenode.INode node)

public boolean addChildAtLoading(org.apache.hadoop.hdfs.server.namenode.INode node)
During image loading, the search is unnecessary since the insert position should always be at the end of the map, given the sequence in which inodes are serialized on disk.
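
A hedged sketch of how the child-manipulation methods above combine, assuming dir and child already exist in a NameNode context; snapshot id 3 is an arbitrary illustration:

```java
import org.apache.hadoop.hdfs.server.namenode.INode;
import org.apache.hadoop.hdfs.server.namenode.INodeDirectory;
import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;
import org.apache.hadoop.hdfs.util.ReadOnlyList;

public class ChildOperationsSketch {
  static void childOps(INodeDirectory dir, INode child) {
    // Insert into the current tree, updating the parent's mtime.
    boolean added = dir.addChild(child, /* setModTime */ true, Snapshot.CURRENT_STATE_ID);

    // Look the child up by name in the current tree ...
    INode current = dir.getChild(child.getLocalNameBytes(), Snapshot.CURRENT_STATE_ID);
    // ... or as it appeared in a particular snapshot.
    INode inSnapshot = dir.getChild(child.getLocalNameBytes(), 3);

    // Enumerate children as of the current state.
    ReadOnlyList<INode> children = dir.getChildrenList(Snapshot.CURRENT_STATE_ID);
    int n = dir.getChildrenNum(Snapshot.CURRENT_STATE_ID);

    // Remove from the current tree.
    boolean removed = dir.removeChild(child, Snapshot.CURRENT_STATE_ID);
  }
}
```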

public QuotaCounts computeQuotaUsage(BlockStoragePolicySuite bsps, byte blockStoragePolicyId, boolean useCache, int lastSnapshotId)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Count subtree Quota.NAMESPACE and Quota.STORAGESPACE usages.

With the existence of INodeReference, the same inode and its subtree may be referred to by multiple INodeReference.WithName nodes and an INodeReference.DstReference node. To avoid cycles during quota usage computation, we have the following rules:

1. For an INodeReference.DstReference node, since the node must be in the current tree (or has been deleted as the end point of a series of rename operations), we compute the quota usage of the referred node (and its subtree) in the regular manner, i.e., including every inode in the current tree and in snapshot copies, as well as the size of the diff list.
2. For an INodeReference.WithName node, since the node must be in a snapshot, we only count the quota usage for those nodes that still existed at the creation time of the snapshot associated with the INodeReference.WithName node. We do not count in the size of the diff list.

Overrides: computeQuotaUsage in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
bsps - Block storage policy suite to calculate intended storage type usage
blockStoragePolicyId - block storage policy id of the current INode
useCache - Whether to use cached quota usage. Note that an INodeReference.WithName node never uses the cache for its subtree.
lastSnapshotId - Snapshot.CURRENT_STATE_ID indicates the computation is in the current tree. Otherwise the id indicates the computation range for an INodeReference.WithName node.

public QuotaCounts computeQuotaUsage4CurrentDirectory(BlockStoragePolicySuite bsps, byte storagePolicyId, QuotaCounts counts)
Add quota usage for this inode excluding children.
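
As a sketch, a whole-subtree computation over the current tree would look like the following; bsps is assumed to come from the block manager, and the unspecified-policy constant is the one referenced under getLocalStoragePolicyID() above:

```java
import org.apache.hadoop.hdfs.protocol.HdfsConstants;
import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite;
import org.apache.hadoop.hdfs.server.namenode.INodeDirectory;
import org.apache.hadoop.hdfs.server.namenode.QuotaCounts;
import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;

public class QuotaUsageSketch {
  // lastSnapshotId = CURRENT_STATE_ID means "compute over the current tree".
  static QuotaCounts usage(INodeDirectory dir, BlockStoragePolicySuite bsps) {
    return dir.computeQuotaUsage(bsps,
        HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED, // no explicit policy
        /* useCache */ true,
        Snapshot.CURRENT_STATE_ID);
  }
}
```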

public org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext computeContentSummary(int snapshotId, org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext summary) throws org.apache.hadoop.security.AccessControlException
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Count subtree content summary with a ContentCounts.
Overrides: computeContentSummary in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
snapshotId - Specify the time range for the calculation. If this parameter equals Snapshot.CURRENT_STATE_ID, the result covers both the current state and all the snapshots. Otherwise the result only covers all the files/directories contained in the specific snapshot.
summary - the context object holding counts for the subtree.
Throws:
org.apache.hadoop.security.AccessControlException

protected org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext computeDirectoryContentSummary(org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext summary, int snapshotId) throws org.apache.hadoop.security.AccessControlException
Throws:
org.apache.hadoop.security.AccessControlException

public void undoRename4ScrParent(INodeReference oldChild, org.apache.hadoop.hdfs.server.namenode.INode newChild)
This method is usually called by the undo section of rename. It will:
1) remove the WithName node from the deleted list (if it exists);
2) replace the WithName node in the created list with srcChild;
3) add srcChild back as a child of srcParent.

Note that the node is already added to the created list of a snapshot diff in step 2, so we do not need to add srcChild to the created list of the latest snapshot. We also do not need to update quota usage, because the old child was in the deleted list before.

Parameters:
oldChild - The reference node to be removed/replaced
newChild - The node to be added back

public void undoRename4DstParent(BlockStoragePolicySuite bsps, org.apache.hadoop.hdfs.server.namenode.INode deletedChild, int latestSnapshotId)
Undo the rename operation for the dst tree, i.e., if the rename operation (with the OVERWRITE option) removed a file/dir from the dst tree, add it back and delete the possible record in the deleted list.

public void clearChildren()
Set the children list to null.

public void clear()
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Clear references to other objects.
Overrides: clear in class org.apache.hadoop.hdfs.server.namenode.INode

public void cleanSubtreeRecursively(INode.ReclaimContext reclaimContext, int snapshot, int prior, Map<org.apache.hadoop.hdfs.server.namenode.INode,org.apache.hadoop.hdfs.server.namenode.INode> excludedNodes)
Call cleanSubtree(..) recursively down the subtree.

public void destroyAndCollectBlocks(INode.ReclaimContext reclaimContext)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Destroy self and clear everything! If the INode is a file, this method collects its blocks for further block deletion.
Overrides: destroyAndCollectBlocks in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
reclaimContext - Record blocks and inodes that need to be reclaimed.

public void cleanSubtree(INode.ReclaimContext reclaimContext, int snapshotId, int priorSnapshotId)
Clean the subtree under this inode and collect the blocks from the descendants for further block deletion/update. In general, we have the following rules:

1. When deleting a file/directory in the current tree, we have different actions according to the type of the node to delete.
   1.1 The current inode (this) is an INodeFile.
       1.1.1 If prior is null, there is no snapshot taken on ancestors before. Thus we simply destroy (i.e., delete completely, with no need to save a snapshot copy) the current INode and collect its blocks for further cleansing.
       1.1.2 Else do nothing, since the current INode will be stored as a snapshot copy.
   1.2 The current inode is an INodeDirectory.
       1.2.1 If prior is null, there is no snapshot taken on ancestors before. Similarly, we destroy the whole subtree and collect blocks.
       1.2.2 Else do nothing with the current INode. Recursively clean its children.
   1.3 The current inode is a file with snapshot. Call recordModification(..) to capture the current states. Mark the INode as deleted.
   1.4 The current inode is an INodeDirectory with the snapshot feature. Call recordModification(..) to capture the current states. Destroy files/directories created after the latest snapshot (i.e., the inodes stored in the created list of the latest snapshot). Recursively clean the remaining children.
2. When deleting a snapshot.
   2.1 To clean an INodeFile: do nothing.
   2.2 To clean an INodeDirectory: recursively clean its children.
   2.3 To clean an INodeFile with snapshot: delete the corresponding snapshot in its diff list.
   2.4 To clean an INodeDirectory with snapshot: delete the corresponding snapshot in its diff list. Recursively clean its children.

Overrides: cleanSubtree in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
reclaimContext - Record blocks and inodes that need to be reclaimed.
snapshotId - The id of the snapshot to delete. Snapshot.CURRENT_STATE_ID means to delete the current file/directory.
priorSnapshotId - The id of the latest snapshot before the to-be-deleted snapshot. When deleting a current inode, this parameter captures the latest snapshot.
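
A sketch of the two deletion modes described above, assuming the ReclaimContext and snapshot ids come from the snapshot manager:

```java
import org.apache.hadoop.hdfs.server.namenode.INode;
import org.apache.hadoop.hdfs.server.namenode.INodeDirectory;
import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;

public class CleanSubtreeSketch {
  // Deleting snapshot `sid`: `prior` is the id of the latest snapshot
  // taken before `sid`.
  static void deleteSnapshot(INodeDirectory dir, INode.ReclaimContext ctx,
                             int sid, int prior) {
    dir.cleanSubtree(ctx, sid, prior);
  }

  // Deleting the directory itself from the current tree: pass
  // CURRENT_STATE_ID as snapshotId; `latest` captures the latest snapshot.
  static void deleteCurrent(INodeDirectory dir, INode.ReclaimContext ctx, int latest) {
    dir.cleanSubtree(ctx, Snapshot.CURRENT_STATE_ID, latest);
  }
}
```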

public boolean metadataEquals(org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes other)
Compare the metadata with another INodeDirectory.
Specified by: metadataEquals in interface org.apache.hadoop.hdfs.server.namenode.INodeDirectoryAttributes

public void dumpTreeRecursively(PrintWriter out, StringBuilder prefix, int snapshot)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Dump tree recursively.
Overrides: dumpTreeRecursively in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
prefix - The prefix string that each line should print.

public static void dumpTreeRecursively(PrintWriter out, StringBuilder prefix, Iterable<INodeDirectory.SnapshotAndINode> subs)
Dump the given subtrees.
Parameters:
prefix - The prefix string that each line should print.
subs - The subtrees.

public final int getChildrenNum(int snapshotId)
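
For debugging, a hypothetical caller could use dumpTreeRecursively(..) above to print the subtree for the current state like this:

```java
import java.io.PrintWriter;

import org.apache.hadoop.hdfs.server.namenode.INodeDirectory;
import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;

public class DumpTreeSketch {
  // Writes an indented listing of the subtree under dir to stdout,
  // for the current state rather than any snapshot.
  static void dump(INodeDirectory dir) {
    PrintWriter out = new PrintWriter(System.out, true);
    dir.dumpTreeRecursively(out, new StringBuilder(), Snapshot.CURRENT_STATE_ID);
    out.flush();
  }
}
```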