public static class INodeReference.WithName extends INodeReference
Nested classes/interfaces inherited from class INodeReference:
INodeReference.DstReference, INodeReference.WithCount, INodeReference.WithName

Nested classes/interfaces inherited from class INode:
INode.BlocksMapUpdateInfo, INode.Feature, INode.QuotaDelta, INode.ReclaimContext

Nested classes/interfaces inherited from interface INodeAttributes:
INodeAttributes.SnapshotCopy

| Constructor and Description |
|---|
| WithName(INodeDirectory parent, INodeReference.WithCount referred, byte[] name, int lastSnapshotId) |
| Modifier and Type | Method and Description |
|---|---|
| void | cleanSubtree(INode.ReclaimContext reclaimContext, int snapshot, int prior) Clean the subtree under this inode and collect the blocks from its descendants for further block deletion/update. |
| org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext | computeContentSummary(int snapshotId, org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext summary) Count subtree content summary with a ContentCounts. |
| QuotaCounts | computeQuotaUsage(BlockStoragePolicySuite bsps, byte blockStoragePolicyId, boolean useCache, int lastSnapshotId) Count subtree Quota.NAMESPACE and Quota.STORAGESPACE usages. |
| void | destroyAndCollectBlocks(INode.ReclaimContext reclaimContext) Destroy self and clear everything! If the INode is a file, this method collects its blocks for further block deletion. |
| int | getLastSnapshotId() |
| byte[] | getLocalNameBytes() |
| void | setLocalName(byte[] name) Set the local file name. |
Methods inherited from class INodeReference:
accept, asDirectory, asFile, asReference, asSymlink, clear, dumpTreeRecursively, getAccessTime, getDstSnapshotId, getFsPermission, getFsPermissionShort, getGroupName, getId, getLocalStoragePolicyID, getModificationTime, getPermissionLong, getPermissionStatus, getQuotaCounts, getReferredINode, getSnapshotINode, getStoragePolicyID, getUserName, isDirectory, isFile, isReference, isSymlink, setAccessTime, setModificationTime, toDetailString, tryRemoveReference, updateModificationTime

Methods inherited from class INode:
addSpaceConsumed, compareTo, computeAndConvertContentSummary, computeContentSummary, computeQuotaUsage, computeQuotaUsage, dumpINode, dumpParentINodes, dumpTreeRecursively, dumpTreeRecursively, equals, getAccessTime, getAclFeature, getFsPermission, getFullPathAndObjectString, getFullPathName, getGroupName, getKey, getLocalName, getModificationTime, getObjectString, getParent, getParentReference, getParentString, getPathComponents, getPathComponents, getPathNames, getStoragePolicyIDForQuota, getUserName, getXAttrFeature, hashCode, isAncestorDirectory, isDeleted, isInCurrentState, isInLatestSnapshot, isLastReference, isQuotaSet, isSetStoragePolicy, setAccessTime, setModificationTime, setParent, setParentReference, shouldRecordInSrcSnapshot, toString

Constructor detail:

public WithName(INodeDirectory parent, INodeReference.WithCount referred, byte[] name, int lastSnapshotId)
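The constructor wraps an existing INodeReference.WithCount node under a given local name and records the last snapshot id for which that name is valid. The sketch below is a minimal illustration only: it assumes the surrounding NameNode objects (the parent INodeDirectory and the shared WithCount reference) already exist and are visible to the calling code (these are internal NameNode classes), and the helper name and the use of DFSUtil.string2Bytes for the byte[] name are assumptions, not part of this API.

```java
import org.apache.hadoop.hdfs.DFSUtil;
import org.apache.hadoop.hdfs.server.namenode.INodeDirectory;
import org.apache.hadoop.hdfs.server.namenode.INodeReference;

final class WithNameConstructionSketch {
  /**
   * Hypothetical helper: wrap an existing WithCount reference under a local
   * name, recording the last snapshot id at which the name is valid.
   */
  static INodeReference.WithName newWithName(INodeDirectory parent,
                                              INodeReference.WithCount referred,
                                              String localName,
                                              int lastSnapshotId) {
    // Local names are stored as bytes; DFSUtil.string2Bytes performs the conversion.
    byte[] name = DFSUtil.string2Bytes(localName);
    return new INodeReference.WithName(parent, referred, name, lastSnapshotId);
  }
}
```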
Method detail:

public final byte[] getLocalNameBytes()
Specified by: getLocalNameBytes in interface org.apache.hadoop.hdfs.server.namenode.INodeAttributes
Overrides: getLocalNameBytes in class INodeReference

public final void setLocalName(byte[] name)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Set the local file name.
Overrides: setLocalName in class INodeReference

public int getLastSnapshotId()
public final org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext computeContentSummary(int snapshotId, org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext summary) throws org.apache.hadoop.security.AccessControlException
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Count subtree content summary with a ContentCounts.
Overrides: computeContentSummary in class INodeReference
Parameters:
snapshotId - Specifies the time range for the calculation. If this parameter equals Snapshot.CURRENT_STATE_ID, the result covers both the current state and all the snapshots. Otherwise the result only covers the files/directories contained in the specified snapshot.
summary - the context object holding counts for the subtree.
Throws: org.apache.hadoop.security.AccessControlException
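As a rough illustration of the snapshotId parameter, the sketch below shows the two ways the call is typically parameterized: Snapshot.CURRENT_STATE_ID for the current state plus all snapshots, or a concrete snapshot id to restrict the count. The helper names are assumptions, the ContentSummaryComputationContext is supplied by the caller, and the code assumes these internal NameNode classes are visible to it.

```java
import org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext;
import org.apache.hadoop.hdfs.server.namenode.INodeReference;
import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;
import org.apache.hadoop.security.AccessControlException;

final class ContentSummarySketch {
  /** Hypothetical helper: count the subtree as seen from the current state plus all snapshots. */
  static ContentSummaryComputationContext summarizeCurrent(
      INodeReference.WithName ref,
      ContentSummaryComputationContext context) throws AccessControlException {
    return ref.computeContentSummary(Snapshot.CURRENT_STATE_ID, context);
  }

  /** Hypothetical helper: count only what is contained in one specific snapshot. */
  static ContentSummaryComputationContext summarizeSnapshot(
      INodeReference.WithName ref, int snapshotId,
      ContentSummaryComputationContext context) throws AccessControlException {
    return ref.computeContentSummary(snapshotId, context);
  }
}
```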
public final QuotaCounts computeQuotaUsage(BlockStoragePolicySuite bsps, byte blockStoragePolicyId, boolean useCache, int lastSnapshotId)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Count subtree Quota.NAMESPACE and Quota.STORAGESPACE usages.
With the existence of INodeReference, the same inode and its subtree may be referred to by multiple INodeReference.WithName nodes and an INodeReference.DstReference node. To avoid cycles during quota usage computation, we apply the following rules:
1. For an INodeReference.DstReference node, since the node must be in the current tree (or has been deleted as the end point of a series of rename operations), we compute the quota usage of the referred node (and its subtree) in the regular manner, i.e., including every inode in the current tree and in snapshot copies, as well as the size of the diff list.
2. For an INodeReference.WithName node, since the node must be in a snapshot, we only count the quota usage for those nodes that still existed at the creation time of the snapshot associated with the INodeReference.WithName node. We do not count the size of the diff list.
Overrides: computeQuotaUsage in class INodeReference
Parameters:
bsps - Block storage policy suite used to calculate intended storage type usage.
blockStoragePolicyId - block storage policy id of the current INode.
useCache - Whether to use cached quota usage. Note that an INodeReference.WithName node never uses the cache for its subtree.
lastSnapshotId - Snapshot.CURRENT_STATE_ID indicates the computation is in the current tree. Otherwise the id indicates the computation range for an INodeReference.WithName node.
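To make the lastSnapshotId distinction concrete, here is a minimal sketch, assuming the internal NameNode types are accessible from the calling code; the helper names are illustrative, not part of the API.

```java
import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite;
import org.apache.hadoop.hdfs.server.namenode.INodeReference;
import org.apache.hadoop.hdfs.server.namenode.QuotaCounts;
import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;

final class QuotaUsageSketch {
  /** Hypothetical helper: quota usage as of the snapshot this WithName node belongs to. */
  static QuotaCounts usageForReference(INodeReference.WithName ref,
                                       BlockStoragePolicySuite bsps,
                                       byte blockStoragePolicyId) {
    // useCache is false here purely for illustration; per the javadoc,
    // a WithName node never uses the cached usage for its subtree anyway.
    return ref.computeQuotaUsage(bsps, blockStoragePolicyId, false,
        ref.getLastSnapshotId());
  }

  /** Hypothetical helper: quota usage computed against the current tree. */
  static QuotaCounts usageForCurrentTree(INodeReference.WithName ref,
                                         BlockStoragePolicySuite bsps,
                                         byte blockStoragePolicyId) {
    return ref.computeQuotaUsage(bsps, blockStoragePolicyId, true,
        Snapshot.CURRENT_STATE_ID);
  }
}
```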
public void cleanSubtree(INode.ReclaimContext reclaimContext, int snapshot, int prior)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
In general, we have the following rules.
1. When deleting a file/directory in the current tree, we take different actions according to the type of the node to delete.
1.1 The current inode (this) is an INodeFile.
1.1.1 If prior is null, there is no snapshot taken on ancestors before. Thus we simply destroy (i.e., delete completely, with no need to save a snapshot copy) the current INode and collect its blocks for further cleansing.
1.1.2 Else do nothing, since the current INode will be stored as a snapshot copy.
1.2 The current inode is an INodeDirectory.
1.2.1 If prior is null, there is no snapshot taken on ancestors before. Similarly, we destroy the whole subtree and collect blocks.
1.2.2 Else do nothing with the current INode. Recursively clean its children.
1.3 The current inode is a file with snapshot. Call recordModification(..) to capture the current states. Mark the INode as deleted.
1.4 The current inode is an INodeDirectory with snapshot feature. Call recordModification(..) to capture the current states. Destroy files/directories created after the latest snapshot (i.e., the inodes stored in the created list of the latest snapshot). Recursively clean remaining children.
2. When deleting a snapshot.
2.1 To clean INodeFile: do nothing.
2.2 To clean INodeDirectory: recursively clean its children.
2.3 To clean INodeFile with snapshot: delete the corresponding snapshot in its diff list.
2.4 To clean INodeDirectory with snapshot: delete the corresponding snapshot in its diff list. Recursively clean its children.
Overrides: cleanSubtree in class INodeReference
Parameters:
reclaimContext - Record blocks and inodes that need to be reclaimed.
snapshot - The id of the snapshot to delete. Snapshot.CURRENT_STATE_ID means to delete the current file/directory.
prior - The id of the latest snapshot before the to-be-deleted snapshot. When deleting a current inode, this parameter captures the latest snapshot.
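The pairing of the snapshot and prior arguments can be seen in the following sketch. It assumes an INode.ReclaimContext has already been set up by the caller and that the internal NameNode types are accessible; the helper names are illustrative only.

```java
import org.apache.hadoop.hdfs.server.namenode.INode;
import org.apache.hadoop.hdfs.server.namenode.INodeReference;
import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;

final class CleanSubtreeSketch {
  /** Hypothetical helper: delete one snapshot, given the id of the snapshot just before it. */
  static void deleteSnapshot(INodeReference.WithName ref,
                             INode.ReclaimContext reclaimContext,
                             int snapshotId, int priorSnapshotId) {
    ref.cleanSubtree(reclaimContext, snapshotId, priorSnapshotId);
  }

  /** Hypothetical helper: delete the current file/directory; data still referenced
      by the latest snapshot is preserved via the prior argument. */
  static void deleteCurrent(INodeReference.WithName ref,
                            INode.ReclaimContext reclaimContext,
                            int latestSnapshotId) {
    ref.cleanSubtree(reclaimContext, Snapshot.CURRENT_STATE_ID, latestSnapshotId);
  }
}
```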
public void destroyAndCollectBlocks(INode.ReclaimContext reclaimContext)
Description copied from class: org.apache.hadoop.hdfs.server.namenode.INode
Destroy self and clear everything! If the INode is a file, this method collects its blocks for further block deletion.
Overrides: destroyAndCollectBlocks in class INodeReference
Parameters:
reclaimContext - Record blocks and inodes that need to be reclaimed.

Copyright © 2008–2024 Apache Software Foundation. All rights reserved.