public abstract class INodeReference
extends org.apache.hadoop.hdfs.server.namenode.INode
| Modifier and Type | Class and Description |
|---|---|
| static class | INodeReference.DstReference |
| static class | INodeReference.WithCount - An anonymous reference with reference count. |
| static class | INodeReference.WithName - A reference with a fixed name. |

Nested classes/interfaces inherited from class org.apache.hadoop.hdfs.server.namenode.INode: INode.BlocksMapUpdateInfo, INode.Feature, INode.QuotaDelta, INode.ReclaimContext

Nested classes/interfaces inherited from interface org.apache.hadoop.hdfs.server.namenode.INodeAttributes: INodeAttributes.SnapshotCopy

| Constructor and Description |
|---|
| INodeReference(org.apache.hadoop.hdfs.server.namenode.INode parent, org.apache.hadoop.hdfs.server.namenode.INode referred) |
| Modifier and Type | Method and Description |
|---|---|
| INodeDirectory | asDirectory() - Cast this inode to an INodeDirectory. |
| org.apache.hadoop.hdfs.server.namenode.INodeFile | asFile() - Cast this inode to an INodeFile. |
| INodeReference | asReference() - Cast this inode to an INodeReference. |
| org.apache.hadoop.hdfs.server.namenode.INodeSymlink | asSymlink() - Cast this inode to an INodeSymlink. |
| void | cleanSubtree(INode.ReclaimContext reclaimContext, int snapshot, int prior) - Clean the subtree under this inode and collect the blocks from the descendants for further block deletion/update. |
| void | clear() - Clear references to other objects. |
| org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext | computeContentSummary(int snapshotId, org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext summary) - Count subtree content summary with a ContentCounts. |
| QuotaCounts | computeQuotaUsage(BlockStoragePolicySuite bsps, byte blockStoragePolicyId, boolean useCache, int lastSnapshotId) - Count subtree Quota.NAMESPACE and Quota.STORAGESPACE usages. |
| void | destroyAndCollectBlocks(INode.ReclaimContext reclaimContext) - Destroy self and clear everything! If the INode is a file, this method collects its blocks for further block deletion. |
| void | dumpTreeRecursively(PrintWriter out, StringBuilder prefix, int snapshot) - Dump tree recursively. |
| long | getAccessTime(int snapshotId) |
| int | getDstSnapshotId() |
| org.apache.hadoop.fs.permission.FsPermission | getFsPermission(int snapshotId) |
| short | getFsPermissionShort() |
| String | getGroupName(int snapshotId) |
| long | getId() - Get the inode id. |
| byte[] | getLocalNameBytes() |
| byte | getLocalStoragePolicyID() |
| long | getModificationTime(int snapshotId) |
| long | getPermissionLong() |
| org.apache.hadoop.fs.permission.PermissionStatus | getPermissionStatus(int snapshotId) - Get the PermissionStatus. |
| QuotaCounts | getQuotaCounts() - Get the quota set for this inode. |
| org.apache.hadoop.hdfs.server.namenode.INode | getReferredINode() |
| org.apache.hadoop.hdfs.server.namenode.INodeAttributes | getSnapshotINode(int snapshotId) |
| byte | getStoragePolicyID() |
| String | getUserName(int snapshotId) |
| boolean | isDirectory() - Check whether it's a directory. |
| boolean | isFile() - Check whether it's a file. |
| boolean | isReference() - Check whether it's a reference. |
| boolean | isSymlink() - Check whether it's a symlink. |
| void | setAccessTime(long accessTime) - Set the last access time of the inode. |
| void | setLocalName(byte[] name) - Set the local file name. |
| void | setModificationTime(long modificationTime) - Set the last modification time of the inode. |
| static int | tryRemoveReference(org.apache.hadoop.hdfs.server.namenode.INode inode) - Try to remove the given reference and then return the reference count. |
| org.apache.hadoop.hdfs.server.namenode.INode | updateModificationTime(long mtime, int latestSnapshotId) - Update the modification time if it is larger than the current value. |
Methods inherited from class org.apache.hadoop.hdfs.server.namenode.INode: addSpaceConsumed, compareTo, computeAndConvertContentSummary, computeContentSummary, computeQuotaUsage, computeQuotaUsage, dumpTreeRecursively, dumpTreeRecursively, equals, getAccessTime, getAclFeature, getFsPermission, getFullPathName, getGroupName, getKey, getLocalName, getModificationTime, getObjectString, getParent, getParentReference, getParentString, getPathComponents, getPathComponents, getPathNames, getStoragePolicyIDForQuota, getUserName, getXAttrFeature, hashCode, isAncestorDirectory, isDeleted, isInCurrentState, isInLatestSnapshot, isLastReference, isQuotaSet, isSetStoragePolicy, setAccessTime, setModificationTime, setParent, setParentReference, shouldRecordInSrcSnapshot, toDetailString, toString

public INodeReference(org.apache.hadoop.hdfs.server.namenode.INode parent, org.apache.hadoop.hdfs.server.namenode.INode referred)
public static int tryRemoveReference(org.apache.hadoop.hdfs.server.namenode.INode inode)
Try to remove the given reference and then return the reference count.

public final org.apache.hadoop.hdfs.server.namenode.INode getReferredINode()

public final boolean isReference()
Check whether it's a reference.
Overrides: isReference in class org.apache.hadoop.hdfs.server.namenode.INode

public final INodeReference asReference()
Cast this inode to an INodeReference.
Overrides: asReference in class org.apache.hadoop.hdfs.server.namenode.INode

public final boolean isFile()
Check whether it's a file.
Overrides: isFile in class org.apache.hadoop.hdfs.server.namenode.INode

public final org.apache.hadoop.hdfs.server.namenode.INodeFile asFile()
Cast this inode to an INodeFile.
Overrides: asFile in class org.apache.hadoop.hdfs.server.namenode.INode

public final boolean isDirectory()
Check whether it's a directory.
Specified by: isDirectory in interface org.apache.hadoop.hdfs.server.namenode.INodeAttributes
Overrides: isDirectory in class org.apache.hadoop.hdfs.server.namenode.INode

public final INodeDirectory asDirectory()
Cast this inode to an INodeDirectory.
Overrides: asDirectory in class org.apache.hadoop.hdfs.server.namenode.INode

public final boolean isSymlink()
Check whether it's a symlink.
Overrides: isSymlink in class org.apache.hadoop.hdfs.server.namenode.INode

public final org.apache.hadoop.hdfs.server.namenode.INodeSymlink asSymlink()
Cast this inode to an INodeSymlink.
Overrides: asSymlink in class org.apache.hadoop.hdfs.server.namenode.INode

public byte[] getLocalNameBytes()
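The isReference()/asReference() pair above follows the same check-then-cast pattern as the other isX()/asX() methods: test the inode's kind first, then downcast and, for a reference, follow getReferredINode() to the underlying inode. The snippet below is a minimal sketch of that pattern, not code from HDFS itself; the resolveReference helper and the way the INode is obtained are assumptions for illustration.

```java
import org.apache.hadoop.hdfs.server.namenode.INode;
import org.apache.hadoop.hdfs.server.namenode.INodeReference;

final class ReferenceExample {
  // Sketch: resolve a possibly-referenced inode to the inode it refers to.
  // The `inode` argument is assumed to come from a namenode-internal lookup.
  static INode resolveReference(INode inode) {
    if (inode.isReference()) {
      // Downcast only after the isReference() check succeeds.
      INodeReference ref = inode.asReference();
      // Follow the reference to the underlying file/directory inode.
      return ref.getReferredINode();
    }
    return inode;
  }
}
```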
public void setLocalName(byte[] name)
Set the local file name.
Overrides: setLocalName in class org.apache.hadoop.hdfs.server.namenode.INode

public final long getId()
Get the inode id.
Overrides: getId in class org.apache.hadoop.hdfs.server.namenode.INode

public final org.apache.hadoop.fs.permission.PermissionStatus getPermissionStatus(int snapshotId)
Get the PermissionStatus.
Overrides: getPermissionStatus in class org.apache.hadoop.hdfs.server.namenode.INode

public final String getUserName(int snapshotId)
Overrides: getUserName in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters: snapshotId - if it is not Snapshot.CURRENT_STATE_ID, get the result from the given snapshot; otherwise, get the result from the current inode.

public final String getGroupName(int snapshotId)
Overrides: getGroupName in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters: snapshotId - if it is not Snapshot.CURRENT_STATE_ID, get the result from the given snapshot; otherwise, get the result from the current inode.

public final org.apache.hadoop.fs.permission.FsPermission getFsPermission(int snapshotId)
Overrides: getFsPermission in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters: snapshotId - if it is not Snapshot.CURRENT_STATE_ID, get the result from the given snapshot; otherwise, get the result from the current inode.

public final short getFsPermissionShort()

public long getPermissionLong()

public final long getModificationTime(int snapshotId)
Overrides: getModificationTime in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters: snapshotId - if it is not Snapshot.CURRENT_STATE_ID, get the result from the given snapshot; otherwise, get the result from the current inode.

public final org.apache.hadoop.hdfs.server.namenode.INode updateModificationTime(long mtime, int latestSnapshotId)
Update the modification time if it is larger than the current value.
Overrides: updateModificationTime in class org.apache.hadoop.hdfs.server.namenode.INode

public final void setModificationTime(long modificationTime)
Set the last modification time of the inode.
Overrides: setModificationTime in class org.apache.hadoop.hdfs.server.namenode.INode

public final long getAccessTime(int snapshotId)
Overrides: getAccessTime in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters: snapshotId - if it is not Snapshot.CURRENT_STATE_ID, get the result from the given snapshot; otherwise, get the result from the current inode.

public final void setAccessTime(long accessTime)
Set the last access time of the inode.
Overrides: setAccessTime in class org.apache.hadoop.hdfs.server.namenode.INode

public final byte getStoragePolicyID()
Overrides: getStoragePolicyID in class org.apache.hadoop.hdfs.server.namenode.INode

public final byte getLocalStoragePolicyID()
Overrides: getLocalStoragePolicyID in class org.apache.hadoop.hdfs.server.namenode.INode
Returns: HdfsConstants.BLOCK_STORAGE_POLICY_ID_UNSPECIFIED if no policy has been specified.
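All of the snapshot-aware getters above share the same snapshotId convention described in their parameter notes. The following is a small illustrative sketch, not HDFS source; the ref and snapshotId values are assumed to be supplied by namenode-internal code.

```java
import org.apache.hadoop.hdfs.server.namenode.INodeReference;
import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;

final class SnapshotGetterExample {
  // Sketch: read an attribute both from the live inode and from a snapshot.
  static void printOwner(INodeReference ref, int snapshotId) {
    // Current owner of the referred inode.
    String currentOwner = ref.getUserName(Snapshot.CURRENT_STATE_ID);
    // Owner as recorded for the given snapshot id.
    String snapshotOwner = ref.getUserName(snapshotId);
    System.out.println(currentOwner + " / " + snapshotOwner);
  }
}
```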
public void cleanSubtree(INode.ReclaimContext reclaimContext, int snapshot, int prior)
Clean the subtree under this inode and collect the blocks from the descendants for further block deletion/update. In general, we have the following rules.
1. When deleting a file/directory in the current tree, we have different actions according to the type of the node to delete.
1.1 The current inode (this) is an INodeFile.
1.1.1 If prior is null, there is no snapshot taken on ancestors before. Thus we simply destroy (i.e., delete completely, no need to save a snapshot copy) the current INode and collect its blocks for further cleansing.
1.1.2 Else do nothing since the current INode will be stored as a snapshot copy.
1.2 The current inode is an INodeDirectory.
1.2.1 If prior is null, there is no snapshot taken on ancestors before. Similarly, we destroy the whole subtree and collect blocks.
1.2.2 Else do nothing with the current INode. Recursively clean its children.
1.3 The current inode is a file with snapshot. Call recordModification(..) to capture the current states. Mark the INode as deleted.
1.4 The current inode is an INodeDirectory with snapshot feature. Call recordModification(..) to capture the current states. Destroy files/directories created after the latest snapshot (i.e., the inodes stored in the created list of the latest snapshot). Recursively clean the remaining children.
2. When deleting a snapshot.
2.1 To clean INodeFile: do nothing.
2.2 To clean INodeDirectory: recursively clean its children.
2.3 To clean INodeFile with snapshot: delete the corresponding snapshot in its diff list.
2.4 To clean INodeDirectory with snapshot: delete the corresponding snapshot in its diff list. Recursively clean its children.
Overrides: cleanSubtree in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
reclaimContext - Record blocks and inodes that need to be reclaimed.
snapshot - The id of the snapshot to delete. Snapshot.CURRENT_STATE_ID means to delete the current file/directory.
prior - The id of the latest snapshot before the to-be-deleted snapshot. When deleting a current inode, this parameter captures the latest snapshot.
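The snapshot and prior arguments distinguish the two deletion cases in the rules above. The following sketch only illustrates the calling convention; it is not HDFS source, and the reclaimContext, snapshotToDelete and priorSnapshot values are assumed to come from namenode-internal snapshot management.

```java
import org.apache.hadoop.hdfs.server.namenode.INode;
import org.apache.hadoop.hdfs.server.namenode.INodeReference;
import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;

final class CleanSubtreeExample {
  // Sketch, case 2 above: delete one snapshot; prior is the latest snapshot
  // taken before the one being removed.
  static void deleteSnapshot(INodeReference ref, INode.ReclaimContext reclaimContext,
                             int snapshotToDelete, int priorSnapshot) {
    ref.cleanSubtree(reclaimContext, snapshotToDelete, priorSnapshot);
  }

  // Sketch, case 1 above: delete the current file/directory itself; prior then
  // captures the latest snapshot still covering the subtree.
  static void deleteCurrent(INodeReference ref, INode.ReclaimContext reclaimContext,
                            int priorSnapshot) {
    ref.cleanSubtree(reclaimContext, Snapshot.CURRENT_STATE_ID, priorSnapshot);
  }
}
```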
public void destroyAndCollectBlocks(INode.ReclaimContext reclaimContext)
Destroy self and clear everything! If the INode is a file, this method collects its blocks for further block deletion.
Overrides: destroyAndCollectBlocks in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
reclaimContext - Record blocks and inodes that need to be reclaimed.

public org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext computeContentSummary(int snapshotId, org.apache.hadoop.hdfs.server.namenode.ContentSummaryComputationContext summary) throws org.apache.hadoop.security.AccessControlException
Count subtree content summary with a ContentCounts.
Overrides: computeContentSummary in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
snapshotId - Specify the time range for the calculation. If this parameter equals Snapshot.CURRENT_STATE_ID, the result covers both the current state and all the snapshots. Otherwise the result only covers all the files/directories contained in the specific snapshot.
summary - The context object holding counts for the subtree.
Throws: org.apache.hadoop.security.AccessControlException

public QuotaCounts computeQuotaUsage(BlockStoragePolicySuite bsps, byte blockStoragePolicyId, boolean useCache, int lastSnapshotId)
Count subtree Quota.NAMESPACE and Quota.STORAGESPACE usages. With the existence of INodeReference, the same inode and its subtree may be referred to by multiple INodeReference.WithName nodes and an INodeReference.DstReference node. To avoid cycles during quota usage computation, we have the following rules:
1. For an INodeReference.DstReference node, since the node must be in the current tree (or has been deleted as the end point of a series of rename operations), we compute the quota usage of the referred node (and its subtree) in the regular manner, i.e., including every inode in the current tree and in snapshot copies, as well as the size of the diff list.
2. For an INodeReference.WithName node, since the node must be in a snapshot, we only count the quota usage for those nodes that still existed at the creation time of the snapshot associated with the INodeReference.WithName node. We do not count the size of the diff list.
Overrides: computeQuotaUsage in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
bsps - Block storage policy suite used to calculate intended storage type usage.
blockStoragePolicyId - Block storage policy id of the current INode.
useCache - Whether to use cached quota usage. Note that an INodeReference.WithName node never uses the cache for its subtree.
lastSnapshotId - Snapshot.CURRENT_STATE_ID indicates the computation is in the current tree. Otherwise the id indicates the computation range for an INodeReference.WithName node.
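The useCache and lastSnapshotId parameters follow the rules above directly. A minimal sketch of a current-tree computation is shown below; it is illustrative only, and the ref, bsps and policy id values are assumed to come from namenode internals.

```java
import org.apache.hadoop.hdfs.server.blockmanagement.BlockStoragePolicySuite;
import org.apache.hadoop.hdfs.server.namenode.INodeReference;
import org.apache.hadoop.hdfs.server.namenode.QuotaCounts;
import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;

final class QuotaUsageExample {
  // Sketch: compute namespace/storagespace usage over the current tree.
  static QuotaCounts currentTreeUsage(INodeReference ref, BlockStoragePolicySuite bsps,
                                      byte blockStoragePolicyId) {
    return ref.computeQuotaUsage(
        bsps,
        blockStoragePolicyId,       // storage policy id of this inode
        false,                      // do not use cached quota usage
        Snapshot.CURRENT_STATE_ID); // compute over the current tree
  }
}
```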
public final org.apache.hadoop.hdfs.server.namenode.INodeAttributes getSnapshotINode(int snapshotId)
Overrides: getSnapshotINode in class org.apache.hadoop.hdfs.server.namenode.INode
Returns: if the snapshot id is Snapshot.CURRENT_STATE_ID, return this; otherwise return the corresponding snapshot inode.

public QuotaCounts getQuotaCounts()
Get the quota set for this inode.
Overrides: getQuotaCounts in class org.apache.hadoop.hdfs.server.namenode.INode

public final void clear()
Clear references to other objects.
Overrides: clear in class org.apache.hadoop.hdfs.server.namenode.INode

public void dumpTreeRecursively(PrintWriter out, StringBuilder prefix, int snapshot)
Dump tree recursively.
Overrides: dumpTreeRecursively in class org.apache.hadoop.hdfs.server.namenode.INode
Parameters:
prefix - The prefix string that each line should print.

public int getDstSnapshotId()
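For completeness, a hedged sketch of calling dumpTreeRecursively (documented above) for the current state; the use of Snapshot.CURRENT_STATE_ID as the snapshot argument and the PrintWriter setup are assumptions for illustration.

```java
import java.io.PrintWriter;
import org.apache.hadoop.hdfs.server.namenode.INodeReference;
import org.apache.hadoop.hdfs.server.namenode.snapshot.Snapshot;

final class DumpTreeExample {
  // Sketch: dump the referenced subtree for the current state to stdout.
  static void dump(INodeReference ref) {
    PrintWriter out = new PrintWriter(System.out, true);
    ref.dumpTreeRecursively(out, new StringBuilder(), Snapshot.CURRENT_STATE_ID);
    out.flush();
  }
}
```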
Copyright © 2008–2023 Apache Software Foundation. All rights reserved.