Package org.apache.storm.hdfs.bolt
Class AbstractHdfsBolt
- java.lang.Object
-
- org.apache.storm.topology.base.BaseComponent
-
- org.apache.storm.topology.base.BaseRichBolt
-
- org.apache.storm.hdfs.bolt.AbstractHdfsBolt
-
- All Implemented Interfaces:
Serializable, IBolt, IComponent, IRichBolt
- Direct Known Subclasses:
AvroGenericRecordBolt, HdfsBolt, SequenceFileBolt
public abstract class AbstractHdfsBolt extends BaseRichBolt
- See Also:
- Serialized Form
-
-
Field Summary
Fields
- protected OutputCollector collector
- protected String configKey
- protected FileNameFormat fileNameFormat
- protected Integer fileRetryCount
- protected org.apache.hadoop.fs.FileSystem fs
- protected String fsUrl
- protected org.apache.hadoop.conf.Configuration hdfsConfig
- protected Integer maxOpenFiles
- protected long offset
- protected Partitioner partitioner
- protected List<RotationAction> rotationActions
- protected Map<String,Integer> rotationCounterMap
- protected FileRotationPolicy rotationPolicy
- protected Timer rotationTimer
- protected SyncPolicy syncPolicy
- protected Integer tickTupleInterval
- protected Object writeLock
- protected Map<String,Writer> writers
-
Constructor Summary
Constructors
- AbstractHdfsBolt()
-
Method Summary
Methods
- void cleanup(): Called when an IBolt is going to be shutdown.
- void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer): Declare the output schema for all the streams of this topology.
- protected abstract void doPrepare(Map<String,Object> conf, TopologyContext topologyContext, OutputCollector collector)
- void execute(Tuple tuple): Process a single tuple of input.
- protected org.apache.hadoop.fs.Path getBasePathForNextFile(Tuple tuple)
- Map<String,Object> getComponentConfiguration(): Declare configuration specific to this component.
- protected abstract String getWriterKey(Tuple tuple)
- protected abstract Writer makeNewWriter(org.apache.hadoop.fs.Path path, Tuple tuple)
- void prepare(Map<String,Object> conf, TopologyContext topologyContext, OutputCollector collector): Marked as final to prevent override.
- protected void rotateOutputFile(Writer writer)
-
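The split between final methods (prepare, execute) and abstract hooks (doPrepare, getWriterKey, makeNewWriter) is the template-method pattern. The sketch below illustrates the shape of that contract with simplified stand-in types; the class and method names mirror this API, but the signatures are not the real Storm/Hadoop ones.

```java
// Simplified sketch of AbstractHdfsBolt's template-method design: prepare()
// is final and delegates subclass-specific setup to the doPrepare() hook.
// All types here are illustrative stand-ins, not the real Storm API.
abstract class SketchHdfsBolt {
    boolean prepared = false;

    // Final template method: shared setup runs first, then the subclass hook.
    final void prepare() {
        prepared = true;   // stands in for common HDFS connection setup
        doPrepare();       // subclass-specific setup
    }

    abstract void doPrepare();
    abstract String getWriterKey(String tuple); // picks the writer for a tuple
}

class SketchSequenceFileBolt extends SketchHdfsBolt {
    boolean doPrepareCalled = false;

    @Override
    void doPrepare() { doPrepareCalled = true; }

    @Override
    String getWriterKey(String tuple) {
        // e.g. partition tuples across two writers
        return "writer-" + Math.floorMod(tuple.hashCode(), 2);
    }
}
```

Because prepare() is final, a subclass cannot accidentally skip the shared setup; it can only extend it through doPrepare().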
-
-
Field Detail
-
rotationActions
protected List<RotationAction> rotationActions
-
collector
protected OutputCollector collector
-
fs
protected transient org.apache.hadoop.fs.FileSystem fs
-
syncPolicy
protected SyncPolicy syncPolicy
-
rotationPolicy
protected FileRotationPolicy rotationPolicy
-
fileNameFormat
protected FileNameFormat fileNameFormat
-
fsUrl
protected String fsUrl
-
configKey
protected String configKey
-
writeLock
protected transient Object writeLock
-
rotationTimer
protected transient Timer rotationTimer
-
offset
protected long offset
-
fileRetryCount
protected Integer fileRetryCount
-
tickTupleInterval
protected Integer tickTupleInterval
-
maxOpenFiles
protected Integer maxOpenFiles
-
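Because the bolt keeps a Map of open writers, maxOpenFiles caps how many HDFS files are open at once. One common way to enforce such a cap is an LRU map; the sketch below shows that idea using java.util.LinkedHashMap. This is illustrative only (a real implementation would close and rotate the evicted writer in the eviction hook), and the WriterCache class is made up for this example.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative LRU cache showing how a bolt could cap open writers at
// maxOpenFiles: once the cap is exceeded, the least-recently-used entry is
// evicted (where a real implementation would close the evicted writer).
class WriterCache extends LinkedHashMap<String, String> {
    private final int maxOpenFiles;

    WriterCache(int maxOpenFiles) {
        super(16, 0.75f, true); // access-order iteration gives LRU behavior
        this.maxOpenFiles = maxOpenFiles;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<String, String> eldest) {
        return size() > maxOpenFiles; // evict once the cap is exceeded
    }
}
```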
partitioner
protected Partitioner partitioner
-
hdfsConfig
protected transient org.apache.hadoop.conf.Configuration hdfsConfig
-
-
Method Detail
-
rotateOutputFile
protected void rotateOutputFile(Writer writer) throws IOException
- Throws:
IOException
-
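Rotation works with the rotationCounterMap field: each writer key carries its own rotation count, which can be folded into the next file name so rotated files never collide. The sketch below shows that bookkeeping in isolation; the RotationNamer class and the name pattern are invented for illustration and do not match the real FileNameFormat interface.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch: track a per-writer rotation count (the role of
// rotationCounterMap) and fold it into the next file name. The naming
// scheme here is made up for the example.
class RotationNamer {
    private final Map<String, Integer> rotationCounterMap = new HashMap<>();

    String nextFileName(String writerKey) {
        // merge() increments the counter, starting at 1 on first rotation
        int rotation = rotationCounterMap.merge(writerKey, 1, Integer::sum);
        return String.format("%s-%d.txt", writerKey, rotation);
    }
}
```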
prepare
public final void prepare(Map<String,Object> conf, TopologyContext topologyContext, OutputCollector collector)
Marked as final to prevent override. Subclasses should implement the doPrepare() method.
- Parameters:
conf - The Storm configuration for this bolt. This is the configuration provided to the topology merged in with cluster configuration on this machine.
topologyContext - This object can be used to get information about this task's place within the topology, including the task id and component id of this task, input and output information, etc.
collector - The collector is used to emit tuples from this bolt. Tuples can be emitted at any time, including the prepare and cleanup methods. The collector is thread-safe and should be saved as an instance variable of this bolt object.
-
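The merge described for the conf parameter can be pictured as a simple map overlay, with topology-level settings taking precedence over cluster defaults. The helper class below is a hypothetical illustration of that precedence, not Storm code; the config keys are standard Storm names used only as examples.

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of the conf merge described above: start from
// the cluster configuration, then overlay the topology's configuration so
// topology settings win. Not actual Storm code.
class ConfMerge {
    static Map<String, Object> merged(Map<String, Object> clusterConf,
                                      Map<String, Object> topologyConf) {
        Map<String, Object> conf = new HashMap<>(clusterConf);
        conf.putAll(topologyConf); // topology values override cluster defaults
        return conf;
    }
}
```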
execute
public final void execute(Tuple tuple)
Description copied from interface: IBolt
Process a single tuple of input. The Tuple object contains metadata about which component/stream/task it came from. The values of the Tuple can be accessed using Tuple#getValue. The IBolt does not have to process the Tuple immediately; it is perfectly fine to hang onto a tuple and process it later (for instance, to do an aggregation or join). Tuples should be emitted using the OutputCollector provided through the prepare method. It is required that all input tuples are acked or failed at some point using the OutputCollector. Otherwise, Storm will be unable to determine when tuples coming off the spouts have been completed.
For the common case of acking an input tuple at the end of the execute method, see IBasicBolt which automates this.
- Parameters:
tuple - The input tuple to be processed.
-
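Acking "at some point" rather than immediately is what lets this bolt batch writes: a tuple is safe to ack only once its data has been synced to HDFS, which is the job of the syncPolicy field. The sketch below shows a simplified count-based policy in that spirit; the real Storm SyncPolicy interface takes the tuple and byte offset, so this stand-in that merely counts calls is an assumption for illustration.

```java
// Simplified, self-contained sketch of a count-based sync policy, in the
// spirit of the syncPolicy and offset fields: signal a flush (hsync) to
// HDFS every `count` writes, so at most `count` un-acked tuples need
// replay after a failure. Not the real Storm SyncPolicy signature.
class CountSyncSketch {
    private final int count;
    private int executed = 0;

    CountSyncSketch(int count) { this.count = count; }

    // Returns true when the caller should sync the open writer (and can
    // then safely ack the tuples written since the last sync).
    boolean mark() {
        executed++;
        return executed % count == 0;
    }
}
```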
getComponentConfiguration
public Map<String,Object> getComponentConfiguration()
Description copied from interface: IComponent
Declare configuration specific to this component. Only a subset of the "topology.*" configs can be overridden. The component configuration can be further overridden when constructing the topology using TopologyBuilder.
- Specified by:
getComponentConfiguration in interface IComponent
- Overrides:
getComponentConfiguration in class BaseComponent
-
declareOutputFields
public void declareOutputFields(OutputFieldsDeclarer outputFieldsDeclarer)
Description copied from interface: IComponent
Declare the output schema for all the streams of this topology.
- Parameters:
outputFieldsDeclarer - used to declare output stream ids, output fields, and whether or not each output stream is a direct stream
-
cleanup
public void cleanup()
Description copied from interface: IBolt
Called when an IBolt is going to be shutdown. Storm will make a best-effort attempt to call this if the worker shutdown is orderly. The Config.SUPERVISOR_WORKER_SHUTDOWN_SLEEP_SECS setting controls how long an orderly shutdown is allowed to take. There is no guarantee that cleanup will be called if the shutdown is not orderly, or if it exceeds the time limit. The one context where cleanup is guaranteed to be called is when a topology is killed while running Storm in local mode.
- Specified by:
cleanup in interface IBolt
- Overrides:
cleanup in class BaseRichBolt
-
getBasePathForNextFile
protected org.apache.hadoop.fs.Path getBasePathForNextFile(Tuple tuple)
-
doPrepare
protected abstract void doPrepare(Map<String,Object> conf, TopologyContext topologyContext, OutputCollector collector) throws IOException
- Throws:
IOException
-
makeNewWriter
protected abstract Writer makeNewWriter(org.apache.hadoop.fs.Path path, Tuple tuple) throws IOException
- Throws:
IOException
-
-