
cloudflow.akkastream

AkkaStreamletLogic

abstract class AkkaStreamletLogic extends StreamletLogic[AkkaStreamletContext]

Provides an entry point for defining the behavior of an AkkaStreamlet. Override the run method to implement the specific logic that should be executed once the streamlet is deployed as part of a running Cloudflow application. See RunnableGraphStreamletLogic if you just want to create a RunnableGraph.

The usual process consists of creating Akka Stream Sources for inlets with sourceWithCommittableContext or plainSource, transforming the elements with Akka Stream operators, and writing to outlets through the Sinks provided by committableSink or plainSink, as in the sketch below.
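For example, the following is a minimal sketch of a streamlet whose logic extends AkkaStreamletLogic directly. The Avro record type Data, its id field, and the inlet/outlet names are hypothetical placeholders:

```scala
import cloudflow.akkastream._
import cloudflow.streamlets._
import cloudflow.streamlets.avro._

// `Data` is assumed to be an Avro-generated record with an `id` field (hypothetical).
class IdentityStreamlet extends AkkaStreamlet {
  val in    = AvroInlet[Data]("in")
  val out   = AvroOutlet[Data]("out", _.id.toString)
  val shape = StreamletShape.withInlets(in).withOutlets(out)

  override def createLogic = new AkkaStreamletLogic() {
    override def run(): Unit =
      runGraph(
        sourceWithCommittableContext(in)   // emits records with their committable context
          .map(identity)                   // replace with the real processing
          .to(committableSink(out))        // writes to the outlet, then commits offsets
      )
  }
}
```

If the logic only materializes a single RunnableGraph, extending RunnableGraphStreamletLogic and implementing runnableGraph is more concise.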

Linear Supertypes

StreamletLogic[AkkaStreamletContext], Serializable, AnyRef, Any

Instance Constructors

  1. new AkkaStreamletLogic()(implicit context: AkkaStreamletContext)

Abstract Value Members

  1. abstract def run(): Unit

    This method is called when the streamlet is run. Override this method to define what the specific streamlet logic should do, as sketched below.
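A sketch of a run implementation that materializes a simple at-most-once pass-through using plainSource and plainSink; in, out, and their element type are assumed to be declared on the enclosing streamlet as in the class-level sketch above:

```scala
override def createLogic = new AkkaStreamletLogic() {
  override def run(): Unit =
    runGraph(
      plainSource(in)        // no offsets are committed: at-most-once semantics
        .map(identity)       // replace with the real processing
        .to(plainSink(out))
    )
}
```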

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  6. def committableSink[T]: Sink[(T, Committable), NotUsed]

    Creates a sink, purely for committing the offsets that have been read further upstream. Batches offsets from the contexts that accompany the records, and commits these to Kafka. Uses a default CommitterSettings, which is configured through the default configuration in akka.kafka.committer.

  7. def committableSink[T](committerSettings: CommitterSettings): Sink[(T, Committable), NotUsed]

    Creates a sink, purely for committing the offsets that have been read further upstream. Batches offsets from the contexts that accompany the records, and commits these to Kafka.

  8. def committableSink[T](outlet: CodecOutlet[T], committerSettings: CommitterSettings = defaultCommitterSettings): Sink[(T, Committable), NotUsed]

    Creates a sink for publishing records to the outlet. The records are partitioned according to the partitioner of the outlet. Batches offsets from the contexts that accompany the records, and commits these to Kafka. The outlet specifies a cloudflow.streamlets.Codec that will be used to serialize the records that are written to Kafka. See the at-least-once sketch following this member list.

  9. final def config: Config

    The full configuration for the AkkaStreamlet, containing all deployment-time configuration parameters on top of the normal configuration as loaded through ActorSystem.settings.config.

  10. implicit val context: AkkaStreamletContext

    Returns the StreamletContext in which this StreamletLogic is run. It can only be accessed when the streamlet is run.

    Definition Classes
    AkkaStreamletLogic → StreamletLogic
  11. val defaultCommitterSettings: CommitterSettings

    The CommitterSettings that have been configured from the default configuration akka.kafka.committer.

  12. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  13. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  14. implicit final val executionContext: ExecutionContextExecutor

    The default ExecutionContext of the ActorSystem (the system dispatcher).

  15. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  16. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  17. def getCommittableSink[T](): Sink[Pair[T, Committable], NotUsed]

    Java API

  18. def getCommittableSink[T](committerSettings: CommitterSettings): Sink[Pair[T, Committable], NotUsed]

    Java API

  19. def getCommittableSink[T](outlet: CodecOutlet[T]): Sink[Pair[T, Committable], NotUsed]

    Java API

  20. def getCommittableSink[T](outlet: CodecOutlet[T], committerSettings: CommitterSettings): Sink[Pair[T, Committable], NotUsed]

    Java API

  21. final def getConfig(): Config

    Java API

  22. def getContext(): AkkaStreamletContext

    Java API

    Returns the StreamletContext in which this StreamletLogic is run. It can only be accessed when the streamlet is run.

    Definition Classes
    AkkaStreamletLogic → StreamletLogic
  23. def getDefaultCommitterSettings(): CommitterSettings

    Java API

  24. def getExecutionContext(): ExecutionContextExecutor

    Java API

  25. final def getMountedPath(volumeMount: VolumeMount): Path

    The path mounted for a VolumeMount request from a streamlet. In a clustered deployment, the mounted path will correspond to the requested mount path in the VolumeMount definition. In a local environment, this path will be replaced by a local folder. See the configuration and volume-mount sketch following this member list.

    volumeMount

    the VolumeMount declaration for which to obtain the mounted path.

    returns

    the path where the volume is mounted.

    Exceptions thrown

    cloudflow.streamlets.MountedPathUnavailableException if the path is not available.

  26. def getPlainSink[T](outlet: CodecOutlet[T]): Sink[T, NotUsed]

    Java API

  27. def getPlainSource[T](inlet: CodecInlet[T], resetPosition: ResetPosition): Source[T, NotUsed]

    Java API

  28. def getPlainSource[T](inlet: CodecInlet[T]): Source[T, NotUsed]

    Java API

  29. final def getSinkRef[T](outlet: CodecOutlet[T]): WritableSinkRef[T]

    Java API

  30. def getSourceWithCommittableContext[T](inlet: CodecInlet[T]): SourceWithContext[T, Committable, _]

    Java API

  31. final def getStreamletConfig(): Config

    Java API

  32. final def getStreamletRef(): String

    Java API

  33. def getSystem(): ActorSystem

    Java API

  34. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  35. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  36. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  37. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  38. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  39. def plainSink[T](outlet: CodecOutlet[T]): Sink[T, NotUsed]

    Creates a sink for publishing T records to the outlet. The records are partitioned according to the partitioner of the outlet. The outlet specifies a cloudflow.streamlets.Codec that will be used to serialize the records that are written to Kafka.

  40. def plainSource[T](inlet: CodecInlet[T], resetPosition: ResetPosition = Latest): Source[T, NotUsed]

    The plainSource emits T records (as received through the inlet). It has no support for committing offsets to Kafka. The inlet specifies a cloudflow.streamlets.Codec that will be used to deserialize the records read from Kafka.

  41. final def runGraph[T](graph: RunnableGraph[T]): T

    Java API

    Launch the execution of the graph.

  42. final def runGraph[T](graph: RunnableGraph[T]): T

    Launch the execution of the graph.

  43. final def signalReady(): Boolean

    Signals that the streamlet is ready to process data. signalReady completes the StreamletExecution.ready future. When a streamlet is run using the testkit, a StreamletExecution is returned; StreamletExecution.ready can be used, for instance, to wait for a server streamlet to signal that it is ready to accept requests. See the sinkRef/signalReady sketch following this member list.

  44. final def sinkRef[T](outlet: CodecOutlet[T]): WritableSinkRef[T]

    Creates a SinkRef to write to, for the specified CodecOutlet. The records are partitioned according to the partitioner of the outlet. See the sinkRef/signalReady sketch following this member list.

    outlet

    the specified CodecOutlet

    returns

    the WritableSinkRef created

  45. def sourceWithCommittableContext[T](inlet: CodecInlet[T]): SourceWithCommittableContext[T]

    This source emits T records together with the committable context, making it possible to commit offset positions to Kafka (as received through the inlet). This is useful when at-least-once delivery is desired, as each message will typically be delivered once, but can be duplicated in failure scenarios.

    It is intended to be used with committableSink(outlet: CodecOutlet[T]), which commits the offset positions that accompany the records read from this source after the records have been written to the specified outlet; see the at-least-once sketch following this member list.

    The inlet specifies a cloudflow.streamlets.Codec that is used to deserialize the records read from the underlying transport.

  46. final def streamletConfig: Config

    The subset of configuration specific to a single named instance of a streamlet.

    A cloudflow.streamlets.Streamlet can specify the set of environment- and instance-specific configuration keys it will use during runtime through cloudflow.streamlets.Streamlet.configParameters. Those keys will then be made available through this configuration; see the configuration and volume-mount sketch following this member list.

  47. final def streamletRef: String

    The streamlet reference which identifies the streamlet in the blueprint. It is used in a Streamlet for logging and metrics, referring back to the streamlet instance using a name recognizable by the user.

  48. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  49. implicit final val system: ActorSystem

    The ActorSystem that will run the AkkaStreamlet.

  50. def toString(): String
    Definition Classes
    AnyRef → Any
  51. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  52. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  53. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
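As referenced in the member descriptions above, here is a sketch of an at-least-once pipeline inside run: it reads from an inlet with sourceWithCommittableContext, writes to an outlet, and commits offsets through committableSink with settings derived from defaultCommitterSettings. The in and out ports and their element type are assumed to be declared on the enclosing streamlet:

```scala
override def run(): Unit = {
  // Start from the defaults loaded from akka.kafka.committer and tune the commit batch size.
  val committerSettings = defaultCommitterSettings.withMaxBatch(500)

  runGraph(
    sourceWithCommittableContext(in)               // emits (record, committable context)
      .map(identity)                               // replace with the real processing
      .to(committableSink(out, committerSettings)) // writes records, then commits offsets
  )
}
```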
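A sketch of how the configuration and volume-mount accessors combine inside run. The configuration key records-per-second and the dataMount volume mount are hypothetical; they would have to be declared on the streamlet through configParameters and volumeMounts for this to resolve:

```scala
// Assumed to be declared on the enclosing streamlet (hypothetical):
//   private val dataMount = VolumeMount("data", "/mnt/data", ReadWriteMany)
//   override def volumeMounts = Vector(dataMount)

override def run(): Unit = {
  // Instance-scoped configuration, limited to the keys declared via configParameters.
  val recordsPerSecond = streamletConfig.getInt("records-per-second")

  // Resolves to the requested mount path in a cluster, or to a local folder in local mode.
  val dataDir: java.nio.file.Path = getMountedPath(dataMount)

  system.log.info("Streamlet {} reads from {} at {} records/s", streamletRef, dataDir, recordsPerSecond)
  // ... build and run the graph using these values ...
}
```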
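A sketch of a logic that writes to an outlet imperatively through sinkRef and then signals readiness. The write method on WritableSinkRef (returning a Future of the written element) and the initialElement value are assumptions, not confirmed by this page:

```scala
override def run(): Unit = {
  val writer = sinkRef(out)          // WritableSinkRef[Data], partitioned like the outlet

  // Assumption: `write` hands a single element to the outlet and returns a Future of it.
  writer.write(initialElement)

  // Completes StreamletExecution.ready, so e.g. the testkit can wait until the streamlet is ready.
  signalReady()
}
```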

Deprecated Value Members

  1. def getSinkWithOffsetContext[T](): Sink[Pair[T, CommittableOffset], NotUsed]

    Java API

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use getCommittableSink instead.

  2. def getSinkWithOffsetContext[T](committerSettings: CommitterSettings): Sink[Pair[T, CommittableOffset], NotUsed]

    Java API

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use getCommittableSink instead.

  3. def getSinkWithOffsetContext[T](outlet: CodecOutlet[T], committerSettings: CommitterSettings): Sink[Pair[T, CommittableOffset], NotUsed]

    Java API

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use getCommittableSink instead.

  4. def getSinkWithOffsetContext[T](outlet: CodecOutlet[T]): Sink[Pair[T, CommittableOffset], NotUsed]

    Java API

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use getCommittableSink instead.

  5. def getSourceWithOffsetContext[T](inlet: CodecInlet[T]): SourceWithContext[T, CommittableOffset, _]

    Java API

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.4) Use getSourceWithCommittableContext

  6. def sinkWithOffsetContext[T]: Sink[(T, CommittableOffset), NotUsed]

    Creates a sink, purely for committing the offsets that have been read further upstream. Batches offsets from the contexts that accompany the records, and commits these to Kafka.

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use committableSink instead.

  7. def sinkWithOffsetContext[T](committerSettings: CommitterSettings): Sink[(T, CommittableOffset), NotUsed]

    Creates a sink, purely for committing the offsets that have been read further upstream. Batches offsets from the contexts that accompany the records, and commits these to Kafka.

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use committableSink instead.

  8. def sinkWithOffsetContext[T](outlet: CodecOutlet[T], committerSettings: CommitterSettings = defaultCommitterSettings): Sink[(T, CommittableOffset), NotUsed]

    Creates a sink for publishing records to the outlet. The records are partitioned according to the partitioner of the outlet. Batches offsets from the contexts that accompany the records, and commits these to Kafka. The outlet specifies a cloudflow.streamlets.Codec that will be used to serialize the records that are written to Kafka.

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use committableSink instead.

  9. def sourceWithOffsetContext[T](inlet: CodecInlet[T]): SourceWithOffsetContext[T]

    This source emits T records together with the offset position as context, making it possible to commit offset positions to Kafka (as received through the inlet). This is useful when at-least-once delivery is desired, as each message will typically be delivered once, but can be duplicated in failure scenarios.

    It is intended to be used with sinkWithOffsetContext(outlet: CodecOutlet[T]) or Committer.sinkWithOffsetContext, which both commit the offset positions that accompany the records read from this source. sinkWithOffsetContext(outlet: CodecOutlet[T]) should be used if you want to commit the offset positions after records have been written to the specified outlet. The inlet specifies a cloudflow.streamlets.Codec that will be used to deserialize the records read from Kafka.

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.4) Use sourceWithCommittableContext
