cloudflow.akkastream

AkkaStreamletLogic

abstract class AkkaStreamletLogic extends StreamletLogic[AkkaStreamletContext]

Provides an entry point for defining the behavior of an AkkaStreamlet. Override the run method to implement the specific logic that should be executed once the streamlet is deployed as part of a running Cloudflow application. See RunnableGraphStreamletLogic if you just want to create a RunnableGraph.
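
A minimal sketch of a concrete streamlet (MyData, the Avro schema, and the "in"/"out" port names are placeholders for illustration, not part of this API):

    import cloudflow.akkastream._
    import cloudflow.streamlets._
    import cloudflow.streamlets.avro._

    // MyData stands in for an Avro-generated record type.
    class MyProcessor extends AkkaStreamlet {
      val in    = AvroInlet[MyData]("in")
      val out   = AvroOutlet[MyData]("out")
      def shape = StreamletShape.withInlets(in).withOutlets(out)

      def createLogic = new AkkaStreamletLogic() {
        def run(): Unit =
          // Read records with their committable context, pass them through
          // unchanged, and commit offsets once they are written to the outlet.
          runGraph(
            sourceWithCommittableContext(in).to(committableSink(out))
          )
      }
    }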

Linear Supertypes
StreamletLogic[AkkaStreamletContext], Serializable, Serializable, AnyRef, Any

Instance Constructors

  1. new AkkaStreamletLogic()(implicit context: AkkaStreamletContext)

Abstract Value Members

  1. abstract def run(): Unit

    This method is called when the streamlet is run. Override this method to define what the specific streamlet logic should do.

Concrete Value Members

  1. final def !=(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  2. final def ##(): Int
    Definition Classes
    AnyRef → Any
  3. final def ==(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  4. final def asInstanceOf[T0]: T0
    Definition Classes
    Any
  5. def clone(): AnyRef
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()
  6. def clusterSharding(): ClusterSharding

    Helper method to make it easier to start typed cluster sharding with a classic actor system.
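
    A sketch of how the returned ClusterSharding might be used to initialize a sharded entity (Command, typeKey, and entityBehavior are placeholder names, not part of this API):

      import akka.actor.typed.Behavior
      import akka.actor.typed.scaladsl.Behaviors
      import akka.cluster.sharding.typed.scaladsl.{ ClusterSharding, Entity, EntityTypeKey }

      trait Command // hypothetical entity protocol
      val typeKey = EntityTypeKey[Command]("MyEntity")
      def entityBehavior(entityId: String): Behavior[Command] = Behaviors.empty // placeholder

      val sharding: ClusterSharding = clusterSharding()
      // Start a shard region for the hypothetical entity.
      sharding.init(Entity(typeKey)(ctx => entityBehavior(ctx.entityId)))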

  7. def committableSink[T]: Sink[(T, Committable), NotUsed]

    Creates a sink, purely for committing the offsets that have been read further upstream. Batches offsets from the contexts that accompany the records, and commits these to Kafka. Uses a default CommitterSettings, which is configured through the default configuration in akka.kafka.committer.

  8. def committableSink[T](committerSettings: CommitterSettings): Sink[(T, Committable), NotUsed]

    Creates a sink, purely for committing the offsets that have been read further upstream. Batches offsets from the contexts that accompany the records, and commits these to Kafka.

  9. def committableSink[T](outlet: CodecOutlet[T], committerSettings: CommitterSettings = defaultCommitterSettings): Sink[(T, Committable), NotUsed]

    Creates a sink for publishing records to the outlet. The records are partitioned according to the partitioner of the outlet. Batches offsets from the contexts that accompany the records, and commits these to Kafka. The outlet specifies a cloudflow.streamlets.Codec that will be used to serialize the records that are written to Kafka.
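
    A sketch of pairing this sink with sourceWithCommittableContext, so that offsets are committed only after records have been written to the outlet (in and out as in the class-level example above; the map stage is a placeholder transformation):

      def run(): Unit =
        runGraph(
          sourceWithCommittableContext(in)
            .map(identity) // placeholder transformation
            .to(committableSink(out, defaultCommitterSettings))
        )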

  10. final def config: Config

    The full configuration for the AkkaStreamlet, containing all deployment-time configuration parameters on top of the normal configuration as loaded through ActorSystem.settings.config.

  11. implicit val context: AkkaStreamletContext

    Returns the StreamletContext in which this StreamletLogic is run. It can only be accessed when the streamlet is run.

    Definition Classes
    AkkaStreamletLogic → StreamletLogic
  12. val defaultCommitterSettings: CommitterSettings

    The akka.kafka.CommitterSettings that have been configured from the default configuration akka.kafka.committer.

  13. final def eq(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  14. def equals(arg0: Any): Boolean
    Definition Classes
    AnyRef → Any
  15. implicit final val executionContext: ExecutionContextExecutor

    The default ExecutionContext of the ActorSystem (the system dispatcher).

  16. def finalize(): Unit
    Attributes
    protected[lang]
    Definition Classes
    AnyRef
    Annotations
    @throws( classOf[java.lang.Throwable] )
  17. final def getClass(): Class[_]
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  18. def getCommittableSink[T](): Sink[Pair[T, Committable], NotUsed]

    Java API

  19. def getCommittableSink[T](committerSettings: CommitterSettings): Sink[Pair[T, Committable], NotUsed]

    Java API

  20. def getCommittableSink[T](outlet: CodecOutlet[T]): Sink[Pair[T, Committable], NotUsed]

    Java API

  21. def getCommittableSink[T](outlet: CodecOutlet[T], committerSettings: CommitterSettings): Sink[Pair[T, Committable], NotUsed]

    Java API

  22. final def getConfig(): Config

    Java API

  23. def getContext(): AkkaStreamletContext

    Java API

    Returns the StreamletContext in which this StreamletLogic is run. It can only be accessed when the streamlet is run.

    Definition Classes
    AkkaStreamletLogic → StreamletLogic
  24. def getDefaultCommitterSettings(): CommitterSettings

    Java API

  25. def getExecutionContext(): ExecutionContextExecutor

    Java API

  26. final def getMountedPath(volumeMount: VolumeMount): Path

    The path mounted for a VolumeMount request from a streamlet. In a clustered deployment, the mounted path will correspond to the requested mount path in the VolumeMount definition. In a local environment, this path will be replaced by a local folder.

    volumeMount

    the volumeMount declaration for which we want to obtain the mounted path.

    returns

    the path where the volume is mounted.

    Exceptions thrown

    cloudflow.streamlets.MountedPathUnavailableException when the path is not available.
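
    A sketch of reading a file from a mounted volume (the VolumeMount declaration, its constructor arguments, and the file name are placeholders):

      import java.nio.file.{ Files, Path }
      import cloudflow.streamlets.{ ReadOnlyMany, VolumeMount }

      // Hypothetical read-only mount declared on the streamlet.
      val configData = VolumeMount("config-data", "/mnt/data", ReadOnlyMany)

      val mounted: Path = getMountedPath(configData)
      val settings: String =
        new String(Files.readAllBytes(mounted.resolve("settings.txt")), "UTF-8")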

  27. def getPlainSink[T](outlet: CodecOutlet[T]): Sink[T, NotUsed]

    Java API

  28. def getPlainSource[T](inlet: CodecInlet[T], resetPosition: ResetPosition): Source[T, NotUsed]

    Java API

  29. def getPlainSource[T](inlet: CodecInlet[T]): Source[T, NotUsed]

    Java API

  30. def getShardedPlainSource[T, M, E](inlet: CodecInlet[T], shardEntity: Entity[M, E], resetPosition: ResetPosition = Latest, kafkaTimeout: FiniteDuration = 10.seconds): Source[T, Future[NotUsed]]

    Java API

    Annotations
    @ApiMayChange()
  31. def getShardedPlainSource[T, M, E](inlet: CodecInlet[T], shardEntity: Entity[M, E], kafkaTimeout: FiniteDuration): Source[T, Future[NotUsed]]

    Java API

    Annotations
    @ApiMayChange()
  32. def getShardedSourceWithCommittableContext[T, M, E](inlet: CodecInlet[T], shardEntity: Entity[M, E], kafkaTimeout: FiniteDuration = 10.seconds): SourceWithContext[T, Committable, Future[NotUsed]]

    Java API

    Annotations
    @ApiMayChange()
    See also

    shardedSourceWithCommittableContext

  33. final def getSinkRef[T](outlet: CodecOutlet[T]): WritableSinkRef[T]

    Java API

  34. def getSourceWithCommittableContext[T](inlet: CodecInlet[T]): SourceWithContext[T, Committable, _]

    Java API

  35. final def getStreamletConfig(): Config

    Java API

  36. final def getStreamletRef(): String

    Java API

  37. def getSystem(): ActorSystem

    Java API

  38. def hashCode(): Int
    Definition Classes
    AnyRef → Any
    Annotations
    @native()
  39. final def isInstanceOf[T0]: Boolean
    Definition Classes
    Any
  40. final def ne(arg0: AnyRef): Boolean
    Definition Classes
    AnyRef
  41. final def notify(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  42. final def notifyAll(): Unit
    Definition Classes
    AnyRef
    Annotations
    @native()
  43. def plainSink[T](outlet: CodecOutlet[T]): Sink[T, NotUsed]

    Creates a sink for publishing T records to the outlet. The records are partitioned according to the partitioner of the outlet. The outlet specifies a cloudflow.streamlets.Codec that will be used to serialize the records that are written to Kafka.

  44. def plainSource[T](inlet: CodecInlet[T], resetPosition: ResetPosition = Latest): Source[T, NotUsed]

    The plainSource emits T records (as received through the inlet).

    It has no support for committing offsets to Kafka. The inlet specifies a cloudflow.streamlets.Codec that will be used to deserialize the records read from Kafka.
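
    A sketch of an at-most-once pass-through built from plainSource and plainSink (in and out as in the class-level example above; Earliest is assumed to be the alternative ResetPosition value):

      def run(): Unit =
        runGraph(
          plainSource(in, Earliest).to(plainSink(out))
        )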

  45. final def runGraph[T](graph: RunnableGraph[T]): T

    Java API. Launches the execution of the graph.

  46. final def runGraph[T](graph: RunnableGraph[T]): T

    Launches the execution of the graph.
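
    A sketch that builds the RunnableGraph explicitly before launching it (equivalent to what RunnableGraphStreamletLogic does for you; in and out as in the class-level example above):

      import akka.NotUsed
      import akka.stream.scaladsl.RunnableGraph

      def run(): Unit = {
        val graph: RunnableGraph[NotUsed] = plainSource(in).to(plainSink(out))
        runGraph(graph)
      }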

  47. def shardedPlainSource[T, M, E](inlet: CodecInlet[T], shardEntity: Entity[M, E], resetPosition: ResetPosition = Latest, kafkaTimeout: FiniteDuration = 10.seconds): Source[T, Future[NotUsed]]

    This source is designed to function the same as plainSource while also leveraging Akka Kafka Cluster Sharding for stateful streaming.

    The plainSource emits T records (as received through the inlet).

    It has no support for committing offsets to Kafka.

    It is required to use this source with Akka Cluster. This source will start up Akka Cluster Sharding using the supplied shardEntity and configure the kafka external shard strategy to co-locate Kafka partition consumption with Akka Cluster shards.

    inlet

    the inlet to consume messages from. The inlet specifies a cloudflow.streamlets.Codec that is used to deserialize the records read from the underlying transport.

    shardEntity

    is used to specify the settings for the started shard region.

    kafkaTimeout

    is used to specify the amount of time the message extractor will wait for a response from Kafka.

    Annotations
    @ApiMayChange()
  48. def shardedSourceWithCommittableContext[T, M, E](inlet: CodecInlet[T], shardEntity: Entity[M, E], kafkaTimeout: FiniteDuration = 10.seconds): SourceWithContext[T, CommittableOffset, Future[NotUsed]]

    This source is designed to function the same as sourceWithCommittableContext while also leveraging Akka Kafka Cluster Sharding for stateful streaming.

    This source emits T records together with the committable context, making it possible to commit offset positions to Kafka using committableSink(outlet: CodecOutlet[T]).

    It is required to use this source with Akka Cluster. This source will start up Akka Cluster Sharding using the supplied shardEntity and configure the kafka external shard strategy to co-locate Kafka partition consumption with Akka Cluster shards.

    inlet

    the inlet to consume messages from. The inlet specifies a cloudflow.streamlets.Codec that is used to deserialize the records read from the underlying transport.

    shardEntity

    is used to specify the settings for the started shard region

    kafkaTimeout

    is used to specify the amount of time the message extractor will wait for a response from Kafka.

    Annotations
    @ApiMayChange()
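
    A sketch of wiring this source into an at-least-once pipeline (Command, typeKey, and the empty behavior are placeholders; the streamlet must run as part of an Akka Cluster):

      import scala.concurrent.duration._
      import akka.actor.typed.scaladsl.Behaviors
      import akka.cluster.sharding.typed.scaladsl.{ Entity, EntityTypeKey }

      trait Command // hypothetical entity protocol
      val typeKey = EntityTypeKey[Command]("MyEntity")
      val entity  = Entity(typeKey)(_ => Behaviors.empty[Command]) // placeholder behavior

      def run(): Unit =
        runGraph(
          shardedSourceWithCommittableContext(in, entity, kafkaTimeout = 10.seconds)
            .to(committableSink(out))
        )
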
  49. final def signalReady(): Boolean

    Signals that the streamlet is ready to process data. signalReady completes the cloudflow.streamlets.StreamletExecution#ready future. When a streamlet is run using the testkit, a cloudflow.streamlets.StreamletExecution is returned. cloudflow.streamlets.StreamletExecution#ready can be used for instance to wait for a server streamlet to signal that it is ready to accept requests.
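
    A sketch of signaling readiness from a server streamlet once its HTTP binding completes (the port and route are placeholders; the implicit system and executionContext provided by this class are used):

      import akka.http.scaladsl.Http
      import akka.http.scaladsl.server.Directives._

      val route = complete("OK") // placeholder route

      def run(): Unit =
        Http()
          .newServerAt("0.0.0.0", 8080)
          .bind(route)
          .foreach(_ => signalReady()) // complete the ready future once bound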

  50. final def sinkRef[T](outlet: CodecOutlet[T]): WritableSinkRef[T]

    Creates a SinkRef to write to, for the specified CodecOutlet. The records are partitioned according to the partitioner of the outlet.

    outlet

    the specified CodecOutlet

    returns

    the WritableSinkRef created
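
    A sketch of writing records imperatively through the returned reference (hedged: write returning a Future of the written record is assumed here; record is a placeholder value, and out is as in the class-level example above):

      import scala.concurrent.Future

      val ref: WritableSinkRef[MyData] = sinkRef(out)
      val written: Future[MyData] = ref.write(record) // record: MyData is a placeholder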

  51. def sourceWithCommittableContext[T](inlet: CodecInlet[T]): SourceWithCommittableContext[T]

    This source emits T records together with the committable context, making it possible to commit offset positions to Kafka (as received through the inlet). This is useful when at-least-once delivery is desired: each message is normally delivered once, but in failure cases it can be duplicated.

    It is intended to be used with committableSink(outlet: CodecOutlet[T]), which commits the offset positions that accompany the records that are read from this source after the records have been written to the specified outlet.

    The inlet specifies a cloudflow.streamlets.Codec that is used to deserialize the records read from the underlying transport.

  52. final def streamletConfig: Config

    The subset of configuration specific to a single named instance of a streamlet.

    A cloudflow.streamlets.Streamlet can specify the set of environment- and instance-specific configuration keys it will use during runtime through cloudflow.streamlets.Streamlet#configParameters. Those keys will then be made available through this configuration.
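
    A sketch of declaring a configuration parameter and reading it at runtime (the key, description, and parameter type are placeholders for illustration):

      // Hypothetical parameter, declared on the streamlet:
      //   val MaxRecords = IntegerConfigParameter("max-records", "Maximum records per batch.")
      //   override def configParameters = Vector(MaxRecords)
      val maxRecords: Int = streamletConfig.getInt("max-records")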

  53. final def streamletRef: String

    The streamlet reference which identifies the streamlet in the blueprint. It is used in a Streamlet for logging and metrics, referring back to the streamlet instance using a name recognizable by the user.

  54. final def synchronized[T0](arg0: ⇒ T0): T0
    Definition Classes
    AnyRef
  55. implicit final val system: ActorSystem

    The ActorSystem that will run the AkkaStreamlet.

  56. def toString(): String
    Definition Classes
    AnyRef → Any
  57. final def wait(): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  58. final def wait(arg0: Long, arg1: Int): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... )
  59. final def wait(arg0: Long): Unit
    Definition Classes
    AnyRef
    Annotations
    @throws( ... ) @native()

Deprecated Value Members

  1. def getSinkWithOffsetContext[T](): Sink[Pair[T, CommittableOffset], NotUsed]

    Java API

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use getCommittableSink instead.

  2. def getSinkWithOffsetContext[T](committerSettings: CommitterSettings): Sink[Pair[T, CommittableOffset], NotUsed]

    Java API

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use getCommittableSink instead.

  3. def getSinkWithOffsetContext[T](outlet: CodecOutlet[T], committerSettings: CommitterSettings): Sink[Pair[T, CommittableOffset], NotUsed]

    Java API

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use getCommittableSink instead.

  4. def getSinkWithOffsetContext[T](outlet: CodecOutlet[T]): Sink[Pair[T, CommittableOffset], NotUsed]

    Java API

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use getCommittableSink instead.

  5. def getSourceWithOffsetContext[T](inlet: CodecInlet[T]): SourceWithContext[T, CommittableOffset, _]

    Java API

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.4) Use getSourceWithCommittableContext

  6. def sinkWithOffsetContext[T]: Sink[(T, CommittableOffset), NotUsed]

    Creates a sink, purely for committing the offsets that have been read further upstream. Batches offsets from the contexts that accompany the records, and commits these to Kafka.

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use committableSink instead.

  7. def sinkWithOffsetContext[T](committerSettings: CommitterSettings): Sink[(T, CommittableOffset), NotUsed]

    Creates a sink, purely for committing the offsets that have been read further upstream. Batches offsets from the contexts that accompany the records, and commits these to Kafka.

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use committableSink instead.

  8. def sinkWithOffsetContext[T](outlet: CodecOutlet[T], committerSettings: CommitterSettings = defaultCommitterSettings): Sink[(T, CommittableOffset), NotUsed]

    Creates a sink for publishing records to the outlet. The records are partitioned according to the partitioner of the outlet. Batches offsets from the contexts that accompany the records, and commits these to Kafka. The outlet specifies a cloudflow.streamlets.Codec that will be used to serialize the records that are written to Kafka.

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.1) Use committableSink instead.

  9. def sourceWithOffsetContext[T](inlet: CodecInlet[T]): SourceWithOffsetContext[T]

    This source emits T records together with the offset position as context, making it possible to commit offset positions to Kafka (as received through the inlet). This is useful when at-least-once delivery is desired: each message is normally delivered once, but in failure cases it can be duplicated.

    It is intended to be used with sinkWithOffsetContext(outlet: CodecOutlet[T]) or akka.kafka.scaladsl.Committer#sinkWithOffsetContext, which both commit the offset positions that accompany the records read from this source. sinkWithOffsetContext(outlet: CodecOutlet[T]) should be used if you want to commit the offset positions after records have been written to the specified outlet. The inlet specifies a cloudflow.streamlets.Codec that will be used to deserialize the records read from Kafka.

    Annotations
    @deprecated
    Deprecated

    (Since version 1.3.4) Use sourceWithCommittableContext

Inherited from Serializable

Inherited from Serializable

Inherited from AnyRef

Inherited from Any
