Getting Started


Prerequisites

  • Java 11 or later
  • Gradle or Maven
  • Kotlin 1.3

Building and Running a Floodplain application

Building a Floodplain application is pretty easy. I’m going to use Gradle here, but Maven should also work.

Let’s create a new empty Gradle / Kotlin application:

frank@MacBook-Frank-2:~/git$ mkdir floodplain-demo
frank@MacBook-Frank-2:~/git$ cd floodplain-demo
frank@MacBook-Frank-2:~/git/floodplain-demo$ gradle init

Select type of project to generate:
  1: basic
  2: application
  3: library
  4: Gradle plugin
Enter selection (default: basic) [1..4] 2

Select implementation language:
  1: C++
  2: Groovy
  3: Java
  4: Kotlin
  5: Swift
Enter selection (default: Java) [1..5] 4

Select build script DSL:
  1: Groovy
  2: Kotlin
Enter selection (default: Kotlin) [1..2] 1

Project name (default: floodplain-demo):
Source package (default: floodplain.demo): io.floodplain.demo

2 actionable tasks: 2 executed

Now, let’s test if the build compiles:

frank@MacBook-Frank-2:~/git/floodplain-demo$ ls
build.gradle	gradle		gradlew		gradlew.bat	settings.gradle	src
frank@MacBook-Frank-2:~/git/floodplain-demo$ gradle build

8 actionable tasks: 8 executed

Now, add the Floodplain dependency to the dependencies block of build.gradle:

implementation 'io.floodplain:floodplain-dsl:0.8.28'

(Check Maven Central for the version; there might be a newer one.)
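For context, after adding that line the dependencies block of the generated build.gradle could look roughly like this (a sketch, assuming the Groovy DSL chosen above; the stdlib line is whatever gradle init generated for your Gradle version, and only the io.floodplain line is new):

```groovy
// build.gradle — dependencies block after adding Floodplain.
dependencies {
    // Generated by gradle init; exact artifact may differ per Gradle version.
    implementation 'org.jetbrains.kotlin:kotlin-stdlib-jdk8'
    // The Floodplain DSL dependency added above.
    implementation 'io.floodplain:floodplain-dsl:0.8.28'
}
```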

Now we can replace the sample main function with something like this:

package io.floodplain.demo

import io.floodplain.kotlindsl.*
import java.net.URL

fun main(args: Array<String>) {
    stream {
        // create a config named 'mypostgres', pointing to host "postgres" at port 5432,
        // with username "postgres", password "mysecretpassword", using database "dvdrental"
        val pgConfig = postgresSourceConfig("mypostgres", "postgres", 5432, "postgres", "mysecretpassword", "dvdrental")

        // create a mongodb config named 'mymongo', pointing to uri 'mongodb://mongo', using database "mydatabase"
        val mongoConfig = mongoConfig("mymongo", "mongodb://mongo", "mydatabase")

        // create a source using schema "public" and table "film", backed by the postgres config
        postgresSource("public", "film", pgConfig) {
            // for each key, message, and secondary message, log the key
            each { key, _, _ -> println("Key: $key") }
            // ... add more transformers here
            mongoSink("justfilm", "justfilm", mongoConfig)
        }
    }.renderAndStart(URL("http://localhost:8083/connectors"), "localhost:9092")
}

Let’s check whether the compilation works:

frank@MacBook-Frank-2:~/git/floodplain-demo$ gradle build

8 actionable tasks: 6 executed, 2 up-to-date

So this was all pretty easy to set up and compile, but running it is a bit more involved, as we need quite a few things:

  • A source database. For now only PostgreSQL is supported, but any database supported by Debezium should be easy to implement. The postgresSourceConfig should point to that database.
  • A Kafka cluster. The last parameter of the renderAndStart method points to it.
  • A Zookeeper cluster (needed by Kafka). In the future Kafka should be able to run without Zookeeper, but for now we need it.
  • A destination database. For now I only support MongoDB, but any Kafka Connect sink should be easy to implement.
  • A Kafka Connect instance, used in the renderAndStart method. This should contain all connector logic for sources and sinks. For each source or sink configuration, the Floodplain instance will POST a JSON object to that URL, describing how data should flow into or out of Kafka.
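To give an impression of what such a POST looks like: a Debezium PostgreSQL source connector is usually registered with Kafka Connect via a JSON body along these lines. This is a sketch using Debezium’s standard property names and the pgConfig values from the example above; the exact payload Floodplain generates may differ.

```json
{
  "name": "mypostgres",
  "config": {
    "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
    "database.hostname": "postgres",
    "database.port": "5432",
    "database.user": "postgres",
    "database.password": "mysecretpassword",
    "database.dbname": "dvdrental"
  }
}
```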

That is a lot of infrastructure just to show how it works, so there is a demo setup that starts all of it from one single docker-compose file, along with some example data about a fictional DVD rental company. Besides the databases, it also contains a simple Kotlin application that inserts a random payment record every few seconds, to simulate realistically changing data.
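In spirit, that payment generator is just a loop that builds a random payment and inserts it into Postgres. A minimal sketch, where the Payment fields and value ranges are illustrative rather than the demo’s actual schema:

```kotlin
import java.time.Instant
import kotlin.random.Random

// Illustrative payment record; the demo's real 'payment' table has its own schema.
data class Payment(val customerId: Int, val amount: Double, val paymentDate: Instant)

// Build a random payment. Ranges are made up for this sketch.
fun randomPayment(): Payment =
    Payment(
        customerId = Random.nextInt(1, 600),
        amount = Random.nextInt(99, 999) / 100.0, // 0.99 .. 9.98
        paymentDate = Instant.now()
    )

fun main() {
    // The real demo would run this in a loop, executing an SQL INSERT
    // against Postgres every few seconds; here we just print one record.
    val p = randomPayment()
    println("Inserting payment: $p")
}
```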

To make this work, clone the demo setup:

git clone

Enter the folder:

cd floodplain-demo-setup

And start the cluster. (The docker-compose rm -f is appended so that all containers are deleted after the cluster is stopped; for demos it is easier to start with a clean slate every time.)

docker-compose up && docker-compose rm -f

When this is running, we should be able to run our definition:

gradle run

Now we should see our application start. You should see 1000 printlns of the keys in the Postgres table, and when you check the MongoDB database, you should find an up-to-date collection with the same contents as the ‘film’ table in Postgres.

Whenever you change Postgres, MongoDB should change too, as long as Floodplain keeps running.

Last modified May 13, 2020: moving stuff around (7b54cd6)