Transaction Management and Kotlin

While working on personal projects, I enjoy exploring new paradigms and design patterns. In a recent project I needed an interface to support transaction management; however, I wasn’t using a standard Spring application. Spring provides the @Transactional annotation, which allows easy rollback and transaction isolation for web requests. While exploring alternatives, I discovered a somewhat novel use for Kotlin scoped functions.

Kotlin scoped functions [docs] provide the ability to write a first-class function which executes in the context of a provided object. Scoped functions were new to me, so I will take some time to explain how they can be used. If you are already well versed in Kotlin and scoped functions, skip to the next section.
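
For a quick taste, the standard library’s apply is one such scope function: the lambda you pass runs with the receiver object as this, and the object itself is returned.

Kotlin
// apply runs the lambda with the StringBuilder as `this` and returns it,
// so members like append can be called without naming the object.
val greeting = StringBuilder().apply {
    append("Hello, ")
    append("scope functions")
}.toString()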

If you are familiar with Java applications, Kotlin’s scoped functions are a bit like creating an anonymous extension class which provides a new method. These scoped functions have access to all of the base class’s fields provided they aren’t private (much as a child class would). In Kotlin packages, scoped functions are often used to create “DSL” configurations, as they allow the inner code to access fields and methods which don’t appear to be defined in the enclosing scope. For example, consider the following builder pattern in Java:

Java
class DataClass {
    private final int id;
    private final String name;
    DataClass(int id, String name) {
        this.id = id;
        this.name = name;
    }
    static class DataClassBuilder {
        private int id;
        private String name;
        public DataClassBuilder id(int id) {
            this.id = id;
            return this;
        }
        public DataClassBuilder name(String name) {
            this.name = name;
            return this;
        }
        public DataClass build() {
            return new DataClass(id, name);
        }
    }
}

You could then use this builder by chaining calls to modify the builder object before building the result:

Java
var data = new DataClass.DataClassBuilder()
        .id(10)
        .name("Test Data")
        .build();

In Kotlin, you could use a scoped function to accomplish the same thing without needing to chain method calls or write significant boilerplate for each field that needs to be added. The following Kotlin sets up our TestData class and builder function.

Kotlin
class TestData {
    var id: Int? = null
    var name: String? = null
}

// The block parameter is a function literal with receiver: inside the
// lambda, `this` refers to the TestData instance being built.
fun buildTestData(block: TestData.() -> Unit): TestData {
    val testData = TestData()
    testData.block()
    return testData
}

To use this builder, we can now provide a function to buildTestData which will implicitly have access to the internal values of the object being built.

Kotlin
val testData = buildTestData {
    id = 10
    name = "Test Data"
}

This example is a bit contrived, as Kotlin already has named and default arguments which make builders largely unnecessary; however, it shows how scoped functions work and the access they provide.
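
For comparison, here is roughly how the same thing looks when TestData is rewritten as a data class with default values (a quick sketch, not code from the project):

Kotlin
// Default values plus named arguments cover most builder use cases directly.
data class TestData(
    var id: Int? = null,
    var name: String? = null,
)

val data = TestData(id = 10, name = "Test Data")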


Now that we have the basics of scoped functions sorted out, let’s see how we can apply them to transaction management. When writing a multi-layered application, it is often nice to divide the access methods for different SQL tables into separate classes. Some libraries can generate these classes automatically based on the shape of the underlying table; however, I prefer to “roll my own” when it comes to these Data Access Objects, or DAOs.

In the project I was working on, this is exactly what I chose to do. I had four tables which I needed to perform transactions across: grids, columns, rows, and cells. I wanted the ability to execute read and write transactions across DAOs without forcing a single DAO to own all the logic required for a given use case. I came up with the following rough syntax for the interface I wanted:

Kotlin
val grid = read { gridDao.getGrid(gridId) }
val rows = read {
    val columns = columnDao.getColumns(gridId)
    rowDao.getRows(gridId, columns)
}
grid.rowCount++
val newRow = Row(...)
write {
    gridDao.saveGrid(grid)
    rowDao.addRow(grid, newRow)
}

This would allow me to retrieve the data I needed and return it out of the read transaction, while also allowing for multiple consistent writes. The implementation I came up with involved mutable DAO connections mixed with scoped functions. For my application, I intended to leverage read replica databases whenever possible, which is why the reader connection is treated differently from the writer connection. For transactions which need consistency between reads and writes, I found I could simply place all of the required logic in a single “write” block.

Kotlin
// Holds the mutable connection along with the DAOs that read it at call time.
open class GridRepositoryInstance(
    var connection: Connection? = null,
) {
    val gridDAO = GridDAO(this)
    val gridColumnDAO = GridColumnDAO(this)
    val gridRowDAO = GridRowDAO(this)
    val gridCellDAO = GridCellDAO(this)
}

class GridRepository(
    private val connectionManager: ConnectionManager,
    private val gridRepoInst: GridRepositoryInstance = GridRepositoryInstance()
) {
    fun <T> write(block: GridRepositoryInstance.() -> T): T {
        return connectionManager.writeConnection().use { connection ->
            connection.autoCommit = false
            try {
                val result = gridRepoInst.run {
                    this@run.connection = connection
                    block()
                }
                connection.commit()
                result
            } catch (e: Exception) {
                // Roll back explicitly so a failing block never half-commits.
                connection.rollback()
                throw e
            }
        }
    }

    fun <T> read(block: GridRepositoryInstance.() -> T): T {
        return connectionManager.readConnection().use { connection ->
            connection.autoCommit = true

            gridRepoInst.run {
                this@run.connection = connection
                block()
            }
        }
    }
}
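
The DAO and connection manager classes aren’t shown here, and their exact shape isn’t important to the pattern. As a rough sketch of what this design assumes (the Grid type, SQL, and method bodies are illustrative, not the project’s actual code), each DAO keeps a reference to the repository instance and reads whatever connection is currently set on it:

Kotlin
import java.sql.Connection

// Assumed shape of the connection manager referenced by GridRepository.
interface ConnectionManager {
    fun readConnection(): Connection
    fun writeConnection(): Connection
}

// Minimal data holder used in these sketches.
data class Grid(val id: Long, var rowCount: Int)

// Illustrative DAO: it pulls the current connection off the repository
// instance at call time rather than holding a connection itself.
class GridDAO(private val repo: GridRepositoryInstance) {
    fun getGrid(gridId: Long): Grid {
        val connection = repo.connection
            ?: error("No active transaction; call read { } or write { }")
        connection.prepareStatement("SELECT id, row_count FROM grids WHERE id = ?").use { stmt ->
            stmt.setLong(1, gridId)
            stmt.executeQuery().use { rs ->
                check(rs.next()) { "Grid $gridId not found" }
                return Grid(id = rs.getLong("id"), rowCount = rs.getInt("row_count"))
            }
        }
    }
}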

This implementation has a few key advantages which I felt made it particularly easy to work with in practice:

  1. DAO objects could remain constant while the underlying connections were mutated between calls. This requires some coordination to make thread safe, but it keeps the messy mutability in the repository instance object, which is mostly boilerplate.
  2. The grid repository instance is passed into the repository object, making it easy to mock and verify calls against (see the sketch below). I found this particularly useful for testing the application’s business logic, since calls to the mocked instance could easily be validated, unlike when mocking connection objects directly.
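
To illustrate that second point, here is roughly how such a test might look with MockK and JUnit. The getGrid method and the Grid type are the illustrative ones from the sketch above, not the project’s actual code:

Kotlin
import io.mockk.every
import io.mockk.mockk
import io.mockk.verify
import java.sql.Connection
import org.junit.jupiter.api.Assertions.assertEquals
import org.junit.jupiter.api.Test

class GridRepositoryTest {
    // Relaxed mocks stand in for the JDBC plumbing the repository touches.
    private val connection = mockk<Connection>(relaxed = true)
    private val connectionManager = mockk<ConnectionManager>()
    private val gridDao = mockk<GridDAO>()
    private val repoInstance = mockk<GridRepositoryInstance>(relaxed = true)

    @Test
    fun `read block delegates to the grid DAO`() {
        every { connectionManager.readConnection() } returns connection
        every { repoInstance.gridDAO } returns gridDao
        every { gridDao.getGrid(42L) } returns Grid(id = 42L, rowCount = 0)

        val repository = GridRepository(connectionManager, repoInstance)
        val grid = repository.read { gridDAO.getGrid(42L) }

        // Calls made inside the read block can be verified on the mocked DAO.
        assertEquals(42L, grid.id)
        verify { gridDao.getGrid(42L) }
    }
}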

As with any implementation, there are also some pitfalls to note.

  1. As noted above, if the GridRepository object is a singleton which gets injected into the business logic controller, the mutability of the connection field creates major problems with parallel requests. In my case, I chose to simply create a new GridRepository object on each request. This incurs some overhead, but I didn’t find it to be a bottleneck.
  2. Unlike Spring’s @Transactional annotation, scoped blocks do not apply to internal method calls. To address this, I made extensive use of request operation packages: classes which perform business logic against their own internal state and then publish the result to be consumed and committed by a higher level in the call stack (see the sketch after this list). Another approach could be to use extension functions on GridRepositoryInstance to give child calls implicit access to the DAO objects, though this could become messy very quickly.
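
To make that first workaround concrete, here is a rough sketch of what I mean by an operation class. The names are illustrative; Row, saveGrid, and addRow are assumed here, following the rough interface syntax earlier in the post:

Kotlin
// Illustrative only: the operation computes changes against plain values
// and hands them back; the caller owns the transaction boundary.
class AddRowOperation(private val grid: Grid, private val newRow: Row) {

    data class Result(val updatedGrid: Grid, val rowToInsert: Row)

    fun execute(): Result {
        // Pure computation: no DAO or connection access happens in here.
        val updatedGrid = grid.copy(rowCount = grid.rowCount + 1)
        return Result(updatedGrid, newRow)
    }
}

// A higher level consumes the result and commits everything in one write block.
fun addRow(repository: GridRepository, gridId: Long, newRow: Row) {
    val grid = repository.read { gridDAO.getGrid(gridId) }
    val result = AddRowOperation(grid, newRow).execute()
    repository.write {
        gridDAO.saveGrid(result.updatedGrid)
        gridRowDAO.addRow(result.updatedGrid, result.rowToInsert)
    }
}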

While my experience with Kotlin is still limited, I find the language significantly more permissive than Java out of the box. I suspect this is a double-edged sword, as it turns developers loose to create a hell of their own design; however, I also find it quite satisfying to uncover new, idiomatic, and elegant approaches to classic problems.