Monday, May 21, 2018

Narayana JDBC integration for Tomcat

Narayana implements the JTA specification in Java. It's flexible and easy to integrate into any system that needs transaction capabilities. As proof of Narayana's extensibility, check our quickstarts such as the Spring Boot one or the Camel one.
This blog post is a different integration effort. It talks in detail about the Narayana integration with the Apache Tomcat server.

If you do not care about details then just jump directly to the Narayana quickstarts in this area and use the code there for yourself.


If you want a more detailed understanding, read further.
All the discussed abilities reflect the state of Narayana 5.8.1.Final or later.

Narayana, database resources and JDBC interface

All the proclaimed Narayana capabilities to integrate with other systems come from the requirement that the system conforms to the JTA specification. In particular, JTA expects manageable resources which follow the XA specification. In the case of database resources, the underlying API is defined by the JDBC specification. JDBC gathers the resources manageable by a transaction manager under the package javax.sql, which defines the interfaces used for managing XA capabilities. Probably the most noticeable is XADataSource, which serves as a factory for XAConnection. From there we can obtain an XAResource. The XAResource is the interface that the transaction manager works with; its instance participates in the two-phase commit.

The workflow is to get or create the XADataSource, obtain an XAConnection and then the XAResource, which is enlisted into the global transaction (managed by a transaction manager). Now we can run queries or statements through the XAConnection. When all the business work is finished, the global transaction is commanded to commit, which propagates the commit call to each enlisted XAResource.
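
As an illustration, a minimal sketch of that workflow could look like the following (exception handling is omitted and the H2 XADataSource stands in for any XA capable JDBC driver):

  // any XADataSource implementation works here; H2 is used just as an example
  XADataSource xaDs = new org.h2.jdbcx.JdbcDataSource();
  // ... setters to configure the connection (url, user, password) would go here
  XAConnection xaConn = xaDs.getXAConnection();      // factory for the XA connection
  XAResource xaRes = xaConn.getXAResource();         // the object the transaction manager drives

  TransactionManager tm = com.arjuna.ats.jta.TransactionManager.transactionManager();
  tm.begin();
  tm.getTransaction().enlistResource(xaRes);         // the resource joins the global transaction

  // business work goes through the connection obtained from the XAConnection
  xaConn.getConnection().createStatement().executeUpdate("UPDATE test SET value = 42");

  tm.commit();                                       // 2PC runs over all enlisted XAResources
  xaConn.close();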

It's important to mention that the developer is not expected to do all this (getting XA resources, enlisting them to the transaction manager…). All this handling is the responsibility of the "container", which could be WildFly, Spring or, in our case, Apache Tomcat.
Normally the integration which ensures the database XAResource is enlisted into the transaction is provided by some pooling library. By the term pooling library we mean code that manages a connection pool and is capable of enlisting the database resource into the transaction.

At a high level, the integration parts are

  • the Apache Tomcat container
  • Narayana library
  • jdbc pooling library

In this article we will talk about Narayana JDBC transactional driver, Apache Commons DBCP and IronJacamar.

Narayana configuration with Tomcat

After this brief overview of the integration requirements, let's elaborate on the common settings needed for any integration approach you choose.
Be aware that each library needs a slightly different configuration, and IronJacamar in particular is specific.

JDBC pooling libraries integration

Narayana provides the integration code in the maven module tomcat-jta. It contains the glue code which integrates Narayana into the world of Tomcat. If you write an application you will need the following:

  • provide Narayana itself on the application classpath
  • provide the Narayana tomcat-jta module on the application classpath
  • configure WEB-INF/web.xml with NarayanaJtaServletContextListener which ensures the initialization of the Narayana transaction manager
  • add META-INF/context.xml which sets up Tomcat to use the implementation of the JTA interfaces provided by Narayana
  • configure the database resources to be XA aware and to cooperate with Narayana by setting them up in the META-INF/context.xml

NOTE: if you expect to use IronJacamar these requirements differ a bit!

If we take a look at the structure of the archive to be deployed we would get a picture similar to this one:

  ├── META-INF
  │   └── context.xml
  └── WEB-INF
      ├── classes
      │   ├── application…
      │   └── jbossts-properties.xml
      ├── lib
      │   ├── arjuna-5.8.1.Final.jar
      │   ├── jboss-logging-3.2.1.Final.jar
      │   ├── jboss-transaction-spi-7.6.0.Final.jar
      │   ├── jta-5.8.1.Final.jar
      │   ├── postgresql-9.0-801.jdbc4.jar
      │   └── tomcat-jta-5.8.1.Final.jar
      └── web.xml
  

From this summary let's go over the configuration files one by one to see what needs to be defined there.

Configuration files to be setup for the integration

WEB-INF/web.xml


  <web-app>
    <listener>
        <listener-class>org.jboss.narayana.tomcat.jta.NarayanaJtaServletContextListener</listener-class>
    </listener>
  </web-app>
  

The web.xml needs to define the NarayanaJtaServletContextListener to be loaded during context initialization, to initialize Narayana itself. Narayana needs to get running, for example, the reaper thread that checks transaction timeouts or the thread of the recovery manager.

WEB-INF/classes/jbossts-properties.xml

This file is not compulsory. Its purpose is to configure Narayana itself.
If you don't use your own configuration file then the defaults are in charge. See more in the blog post Narayana periodic recovery of XA transactions or consider the settings provided by the default descriptor jbossts-properties.xml in narayana-jts-idlj.

META-INF/context.xml


  <?xml version="1.0" encoding="UTF-8" standalone="no"?>
  <Context antiJarLocking="true" antiResourceLocking="true">
      <!-- Narayana resources -->
      <Transaction factory="org.jboss.narayana.tomcat.jta.UserTransactionFactory"/>
      <Resource factory="org.jboss.narayana.tomcat.jta.TransactionManagerFactory"
        name="TransactionManager" type="javax.transaction.TransactionManager"/>
      <Resource factory="org.jboss.narayana.tomcat.jta.TransactionSynchronizationRegistryFactory"
        name="TransactionSynchronizationRegistry" type="javax.transaction.TransactionSynchronizationRegistry"/>

      <Resource auth="Container" databaseName="test" description="Data Source"
        factory="org.postgresql.xa.PGXADataSourceFactory" loginTimeout="0"
        name="myDataSource" password="test" portNumber="5432" serverName="localhost"
        type="org.postgresql.xa.PGXADataSource" user="test" username="test"
        uniqueName="myDataSource" url="jdbc:postgresql://localhost:5432/test"/>
      <Resource auth="Container" description="Transactional Data Source"
        factory="org.jboss.narayana.tomcat.jta.TransactionalDataSourceFactory"
        initialSize="10" jmxEnabled="true" logAbandoned="true" maxAge="30000"
        maxIdle="16" maxTotal="4" maxWaitMillis="10000" minIdle="8"
        name="transactionalDataSource" password="test" removeAbandoned="true"
        removeAbandonedTimeout="60" testOnBorrow="true" transactionManager="TransactionManager"
        type="javax.sql.XADataSource" uniqueName="transactionalDataSource"
        username="test" validationQuery="select 1" xaDataSource="myDataSource"/>
  </Context>
  

I divide the explanation of this file into two parts. First are the generic settings – those needed for the transaction manager integration (the top part of the context.xml). The second part is the resource declaration that defines the linking to the JDBC pooling library.

Transaction manager integration settings

Here we define the implementation classes for the JTA API. The implementation is provided by the Narayana transaction manager. Those are the line with the UserTransactionFactory and the resources TransactionManager and TransactionSynchronizationRegistry in the context.xml file.

JDBC pooling library settings

We aim to define database resources that can be used in the application. That's how you typically get the connection, with code like DataSource ds = InitialContext.doLookup("java:comp/env/transactionalDataSource"), and eventually execute an SQL statement.
In the example we define a PostgreSQL datasource with the information needed to create a new XA connection (we provide the host and port, credentials etc.).
The second resource is the definition of the JDBC pooling library which utilizes the PostgreSQL one and provides the XA capabilities. It roughly means putting the PostgreSQL connection into the managed pool and enlisting the work under an active transaction.
Thus we have two resources defined here. One is non-managed (the PostgreSQL one) and the second manages the first one to make working with the resources easy. For the developer the most important thing is to use the managed one in the application, namely the transactionalDataSource from our example.
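
For illustration, the application code using these resources could look roughly like this (a sketch assuming the UserTransaction is bound by the <Transaction> element at the standard java:comp/UserTransaction name):

  UserTransaction utx = InitialContext.doLookup("java:comp/UserTransaction");
  DataSource ds = InitialContext.doLookup("java:comp/env/transactionalDataSource");

  utx.begin();
  try (Connection conn = ds.getConnection();
       PreparedStatement ps = conn.prepareStatement("INSERT INTO test VALUES (?, ?)")) {
      // the pooling library enlists the underlying XAResource behind the scenes
      ps.setInt(1, 1);
      ps.setString(2, "Narayana");
      ps.executeUpdate();
      utx.commit();
  } catch (Exception e) {
      utx.rollback();
  }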

A bit about datasource configuration of Apache Tomcat context.xml

Let's take a side step at this place. Before we talk in detail about the supported pooling libraries, let's look a bit closer at the configuration of the Resource in the context.xml from the perspective of the XA connection.

Looking at the Resource definition, these are the attributes which are interesting for us

  <Resource auth="Container" databaseName="test" description="Data Source"
    factory="org.postgresql.xa.PGXADataSourceFactory"
    loginTimeout="0" name="myDataSource" password="test" portNumber="5432" serverName="localhost"
    type="org.postgresql.xa.PGXADataSource" uniqueName="myDataSource"
    url="jdbc:postgresql://localhost:5432/test" user="test" username="test"/>
  

    name
    defines the name the resource is bound to in the container; we can use a JNDI lookup to find it by that name in the application.
    factory
    defines what type we will get as the final created object. The factory which we declare here is a class which implements the interface ObjectFactory and constructs an object from the provided properties.
    If we did not define any factory attribute in the definition then the Tomcat class ResourceFactory would be used (see the default factory constants). The ResourceFactory passes the call to the BasicDataSourceFactory of the dbcp2 library. Here we can see the importance of the type attribute which defines what object type we want to obtain; the factory normally checks whether it's able to provide such a type (usually by a string equality check).
    The next step is the creation of the object itself, where the factory takes each of the properties and tries to apply them.
    In our case we use the PGXADataSourceFactory which utilizes some of the properties to create the XADataSource.
    serverName, portNumber, databaseName, user, password
    are properties used by the object factory class to get a connection to the database.
    Knowing the names of the properties for the particular ObjectFactory is possibly the most important thing when you need to configure your datasource. Here you need to check the setters of the factory implementation.
    In the case of the PGXADataSourceFactory we need to go through the inheritance hierarchy to find that the properties are stored in BaseDataSource. For our case the relevant properties are the user name and the password. From the BaseDataSource we can see the setter for the user name is setUser, thus the property name we look for is user (see the short sketch below).
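
To illustrate the rule, the attributes of the Resource element are effectively turned into setter calls on the created object, roughly like this (a sketch, not the actual factory code):

  PGXADataSource ds = new PGXADataSource();
  ds.setServerName("localhost");   // attribute serverName
  ds.setPortNumber(5432);          // attribute portNumber
  ds.setDatabaseName("test");      // attribute databaseName
  ds.setUser("test");              // attribute user resolves to the setUser setter of BaseDataSource
  ds.setPassword("test");          // attribute password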

After this side step let's take a look at the setup of the Resources in respect of the used pooling library.

Apache Commons DBCP2 library

Quickstart: https://github.com/jbosstm/quickstart/tree/master/dbcp2-and-tomcat

The best integration comes probably with Apache Commons DBCP2 as the library itself is part of the Tomcat distribution (the Tomcat code base uses a fork of the project). The XA integration is provided in Apache Tomcat version 9.0.7 and later, where the bundled dbcp2 package knows how to enlist a resource into an XA transaction.

The integration is similar to what we discuss below for the JDBC transactional driver. You need to configure two resources in context.xml. One is the database datasource (see above) and the other is the wrapper providing the XA capabilities.


  <Resource name="transactionalDataSource" uniqueName="transactionalDataSource"
    auth="Container" type="javax.sql.XADataSource"
    transactionManager="TransactionManager" xaDataSource="h2DataSource"
    factory="org.jboss.narayana.tomcat.jta.TransactionalDataSourceFactory"/>

The integration here is done by using the specific factory which directly depends on classes from the Apache Tomcat org.apache.tomcat.dbcp.dbcp2 package. The factory also ensures the resource is registered with the recovery manager.
The nice feature is that you can use all the DBCP2 configuration parameters for pooling as you would when a BasicDataSource is configured. See the configuration options and their meaning in the Apache Commons documentation.

Summary:

  • Already packed in the Apache Tomcat distribution from version 9.0.7
  • Configure two resources in context.xml. One is the database datasource, the second is the wrapper providing XA capabilities using the dbcp2 pooling capabilities integrated by the TransactionalDataSourceFactory.

Narayana jdbc transactional driver

Quickstart: https://github.com/jbosstm/quickstart/tree/master/transactionaldriver/transactionaldriver-and-tomcat

With this we get back to two other recent articles, about the jdbc transactional driver and about recovery of the transactional driver.
The big advantage of the jdbc transactional driver is its tight integration with Narayana. It's a dependency of the Narayana tomcat-jta module which contains all the integration code needed for Narayana to work in Tomcat. So if you take the tomcat-jta-5.8.1.Final you get the Narayana integration code and the jdbc driver packed in an out-of-the-box working bundle.

Configuration actions

Here we will define two resources in the context.xml file. The first one is the database one.


  <Resource name="h2DataSource" uniqueName="h2Datasource" auth="Container"
    type="org.h2.jdbcx.JdbcDataSource" username="sa" user="sa" password="sa"
    url="jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1" description="H2 Data Source"
    loginTimeout="0" factory="org.h2.jdbcx.JdbcDataSourceFactory"/>

The database one defines the data needed for the preparation of the datasource and the creation of the connection. This datasource is not XA aware on its own here. We need to add one more layer on top, which is the transactional JDBC driver. It wraps the datasource connection with the XA capabilities.


  <Resource name="transactionalDataSource" uniqueName="transactionalDataSource"
    auth="Container" type="javax.sql.DataSource" username="sa" password="sa"
    driverClassName="com.arjuna.ats.jdbc.TransactionalDriver"
    url="jdbc:arjuna:java:comp/env/h2DataSource" description="Transactional Driver Datasource"
    connectionProperties="POOL_CONNECTIONS=false"/>

As we do not define the factory attribute, the default one is used, which is org.apache.tomcat.dbcp.dbcp2.BasicDataSourceFactory. That is fine up to the point when you need some more sophisticated pooling strategy. In this respect the transactional driver does not play well with the default factory and some further integration work would be needed.

This configuration is nice for having the transactionalDataSource available for the transactional work. Unfortunately, it's not all you need to do. What is missing is the configuration of recovery. You need to tell the recovery manager which resource to take care of. You can set this up in jbossts-properties.xml, or, maybe easier, add it to the environment variables of the starting Tomcat, for example by adding the setup to the script $CATALINA_HOME/bin/setenv.sh.
You define it with the property com.arjuna.ats.jta.recovery.XAResourceRecovery.


-Dcom.arjuna.ats.jta.recovery.XAResourceRecovery1=com.arjuna.ats.internal.jdbc.recovery.BasicXARecovery;abs://$(pwd)/src/main/resources/h2recoveryproperties.xml

You can define any number of resources the recovery should be aware of. It's done by adding more numbers at the end of the property name (we use 1 in the example above). The value of the property is the class implementing com.arjuna.ats.jta.recovery.XAResourceRecovery. All the properties provided to the particular implementation are concatenated after the ';' character. In our example it's the path to the xml descriptor h2recoveryproperties.xml.
When the transactional driver is used you need to declare BasicXARecovery as the recovery implementation class, and this class needs the connection properties to be declared in the xml descriptor.


  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
  <properties>
    <entry key="DB_1_DatabaseUser">sa</entry>
    <entry key="DB_1_DatabasePassword">sa</entry>
    <entry key="DB_1_DatabaseDynamicClass"></entry>
    <entry key="DB_1_DatabaseURL">java:comp/env/h2DataSource</entry>
  </properties>

Note: there is an option not to define the two resources under context.xml and to use the env property for the recovery enlistment only. All the configuration properties are then contained in one properties file and the transactional driver dynamic class is used. If interested, a working example is at ochaloup/quickstart-jbosstm/tree/transactional-driver-and-tomcat-dynamic-class.

Summary:

  • Already packed in the tomcat-jta artifact
  • Configure two resources in context.xml. One is the database datasource, the second is the transactional datasource wrapped by the transactional driver.
  • You need to configure recovery with the env variable com.arjuna.ats.jta.recovery.XAResourceRecovery while providing an xml descriptor with the connection parameters

IronJacamar

Quickstart: https://github.com/jbosstm/quickstart/tree/master/jca-and-tomcat

The setup of the IronJacamar integration differs quite a lot from what we've seen so far. IronJacamar implements the whole JCA specification and it's a pretty different beast (not just a jdbc pooling library).

The whole handling and integration is passed to IronJacamar itself.
You don't use the tomcat-jta module at all.
You need to configure all aspects in the IronJacamar xml descriptors – the datasource definition, the transaction configuration, the pooling definition, up to the jndi binding.

The standalone IronJacamar needs to be started with the call org.jboss.jca.embedded.EmbeddedFactory.create().startup(), where you define the descriptors to be used. You can configure it in web.xml as a ServletContextListener.
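
A rough sketch of such a listener could look like the following (the descriptor names and the deploy calls are illustrative only; consult the IronJacamar embedded API and the quickstart for the exact usage):

  public class IronJacamarContextListener implements javax.servlet.ServletContextListener {
      private org.jboss.jca.embedded.Embedded embedded;

      @Override
      public void contextInitialized(javax.servlet.ServletContextEvent sce) {
          try {
              embedded = org.jboss.jca.embedded.EmbeddedFactory.create();
              embedded.startup();
              // deploy the resource adapter and the descriptors (names are illustrative)
              embedded.deploy(getClass().getResource("/jdbc-xa.rar"));
              embedded.deploy(getClass().getResource("/transaction.xml"));
              embedded.deploy(getClass().getResource("/ds.xml"));
          } catch (Throwable t) {
              throw new IllegalStateException("Cannot start embedded IronJacamar", t);
          }
      }

      @Override
      public void contextDestroyed(javax.servlet.ServletContextEvent sce) {
          try {
              embedded.shutdown();
          } catch (Throwable t) {
              // nothing more to do during the context shutdown
          }
      }
  }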

The descriptors to be defined are:

  • jdbc-xa.rar which is the resource adapter provided by IronJacamar itself. It needs to be part of the deployment. It's capable of processing the ds files.
  • ds.xml which defines the connection properties and the jndi name binding
  • transaction.xml which configures the transaction manager instead of using the jbossts-properties.xml.
Check more configuration options in the IronJacamar documentation.

Summary: IronJacamar is started as an embedded system and processes all the handling on its own. The developer needs to provide the xml descriptors to set it up.

Summary

This article provides the details about the configuration of Narayana when used in the Apache Tomcat container. We've seen the three possible libraries to get the integration working – the Narayana JDBC transactional driver, the Apache DBCP2 library and the IronJacamar JCA implementation.
On top of that, the article contains many details about Narayana and Tomcat resource configuration.

If you hesitate over which alternative is the best fit for your project then this overview can help you

JDBC integration library – When to use
  • Apache DBCP2 – The recommended option when you want to obtain Narayana transaction handling in Apache Tomcat. The integration is done in the Narayana resource factory which ensures setting up the datasource and the recovery easily in one step.
  • Narayana transactional jdbc driver – A good fit when you want to have all parts integrated and covered by the Narayana project. It provides a lightweight JDBC pooling layer that could be nice for small projects. Integration requires a little bit more hand work.
  • IronJacamar – To be used when you need the whole JCA functionality running in Apache Tomcat. The benefit of this solution is the battle-tested integration of Narayana and IronJacamar as they are delivered as one pack in the WildFly application server.

Thursday, January 11, 2018

Narayana periodic recovery of XA transactions

Let's talk about transaction recovery with details specific to Narayana.
This blog post is related to JTA transactions. If you configure recovery for JTS you can still find relevant information here, but then you will need to consult the Narayana documentation.

What is the transaction recovery

Transaction recovery is the process needed when an active transaction fails for some reason. It could be a crash of the transaction manager process (the JVM), or the connection to the resource (database) could fail, or any other reason for failure.
The failure of the transaction could happen at various points of the transaction lifetime, and that point defines the state which the in-progress transaction was left in. The state could be just an in-memory state which is left behind, where the transaction manager relies on the resource transaction timeout to release it. But it could also be the transaction state after prepare was called (by a successful prepare call the 2PC transaction confirms that it is capable of finishing the transaction, and moreover it promises to finish the transaction with commit). There has to be a process which finishes such transaction remainders. And that process is the transaction recovery.
Let's review three variants of failures which produce three different transaction states. Their results and the needs of their termination will guide us through the work of the transaction recovery process.

The transaction manager runs a global transaction which includes several transaction branches (to use the term from the XA specification). In our article about 2PC we used (not precisely) the term resource-located transaction instead of transaction branch.
Let's say we have a global transaction containing data insertion to a database plus sending a message to a message broker queue.
We will examine the crash of the transaction manager (the JVM process) where each point represents one of the three example cases. The examples show the timeline of actions and differ in the time when the crash happens.

  1. The global transaction was started and the insertion to the database happened, now the JVM crashes (no message was sent to the queue). In this situation, all the transaction metadata is saved only in memory. After the JVM is restarted the transaction manager has no notion of the existence of the global transaction from the time before.
    But the insertion to the database already happened and the database has some work in progress. There was, however, no promise made by a prepare call to end with commit, and everything was stored only as an in-memory state, thus the transaction manager relies on the database to abort the work itself. Normally that happens when the transaction timeout expires.
  2. The global transaction was started, data was inserted into the database and a message was sent to the queue. The global transaction is asked to commit. The two-phase commit begins – prepare is called on the database (the resource-located transaction). Now the transaction manager (the JVM) crashes. If interleaving data manipulation were permitted then the 2PC commit could fail; the call of prepare is the promise of a successful end, thus prepare causes locks to be taken to prevent other transactions from interleaving.
    The transaction manager is restarted but again no notion of the transaction can be found as all the in-memory state was cleared. And there was nothing saved in the Narayana transaction log so far.
    But the database transaction is in the prepared state and holds locks. On top of that, a transaction in the prepared state can't be rolled back by the transaction timeout and needs to wait for some other party to finish it.
  3. The global transaction was started, data was inserted into the database and a message was sent to the queue. The transaction was asked to commit. The two-phase commit begins – prepare is called on the database and on the message queue too. A record of the success of the prepare phase is saved to the Narayana transaction log as well. Now the transaction manager (JVM) crashes.
    After the transaction manager is restarted there is no in-memory state, but we can observe the record in the Narayana transaction log, and the database and JMS queue resource-located transactions are in the prepared state with locks.

In the latter cases the transaction state survived the JVM crash – once only on the side of the locked records of the database, in the other case a record is present in the transaction log too. In the first case only the in-memory transaction representation was used and the transaction manager is not responsible for finishing it.
The work of finishing the unfinished transactions belongs to the recovery manager. The purpose of the recovery manager is to periodically check the state of the Narayana transaction log and the resource transaction logs (unfinished resource-located transactions – it runs the JTA API call XAResource.recover()).
If an in-doubt transaction is found the recovery manager either rolls it back – for example in the second case – or commits it when the whole prepare phase was originally finished with success, see the third case.

Narayana periodic recovery in details

The periodic recovery is a configurable process. That brings flexibility of usage but makes it necessary to use proper settings if you want to run it.
We recommend checking the Narayana documentation, the chapter Failure Recovery.
The recovery runs periodically (by default every two minutes – the period can be changed by setting the system property RecoveryEnvironmentBean.periodicRecoveryPeriod). When launched it iterates over all registered recovery modules (see the Narayana codebase, com.arjuna.ats.arjuna.recovery.RecoveryModule) and it runs the following sequence: calling the method periodicWorkFirstPass on all recovery modules, waiting for the time defined by RecoveryEnvironmentBean.recoveryBackoffPeriod, then calling the method periodicWorkSecondPass on all recovery modules.
When you want to run standard JTA XA transactions (JTS differs, you can check the config example in the Narayana code base) then you need to configure the XARecoveryModule. The XARecoveryModule then brings into play the need to configure XAResourceOrphanFilters which manage finishing in-doubt transactions available only at the resource side (the second case represents such a scenario).
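
A minimal illustration of that part of jbossts-properties.xml could look like this (the class names are the standard Narayana ones; verify them against the default descriptor linked later in this post):

  <entry key="RecoveryEnvironmentBean.recoveryModuleClassNames">
      com.arjuna.ats.internal.arjuna.recovery.AtomicActionRecoveryModule
      com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule
  </entry>
  <entry key="JTAEnvironmentBean.xaResourceOrphanFilterClassNames">
      com.arjuna.ats.internal.jta.recovery.arjunacore.JTATransactionLogXAResourceOrphanFilter
      com.arjuna.ats.internal.jta.recovery.arjunacore.JTANodeNameXAResourceOrphanFilter
  </entry>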

Narayana periodic recovery configuration

You may ask how all this is configured, right?
The Narayana configuration is held in "beans". The beans contain properties which are retrieved by getter method calls all over the Narayana code. So configuring the Narayana behaviour means redefining values of the demanded bean properties.
Let's check which beans are relevant for setting up the transaction management and recovery for XA transactions. We will use the jdbc transactional driver quickstart as an example.
The relevant beans are in particular the RecoveryEnvironmentBean and the JTAEnvironmentBean, both referenced below.

To configure the values of the properties you need to define them in one of the following ways
  • via a system property – see the example in the quickstart pom.xml, where the property is passed on the JVM argument line.
  • via the descriptor file jbossts-properties.xml.
    This is usually the main source of configuration in standalone applications using Narayana. You can see the example jbossts-properties.xml and observe that our standalone application is not an exception.
    The descriptor has to be on the classpath for Narayana to be able to access it.
  • via calls of the bean setter methods.
    This is the programmatic approach and it is normally used mainly in managed environments such as application servers, WildFly being one of them.

  • The usage of a system property has precedence over the descriptor jbossts-properties.xml.
  • The usage of a programmatic call of a setter method has precedence over system properties.

The default settings for the narayana-jts-idlj artifact can be seen at https://github.com/jbosstm/narayana/blob/master/ArjunaJTS/narayana-jts-idlj/src/main/resources/jbossts-properties.xml. Those (in combination with the settings inside the particular beans) are the default values used when you don't have any properties file defined.
For more details on configuration check the Narayana.io documentation.


If you want to use the programmatic approach and call the bean setters you need to obtain the bean instance first. That is normally done by calling a static method of a property manager class. There are various of them depending on what you want to configure.
The relevant ones for us are those giving access to the RecoveryEnvironmentBean and the JTAEnvironmentBean (see the sketch below).

We will examine the programmatic approach in the jdbc transactional driver quickstart, inside the recovery utility class, where the property controlling the values of XAResourceRecovery is reset in the code.
If you want to find out what the exact name of the system property or the entry in jbossts-properties.xml should be, the rule of thumb is to take the short class name of the bean, add a dot and append the name of the property.
For example, let's say you want to redefine the time period of the periodic recovery cycle. Then you need to visit the RecoveryEnvironmentBean and find the name of the variable – which is periodicRecoveryPeriod. Using the rule of thumb you will use the name RecoveryEnvironmentBean.periodicRecoveryPeriod to redefine the default 2 minute value.
Some beans use the annotation @PropertyPrefix which offers another way of naming the property for setting it up. In the case of the periodicRecoveryPeriod we can use the system property named com.arjuna.ats.arjuna.recovery.periodicRecoveryPeriod to reset it in the same way.
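
For illustration, the programmatic variant of the same change could be sketched like this (the property manager class com.arjuna.ats.arjuna.common.recoveryPropertyManager gives access to the RecoveryEnvironmentBean):

  import com.arjuna.ats.arjuna.common.recoveryPropertyManager;

  // has to run before the recovery manager is initialized to take effect
  recoveryPropertyManager.getRecoveryEnvironmentBean().setPeriodicRecoveryPeriod(60);
  recoveryPropertyManager.getRecoveryEnvironmentBean().setRecoveryBackoffPeriod(5);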

Note: an interesting link could be the settings for the integration with Tomcat which uses the programmatic approach, see NarayanaJtaServletContextListener.

Thinking about XA recovery

I hope you now have a better picture of the recovery setup and how it works.
The XARecoveryModule is responsible for handling the recovery of 2PC XA transactions. The module is responsible for committing unfinished transactions and for handling orphans by running the registered XAResourceOrphanFilters.

As you could see, we configured two RecoveryModules – XARecoveryModule and AtomicActionRecoveryModule – in the jbossts-properties.xml descriptor.
The AtomicActionRecoveryModule is responsible for loading the resource from the object store; if the resource is serializable and saved as a whole in the Narayana transaction log then it can be deserialized and used immediately during recovery.
This is often not the case. When the XAResource is not serializable (which is hard to achieve, for example, for a database where we need a connection to do any work), Narayana offers resource initiated recovery. That requires a class (code plus settings) that can provide XAResources for the recovery purposes. For getting the resource we need a connection (to the database, to the jms broker...). The XARecoveryModule uses objects of two interfaces to get such information (to get the XAResources for recovery).
Those interfaces are XAResourceRecoveryHelper and XAResourceRecovery.

Both interfaces contain a method to retrieve the resources (XAResourceRecoveryHelper.getXAResources(), XAResourceRecovery.getXAResource()). The XARecoveryModule then asks all the received XAResources to find in-doubt transactions (by calling XAResource.recover()), aka resource-located transactions.
The found in-doubt transactions are then paired with transactions in the Narayana transaction log store. If a match is found, XAResource.commit() can be called.
Maybe you wonder why there are two interfaces which are mostly the same – which is kind of true – but their use differs. The XAResourceRecoveryHelper is designed (and only available) to be used in a programmatic way. For adding the helper amongst the other ones you need to call XARecoveryModule.addXAResourceRecoveryHelper(). You can even deregister the helper by calling XARecoveryModule.removeXAResourceRecoveryHelper().
The XAResourceRecovery is configured not directly in the code but via the property com.arjuna.ats.jta.recovery.XAResourceRecovery. This is not viable for dynamic changes as in normal circumstances it's not possible to reset it – even when you try to change the values by calling JTAEnvironmentBean.setXaResourceRecoveryClassNames().

Running the recovery manager

We have explained how to configure the recovery properties but we haven't pointed out one important fact – in a standalone application there is no automatic launch of the recovery manager. You need to start it manually.
The good point is that it's quite easy (if you don't use an ORB) and it's fine to just call


      RecoveryManager manager = RecoveryManager.manager();
      manager.initialize();
This runs an indirect recovery manager (RecoveryManager.INDIRECT_MANAGEMENT) which spawns a thread that periodically runs the recovery process. If you need to run the periodic recovery at times you choose (the periodic timeout value is then not used) you can use direct management and trigger it manually

      RecoveryManager manager = RecoveryManager.manager(RecoveryManager.DIRECT_MANAGEMENT);
      manager.initialize();
      manager.scan();
    
To stop the recovery manager, use the terminate call.

      manager.terminate();
    

Summary

This blog post tried to introduce the process of transaction recovery in Narayana.
The goal was to present the settings necessary for the recovery to work in the expected way for XA transactions and to show how to start the recovery manager in your application.

Recovery of Narayana jdbc transactional driver

The post about the jdbc transactional driver introduced ways how you can start to code with it. The post talks about enlisting JDBC work into the global transaction but it omits the topic of recovery.
And it's here where using the transactional driver brings another benefit, with ready-to-use approaches to set up the recovery. As in the prior article we will work with the jbosstm quickstart transactionaldriver-standalone.

After reading the post about transaction recovery you will know that for the recovery manager to consider any resource that we enlist into the global transaction we have to:

  • either ensure that the resource can be serialized into the Narayana transaction log store (the resource has to be serializable), in which case the recovery manager deserializes the XAResource and uses it directly to get data from it
  • or register a recovery counterpart of the transaction enlistment which is capable of providing the instance of the XAResource to the RecoveryModule;
    in other words we need an implementation of XAResourceRecoveryHelper or XAResourceRecovery (and that's where the transactional driver can help us).

For the configuration of the recovery properties we use the jbossts-properties.xml descriptor in our quickstart. We leave most of the properties with their default values but we still need to concentrate on setting up the recovery.
You can observe that it serves to define recovery modules, orphan filters or xa resource recovery helpers. Those are the important entries for the recovery to work with our transactional JDBC driver.
For a better understanding of what was set I recommend checking the comments there.

JDBC transaction driver and the recovery

In the prior article about the JDBC driver we introduced three ways of providing the connection data to the transaction manager (it then wraps the connection and provides transactionality in the user application). Let's go through the driver enlistment variants and check how the recovery can be configured for them.

XADataSource provided within Narayana JDBC driver properties

This is the variant where the XADataSource is created directly in the code and then the created(!) instance is passed to the jdbc transactional driver.
As the transactional driver receives the resource directly it has no clue how to create such a resource on its own during recovery. We have to help it with our own implementation of the XAResourceRecovery class. That has, of course, to be registered into the environment bean (in the quickstart it's intentionally commented out, as for testing purposes we need to switch between the variants).
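
A sketch of such an XAResourceRecovery implementation could look like this (the H2 connection data is illustrative and exception handling is kept minimal):

  import java.sql.SQLException;
  import javax.transaction.xa.XAResource;

  public class H2XAResourceRecovery implements com.arjuna.ats.jta.recovery.XAResourceRecovery {
      private boolean resourceReturned;

      @Override
      public boolean initialise(String p) throws SQLException {
          // 'p' carries whatever was appended after the ';' in the configuration property
          return true;
      }

      @Override
      public boolean hasMoreResources() {
          // report exactly one resource per recovery pass
          resourceReturned = !resourceReturned;
          return resourceReturned;
      }

      @Override
      public XAResource getXAResource() throws SQLException {
          org.h2.jdbcx.JdbcDataSource ds = new org.h2.jdbcx.JdbcDataSource();
          ds.setURL("jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1");
          ds.setUser("sa");
          ds.setPassword("sa");
          return ds.getXAConnection().getXAResource();
      }
  }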



XADataSource bound to JNDI

This variant binds the XADataSource to a JNDI name. The recovery can look up the provided jndi name and use it to create an XAConnection to find the in-doubt transactions on the database side.
The point here is to pass the information about the jndi name to the recovery process so it knows what to search for.
The jdbc driver uses an xml descriptor for that purpose in two variants (the JDBCXARecovery and BasicXARecovery classes mentioned in the summary below).


In fact, there is no big difference between those two variants and you can use whichever fits you better. In both versions you provide the JNDI name to be looked up.

XADataSource connection data provided in properties file

The last variant uses a properties file where the same connection information is used already during resource enlistment. The same property file is then automatically used for recovery. You don't need to set any property manually. The recovery is set up automatically because the location of the properties file is serialized into the object store and then loaded during recovery.


In this case you configured the PropertyFileDynamicClass providing the datasource credentials for the transaction manager and for the recovery too. If you would like to extend the behaviour you can implement your own DynamicClass (please consult the codebase). For the recovery to work automatically you need to work with the RecoverableXAConnection.


Summary

There are currently three approaches available for setting up the recovery of the jdbc transactional driver:
creation of your own XAResourceRecovery/XAResourceRecoveryHelper, which is feasible if you want to control the creation of the datasource and the jdbc connection on your own; using one of the prepared XAResourceRecovery classes – either JDBCXARecovery or BasicXARecovery – where you provide an xml file specifying the JNDI name of the datasource; or using the properties file which defines the credentials for the connection and for the recovery too.

Wednesday, December 27, 2017

Narayana jdbc transactional driver

The purpose of this blog post is to summarize the ways (as of today) how to set up and use the Narayana JDBC driver in your standalone application. The text is divided into two parts. Here in the first we show creating a managed connection, while in the second we talk a bit about the settings for recovery.

Transactional aware JDBC connections

For working with multiple database connections in a transactionally reliable way
you need either to hack on the transaction handling yourself (totally not recommended, a good knowledge of the XA specification is necessary) or use a transaction manager to do that for you.

The word multiple is important here: if you want to run only a single database connection you are fine with a local JDBC transaction (Connection.setAutoCommit(false), https://docs.oracle.com/javase/7/docs/api/java/sql/Connection.html#setAutoCommit(boolean)). But if you want to manage multiple JDBC connections transactionally (different databases, running under different database accounts...) then you need a transaction manager.
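
For completeness, the single-connection local transaction mentioned above is plain JDBC, roughly like this (the connection url and the credentials are placeholders):

try (Connection conn = DriverManager.getConnection("jdbc:h2:mem:testdb", "sa", "sa")) {
    conn.setAutoCommit(false);   // start a local (non-XA) transaction on this one connection
    try (PreparedStatement ps = conn.prepareStatement("INSERT INTO TEST values (?, ?)")) {
        ps.setInt(1, 1);
        ps.setString(2, "Narayana");
        ps.executeUpdate();
        conn.commit();           // local commit covers only this single connection
    } catch (SQLException e) {
        conn.rollback();
    }
}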

If you use a transaction manager for that purpose (expecting a standalone application) you need to: initialize the transaction manager, begin a transaction and enlist each resource with the transaction, so that the transaction manager knows which participants are expected to be finished with ACID guarantees.

If you use Narayana you have another option – the Narayana JDBC transactional driver (https://github.com/jbosstm/narayana/blob/master/ArjunaJTA/jdbc/classes/com/arjuna/ats/jdbc/TransactionalDriver.java).
The JDBC transactional driver makes your life easier as you can configure the driver once and then get a managed connection, wrapped by the transactional functionality, and you don't need to care about it anymore.

Managing the transaction enlistment on your own

The first code example shows how to use the transaction manager to manage a JDBC connection. There is no JDBC transactional driver involved and you manage the enlistment manually.

Only one resource is enlisted in this code example, which in fact does not require a transaction manager at all. The main purpose of using a transaction manager is managing two or more distinct resources. Another reason could be the offered JTA API which can make the code clearer.
// here we get instance of Narayana transaction manager
TransactionManager tm = com.arjuna.ats.jta.TransactionManager.transactionManager();
// and beginning the global transaction for XAResources could be enlisted into
tm.begin();

// getting the H2 datasource which then provides the XAResource
XADataSource dsXA = new org.h2.jdbcx.JdbcDataSource();
// the xa datasource has to be filled with the information needed to connect, using setters
dsXA.set...();
// from XADataSource getting XAConnection and then the XAResource
XAConnection xaConn = dsXA.getXAConnection();
// transaction manager to be provided with the XAResource
tm.getTransaction().enlistResource(xaConn.getXAResource());

// the business logic in database in the transaction happening here
PreparedStatement ps = xaConn.getConnection().prepareStatement("INSERT INTO TEST values (?, ?)");
ps.setInt(1, 1);
ps.setString(2, "Narayana");

// statement executed and transaction is committed or rolled-back depending of the result
try {
  ps.executeUpdate();
  tm.commit();
} catch (Exception e) {
  tm.rollback();
} finally {
  xaConn.close(); // omitting try-catch block
}
You can compare the approach of using a JDBC local transaction, which does not ensure reliable management of multiple resources, with the use of Narayana with manual enlistment in the Narayana quickstarts.


Managing the transaction enlistment with use of the JDBC transactional driver

How will the same task look with the transactional driver?
First we need an XADataSource to be provided to the transactional driver. Next we request the connection from the transactional driver (and not directly from the XADataSource). As we request the connection from the driver, it has the chance to wrap the XADataSource and control the connection. Thus the resource is automatically enlisted into an active transaction. That way you don't need to think of getting the XAConnection and XADataSource and enlisting them into the transaction, and you don't need to pass the transaction and the connection from method to method as a parameter – you can simply ask the transactional driver for the connection.

If you want to use the transactional driver in your project you will need to add two dependencies to the configuration of your dependency management system. Here is what to use with maven
<dependency>
  <groupId>org.jboss.narayana.jta</groupId>
  <artifactId>narayana-jta</artifactId>
  <version>5.7.2.Final</version>
</dependency>
<dependency>
  <groupId>org.jboss.narayana.jta</groupId>
  <artifactId>jdbc</artifactId>
  <version>5.7.2.Final</version>
</dependency>
There are basically three possibilities how to provide the XADataSource to the transactional driver. Let's go through them.

XADataSource provided within Narayana JDBC driver properties

First you can directly provide an instance of the XADataSource. This single instance is then used for handing out connections. This setup is done with the property
TransactionalDriver.XADataSource. That key is filled with the instance of the XADataSource.

You can examine this example in Narayana quickstart DriverProvidedXADataSource (two resources in use) and check the functionality in the test.

// XADataSource initialized to be passed to transactional driver
XADataSource dsXA = new org.h2.jdbcx.JdbcDataSource();

dsXA.set...();

// the datasource is put as property with the special name
Properties connProperties = new Properties();
connProperties.put(TransactionalDriver.XADataSource, dsXA);

// getting the connection when the 'url' is the 'jdbc:arjuna' prefix which determines
// that the Narayana driver is to be used
Connection conn = DriverManager.getConnection(TransactionalDriver.arjunaDriver, connProperties);

// starting transaction
TransactionManager tm = com.arjuna.ats.jta.TransactionManager.transactionManager();
tm.begin();

// db business logic (sql insert query) preparation
PreparedStatement ps = conn.prepareStatement("INSERT INTO TEST values (?, ?)");
ps.setInt(1, 41);
ps.setString(2, "Narayana");

// execution, committing/rolling-back
try {
  ps.executeUpdate();
  tm.commit();
} catch (Exception e) {
  tm.rollback();
} finally {
  conn.close();
}

XADataSource bound to JNDI

Another possibility is binding the XADataSource to JNDI and providing the JNDI name as part of the url passed to the transactional driver.

You can examine this example in Narayana quickstart DriverIndirectRecoverable and check the functionality in the test.
// the JNDI name has to start with the Narayana transactional driver prefix,
// so it's determined that we want a connection from the transactional driver;
// the suffix (here 'ds') is used as the JNDI name that the XADataSource will be bound to
XADataSource dsXA = new org.h2.jdbcx.JdbcDataSource();
dsXA.set...();

// binding the xa datasource to the JNDI name 'ds'
InitialContext ctx = new InitialContext();
ctx.bind("ds", dsXA);

// passing the JNDI name 'ds' as part of the connection url that we demand
// the first part is narayana transactional driver prefix
String dsJndi = TransactionalDriver.arjunaDriver + "ds";
Connection conn = DriverManager.getConnection(dsJndi, new Properties());

// get transaction driver and start transaction
TransactionManager tm = com.arjuna.ats.jta.TransactionManager.transactionManager();
tm.begin();

// data insertion preparation
PreparedStatement ps = conn.prepareStatement("INSERT INTO TEST values (?, ?)");
ps.setInt(1, 42);
ps.setString(2, "Narayana");

// execute, commit or rollback
try {
  ps.executeUpdate();
  tm.commit();
} catch (Exception e) {
  tm.rollback();
} finally {
  conn.close();
} 

XADataSource connection data provided in properties file


The third option is about the construction of the XADataSource through a dynamic class. Here the part of the url after the arjunaDriver prefix depends on the implementation of the interface com.arjuna.ats.internal.jdbc.DynamicClass.
Currently there is only one provided: com.arjuna.ats.internal.jdbc.drivers.PropertyFileDynamicClass.
It expects that the jdbc url contains the path to a properties file where the connection data for the XADataSource class is provided. In particular the property file defines the name of the class implementing the XADataSource interface. This name is written under the key xaDataSourceClassName. Next you need to provide the connection data (jdbc url, username, password), where each of the properties
is dynamically invoked as a setter on the particular XADataSource instance. The exact name of each property depends on the database and the JDBC driver you use.

You can examine this example in Narayana quickstart DriverDirectRecoverable and check the functionality in the test.

// jdbc url is defined with path to the properties file
// see the ./ds.properties as 'getConnection' url parameter
Properties props = new Properties();
props.put(TransactionalDriver.dynamicClass, PropertyFileDynamicClass.class.getName());
Connection conn = DriverManager.getConnection(TransactionalDriver.arjunaDriver
  + "./ds.properties", props);

// starting transaction
TransactionManager tm = com.arjuna.ats.jta.TransactionManager.transactionManager();
tm.begin();

// data insertion preparation
PreparedStatement ps = conn.prepareStatement("INSERT INTO TEST values (?, ?)");
ps.setInt(1, 43);
ps.setString(2, "Narayana");

// execute, commit or rollback
try {
  ps.executeUpdate();
  tm.commit();
} catch (Exception e) {
  tm.rollback();
} finally {
  conn.close();
}

and the ./ds.properties file could look like this. We use the H2 jdbc driver and if you check the API of the driver you will find setters like setURL, setUser and setPassword which are used to fill the connection data after the XADataSource is initialized by its default constructor.
# implementation of XADataSource
xaDataSourceClassName=org.h2.jdbcx.JdbcDataSource
# properties which will be invoked on dynamically created XADataSource as setters.
# For example there will be call
# JdbcDataSource.setURL("jdbc:h2:mem:test1;DB_CLOSE_DELAY=-1")
URL=jdbc:h2:mem:test1;DB_CLOSE_DELAY=-1
User=sa
Password=sa

In summary


Let's put together properties and their usage. The properties used in the transactional driver could be verified in the code of TransactionalDriver.
  • TransactionalDriver.arjunaDriver (jdbc:arjuna:) is the prefix of the jdbc url which says that the Narayana transactional driver is in demand. The data after this prefix is used as a parameter for later use.
The properties provided at the time of the connection request define how to get the XADataSource implementation.
  • TransactionalDriver.XADataSource, when used, says that an implementation of the XADataSource was provided as a parameter and will be used for handing out connections
  • TransactionalDriver.dynamicClass defines the name of a class implementing the interface com.arjuna.ats.internal.jdbc.DynamicClass which is then used for dynamically creating the XADataSource.
  • TransactionalDriver.userName and TransactionalDriver.password can be used if the particular connection needs to be created with these values. They will be used in the call XADataSource.getXAConnection(username, password).
  • TransactionalDriver.poolConnections defaults to true. The current behavior is simple: a pool is created for each jdbc url, an internal check verifies whether a particular connection is in use, if not then it is handed out, otherwise a new connection is created. The pool capacity is defined by the property TransactionalDriver.maxConnections. When the connection is closed it is returned to the pool and marked as 'not in use'. For some more sophisticated pool management a 3rd party pool management project is recommended.
    If this property is set to false then a new connection is returned each time one is asked for. That way you need to pass this connection around the application.
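
For illustration, switching the pooling off (or limiting the pool size) at connection request time could be sketched like this, using the constants from the TransactionalDriver class (the string values are assumptions based on the description above):

Properties connProperties = new Properties();
connProperties.put(TransactionalDriver.XADataSource, dsXA);       // as in the first variant above
connProperties.put(TransactionalDriver.poolConnections, "false"); // a fresh connection on every request
// or keep pooling enabled and limit the pool size instead
// connProperties.put(TransactionalDriver.maxConnections, "10");

Connection conn = DriverManager.getConnection(TransactionalDriver.arjunaDriver, connProperties);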

Wednesday, December 20, 2017

Narayana LRA: implementation of saga transactions

The Narayana transaction manager implements the saga transactional pattern, naming it LRA which is an abbreviation of Long Running Action.
This saga implementation is the basis for a specification proposed under the Eclipse MicroProfile umbrella. See https://github.com/eclipse/microprofile-sandbox/tree/master/proposals/0009-LRA.

The communication is done over HTTP following REST principles.
The Java EE technologies which LRA builds upon are CDI and JAX-RS.

Saga - what we are talking about

Before we start talking about LRA let's introduce the term saga.
A saga consists of an independent set of operations which all together form one atomic action. The operation can be whatever action you can think of.

Here we take the well known example of booking a flight and subsequent services. The operations are booking a flight, a taxi and a hotel, and these three form the atomic action for which we want all parts to be processed together.




NOTE: From the ACID point of view the saga transaction relaxes the I (isolation) and because of that gains availability; it is a representation of the BASE principle, see http://queue.acm.org/detail.cfm?id=1394128.

The saga principle requires the existence of compensation actions which are invoked in case of failure.


The compensation actions undo the work done by the "business" operations. The sequence in the flight example is to first book the flight. If that succeeds this change is immediately visible to all the other parties trying to book the flight too (here is the relaxation of the ACID isolation).
Next is the taxi booking, followed by the hotel booking. When there is no hotel available in the area for the particular date the whole saga fails and the compensation actions for the prior operations are invoked. The responsibility of the compensation action is to undo what was done – in this case canceling the flight and the taxi booking. How exactly this is done depends on the business logic. It could be updating some database record, a call to some API or sending an email to the taxi operator.

In comparison to an ACID transaction, where the developer makes an SQL insertion to a database or sends a message to a broker and the potential undoing of such an action (rollback) is handled by the transaction manager in the background, here in the saga world the responsibility for the undoing is completely moved to the developer who has to implement the compensation callbacks.

The responsibility of the transaction manager is to gather information about which operations are part of the particular saga (it receives participant enlistments) and to ensure the compensation callback is invoked, even in case of failure (of either the participant or the manager itself).

The Narayana LRA implementation also adds a way to define a complete callback which is invoked when the saga ends successfully. This could be used for confirmation actions, e.g. the customer could be informed by email that the order was processed while passing him the details needed for the payment. Or the database can be enriched with a column saying whether the order was successfully processed, and SQL statements can be created with that taken into account.

In summary, using saga transactions is a design pattern which needs to be built into the foundation of the application architecture.

LRA - Long Running Actions

The Narayana LRA is an implementation of sagas for the HTTP transport based on REST principles.

Narayana LRA is represented by a coordinator (the transaction manager) exposing an HTTP endpoint for incoming remote communication. The coordinator is the core which ensures the LRA saga is processed atomically. Services enlist with the coordinator by calling defined REST endpoints. The coordinator then calls the services back to confirm the saga success or to command them to undo the work with compensate.

The coordinator can be deployed as a separate service or it can be attached to the application too.

For the Narayana implementation it holds that in case the coordinator is packed with the application, the application itself talks to the coordinator with in-memory calls.

Let's explain how the LRA communication works on an example. This is a diagram showing our use case.



We can see the LRA coordinator and 4 services talking to each other in a row synchronously (of course your application can be designed in a different way), and the communication will look the following way
  1. A client makes a call to the first service (Microservice #1)
  2. The Microservice #1 is responsible for starting the LRA (creating the saga). It calls the LRA coordinator on the endpoint starting the LRA. The coordinator announces the LRA identifier in the response.
  3. The Microservice #1 enlists itself with the created LRA by calling the coordinator with the particular LRA identifier and handing over the addresses of the REST endpoints for the compensation (optionally completion) callbacks. Those are the endpoints the coordinator can call back on Microservice #1.
  4. The Microservice #1 takes the LRA identifier and adds it as an HTTP header (long-running-action) to the REST call to the next service – Microservice #2. If Microservice #2 recognizes the LRA header it can enlist itself (by announcing REST endpoints for compensation/completion callbacks) with the coordinator.
  5. On the way back the first service is responsible for finishing the saga by calling close (when finishing with success) on the LRA coordinator with the saga identifier.
  6. Any of the other services could fail the saga by calling cancel on the LRA coordinator, in which case the close won't succeed and reports back an error.

Code examples

If you want to see this example working check out the Red Hat MSA example
enriched with the LRA capabilities.
Detailed installation steps can be found at the page: https://developer.jboss.org/wiki/MSAQuickstartsWithLRAREST-ATOnMinishift

In the next article we will get into the CDI annotations used by LRA and their functionality. Meanwhile you can check out what the WildFly Swarm microservice using LRA described in the example looks like
https://github.com/ochaloup/hola/blob/lra/src/main/java/com/redhat/developers/msa/hola/HolaResource.java#L81

or check other implementations run with Spring, Vert.x and Node.js

Monday, December 18, 2017

Saga implementations comparison

In the previous blog post we have investigated the general motions of the saga pattern and how sagas differ from traditional ACID approach. This article will focus on the current state of applicability of this pattern. We will introduce and compare three frameworks that presently support saga processing, namely Narayana LRA, Axon framework and Eventuate.io. Narayana LRA Narayana Long Running Actions is a specification developed by the Narayana team in the collaboration with the Eclipse MicroProfile initiative. The main focus is to introduce an API for coordinating long running activities with the assurance of the globally consistent outcome and without any locking mechanisms. [https://github.com/eclipse/microprofile-sandbox/tree/master/proposals/0009-LRA] Axon framework Axon framework is Java based framework for building scalable and highly performant applications. Axon is based on the Command Query Responsibility Segregation (CQRS) pattern. The main motion is the event processing which includes the separated Command bus for updates and the Event bus for queries. [http://www.axonframework.org/] Eventuate.io Eventuate is a platform that provides an event-driven programming model that focus on solving distributed data management in microservices architectures. Similarly to the Axon, it is based upon CQRS principles. The framework stores events in the MySQL database and it distributes them through the Apache Kafka platform. [http://eventuate.io] NOTE: CQRS is an architectural pattern that splits the domain model into two separated models - the first one is responsible for updates and the containing business logic while the other is taking care of the reads and providing information for the user. Comparisons Even though all of the above frameworks achieve the same outcome there are several areas where we can examine how the handling of the saga processing differ. Developer friendliness The LRA provides for the developer the traditional coordinator oriented architecture. Individual participants can join the LRA by the HTTP call to the LRA coordinator, each providing methods for saga completion and compensation. Narayana provides a LRA client which makes the REST invocations transparent. In the Axon sagas are implemented as aggregates. The aggregate is a logical group of entities and value objects that are treated as a single unit. Axon uses special type of event listener that allows the developer to associate a property in the events with the current saga so that the framework knows which saga should be invoked. The invocation and compensation are executed by separated event handlers and therefore different events. Not a long ago Eventuate presented a new API called Eventuate Tram which handles the saga processing for the platform. It enables applications to send messages as a part of an database transaction. The platform provides abstractions for messaging in form of named channels, events as subscriptions to domain events and commands for asynchronous communication.

Saga specifications

In the LRA, the way the saga should be performed is specified by the initiator. The initiating service is able to invoke participants on the provided endpoints, which allows the participants to join the LRA context. A participant can specify whether to join, create a new, or ignore the corresponding LRA context by a CDI annotation.

Axon provides a set of annotations that denote the saga class. The @Saga annotation defines the class as a saga aggregate, which allows it to declare saga event handlers. A saga handler can optionally include the name of the property in the incoming event which has been previously associated with the saga. Additionally, Axon provides special annotations to mark the start and the end of the saga.

In Eventuate the developer is able to specify the saga definition, including the participant invocations, compensations and reply handlers. It provides a simple builder which constructs the saga step by step, providing the handlers for command and event processing.

Failure handling

In the LRA, the saga compensating actions are defined as designated endpoints. If the initiating service cancels the LRA, the coordinator is able to call the compensation endpoints of the included participants. In Axon, application developers need to know which events represent a failure in order to provide the correct event listeners. Also, tracking the progress of the execution, as well as of the compensation, is left to the developer. Eventuate registers the compensation handlers in the saga definitions. Participants are allowed to send a special built-in command which indicates a failure and therefore triggers the compensation of the previous actions.

Ordering of invocations

LRA and Eventuate invoke each participant in strict order. This approach expects the actions to be dependent on each prior action, so the compensations are called in reverse order.
NOTE: Narayana LRA allows participants to modify this behavior by returning HTTP 202 Accepted. Axon, on the contrary, does not control the ordering of invocations, so the programmer is allowed to send the compensation commands in any desired ordering.

Structure and failure recovery

As Axon and Eventuate are primarily CQRS based frameworks, they require some additional handling for the saga execution. In Axon the saga is still an aggregate, which means that it consumes commands and produces events. This may be an unwanted overhead when the application does not follow the CQRS domain pattern. The same applies to Eventuate Tram, but as the communication is shadowed through the messaging channels it does not force the programmer to follow CQRS. Both platforms record every event and command in the distributed event log, which guarantees saga completion upon a system failure, with eventual consistency.

The LRA, on the other hand, requires only the processing and compensation endpoints to be implemented. Processing is then handled by the LRA coordinator and is made transparent to the end users. Failures of the coordinator can be managed by replication and/or the transaction log. A participant failure is handled by a custom timeout on the saga invocation, after which the coordinator cancels the saga.

Conclusion

We have discussed the main advantages and drawbacks of each platform. As a part of my thesis I have created a basic example in all of the discussed frameworks. It is a simple ordering saga containing requests for shipment and invoice computation invoked on different services. In the next blog post I plan to describe the execution of this saga in every framework to discuss the main distinctions in greater detail.

- https://github.com/xstefank/axon-service
- https://github.com/xstefank/eventuate-service - pure CQRS solution
- https://github.com/xstefank/lra-service
- https://github.com/xstefank/eventuate-sagas - Eventuate Tram

Wednesday, November 8, 2017

Software Transactional Memory for the Cloud at µCon London 2017

I have just returned from µCon London 2017: The Microservices Conference. I was down there talking about our Software Transactional Memory implementation which our regular readers may recall was first introduced by Mark back in 2013. In particular I wanted to show how it can be used with the actor model features of Vert.x and the nice scaling features of OpenShift.

I have put the presentation slides in the narayana git repo so feel free to go and take a look to see what you missed out on. You can also access the source code I used for the practical demonstration of the technology from the same repository. The abstract for the talk gives a good overview of the topics discussed:

"Vert.x is the leading JVM-based stack for developing asynchronous, event-driven applications. Traditional ACID transactions, especially distributed transactions, are typically difficult to use in such an environment due to their blocking nature. However, the transactional actor model, which pre-dates Java, has been successful in a number of areas over the years. In this talk you will learn how they have been integrating this model using Narayana transactions, Software Transactional Memory and Vert.x. Michael will go through an example and show how Vert.x developers can now utilise volatile, persistent and nested transactions in their applications, as well as what this might mean for enterprise deployments."

The demo that this abstract refers to is fairly trivial, but it does serve to draw out some of the impressive benefits of combining actors with STM. Here's how I introduced the technology:

  1. First I showed how to write a simple Vert.x flight booking application that maintains an integer count of the number of bookings.
  2. Then I ran the application using multiple Vert.x verticle instances.
  3. Next I demonstrated concurrency issues using parallel workloads.
  4. Finally I fixed the concurrency issue by showing how to add volatile STM support to the application.
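To give a flavour of what the last step involves, here is a minimal sketch of volatile STM with Narayana: a @Transactional interface, an implementation whose methods carry read/write lock annotations, and an AtomicAction wrapped around the update. The class and method names are illustrative and are not taken from the demo repository.

    import org.jboss.stm.Container;
    import org.jboss.stm.annotations.ReadLock;
    import org.jboss.stm.annotations.Transactional;
    import org.jboss.stm.annotations.WriteLock;

    import com.arjuna.ats.arjuna.AtomicAction;

    public class StmBookingSketch {

        @Transactional
        public interface Bookings {
            void book();
            int total();
        }

        public static class BookingsImpl implements Bookings {
            private int count = 0;

            @WriteLock
            public void book() { count++; }

            @ReadLock
            public int total() { return count; }
        }

        public static void main(String[] args) {
            Container<Bookings> container = new Container<Bookings>();  // volatile (in-memory) STM container
            Bookings bookings = container.create(new BookingsImpl());   // transactional proxy around the state

            AtomicAction txn = new AtomicAction();
            txn.begin();
            bookings.book();   // the write lock isolates this update from concurrent callers
            txn.commit();      // txn.abort() would discard the in-memory change instead

            System.out.println("bookings = " + bookings.total());
        }
    }

Inside the AtomicAction the update is isolated from concurrent verticles and is simply discarded if the action aborts, which is exactly the property that fixes the racy booking counter.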

I put both the without-STM and the with-STM versions of the demo code in the same repository as the presentation slides.

I then showed how to deploy the application to the cloud using a single-node OpenShift cluster running on my laptop using Red Hat's minishift solution. For this I used "persistent" STM, which allows state to be shared between JVMs, and this gave me the opportunity to show some of the elastic scaling features of STM, Vert.x and OpenShift.
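For the persistent, shared flavour the main difference is how the STM Container is constructed. The fragment below is a sketch assuming Narayana's Container(String, TYPE, MODEL) constructor; the container name is illustrative, and the extra wiring a second JVM needs to attach to an existing object (obtaining and passing its identifier) is omitted.

    // Persistent, shared STM container: state survives the JVM and can be
    // shared between JVMs, e.g. between pods scaled out on OpenShift.
    Container<Bookings> shared =
            new Container<Bookings>("bookings", Container.TYPE.PERSISTENT, Container.MODEL.SHARED);
    Bookings bookings = shared.create(new BookingsImpl());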