Friday, October 19, 2018

Narayana integration with Agroal connection pool

Project Agroal defines itself as “The natural database connection pool”. And that’s exactly what it is.

It was developed by Luis Barreiro, who works on WildFly as a performance engineer. This prefigures what you can expect – a well-performing database connection pool. As Agroal comes from the portfolio of WildFly projects, it offers smooth integration with WildFly and with Narayana too.

In previous posts we checked other connection pools that you can use with Narayana – either the transactional driver provided by Narayana or DBCP2, which is nicely integrated with Narayana in Apache Tomcat. Another option is IronJacamar, which has a long-term brotherhood with Narayana. All those options are nicely documented in our quickstarts.

Agroal now joins this party and you should consider checking it out, whether you run a standalone application with Narayana or you run on WildFly. Let’s take a look at how you can use it in a standalone application first.

Agroal with Narayana standalone

If you want to use the Agroal JDBC pooling capabilities with Narayana in your application, you need to configure the Agroal datasource so that it knows

  • how to grab the instance of the Narayana transaction manager
  • where to find the synchronization registry
  • how to register resources to Narayana recovery manager

Narayana setup

First we need to obtain all the mentioned Narayana objects, which are then passed to Agroal; Agroal ensures the integration by calling the Narayana API at the appropriate moments.

// gaining the transaction manager and synchronization registry
TransactionManager transactionManager
    = com.arjuna.ats.jta.TransactionManager.transactionManager();
TransactionSynchronizationRegistry transactionSynchronizationRegistry
    = new com.arjuna.ats.internal.jta.transaction.arjunacore.TransactionSynchronizationRegistryImple();

// initialization of the recovery manager
RecoveryManager recoveryManager
    = com.arjuna.ats.arjuna.recovery.RecoveryManager.manager();
// recovery service provides binding for hooking the XAResource to the recovery process
RecoveryManagerService recoveryManagerService
    = new com.arjuna.ats.jbossatx.jta.RecoveryManagerService();
recoveryManagerService.create();
recoveryManagerService.start();

Agroal integration

Now we need to pass Narayana's object instances to Agroal. With that done, we can obtain a JDBC Connection which is backed by the transaction manager.

AgroalDataSourceConfigurationSupplier configurationSupplier
    = new AgroalDataSourceConfigurationSupplier()
        .connectionPoolConfiguration(cp -> cp
            .transactionIntegration(new NarayanaTransactionIntegration(
                transactionManager, transactionSynchronizationRegistry,
                "java:/agroalds1", false, recoveryManagerService))
            .connectionFactoryConfiguration(cf -> cf
                .jdbcUrl("jdbc:postgresql://localhost:5432/test") // example JDBC URL
                .principal(new NamePrincipal("testuser"))
                .credential(new SimplePassword("testpass"))
                .recoveryPrincipal(new NamePrincipal("testuser"))
                .recoveryCredential(new SimplePassword("testpass"))));

AgroalDataSource ds1 = AgroalDataSource.from(configurationSupplier);
Connection conn1 = ds1.getConnection();
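A minimal usage sketch follows, continuing from the snippets above (it assumes the datasource and transaction manager created there, and a hypothetical table named test_table). A connection obtained inside a transaction is enlisted with it and the work commits or rolls back together with the transaction.

```java
// begin a JTA transaction, do work on the pooled connection, then commit;
// the NarayanaTransactionIntegration enlists the connection with the transaction
transactionManager.begin();
try (Connection connection = ds1.getConnection();
     Statement statement = connection.createStatement()) {
    // test_table is a hypothetical table used for illustration
    statement.executeUpdate("INSERT INTO test_table VALUES (1)");
    transactionManager.commit();
} catch (Exception e) {
    // on any failure undo the work done under the transaction
    transactionManager.rollback();
}
```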

Those are the steps needed for a standalone application to use Narayana with Agroal. The working code example can be seen in the Narayana quickstarts.

Agroal XA datasource in WildFly

If you want to use the power of Narayana in a WildFly application, you need XA participants that Narayana can drive. From the Agroal perspective this means defining an XA datasource which you then use (linked via its JNDI name) in your application.

DISCLAIMER: to use the Agroal capabilities integrated with Narayana you need to run WildFly 15 or later. Currently only WildFly 14 is available, so for testing this you need to build WildFly from sources yourself. The good news is that’s an easy task.

The Agroal datasource subsystem is not available by default in the standalone.xml file, so you need to enable the extension. When you run jboss-cli, you do it like this

./bin/ -c

#  jboss-cli is started, run following command there

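For reference, the commands can be sketched as follows (assuming the standard WildFly module name for the Agroal extension):

```
/extension=org.wildfly.extension.datasources-agroal:add
/subsystem=datasources-agroal:add
```

A server reload may be needed for the subsystem to become active.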
From now on you can work with the datasources-agroal subsystem. Before you can create the xa-datasource definition you need a driver which the datasource will use. The driver has to define an XA-capable connection provider.

NOTE: if you want to check what the options for the Agroal configuration in jboss-cli are, read the resource description with the command /subsystem=datasources-agroal:read-resource-description(recursive=true)

The Agroal driver definition works only with drivers deployed as modules. You can’t just copy the driver jar to the $JBOSS_HOME/standalone/deployments directory; you need to create a module under the $JBOSS_HOME/modules directory. Either create the module.xml yourself or, the recommended way, use jboss-cli with the command

module add --name=org.postgresql
    --resources=/path/to/jdbc/driver.jar --dependencies=javax.api,javax.transaction.api

NOTE: The command uses the name of the module org.postgresql as I will demonstrate adding the xa datasource for the PostgreSQL database.

When the module is added we can declare the Agroal driver.

/subsystem=datasources-agroal/driver=postgres:add(
    module=org.postgresql, class=org.postgresql.xa.PGXADataSource)

We’ve used the class org.postgresql.xa.PGXADataSource as we want an XA datasource. When no class is defined, the standard JDBC driver for PostgreSQL (org.postgresql.Driver) is used, as declared in the driver’s META-INF/services/java.sql.Driver file.

NOTE: if you declare the driver without the XA datasource class and then try to use it in an xa-datasource definition, you will get an error

{
    "outcome" => "failed",
    "failure-description" => {"WFLYCTL0080: Failed services" => {""
        => "WFLYAG0108: An xa-datasource requires a javax.sql.XADataSource as connection provider. Fix the connection-provider for the driver"}},
    "rolled-back" => true
}

When the JDBC driver module is defined we can create the Agroal XA datasource. The bare minimum of attributes you have to define is shown in the following command

/subsystem=datasources-agroal/xa-datasource=AgroalPostgresql:add(
    jndi-name=java:/AgroalPostgresql, connection-pool={max-size=10}, connection-factory={
    driver=postgres, username=test, password=test, url=jdbc:postgresql://localhost:5432/test})

NOTE: this is the simplest way of defining the credentials for the database connection. If you want a more sophisticated method than a clear-text username/password saved in standalone.xml, take a look at the Elytron capabilities.

To check that the WildFly Agroal datasource is able to connect to the database you can use the test-connection operation

/subsystem=datasources-agroal/xa-datasource=AgroalPostgresql:test-connection
If you are interested in how the configuration looks as an xml element in the standalone.xml configuration file, the Agroal subsystem with the PostgreSQL XA datasource definition would look like

<subsystem xmlns="urn:jboss:domain:datasources-agroal:1.0">
    <xa-datasource name="AgroalPostgresql" jndi-name="java:/AgroalPostgresql">
        <connection-factory driver="postgres" url="jdbc:postgresql://localhost:5432/test"
            username="test" password="test"/>
        <connection-pool max-size="10"/>
    </xa-datasource>
    <drivers>
        <driver name="postgres" module="org.postgresql" class="org.postgresql.xa.PGXADataSource"/>
    </drivers>
</subsystem>

If you want to use an Agroal non-XA datasource as a commit markable resource (CMR), that’s possible too. You need to create a standard datasource and define it as connectable. For more information on what a commit markable resource means and how it works, check our previous blogpost about CMR.

<subsystem xmlns="urn:jboss:domain:datasources-agroal:1.0">
    <datasource name="AgroalPostgresql" connectable="true" jndi-name="java:/AgroalPostgresql">
        <connection-factory driver="postgres" url="jdbc:postgresql://localhost:5432/test"
            username="test" password="test"/>
        <connection-pool max-size="10"/>
    </datasource>
    <drivers>
        <driver name="postgres" module="org.postgresql" class="org.postgresql.Driver"/>
    </drivers>
</subsystem>

NOTE: In addition to this configuration of Agroal datasource you need to enable the CMR in the transaction subsystem too – check the blogpost for detailed info.


This blogpost showed how to configure the Agroal JDBC pooling library and how to integrate it with Narayana.
The code example is part of the Narayana quickstarts.

Sunday, September 9, 2018

Tips on how to evaluate STM implementations

Software Transactional Memory (STM) is a way of providing transactional behaviour for threads operating on shared memory. A transaction is an atomic and isolated set of changes to memory: prior to commit no other thread sees the memory updates; after commit the changes appear to take effect instantaneously, so other threads never see partial updates; and on abort all of the updates are discarded.

Unlike other models such as XA, OTS, JTA, WS-AT etc, with STM there is no accepted standard for developers to program against. Consequently the various implementations of STM differ in important respects which have consequences for how application developers build their software. I recently came upon an excellent book on Transactional Memory where the authors James Larus and Ravi Rajwar presented a taxonomy of features and characteristics that can be used to differentiate the various STM implementations from each other. In this and subsequent blogs I will explain the taxonomy and identify where the Narayana STM solution (which was introduced in Mark Little's initial blog on the topic) fits into it. Towards the end of the series I will include some tips, best practices and advice on how you can get the most out of the Narayana implementation of STM.

In this first article I will cover isolation, nesting and exception handling. In later articles I will discuss topics such as conflict detection and resolution, transaction granularity, concurrency control etc.

By way of motivation, why would one want to use STM in favour of other transaction models and concurrency control mechanisms:
  • The STM approach of mutating data inside of a transaction has some nice features:
    • It is less error prone since the demarcation of an atomic block of code is primitive but other synchronisation approaches are many and varied. Techniques such as locks, semaphores, signals etc are tricky to get right, for example the programmer must ensure that accesses are protected with the correct locks and in the correct order. With conventional concurrency control, imagine trying to reverse all the changes made during a computation if a problem such as deadlock or a data race is detected, whereas code changes that are protected by STM can be aborted in a single statement.
    • Transactional updates make it easier for the programmer to reason about his code (it is clear how different threads affect each other) and data (because it simplifies the sharing of state between threads).
  • The declarative approach (where the programmer simply marks which code blocks are transactional) means concurrent programming is more intuitive with no explicit locks or synchronisation to worry about.
  • Can perform much better than fine grained locking (which can lead to deadlock) and coarse grained locking (which inhibits concurrency):
    • If a thread takes a lock and is context switched or incurs cache misses or page faults etc then other threads that need the lock are stalled until the thread is rescheduled or until the needed data is retrieved.
    • With STM, updates can be batched up and speculatively committed together.
    • The runtime manages lock acquisition and release and resolves conflicts (using approaches such as timeouts and retries).
  • It is easier to compose operations using a technique called nesting (traditionally composing two operations can produce concurrency problems unless one analyses in detail the locking approach used by those operations).

Properties of a STM system

In the following I will describe the design choices available to STM systems in general and in particular I will illustrate the choices made by the Narayana STM implementation using code examples. The examples will be made available in the Narayana STM test suite so that you can also experiment with the particular properties of the implementation. Each of the examples will be using the same transactional object which is defined as follows:

    @Transactional
    public interface AtomicInt {
        int get() throws Exception;
        void set(int value) throws Exception;
    }

    public class AtomicIntImpl implements AtomicInt {
        private int state;

        public int get() throws Exception {
            return state;
        }

        public void set(int value) throws Exception {
            state = value;
        }
    }
The @Transactional annotation on the AtomicInt interface tells the system that instances of the interface are candidates to be managed by the STM system. The implementation of the interface defines a pair of methods for reading and writing the shared state (by default all state is tracked but this default can be overridden via the @NotState annotation).

Property 1: Interaction with non transactional code

If uncommitted transactional memory updates are visible to non-transactional code and vice-versa (i.e. updates made by non-transactional code are visible to running transactions) then the isolation model is said to be weak. On the other hand if non-transactional accesses are upgraded to a transactional access then the model is said to be strong.

The weak access model, although common, can lead to data races. A data race occurs if two threads T1 and T2 access memory, T1 for writing, say, and the other for reading; the value of the memory read is then indeterminate. If, for example, T1 writes data inside a transaction and T2 reads that data, then if T1 aborts but T2 has made a decision based on the value it read, we have an incorrect program, since aborted transactions must not have side effects (recall the "all or nothing" characteristic of atomicity).

Narayana STM follows the weak isolation model. The following test updates shared memory inside a transaction and then triggers a thread to perform non-transactional reads and writes on it while the transaction is still running. The test shows that the two threads interfere with each other producing indeterminate results:
    public void testWeakIsolation() throws Exception {
        AtomicIntImpl aiImple = new AtomicIntImpl();
        // STM is managed by Containers. Enlisting the above implementation
        // with the container returns a proxy which will enforce STM semantics
        AtomicInt ai = new RecoverableContainer<AtomicInt>().enlist(aiImple);
        AtomicAction tx = new AtomicAction();

        // set up the code that will access the memory outside of a transaction
        Thread ot = new Thread(() -> {
            try {
                synchronized (tx) {
                    tx.wait(); // for the other thread to start a transaction
                }

                // weak isolation implies that this thread (which is running
                // outside of a transaction) can observe transactional updates
                assertEquals(2, aiImple.get()); // the other thread set it to 2
                aiImple.set(10); // this update is visible to transactional code
            } catch (Exception e) {
                fail(e.getMessage());
            }
        });

        ot.start();

        ai.set(1); // initialise the shared memory
        tx.begin(); // start a transaction
        {
            ai.set(2); // conditionally set the value to 2

            synchronized (tx) {
                tx.notify(); // trigger non-transactional code to update the memory
            }

            ot.join(); // wait until the non-transactional update has been made

            // weak isolation means that this transactional code may see the
            // changes made by the non transactional code
            assertEquals(10, ai.get()); // the other thread set it to 10
            tx.commit(); // commit the changes made to the shared memory
        }

        // changes made by non transactional code are still visible after commit
        assertEquals(10, ai.get());
        assertEquals(aiImple.get(), ai.get());
    }

As an aside, notice in this example that the code first had to declare the shared data using the @Transactional annotation and then had to access it via a proxy returned from a RecoverableContainer. Some systems introduce new keywords into the language that demarcate the atomic blocks, and in such systems any memory updates made by the atomic block are managed by the STM implementation. That type of system takes some of the burden of ensuring correctness away from the programmer but is harder to implement (for example a common technique requires compiler extensions).

Property 2: Nested transactions

A nested transaction (the child) is one that is started in the context of an outer one (the parent). The child sees the changes made by the parent. Aborting the parent will abort each child. A transaction that does not have a parent is called top level.

The effects of committing/aborting either transaction (the child or parent) and the visibility of changes depend upon which model is being used:

Flattened

  • The parent and child transactions see each others updates.
  • If the child aborts the parent aborts too.
  • Changes made by the child only become visible to other threads when the parent commits
Pros - easy to implement
Cons - breaks composition (if the child aborts it causes all work done by the parent transaction to abort)

Closed Nested

  • Changes are hidden from the parent transaction (and from other transactions) until the child commits, at which time any changes made by the child become part of the parent transactions' set of updates (therefore, in contrast to open nested transactions, other transactions will not see the updates until the parent commits);
  • aborting the child does not abort the parent;
Pros - Is arguably the most natural model for application designers

Open Nested

  • When the child transaction commits, all other transactions see the updates even if the parent aborts which is useful if we want unrelated code to make permanent changes during the transaction even if the parent aborts.
Pros - enables work to be made permanent even if the parent aborts (for example logging code made by the child)

Narayana STM follows the closed model as is demonstrated by the following test case:
    public void testIsClosedNestedCommit() throws Exception {
        AtomicInt ai = new RecoverableContainer<AtomicInt>().enlist(new AtomicIntImpl());
        AtomicAction parent = new AtomicAction();
        AtomicAction child = new AtomicAction();

        ai.set(1); // initialise the shared memory
        parent.begin(); // start a top level transaction
        {
            ai.set(2); // update the memory in the context of the parent transaction
            child.begin(); // start a child transaction
            {
                ai.set(3); // update the memory in a child transaction
                // NB the parent would still see the value as 2
                // (not shown in this test)
            }
            child.commit();
            // since the child committed the parent should see the value as 3
            assertEquals(3, ai.get());
            // NB other transactions would not see the value 3 however until
            // the parent commits (not demonstrated in this test)
        }
        parent.commit();

        assertEquals(3, ai.get());
    }

Isolation amongst child transactions

The concept of isolation applies to nested transactions as well as to top level transactions. It seems most natural for siblings to use the same model as is used for isolation with respect to other transactions (i.e. transactions that are not in the ancestor hierarchy of a particular child). For example the CORBA Object Transaction Service (OTS) supports the closed model and children do not see each other's updates until the parent commits.

Property 3: Exception Handling

On exception the options are either to terminate, to ignore the exception, or to use a mixture of both where the programmer tells the system which exceptions should abort and which should commit the transaction. The last option is similar to what the JTA 1.2 spec provides with its rollbackOn and dontRollbackOn annotation attributes.
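The JTA 1.2 attributes mentioned above can be sketched as follows (a hypothetical CDI bean; the class and method names are illustrative):

```java
import javax.transaction.Transactional;

public class OrderService { // hypothetical bean name
    // the transaction is marked for rollback when an IllegalStateException
    // escapes the method, but not when an IllegalArgumentException does
    @Transactional(rollbackOn = IllegalStateException.class,
                   dontRollbackOn = IllegalArgumentException.class)
    public void placeOrder(String orderId) {
        // business logic running inside the managed transaction
    }
}
```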

The Narayana STM implementation takes the view that the programmer is best placed to make decisions about what to do under exceptional circumstances. The following test demonstrates this behaviour:
    public void testExceptionDoesNotAbort() throws Exception {
        AtomicInt ai = new RecoverableContainer<AtomicInt>().enlist(new AtomicIntImpl());
        AtomicAction tx = new AtomicAction();

        ai.set(1); // initialise the shared memory
        tx.begin(); // start a transaction
        {
            ai.set(2);

            try {
                throw new Exception();
            } catch (Exception e) {
                assertEquals(2, ai.get());
                // the transaction should still be active
                ai.set(3);
                tx.commit();
            }
        }

        assertEquals(3, ai.get());
    }

What's Next

That's all for this week. In the next instalment I will cover conflict detection and resolution, transaction granularity and concurrency control.

Thursday, June 28, 2018

Narayana Commit Markable Resource: a faultless LRCO for JDBC datasources

CMR is a neat Narayana feature enabling full XA transaction capability for one non-XA JDBC resource. It gives you a way to engage a database resource in an XA transaction even if the JDBC driver is not fully XA capable (or you just have a design restriction on it), while transaction data consistency is kept.

Last resource commit optimization (aka. LRCO)

Maybe you will say "adding one non-XA resource to a transaction is the well-known LRCO optimization". And you are right, but only partially. The last resource commit optimization (abbreviated as LRCO) provides a way to enlist and process one non-XA datasource in the global transaction managed by the transaction manager. But LRCO contains a pitfall: when a crash of the system (or the connection) happens at a particular point in time during two-phase commit processing, it causes data inconsistency. Namely, the LRCO could be committed while the rest of the resources are rolled back.

Let's elaborate a bit on the LRCO failure. Say we have a JMS resource, where we send a message to a message broker, and a non-XA JDBC datasource, where we save information to the database.

NOTE: The example refers to the Narayana two-phase commit implementation.

  1. updating the database with INSERT INTO SQL command, enlisting LRCO resource under the transaction
  2. sending a message to the JMS broker, enlisting the JMS resource to the transaction
  3. Narayana starts the two phase commit processing
  4. prepare is called to JMS XA resource, the transaction log is stored at the JMS broker side
  5. prepare phase for the LRCO means to call commit at the non-XA datasource. That call makes the data changes visible to the outer world.
  6. crash of the Narayana JVM occurs before the Narayana can preserve information of commit to its transaction log store
  7. after the Narayana restarts there is no notion about the existence of any transaction thus the prepared JMS resource is rolled-back during transaction recovery

Note: rolling back the JMS resource is caused by the presumed abort strategy applied in Narayana. If the transaction manager does not apply presumed abort then, at best, you end up with the transaction in a heuristic state.

LRCO processing works by ordering the LRCO resource as the last one during the transaction manager's 2PC prepare phase. At the place where the transaction manager would normally call prepare on an XAResource, it calls commit on the LRCO's underlying non-XA resource.
Then, during the transaction manager's commit phase, nothing is called for the LRCO.

Commit markable resource (aka. CMR)

The Commit Markable Resource, abbreviated as CMR, is an enhancement of the last resource commit optimization applicable to JDBC resources. The CMR approach achieves capabilities similar to XA by demanding a special database table (normally named xids) that the transaction manager can read and write via the configured CMR datasource.

Let's demonstrate the CMR behavior with an example (reusing the setup from the previous one).

  1. updating the database with INSERT INTO SQL command, enlisting the CMR resource under the transaction
  2. sending a message to the JMS broker, enlisting the JMS resource to the transaction
  3. Narayana starts the two phase commit processing
  4. prepare on CMR saves information about prepare to the xids table
  5. prepare is called to JMS XA resource, the transaction log is stored at the JMS broker side
  6. commit on CMR means calling commit on the underlying non-XA datasource
  7. commit on JMS XA resource means commit on the XA JMS resource and thus the message being visible at the queue, the proper transaction log is removed at the JMS broker side
  8. Narayana two phase commit processing ends

From what you can see here, the difference from the LRCO example is that the CMR resource is not ordered last in the resource processing but first. The CMR prepare does not commit the work as in the case of LRCO; instead it saves the information that the CMR is considered prepared into the database xids table.
As the CMR is ordered as the first resource for processing, it's taken first during the commit phase too. The commit call then means calling commit on the underlying database connection. The xids table is not cleaned at that phase; it is normally the responsibility of the CommitMarkableResourceRecordRecoveryModule to garbage-collect the records in the xids table (see more below).

The main fact to understand is that the CMR resource is considered fully prepared only after the commit is processed (meaning commit on the underlying non-XA JDBC datasource). Until that time the transaction is considered not prepared and will be rolled back by the transaction recovery.

NOTE: the term fully prepared refers to the standard XA two-phase commit processing. When the transaction manager finishes the prepare phase, i.e. prepare was called on all transaction participants, the transaction is counted as prepared and commit is expected to be called on each participant.

It's important to note that the correct processing of failures in transactions which contain CMR resources is the responsibility of a special periodic recovery module, CommitMarkableResourceRecordRecoveryModule. It has to be configured as the first one in the recovery module list, as it needs to check and eventually process all the XA resources belonging to the transaction which contains the CMR resource (the recovery modules are processed in the order they were configured). You can check how this is set up in WildFly.
The CMR recovery module knows about the existence of the CMR resource from the record saved in the xids table. From that it is capable of pairing all the resources belonging to the same transaction where the CMR was involved.
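When running Narayana standalone, this ordering can be expressed through the recovery module class names. The following jbossts-properties.xml fragment is a sketch; adjust the module list to your setup:

```xml
<!-- CommitMarkableResourceRecordRecoveryModule must come first in the list -->
<entry key="RecoveryEnvironmentBean.recoveryModuleClassNames">
    com.arjuna.ats.internal.jta.recovery.arjunacore.CommitMarkableResourceRecordRecoveryModule
    com.arjuna.ats.internal.arjuna.recovery.AtomicActionRecoveryModule
    com.arjuna.ats.internal.jta.recovery.arjunacore.XARecoveryModule
</entry>
```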

xids: database table to save CMR processing data

As said, Narayana needs a special database table (usually named xids) to save the information that a CMR was prepared. You may wonder what the content of that table is.
The table consists of three columns.

  • xid : id of the transaction branch belonging to the CMR resource
  • transactionManagerID : id of the transaction manager; this serves to distinguish multiple transaction managers (WildFly servers) working with the same database. There is a strict rule that each transaction manager must be defined with a unique node identifier (see the description of node-identifier).
  • actionuid : global transaction id which unites all the resources belonging to the one particular transaction

LRCO failure case with CMR

In the example we presented as problematic for LRCO, the container crashed just before the prepare phase finished. In such a case, the CMR is not committed yet. The other transaction participants are then rolled back, as the transaction was not fully prepared. The CMR thus brings a consistent rollback outcome for all the resources.

Commit markable resource configured in WildFly

We have sketched the principle of CMR and now it's time to check how to configure it for your application running on the WildFly application server.
The configuration consists of three steps.

  1. The JDBC datasource needs to be marked as connectable
  2. The database the connectable datasource points to has to be enriched with the xids table where Narayana can save the data about CMR processing
  3. Transaction subsystem needs to be configured to be aware of the CMR capable resource

In our example I use the H2 database as it's good for the showcase. You can find it in the quickstart I prepared too.

Mark JDBC datasource as connectable

You mark the resource as connectable by using the attribute connectable="true" in your datasource declaration in the standalone*.xml configuration file. When you use jboss-cli for the app server configuration, you use the command

/subsystem=datasources/data-source=jdbc-cmr:write-attribute(name=connectable, value=true)

The whole datasource configuration then looks like

<datasource jndi-name="java:jboss/datasources/jdbc-cmr" pool-name="jdbc-cmr-datasource"
          enabled="true" use-java-context="true" connectable="true">
    ...
</datasource>

When a datasource is marked as connectable, IronJacamar (the JCA layer of WildFly) creates the datasource instance as implementing a special interface (defined in the jboss-transaction-spi project). This interface declares the method getConnection() throws Throwable. That's how the transaction manager is able to obtain a connection to the database and work with the xids table inside it.

Xids database table creation

The database configured as connectable has to ensure the existence of the xids table before the transaction manager starts. As described above, the xids table allows saving the crucial information about the non-XA datasource during prepare. The shape of the SQL command depends on the SQL syntax of the database you use. Examples of the table creation commands are (see more commands under this link)

-- Oracle
CREATE TABLE xids (xid RAW(144), transactionManagerID VARCHAR(64), actionuid RAW(28));
CREATE UNIQUE INDEX index_xid ON xids (xid);

-- PostgreSQL
CREATE TABLE xids (xid bytea, transactionManagerID varchar(64), actionuid bytea);
CREATE UNIQUE INDEX index_xid ON xids (xid);

-- H2
CREATE TABLE xids (xid VARBINARY(144), transactionManagerID VARCHAR(64), actionuid VARBINARY(28));
CREATE UNIQUE INDEX index_xid ON xids (xid);

I addressed the need for the table definition in the CMR quickstart by adding a JPA schema generation create script which contains the SQL to initialize the database.
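For illustration, such a script can be hooked in via the standard JPA 2.1 schema generation properties in persistence.xml (the script path is illustrative):

```xml
<property name="javax.persistence.schema-generation.database.action" value="create"/>
<property name="javax.persistence.schema-generation.create-source" value="script"/>
<property name="javax.persistence.schema-generation.create-script-source" value="META-INF/create-script.sql"/>
```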

Transaction manager CMR configuration

The last part is to configure CMR in the transaction subsystem. The declaration puts the datasource into the list JTAEnvironmentBean#commitMarkableResourceJNDINames, which is then used in the code of TransactionImple#createResource.
The xml element used in the transaction subsystem looks like

  <commit-markable-resource jndi-name="java:jboss/datasources/jdbc-cmr"/>
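The same declaration can be sketched in jboss-cli (the resource address follows the transactions subsystem model):

```
/subsystem=transactions/commit-markable-resource="java:jboss/datasources/jdbc-cmr":add()
```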

CMR configuration options

In addition to this simple CMR declaration, the CMR can be configured with the following parameters

  • jndi-name : as it could be seen above the jndi-name is way to point to the datasource which we mark as CMR ready
  • name : defines the name of the table which is used for storing the CMR state during prepare while used during recovery.
    The default value (and we've reffered to it in this way above) is xids
  • immediate-cleanup : If configured to true then there is registered a synchronization which removes proper value from the xids table immediatelly after the transaction is committed.
    When synchronization is not set up then the clean-up of the xids table is responsibility of the recovery by the code at CommitMarkableResourceRecordRecoveryModule. It checks about finished xids and it removes those which are free for garbage collection.
    The default value is false (using only recovery garbage collection).
  • batch-size : This parameter influences the process of the garbage collection (as described above). The garbage collection takes finished xids and runs DELETE SQL command. The DELETE contains the WHERE xid in (...) clause with maximum of batch-size entries provided. When there is still some finished xids left after deletion, another SQL command is assembled with maximum number of batch-size entries again.
    The default value is 100.
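The batching described above can be sketched as plain SQL assembly. This is an illustration of the algorithm only, not Narayana's actual CommitMarkableResourceRecordRecoveryModule code; the table name and xid strings are made up for the example:

```java
import java.util.ArrayList;
import java.util.List;

public class XidsGarbageCollection {
    // Assembles DELETE statements over the finished xids, putting at most
    // batchSize entries into each WHERE xid IN (...) clause, until none are left.
    public static List<String> buildDeletes(String table, List<String> finishedXids, int batchSize) {
        List<String> statements = new ArrayList<>();
        for (int from = 0; from < finishedXids.size(); from += batchSize) {
            int to = Math.min(from + batchSize, finishedXids.size());
            statements.add("DELETE FROM " + table + " WHERE xid IN ("
                    + String.join(", ", finishedXids.subList(from, to)) + ")");
        }
        return statements;
    }
}
```

With batch-size 2 and three finished xids, two DELETE statements are produced, the second one carrying the single remaining entry.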

The commit-markable-resource xml element configured with all the parameters looks like

<subsystem xmlns="urn:jboss:domain:transactions:4.0">
  <recovery-environment socket-binding="txn-recovery-environment" status-socket-binding="txn-status-manager"/>
  <object-store path="tx-object-store" relative-to=""/>
  <commit-markable-resource jndi-name="java:jboss/datasources/jdbc-cmr">
    <xid-location name="myxidstable" batch-size="10" immediate-cleanup="true"/>
  </commit-markable-resource>
</subsystem>

And the jboss cli commands for the same are

  :write-attribute(name=name, value=myxidstable)
  :write-attribute(name=immediate-cleanup, value=true)
  :write-attribute(name=batch-size, value=10)

NOTE: the JBoss EAP documentation about the CMR resource configuration can be found at section About the LRCO Optimization for Single-phase Commit (1PC)


The article explained what the Narayana Commit Markable Resource (CMR) is, compared it with LRCO and presented its advantages. In the latter part of the article you saw how to configure the CMR resource in your application deployed on the WildFly application server.
If you would like to run an application using the commit markable resource feature, check our Narayana quickstarts.

Monday, May 21, 2018

Narayana JDBC integration for Tomcat

Narayana implements the JTA specification in Java. It's flexible and easy to integrate into any system which desires transaction capabilities. As proof of Narayana's extensibility, check our quickstarts, like the Spring Boot one or the Camel one.
But this blog post is about a different integration effort. It talks in detail about Narayana integration with the Apache Tomcat server.

If you do not care about details then just jump directly to the Narayana quickstarts in this area and use the code there for yourself.

If you want a more detailed understanding, read further.
All the discussed capabilities are considered as of Narayana 5.8.1.Final or later.

Narayana, database resources and JDBC interface

All the proclaimed Narayana capabilities to integrate with other systems come from the requirement for the system to conform to the JTA specification. JTA expects manageable resources, in particular resources which follow the XA specification. In the case of database resources the underlying API is defined by the JDBC specification. JDBC assembles the resources manageable by a transaction manager under the package javax.sql. It defines the interfaces used for managing XA capabilities. Probably the most noticeable is XADataSource, which serves as a factory for XAConnection. From there we can obtain an XAResource. The XAResource is the interface the transaction manager works with; an instance of it participates in the two-phase commit.

The workflow is to get or create the XADataSource, obtain an XAConnection and next the XAResource, which is enlisted into the global transaction (managed by a transaction manager). Now we can run queries or statements through the XAConnection. When all the business work is finished, the global transaction is commanded to commit, which propagates to a commit call on each enlisted XAResource.
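The prepare/commit flow above can be sketched as a tiny coordinator. This is a simplified illustration under stated assumptions: TxBranch is a toy stand-in for the real javax.transaction.xa.XAResource, and the bookkeeping and error handling of a real transaction manager are omitted.

```java
import java.util.List;

// Toy stand-in for javax.transaction.xa.XAResource (illustration only).
interface TxBranch {
    boolean prepare();  // vote: true means "I promise I can commit"
    void commit();
    void rollback();
}

public class TwoPhaseCommit {
    // Runs the two-phase commit over the enlisted branches.
    // Returns true when the transaction committed, false when it rolled back.
    public static boolean complete(List<TxBranch> enlisted) {
        for (TxBranch b : enlisted) {        // phase 1: prepare
            if (!b.prepare()) {
                enlisted.forEach(TxBranch::rollback);
                return false;
            }
        }
        enlisted.forEach(TxBranch::commit);  // phase 2: commit every branch
        return true;
    }
}
```

A single negative vote during prepare forces the whole global transaction to roll back, which is exactly why a successful prepare is treated as a binding promise.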

It's important to mention that the developer is not expected to do all of this (getting XA resources, enlisting them to the transaction manager…). All this handling is the responsibility of the "container", which could be WildFly, Spring or, in our case, Apache Tomcat.
Normally the integration which ensures the database XAResource is enlisted into the transaction is provided by a pooling library. By the term pooling library we mean code that manages a connection pool and is capable of enlisting the database resource into the transaction.

We can say at a high level that the integration parts are

  • the Apache Tomcat container
  • Narayana library
  • jdbc pooling library

In this article we will talk about Narayana JDBC transactional driver, Apache Commons DBCP and IronJacamar.

Narayana configuration with Tomcat

After the brief overview of integration requirements, let's elaborate on common settings needed for any integration approach you choose.
Be aware that each library needs a slightly different configuration, and IronJacamar in particular is specific.

JDBC pooling libraries integration

Narayana provides the integration code in the maven module tomcat-jta. It contains the glue code which integrates Narayana into the world of Tomcat. If you write an application, you will need the following:

  • provide Narayana itself on the application classpath
  • provide the Narayana tomcat-jta module on the application classpath
  • configure WEB-INF/web.xml with the NarayanaJtaServletContextListener, which ensures the initialization of the Narayana transaction manager
  • add META-INF/context.xml which sets up Tomcat to start using the implementation of the JTA interfaces provided by Narayana
  • configure the database resources to be XA-aware and to cooperate with Narayana by setting them up in META-INF/context.xml

NOTE: if you expect to use IronJacamar, these requirements differ a bit!

If we take a look at the structure of the archive to be deployed, we get a picture similar to this one:

  ├── META-INF
  │   └── context.xml
  └── WEB-INF
      ├── classes
      │   ├── application…
      │   └── jbossts-properties.xml
      ├── lib
      │   ├── arjuna-5.8.1.Final.jar
      │   ├── jboss-logging-3.2.1.Final.jar
      │   ├── jboss-transaction-spi-7.6.0.Final.jar
      │   ├── jta-5.8.1.Final.jar
      │   ├── postgresql-9.0-801.jdbc4.jar
      │   └── tomcat-jta-5.8.1.Final.jar
      └── web.xml

From this summary, let's go over the configuration files one by one to see what needs to be defined in them.

Configuration files to be setup for the integration



The web.xml needs to define the NarayanaJtaServletContextListener to be loaded during context initialization to initialize Narayana itself. Narayana needs to get running, for example, the reaper thread which checks transaction timeouts, or the thread of the recovery manager.
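A minimal listener registration in web.xml could look like the following sketch; the fully qualified class name assumes the listener sits in the org.jboss.narayana.tomcat.jta package, like the other integration classes referenced in this post:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="3.1">
    <!-- starts Narayana (reaper thread, recovery manager) on context init -->
    <listener>
        <listener-class>org.jboss.narayana.tomcat.jta.NarayanaJtaServletContextListener</listener-class>
    </listener>
</web-app>
```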


The jbossts-properties.xml descriptor is not compulsory. Its purpose is to configure Narayana itself.
If you don't use your own configuration file then the defaults apply. See more in the blog post Narayana periodic recovery of XA transactions, or check the settings done by the default descriptor jbossts-properties.xml in narayana-jts-idlj.


  <?xml version="1.0" encoding="UTF-8" standalone="no"?>
  <Context antiJarLocking="true" antiResourceLocking="true">
      <!-- Narayana resources -->
      <Transaction factory="org.jboss.narayana.tomcat.jta.UserTransactionFactory"/>
      <Resource factory="org.jboss.narayana.tomcat.jta.TransactionManagerFactory"
        name="TransactionManager" type="javax.transaction.TransactionManager"/>
      <Resource factory="org.jboss.narayana.tomcat.jta.TransactionSynchronizationRegistryFactory"
        name="TransactionSynchronizationRegistry" type="javax.transaction.TransactionSynchronizationRegistry"/>

      <Resource auth="Container" databaseName="test" description="Data Source"
        factory="org.postgresql.xa.PGXADataSourceFactory" loginTimeout="0"
        name="myDataSource" password="test" portNumber="5432" serverName="localhost"
        type="org.postgresql.xa.PGXADataSource" user="test" username="test"
        uniqueName="myDataSource" url="jdbc:postgresql://localhost:5432/test"/>
      <Resource auth="Container" description="Transactional Data Source"
        initialSize="10" jmxEnabled="true" logAbandoned="true" maxAge="30000"
        maxIdle="16" maxTotal="4" maxWaitMillis="10000" minIdle="8"
        name="transactionalDataSource" password="test" removeAbandoned="true"
        removeAbandonedTimeout="60" testOnBorrow="true" transactionManager="TransactionManager"
        type="javax.sql.XADataSource" uniqueName="transactionalDataSource"
        username="test" validationQuery="select 1" xaDataSource="myDataSource"/>
  </Context>

I'll divide the explanation of this file into two parts. First come the generic settings, those needed for the transaction manager integration (the top part of context.xml). The second part is the resource declaration that defines the linking to the JDBC pooling library.

Transaction manager integration settings

We define the implementation classes for the JTA API here. The implementation is provided by the Narayana transaction manager. Those are the UserTransactionFactory line and the TransactionManager and TransactionSynchronizationRegistry resources in the context.xml file.

JDBC pooling library settings

We aim to define database resources that can be used in the application. That's how you typically get a connection, with code like DataSource ds = InitialContext.doLookup("java:comp/env/transactionalDataSource"), and eventually execute an SQL statement.
In the example we define a PostgreSQL datasource with the information needed to create a new XA connection (the host and port, credentials etc.).
The second resource is the definition of the JDBC pooling library which utilizes the PostgreSQL one and provides the XA capabilities. It roughly means putting the PostgreSQL connection into the managed pool and enlisting the work under an active transaction.
Thus we have two resources defined here. One is non-managed (the PostgreSQL one) and the second manages the first one to ease the work with the resources. The most important thing for the developer to know is that he needs to use the managed one in his application, namely the transactionalDataSource from our example.

A bit about datasource configuration of Apache Tomcat context.xml

Let's take a side step at this point. Before we talk in detail about the supported pooling libraries, let's look a little bit more at the configuration of the Resource from the perspective of an XA connection in context.xml.

Looking at the Resource definition, these are the parts which are interesting for us

  <Resource auth="Container" databaseName="test" description="Data Source"
    loginTimeout="0" name="myDataSource" password="test" portNumber="5432" serverName="localhost"
    type="org.postgresql.xa.PGXADataSource" uniqueName="myDataSource"
    url="jdbc:postgresql://localhost:5432/test" user="test" username="test"/>

  • name : defines the name the resource is bound under in the container; we can use a JNDI lookup to find it by that name in the application.
  • type and factory : the type defines what type we will get as the final created object. The factory which we declare here is a class which implements the interface ObjectFactory and constructs an object from the provided properties.
    If we did not define any factory element in the definition, the Tomcat class ResourceFactory would be used (see the default factory constants). The ResourceFactory passes the call to the BasicDataSourceFactory of the dbcp2 library. Here we can see the importance of the type xml parameter which defines the object type we want to obtain; the factory normally checks whether it's able to provide such a type (usually by a string equals check).
    The next step is the generation of the object itself, where the factory takes each of the properties and tries to apply them.
    In our case we use the PGXADataSourceFactory which utilizes some of the properties to create the XADataSource.
  • serverName, portNumber, databaseName, user, password : properties used by the object factory class to get a connection to the database.
    Knowing the names of the properties for the particular ObjectFactory is possibly the most important thing when you need to configure your datasource. Here you need to check the setters of the factory implementation.
    In the case of the PGXADataSourceFactory we need to go through the inheritance hierarchy to find that the properties are stored in BaseDataSource. In our case the relevant properties are the user name and the password. From the BaseDataSource we can see that the setter for the user name is setUser, thus the property name we look for is user.
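The property-to-setter mapping described above can be illustrated with plain reflection. This is a simplified sketch, not Tomcat's or PostgreSQL's actual factory code; FakeDataSource is a made-up bean standing in for a real datasource implementation. It shows how a property name such as user is turned into a setUser call:

```java
import java.lang.reflect.Method;
import java.util.Map;

public class SetterMapping {
    // Toy bean standing in for a datasource implementation (illustration only).
    public static class FakeDataSource {
        private String user;
        private int portNumber;
        public void setUser(String u) { this.user = u; }
        public String getUser() { return user; }
        public void setPortNumber(int p) { this.portNumber = p; }
        public int getPortNumber() { return portNumber; }
    }

    // Applies each property by deriving the setter name from the property name,
    // the way an ObjectFactory implementation typically does.
    public static void apply(Object bean, Map<String, String> props) throws Exception {
        for (Map.Entry<String, String> e : props.entrySet()) {
            String setter = "set" + Character.toUpperCase(e.getKey().charAt(0))
                    + e.getKey().substring(1);
            for (Method m : bean.getClass().getMethods()) {
                if (m.getName().equals(setter) && m.getParameterCount() == 1) {
                    Class<?> t = m.getParameterTypes()[0];
                    Object v = (t == int.class) ? Integer.valueOf(e.getValue()) : e.getValue();
                    m.invoke(bean, v);
                }
            }
        }
    }
}
```

This is why the setter names of the factory (or of its base class, like BaseDataSource) dictate the attribute names you may use in context.xml.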

After this side step let's take a look at the setup of the Resources in respect of the used pooling library.

Apache Commons DBCP2 library


The best integration comes probably with Apache Commons DBCP2, as the library itself is part of the Tomcat distribution (the Tomcat code base uses a fork of the project). The XA integration is provided in Apache Tomcat version 9.0.7 and later, where a managed variant of the dbcp2 package was added which knows how to enlist a resource into an XA transaction.

The integration is similar to what we discuss for the JDBC transactional driver. You need to have two resources configured in context.xml. One is the database datasource (see above) and the other is a wrapper providing the XA capabilities.

  <Resource name="transactionalDataSource" uniqueName="transactionalDataSource"
    auth="Container" type="javax.sql.XADataSource"
    transactionManager="TransactionManager" xaDataSource="h2DataSource"
    factory="org.jboss.narayana.tomcat.jta.TransactionalDataSourceFactory"/>

The integration here is done through the use of a specific factory which directly depends on classes from the Apache Tomcat org.apache.tomcat.dbcp.dbcp2 package. The factory ensures the resource is enlisted to the recovery manager as well.
The nice feature is that you can use all the DBCP2 configuration parameters for pooling as you would use when a BasicDataSource is configured. See the configuration options and their meaning in the Apache Commons documentation.


  • Already packed in the Apache Tomcat distribution from version 9.0.7
  • Configure two resources in context.xml. One is the database datasource, the second is wrapper providing XA capabilities with use of the dbcp2 pooling capabilities integrated with TransactionalDataSourceFactory.

NOTE: if you consider checking the pool status over JMX calls, then DBCP2 comes with BasicDataSourceMXBean which exposes some information about the pool. You need to provide jmxName in your context.xml.

  <Resource name="transactionalDataSource" uniqueName="transactionalDataSource"
    auth="Container" type="javax.sql.XADataSource"
    transactionManager="TransactionManager" xaDataSource="h2DataSource"
    jmxEnabled="true" jmxName="org.apache.commons.dbcp2:name=transactionalDataSource"/>

Narayana jdbc transactional driver


With this we get back to the two other recent articles: about the jdbc transactional driver and about the recovery of the transactional driver.
The big advantage of the jdbc transactional driver is its tight integration with Narayana. It's a dependency of the Narayana tomcat-jta module which contains all the integration code needed for Narayana to work in Tomcat. So if you take tomcat-jta-5.8.1.Final, you have the Narayana integration code and the jdbc driver packed in an out-of-the-box working bundle.

Configuration actions

Here we will define two resources in the context.xml file. The first one is the database one.

  <Resource name="h2DataSource" uniqueName="h2Datasource" auth="Container"
    type="org.h2.jdbcx.JdbcDataSource" username="sa" user="sa" password="sa"
    url="jdbc:h2:mem:testdb;DB_CLOSE_DELAY=-1" description="H2 Data Source"
    loginTimeout="0" factory="org.h2.jdbcx.JdbcDataSourceFactory"/>

The database one defines the data needed for the preparation of the datasource and the creation of the connection. The datasource is not XA-aware. We need to add one more layer on top, which here is the transactional JDBC driver. It wraps the datasource connection with XA capabilities.

  <Resource name="transactionalDataSource" uniqueName="transactionalDataSource"
    auth="Container" type="javax.sql.DataSource" username="sa" password="sa"
    url="jdbc:arjuna:java:comp/env/h2DataSource" description="Transactional Driver Datasource"/>

As we do not define the factory element, the default one is used, which is org.apache.tomcat.dbcp.dbcp2.BasicDataSourceFactory. Unfortunately, this is fine only up to the time you need some more sophisticated pooling strategies. In this aspect the transactional driver does not play well with the default factory and some further integration work would be needed.

This configuration is nice for having the transactionalDataSource available for the transactional work. Unfortunately, it's not all that you need to do. You still miss the configuration of recovery: you need to tell the recovery manager which resource to take care of. You can set this up in jbossts-properties.xml, or a maybe easier way is to add it to the environment variables of the starting Tomcat, for example by adding the setup under a script in $CATALINA_HOME/bin/
You define it with the property com.arjuna.ats.jta.recovery.XAResourceRecovery.
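As a sketch of such an environment setup (treat the exact recovery class name and value format as assumptions; the descriptor file name matches the h2recoveryproperties.xml example discussed below), a setenv.sh under $CATALINA_HOME/bin/ could pass the property like this:

```shell
# setenv.sh, picked up by catalina.sh on startup (a sketch; adjust names/paths)
# The trailing "1" numbers the entry; the text after ';' is handed to the
# recovery class as its configuration (here the path to the XML descriptor).
CATALINA_OPTS="$CATALINA_OPTS -Dcom.arjuna.ats.jta.recovery.XAResourceRecovery1=com.arjuna.ats.internal.jdbc.recovery.BasicXARecovery;h2recoveryproperties.xml"
export CATALINA_OPTS
```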


You can define whatever number of resources you need the recovery to be aware of. It's done by adding more numbers at the end of the property name (we use 1 in the example above). The value of the property is a class implementing com.arjuna.ats.jta.recovery.XAResourceRecovery. All the properties provided to the particular implementation are concatenated after the ; character. In our example it's the path to the XML descriptor h2recoveryproperties.xml.
When the transactional driver is used, you need to declare BasicXARecovery as the recovery implementation class, and this class needs the connection properties to be declared in the XML descriptor.

  <?xml version="1.0" encoding="UTF-8"?>
  <!DOCTYPE properties SYSTEM "">
  <properties>
    <entry key="DB_1_DatabaseUser">sa</entry>
    <entry key="DB_1_DatabasePassword">sa</entry>
    <entry key="DB_1_DatabaseDynamicClass"></entry>
    <entry key="DB_1_DatabaseURL">java:comp/env/h2DataSource</entry>
  </properties>

Note: there is an option not to define the two resources under context.xml and to use the env property for the recovery enlistment. All the configuration properties are then contained in one properties file and the transactional driver dynamic class is used. If interested, a working example is at ochaloup/quickstart-jbosstm/tree/transactional-driver-and-tomcat-dynamic-class.


  • Already packed in the tomcat-jta artifact
  • Configure two resources in context.xml. One is database datasource, the second is transactional datasource wrapped by transactional driver.
  • Need to configure recovery with env variable setup com.arjuna.ats.jta.recovery.XAResourceRecovery while providing xml descriptor with connection parameters



IronJacamar

The settings of the IronJacamar integration differ quite a lot from what we've seen so far. IronJacamar implements the whole JCA specification and it's a pretty different beast (not just a JDBC pooling library).

The whole handling and integration is passed to IronJacamar itself.
You don't use tomcat-jta module at all.
You need to configure all aspects in the IronJacamar xml descriptors. Aspects like datasource definition, transaction configuration, pooling definition, up to the jndi binding.

The standalone IronJacamar needs to be started with the command org.jboss.jca.embedded.EmbeddedFactory.create().startup(), where you define the descriptors to be used. You can configure it in web.xml as a ServletContextListener.

What are the descriptors to be defined:

  • jdbc-xa.rar, which is a resource adapter provided by IronJacamar itself. It needs to be part of the deployment and it's capable of processing the ds files.
  • ds.xml, which defines the connection properties and the JNDI name binding
  • transaction.xml, which configures the transaction manager instead of using jbossts-properties.xml.
Check more configuration in IronJacamar documentation.

Summary: IronJacamar is started as an embedded system and processes all the handling on its own. The developer needs to provide the XML descriptors for the setup.


This article provided details about the configuration of Narayana when used in the Apache Tomcat container. We've seen three possible libraries to get the integration working: the Narayana JDBC transactional driver, the Apache DBCP2 library and the IronJacamar JCA implementation.
On top of it, the article contains many details about Narayana and Tomcat resource configuration.

If you hesitate over which alternative is the best fit for your project, this overview can help you

  • Apache DBCP2 : the recommended option when you want to obtain Narayana transaction handling in Apache Tomcat. The integration is done in the Narayana resource factory, which makes it easy to set up the datasource and the recovery in one step.
  • Narayana transactional jdbc driver : a good fit when you want all the parts integrated and covered by the Narayana project. It provides a lightweight JDBC pooling layer that could be nice for small projects. The integration requires a little bit more manual work.
  • IronJacamar : to be used when you need the whole JCA functionality running in Apache Tomcat. The benefit of this solution is the battle-tested integration of Narayana and IronJacamar, as they are delivered as one pack in the WildFly application server.

Thursday, January 11, 2018

Narayana periodic recovery of XA transactions

Let's talk about transaction recovery with details specific to Narayana.
This blog post is related to JTA transactions. If you configure recovery for JTS you can still find relevant information here, but then you will need to consult the Narayana documentation.

What is the transaction recovery

Transaction recovery is the process needed when an active transaction fails for some reason. It could be a crash of the transaction manager process (the JVM), a failure of the connection to the resource (database), or any other failure.
The failure of the transaction can happen at various points of the transaction lifetime, and that point defines the state the in-progress transaction was left in. The state could be just an in-memory state which is left behind, where the transaction manager relies on the resource transaction timeout to release it. But it could also be the transaction state after prepare was called (by a successful prepare call the 2PC transaction confirms that it is capable of finishing the transaction, and moreover it promises to finish the transaction with commit). There has to be a process which finishes such transaction remainders, and that process is transaction recovery.
Let's review three variants of failures which produce three different transaction states. Their results and their needs for termination will guide us through the work of the transaction recovery process.

A transaction manager runs a global transaction which includes several transaction branches (using the term from the XA specification). In our article about 2PC we used (not precisely) the term resource-located transaction instead of transaction branch.
Let's say we have a global transaction containing a data insertion to a database plus sending a message to a message broker queue.
We will examine the crash of the transaction manager (the JVM process), where each point represents one of the three example cases. The examples show the timeline of actions and differ in the time when the crash happens.

  1. The global transaction was started and the insertion to the database happened; now the JVM crashes (no message was sent to the queue). In this situation all the transaction metadata is saved only in memory. After the JVM is restarted the transaction manager has no notion of the existence of the global transaction from the time before.
    But the insertion to the database already happened and the database has some work in progress. There was no promise by a prepare call to end with commit, and everything was stored only as in-memory state, thus the transaction manager relies on the database to abort the work itself. Normally that happens when the transaction timeout expires.
  2. The global transaction was started, data was inserted into the database and a message was sent to the queue. The global transaction is asked to commit. The two-phase commit begins: prepare is called on the database (resource-located transaction). Now the transaction manager (the JVM) crashes. If interleaving data manipulation were permitted, the 2PC commit could fail; but the call of prepare means the promise of a successful end, thus prepare causes locks to be taken to prevent other transactions from interleaving.
    The transaction manager is restarted, but again no notion of the transaction can be found, as all the in-memory state was cleared and there was nothing saved in the Narayana transaction log so far.
    But the database transaction is in the prepared state and holds locks. On top of that, a transaction in the prepared state can't be rolled back by a transaction timeout and needs to wait for some other party to finish it.
  3. The global transaction was started, data was inserted into the database and a message was sent to the queue. The transaction was asked to commit. The two-phase commit begins: prepare is called on the database and on the message queue too. A record of the success of the prepare phase is saved to the Narayana transaction log as well. Now the transaction manager (JVM) crashes.
    After the transaction manager is restarted there is no in-memory state, but we can observe the record in the Narayana transaction log, and the database and JMS queue resource-located transactions are in the prepared state with locks.
In the latter cases the transaction state survived the JVM crash: once only on the side of locked records in the database, in the other case a record is present in the transaction log too. In the first case only the in-memory transaction representation was used, and the transaction manager is not responsible for finishing it.
The work of finishing the unfinished transactions belongs to the recovery manager. The purpose of the recovery manager is to periodically check the state of the Narayana transaction log and the resource transaction logs (unfinished resource-located transactions; it runs the JTA API call XAResource.recover()).
If an in-doubt transaction is found, the recovery manager either rolls it back (for example in the second case) or commits it when the whole prepare phase originally finished with success (see the third case).
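That decision rule can be sketched in a few lines. This is a deliberate simplification of what Narayana's recovery actually does: the in-doubt branch ids reported by the resources are paired with the ids recorded in the transaction log, matched ones are committed, unmatched ones rolled back (presumed abort). The string ids here are placeholders for real Xids.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class RecoveryScan {
    // Decides the fate of each in-doubt branch: commit when the prepare record
    // made it to the transaction log, roll back otherwise (presumed abort).
    public static Map<String, String> decide(List<String> inDoubtXids, Set<String> loggedXids) {
        Map<String, String> outcome = new HashMap<>();
        for (String xid : inDoubtXids) {
            outcome.put(xid, loggedXids.contains(xid) ? "commit" : "rollback");
        }
        return outcome;
    }
}
```

The second example case above corresponds to an in-doubt xid with no log record (rollback); the third case has the log record and is driven to commit.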

Narayana periodic recovery in details

The periodic recovery is a configurable process. That brings flexibility of usage but makes it necessary to use proper settings if you want to run it.
We recommend checking the Narayana documentation, the chapter Failure Recovery.
The recovery runs periodically (by default every two minutes; the period can be changed by setting the system property RecoveryEnvironmentBean.periodicRecoveryPeriod). When launched, it iterates over all registered recovery modules (see the Narayana codebase, com.arjuna.ats.arjuna.recovery.RecoveryModule) and runs the following sequence: calling the method periodicWorkFirstPass on all recovery modules, waiting the time defined by RecoveryEnvironmentBean.recoveryBackoffPeriod, then calling the method periodicWorkSecondPass on all recovery modules.
When you want to run standard JTA XA transactions (JTS differs; you can check the config example in the Narayana code base), you need to configure the XARecoveryModule. The XARecoveryModule then brings into play the need to configure XAResourceOrphanFilters, which manage finishing in-doubt transactions available only on the resource side (the second case represents such a scenario).
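The two-pass sequence can be illustrated with a minimal driver. The RecoveryModule interface below mirrors the two-pass shape of com.arjuna.ats.arjuna.recovery.RecoveryModule, but the loop itself is a toy for illustration, not Narayana's scheduler:

```java
import java.util.List;

public class PeriodicRecovery {
    // Mirrors the two-pass contract of Narayana's recovery modules (simplified).
    public interface RecoveryModule {
        void periodicWorkFirstPass();
        void periodicWorkSecondPass();
    }

    // One recovery cycle: first pass on every module, a backoff wait,
    // then the second pass on every module.
    public static void runOnePass(List<RecoveryModule> modules, long backoffMillis)
            throws InterruptedException {
        for (RecoveryModule m : modules) {
            m.periodicWorkFirstPass();
        }
        Thread.sleep(backoffMillis); // RecoveryEnvironmentBean.recoveryBackoffPeriod
        for (RecoveryModule m : modules) {
            m.periodicWorkSecondPass();
        }
    }
}
```

The first pass typically gathers candidate in-doubt records and the second pass acts only on those that are still unresolved after the backoff, which filters out transactions that completed normally in the meantime.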

Narayana periodic recovery configuration

You may ask how all of this is configured, right?
The Narayana configuration is held in "beans". The "beans" contain properties which are retrieved by getter method calls all over the Narayana code. So configuring the Narayana behaviour means redefining the values of the demanded bean properties.
Let's check which beans are relevant for setting up the transaction management and recovery for XA transactions. We will use the jdbc transactional driver quickstart as an example.
The relevant beans are the following

To configure the values of the properties you need to define them in one of the following ways
  • via a system property – see the example in the quickstart pom.xml, where the property is passed on the JVM argument line.
  • via use of the descriptor file jbossts-properties.xml.
    This is usually the main source of configuration in standalone applications using Narayana. You can see the example jbossts-properties.xml and observe that our standalone application is no exception.
    The descriptor has to be on the classpath for Narayana to be able to access it.
  • via calls of the bean setter methods.
    This is the programmatic approach and is normally used mainly in managed environments such as application servers like WildFly (WildFly configures Narayana with jboss-cli).

  • The usage of a system property has precedence over the use of the descriptor jbossts-properties.xml.
  • The usage of a programmatic call of a setter method has precedence over the use of system properties.

The default settings for the used narayana-jts-idlj artifact can be seen in its bundled jbossts-properties.xml descriptor. Those are (in combination with the settings inside the particular beans) the default values used when you don't have any properties file defined.
For more details on configuration check the documentation (section Development Guide -> Configuration options).

If you want to use the programmatic approach and call the bean setters, you need to gain the bean instance first. That is normally done by calling a static method of a property manager class. There are various of them, depending on what you want to configure.
The relevant ones for us are:

We will examine the programmatic approach on the example of the jdbc transactional driver quickstart, inside the recovery utility class, where the property controlling the values of XAResourceRecovery is reset in the code.
If you search for the exact name of the system property or of the entry in jbossts-properties.xml, the rule of thumb is to take the short class name of the bean and append a dot and the name of the property.
For example, let's say you want to redefine the time period of the periodic recovery cycle. You need to visit the RecoveryEnvironmentBean and find the name of the variable, which is periodicRecoveryPeriod. Using the rule of thumb we get the name RecoveryEnvironmentBean.periodicRecoveryPeriod for redefining the default 2 minutes value.
Some beans use the annotation @PropertyPrefix which offers another way of naming the property. In the case of periodicRecoveryPeriod we can use the system property named com.arjuna.ats.arjuna.recovery.periodicRecoveryPeriod to set it in the same way.
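The naming rule of thumb can be written down as a tiny helper. The bean and property names below are taken from the example above; the helper itself is only an illustration of the naming convention, not a Narayana API:

```java
public class PropertyNaming {
    // Rule of thumb: short bean class name + "." + field name.
    public static String propertyName(String beanShortClassName, String fieldName) {
        return beanShortClassName + "." + fieldName;
    }

    // Alternative naming when the bean carries a @PropertyPrefix annotation:
    // the prefix replaces the bean class name part.
    public static String prefixedName(String propertyPrefix, String fieldName) {
        return propertyPrefix + fieldName;
    }
}
```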

Note: an interesting link could be the settings for the integration with Tomcat, which uses the programmatic approach; see NarayanaJtaServletContextListener.

Thinking about XA recovery

I hope you have a better picture of the recovery setup and how it works now.
The XARecoveryModule has the responsibility for handling the recovery of 2PC XA transactions. The module is responsible for committing unfinished transactions and for handling orphans by running the registered XAResourceOrphanFilters.

As you could see, we configured two RecoveryModules, XARecoveryModule and AtomicActionRecoveryModule, in the jbossts-properties.xml descriptor.
The AtomicActionRecoveryModule is responsible for loading a resource from the object store; if it is serializable and saved as a whole in the Narayana transaction log, it can be deserialized and used immediately during recovery.
This is often not the case. When the XAResource is not serializable (which is hard to achieve, for example, for a database where we need to have a connection to do any work), Narayana offers resource-initiated recovery. That requires a class (code and settings) that can provide XAResources for recovery purposes. For getting the resource we need a connection (to a database, to a JMS broker...). The XARecoveryModule uses objects of two interfaces to get such information (to get the XAResources for recovery).
Those interfaces are

Both interfaces then contain method to retrieve the resources (XAResourceRecoveryHelper.getXAResources(),XAResourceRecovery.getXAResource()). The XARecoveryModule then ask all the received XAResources to find in-doubt transactions (by calling XAResource.recovery()) (aka. resource located transactions).
The found in-doubt transactions are then paired with transactions in Narayana transaction log store. If the match is found the XAResource.commit() could be called.
Maybe you wonder what both interfaces are mostly the same – which kind of true – but the use differs. The XAResourceRecoveryHelper is designed (and only available) to be used in programatic way. For adding the helper amogst other ones you need to call XARecoveryModule.addXAResourceRecoveryHelper(). You can even deregister the helper by method call XARecoveryModule.removeXAResourceRecoveryHelper.
The XAResourceRecovery is configured not in code but via the property com.arjuna.ats.jta.recovery.XAResourceRecovery. This is not viable for dynamic changes, as under normal circumstances it's not possible to reset it – even if you try to change the values by calling JTAEnvironmentBean.setXaResourceRecoveryClassNames().
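For illustration, a jbossts-properties.xml entry registering an XAResourceRecovery implementation might look like the following sketch. The class name and the init string after the semicolon are placeholders, and the bean-style key shown here matches JTAEnvironmentBean.setXaResourceRecoveryClassNames(); older property-style keys also exist, so check your Narayana version's documentation:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<properties>
    <!-- hypothetical recovery class; the part after ';' is passed to initialise() -->
    <entry key="JTAEnvironmentBean.xaResourceRecoveryClassNames">
        com.example.MyXAResourceRecovery;someInitString
    </entry>
</properties>
```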

Running the recovery manager

We have explained how to configure the recovery properties but we haven't pointed out one important fact – in a standalone application there is no automatic launch of the recovery manager. You need to start it manually.
The good news is that it's quite easy (if you don't use an ORB); it's enough to call just

      RecoveryManager manager = RecoveryManager.manager();
This creates an indirect recovery manager (RecoveryManager.INDIRECT_MANAGEMENT), which spawns a background thread that runs the recovery process periodically. If you prefer to run the periodic recovery at times you choose (the periodic timeout value is then not used), you can use direct management and trigger it manually

      RecoveryManager manager = RecoveryManager.manager(RecoveryManager.DIRECT_MANAGEMENT);
To stop the recovery manager, use the terminate() call.
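Putting it together, a minimal sketch (assuming Narayana on the classpath and the recovery properties already configured) of driving recovery directly and shutting it down:

```java
import com.arjuna.ats.arjuna.recovery.RecoveryManager;

public class DirectRecovery {
    public static void main(String[] args) {
        // direct management: no background thread is spawned
        RecoveryManager manager = RecoveryManager.manager(RecoveryManager.DIRECT_MANAGEMENT);

        // run one recovery pass whenever the application decides to
        manager.scan();

        // stop the recovery manager when the application shuts down
        manager.terminate();
    }
}
```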



This blog post tried to introduce the process of transaction recovery in Narayana.
The goal was to present the settings necessary for recovery to work in the expected way for XA transactions, and to show how to start the recovery manager in your application.

Recovery of Narayana jdbc transactional driver

The post about the JDBC transactional driver introduced the ways you can start to code with it. That post talked about enlisting JDBC work into a global transaction but omitted the topic of recovery.
And it's here that using the transactional driver brings another benefit, with ready-to-use approaches to set up the recovery. As in the prior article, we will work with the jbosstm quickstart transactionaldriver-standalone.

After reading the post about transaction recovery you will know that, for the recovery manager to consider any resource we enlist into a global transaction, we have to:

  • either ensure that the resource can be serialized into the Narayana transaction log store (the resource has to be serializable), in which case the recovery manager deserializes the XAResource and uses it directly,
  • or register a recovery counterpart of the transaction enlistment which is capable of providing an instance of the XAResource to the RecoveryModule –
    in other words, we need an implementation of XAResourceRecoveryHelper or XAResourceRecovery (and that's where the transactional driver can help us).

For the configuration of the recovery properties we use the jbossts-properties.xml descriptor in our quickstart. We leave most of the properties at their default values, but we still need to concentrate on setting up the recovery.
You can observe that the descriptor serves to define recovery modules, orphan filters and XAResource recovery helpers. Those entries are important for recovery to work with our transactional JDBC driver.
For a better understanding of what was set, I recommend checking the comments there.

JDBC transaction driver and the recovery

In the prior article about the JDBC driver we introduced three ways of providing connection data to the transaction manager (which then wraps the connection and provides transactionality in the user application). Let's go through the driver enlistment variants and check how recovery can be configured for each of them.

XADataSource provided within Narayana JDBC driver properties

This is the variant where the XADataSource is created directly in the code and the created(!) instance is then passed to the JDBC transactional driver.
As the transactional driver receives the resource directly, it has no clue how to create such a resource on its own during recovery. We have to help it with our own implementation of the XAResourceRecovery class. That, of course, has to be registered in the environment bean (in the quickstart it's intentionally commented out, as for testing purposes we need to switch between the different variants).
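A minimal XAResourceRecovery implementation could look like the following sketch, assuming Narayana on the classpath. The way the XADataSource is created (createXADataSource()) is a placeholder you would replace with your own code:

```java
import java.sql.SQLException;
import javax.sql.XAConnection;
import javax.sql.XADataSource;
import javax.transaction.xa.XAResource;

import com.arjuna.ats.jta.recovery.XAResourceRecovery;

public class MyXAResourceRecovery implements XAResourceRecovery {
    private boolean consumed;

    @Override
    public boolean initialise(String param) throws SQLException {
        // 'param' is the string configured after ';' in the property value
        return true;
    }

    @Override
    public boolean hasMoreResources() {
        // this sketch hands out a single resource per recovery scan
        if (consumed) {
            consumed = false;
            return false;
        }
        return true;
    }

    @Override
    public XAResource getXAResource() throws SQLException {
        consumed = true;
        XADataSource xaDataSource = createXADataSource(); // placeholder
        XAConnection connection = xaDataSource.getXAConnection();
        return connection.getXAResource();
    }

    private XADataSource createXADataSource() {
        throw new UnsupportedOperationException("supply your own XADataSource here");
    }
}
```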

XADataSource bound to JNDI

This variant binds the XADataSource to a JNDI name. Recovery can then look up the provided JNDI name and use the datasource to create an XAConnection and find the in-doubt transactions on the database side.
The point here is to pass the JNDI name to the recovery process so that it knows what to search for.
The JDBC driver uses an XML descriptor for that purpose, in two variants

  • com.arjuna.ats.internal.jdbc.recovery.JDBCXARecovery
  • com.arjuna.ats.internal.jdbc.recovery.BasicXARecovery

In fact, there is no big difference between the two variants and you can use whichever fits you better. In both cases you provide the JNDI name to be looked up.
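For illustration only, such a descriptor is essentially a small XML file carrying the JNDI name and credentials. The entry names below are assumptions on my part; please check the quickstart's descriptor files for the exact format each recovery class expects:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- hypothetical recovery descriptor; entry names are assumptions -->
<properties>
    <entry key="DatabaseJNDIName">java:/comp/env/jdbc/testDB</entry>
    <entry key="UserName">test_user</entry>
    <entry key="Password">test_password</entry>
</properties>
```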

XADataSource connection data provided in properties file

The last variant uses a properties file where the same connection information is used already during resource enlistment. The same properties file is then automatically used for recovery; you don't need to set any property manually. Recovery is set up automatically because the location of the properties file is serialized into the object store and then loaded during recovery.

In this case you have configured the PropertyFileDynamicClass, providing datasource credentials for the transaction manager and for recovery too. If you would like to extend the behaviour, you can implement your own DynamicClass (please consult the codebase). For recovery to work automatically you need to work with the RecoverableXAConnection.
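As a sketch, such a properties file typically names the XADataSource implementation class and its connection properties. The keys below are hypothetical; consult the quickstart's properties file for the exact names expected by PropertyFileDynamicClass:

```properties
# hypothetical keys; check the quickstart for the real ones
xaDataSourceClassName=org.h2.jdbcx.JdbcDataSource
url=jdbc:h2:mem:testdb
user=test_user
password=test_password
```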


To sum up, there are currently three approaches available for setting up recovery of the JDBC transactional driver:

  • creating your own XAResourceRecovery/XAResourceRecoveryHelper, which is feasible if you want to control the creation of the datasource and the JDBC connection on your own,
  • using one of the prepared XAResourceRecovery classes – either JDBCXARecovery or BasicXARecovery – where you provide an XML file specifying the JNDI name of the datasource,
  • using a properties file which defines the credentials for the connection and for recovery too.