Category Archives: Spring

Configuring @Configuration ApplicationContext for Spring Test Framework @ContextConfiguration

Here is a @Configuration class, RabbitTestConfiguration, truncated for the sake of a simple example. Bootstrapping this for testing with Spring's Test Framework is simple. First, make sure you have an @ImportResource mapping, via the classpath, to your XML, which here has two simple declarations:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"

<context:component-scan base-package="org.hyperic.hq.plugin.rabbitmq"/>

<context:property-placeholder location="/etc/"/>


Next, make sure you remove the @Configuration annotation declared at the class level. We will be bootstrapping this a different way.

public class RabbitTestConfiguration {

    private @Value("${hostname}") String hostname;

    private @Value("${username}") String username;

    private @Value("${password}") String password;

    @Bean
    public SingleConnectionFactory singleConnectionFactory() {
        SingleConnectionFactory connectionFactory = new SingleConnectionFactory(hostname);
        return connectionFactory;
    }
    // ... shortened for brevity
}

Now let's build an abstract Spring base test:

/**
 * AbstractSpringTest
 * @author Helena Edelson
 */
@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration(loader = TestContextLoader.class)
public abstract class AbstractSpringTest {

    /** Inheritable logger */
    protected final Log logger = LogFactory.getLog(this.getClass().getName());

    /** Now we can autowire beans that all child tests will need. Note that they are protected. */
    @Autowired
    protected org.springframework.amqp.rabbit.connection.SingleConnectionFactory singleConnectionFactory;

    @Before
    public void before() {
        assertNotNull("singleConnectionFactory should not be null", singleConnectionFactory);
        //... more assertion checks for other beans, removed for brevity.
    }
}

And finally, build a test context loader, override customizeContext(), and bootstrap your annotated configuration class. Since the config class bootstraps the minimal XML context config, we're all set.

/**
 * TestContextLoader
 * @author Helena Edelson
 */
public class TestContextLoader extends GenericXmlContextLoader {

    @Override
    protected void customizeContext(GenericApplicationContext context) {
        AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
        /** This is really the key */
        ctx.register(RabbitTestConfiguration.class);
        ctx.setParent(context);
        ctx.refresh();
    }
}

Now all of my Spring test classes can simply extend the base class and freely call the inherited shared beans, or declare any @Autowired dependency:

public class RabbitGatewayTest extends AbstractSpringTest {

    @Autowired protected Queue marketDataQueue;

    @Test
    public void getConnections() throws Exception {
        com.rabbitmq.client.Connection conn = singleConnectionFactory.createConnection();
        // ... etc
    }
}


Open Artificial Intelligence for the Cloud – Initial Diagram

This is a working (initial) idea of where an AI appliance might fit into a virtualized, elastic environment. I created it early this summer and have many changes to make to it now, so I will update this concept soon. The parent post: Open Source Artificial Intelligence for the Cloud.

© Copyright – not for re-use.


Open Source Artificial Intelligence for the Cloud

The idea of introducing intelligent computing so that software can make decisions autonomously and in real time is not new, but it is relatively new to the cloud. Microsoft currently seems to be the strongest force in this realm, but its work is not open source. The idea of clouds that manage themselves steps well beyond Cloud Ops as we know it, and is the only logical, and necessary, next step.

I've been researching various open source AI platforms and learning algorithms to start building a prototype for cloud-based learning and action for massive distributed Cloud Ops systems and applications. One could offer an AI cloud appliance eventually, but I will start with a prototype and build on that using RabbitMQ (an AMQP messaging implementation), CEP, Spring, and Java as a base. I've been looking into OpenStack, the open source cloud computing software being developed by NASA, Rackspace, and many others. Here is the future GitHub repo:

Generally, to do these sorts of projects you need a large amount of funding and engineers with PhDs, neither of which I have. So my only alternative is to see how we might take this concept and create a lightweight open source solution.

Imagine having some of these (this being not a comprehensive list) available to your systems:

AI Technology

  • Planning and Scheduling
  • Heuristic Search
  • Temporal Reasoning
  • Learning Models
  • Intelligent Virtual Agents
  • Autonomic and Adaptive Clustering of Distributed Agents
  • Clustering Autonomous Entities
  • Constraint Predictability and Resolution
  • Automated Intelligent Provisioning
  • Real-time learning
  • Real-world action
  • Uncertainty and Probabilistic Reasoning
  • Decision Making
  • Knowledge Representation

I will go into these topics further in future posts.

About me: I am currently an engineer at SpringSource/VMware on the vFabric Cloud Application Platform side of things; however, this project is wholly outside of that, as I feel the need to pursue something I'm interested in. I work with the RabbitMQ team and am a member of the AMQP Working Group.


JMX and MBean Support With Spring

This post is simply about how to use the Spring Framework to export your Spring-managed POJOs for management and monitoring via JMX. Later I'll post on using Hyperic, in the cloud, etc. First things first, as this is an update of a similar post of mine from '08 or early '09.

Let's start by adding the Spring JMX dependency, which will look something like this, depending on the repositories you are using:
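The original dependency snippet did not survive, but as of Spring 3.0 the JMX support classes ship in the spring-context module, so a Maven declaration along these lines should work (the version number here is just illustrative):

```
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>3.0.0.RELEASE</version>
</dependency>
```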


Service Pojo

Now let's set up some Java classes for management. I have a business service that I want to monitor and a POJO to instrument.

@BusinessService
@ManagedResource(objectName = "bean:name=inventoryManager", description = "Inventory Service",
        log = true, logFile = "oms.log", currencyTimeLimit = 15, persistPolicy = "OnUpdate", persistPeriod = 200,
        persistLocation = "foo", persistName = "bar")
public class InventoryServiceImpl implements InventoryService {

    @Autowired private InventoryDao inventoryDao;

    @ManagedOperation(description = "Add two numbers")
    @ManagedOperationParameters({
            @ManagedOperationParameter(name = "x", description = "The first number"),
            @ManagedOperationParameter(name = "y", description = "The second number")})
    public int add(int x, int y) {
        return x + y;
    }

    @ManagedOperation(description = "Get inventory levels")
    @ManagedOperationParameters({@ManagedOperationParameter(name = "product", description = "The Product")})
    public long getInventoryLevel(Product product) {
        return getInventoryLevel(product.getSkew());
    }
}

First, let's peek into my @BusinessService annotation in case you are wondering:

@Target({ElementType.TYPE, ElementType.ANNOTATION_TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Service
@Transactional
public @interface BusinessService {
}

Annotating any of my service layer POJOs with it makes them transactional and registers an instance in the Spring context.

Entity Pojo

Now here is a simple POJO, as an entity or JavaBean, what have you:

@ManagedResource(objectName = "bean:name=myPojoEntity", description = "My Managed Bean", log = true,
        logFile = "oms.log", currencyTimeLimit = 15, persistPolicy = "OnUpdate", persistPeriod = 200,
        persistLocation = "foo", persistName = "bar")
public class MyPojo {

    private long somethingToTuneInRuntime;

    /* Creates a writeable attribute for managing */
    @ManagedAttribute(description = "Tunable In Runtime Attribute",
            currencyTimeLimit = 20,
            defaultValue = "bar",
            persistPolicy = "OnUpdate")
    public void setSomethingToTuneInRuntime(long value) {
        this.somethingToTuneInRuntime = value;
    }

    @ManagedAttribute(defaultValue = "foo", persistPeriod = 300)
    public long getSomethingToTuneInRuntime() {
        return somethingToTuneInRuntime;
    }
}

Spring Config

Now let's configure Spring to auto-register our POJOs for export and management/monitoring.
Create a jmx-context.xml file in your WEB-INF/* dir.

Add: <context:mbean-export/>

This activates default exporting of MBeans by detecting standard MBeans in the Spring context, as well as @ManagedResource annotations on Spring-defined beans. Rather than defining an MBeanExporter bean, just provide this single element. I could walk you through simple, simpler, and simplest configs of Spring JMX, but with annotation-based config in the simplest case, this is all you need to do to get up and running. If you need object naming in multiple-VM situations, you can easily do that and other things too, but that's out of scope for this post ;)
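Assuming the standard Spring 3.0 schema locations, a minimal jmx-context.xml might look like this in full:

```
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans
           http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
           http://www.springframework.org/schema/context
           http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <!-- Detects standard MBeans and @ManagedResource-annotated beans -->
    <context:mbean-export/>

</beans>
```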


Spring DAO Exception Translation with @Repository

In Spring, when you mark a DAO/repository class with the Spring stereotype annotation @Repository:

@Repository
public class MyDaoImpl implements MyDao { .. }

Spring creates an instance of it in the IoC container, as with any other stereotype annotation (@Component, @Service), but it can also add exception translation if you explicitly add this bean declaration to your config:

<bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>

This enables translation of native runtime resource exceptions that would otherwise be vendor-specific (database: Oracle, etc.; ORM: Hibernate, JPA, etc.) into Spring's runtime exception hierarchy, regardless of what vendors you use over time. As it simply enables a feature in the Spring Framework, rather than being a bean you would call yourself, it does not require a bean id.
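Put together, a minimal context enabling both stereotype scanning and exception translation might look like this (the base package here is just an example):

```
<context:component-scan base-package="com.example.repositories"/>

<bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>
```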

Also, since the stereotype annotations can serve as meta-annotations, you can easily enrich behavior like this:

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
@Repository
public @interface MyRepository {
}

@MyRepository
public class OtherFooDao implements FooDao { .. }


Simple Asynchronous Processing with Spring’s TaskExecutor

This post is merely meant as a starting guide to tinkering with a lightweight solution for handing off execution of a task for async processing, without the overhead of Spring Batch or Spring JMS and message brokers, among other middleware solutions.

1. I have a simplistic junit test that merely kicks off the service method to view the path of execution:

/**
 * TaskTests
 * @author Helena Edelson
 * @since v 1.0
 */
public class TaskTests extends BaseTest {

    protected static final Logger logger = Logger.getLogger(TaskTests.class);

    @Autowired private OrderService orderService;

    @Test
    public void testExecution() {
        logger.debug("Starting execution thread...");
        orderService.dispatch(new Order());
    }
}

2. A simple context config:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"

<bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor" p:corePoolSize="5" p:maxPoolSize="25"/>

<!-- OR alternately: Creates a org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor -->
<task:executor id="taskExecutor" pool-size="5-25" queue-capacity="100" rejection-policy="CALLER_RUNS"/>
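Under the covers, ThreadPoolTaskExecutor configures a plain java.util.concurrent.ThreadPoolExecutor, so the <task:executor> element above is roughly equivalent to this JDK-only setup (the keep-alive value is illustrative):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class ExecutorEquivalent {

    public static void main(String[] args) throws InterruptedException {
        // core 5, max 25, bounded queue of 100, caller-runs saturation policy:
        // the same settings as pool-size="5-25" queue-capacity="100"
        // rejection-policy="CALLER_RUNS" above
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
                5, 25, 60L, TimeUnit.SECONDS,
                new LinkedBlockingQueue<Runnable>(100),
                new ThreadPoolExecutor.CallerRunsPolicy());

        final CountDownLatch latch = new CountDownLatch(1);
        executor.execute(new Runnable() {
            public void run() {
                System.out.println("task ran on " + Thread.currentThread().getName());
                latch.countDown();
            }
        });

        latch.await();
        executor.shutdown();
        System.out.println("done");
    }
}
```

The caller-runs policy means that when the pool and queue are saturated, the submitting thread executes the task itself, which throttles producers instead of throwing a rejection exception.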


3. OrderService that delegates to the ThreadPoolTaskExecutor:

/**
 * OrderServiceImpl is used for both orders and returns
 * @author Helena Edelson
 * @MessageEndpoint which is a @Component
 * @since Dec 29, 2009
 */
public class OrderServiceImpl implements OrderService {

    private OrderDao orderDao;
    private MerchantService merchantService;
    private ReceivingService receivingService;
    @Autowired private TaskExecutor taskExecutor;

    public OrderServiceImpl(OrderDao orderDao, ReceivingService receivingService, MerchantService merchantService) {
        this.orderDao = orderDao;
        this.receivingService = receivingService;
        this.merchantService = merchantService;
    }

    public final void dispatch(final Order order) {
        logger.debug("Starting dispatch execution...");

        if (this.taskExecutor != null) {
            this.taskExecutor.execute(new Runnable() {
                public void run() {
                    executorAsync(order);
                }
            });
        }
        logger.debug("Completed dispatch execution...");
    }

    private final void executorAsync(final Order order) {
        logger.debug("Starting Async execution...");

        orderDao.createOrder(order);

        logger.debug("Completed Async execution...");
    }

/* Where the output will be: Note the dispatch method returns control to its caller before the async method begins:
2010-01-27 13:23:27,546 [main] DEBUG org.springsource.oms.infrastructure.TaskTests  - Starting execution thread...
2010-01-27 13:23:27,546 [main] DEBUG  - Starting dispatch execution...
2010-01-27 13:23:27,546 [main] DEBUG  - Completed dispatch execution...
2010-01-27 13:23:27,546 [taskExecutor-1] DEBUG  - Starting Async execution...
persisting org.springsource.oms.domain.entities.Order@1f10a67

    /* Alternately, for a different scenario, you can play around with this: */
    public void withExecutor(final Order order) {
        try {
            CompletionService<Object> completionService = new ExecutorCompletionService<Object>(taskExecutor);

            Future<Object> result1 = completionService.submit(new Callable<Object>() {
                public Object call() {
                    return daoDatasourceOne.createOrder(order);
                }
            });
            Future<Object> result2 = completionService.submit(new Callable<Object>() {
                public Object call() {
                    return daoDatasourceTwo.createOrder(order);
                }
            });

            // take() blocks until the next submitted task completes
            logger.debug("First completed: " + completionService.take().get());
            logger.debug("Second completed: " + completionService.take().get());
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        } catch (ExecutionException e) {
            logger.error("Async execution failed", e);
        }
    }
}

I recommend looking into the @Async annotation, which I will post on shortly. In the meantime, here is the reference page for Spring task execution and scheduling:
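As a teaser: assuming the task namespace is declared as in the config above, @Async support is enabled with a single element that points at your executor bean:

```
<task:annotation-driven executor="taskExecutor"/>
```

With that in place, annotating a method such as dispatch(Order) with @Async makes Spring submit it to the named executor, so the caller's thread returns immediately, much like the hand-rolled Runnable above but without the boilerplate.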


Transaction Isolation and Spring Transaction Management

I was asked a question by a Spring student of mine, and as it pertains to Spring Transaction Management, as well as transaction management and databases in general, I thought I'd share it in case it helps anyone else. The question went something like this:

How do we prevent concurrent modification of the same data? Two users are in the middle of a transaction on the same data. How do we isolate those operations so that other transactions cannot read the data, and how do we handle synchronizing changes to that data with commits? This deals with preserving data integrity, the underlying locking mechanisms of your database, transaction demarcation, and transaction isolation settings, which are configurable by Spring Transaction Management but ultimately controlled by your specific database vendor's implementation.

Some ways to think about transaction isolation are:

  1. The effects of one transaction are not visible to another until the transaction completes
  2. The degree of isolation one transaction has from the work of other transactions

So how do we keep one transaction from seeing uncommitted writes from other transactions and synchronize those writes?

Remember ACID = Atomicity, Consistency, Isolation and Durability?

The SQL standard defines four levels of transaction isolation in terms of three phenomena that must be prevented between concurrent transactions. These phenomena are:

Dirty read: A transaction reads data written by a concurrent uncommitted transaction.
Nonrepeatable read: A transaction re-reads data it has previously read and finds that data has been modified by another transaction (that committed since the initial read).
Phantom read: A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another recently-committed transaction.

For most databases, the default transaction isolation level is "read committed": each statement in your transaction sees only data that was committed before that statement began. You can, for example, configure transactions to be serializable, which increases lock contention but is better suited to critical operations. Here we could get into database locking, predicate locking, etc.; while that is outside the scope of this post, it is very worthwhile to know, and I do suggest reading up on it.

In the java layer, here are the standard isolation levels defined in the JDBC specification, in order of weakest to strongest isolation, with the respective inverse correlation on performance:

• TRANSACTION_NONE: transactions are not supported.
• TRANSACTION_READ_UNCOMMITTED: dirty reads, non-repeatable reads and phantom reads can occur.
• TRANSACTION_READ_COMMITTED: dirty reads are prevented; non-repeatable reads and phantom reads can occur.
• TRANSACTION_REPEATABLE_READ: dirty reads and non-repeatable reads are prevented; phantom reads can occur.
• TRANSACTION_SERIALIZABLE: dirty reads, non-repeatable reads and phantom reads are prevented.
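These levels are visible in plain JDBC as constants on java.sql.Connection; on a live connection you would pass one of them to setTransactionIsolation. A quick look at the constants:

```java
import java.sql.Connection;

public class IsolationConstants {

    public static void main(String[] args) {
        // Weakest to strongest; the int values are fixed by the JDBC spec.
        System.out.println("TRANSACTION_NONE = " + Connection.TRANSACTION_NONE);
        System.out.println("TRANSACTION_READ_UNCOMMITTED = " + Connection.TRANSACTION_READ_UNCOMMITTED);
        System.out.println("TRANSACTION_READ_COMMITTED = " + Connection.TRANSACTION_READ_COMMITTED);
        System.out.println("TRANSACTION_REPEATABLE_READ = " + Connection.TRANSACTION_REPEATABLE_READ);
        System.out.println("TRANSACTION_SERIALIZABLE = " + Connection.TRANSACTION_SERIALIZABLE);

        // On a real connection (not constructed here) you would call:
        // connection.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
    }
}
```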

Spring's Transaction Management isolation levels:

ISOLATION_DEFAULT: Use the default isolation level of the underlying datastore.

ISOLATION_READ_COMMITTED: Indicates that dirty reads are prevented; non-repeatable reads and phantom reads can occur. This level only prohibits a transaction from reading a row with uncommitted changes in it.

ISOLATION_REPEATABLE_READ: Indicates that dirty reads and non-repeatable reads are prevented; phantom reads can occur. This level prohibits a transaction from reading a row with uncommitted changes in it, and it also prohibits the situation where one transaction reads a row, a second transaction alters the row, and the first transaction rereads the row, getting different values the second time (a "non-repeatable read").

ISOLATION_SERIALIZABLE: Indicates that dirty reads, non-repeatable reads and phantom reads are prevented. This level includes the prohibitions in ISOLATION_REPEATABLE_READ and further prohibits the situation where one transaction reads all rows that satisfy a WHERE condition, a second transaction inserts a row that satisfies that WHERE condition, and the first transaction rereads for the same condition, retrieving the additional "phantom" row in the second read.

Note that the Spring isolation levels mirror the JDBC isolation levels (Spring also defines ISOLATION_READ_UNCOMMITTED, omitted above).

So, for XML configuration of transaction demarcation in Spring:

<tx:advice id="txAdvice" transaction-manager="transactionManager">
    <tx:attributes>
        <tx:method name="insert*" read-only="false" propagation="REQUIRED" isolation="READ_COMMITTED"/>
    </tx:attributes>
</tx:advice>

And for Annotation configuration:

In XML: <tx:annotation-driven/> to enable @Transactional.

In your service layer, on the class or method level:

public class DefaultFooService implements FooService {

    @Transactional(isolation = Isolation.SERIALIZABLE)
    public void createSomething(..) { ... }
}


Integrating Search into Applications with Spring

I have been doing some research and prototyping, and am pretty excited about Compass integration with Spring. Compass is a search engine abstraction built on top of Lucene that adds transactional support; it simplifies adding search functionality, working with it is similar to working with DAO patterns and calls, and all of Lucene's functionality remains available through Compass.

Some of the main objects are CompassSearchSession, CompassConfiguration, CompassSession, and CompassTransaction. There is full transaction rollback support and there are callbacks; you can easily abstract it into your application's framework to make calls incredibly simple, and if you are familiar with Spring, it is one step away from DAO work. Very clean, very simple. I will post some framework abstraction code this week.


How To Run Two ActiveMQ Brokers On A Laptop

I needed to set up and test ActiveMQ failover URIs and play around with broker topologies for optimization in a safe, controlled environment, one that no app servers or brokers outside of it could access. I have a Mac laptop with multiple VM operating systems so that I can easily work with any client/company requirements for a current project (Ubuntu, Windows, etc.). Here's what I did to set up two local test brokers:

1. Know both the native and virtual operating system host names. IP addresses will not work.

Explicitly setting IP addresses in the transport connector config will fail on the native OS, as it attempts to bind to the virtual IP.

2. Give each broker a unique name.

This is standard procedure for any multiple-broker topology on a network.

Installing, configuring, and running ActiveMQ is a no-brainer, so I won't cover that here, but the specific, simple configuration that allowed me to run multiple instances on one machine was:

3. Super basic configuration of the connectors, in the activemq.xml Spring config file.

Note: what I did here is set up two transport protocols, one for TCP and one for NIO.

Instance One: on the native laptop OS

<networkConnector name="nc-1" uri="multicast://default" userName="system" password="manager"/>

<transportConnector name="tcp" uri="tcp://nativeHostName:61616?trace=true" discoveryUri="multicast://default"/>
<transportConnector name="nio" uri="nio://nativeHostName:61618?trace=true" discoveryUri="multicast://default"/>

Instance Two: on the virtual machine’s OS

<networkConnector name="nc-2" uri="multicast://default" userName="system" password="manager"/>

<transportConnector name="tcp" uri="tcp://virtualHostName:61616?trace=true" discoveryUri="multicast://default"/>
<transportConnector name="nio" uri="nio://virtualHostName:61618?trace=true" discoveryUri="multicast://default"/>

It’s that easy.

4. Start the native then virtual ActiveMQ servers

5. Open a browser on both OSes; if you enabled the web console, at http://localhost:8161/admin/ you will see the broker interface with each unique broker name.
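With both brokers up, a client can then be pointed at a failover URI listing the two transports; this is the standard ActiveMQ failover transport syntax, using the same host names as the connector config above:

```
failover:(tcp://nativeHostName:61616,tcp://virtualHostName:61616)?randomize=false
```

With randomize=false the client always tries the first broker and fails over to the second; leave it at the default (true) to distribute clients across both.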


SuMQ Messaging Framework

I’ve just started staging my new project, SuMQ, and appreciate patience as this will take me a while to get the code standardized and on google.

SuMQ is a lightweight enterprise messaging framework built in Java, leveraging Spring, JMS, and ActiveMQ. It plugs into Flex Messaging via BlazeDS for the client. This can also be configured for clients other than Flex.

The sample will be ready for clustered BlazeDS instances and load balanced Application Servers.

