Helena Edelson

Akka, Spark, Scala, Cassandra, Big Data, Data Science, Cloud Computing, Machine Learning, Distributed Architecture


Archive for the 'Spring' Category

Configuring @Configuration ApplicationContext for Spring Test Framework @ContextConfiguration

Posted by Helena Edelson on 23rd September 2010

Here is a @Configuration class, RabbitTestConfiguration, truncated for the sake of a simple example. Bootstrapping it for testing with Spring's Test Framework is simple. First, make sure the class uses @ImportResource to map, via classpath, to your XML, which here has two simple declarations:

<?xml version="1.0" encoding="UTF-8"?>
<beans:beans xmlns:beans="http://www.springframework.org/schema/beans"
             xmlns="http://www.springframework.org/schema/context"
             xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
             xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
             http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <component-scan base-package="org.hyperic.hq.plugin.rabbitmq"/>

    <property-placeholder location="/etc/test.properties"/>

</beans:beans>


Next, make sure you remove the @Configuration annotation declared at the class level. We will be bootstrapping this a different way.

public class RabbitTestConfiguration {

    private @Value("${hostname}") String hostname;

    private @Value("${username}") String username;

    private @Value("${password}") String password;

    @Bean
    public SingleConnectionFactory singleConnectionFactory() {
        SingleConnectionFactory connectionFactory = new SingleConnectionFactory(hostname);
        return connectionFactory;
    }
    // ... shortened for brevity
}

Now let's build an abstract Spring base test:

/**
 * AbstractSpringTest
 * @author Helena Edelson
 */
@ContextConfiguration(loader = TestContextLoader.class)
public abstract class AbstractSpringTest {

    /** Inheritable logger */
    protected final Log logger = LogFactory.getLog(this.getClass().getName());

    /** Now we can autowire the beans that all child tests will need. Note that they are protected. */
    @Autowired
    protected org.springframework.amqp.rabbit.connection.SingleConnectionFactory singleConnectionFactory;

    @Before
    public void before() {
        assertNotNull("singleConnectionFactory should not be null", singleConnectionFactory);
        //... more assertion checks for other beans, removed for brevity.
    }
}

And finally, build a test context loader, override customizeContext(), and bootstrap your annotated config class. Since the config class imports the minimal XML context config, we're all set.

/**
 * TestContextLoader
 * @author Helena Edelson
 */
public class TestContextLoader extends GenericXmlContextLoader {

    @Override
    protected void customizeContext(GenericApplicationContext context) {
        AnnotationConfigApplicationContext ctx = new AnnotationConfigApplicationContext();
        ctx.setParent(context);
        /* This is really the key: register and refresh the annotated config class */
        ctx.register(RabbitTestConfiguration.class);
        ctx.refresh();
    }
}

Now all of my Spring test classes can simply extend the base class and freely use the inherited shared beans, or declare any @Autowired dependency of their own:

public class RabbitGatewayTest extends AbstractSpringTest {

    @Autowired protected Queue marketDataQueue;

    @Test
    public void getConnections() throws Exception {
        com.rabbitmq.client.Connection conn = singleConnectionFactory.createConnection();
        // ... etc
    }
}

Posted in Annotations, Java, Open Source, Spring | No Comments »

Open Artificial Intelligence for the Cloud – Initial Diagram

Posted by Helena Edelson on 20th September 2010

This is a working (initial) idea of where an AI appliance might fit into a virtualized, elastic environment. I created it early this summer and now have many changes to make, so I will update this concept soon. The parent post: Open Source Artificial Intelligence for the Cloud.

© Copyright – not for re-use.


Posted in AI, Artificial Intelligence, Cloud, Cloud Ops, Messaging, Open Source, RabbitMQ, Spring | No Comments »

Open Source Artificial Intelligence for the Cloud

Posted by Helena Edelson on 18th September 2010

The idea of introducing intelligent computing so that software can make decisions autonomously and in real time is not new, but it is relatively new to the cloud. Microsoft seems to be the strongest force in this realm currently, but its work is not open source. The idea of clouds that manage themselves steps well beyond Cloud Ops as we know it, and is the only logical, and necessary, next step.

I've been researching various open source AI platforms and learning algorithms to start building a prototype for cloud-based learning and action for massive distributed Cloud Ops systems and applications. One could offer an AI cloud appliance eventually, but I will start with a prototype and build on it using RabbitMQ (an AMQP messaging implementation), CEP, Spring, and Java as a base. I've been looking into OpenStack, the open source cloud computing software being developed by NASA, RackSpace, and many others. Here is the future GitHub repo: http://github.com/helena/OCAI.

Generally, to do this sort of project you need a large amount of funding and engineers with PhDs, none of which I have. So my only alternative is to see how we might take this concept and create a lightweight open source solution.

Imagine having some of these capabilities (not a comprehensive list) available to your systems:

AI Technology

  • Planning and Scheduling
  • Heuristic Search
  • Temporal Reasoning
  • Learning Models
  • Intelligent Virtual Agents
  • Autonomic and Adaptive Clustering of Distributed Agents
  • Clustering Autonomous Entities
  • Constraint Predictability and Resolution
  • Automated Intelligent Provisioning
  • Real-time learning
  • Real-world action
  • Uncertainty and Probabilistic Reasoning
  • Decision Making
  • Knowledge Representation

I will go into these topics further in future posts.

About me: I am currently an engineer at SpringSource/VMware on the vFabric Cloud Application Platform side of things; however, this project is wholly outside of that, and I feel the need to pursue something I'm interested in. I work with the RabbitMQ team and am a member of the AMQP Working Group.


Posted in AI, AMQP, Artificial Intelligence, CEP, Cloud, Cloud Ops, Java, Messaging, Open Source, RabbitMQ, Software Development, Spring | No Comments »

Spring 3.0.4 is now available

Posted by Helena Edelson on 21st August 2010

Spring 3.0.4 is now available: view Arjen's post


Posted in Java, Spring | No Comments »

JMX and MBean Support With Spring

Posted by Helena Edelson on 17th May 2010

This post is simply about how to use the Spring Framework to export your Spring-managed POJOs for management and monitoring via JMX. Later I'll post on using Hyperic, in the cloud, etc. First things first, as this is an update of a similar post of mine from '08 or early '09.

Let's start by adding the Spring JMX dependency, which will look something like this, depending on the repositories you are using:
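The original snippet was omitted here; for Maven, the dependency might look like the following (the coordinates and version are my assumption for a Spring 3.0.x build; Spring's JMX support ships in the context module):

```xml
<dependency>
    <groupId>org.springframework</groupId>
    <artifactId>spring-context</artifactId>
    <version>3.0.4.RELEASE</version>
</dependency>
```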


Service Pojo

Now let’s set up some java classes for management. I have a business service that I want to monitor and a pojo to instrument.

@ManagedResource(objectName = "bean:name=inventoryManager", description = "Inventory Service",
        log = true, logFile = "oms.log", currencyTimeLimit = 15, persistPolicy = "OnUpdate", persistPeriod = 200,
        persistLocation = "foo", persistName = "bar")
@BusinessService
public class InventoryServiceImpl implements InventoryService {

    @Autowired private InventoryDao inventoryDao;

    @ManagedOperation(description = "Add two numbers")
    @ManagedOperationParameters({
            @ManagedOperationParameter(name = "x", description = "The first number"),
            @ManagedOperationParameter(name = "y", description = "The second number")})
    public int add(int x, int y) {
        return x + y;
    }

    @ManagedOperation(description = "Get inventory levels")
    @ManagedOperationParameters({@ManagedOperationParameter(name = "product", description = "The Product")})
    public long getInventoryLevel(Product product) {
        return getInventoryLevel(product.getSkew());
    }
}

First, let's peek into my @BusinessService annotation in case you are wondering:

@Target({ElementType.TYPE, ElementType.ANNOTATION_TYPE})
@Retention(RetentionPolicy.RUNTIME)
@Transactional
@Service
public @interface BusinessService {
}

Annotating any of my service-layer POJOs makes them transactional and creates an instance in the Spring context.
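To see why this meta-annotation approach works, here is a pure-JDK sketch (all names are mine, standing in for Spring's stereotypes) showing that a class's annotations can themselves be inspected for annotations, which is how a component scanner recognizes a custom stereotype:

```java
import java.lang.annotation.Annotation;
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

public class MetaAnnotationDemo {

    // A stand-in for Spring's @Service stereotype.
    @Target(ElementType.ANNOTATION_TYPE)
    @Retention(RetentionPolicy.RUNTIME)
    @interface Service { }

    // A custom stereotype meta-annotated with the stand-in @Service.
    @Service
    @Target(ElementType.TYPE)
    @Retention(RetentionPolicy.RUNTIME)
    @interface BusinessService { }

    @BusinessService
    static class InventoryServiceImpl { }

    // True if the class carries an annotation that is itself annotated with @Service.
    static boolean hasMetaAnnotation(Class<?> clazz) {
        for (Annotation a : clazz.getAnnotations()) {
            if (a.annotationType().isAnnotationPresent(Service.class)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(hasMetaAnnotation(InventoryServiceImpl.class)); // prints "true"
    }
}
```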

Entity Pojo

Now here is a simple pojo as an entity or javabean, what have you:

@ManagedResource(objectName = "bean:name=myPojoEntity", description = "My Managed Bean", log = true,
        logFile = "oms.log", currencyTimeLimit = 15, persistPolicy = "OnUpdate", persistPeriod = 200,
        persistLocation = "foo", persistName = "bar")
public class MyPojo {

    private long somethingToTuneInRuntime;

    /* Creates a writeable attribute for managing */
    @ManagedAttribute(description = "Tunable In Runtime Attribute",
            currencyTimeLimit = 20,
            defaultValue = "bar",
            persistPolicy = "OnUpdate")
    public void setSomethingToTuneInRuntime(long value) {
        this.somethingToTuneInRuntime = value;
    }

    @ManagedAttribute(defaultValue = "foo", persistPeriod = 300)
    public long getSomethingToTuneInRuntime() {
        return somethingToTuneInRuntime;
    }
}

Spring Config

Now let’s configure Spring to autoregister our pojos to export and manage/monitor:
Create a jmx-context.xml file in your WEB-INF/* dir

Add: <context:mbean-export/>

This activates default exporting of MBeans by detecting standard MBeans in the Spring context, as well as @ManagedResource annotations on Spring-defined beans. Rather than defining an MBeanExporter bean, just provide this single element. I could walk you through simple, simpler, and simplest configs of Spring JMX, but with annotation-based config in the simplest case, this is all you need to do to get up and running. If you need object naming in multiple-VM situations, you can easily do that, and other things too, but that's out of scope for this post ;)
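Putting it together, a minimal jmx-context.xml might look like the following sketch (the base package and schema versions are placeholders, not from the original post):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
                           http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd">

    <!-- Exports standard MBeans and @ManagedResource-annotated beans automatically -->
    <context:mbean-export/>

    <!-- Detects the annotated POJOs shown above -->
    <context:component-scan base-package="org.foo"/>

</beans>
```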


Posted in Spring, Spring JMX | No Comments »

Spring DAO Exception Translation with @Repository

Posted by Helena Edelson on 5th April 2010

In Spring, when you mark a DAO/Repository class with the Spring stereotype annotation @Repository:

@Repository
public class MyDaoImpl implements MyDao { .. }

Spring creates an instance of it in the IoC container, as with any other stereotype annotation (@Component, @Service), but it can also add exception translation if you explicitly add this bean declaration to your config:

<bean class="org.springframework.dao.annotation.PersistenceExceptionTranslationPostProcessor"/>

This enables translation of native runtime resource exceptions that would otherwise be vendor-specific (database: Oracle, etc.; ORM: Hibernate, JPA, etc.) to Spring's runtime exception hierarchy, regardless of which vendors you use over time. As it simply enables a feature in the Spring Framework, versus something you would use directly, it does not require a bean id.

Also, since the stereotype annotations can be used as meta-annotations, you can easily enrich behavior:

@Repository
@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface MyRepository { }

@MyRepository
public class OtherFooDao implements FooDao { .. }


Posted in Annotations, Spring | No Comments »

Spring Framework 3.0.2 is Released

Posted by Helena Edelson on 4th April 2010

Read Juergen’s blog post about the latest release and get the code


Posted in Spring | No Comments »

Simple Asynchronous Processing with Spring’s TaskExecutor

Posted by Helena Edelson on 27th January 2010

This post is merely meant as a starting guide for tinkering with a lightweight solution for handing off a task for async processing, without the overhead of Spring Batch or Spring JMS and message brokers, among other middleware solutions.

1. I have a simplistic JUnit test that merely kicks off the service method so we can view the path of execution:

/**
 * TaskTests
 * @author Helena Edelson
 * @since v 1.0
 */
public class TaskTests extends BaseTest {

    protected static final Logger logger = Logger.getLogger(TaskTests.class);

    @Autowired private OrderService orderService;

    @Test
    public void testExecution() {
        logger.debug("Starting execution thread...");
        orderService.dispatch(new Order());
    }
}

2. A simple context config:

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
       xmlns:p="http://www.springframework.org/schema/p"
       xmlns:aop="http://www.springframework.org/schema/aop"
       xmlns:context="http://www.springframework.org/schema/context"
       xmlns:task="http://www.springframework.org/schema/task"
       xsi:schemaLocation="
http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-3.0.xsd
http://www.springframework.org/schema/aop http://www.springframework.org/schema/aop/spring-aop-3.0.xsd
http://www.springframework.org/schema/context http://www.springframework.org/schema/context/spring-context-3.0.xsd
http://www.springframework.org/schema/task http://www.springframework.org/schema/task/spring-task-3.0.xsd">

<bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor" p:corePoolSize="5" p:maxPoolSize="25"/>

<!-- OR alternately: creates an org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor -->
<task:executor id="taskExecutor" pool-size="5-25" queue-capacity="100" rejection-policy="CALLER_RUNS"/>

</beans>


3. OrderService that delegates to the ThreadPoolTaskExecutor:

/**
 * OrderServiceImpl is used for both orders and returns
 * @author Helena Edelson
 * @since Dec 29, 2009
 */
@MessageEndpoint /* which is a @Component */
public class OrderServiceImpl implements OrderService {

    protected static final Logger logger = Logger.getLogger(OrderServiceImpl.class);

    private OrderDao orderDao;
    private MerchantService merchantService;
    private ReceivingService receivingService;
    @Autowired private TaskExecutor taskExecutor;

    public OrderServiceImpl(OrderDao orderDao, ReceivingService receivingService, MerchantService merchantService) {
        this.orderDao = orderDao;
        this.receivingService = receivingService;
        this.merchantService = merchantService;
    }

    public final void dispatch(final Order order) {
        logger.debug("Starting dispatch execution...");

        if (this.taskExecutor != null) {
            this.taskExecutor.execute(new Runnable() {
                public void run() {
                    executorAsync(order);
                }
            });
        }
        logger.debug("Completed dispatch execution...");
    }

    private final void executorAsync(final Order order) {
        logger.debug("Starting Async execution...");
        orderDao.createOrder(order); // the async work; the log output below shows the order being persisted
        logger.debug("Completed Async execution...");
    }
}

/* Where the output will be: Note the dispatch method returns control to its caller before the async method begins:
2010-01-27 13:23:27,546 [main] DEBUG org.springsource.oms.infrastructure.TaskTests  - Starting execution thread...
2010-01-27 13:23:27,546 [main] DEBUG org.springsource.oms.domain.services.OrderServiceImpl  - Starting dispatch execution...
2010-01-27 13:23:27,546 [main] DEBUG org.springsource.oms.domain.services.OrderServiceImpl  - Completed dispatch execution...
2010-01-27 13:23:27,546 [taskExecutor-1] DEBUG org.springsource.oms.domain.services.OrderServiceImpl  - Starting Async execution...
persisting org.springsource.oms.domain.entities.Order@1f10a67

* Alternately, for a different scenario, you can play around with this:

public void withExecutor(final Order order) {
    try {
        CompletionService<Order> completionService = new ExecutorCompletionService<Order>(taskExecutor);

        completionService.submit(new Callable<Order>() {
            public Order call() {
                return daoDatasourceOne.createOrder(order);
            }
        });
        completionService.submit(new Callable<Order>() {
            public Order call() {
                return daoDatasourceTwo.createOrder(order);
            }
        });

        Order result1 = completionService.take().get();
        Order result2 = completionService.take().get();

    } catch (InterruptedException e) {
        Thread.currentThread().interrupt();
    } catch (ExecutionException e) {
        logger.error("Async task failed", e);
    }
}

I recommend looking into the @Async annotation, which I will post on shortly. In the meantime, here is the reference page for Spring task execution and scheduling: http://static.springsource.org/spring/docs/3.0.x/reference/html/scheduling.html
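Under the hood this pattern is plain java.util.concurrent; here is a self-contained, Spring-free sketch (class and method names are mine) of the same CompletionService hand-off:

```java
import java.util.concurrent.Callable;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class CompletionDemo {

    // Submit two independent tasks and consume results in completion order,
    // mirroring the ExecutorCompletionService pattern above.
    static int sumInCompletionOrder() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        CompletionService<Integer> cs = new ExecutorCompletionService<Integer>(pool);
        cs.submit(new Callable<Integer>() {
            public Integer call() { return 40; }
        });
        cs.submit(new Callable<Integer>() {
            public Integer call() { return 2; }
        });
        int sum = 0;
        try {
            for (int i = 0; i < 2; i++) {
                sum += cs.take().get(); // take() blocks until the next task completes
            }
        } catch (Exception e) {
            throw new RuntimeException(e);
        } finally {
            pool.shutdown();
        }
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumInCompletionOrder()); // prints 42
    }
}
```

Because take() returns futures in completion order, the consumer never blocks on a slow task while a fast one is already finished.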


Posted in Annotations, Concurrency, Java, Spring, Spring Task | No Comments »

Spring Roo 1.0.0 Released

Posted by Helena Edelson on 31st December 2009

Just released today!



Posted in Configuration Management, Java, Software Development, Spring, Spring ROO | No Comments »

Spring Roo

Posted by Helena Edelson on 22nd October 2009

Tuesday I went to Ben Alex's presentation on Roo at SpringOne. He literally built a basic but pretty comprehensive web application in ten minutes and walked us through it. I'll write some more about it soon, but the main project site is springsource.org/roo, which pretty much says it all.


Posted in Java, Software Development, Spring, Spring ROO | No Comments »

Transaction Isolation and Spring Transaction Management

Posted by Helena Edelson on 21st October 2009

I was asked a question by a Spring student of mine, and as it pertains to Spring Transaction Management, as well as transaction management and databases in general, I thought I'd share it in case it helps anyone else. The question went something like this:

How do we prevent concurrent modification of the same data? Two users are in the middle of a transaction on the same data. How do we isolate those operations so that other transactions cannot read the data, and how do we handle synchronizing changes to that data with commits? This deals with preserving data integrity, the underlying locking mechanisms of your database, transaction demarcation, and transaction isolation settings, which are configurable with Spring Transaction Management but ultimately controlled by your specific database vendor's implementation.

Some ways to think about transaction isolation are:

  1. The effects of one transaction are not visible to another until the transaction completes
  2. The degree of isolation one transaction has from the work of other transactions

So how do we keep one transaction from seeing uncommitted writes from other transactions and synchronize those writes?

Remember ACID = Atomicity, Consistency, Isolation and Durability?

The SQL standard defines four levels of transaction isolation in terms of three phenomena that must be prevented between concurrent transactions. These phenomena are:

Dirty read: A transaction reads data written by a concurrent uncommitted transaction.
Nonrepeatable read: A transaction re-reads data it has previously read and finds that data has been modified by another transaction (that committed since the initial read).
Phantom read: A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another recently-committed transaction.

For most databases, the default transaction isolation level is "read committed": each statement in your transaction sees only data that was committed before that statement began. You can, for example, configure transactions to be serializable, which increases lock contention but is better suited for critical operations. Here we could get into database locking, predicate locking, etc.; while that is outside the scope of this post, it is very worthwhile to know, and I do suggest reading up on it.

In the java layer, here are the standard isolation levels defined in the JDBC specification, in order of weakest to strongest isolation, with the respective inverse correlation on performance:

• TRANSACTION_NONE: transactions are not supported.
• TRANSACTION_READ_UNCOMMITTED: dirty reads, non-repeatable reads and phantom reads can occur.
• TRANSACTION_READ_COMMITTED: dirty reads are prevented; non-repeatable reads and phantom reads can occur.
• TRANSACTION_REPEATABLE_READ: dirty reads and non-repeatable reads are prevented; phantom reads can occur.
• TRANSACTION_SERIALIZABLE: dirty reads, non-repeatable reads and phantom reads are prevented.
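These levels surface in the JDK as int constants on java.sql.Connection, applied per connection via setTransactionIsolation(); a tiny sketch (the class name is mine) that prints them:

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // Plain int constants; apply one per connection with
        // conn.setTransactionIsolation(Connection.TRANSACTION_SERIALIZABLE);
        System.out.println(Connection.TRANSACTION_NONE);             // prints 0
        System.out.println(Connection.TRANSACTION_READ_UNCOMMITTED); // prints 1
        System.out.println(Connection.TRANSACTION_READ_COMMITTED);   // prints 2
        System.out.println(Connection.TRANSACTION_REPEATABLE_READ);  // prints 4
        System.out.println(Connection.TRANSACTION_SERIALIZABLE);     // prints 8
    }
}
```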

Spring's Transaction Management isolation levels:

ISOLATION_DEFAULT: Use the default isolation level of the underlying datastore.

ISOLATION_READ_COMMITTED: Indicates that dirty reads are prevented; non-repeatable reads and phantom reads can occur. This level only prohibits a transaction from reading a row with uncommitted changes in it.

ISOLATION_REPEATABLE_READ: Indicates that dirty reads and non-repeatable reads are prevented; phantom reads can occur. This level prohibits a transaction from reading a row with uncommitted changes in it, and it also prohibits the situation where one transaction reads a row, a second transaction alters the row, and the first transaction rereads the row, getting different values the second time (a "non-repeatable read").

ISOLATION_SERIALIZABLE: Indicates that dirty reads, non-repeatable reads and phantom reads are prevented. This level includes the prohibitions in ISOLATION_REPEATABLE_READ and further prohibits the situation where one transaction reads all rows that satisfy a WHERE condition, a second transaction inserts a row that satisfies that WHERE condition, and the first transaction rereads for the same condition, retrieving the additional "phantom" row in the second read.

Note the Spring isolation and JDBC isolation levels are the same.

So for XML configuration of transaction demarcation in Spring:

<tx:advice id="txAdvice" transaction-manager="transactionManager">
    <tx:attributes>
        <tx:method name="insert*" read-only="false" propagation="REQUIRED" isolation="READ_COMMITTED"/>
    </tx:attributes>
</tx:advice>

And for Annotation configuration:

In XML: <tx:annotation-driven/> to enable @Transactional.

Then in your service layer, at the class or method level:

@Transactional
public class DefaultFooService implements FooService { ... }

@Transactional(isolation = Isolation.SERIALIZABLE)
public void createSomething(..) { ... }


Posted in AOP, Java, Spring, Transaction Management | No Comments »

Integrating Search into Applications with Spring

Posted by Helena Edelson on 12th October 2009

I have been doing some research and prototyping, and am pretty excited about Compass integration with Spring. Compass simplifies search functionality: it is built on top of Lucene as a search-engine abstraction, extending Lucene and adding transactional support. Working with it is similar to working with DAO patterns and calls, and all of Lucene's functionality is available through Compass.

Some of the main objects are CompassSearchSession, CompassConfiguration, CompassSession, and CompassTransaction. There is full tx rollback support and callbacks; you can easily abstract it into your application's framework to make calls incredibly simple, and if you are familiar with Spring, it is one step away from DAO work. Very clean, very simple. I will post some framework abstraction code this week.


Posted in Java, Search, Spring | No Comments »

Application Configuration with Spring and Java Annotations

Posted by Helena Edelson on 25th July 2009

There is an ongoing debate about best practices for configuration management: XML with schema support versus annotation metadata. I can make a valid argument either way, and I think the choice should be made on a case-by-case basis. But is XML configuration so bad? I like using annotation configuration for things like Spring Web Services and Spring @MVC, where the configuration is specific to the method and arguments, and where if you modify code you are likely to modify configuration. I also like schema config with namespace support for cases where a central, easily maintainable area of configuration, keeping all code clean of metadata, is preferred. One easy way to clean up necessary XML config is to use namespaces.

“Namespaces dramatically improve the Spring XML landscape
• More expressive, less verbose
• Just ask Spring Security where its 200+ lines of config went!”
- Chris Beams, Project Lead, Spring JavaConfig

Also with namespaces, in some cases the best practices configuration is already done for you and you can easily leverage convention over configuration to also reduce verbosity.
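To make that concrete, here is a generic before/after sketch (my own example, not from the original post) of a namespace element replacing a hand-wired bean definition:

```xml
<!-- Verbose: the explicit bean definition -->
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
    <property name="location" value="classpath:app.properties"/>
</bean>

<!-- Expressive: the equivalent context-namespace shorthand -->
<context:property-placeholder location="classpath:app.properties"/>
```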

Item two in the list above is absolutely true. When you upgrade from Acegi Security to Spring Security, the difference in configuration volume is night and day, and very cool. Perhaps they realized that if you require users to configure so many framework objects for one application layer, such as security or messaging, you haven't done your users a service. They simply auto-register what amounts to best practices, letting the user override and add to the security context as needed. Now that's helpful.

So what about annotations? I am currently on a large-scale enterprise migration project. Originally we decided to go with XML because the team was learning Java and Flex, and we didn't want to add annotations to the mix. As the migrated application's codebase grows with domain logic, and in parallel its config files for business and framework services, I am moving toward annotations, at least in my own committed code. At first it was a "Let's not pollute and add dependencies in the code" issue; now it is a "Component scanning, very cool... automated registration... no config files to maintain and refactor" issue.

I am blogging, not writing a thesis, so yes, there are more valid arguments, which I leave to others; let's get to it and look at code. The main topics I'll cover briefly are JSR-250 support, Spring's @Autowired, using qualifiers, and component scanning.

JSR-250 Support

Three of the JSR-250 annotations defined in Java EE 5, and available out of the box in Java SE 6, are @Resource, @PostConstruct, and @PreDestroy. There are more in common-annotations.jar.

Enabling Spring’s JSR-250 support
This is as simple as implementing one of these two options in your application’s context.xml:

1. Old school:

<bean class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor"/>

2. Namespace, which offers greater functionality than the above option:

<context:annotation-config/>

You can implicitly or explicitly name your resource. This is implicit naming by property:

@Resource
private DataSource securityDataSource;

public void setSecurityDataSource(DataSource sds) {
    this.securityDataSource = sds;
}

Alternatively, you can name it with explicit metadata:

@Resource(name = "securityDataSource")
private DataSource dataSource;

public void setDataSource(DataSource dataSource) {
    this.dataSource = dataSource;
}

And if needed, you can disable type-matching fallback as well:

<bean class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor">
    <property name="fallbackToDefaultTypeMatch" value="false"/>
</bean>


Lifecycle Annotations: @PostConstruct and @PreDestroy

In some of my framework services I use two of Spring's interfaces, InitializingBean and DisposableBean, and their respective methods afterPropertiesSet() and destroy(). Another, non-programmatic option is annotations, which I think make the code much cleaner and less verbose:

@PostConstruct public void initialize() { // on post setter injection }

@PreDestroy public void doShutdown() { // on close context }

Yet another option is if you have a need for creating custom initialize and destroy annotations you can configure them like so:

<bean class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor">
    <property name="initAnnotationType" value="com.edelsonmedia.infrastructure.annotations.Initialize"/>
    <property name="destroyAnnotationType" value="com.edelsonmedia.infrastructure.annotations.Destroy"/>
</bean>

...which is cool, because I like writing annotations for increased customization.


@Autowired

@Autowired resolves dependencies by type and can be applied to fields, methods, and constructors. There are no method naming-convention restrictions, and multiple parameters are accepted.

In Spring 2.5, using the context namespace with <context:annotation-config/> automatically enables the @Autowired annotation.

Field injection:
@Autowired private DataSource dataSource;

Setter Injection:
@Autowired public void setDataSource(DataSource dataSource){...}

Constructor Injection:
@Autowired public ObjectRepository(DataSource dataSource){...}

Method Injection:
@Autowired public void doSetup(DataSource dataSource, Company company) {...}

Attributes for further configuration:

This marks an optional dependency, but note that if more than one match is found, injection will fail:

@Autowired(required = false)
private SomeObject obj;

This configures a primary candidate among the other types:

<bean id="dataSource" primary="true" class="org.apache.commons.dbcp.BasicDataSource" … />
<bean id="backupDataSource" class="org.apache.commons.dbcp.BasicDataSource" … />

The @Qualifier annotation is scoped for field, constructor arg and method parameters. This annotation offers named matching functionality to the @Autowired annotation, finer granularity for autowire candidate resolution, and an extension point for custom autowiring qualifiers, which I will show below:

Field Injection:

@Autowired
@Qualifier("securityDataSource")
private DataSource dataSource;

Setter Injection:

@Autowired
public void setDataSource(@Qualifier("securityDataSource") DataSource dataSource) {
    this.dataSource = dataSource;
}

Multiple-Parameter Method Injection:

@Autowired
public void setup(@Qualifier("securityDataSource") DataSource dataSource, SomeObject obj) {
    this.dataSource = dataSource;
    this.obj = obj;
}

Constructor Injection:

@Autowired
public AbstractedRepository(@Qualifier("securityDataSource") DataSource dataSource) {
    this.dataSource = dataSource;
}

@Qualifier As Meta-Annotation for Extended or Custom Qualifiers

Define it:

@Target({ElementType.FIELD, ElementType.PARAMETER})
@Retention(RetentionPolicy.RUNTIME)
@Qualifier
public @interface VMLoaded { … }

Now use the annotation with @Autowired:

@Autowired
@VMLoaded /* If the annotation provides meaning as is, no value is necessary */
private BootstrapManager bootstrapManager;

If you want to register a custom annotation without using @Qualifier as a meta-annotation:

<bean class="org.springframework.beans.factory.annotation.CustomAutowireConfigurer">
    <property name="customQualifierTypes">
        <set>
            <!-- fully-qualified name of your qualifier annotation -->
            <value>foo.VMLoaded</value>
        </set>
    </property>
</bean>
You can define attributes with custom qualifiers. A value attribute can match against a bean name, just as it does for @Qualifier:

public @interface CompanyCatalog {
    String company();
    String type();
}

Attributes can resolve against XML metadata or class-level annotation metadata:

@CompanyCatalog(company = "someCompany", type = "principle")
public class CompanyCatalogImpl implements Catalog {
    //… etc
}

Stereotype Annotations

@Component – a generic component
@Repository – a repository (DAO)
@Service – a stateless service of idempotent operations
@Controller – an MVC controller

All auto-detected components are implicitly named from the non-qualified class name. This:

@Controller
public class AController { … }

is equivalent to:

<bean id="aController" class="org.foo.web.controller.AController"/>

You can also set the generated name explicitly, where this:

@Controller("aCatalog")
public class AController { … }

is equivalent to:

<bean id="aCatalog" class="org.foo.web.controller.AController"/>

So that whole services-config.xml file you may have, which grows and grows, can now be migrated to annotations in your classes, and you can delete the config file.

Component Scanning
Spring 2.5 included a new class, ClassPathBeanDefinitionScanner, which accepts packages passed in as arguments and detects any class with declared stereotypes while scanning the base package and its sub-packages. Using component scanning is easy: simply add the context namespace to your main schema config and add:

<context:component-scan base-package=“org.foo”/>

Now you are set to customize the component scanner if needed. By declaring the @Component stereotype on a custom annotation, you can have any class decorated with that annotation detected, simply via the configuration addition above and:

@Component
public @interface MyAnnotation { ... }

@MyAnnotation
public class MyClass { ... }

But what if you need to filter the package scanning? Below demonstrates how to include components with a custom (annotation) filter, plus the assignable, aspectj, and regex filters:

<context:component-scan base-package="org.foo">
    <!-- custom filter -->
    <context:include-filter type="annotation" expression="foo.Bar"/>
    <context:include-filter type="assignable" expression="foo.Baz"/>
    <context:include-filter type="aspectj" expression="foo..*Service"/>
    <context:include-filter type="regex" expression="foo\.B[a-z]+"/>
</context:component-scan>

For further customization, you can disable the default filters or stereotypes, and exclude filters:

<context:component-scan base-package="org.example.web" use-default-filters="false">
    <context:include-filter type="annotation" expression="foo.Bar"/>
    <context:include-filter type="aspectj" expression="foo..*Service"/>
    <context:exclude-filter type="assignable" expression="foo.Bad"/>
</context:component-scan>

Scoping Components
As with bean definitions in XML, the default scope is singleton. To use any other scope, add the @Scope annotation:

@Controller
@Scope("request")
public class MyController { … }

@Component
@Scope("session")
public class SomeWebComponent { … }
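In XML, the equivalent is the bean element's scope attribute. A minimal sketch, assuming the web-tier request and session scopes and an org.foo.web package for the classes above:

```xml
<bean id="myController" class="org.foo.web.MyController" scope="request"/>
<bean id="someWebComponent" class="org.foo.web.SomeWebComponent" scope="session"/>
```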

Best Practices
In conclusion, you can use annotations and XML configuration together, leveraging the best of both within one application context. Here are some thoughts from Juergen Hoeller, a Principal Engineer at SpringSource, on the topic:
Annotation metadata is in the code

  • Pro: facilitates refactoring
  • Con: forces recompilation

XML externalizes the configuration

  • Pro: configuration is not scattered
  • Con: XML is verbose

Find out more: JSR-250 spec


Posted in Annotations, Application Configuration, Configuration Management, Flex, Java, JMS, Software Development, Spring | No Comments »

How To Run Two ActiveMQ Brokers On A Laptop

Posted by Helena Edelson on 30th April 2009

I needed to set up and test ActiveMQ failover URIs and experiment with broker topologies for optimization in a safe, controlled environment, one where no app servers or brokers outside it could connect. I have a Mac laptop with multiple VM operating systems (Ubuntu, Windows, etc.) so that I can easily match any client/company requirements for a current project. Here's what I did to set up two local test brokers:

1. Know both the native and virtual operating system host names. IP addresses will not work

Explicitly setting IP addresses in the transportConnector config will fail on the native OS as it attempts to bind to the virtual IP

2. Give each Broker a unique name

Standard procedure for any multiple Broker topology on a network

Installing, configuring, and running ActiveMQ is a no-brainer, so I won't cover that here, but the specific, simple configuration that let me run multiple instances on one machine was:

3. Super basic configuration of the connectors: In the activemq.xml Spring config file

Note: what I did here is set up two transport protocols, one for TCP, one for NIO

Instance One: on the native laptop OS

<networkConnector name="nc-1" uri="multicast://default" userName="system" password="manager"/>

<transportConnector name="tcp" uri="tcp://nativeHostName:61616?trace=true" discoveryUri="multicast://default"/>
<transportConnector name="nio" uri="nio://nativeHostName:61618?trace=true" discoveryUri="multicast://default"/>

Instance Two: on the virtual machine’s OS

<networkConnector name="nc-2" uri="multicast://default" userName="system" password="manager"/>

<transportConnector name="tcp" uri="tcp://virtualHostName:61616?trace=true" discoveryUri="multicast://default"/>
<transportConnector name="nio" uri="nio://virtualHostName:61618?trace=true" discoveryUri="multicast://default"/>

It’s that easy.
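With both brokers networked over multicast, a JMS client can ride out a broker outage using ActiveMQ's failover transport. A minimal client-side URI sketch, using the placeholder host names from the configs above:

```
failover:(tcp://nativeHostName:61616,tcp://virtualHostName:61616)?randomize=false
```

Here randomize=false keeps clients pinned to the first broker until it fails; drop it to spread connections across both brokers.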

4. Start the native then virtual ActiveMQ servers

5. Open a browser on both OSs; if you enabled the web console, at http://localhost:8161/admin/ you will see the broker interface with each unique broker name.


Posted in ActiveMQ, Java, JMS, JMS Broker, Spring, Spring JMS | No Comments »

SuMQ Messaging Framework

Posted by Helena Edelson on 24th March 2009

I've just started staging my new project, SuMQ. Please be patient, as it will take me a while to standardize the code and get it onto Google Code.

SuMQ is a lightweight enterprise messaging framework built in Java, leveraging Spring, JMS, and ActiveMQ. It plugs into Flex messaging via BlazeDS on the client side, and can also be configured for clients other than Flex.

The sample will be ready for clustered BlazeDS instances and load balanced Application Servers.



Posted in ActiveMQ, Annotations, AOP, Application Configuration, BlazeDS & LCDS, Broker Topology, Java, JMS, JMS Broker, Messaging, Spring, Spring JMS | No Comments »
