Category Archives: Java

JMS ActiveMQ Broker Topologies and High-Availability Configuration

Pure Master Slave Broker Topology
Pure Master Slave Simplified Topology
I am not actually going to go into broker topologies; there are many great resources for that, such as Bruce Snyder's writing on the subject. This example uses a store-and-forward topology (distributed queues) and incorporates basic authentication:

My use case was handling downed JMS servers. I needed to implement failover as well as master/slave strategies, and a topology for message redundancy in case of hardware failure, etc. The client could not tolerate any message loss. With failover, you can see how ActiveMQ switches from the main broker to the second, third, etc. on failure. I have a case of four JMS servers in production, and the servers they run on are load balanced.

There are just a few configurations to add or modify in order to set up JMS Failover with Master/Slave for your broker topology. Here is a basic configuration. For this use case, all JMS servers are configured as standalone versus embedded.

I. Client URI

You will need to add the Failover protocol, either with a basic URI pattern or a composite. In this use case, there are load balanced servers in Production and multiple Development and QA environments which require different configurations for master/slave and failover.

In your application’s properties file for messaging, add a modified version of this with your mappings:

,tcp://slaveh2:61616,tcp://master2:61616,tcp://slave2:61616,network:static://(tcp://localhost:61616,tcp://master2:61616,tcp://slave2:61616))?randomize=false

Note: I set connections as locked-down (static) communication configurations rather than multicast or dynamic discovery, so that I know exactly which servers can communicate with each other and how. This also assumes you have one set per environment, to map the appropriate IPs in development, QA, production, DR, etc.
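As a sketch of what a complete client-side value looks like, the failover URI can be composed programmatically. The hostnames below are placeholders, not the ones from my environment:

```java
// Sketch: composing a client-side failover URI. Hostnames are hypothetical;
// substitute your own per-environment mappings.
public class FailoverUriBuilder {

    public static String failoverUri(boolean randomize, String... brokerUris) {
        StringBuilder sb = new StringBuilder("failover:(");
        for (int i = 0; i < brokerUris.length; i++) {
            if (i > 0) sb.append(',');
            sb.append(brokerUris[i]);
        }
        return sb.append(")?randomize=").append(randomize).toString();
    }

    public static void main(String[] args) {
        // randomize=false keeps the broker order deterministic: masters first, then slaves
        System.out.println(failoverUri(false,
                "tcp://master1:61616", "tcp://slave1:61616",
                "tcp://master2:61616", "tcp://slave2:61616"));
    }
}
```

With randomize=false the client walks the list in order, which is what you want when the first entries are your masters.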

Note: Do not configure networkConnectors for master/slave; they are handled on the slave with the following configuration:
<masterConnector remoteURI="tcp://masterhost:61616" userName="wooty" password="woo"/>

II. Spring Configuration

<bean id="pooledConnectionFactory" class="org.apache.activemq.pool.PooledConnectionFactory">
<property name="connectionFactory">
<bean class="org.apache.activemq.spring.ActiveMQConnectionFactory">
<constructor-arg value="${}"/>
<property name="userName" value="${activemq.username}"/>
<property name="password" value="${activemq.password}"/>
</bean>
</property>
</bean>

III. Broker Configuration


<broker brokerName="{hostname}" waitForSlave="true" xmlns="" dataDirectory="${activemq.base}/data">
<!-- the broker name is passed in by the client broker URI so you can easily manage it per environment: sweet -->

<!-- TCP uses the OpenWire marshaling protocol to convert messages to a stream of bytes (and back) -->
<transportConnector name="tcp" uri="tcp://localhost:61616?trace=true" />
<transportConnector name="nio" uri="nio://localhost:61618?trace=true" />
<!-- <transportConnector name="ssl" uri="ssl://localhost:61617"/>
<transportConnector name="http" uri="http://localhost:61613"/>
<transportConnector name="https" uri="https://localhost:61222"/> -->

<!-- Basic security and credentials -->
<authenticationUser username="system" password="manager" groups="admin,publishers,consumers"/>

<!-- ...more configuration -->
</broker>

Slave: for ActiveMQ 4.1 or later, which also allows for authentication as shown below

<broker brokerName="{hostname}Slave" deleteAllMessagesOnStartup="true" xmlns="">
<transportConnector uri="tcp://localhost:61616"/>

<masterConnector remoteURI="tcp://masterhost:62001" userName="wooty" password="woo"/>

<!-- Basic security and credentials -->
<authenticationUser username="system" password="manager" groups="admin,publishers,consumers"/>
</broker>



Transaction Isolation and Spring Transaction Management

I was asked a question by a Spring student of mine, and since it pertains to Spring Transaction Management, as well as transaction management and databases in general, I thought I’d share it in case it helps anyone else. The question went something like this:

How do we prevent concurrent modification of the same data? Two users are in the middle of a transaction on the same data. How do we isolate those operations so that other transactions cannot read the data, and how do we handle synchronizing changes to that data with commits? This deals with preserving data integrity, the underlying locking mechanisms of your database, transaction demarcation, and transaction isolation settings, which are configurable by Spring Transaction Management but ultimately controlled by your specific database vendor’s implementation.

Some ways to think about transaction isolation are:

  1. The effects of one transaction are not visible to another until the transaction completes
  2. The degree of isolation one transaction has from the work of other transactions

So how do we keep one transaction from seeing uncommitted writes from other transactions and synchronize those writes?

Remember ACID = Atomicity, Consistency, Isolation and Durability?

The SQL standard defines four levels of transaction isolation in terms of three phenomena that must be prevented between concurrent transactions. These phenomena are:

Dirty read: A transaction reads data written by a concurrent uncommitted transaction.
Nonrepeatable read: A transaction re-reads data it has previously read and finds that data has been modified by another transaction (that committed since the initial read).
Phantom read: A transaction re-executes a query returning a set of rows that satisfy a search condition and finds that the set of rows satisfying the condition has changed due to another recently-committed transaction.

For most databases, the default transaction isolation level is “read committed” (each statement in your transaction sees only data that was committed before that statement began). You can, for example, configure transactions to be serializable, which increases lock contention but is better suited for critical operations. Here we could get into database locking, predicate locking, etc.; while that is outside the scope of this post, it is very worthwhile to know, and I do suggest reading up on it.

In the java layer, here are the standard isolation levels defined in the JDBC specification, in order of weakest to strongest isolation, with the respective inverse correlation on performance:

• TRANSACTION_NONE: transactions are not supported.
• TRANSACTION_READ_UNCOMMITTED: dirty reads, non-repeatable reads and phantom reads can occur.
• TRANSACTION_READ_COMMITTED: dirty reads are prevented; non-repeatable reads and phantom reads can occur.
• TRANSACTION_REPEATABLE_READ: dirty reads and non-repeatable reads are prevented; phantom reads can occur.
• TRANSACTION_SERIALIZABLE: dirty reads, non-repeatable reads and phantom reads are prevented.

Spring’s Transaction Management isolation levels:

ISOLATION_DEFAULT: Use the default isolation level of the underlying datastore.

ISOLATION_READ_COMMITTED: Indicates that dirty reads are prevented; non-repeatable reads and phantom reads can occur. This level only prohibits a transaction from reading a row with uncommitted changes in it.

ISOLATION_REPEATABLE_READ: Indicates that dirty reads and non-repeatable reads are prevented; phantom reads can occur. This level prohibits a transaction from reading a row with uncommitted changes in it, and it also prohibits the situation where one transaction reads a row, a second transaction alters the row, and the first transaction rereads the row, getting different values the second time (a “non-repeatable read”).

ISOLATION_SERIALIZABLE: Indicates that dirty reads, non-repeatable reads and phantom reads are prevented. This level includes the prohibitions in ISOLATION_REPEATABLE_READ and further prohibits the situation where one transaction reads all rows that satisfy a WHERE condition, a second transaction inserts a row that satisfies that WHERE condition, and the first transaction rereads for the same condition, retrieving the additional “phantom” row in the second read.

Note the Spring isolation and JDBC isolation levels are the same.
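On the JDBC side, the isolation levels are plain int constants on java.sql.Connection; a minimal sketch that prints them (pure JDK, no database required):

```java
import java.sql.Connection;

public class IsolationLevels {
    public static void main(String[] args) {
        // These constants can be passed to Connection.setTransactionIsolation(int)
        System.out.println("NONE             = " + Connection.TRANSACTION_NONE);
        System.out.println("READ_UNCOMMITTED = " + Connection.TRANSACTION_READ_UNCOMMITTED);
        System.out.println("READ_COMMITTED   = " + Connection.TRANSACTION_READ_COMMITTED);
        System.out.println("REPEATABLE_READ  = " + Connection.TRANSACTION_REPEATABLE_READ);
        System.out.println("SERIALIZABLE     = " + Connection.TRANSACTION_SERIALIZABLE);
    }
}
```

Note the values are bit flags (0, 1, 2, 4, 8), not a simple sequence.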

So for XML configuration of transaction demarcation in Spring:

<tx:advice id="txAdvice" transaction-manager="transactionManager">
<tx:attributes>
<tx:method name="insert*" read-only="false" propagation="REQUIRED" isolation="READ_COMMITTED"/>
</tx:attributes>
</tx:advice>

And for Annotation configuration:

in xml: <tx:annotation-driven/> to enable @Transactional

in your service layer,  on the class or method level:

public class DefaultFooService implements FooService { ... }

@Transactional(isolation = Isolation.SERIALIZABLE)
public void createSomething(..) { ... }


Integrating Search into Applications with Spring

I have been doing some research and prototyping, and am pretty excited about Compass integration with Spring. Compass simplifies search functionality; it is built on top of Lucene as a search-engine abstraction and adds transactional support. Working with it is similar to working with DAO patterns and calls, and all of Lucene’s functionality is available through Compass.

Some of the main objects are CompassSearchSession, CompassConfiguration, CompassSession, and CompassTransaction. There is full tx rollback support and callbacks; you can easily abstract it into your application’s framework to make calls incredibly simple, and if you are familiar with Spring, it is one step away from DAO work. Very clean, very simple. I will post some framework abstraction code this week.


Decoupling Asynchronous Messaging With Spring JMS and Spring Integration

The importance of decoupling in applications is vital, but it is not easy to do well; I am constantly working to improve my strategies. Even more important is the role of messaging in an enterprise context and in the design. I think message-driven architecture is perhaps the most important to integrate into applications, in terms of its scope of applicability and how it lends itself to scalability. In this vein, Spring Integration becomes a highly intriguing element worthy of study, testing, and integration. I barely touch on Spring Integration here, particularly in its standard usage; I simply use a few classes programmatically to decouple the JMS implementation from its clients.

Messaging and Decoupling

Messaging is everywhere, and so subtle that we are not aware of it; it just happens all around us, all the time. Consider this: in genetics, on a cellular level, the major elements in a cell communicate. But how? They are not connected by any physical construct save the mutual environment they are in, so how do they do it? They message each other with signals, receptors, and a means to translate those messages, which contain instructions.

In Gene expression, DNA/RNA within eukaryote cells (think systems within systems within systems…elements moving in space and time under specific, ever changing constraints and environmental fluctuations (put that in your Agile timebox!) ) communicate by transmitting messages, intercepting, translating and even performing message amplification. There are specialized elements, mRNA specifically, Messenger RNA, which are translated by signal recognition particles… Cool, right? But this happens all around us, outside, in space, everywhere. And it is all decoupled, and therein lies the beauty of messaging.

So what about our applications? Here is one very simple, isolated application of using Spring Integration ( a low-level usage, not very sophisticated ) to decouple your client and server messaging code:

So I wrote a little Java package that integrates Spring JMS, ActiveMQ, Spring Integration, and Flex Messaging for the front end, which hooks into either BlazeDS or LiveCycle Data Services. I had a bunch of constraints to solve for: all Destinations had to come from the database, and there were lifecycle issues around the timing of element initialization relative to IoC bean creation, with Flex Messaging elements being created and having what they needed (such as the Flex Messaging adapters), which I resolved with Flex Java bootstrapping. In another post I will go into the JMS package further. For the topic here, let’s focus on the JMS / Spring JMS / Spring Integration bridge.

The image to the left shows the layout of my jms package to facilitate the mapping. In this system, messages come in from two areas: the client and Java services on the server. Complicated systems will have many more, but let’s talk about the two that most would have. The client sends messages to the server that are both user messages and operational messages from the system. Java services send messages when certain business rules and criteria are triggered, passing business data; any message-aware object could be sending messages.

Sending on the server

To ensure proper usage, I created an interface that a service must implement to send to ActiveMQ, called JMSClientSupport. Note: all code in this post is simplified. Here, I actually have it returning a validation message if errors occurred, so that a business service developer could implement handling per requirements.

A Business Entity
public class Foo implements Serializable {...}

A Service
public class FooServiceImpl implements BusinessService, FooService, SMTPClientSupport, JMSClientSupport {

    public void insert(Foo foo) {
        // ...do some important business stuff
    }

    public void publish(Object object) {
        if (someBusinessValidation((Foo) object)) {
            jmsService.send(destinationId, object);
        }
    }
}

public interface JMSClientSupport {
    void publish(Object object);
}

Sending from the Client

You could have any client sending messages of any nature to the server. In this case I am using Flex. Messages of type <T> are wrapped on the client as an IMessage {AsyncMessage, CommandMessage, etc}. When these messages make their way through Blaze or LiveCycle, I have it wired to hit this Java adapter, which is represented in a Hash per FlexDestination for a 1:1 FlexDestination : JMS Destination mapping by Flex.

For this example I am implementing the JMS MessageListener to show tight coupling as well as decoupling:

public class FlexMessagingAdapter extends MessagingAdapter implements MessageListener {

    // Invoked by Flex when a message comes in from the client to this adapter's Destination
    public Object invoke(Message message) {
        // a custom interceptor that extracts partition Destination info, like an ActiveMQ
        // message group or subtopics like STOCKS.NASDAQ, for more specific routing
        String partition = new DestinationPartitionInterceptor(message).intercept();
        jmsService.send(destination, new IntegrationMessageCreator(message, partition));
        return null;
    }

    // Decoupled: invoked when a Message is received from the Spring Integration channel
    public void handleMessage(org.springframework.integration.core.Message<?> message) { .... }

    // Sets the Spring Integration MessageChannel for sending and receiving messages
    public void setMessageChannel(MessageChannel messageChannel) {
        this.messageChannel = messageChannel;
    }

    // Tightly coupled to JMS by the MessageListener.onMessage() method
    public void onMessage(javax.jms.Message jmsMessage) {
        flex.messaging.messages.Message message = new IntegrationMessageCreator(jmsMessage)
                .createMessage(getDestination().getId(), getSubscribers());
        if (getMessageService().getMessageBroker().getChannelIds().contains("streaming-amf")) {
            MessageBroker broker = MessageBroker.getMessageBroker(null);
            broker.routeMessageToService(message, null);
        } else {
            getMessageService().pushMessageToClients(message, true);
        }
    }
}

The Transformer: Where Messages Intersect

I have a second post that shows an even more decoupled messaging strategy with Spring Integration, but this is purely a basic idea using Flex Messaging, Spring Integration, Spring JMS, and ActiveMQ. I will post the broader strategy next :)

Step 1: Client messages are transformed here by extending the JMS MessageCreator. In this class I pull out the data from any Object type, but specifically the Flex Message and JMSMessage types.

public class IntegrationMessageCreator implements MessageCreator {

    // a few constructors here to handle multiple message types: JMSMessage, Flex Message, Object message, etc

    private MessageBuilder createBuilder() {
        MessageBuilder builder = null;
        if (this.object != null) {
            builder = MessageBuilder.withPayload(object);
        } else if (this.flexMessage != null && flexMessage.getBody() != null) {
            builder = MessageBuilder.withPayload(flexMessage.getBody()).copyHeaders(flexMessage.getHeaders());
        }
        // ActiveMQ Message Groups
        if (this.partition != null) builder.setHeader(MessageConstants.Headers.JMSXGROUPID, partition);

        return builder;
    }

    // to JMS
    public javax.jms.Message createMessage(Session session) throws JMSException {
        return new IntegrationMessageConverter().toMessage(createBuilder().build(), session);
    }

    // to Flex
    public flex.messaging.messages.Message createMessage(String destinationId, int subscribers) {
        Message integrationMessage = (Message) new IntegrationMessageConverter().fromMessage(this.jmsMessage);

        flex.messaging.messages.Message flexMessage = new AsyncMessage();
        // ...and other good jms-to-flex data

        return flexMessage;
    }
}

The Converter

import org.springframework.integration.jms.HeaderMappingMessageConverter;
import org.springframework.integration.core.Message;
import javax.jms.Session;

public class IntegrationMessageConverter extends HeaderMappingMessageConverter {

    // Converts from a JMS Message to an Integration Message. You should do a try/catch, but I cut it out for brevity
    public Object fromMessage(javax.jms.Message jmsMessage) throws Exception {
        return (Message) super.fromMessage(jmsMessage);
    }

    // Converts from an Integration Message to a JMS Message. You should do a try/catch, but I cut it out for brevity
    public javax.jms.Message toMessage(Object object, Session session) throws Exception {
        return super.toMessage(object, session);
    }
}

JMS Asynchronous Reception

In my jmsConfig.xml I configured one Spring MessageListenerAdapter which I have wired with a message delegate, a POJO and its overloaded handleMessage method name:

<bean id="messageListenerAdapter" class="org.springframework.jms.listener.adapter.MessageListenerAdapter">
<property name="delegate" ref="defaultMessageDelegate"/>
<property name="defaultListenerMethod" value="handleMessage"/>
<property name="messageConverter" ref="simpleMessageConverter"/>
</bean>

As the application loads and all Spring beans are initialized, I initialize all of my JMS Destinations. As I do this, I also initialize a MessageListenerAdapter for each Destination. I have a stateless JMSService, called by another service, MessagingGateway, to initialize each Destination; it calls PollingListenerContainerFactory to create child MessageListenerAdapters for each Destination. The adapters are configured based on an abstract parent configuration:

<bean id="abstractListenerContainer" abstract="true" destroy-method="destroy">
<property name="connectionFactory" ref="pooledConnectionFactory"/>
<property name="transactionManager" ref="jmsTransActionManager"/>
<property name="cacheLevel" value="3"/>
<property name="taskExecutor" ref="taskExecutor"/>
<property name="autoStartup" value="true"/>
</bean>

Snippet from PollingListenerContainerFactory:

/**
 * Gets the parent from the IoC container to reduce runtime config and
 * the resources needed to create children.
 * <p/>
 * DefaultMessageListenerContainer is responsible for all threading
 * of message reception and dispatches into the listener for processing.
 * Supports dynamic scaling for higher throughput during peak loads.
 * @param destination
 * @param messageListener
 * @return the container
 */
public static DefaultMessageListenerContainer createMessageListenerContainer(Destination destination, MessageListener messageListener) {

    ChildBeanDefinition childBeanDefinition = new ChildBeanDefinition("abstractListenerContainer", configureListenerContainer(destination, messageListener));

    String beanID = IdGeneratorUtil.getStringId();
    ConfigurableListableBeanFactory beanFactory = ApplicationContextAware.getConfigurableListableBeanFactory();
    ((DefaultListableBeanFactory) beanFactory).registerBeanDefinition(beanID, childBeanDefinition);

    return (DefaultMessageListenerContainer) ApplicationContextAware.getBean(beanID);
}

/**
 * Configures the child listener, based on the parent in jmsConfig.xml.
 * <p>Configures Queue or Topic consumers:
 * Queue: stick with 1 consumer for low-volume queues (the default is 1)
 * Topic: there's no need for more than one concurrent consumer
 * Durable subscription: only 1 concurrent consumer supported
 * <p/>
 * props.addPropertyValue("messageSelector", ""); sets a message selector for this listener
 * @param destination
 * @param messageListener
 * @return the configured property values
 */
private static MutablePropertyValues configureListenerContainer(Destination destination, MessageListener messageListener) {
    MutablePropertyValues props = new MutablePropertyValues();
    props.addPropertyValue("destination", destination);
    props.addPropertyValue("messageListener", messageListener);

    // Enable throttling on peak loads
    if (destination instanceof Queue) {
        props.addPropertyValue("maxConcurrentConsumers", 50); // modify to needs
    }

    // Override the default Point-to-Point (Queue) setting
    if (destination instanceof Topic) {
        props.addPropertyValue("pubSubDomain", true);
    }

    return props;
}

This is overkill for this topic, but it's cool stuff. So we now have a JMS listener for asynchronous JMS reception as well as Flex; now let's look at the message delegate we wired into the MessageListenerAdapter:

public interface MessageDelegate {

    void handleMessage(String message);

    void handleMessage(Map message);

    void handleMessage(Serializable message);
}

Pretty simple, right? It’s a POJO with absolutely no JMS code whatsoever for asynchronous message reception. How does it work? Spring abstracts the JMS code and calls it behind the scenes: if you look at the source code for Spring’s SimpleMessageConverter, it does the fromMessage()/toMessage() handling for you and passes the resulting “message” into the appropriate overloaded method above. Now this is great for simple message abstraction, but the JMS-Flex and Spring Integration code above is an example of more complicated handling. With clients you often need to translate and transfer the data from one message type to another. In the adapter code above, you would use the handleMessage() method to get the message from Spring Integration and into the message type of your choice; here, a Flex message.
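To make the overload dispatch concrete, here is a stripped-down sketch of the mechanism using only JDK reflection. The names (DefaultMessageDelegate, dispatch) are mine, and Spring's MessageListenerAdapter is more sophisticated, but the idea is the same: convert the payload, then route it to the most specific handleMessage overload:

```java
import java.io.Serializable;
import java.lang.reflect.Method;
import java.util.HashMap;
import java.util.Map;

public class DelegateDispatcher {

    // The POJO delegate: no JMS imports at all
    public static class DefaultMessageDelegate {
        public String last;
        public void handleMessage(String message)       { last = "String:" + message; }
        public void handleMessage(Map message)          { last = "Map:" + message.size(); }
        public void handleMessage(Serializable message) { last = "Serializable"; }
    }

    // Walk the payload's type hierarchy looking for a matching overload
    public static void dispatch(Object delegate, Object payload) throws Exception {
        for (Class<?> type = payload.getClass(); type != null; type = type.getSuperclass()) {
            Method m = find(delegate, type);
            if (m == null) {
                for (Class<?> iface : type.getInterfaces()) {
                    m = find(delegate, iface);
                    if (m != null) break;
                }
            }
            if (m != null) { m.invoke(delegate, payload); return; }
        }
        throw new IllegalArgumentException("no handler for " + payload.getClass());
    }

    private static Method find(Object delegate, Class<?> param) {
        try { return delegate.getClass().getMethod("handleMessage", param); }
        catch (NoSuchMethodException e) { return null; }
    }

    public static void main(String[] args) throws Exception {
        DefaultMessageDelegate delegate = new DefaultMessageDelegate();
        dispatch(delegate, "hello");                      // exact String overload
        System.out.println(delegate.last);
        dispatch(delegate, new HashMap<String, Object>()); // HashMap implements Map first
        System.out.println(delegate.last);
        dispatch(delegate, 42);                            // falls back to Serializable via Number
        System.out.println(delegate.last);
    }
}
```

The walk is simplified (it does not recurse into superinterfaces of interfaces), but it shows why a plain POJO with overloads is enough on the receiving side.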


One of Those Java String Modification Interview Questions

Here is one of those interview questions for String modification where, given an input of 3 letters, you need to return the letter 3 positions down the alphabet from each one. My solution isn't perfect, but... after being asked it under interview stress :)

public void foo(String input) {
    List<Character> output = new ArrayList<Character>();

    for (char c : input.toCharArray()) {
        // shift 3 letters down the alphabet, wrapping around past 'z'
        output.add((char) ('a' + (Character.toLowerCase(c) - 'a' + 3) % 26));
    }

    for (char out : output) {
        System.out.println("output:" + out);
    }
}


JMX MBean Proxying for Export via Remoting to the Client

I wanted to expose MBeans related to application monitoring and services (a JMS broker, users in session, etc.) and remote that data to the client, which in my case was Flex. First I checked out using annotations for automated registration and MBean creation, but that wouldn't help me export to a remoting format without creating my own classes. Something for another post; what I did instead was create a package with some classes, which I will explain here. You can certainly deliver this strategy more minimally, but I've found that to come back and bite me later, so I've added some type safety to the proxy creation of the MBeans.

The Basic Components

There are a few important components you need to set this up as a framework:

1. An MBeanConnection

2. A type-safe proxy generator

3. An applicable JMX service URL, Port, Hostname

To export MBeans to the client:

1. ObjectName that can be connected to if they are services

2. Ports for those services where applicable (i.e. a broker)

3. Interfaces to proxy

I. The Goal: export mbean data by remoting to the client

I created a service, mirrored as a Flex ActionScript class for remoting, which encapsulates the calls for various system monitoring data via JMX. I registered this for remoting in the framework of choice; in my case it was Flex. I have a custom annotation that handles this, which I don't include in this snippet.

public class ClientMBeanService implements IClientMBeanService { ..view the code… }

II. The Delegate

For each component to be exported, create a Delegate class that implements the interface dictating the contract: it must implement methods to provide the port and the ObjectName, and to export the MBean:

public class BrokerDelegate implements IMBeanDelegate { ..view the code… }

All Delegates must implement

public interface IMBeanDelegate { ..view the code… }

III. For Remote and Local MBeanConnections

public class MBeanConnection { ..view the code… }

IV. The Cool Part: Get a Typesafe Proxy Instance of The MBean

public final class MBeanProxyInstance { ..view the code… }

V. Typesafing: I did not originally write this class, only modified it for Java 6, but the crux is:

public class TypeSafeMBeanProxy { ..view the code… }


Annotational MBean Export with Spring and JMX

If you just want to easily export mbeans to jconsole or an external profiling tool this is pretty sweet:

I. Create a Spring schema config file for JMX

<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="">

<!-- This bean needs to be eagerly pre-instantiated in order for the exporting to occur: -->
<!-- ***************** Autodetects Annotated Beans for JMX Exposure *********************** -->
<bean id="annotationalMbeanExporter" class="org.springframework.jmx.export.MBeanExporter" lazy-init="false">
<property name="autodetect" value="true"/>
<property name="namingStrategy" ref="namingStrategy"/>
<property name="assembler" ref="assembler"/>
<!-- one of many optional config options -->
<property name="registrationBehaviorName" value="REGISTRATION_IGNORE_EXISTING"/>
</bean>

<bean id="attributeSource" class="org.springframework.jmx.export.annotation.AnnotationJmxAttributeSource"/>

<bean id="assembler" class="org.springframework.jmx.export.assembler.MetadataMBeanInfoAssembler">
<property name="attributeSource" ref="attributeSource"/>
</bean>

<bean id="namingStrategy" class="org.springframework.jmx.export.naming.MetadataNamingStrategy">
<property name="attributeSource" ref="attributeSource"/>
</bean>

</beans>

II. Annotate a component, bean, etc

@ManagedResource(objectName = "bean:name=applicationMonitor")
public class ClientMBeanService implements IClientMBeanService {

    @ManagedOperation(description = "Monitor the JMS broker")
    public BrokerStatistics exposeBroker() {
        return new BrokerDelegate().exposeMBean();
    }
}

@ManagedResource(objectName = "spring:name=broker", description = "ActiveMQ Broker")
public class BrokerStatistics {

    private String brokerId;
    private String brokerName;
    private long totalMessages;
    private long totalConsumers;
    private long totalTopics;
    private long totalQueues;
    private List<Map> topics;
    private List<Map> queues;

    public String getBrokerId() {
        return brokerId;
    }
}

The annotational configuration in the mbean exporter allows automatic registration and mbean creation of beans. It’s that simple.
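For comparison, here is roughly what the exporter does under the covers, reduced to plain JDK JMX with no Spring at all. The MBean interface, class, and the value it returns are made up for illustration:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class MBeanExportDemo {

    // Standard MBean convention: the interface name is the class name + "MBean"
    public interface ApplicationMonitorMBean {
        long getTotalMessages();
    }

    public static class ApplicationMonitor implements ApplicationMonitorMBean {
        public long getTotalMessages() { return 42L; } // illustrative value
    }

    public static void main(String[] args) throws Exception {
        // The platform MBeanServer is the same one jconsole connects to
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("bean:name=applicationMonitor");
        server.registerMBean(new ApplicationMonitor(), name);

        // Attributes are read through the server by bean-property name
        System.out.println("TotalMessages = " + server.getAttribute(name, "TotalMessages"));
    }
}
```

Spring's MBeanExporter automates exactly this registration step for every annotated bean it autodetects.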


JPA 2.0 Concurrency and locking and AOP Around Advice for Deadlock Retry

This article by Carol McDonald from Sun on JPA 2.0 concurrency and locking is interesting. I like the idea of using AOP for handling crosscutting concerns such as deadlocks, like this:

1. You define your Deadlock Aspect
2. Write your @Around advice for stateless or idempotent operations that throw HibernateException from classes marked @Repository:

Around any idempotent operation, such as in a stateless business service, when a DeadlockLoserDataAccessException is thrown, keep track of how many tries and re-execute:

public Object retryDeadlockLosers(ProceedingJoinPoint pjp) throws Throwable {
    int attempts = 0;
    DeadlockLoserDataAccessException loserEx = null;
    while (attempts++ < maxAttempts) {
        try {
            return pjp.proceed();
        } catch (DeadlockLoserDataAccessException ex) {
            loserEx = ex;
        }
    }
    throw loserEx;
}


This is really a pattern you can apply wherever needed, and however needed: on retry you could, for example, go to another rack, hop to another database, what have you.
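Stripped of the Spring and AspectJ types, the retry loop generalizes to a small helper. The names here are my own, and it is a sketch of the pattern rather than production code; a real aspect would trap only deadlock-loser exceptions:

```java
import java.util.concurrent.Callable;

public class RetryTemplate {

    /** Re-executes the operation up to maxAttempts times; rethrows the last failure. */
    public static <T> T retry(int maxAttempts, Callable<T> operation) throws Exception {
        Exception last = null;
        for (int attempts = 0; attempts < maxAttempts; attempts++) {
            try {
                return operation.call();
            } catch (Exception ex) {
                last = ex; // remember the failure and loop around for another try
            }
        }
        throw last; // all attempts exhausted: surface the last exception
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // fails twice, then succeeds on the third attempt
        String result = retry(3, () -> {
            if (++calls[0] < 3) throw new IllegalStateException("deadlock loser");
            return "committed";
        });
        System.out.println(result + " after " + calls[0] + " attempts");
    }
}
```

The @Around advice above is this loop with pjp.proceed() as the operation.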


Rolling Your Own Java Memory Profiler

I was looking around for some code to build a unit test that easily isolates various aspects or collaborators in an application, and after trying a couple of solutions that didn't seem right, I found "Do you know your data size?" by Vladimir Roubtsov. It's impossible to do any basic, true measurements without accounting for these points, which he raises:

  • A single call to Runtime.freeMemory() proves insufficient because a JVM may decide to increase its current heap size at any time (especially when it runs garbage collection). Unless the total heap size is already at the -Xmx maximum size, we should use Runtime.totalMemory()-Runtime.freeMemory() as the used heap size.
  • Executing a single Runtime.gc() call may not prove sufficiently aggressive for requesting garbage collection. We could, for example, request object finalizers to run as well. And since Runtime.gc() is not documented to block until collection completes, it is a good idea to wait until the perceived heap size stabilizes.
  • If the profiled class creates any static data as part of its per-class class initialization (including static class and field initializers), the heap memory used for the first class instance may include that data. We should ignore heap space consumed by the first class instance.

The basic idea is Runtime.totalMemory()-Runtime.freeMemory() as the used heap size. Vladimir includes the code and I’ll post my junit test soon which simplifies this.
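A minimal sketch of that measurement loop, built on those three points. The class and the allocation counts are arbitrary, and the numbers it reports vary by JVM; this is the idea, not Vladimir's actual code:

```java
public class ObjectSizer {

    static volatile Object sink; // keeps allocations reachable / un-optimized

    private static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // Request GC repeatedly and wait until the perceived heap size stops shrinking
    private static void stabilizeHeap() {
        long used = usedHeap();
        for (int i = 0; i < 10; i++) {
            System.gc();
            try { Thread.sleep(50); } catch (InterruptedException ignored) {}
            long now = usedHeap();
            if (now >= used) break; // no further shrinkage observed
            used = now;
        }
    }

    /** Approximate per-instance heap cost: allocate count objects and average the delta. */
    public static long approximateSize(int count, java.util.function.Supplier<Object> factory) {
        sink = factory.get();   // discard the first instance: it may trigger class init
        sink = null;
        stabilizeHeap();
        long before = usedHeap();
        Object[] keep = new Object[count];
        for (int i = 0; i < count; i++) keep[i] = factory.get();
        long after = usedHeap();
        sink = keep;            // keep everything reachable until both samples are taken
        return Math.round((after - before) / (double) count);
    }

    public static void main(String[] args) {
        System.out.println("approx bytes per byte[128]: "
                + approximateSize(10_000, () -> new byte[128]));
    }
}
```

Averaging over many instances smooths out allocation-granularity and GC noise that would swamp a single-object measurement.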


How To Run Two ActiveMQ Brokers On A Laptop

I needed to set up and test ActiveMQ failover URIs and play around with broker topologies for optimization in a safe environment (one under my control, where no app servers or brokers outside it could get access). I have a Mac laptop with multiple VM operating systems so that I can easily work with any client/company requirements for a current project (Ubuntu, Windows, etc). Here's what I did to set up two local test brokers:

1. Know both the native and virtual operating system host names. IP addresses will not work

Explicitly setting IP addresses in the transportConnector config will fail on the native OS as it attempts to bind to the virtual IP

2. Give each Broker a unique name

Standard procedure for any multiple Broker topology on a network

Installing, configuring, and running ActiveMQ is a no-brainer, so I won't cover that here, but the specific, simple configurations allowing me to run multiple instances on one machine were:

3. Super basic configuration of the connectors: In the activemq.xml Spring config file

Note: what I did here is set up two transport protocols, one for TCP, one for NIO

Instance One: on the native laptop OS

<networkConnector name="nc-1" uri="multicast://default" userName="system" password="manager"/>

<transportConnector name="tcp" uri="tcp://nativeHostName:61616?trace=true" discoveryUri="multicast://default"/>
<transportConnector name="nio" uri="nio://nativeHostName:61618?trace=true" discoveryUri="multicast://default"/>

Instance Two: on the virtual machine’s OS

<networkConnector name="nc-2" uri="multicast://default" userName="system" password="manager"/>

<transportConnector name="tcp" uri="tcp://virtualHostName:61616?trace=true" discoveryUri="multicast://default"/>
<transportConnector name="nio" uri="nio://virtualHostName:61618?trace=true" discoveryUri="multicast://default"/>

It’s that easy.

4. Start the native then virtual ActiveMQ servers

5. Open a browser on both OSs; if you enabled the web console, at http://localhost:8161/admin/ you will see the broker interface with each unique broker name.


Developing on a Mac with VMware Fusion

Many engineers work on Macs, and most who need virtual machines for other OSs seem to use Parallels; others suggest Boot Camp. I've developed on Linux for years, using Red Hat then SUSE, switched to Ubuntu two years ago, and started out as a developer on Solaris and Windows. But my dual-boot laptop performed so poorly on the original Windows partition (which I kept for remoting into corporate VPNs) that it became too inefficient to work with, and too old.

Each project/client I work with has different hardware and networking requirements. I needed a way to work on many operating systems through a single, always-available high performance one. I chose VMWare’s Fusion. I regretfully bought a basic XP OS and will also install the latest Ubuntu when the stable version hits soon. I’d like to monitor performance while using Fusion because running it concurrent with my development environment requires like 2GB. I’ll post more about this after some usage :)