Category Archives: Software Development


It seems the word is out that I left DataStax. I made so many friends there, even recruited an old friend, and I know many of us will continue to cross paths and collaborate. Thanks to everyone for the great memories.

Over the last year I have spoken at several Spark, Scala, and Machine Learning meetups, and internationally at Scala conferences like Scala Days Amsterdam and at Big Data conferences such as Spark Summit, Strata, and Emerging Tech conferences, all of which I hope to continue doing.

Spark Summit East 2015

Scala Days Amsterdam, 2015

Philly ETE – Emerging Tech 2015

In October 2015 I’ll be speaking at Spark Summit EU, and on the Spark panel at Grace Hopper. Also I’ll be at Strata NYC again, this year hosting the Spark track on Thursday. I’m adding some cool stuff to my Big Data Lambda talk and new code.

What’s next – I’m humbled to have been contacted by many engineers I know and highly respect about working at their companies, where in some cases they are the founders. This is making for a very difficult decision because they are all doing really interesting and exciting work with open source technologies I like. In the meantime I am taking a month or so to enjoy the last part of summer, something I haven’t had time to do for years :)


An Agile Software Development Lifecycle – Increasing Efficiency and Throughput, Decreasing Bottlenecks


This post is in progress, and comes from years of finding things that work, as well as slamming into the pain without having the authority to change the process (that's called using power for good, people), or saying, "Good luck with that". One thing I have learned in all the orgs I have frequented is that most orgs that have problems recognize this and sincerely want, and are trying, to improve the process. Getting the process right is an art.

SDLC – Increasing Efficiency and Throughput, Decreasing Bottlenecks

Whether you are new to Agile or seasoned, you should read The Agile Manifesto – poetry for the weary.


Software development lifecycle improvements can increase throughput, manageability, and maintainability, vastly increase overall efficiency, and decrease bugs. A major factor is improving the agile process for the organization (from planning to deploys), for which you need to not only educate staff but also have employee buy-in to the process. Jumping into this with a system that is in need of re-architecting could create more bottlenecks and technical debt, so the timing and rollout of new processes must be planned well.


Planning And Requirements

Ensure that requirements are broken down into what can be completed and demoed correctly in each sprint. Scoping each unit of work for a feature, enhancement, bug fix, or task must be done in such a way that dependencies are never a bottleneck and goals are achievable and testable. The business should set requirements in a scope that is achievable, sometimes partitioning work over multiple sprints.



Do not overlook a very important part of the process that lends to success – your process tooling. Tools like JIRA are often underutilized. Leveraged properly, JIRA makes the work and process transparent, improving not only efficiency but the accuracy of implementations, testing, and deploys. If you don’t already have Greenhopper for JIRA, get it. JIRA is not just a bug tool, it is an SDLC tool.


  • I would not do a project without JIRA (with Greenhopper)
  • I would not do a project without GIT as the repo. Anything else and you are asking for pain and incredible inefficiency

Turning Requirements Into Stories

This may be the starting directive: Build in real-time asynchronous stock ticker updates to the client. In Agile, the Story should look like this:


As a User I want to receive real-time updates of stocks I have selected to watch, so that I can take immediate, accurate advantage of market fluctuations

We cannot proceed without covering dependencies. A functional dependency is something that requires or depends on another thing being done first. Example: if I want to tweet, I have a dependency on an internet connection to send the tweet in order to accomplish that task.

This assumes or highlights many things, for instance:

  • Is there a UI dependency to create a form where the user can select stocks to watch and submit the form?
  • Does a new db table need to be created for persisting stock selections by user?
  • What in the messaging system needs to be added to do this?
  • What could be done in parallel as tasks of the same story?
  • How can this be architected to allow for consecutive sprint development?
  • How can QA best test this over each sprint until the final task is committed?
  • Do we put this in a feature branch or roll each part in by sprint so QA can test?

A business dependency, which may also be called a requirement, is something engineering needs from the business in order to complete a Story or Task. For example:

As a business owner I want to authorize all incoming user requests from the client, to ensure that the user has been granted rights to do what they are requesting to do in the system.

Say the Product Manager brings this up to the team in a Sprint Planning meeting and reviews the requirements, and one of the engineers says, “I see a requirement missing: there is no failure scenario”. If the client is simply making REST API calls to the server, then any client can call what it has not been granted authority to do. A failure plan is needed for what is generated on the server and sent to the caller/client, and if the client is a UI, for how this is displayed. This could require a new UI task and additional time.
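To make that failure scenario concrete, here is a minimal sketch (all names and the rights-lookup logic are hypothetical, no real framework assumed) of a server-side check that returns a structured error the client or UI can render:

```java
// Hypothetical sketch of the missing failure scenario: the server decides,
// and an unauthorized caller gets a structured error a UI can display.
public class AuthorizationCheck {

    public static class Response {
        public final int status;
        public final String body;
        Response(int status, String body) { this.status = status; this.body = body; }
    }

    // Stand-in for a real rights lookup (role table, LDAP, etc.)
    static boolean isGranted(String user, String action) {
        return "admin".equals(user);
    }

    public static Response handle(String user, String action) {
        if (!isGranted(user, action)) {
            // The failure plan: an HTTP 403 plus a message the client can render
            return new Response(403, "{\"error\":\"not authorized to " + action + "\"}");
        }
        return new Response(200, "{\"result\":\"ok\"}");
    }

    public static void main(String[] args) {
        System.out.println(handle("guest", "deleteAccount").status);
    }
}
```

The point is that the failure path is designed up front, not discovered in QA.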


Story Points And Time Boxing

Possibly the most highly-debated part of any SDLC process methodology, whether agile, lean or other.
Planning Poker is an interesting read.

Planning And Effective Use Of Tooling

First, a Project Manager should identify each team member’s availability in hours per day to get the team’s total hours for the sprint. I should note here that I have fought with QA many times over the years and may be biased.


  • Product Manager Creates a Story ticket that will have child tasks
    • The business ensures that business requirements are accurately and granularly described in the story’s description
    • Team (Dev/QA) assesses the business requirements, and identifies questions or information lacking by the business
    • Project Manager assigns the Story ticket to the business/Product Manager to complete – and everyone can see where in the process this ticket is. When complete, the business/Product Manager assigns ticket back to Project Manager
    • Product Manager and Project Manager review with the team
      • If team deems requirements are adequate to proceed, Project Manager can schedule the Story
    • Team (Engineer(s) – Dev) votes on the number of story points to assign to the Story – how many days will it take to complete? What is the difficulty of the story?
    • Does this story have dependencies?
      • If so, are they completed or scheduled? How long will they take?
      • Does this story require an architect before developers come on?
      • Do we need to prototype or vet a new technology decision or strategy first?
      • Is there a functional or UI dependency?
      • Dependencies in the same sprint must be scheduled in such a way that no task is left blocked waiting on another
    • Team (Engineer(s) – Dev) create Child Tasks under Story – if a junior team, the Team Lead ensures accuracy and that nothing is left out
      • Developers task out what needs to be done to complete a scoped story
      • Developers pick task tickets and assign to self
      • Developer / QA timeboxes task – how many hours will this take?
    • Team (Engineer(s) – QA) create Child Tasks under Story
      • Create task ticket to create test plan for story
      • Review test plan with the business and developers
      • Attach approved test plan to Story’s JIRA ticket
    • more coming

Now ensure that the hours it will take to complete the work are achievable given the availability of the team.
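The capacity check above is simple arithmetic; a minimal sketch, with hypothetical numbers:

```java
// Sprint capacity: the sum of hoursPerDay * availableDays for each team member.
public class SprintCapacity {

    public static int capacityHours(int[][] hoursPerDayAndDays) {
        int total = 0;
        for (int[] member : hoursPerDayAndDays) {
            total += member[0] * member[1]; // hours/day * days available this sprint
        }
        return total;
    }

    public static void main(String[] args) {
        // Three people: 6h x 9d, 4h x 10d, 8h x 7d = 150 hours this sprint
        System.out.println(capacityHours(new int[][] { {6, 9}, {4, 10}, {8, 7} }));
    }
}
```

If the timeboxed task hours for the sprint exceed that number, something has to move.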


Development – and Effective Use Of GIT

  • TDD – Test Driven Development
    • 1:1 correlation between requirements from the business and unit/integration test methods
    • Unit test: testing a method or small unit of code
    • Integration test: testing a Service method that covers multiple components (DAO, Util, Messaging, etc)
    • Tests must be written in such a way that they are OS and Environment independent
    • Tests must be written in such a way that they can be run by an automated process – Continuous Integration (CI)
    • If CI tests fail, the team should be notified by email that tests have failed, and the failures attended to quickly
    • Before code check in
      • Latest changes in the current branch should be pulled in
      • The new code must compile with the latest code in the branch
      • Tests should be run by the developer and pass
        • This means that every developer is responsible for all of their tests passing when a team mate checks out the latest code
      • New code must be deployed successfully by the developer
  • Coding Standards
    • Every org needs coding standards and a means to ensure that such standards are being implemented
      • Code Reviews
      • Developer training
      • Best Practices instruction
      • Lunch and Learn presentations for the team
      • Team education of the system, tools, processes – the more everyone knows, the faster and less error prone things will be
      • How to properly test code
    • Architecture enforcement with AOP in codebase
      • Automate catching poor architecture or code implementations in dev-time
        • No engineer has time to stay on top of static code standard documentation
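As a minimal illustration of the 1:1 requirement-to-test mapping above, here is a sketch with a hypothetical StockWatchService for the stock-ticker story from earlier in this post. It uses plain Java assertions rather than a test framework so it stays self-contained, and by construction it is OS- and environment-independent, as the bullets require:

```java
import java.util.HashSet;
import java.util.Set;

// Hypothetical service for the stock-watch story used earlier in this post
class StockWatchService {
    private final Set<String> watched = new HashSet<String>();

    boolean watch(String symbol) {
        if (symbol == null || symbol.isEmpty()) return false; // requirement: reject bad input
        return watched.add(symbol);
    }

    boolean isWatching(String symbol) {
        return watched.contains(symbol);
    }
}

// One test method per business requirement: a 1:1 mapping
public class StockWatchServiceTest {

    static void check(boolean condition, String requirement) {
        if (!condition) throw new AssertionError("failed: " + requirement);
    }

    // Requirement: a user can select a stock to watch
    static void testWatchAddsSymbol() {
        StockWatchService service = new StockWatchService();
        check(service.watch("NASDAQ:AAPL"), "watch() accepts a valid symbol");
        check(service.isWatching("NASDAQ:AAPL"), "watched symbol is retained");
    }

    // Requirement: invalid selections are rejected
    static void testWatchRejectsEmptySymbol() {
        StockWatchService service = new StockWatchService();
        check(!service.watch(""), "watch() rejects an empty symbol");
    }

    public static void main(String[] args) {
        testWatchAddsSymbol();
        testWatchRejectsEmptySymbol();
        System.out.println("all tests passed");
    }
}
```

Because each test maps to exactly one requirement, a CI failure points straight at the requirement that broke.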

Technical Debt

If an org is accumulating technical debt partly because of the business – sales and marketing pressing for new features and bug fixes faster than engineering can both resolve outstanding issues and do the work properly the first time – then risk management of the system is put on the back burner and debt accumulates. This will eventually blow up.

Technical debt time must be built into R&D sprints, and is often handled in dedicated sprints of its own.


QA, Staging, Deploys, and Rollback Strategies

Not a free-for-all – I have not written this section yet. However, deploys should be automated. If you need to deploy manually, there is something wrong with your application architecture and/or deployment architecture.


Open Source Artificial Intelligence for the Cloud

The idea of introducing intelligent computing so that software can make decisions autonomously and in real time is not new but it is relatively new to the cloud. Microsoft seems to be the strongest force in this realm currently but this is not open source. The idea of clouds that manage themselves steps way beyond Cloud Ops as we know it, and is the only logical, and necessary, next step.

I’ve been researching various open source AI platforms and learning algorithms to start building a prototype for cloud-based learning and action for massive distributed Cloud Ops systems and applications. One could offer an AI cloud appliance eventually, but I will start with a prototype and build on that, using RabbitMQ (an AMQP messaging implementation), CEP, Spring, and Java as a base. I’ve been looking into OpenStack, the open source cloud computing software being developed by NASA, RackSpace, and many others. Here is the future GitHub repo:

Generally, to do these sorts of projects you need a large amount of funding and engineers with PhDs, neither of which I have. So my only alternative is to see how we might take this concept and create a lightweight open source solution.

Imagine having some of these (this being not a comprehensive list) available to your systems:

AI Technology

  • Planning and Scheduling
  • Heuristic Search
  • Temporal Reasoning
  • Learning Models
  • Intelligent Virtual Agents
  • Autonomic and Adaptive Clustering of Distributed Agents
  • Clustering Autonomous Entities
  • Constraint Predictability and Resolution
  • Automated Intelligent Provisioning
  • Real-time learning
  • Real-world action
  • Uncertainty and Probabilistic Reasoning
  • Decision Making
  • Knowledge Representation

I will go into these topics further in future posts.

About me: I am currently an engineer at SpringSource/VMware on the vFabric Cloud Application Platform side of things; however, this project is wholly outside of that – I feel the need to pursue something I’m interested in. I work with the RabbitMQ team and am a member of the AMQP Working Group.


Patterns of Scalability and Pathways in Systems

Note: this is just a draft – I was trying to fall asleep, started to think, and this is what I thought about.

In college I wrote a 300-page senior thesis entitled, “Energy Pathways In Biological Systems”. It ranged from genetics and microbiology up through complex pathways of micro-climates to ecosystems and even pathways of migratory animals (Arctic Wolves that cover at least 1,000-mile territories, to Terns and Whales whose annual migration patterns cover half the earth). For each there is movement of elements in space and time. I had a blast researching and writing it, but my fascination revolved around a shared concept: systems at seemingly vast discrepancies of scale actually share massive similarities, and scale, measurable only against other scales, exists only in a relative framework of complexity.

Think of systems as an atom. There are layers or levels and activity going on all the time. Now think of this atomic model with pathways repetitively used for resources to move, kind of like corridors. Resources behave differently from other resources, thus the corridors are different, the speed of motion is different and the size is different. Also production and consumption of those resources is different. Entropy works differently based on environment, among other factors.

The pathways of genetic information through a cell operate at a seemingly small scale compared to nitrogen pathways in a rain forest, yet the complexity in a cell, looking down to the smaller elements in that system, is great. In a rain forest, nitrogen molecules and everything they interact with as they move through that system are, looking up to larger components, equally great, and looking down to the components that amass that system we see the same thing.

Think about that – scale, if only quantifiable by the scale of other systems, is relative. So what could lend to differentiation? Complexity could be an important part of the equation. So what about this: System A is larger and has more components than System B; System B is less complex than System A. If both systems are replicated many times and distributed, which may fare better? Hard to say with such a limited theoretical idea, but consider Occam’s Razor – essentially, the simplest way is the best. In mechanics, the fewer moving parts, the fewer points of failure. A human is a complicated system, a virus is a very simple system, and yet a virus can so easily attack the more complicated system. Cells replicate very quickly, and each new cell gets its own copy of the genetic instructions the original parent cell had. I’m rambling, but just trying to give some simple examples to think about.

So how do we properly think about scale in systems? What can we learn from successful patterns of scalability that are all around us?


Automating The Deployment Process

How many companies fully automate their deploy process? This is not a new idea, but I am bringing it up because it is a very important one. I just read and really like the idea bounced by Martin Fowler here, put forth by Dave Farley and Jez Humble. If you think of the SDLC in terms of a manufacturing plant, and wrap the ideas of LEAN and Six Sigma around that framework, not following this strategy actually contributes hundreds of hours per year, in high-release companies, to bottlenecks and increased constraints on flow. The simple idea demonstrated by this sample of blue-green deployment shows how, by setting up the proper environments, we can easily flip the switch on which instance is production. There are of course many strategies for this, some excellent ones in the cloud and easily transferable to non-cloud environments, particularly when we think in terms of OSGi, but the point remains the same – critical to do for many business justifications, and many ways to do it.
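The switch itself can be reduced to a tiny sketch. The class below is illustrative only (my own, not from the article): a router holds a single atomic reference to the live environment, so cutover and rollback are each one flip:

```java
import java.util.concurrent.atomic.AtomicReference;

// Illustrative only: two identical environments; the router decides which is live.
public class BlueGreenRouter {

    public enum Env { BLUE, GREEN }

    private final AtomicReference<Env> live = new AtomicReference<Env>(Env.BLUE);

    // All production traffic goes to the live environment
    public Env routeRequest() {
        return live.get();
    }

    // Deploy and smoke-test on the idle environment first; cutover is a
    // single atomic flip, and rollback is just flipping back.
    public Env flip() {
        Env next = (live.get() == Env.BLUE) ? Env.GREEN : Env.BLUE;
        live.set(next);
        return next;
    }
}
```

In practice the "flip" is a load balancer or router config change, but the shape of the idea is the same.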


Decoupling Asynchronous Messaging With Spring JMS and Spring Integration

The importance of decoupling in applications is vital, but it is not easy to do well; I am constantly working to improve my strategies. Even more important is the role of messaging in an enterprise context and in the design. I think message-driven architecture is perhaps the most important to integrate into applications in terms of its scope of applicability and how it lends to scalability. In this vein, Spring Integration becomes a highly intriguing element worthy of study, testing, and integration. I barely touch on Spring Integration here, particularly in its standard usage, but simply use a few classes programmatically to decouple the JMS implementation from its clients.

Messaging and Decoupling

Messaging is everywhere, and so subtle we are not aware of it; it just happens all around us, all the time. Consider this: in genetics on a cellular level, which is nongranular in the scope of genetics itself, the major elements in a cell communicate, but how? They are not connected by any physical construct save the mutual environment they are in, so how do they do it? They message each other with signals, receptors, and a means to translate those messages, which contain instructions.

In Gene expression, DNA/RNA within eukaryote cells (think systems within systems within systems…elements moving in space and time under specific, ever changing constraints and environmental fluctuations (put that in your Agile timebox!) ) communicate by transmitting messages, intercepting, translating and even performing message amplification. There are specialized elements, mRNA specifically, Messenger RNA, which are translated by signal recognition particles… Cool, right? But this happens all around us, outside, in space, everywhere. And it is all decoupled, and therein lies the beauty of messaging.

So what about our applications? Here is one very simple, isolated application of using Spring Integration ( a low-level usage, not very sophisticated ) to decouple your client and server messaging code:

So I wrote a little Java package that integrates Spring JMS, ActiveMQ, Spring Integration, and Flex Messaging for the front end, which hooks into either BlazeDS or LiveCycle Data Services. I had a bunch of constraints to solve for, such as all Destinations having to come from the database, and lifecycle issues around the timing of element initialization – IoC bean creation versus Flex Messaging elements being created with what they needed (such as the Flex Messaging adapters) – which I resolved with Flex Java bootstrapping. In another post I will go into the JMS package further. For the topic here, let’s focus on the JMS – Spring JMS – Spring Integration bridge.

The image to the left shows the layout of my jms package to facilitate the mapping. In this system, messages come in from two areas: the client and Java services on the server. Complicated systems will have many more, but let’s talk about the two that most would have. The client sends messages to the server that are both user messages and operational messages from the system. Java services send messages when certain business rules and criteria are triggered, passing business data, and any message-aware object could be sending messages.

Sending on the server

To ensure proper usage I created an interface, called JMSClientSupport, that a service must implement to send to ActiveMQ. Note all code in this post is simplified. Here, I actually have it returning a validation message if errors occurred, so that a business service developer could implement handling per requirements.

A Business Entity
public class Foo implements Serializable {...}

A Service
public class FooServiceImpl implements BusinessService, FooService, SMTPClientSupport, JMSClientSupport {

    public void insert(Foo foo) {
        // ...do some important business stuff, then publish
        publish(foo);
    }

    public void publish(Object object) {
        if (someBusinessValidation((Foo) object)) {
            jmsService.send(destinationId, object);
        }
    }
}

public interface JMSClientSupport {
    void publish(Object object);
}

Sending from the Client

You could have any client sending messages of any nature to the server. In this case I am using Flex. Messages of type <T> are wrapped on the client as an IMessage {AsyncMessage, CommandMessage, etc}. When these messages make their way through Blaze or LiveCycle, I have it wired to hit this Java adapter, which is held in a Hash per FlexDestination, giving a 1:1 FlexDestination : JMS Destination mapping by Flex.

For this example I am implementing the JMS MessageListener to show tight coupling as well as decoupling:

public class FlexMessagingAdapter extends MessagingAdapter implements MessageListener {

    // Invoked by Flex when a message comes in from the client to this adapter's Destination
    public Object invoke(Message message) {
        // A custom interceptor that extracts partition Destination info, like an ActiveMQ
        // message group or subtopics such as STOCKS.NASDAQ, for more specific routing
        String partition = new DestinationPartitionInterceptor(message).intercept();
        jmsService.send(destination, new IntegrationMessageCreator(message, partition));
        return null;
    }

    // Decoupled: invoked when a Message is received from the Spring Integration channel
    public void handleMessage(org.springframework.integration.core.Message<?> message) { /* ... */ }

    // Sets the Spring Integration MessageChannel for sending and receiving messages
    public void setMessageChannel(MessageChannel messageChannel) {
        this.messageChannel = messageChannel;
    }

    // Tightly coupled to JMS via the MessageListener.onMessage() method
    public void onMessage(javax.jms.Message jmsMessage) {
        flex.messaging.messages.Message message =
            new IntegrationMessageCreator(jmsMessage).createMessage(getDestination().getId(), getSubscribers());
        if (getMessageService().getMessageBroker().getChannelIds().contains("streaming-amf")) {
            MessageBroker broker = MessageBroker.getMessageBroker(null);
            broker.routeMessageToService(message, null);
        } else {
            getMessageService().pushMessageToClients(message, true);
        }
    }
}

The Transformer: Where Messages Intersect

I have a second post that shows an even more decoupled messaging strategy with Spring Integration, but this is purely a basic idea using Flex Messaging, Spring Integration, Spring JMS, and ActiveMQ. I will post the broader strategy next :)

Step 1: Client messages are transformed here by extending the JMS MessageCreator. In this class I pull the data out of any Object type, but specifically the Flex Message and JMSMessage types.

public class IntegrationMessageCreator implements MessageCreator {

    // a few constructors here to handle multiple message types: JMSMessage, Flex Message, Object message, etc.

    private MessageBuilder createBuilder() {
        MessageBuilder builder = null;
        if (this.object != null) {
            builder = MessageBuilder.withPayload(object);
        } else if (this.flexMessage != null && flexMessage.getBody() != null) {
            builder = MessageBuilder.withPayload(flexMessage.getBody()).copyHeaders(flexMessage.getHeaders());
        }
        // ActiveMQ Message Groups
        if (this.partition != null) builder.setHeader(MessageConstants.Headers.JMSXGROUPID, partition);

        return builder;
    }

    // to JMS
    public javax.jms.Message createMessage(Session session) throws JMSException {
        return new IntegrationMessageConverter().toMessage(createBuilder().build(), session);
    }

    // to Flex
    public flex.messaging.messages.Message createMessage(String destinationId, int subscribers) {
        Message integrationMessage = (Message) new IntegrationMessageConverter().fromMessage(this.jmsMessage);

        flex.messaging.messages.Message flexMessage = new AsyncMessage();
        // ...map the payload, headers, and other good JMS-to-Flex data onto flexMessage

        return flexMessage;
    }
}

The Converter

import org.springframework.integration.jms.HeaderMappingMessageConverter;
import org.springframework.integration.core.Message;
import javax.jms.Session;

public class IntegrationMessageConverter extends HeaderMappingMessageConverter {

    // Converts from a JMS Message to an Integration Message. You should try/catch here; cut for brevity
    public Object fromMessage(javax.jms.Message jmsMessage) throws Exception {
        return (Message) super.fromMessage(jmsMessage);
    }

    // Converts from an Integration Message to a JMS Message. You should try/catch here; cut for brevity
    public javax.jms.Message toMessage(Object object, Session session) throws Exception {
        return super.toMessage(object, session);
    }
}

JMS Asynchronous Reception

In my jmsConfig.xml I configured one Spring MessageListenerAdapter which I have wired with a message delegate, a POJO and its overloaded handleMessage method name:

<bean id="messageListenerAdapter" class="org.springframework.jms.listener.adapter.MessageListenerAdapter">
    <property name="delegate" ref="defaultMessageDelegate"/>
    <property name="defaultListenerMethod" value="handleMessage"/>
    <property name="messageConverter" ref="simpleMessageConverter"/>
</bean>

As the application loads and all Spring beans are initialized, I initialize all of my JMS Destinations. As I do this, I also initialize a MessageListenerAdapter for each Destination. I have a stateless JMSService, which is called by another service, MessagingGateway, to initialize each Destination, and which calls PollingListenerContainerFactory to create child MessageListenerAdapters for each Destination. The adapters are configured based on an abstract parent configuration:

<bean id="abstractListenerContainer" abstract="true" destroy-method="destroy">
    <property name="connectionFactory" ref="pooledConnectionFactory"/>
    <property name="transactionManager" ref="jmsTransActionManager"/>
    <property name="cacheLevel" value="3"/>
    <property name="taskExecutor" ref="taskExecutor"/>
    <property name="autoStartup" value="true"/>
</bean>

Snippet from PollingListenerContainerFactory:

/**
 * Gets the parent from the IoC to reduce runtime config and
 * resources needed to create children.
 * <p/>
 * DefaultMessageListenerContainer is responsible for all threading
 * of message reception and dispatches into the listener for processing.
 * Supports dynamically scaling to more consumers during peak loads.
 * @param destination
 * @param messageListener
 * @return the container registered for this destination
 */
public static DefaultMessageListenerContainer createMessageListenerContainer(Destination destination, MessageListener messageListener) {

    ChildBeanDefinition childBeanDefinition = new ChildBeanDefinition("abstractListenerContainer", configureListenerContainer(destination, messageListener));

    String beanID = IdGeneratorUtil.getStringId();
    ConfigurableListableBeanFactory beanFactory = ApplicationContextAware.getConfigurableListableBeanFactory();
    ((DefaultListableBeanFactory) beanFactory).registerBeanDefinition(beanID, childBeanDefinition);

    return (DefaultMessageListenerContainer) ApplicationContextAware.getBean(beanID);
}

/**
 * Configures the child listener container, based on the parent in jmsConfig.xml.
 * <p>Configures Queue or Topic consumers:
 * Queue: stick with 1 consumer for low-volume queues (the default)
 * Topic: there is no need for more than one concurrent consumer
 * Durable subscription: only 1 concurrent consumer supported
 * <p/>
 * props.addPropertyValue("messageSelector", ""); sets a message selector for this listener
 * @param destination
 * @param messageListener
 * @return the property values for the child container definition
 */
private static MutablePropertyValues configureListenerContainer(Destination destination, MessageListener messageListener) {
    MutablePropertyValues props = new MutablePropertyValues();
    props.addPropertyValue("destination", destination);
    props.addPropertyValue("messageListener", messageListener);

    // Enable throttling on peak loads
    if (destination instanceof Queue) {
        props.addPropertyValue("maxConcurrentConsumers", 50); // modify to needs
    }
    // Override the default point-to-point (Queue) setting
    if (destination instanceof Topic) {
        props.addPropertyValue("pubSubDomain", true);
    }

    return props;
}

This is overkill for this topic, but it’s cool stuff. So we now have a JMS listener for asynchronous JMS reception as well as Flex. Now let’s look at the message delegate we wired into the MessageListenerAdapter:

public interface MessageDelegate {

    void handleMessage(String message);

    void handleMessage(Map message);

    void handleMessage(Serializable message);
}

Pretty simple, right? It’s a POJO with absolutely no JMS code whatsoever, yet it receives messages asynchronously. How does it work? Spring abstracts the JMS code and calls it behind the scenes. If you look at the source of Spring’s SimpleMessageConverter, it does the fromMessage()/toMessage() handling for you and passes the resulting “message” into the appropriate overloaded method above. This is great for simple message abstraction, but the JMS–Flex and Spring Integration code above is an example of more complicated handling. With clients you often need to translate and transfer the data from one message type to another. In the adapter code above, you would use the handleMessage() method to get the message from Spring Integration into the message type of your choice – here, a Flex message.


Rolling Your Own Java Memory Profiler

I was looking around for some code to build a unit test that could isolate various aspects or collaborators in an application, and after trying a couple of solutions that didn’t seem right, I found “Do you know your data size?” by Vladimir Roubtsov. It is impossible to do any true basic measurement naively, and I like that he brings up these points:

  • A single call to Runtime.freeMemory() proves insufficient because a JVM may decide to increase its current heap size at any time (especially when it runs garbage collection). Unless the total heap size is already at the -Xmx maximum size, we should use Runtime.totalMemory()-Runtime.freeMemory() as the used heap size.
  • Executing a single Runtime.gc() call may not prove sufficiently aggressive for requesting garbage collection. We could, for example, request object finalizers to run as well. And since Runtime.gc() is not documented to block until collection completes, it is a good idea to wait until the perceived heap size stabilizes.
  • If the profiled class creates any static data as part of its per-class class initialization (including static class and field initializers), the heap memory used for the first class instance may include that data. We should ignore heap space consumed by the first class instance.

The basic idea is Runtime.totalMemory() - Runtime.freeMemory() as the used heap size. Vladimir includes the code, and I’ll soon post my JUnit test, which simplifies this.
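A minimal sketch of those three points (my own simplification, not Vladimir’s code): stabilize the used-heap reading around GC, ignore the first instance, then average over many instances:

```java
// Sketch of the measurement idea above: totalMemory() - freeMemory() is the
// used heap, read only after the GC has been coaxed into settling.
public class SizeOf {

    // Request GC until the used-heap reading stops shrinking (or we give up)
    public static long usedMemory() {
        Runtime rt = Runtime.getRuntime();
        long used = rt.totalMemory() - rt.freeMemory();
        for (int i = 0; i < 10; i++) {
            System.gc();
            System.runFinalization();
            try { Thread.sleep(50); } catch (InterruptedException e) { break; }
            long now = rt.totalMemory() - rt.freeMemory();
            if (now >= used) break; // perceived heap size has stabilized
            used = now;
        }
        return used;
    }

    public static void main(String[] args) {
        final int count = 100000;
        Object[] keep = new Object[count];  // strong references so nothing is collected
        keep[0] = new byte[16];             // first instance may carry static init cost: ignore it
        long before = usedMemory();
        for (int i = 1; i < count; i++) keep[i] = new byte[16];
        long after = usedMemory();
        System.out.println("approx bytes per instance: " + ((after - before) / (count - 1)));
    }
}
```

The per-instance number is approximate by nature; averaging over many instances just smooths out allocator and header overhead.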


SpringSource First Milestone of Spring BlazeDS Released

SpringSource released its first milestone of Spring BlazeDS integration. BlazeDS is the free version of Adobe’s LCDS.

“This first milestone is very much a foundational release, focusing on support for configuring and bootstrapping the BlazeDS MessageBroker (the central component that handles incoming messages from the Flex client) as a Spring-managed object, routing HTTP messages to it through the Spring DispatcherServlet infrastructure, and easily exporting Spring beans as destinations for direct Flex remoting. Future milestones leading up to the final 1.0 will build upon this foundation to provide deeper features such as Spring Security integration, messaging integration using Spring’s JMS support, an AMFView for use in conjunction with Spring 3.0’s REST support, and hopefully further things to address the needs of our community that we haven’t thought of yet.” – Jeremy Grelle, SpringSource

More info:

Reference Docs

I’ve been developing a framework for messaging with Flex, LiveCycle Data Services, Java, and JMS, and will give this first release a test run.


DRY, Separation of Concerns, and AOP

I like this distillation of the DRY principle

“Every piece of system knowledge should have a single, authoritative, unambiguous representation” – Dave Thomas

as it goes with the idea of Separation of Concerns that each module

  • does one thing
  • knows one thing
  • keeps the secret of how it does that thing hidden

Combined, these are a good rule, but in practice how do you keep your code aligned with these principles and implement a few standard requirements such as

  • All service layer operations must be wrapped in a transaction
  • All transactions are secure
  • Send JMS and / or email messages to users if certain business criteria in the service layer logic are met
  • Expose usage statistics for data access objects

OO techniques such as Proxy, Decorator, Command, and Callbacks can help a design stay DRY but these work-arounds are easily “invasive, cumbersome, and weakly typed” (SpringSource). So what pattern do we see emerging? The above examples share that

1:1 principle

“There should be a clear, simple, 1:1 correspondence between design-level requirements and the implementation”

  • 1:n violations of a single design requirement is scattered across multiple parts of an implementation
  • n:1 implementation details for multiple requirements are tangled up inside the same module
  • n:n it’s all mixed up (typical case)

- SpringSource

So why does this matter? Consider a system, even a software system, as a changing, growing organism in space and time vs a static system. It needs to be agile, easy to maintain and scale, easy for new people to contribute to (no spaghetti code or design), etc. Enter AOP.

AOP (Aspect Oriented Programming) is a modularity technology allowing us to decouple application layers and modules through several implementation options and technologies. Which technology you choose depends on constraints and fit of needs. Some, such as Spring AOP, are easier to start with yet limiting, while others, such as AspectJ, require adding another technology and language yet allow for deeper decoupling – though even that is debatable.
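As a self-contained illustration of why the Proxy technique mentioned above works but stays invasive, here is a sketch with plain java.lang.reflect – no Spring or AspectJ, and all service names hypothetical. The “wrap every service operation in a transaction” requirement lives in exactly one InvocationHandler instead of inside every method body, but each service must still be wrapped by hand, which is the weakness AOP removes:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;

interface AccountService {
    String transfer(String from, String to);
}

class AccountServiceImpl implements AccountService {
    public String transfer(String from, String to) {
        return "transferred " + from + " -> " + to;
    }
}

// The cross-cutting transaction concern, in one place
class TransactionHandler implements InvocationHandler {
    private final Object target;

    TransactionHandler(Object target) { this.target = target; }

    public Object invoke(Object proxy, Method method, Object[] args) throws Throwable {
        // stand-ins for tx.begin() / tx.commit() / tx.rollback()
        System.out.println("begin tx: " + method.getName());
        try {
            Object result = method.invoke(target, args);
            System.out.println("commit tx: " + method.getName());
            return result;
        } catch (Throwable t) {
            System.out.println("rollback tx: " + method.getName());
            throw t;
        }
    }
}

public class ProxyDemo {
    public static AccountService transactional(AccountService target) {
        return (AccountService) Proxy.newProxyInstance(
                AccountService.class.getClassLoader(),
                new Class<?>[] { AccountService.class },
                new TransactionHandler(target));
    }

    public static void main(String[] args) {
        AccountService service = transactional(new AccountServiceImpl());
        System.out.println(service.transfer("checking", "savings"));
    }
}
```

With AOP, the pointcut (“all service layer operations”) replaces that manual wrapping, restoring the 1:1 correspondence between the requirement and its implementation.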


This next topic will be in my next blog entry..