Helena Edelson

Akka and Spring Committer, Scala, FP, Cloud Computing, Data Mining, Machine Learning, Distributed Architecture, API Design

Archive for the 'Software Development' Category

An Agile Software Development Lifecycle – Increasing Efficiency and Throughput, Decreasing Bottlenecks

Posted by Helena Edelson on 25th February 2012

 

This post is in progress. It comes from years of finding things that work, as well as slamming into the pain of either not having the authority to change the process (that’s called using power for good, people) or saying, “Good luck with that”. One thing I have learned in all the orgs I have frequented is that most orgs with problems recognize them and sincerely want, and are trying, to improve the process. Getting the process right is an art.


SDLC – Increasing Efficiency and Throughput, Decreasing Bottlenecks


Whether you are new to Agile or seasoned, you should read The Agile Manifesto – poetry for the weary.

 

Software development lifecycle improvements can increase throughput, manageability, and maintainability, vastly increase overall efficiency, and decrease bugs. A major factor is improving the agile process for the organization (from planning to deploys), which requires not only educating staff but also getting employee buy-in to the process. Jumping into this with a system that needs re-architecting could create more bottlenecks and technical debt, so the timing and rollout of new processes must be planned well.

 

Planning And Requirements


Ensure that requirements are broken down into what can be completed and demoed correctly in each sprint. Scoping each unit of work for a feature, enhancement, bug fix, or task must be done in such a way that dependencies are never a bottleneck and goals are achievable and testable. The business should set requirements at an achievable scope, sometimes partitioning work over multiple sprints.

 

Tooling


Do not overlook a very important part of the process that lends to success – your process tooling. Tools like JIRA are often underutilized. Leveraged properly, JIRA makes the work and process transparent, improving not only efficiency but the accuracy of implementations, testing, and deploys. If you don’t already have Greenhopper for JIRA, get it. JIRA is not just a bug tool, it is an SDLC tool.

 

  • I would not do a project without JIRA (with Greenhopper)
  • I would not do a project without Git as the repo. Anything else and you are asking for pain and incredible inefficiency

Turning Requirements Into Stories


This may be the starting directive: Build in real-time asynchronous stock ticker updates to the client. In Agile, the Story should look like this:

 

As a User I want to receive real-time updates of stocks I have selected to watch, so that I can take immediate, accurate advantage of market fluctuations

We cannot proceed without covering dependencies. A functional dependency is something that requires or depends on another thing being done first. Example: if I want to tweet, I have a dependency on an internet connection to send the tweet in order to accomplish that task.

This assumes or highlights many things, for instance: what could be done in parallel as tasks of the same story? What might more appropriately be completed in a prior Sprint?

  • Is there a UI dependency to create a form where the user can select stocks to watch and submit the selection?
  • Does a new db table need to be created for persisting stock selections by user?
  • What in the messaging system needs to be added to do this?
  • What could be done in parallel as tasks of the same story?
  • How can this be architected to allow for consecutive sprint development?
  • How can QA best test this over each sprint until the final task is committed?
  • Do we put this in a feature branch or roll each part in by sprint so QA can test?

A business dependency, which may also be called a requirement, is something engineering needs from the business in order to complete a Story or Task. For example:

As a business owner I want to authorize all incoming user requests from the client to ensure that the user has been granted rights to do what they are requesting to do in the system.

Say the Product Manager brings this up to the team in a Sprint Planning meeting and reviews the requirements, and one of the engineers says, “I see a requirement missing: there is no failure scenario”. If the client is simply making REST API calls to the server, then any client can call what it has not been granted authority to do. A failure plan is needed: what is generated on the server and sent to the caller/client, and, if the client is a UI, how the failure is displayed. This could require a new UI task and time; one way to centralize the server side of that plan is sketched below.
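Here is a minimal sketch of such a failure plan, assuming Spring MVC and Spring Security – the class name and message are illustrative, not from any real project. One handler defines what every authorization failure looks like on the wire, so a UI client has something predictable to render:

import java.util.Collections;
import java.util.Map;

import org.springframework.http.HttpStatus;
import org.springframework.security.access.AccessDeniedException;
import org.springframework.web.bind.annotation.ControllerAdvice;
import org.springframework.web.bind.annotation.ExceptionHandler;
import org.springframework.web.bind.annotation.ResponseBody;
import org.springframework.web.bind.annotation.ResponseStatus;

// One place decides the failure scenario for unauthorized REST calls:
// the server returns 403 Forbidden with a small payload the client can display.
@ControllerAdvice
public class AuthorizationFailureAdvice {

    @ExceptionHandler(AccessDeniedException.class)
    @ResponseStatus(HttpStatus.FORBIDDEN)
    @ResponseBody
    public Map<String, String> onDenied(AccessDeniedException e) {
        // the new UI task from the story: render this message to the user
        return Collections.singletonMap("error", "You are not authorized to perform this action.");
    }
}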

 

Story Points And Time Boxing


Possibly the most highly-debated part of any SDLC process methodology, whether agile, lean or other.
Planning Poker is an interesting read.

Planning And Effective Use Of Tooling


First, a Project Manager should identify the availability in hours per day of each team member for a sprint to get the total hours of the team for a sprint. I should note here that I have fought with QA many times over the years and may be biased.

 

  • Product Manager Creates a Story ticket that will have child tasks
    • The business ensures that business requirements are accurately and granularly described in the story’s description
    • Team (Dev/QA) assesses the business requirements, and identifies questions or information lacking by the business
    • Project Manager assigns the Story ticket to the business/Product Manager to complete – and everyone can see where in the process this ticket is. When complete, the business/Product Manager assigns ticket back to Project Manager
    • Product Manager and Project Manager review with the team
      • If team deems requirements are adequate to proceed, Project Manager can schedule the Story
    • Team (Engineer(s) – Dev) votes on the number of story points to assign to the Story – how many days will it take to complete? What is the difficulty of the story?
    • Does this story have dependencies?
      • If so, are they completed or scheduled? How long will they take?
      • Does this story require an architect before developers come on?
      • Do we need to prototype or vet a new technology decision or strategy first?
      • Is there a functional or UI dependency?
      • Dependencies in the same sprint must be scheduled in such a way that no dependency becomes a bottleneck for the tasks that depend on it
    • Team (Engineer(s) – Dev) create Child Tasks under Story – if a junior team, Team Lead ensures accuracy and that nothing is left out
      • Developers task out what needs to be done to complete a scoped story
      • Developers pick task tickets and assign to self
      • Developer / QA timeboxes task – how many hours will this take?
    • Team (Engineer(s) – QA) create Child Tasks under Story
      • Create task ticket to create test plan for story
      • Review test plan with the business and developers
      • Attach approved test plan to Story’s JIRA ticket
    • more coming

Now ensure that the hours it will take to complete the work are achievable given the availability of the team. For example, five developers with six focused hours a day over a nine-day sprint gives you 270 hours of capacity to schedule against.

 

Development – and Effective Use Of Git


  • TDD – Test Driven Development
    • 1:1 correlation between requirements from the business and unit/integration test methods
    • Unit test: testing a method or small unit of code
    • Integration test: testing a Service method that covers multiple components (DAO, Util, Messaging, etc.)
    • Tests must be written in such a way that they are OS and Environment independent
    • Tests must be written in such a way that they can be run by an automated process – Continuous Integration (CI)
    • If CI tests fail, the team should be notified by email that tests have failed, and failures attended to quickly
    • Before code check in
      • Latest changes in the current branch should be pulled in
      • The new code must compile with the latest code in the branch
      • Tests should be run by the developer and pass
        • This means that every developer is responsible for all of their tests passing when a team mate checks out the latest code
      • New code must be deployed successfully by the developer
  • Coding Standards
    • Every org needs coding standards and a means to ensure that such standards are being implemented
      • Code Reviews
      • Developer training
      • Best Practices instruction
      • Lunch and Learn presentations for the team
      • Team education of the system, tools, processes – the more everyone knows, the faster and less error prone things will be
      • How to properly test code
    • Architecture enforcement with AOP in the codebase (see the sketch after this list)
      • Automate catching poor architecture or code implementations at dev-time
        • No engineer has time to stay on top of static code standard documentation
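As a minimal sketch of that enforcement idea, assuming AspectJ compile-time weaving and illustrative package names: a code-style aspect with declare error fails the build the moment someone codes around the architecture, instead of relying on a standards document no one reads.

// Compiled with ajc (e.g. via the AspectJ Maven plugin); violations surface
// as compile-time errors in the IDE rather than in a code review weeks later.
public aspect ArchitectureEnforcement {

    // The web layer must never call the DAO layer directly.
    declare error
        : call(* com.example.dao..*.*(..)) && within(com.example.web..*)
        : "Web components must go through the service layer, not the DAOs.";
}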

Technical Debt

If an org is accumulating technical debt partly because of the business – sales and marketing pressing for new features and bug fixes faster than engineering can both resolve existing issues and do new work properly the first time – then risk management of the system is put on the back burner and the debt accumulates. This will eventually blow up.

Technical debt time must be built into R&D sprints, and is often handled in dedicated sprints of its own.

 

QA, Staging, Deploys, and Rollback Strategies


I have not written this section yet, but it is not a free-for-all. Deploys should be automated; if you need to deploy manually, there is something wrong with your application architecture and/or deployment architecture.


Posted in Agile, AOP, Software Development, Software Development Lifecycle | No Comments »

Open Source Artificial Intelligence for the Cloud

Posted by Helena Edelson on 18th September 2010

The idea of introducing intelligent computing so that software can make decisions autonomously and in real time is not new, but it is relatively new to the cloud. Microsoft currently seems to be the strongest force in this realm, but its work is not open source. The idea of clouds that manage themselves steps way beyond Cloud Ops as we know it, and is the only logical, and necessary, next step.

I’ve been researching various open source AI platforms and learning algorithms to start building a prototype for cloud-based learning and action for massive distributed Cloud Ops systems and applications. One could offer an AI cloud appliance eventually, but I will start with a prototype and build on it using RabbitMQ (an AMQP messaging implementation), CEP, Spring, and Java as a base. I’ve been looking into OpenStack, the open source cloud computing software being developed by NASA, Rackspace, and many others. Here is the future GitHub repo: http://github.com/helena/OCAI.

Generally, to do this sort of project you need a large amount of funding and engineers with PhDs, neither of which I have. So my only alternative is to see how we might take this concept and create a lightweight open source solution.

Imagine having some of these (not a comprehensive list) available to your systems:

AI Technology

  • Planning and Scheduling
  • Heuristic Search
  • Temporal Reasoning
  • Learning Models
  • Intelligent Virtual Agents
  • Autonomic and Adaptive Clustering of Distributed Agents
  • Clustering Autonomous Entities
  • Constraint Predictability and Resolution
  • Automated Intelligent Provisioning
  • Real-time learning
  • Real-world action
  • Uncertainty and Probabilistic Reasoning
  • Decision Making
  • Knowledge Representation

I will go into these topics further in future posts.

About me: I am currently an engineer at SpringSource/VMware on the vFabric Cloud Application Platform side of things; however, this project is wholly outside of that – I feel the need to pursue something I’m interested in. I work with the RabbitMQ team and am a member of the AMQP Working Group.


Posted in AI, AMQP, Artificial Intelligence, CEP, Cloud, Cloud Ops, Java, Messaging, Open Source, RabbitMQ, Software Development, Spring | No Comments »

Patterns of Scalability and Pathways in Systems

Posted by Helena Edelson on 4th April 2010

Note: this is just a draft – I was trying to fall asleep, started to think, and this is what I thought about.

In college I wrote a 300-page senior thesis entitled, “Energy Pathways In Biological Systems”. It ranged from genetics and microbiology up through complex pathways of micro-climates to ecosystems, and even pathways of migratory animals (Arctic Wolves that cover at least 1,000-mile territories, to Terns and Whales whose annual migration patterns cover half the earth). For each there is movement of elements in space and time. I had a blast researching and writing it, but my fascination revolved around a shared concept: systems at seemingly vast discrepancies of scale actually share massive similarities, and scale, being measurable only against other scales, exists only in a relative framework of complexity.

Think of a system as an atom. There are layers, or levels, and activity going on all the time. Now think of this atomic model with pathways repetitively used for resources to move, kind of like corridors. Resources behave differently from other resources, thus the corridors are different, the speed of motion is different, and the size is different. Production and consumption of those resources also differ. Entropy works differently based on environment, among other factors.

The pathways of genetic information through a cell move on a seemingly small scale compared to Nitrogen pathways in a rain forest, but the complexity in a cell, looking down to the smaller elements in that system, is great. In a rain forest, Nitrogen molecules and everything they interact with as they move through that system, looking up to the larger components, are equally great – and looking down to the components that amass that system, we see the same thing.

Think about that – scale, if only quantifiable by the scale of other systems, is relative. So what could lend to differentiation? Complexity could be an important part of the equation. So what about this: System A is larger and has more components than System B. System B is less complex than System A. If both systems are replicated many times and distributed, which may fare better? Hard to say with such a limited theoretical idea, but consider Occam’s Razor – essentially, the simplest way is the best. In mechanics, the fewer moving parts, the fewer points of failure. A human is a complicated system; a virus is a very simple system, and yet a virus can so easily attack the more complicated system. Cells replicate very quickly, and each new cell gets its own copy of the genetic instructions the original parent cell had. I’m rambling, but just trying to give some simple examples to think about.

So how do we properly think about scale in systems? What can we learn from successful patterns of scalability that are all around us?


Posted in Software Development | No Comments »

Automating The Deployment Process

Posted by Helena Edelson on 4th March 2010

How many companies fully automate their deploy process? This is not a new idea, but I am bringing it up because it is a very important one. I just read and really like the idea discussed by Martin Fowler here, http://martinfowler.com/bliki/BlueGreenDeployment.html, put forth by Dave Farley and Jez Humble. If you think of the SDLC in terms of a manufacturing plant, and wrap the ideas of LEAN and Six Sigma around that framework, not using this strategy actually contributes hundreds of hours per year, in high-release companies, to bottlenecks and increased constraints on flow. The sample of blue-green deployment shows that by simply setting up the proper environment, we can easily flip the switch on which instance is production. There are of course many strategies for this – some excellent ones in the cloud, easily transferable elsewhere, particularly when we think in terms of OSGi – but the point remains the same: critical to do for many business justifications, many ways to do it. A small sketch of the idea follows.
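To make the “flip the switch” idea concrete, here is a minimal sketch in Java – the environment URLs, names, and the atomic-reference router are illustrative assumptions, not part of Fowler’s write-up:

import java.util.concurrent.atomic.AtomicReference;

// A toy blue-green router: traffic goes to whichever environment the
// reference currently points at; deploys go to the idle one.
public class BlueGreenRouter {

    private static final String BLUE  = "http://blue.internal:8080";   // hypothetical
    private static final String GREEN = "http://green.internal:8080";  // hypothetical

    private final AtomicReference<String> live = new AtomicReference<String>(BLUE);

    public String liveUrl() { return live.get(); }
    public String idleUrl() { return BLUE.equals(live.get()) ? GREEN : BLUE; }

    // Deploy and smoke-test against idleUrl(), then flip production atomically.
    // Rollback is the same operation in reverse: flip back.
    public void flip() { live.set(idleUrl()); }
}

The point is that cutover and rollback become the same cheap, reversible operation, which is what removes the deploy bottleneck.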


Posted in Software Development, Software Development Lifecycle | No Comments »

Spring Roo 1.0.0 Released

Posted by Helena Edelson on 31st December 2009

Just released today!

http://blog.springsource.com/2009/12/31/spring-roo-1-0-0-released/


Posted in Configuration Management, Java, Software Development, Spring, Spring ROO | No Comments »

Spring Roo

Posted by Helena Edelson on 22nd October 2009

Tuesday I went to Ben Alex’s presentation on Roo at SpringOne. He literally built a basic but pretty comprehensive web application in ten minutes and walked us through it. I’ll write some more about it soon, but the main project site is springsource.org/roo, which pretty much says it all.


Posted in Java, Software Development, Spring, Spring ROO | No Comments »

Decoupling Asynchronous Messaging With Spring JMS and Spring Integration

Posted by Helena Edelson on 4th October 2009

The importance of decoupling in applications is vital, but it is not easy to do well, and I am constantly working to improve my strategies. Even more important is the role of Messaging in an enterprise context and in the design. I think Message-driven architecture is perhaps the most important to integrate into applications in terms of its scope of applicability and how it lends to scalability. In this vein, Spring Integration becomes a highly intriguing element worthy of study, testing, and integration. I barely touch on Spring Integration here, particularly in its standard usage, and simply use a few classes programmatically to decouple the JMS implementation from its clients.

Messaging and Decoupling

Messaging is everywhere and so subtle we are not aware of it; it just happens all around us, all the time. Consider this: in Genetics, on a cellular level (which is nongranular in the scope of genetics itself), the major elements in a cell communicate – but how? They are not connected by any physical construct save the mutual environment they are in, so how do they do it? They message each other with signals, receptors, and a means to translate those messages, which contain instructions.

In Gene expression, DNA/RNA within eukaryote cells (think systems within systems within systems… elements moving in space and time under specific, ever-changing constraints and environmental fluctuations – put that in your Agile timebox!) communicate by transmitting messages, intercepting, translating, and even performing message amplification. There are specialized elements – mRNA, Messenger RNA, specifically – which are translated by signal recognition particles… Cool, right? But this happens all around us, outside, in space, everywhere. And it is all decoupled, and therein lies the beauty of messaging.

So what about our applications? Here is one very simple, isolated application of using Spring Integration (a low-level usage, not very sophisticated) to decouple your client and server messaging code:

So I wrote a little Java package that integrates Spring JMS, ActiveMQ, Spring Integration, and Flex Messaging for the front end, which hooks into either BlazeDS or Livecycle Data Services. I had a bunch of constraints to solve for: all Destinations had to come from the database, and there were lifecycle issues around the timing of element initialization between IoC bean creation and the Flex Messaging elements being created with what they needed (such as the Flex Messaging adaptors), which I resolved by bootstrapping Flex from Java. In another post I will go into the JMS package further. For the topic here, let’s focus on the JMS–Spring JMS–Spring Integration bridge.

The image to the left shows the layout of my jms package to facilitate the mapping. In this system, messages come in from 2 areas: the client and Java services on the server. Complicated systems will have many more, but let’s talk about the 2 that most systems would have. The client sends messages to the server that are both user messages and operational messages from the system. Java services send messages when certain business rules and criteria are triggered, passing business data – any message-aware object could be sending messages.

Sending on the server

To ensure proper usage, I created an interface that a service must implement to send to ActiveMQ, called JMSClientSupport. Note all code in this post is simplified. In the full version I actually have it returning a validation message if errors occurred, so that a business service developer could implement handling per requirements.

A Business Entity
public class Foo implements Serializable {...}

A Service
public class FooServiceImpl implements BusinessService, FooService, SMTPClientSupport, JMSClientSupport {

    public void insert(Foo foo) {
        // do some important business stuff
        publish(foo);
    }

    public void publish(Object object) {
        if (someBusinessValidation((Foo) object)) {
            jmsService.send(destinationId, object);
        }
    }
}


public interface JMSClientSupport {
    void publish(Object object);
}

Sending from the Client

You could have any client sending messages of any nature to the server. In this case I am using Flex. Messages of type <T> are wrapped on the client as an IMessage {AsyncMessage, CommandMessage, etc}. When these messages make their way through Blaze or Livecycle, I have it wired to hit this Java adapter, which is held in a Hash per FlexDestination for a 1:1 FlexDestination : JMS Destination mapping by Flex.

For this example I am implementing the JMS MessageListener to show tight coupling as well as decoupling:

public class FlexMessagingAdapter extends MessagingAdapter implements MessageListener {

    // Invoked by Flex when a message comes in from the client to this adapter's Destination
    public Object invoke(Message message) {
        // A custom interceptor that extracts partition Destination info, like ActiveMQ
        // message groups or subtopics (e.g. STOCKS.NASDAQ), for more specific routing
        String partition = new DestinationPartitionInterceptor(message).intercept();
        jmsService.send(destination, new IntegrationMessageCreator(message, partition));
        return null;
    }


    // Decoupled: Invoked when a Message is received from the Spring Integration channel
    public void handleMessage(org.springframework.integration.core.Message<?> message) { .... }

    // Sets the Spring Integration MessageChannel for sending and receiving messages
    public void setMessageChannel(MessageChannel messageChannel) {
        this.messageChannel = messageChannel;
    }

    // Tightly coupled to JMS by the MessageListener.onMessage() method
    public void onMessage(javax.jms.Message jmsMessage) {
        flex.messaging.messages.Message message =
            new IntegrationMessageCreator(jmsMessage).createMessage(getDestination().getId(), getSubscribers());
        if (getMessageService().getMessageBroker().getChannelIds().contains("streaming-amf")) {
            MessageBroker broker = MessageBroker.getMessageBroker(null);
            broker.routeMessageToService(message, null);
        } else {
            getMessageService().pushMessageToClients(message, true);
        }
    }
}

The Transformer: Where Messages Intersect

I have a second post that shows an even more decoupled messaging strategy with Spring Integration, but this is purely a basic idea using Flex Messaging, Spring Integration, Spring JMS, and ActiveMQ. I will post the broader strategy next :)

Step 1: Client messages are transformed here by extending the JMS MessageCreator. In this class I pull out the data from any Object type, but specifically the Flex Message and JMSMessage types.

public class IntegrationMessageCreator implements MessageCreator {

    // a few constructors here to handle multiple message types: JMSMessage, Flex Message, Object message, etc.

    private MessageBuilder createBuilder() {
        MessageBuilder builder = null;
        if (this.object != null) {
            builder = MessageBuilder.withPayload(object);
        } else if (this.flexMessage != null && flexMessage.getBody() != null) {
            builder = MessageBuilder.withPayload(flexMessage.getBody()).copyHeaders(flexMessage.getHeaders());
        }
        // ActiveMQ Message Groups
        if (this.partition != null) builder.setHeader(MessageConstants.Headers.JMSXGROUPID, partition);

        return builder;
    }

    // to JMS
    public javax.jms.Message createMessage(Session session) throws JMSException {
        return new IntegrationMessageConverter().toMessage(createBuilder().build(), session);
    }

    // To Flex
    public flex.messaging.messages.Message createMessage(String destinationId, int subscribers) {
        Message integrationMessage = (Message) new IntegrationMessageConverter().fromMessage(this.jmsMessage);

        flex.messaging.messages.Message flexMessage = new AsyncMessage();
        flexMessage.setBody(integrationMessage.getPayload());
        flexMessage.setDestination(destinationId);
        flexMessage.setHeaders(integrationMessage.getHeaders());
        // ...and other good JMS-to-Flex data

        return flexMessage;
    }
}

The Converter


import org.springframework.integration.jms.HeaderMappingMessageConverter;
import org.springframework.integration.core.Message;
import javax.jms.Session;

public class IntegrationMessageConverter extends HeaderMappingMessageConverter {

    // Converts from a JMS Message to an Integration Message. You should do a try/catch but I cut it out for brevity
    public Object fromMessage(javax.jms.Message jmsMessage) throws Exception {
        return (Message) super.fromMessage(jmsMessage);
    }

    // Converts from an Integration Message to a JMS Message. You should do a try/catch but I cut it out for brevity
    public javax.jms.Message toMessage(Object object, Session session) throws Exception {
        return super.toMessage(object, session);
    }
}

JMS Asynchronous Reception

In my jmsConfig.xml I configured one Spring MessageListenerAdapter, which I have wired with a message delegate (a POJO) and its overloaded handleMessage method name:

<bean id="messageListenerAdapter" class="org.springframework.jms.listener.adapter.MessageListenerAdapter">
    <property name="delegate" ref="defaultMessageDelegate"/>
    <property name="defaultListenerMethod" value="handleMessage"/>
    <property name="messageConverter" ref="simpleMessageConverter"/>
</bean>

As the application loads and all Spring beans are initialized, I initialize all of my JMS Destinations. As I do this, I also initialize a MessageListenerAdapter for each Destination. I have a stateless JMSService, which is called by another service, MessagingGateway, to initialize each Destination and which calls PollingListenerContainerFactory to create child MessageListenerAdaptors for each Destination. The adapters are configured based on an abstract parent configuration:

<bean id="abstractListenerContainer" abstract="true" destroy-method="destroy"
      class="org.springframework.jms.listener.DefaultMessageListenerContainer">
    <property name="connectionFactory" ref="pooledConnectionFactory"/>
    <property name="transactionManager" ref="jmsTransActionManager"/>
    <property name="cacheLevel" value="3"/>
    <property name="taskExecutor" ref="taskExecutor"/>
    <property name="autoStartup" value="true"/>
</bean>

Snippet from PollingListenerContainerFactory:

/**
 * Gets the parent bean definition from the IoC container to reduce the runtime
 * configuration and resources needed to create children.
 * <p/>
 * DefaultMessageListenerContainer is responsible for all threading
 * of message reception and dispatches into the listener for processing.
 * Supports dynamic scaling up to more consumers during peak loads.
 *
 * @param destination
 * @param messageListener
 * @return the configured container
 */
public static DefaultMessageListenerContainer createMessageListenerContainer(Destination destination, MessageListener messageListener) {

    ChildBeanDefinition childBeanDefinition = new ChildBeanDefinition("abstractListenerContainer", configureListenerContainer(destination, messageListener));

    String beanID = IdGeneratorUtil.getStringId();
    ConfigurableListableBeanFactory beanFactory = ApplicationContextAware.getConfigurableListableBeanFactory();
    ((DefaultListableBeanFactory) beanFactory).registerBeanDefinition(beanID, childBeanDefinition);

    DefaultMessageListenerContainer container = (DefaultMessageListenerContainer) ApplicationContextAware.getBean(beanID);
    container.setDestination(destination);
    return container;
}

/**
 * Configures the child listener, based on the parent in jmsConfig.xml.
 * <p>Configures Queue or Topic consumers:
 * Queue: stick with 1 consumer for low-volume queues (the default is 1)
 * Topic: there's no need for more than one concurrent consumer
 * Durable Subscription: only 1 concurrent consumer supported
 * <p/>
 * props.addPropertyValue("messageSelector", ""); sets a message selector for this listener
 *
 * @param destination
 * @param messageListener
 * @return the property values for the child bean definition
 */
private static MutablePropertyValues configureListenerContainer(Destination destination, MessageListener messageListener) {
    MutablePropertyValues props = new MutablePropertyValues();
    props.addPropertyValue("destination", destination);
    props.addPropertyValue("messageListener", messageListener);

    // Enable throttling on peak loads
    if (destination instanceof Queue) {
        props.addPropertyValue("maxConcurrentConsumers", 50); // modify to needs
    }
    // Override the default Point-to-Point (Queue) setting
    if (destination instanceof Topic) {
        props.addPropertyValue("pubSubDomain", true);
    }

    return props;
}

This is overkill for this topic, but it’s cool stuff. So we now have a JMS listener for asynchronous JMS reception as well as Flex. Now let’s look at the message delegate we wired into the MessageListenerAdapter:

public interface MessageDelegate {

void handleMessage(String message);

void handleMessage(Map message);

void handleMessage(Serializable message);
}

Pretty simple, right? It’s a POJO with absolutely no JMS code whatsoever for asynchronous message reception. How does it work? Spring abstracts the JMS code and calls it behind the scenes: if you look at the source code for Spring’s SimpleMessageConverter, it does the fromMessage()/toMessage() handling for you and passes the resultant “message” into the appropriate overloaded method above. This is great for simple message abstraction, but the JMS–Flex and Spring Integration code above is an example of more complicated handling. With clients you often need to translate and transfer the data from one message type to another. In the adapter code above, you would use the handleMessage() method to get the message from Spring Integration into the message type of your choice – here, a Flex message. For completeness, a sketch of what the wired delegate might look like follows.
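A minimal sketch of what the wired defaultMessageDelegate bean might look like – an assumption on my part, since the class is not shown above:

import java.io.Serializable;
import java.util.Map;

// A plain POJO: the MessageListenerAdapter converts the incoming JMS message
// (TextMessage -> String, MapMessage -> Map, ObjectMessage -> Serializable)
// and dispatches it to whichever overload matches the converted payload.
public class DefaultMessageDelegate implements MessageDelegate {

    public void handleMessage(String message) {
        System.out.println("text payload: " + message);
    }

    public void handleMessage(Map message) {
        System.out.println("map payload with " + message.size() + " entries");
    }

    public void handleMessage(Serializable message) {
        System.out.println("object payload: " + message);
    }
}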


Posted in ActiveMQ, Annotations, Application Configuration, Broker Topology, Configuration Management, Flex, Java, JMS, Messaging, Software Development, Spring JMS | No Comments »

Application Configuration with Spring and Java Annotations

Posted by Helena Edelson on 25th July 2009

There is an ongoing debate about best practices for configuration management: XML with schema support versus annotational metadata. I can make a valid argument either way, and I think the choice should be made on a case-by-case basis. But is XML configuration so bad? I like using annotational configuration for things like Spring Web Services and Spring @MVC, where the configuration is specific to the method and arguments – where if you modify code you are likely to modify configuration. I also like schema config with namespace support for cases where a central, easily maintainable area of configuration, keeping all code clean of metadata, is preferred. One easy way to clean up necessary XML config is to use namespaces.

“Namespaces dramatically improve the Spring XML landscape
• More expressive, less verbose
• Just ask Spring Security where it’s 200+ lines of config went!”
- Chris Beams, Project Lead, Spring JavaConfig

Also, with namespaces, in some cases the best-practices configuration is already done for you, and you can easily leverage convention over configuration to reduce verbosity.

Item two in the list above is absolutely true. When you upgrade from Acegi Security to Spring Security, the difference in configuration volume is night and day, and very cool. Perhaps they realized that if you require users to configure so many framework objects for one application layer, such as security or messaging, you haven’t done a service to your users. They simply automatically register what amounts to best practices, letting the user override and add to the security context as needed. Now that’s helpful.

So what about annotations? I am currently on a large-scale enterprise migration project. Originally we made the decision to go with XML because the team was learning Java and Flex and we didn’t want to add Annotations to the mix for them. As the migrated application’s codebase grows with domain logic, and in parallel its config files for business and framework services, I am moving toward Annotations, at least in my own committed code. At first it was a “Let’s not pollute and add dependencies in the code” issue, and now it is a “Component scanning, very cool… automated registration… no config files to maintain and refactor” issue.

I am blogging, not writing a thesis, so yes, there are more valid arguments, which I leave for others; let’s get to it and look at code. The main topics I’ll cover briefly are implementing JSR-250 support, Spring’s @Autowired, using Qualifiers, and Component Scanning.

JSR-250 Support

Three of the JSR-250 annotations defined in Java EE 5, and available out of the box in Java SE 6, are @Resource, @PostConstruct, and @PreDestroy. There are more in common-annotations.jar.

Enabling Spring’s JSR-250 support
This is as simple as implementing one of these two options in your application’s context.xml:

1. Old School
<bean class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor"/>

2. Namespace, which offers greater functionality than the above option:
<context:annotation-config/>

@Resource
You can implicitly or explicitly name your resource. This is implicit naming by property:
@Resource
private DataSource securityDataSource;

@Resource
public void setSecurityDataSource(DataSource sds) {
    this.securityDataSource = sds;
}

Alternatively, you can name by explicit metadata:

@Resource(name="securityDataSource")
private DataSource dataSource;

@Resource(name="securityDataSource")
public void setDataSource(DataSource dataSource) {
    this.dataSource = dataSource;
}

And if needed, you can disable type-matching fallback as well:

<bean class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor">
    <property name="fallbackToDefaultTypeMatch" value="false"/>
</bean>

Spring Lifecycle Annotations: @PostConstruct and @PreDestroy

In some of my framework services I use two of Spring’s interfaces, InitializingBean and DisposableBean, with their respective methods afterPropertiesSet() and destroy(). Another option, which avoids implementing those interfaces and which I think makes the code much cleaner and less verbose, is the lifecycle annotations:

@PostConstruct public void initialize() { /* on post setter injection */ }

@PreDestroy public void doShutdown() { /* on context close */ }

Yet another option: if you need custom initialize and destroy annotations, you can configure them like so:

<bean class="org.springframework.context.annotation.CommonAnnotationBeanPostProcessor">
    <property name="initAnnotationType" value="com.edelsonmedia.infrastructure.annotations.Initialize"/>
    <property name="destroyAnnotationType" value="com.edelsonmedia.infrastructure.annotations.Destroy"/>
</bean>

which is cool because I like writing annotations for increased customization.

@Autowired

@Autowired resolves dependencies by type and acts on fields, methods, and constructors. There are no method naming convention restrictions, and multiple parameters are accepted.

In Spring 2.5, using the context namespace with <context:annotation-config/> automatically enables the @Autowired annotation.

Field injection:
@Autowired private DataSource dataSource;

Setter Injection:
@Autowired public void setDataSource(DataSource dataSource){...}

Constructor Injection:
@Autowired public ObjectRepository(DataSource dataSource){...}

Method Injection:
@Autowired public void doSetup(DataSource dataSource, Company company) {...}

Attributes for further configuration:
This marks a property as optional, but note that if more than one match is found, this will fail:
@Autowired(required=false)
private SomeObject obj;

This configures a primary candidate from the other types:
<bean id="dataSource" primary="true" class="org.apache.commons.dbcp.BasicDataSource" … />
<bean id="backupDataSource" class="org.apache.commons.dbcp.BasicDataSource" … />



@Qualifier
The @Qualifier annotation is scoped for field, constructor arg and method parameters. This annotation offers named matching functionality to the @Autowired annotation, finer granularity for autowire candidate resolution, and an extension point for custom autowiring qualifiers, which I will show below:

Field Injection
@Autowired
@Qualifier("primaryDataSource")
private DataSource dataSource;

Setter Injection
@Autowired
public void setDataSource(@Qualifier("securityDataSource") DataSource dataSource) {
    this.dataSource = dataSource;
}

Multiple Parameter Method Injection
@Autowired
public void setup(@Qualifier("securityDataSource") DataSource dataSource, SomeObject obj) {
    this.dataSource = dataSource;
    this.obj = obj;
}

Constructor Injection
@Autowired
public AbstractedRepository(@Qualifier("securityDataSource") DataSource dataSource) {
    this.dataSource = dataSource;
}

@Qualifier As Meta-Annotation for Extended or Custom Qualifiers

Define it:
@Qualifier
public @interface VMLoaded { … }

Now use the annotation with @Autowired:
@Autowired
@VMLoaded /* If the annotation provides meaning as is, no value necessary */
private BootstrapManager bootstrapManager;

If you want to register a custom annotation without using @Qualifier as a meta-annotation:

<bean class="org.springframework.beans.factory.annotation.CustomAutowireConfigurer">
    <property name="customQualifierTypes">
        <set>
            <value>org.example.Online</value>
            <value>org.example.Offline</value>
        </set>
    </property>
</bean>

You can define attributes with custom qualifiers. A value attribute can match against a bean name, just as it does for @Qualifier:
@Qualifier
public @interface CompanyCatalog {
    String company();
    int type();
    // ...etc
}

Attributes can resolve against XML metadata or class-level annotation metadata:
@Category(company="someCompany", type="principle")
public class CompanyCatalog implements Catalog {
    // … etc
}

Stereotype Annotations

@Component – a generic component
@Repository – a repository (DAO)
@Service – a stateless service of idempotent operations
@Controller – an MVC controller

All auto-detected components are implicitly named from the non-qualified class name. This:
@Controller
public class AController { … }

is equivalent to:

<bean id="aController" class="org.foo.web.controller.AController"/>

You can set the generated name explicitly, where this:
@Controller("aCatalog")
public class AController { … }

is equivalent to:
<bean id="aCatalog"
      class="org.foo.web.controller.AController"/>

So that whole services-config.xml file you may have, the one that grows and grows, can now be migrated to annotations in your classes, and you can delete the config file.

Component Scanning
In Spring 2.5 they included a new class, ClassPathBeanDefinitionScanner, which accepts packages passed in as arguments and detects any class with declared stereotypes while scanning the base package and its sub-packages. Using component scanning is easy. Simply add the context namespace to your main schema config and add:

<context:component-scan base-package="org.foo"/>

Now you are set to customize the component scanner if needed. Using the @Component stereotype declared on a custom annotation, you can then decorate any class with that annotation simply by the above configuration addition and:

@Component
public @interface MyAnnotation { ... }


@MyAnnotation
public class MyClass { ... }

But what if you need to filter the package scanning? Below demonstrates how to include components with custom filters, and the assignable, aspectj and regex filters:

<context:component-scan base-package="org.foo">
    <!-- custom filter -->
    <context:include-filter type="annotation" expression="foo.Bar"/>
    <context:include-filter type="assignable" expression="foo.Baz"/>
    <context:include-filter type="aspectj" expression="foo..*Service"/>
    <context:include-filter type="regex" expression="foo\.B[a-z]+"/>
</context:component-scan>

For further customization, you can disable the default filters or stereotypes and exclude filters:

<context:component-scan base-package="org.example.web" use-default-filters="false">
    <context:include-filter type="annotation" expression="foo.Bar"/>
    <context:include-filter type="aspectj" expression="foo..*Service"/>
    <context:exclude-filter type="assignable" expression="foo.Bad"/>
</context:component-scan>

Scoping Components
As with bean definitions in XML, the default scope is singleton. To provide any other scope, add the @Scope annotation:

@Controller
@Scope("prototype")
public class MyController { … }

@MyController
@Scope("session")
public class SomeWebComponent { … }

Best Practices
So, in conclusion, you can use annotations and XML configuration together and leverage the best of both within one application context. Here are some thoughts on the topic from Juergen Hoeller, Principal Engineer at SpringSource:
Annotation metadata is in the code

  • Pro: facilitates refactoring
  • Con: forces recompilation

XML externalizes the configuration

  • Pro: configuration is not scattered
  • Con: XML is verbose

Find out more: JSR-250 spec


Posted in Annotations, Application Configuration, Configuration Management, Flex, Java, JMS, Software Development, Spring | No Comments »

Rolling Your Own Java Memory Profiler

Posted by Helena Edelson on 4th July 2009

I was looking around for some code to build a unit test – easy tests to isolate various aspects or collaborators in an application – and after trying a couple of solutions that didn’t seem right, I found “Do you know your data size?” by Vladimir Roubtsov, JavaWorld.com. It’s impossible to do any basic, true measurement naively, and I like that he brings up these points:

  • A single call to Runtime.freeMemory() proves insufficient because a JVM may decide to increase its current heap size at any time (especially when it runs garbage collection). Unless the total heap size is already at the -Xmx maximum size, we should use Runtime.totalMemory()-Runtime.freeMemory() as the used heap size.
  • Executing a single Runtime.gc() call may not prove sufficiently aggressive for requesting garbage collection. We could, for example, request object finalizers to run as well. And since Runtime.gc() is not documented to block until collection completes, it is a good idea to wait until the perceived heap size stabilizes.
  • If the profiled class creates any static data as part of its per-class class initialization (including static class and field initializers), the heap memory used for the first class instance may include that data. We should ignore heap space consumed by the first class instance.

The basic idea is Runtime.totalMemory() - Runtime.freeMemory() as the used heap size. Vladimir includes the code, and I’ll post my JUnit test soon, which simplifies this. In the meantime, a rough sketch of the idea follows.
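Here is a minimal sketch combining the three points above; the loop bound, sleep interval, and class names are my own illustrative choices, not from the article:

import java.util.ArrayList;

public class HeapSizer {

    // Point 1: used heap is totalMemory() - freeMemory(), not freeMemory() alone.
    static long usedHeap() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory();
    }

    // Point 2: a single gc() call is not aggressive enough, so nudge the
    // collector and finalizers until the perceived heap size stops shrinking.
    static long stableUsedHeap() throws InterruptedException {
        long previous = usedHeap();
        for (int i = 0; i < 10; i++) {
            System.gc();
            System.runFinalization();
            Thread.sleep(100);
            long current = usedHeap();
            if (current >= previous) break; // no longer shrinking
            previous = current;
        }
        return previous;
    }

    public static void main(String[] args) throws Exception {
        // Point 3: create and discard a first instance so per-class static
        // initialization is not billed to the object we actually measure.
        Object warmUp = new ArrayList<Integer>();

        long before = stableUsedHeap();
        Object measured = new ArrayList<Integer>(10000);
        long after = stableUsedHeap();

        System.out.println("approx bytes: " + (after - before));
        // keep references alive so the GC cannot reclaim them mid-measurement
        System.out.println(warmUp.hashCode() + " " + measured.hashCode());
    }
}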


Posted in Java, Software Development | No Comments »

SpringSource First Milestone of Spring BlazeDS Released

Posted by Helena Edelson on 18th December 2008

SpringSource released its first milestone for Spring and BlazeDS, the free version of Adobe’s LCDS.

“This first milestone is very much a foundational release, focusing on support for configuring and bootstrapping the BlazeDS MessageBroker (the central component that handles incoming messages from the Flex client) as a Spring-managed object, routing HTTP messages to it through the Spring DispatcherServlet infrastructure, and easily exporting Spring beans as destinations for direct Flex remoting. Future milestones leading up to the final 1.0 will build upon this foundation to provide deeper features such as Spring Security integration, messaging integration using Spring’s JMS support, an AMFView for use in conjunction with Spring 3.0′s REST support, and hopefully further things to address the needs of our community that we haven’t thought of yet.” – Jeremy Grelle, SpringSource

More info: http://www.springsource.org/node/904

Reference Docs http://static.springframework.org/spring-flex/docs/1.0.x/reference/html/index.html

I’ve been developing a framework for messaging with Flex, Livecycle Data Services, Java, and JMS, and will give this first release a test run.


Posted in BlazeDS & LCDS, Flex, Software Development, Spring | No Comments »

Excellent Presentation on Resource Management, Reclamation, and Optimization

Posted by Helena Edelson on 10th December 2008

Great presentation: Resource Management, Reclamation, and Optimization by Greg Skinner


Posted in Software Development | No Comments »

How To Increase Heap Size For IntelliJ

Posted by Helena Edelson on 10th December 2008

IntelliJ is a massive IDE that I love to develop in. However, it requires a lot of space to roam, so here’s how to increase the memory so it doesn’t slow down your activity:

1. In /{path}/{IDEA_HOME}/bin/idea.vmoptions
2. Change your settings to
-Xms128m
-Xmx512m
-XX:MaxPermSize=128m
-ea


Posted in Software Development | No Comments »

IntelliJ 8 Review

Posted by Helena Edelson on 5th November 2008

JetBrains’ IntelliJ IDEA 8 is out and I have been using it for a few weeks. So far I think it’s great, but I will add a full review after returning from SpringOne.


Posted in Java, Software Development | No Comments »

DRY, Separation of Concerns, and AOP

Posted by Helena Edelson on 29th June 2008

I like this distillation of the DRY principle

“Every piece of system knowledge should have a single, authoritative, unambiguous representation” – Dave Thomas

as it goes with the idea of Separation of Concerns that each module

  • does one thing
  • knows one thing
  • keeps the secret of how it does that thing hidden

Combined, these are a good rule, but in practice how do you keep your code aligned with these principles and implement a few standard requirements such as

  • All service layer operations must be wrapped in a transaction
  • All transactions are secure
  • Send JMS and / or email messages to users if certain business criteria in the service layer logic are met
  • Expose usage statistics for data access objects

OO techniques such as Proxy, Decorator, Command, and Callbacks can help a design stay DRY, but these workarounds are easily “invasive, cumbersome, and weakly typed” (SpringSource). So what pattern do we see emerging? The above examples all share the

1:1 principle

“There should be a clear, simple, 1:1 correspondence between design-level requirements and the implementation”

  • 1:n violations of a single design requirement is scattered across multiple parts of an implementation
  • n:1 implementation details for multiple requirements are tangled up inside the same module
  • n:n it’s all mixed up (typical case)

- SpringSource

So why does this matter? Consider a system, even a software system, as a changing, growing organism in space and time vs a static system. It needs to be agile, easy to maintain and scale, easy for new people to contribute to (no spaghetti code or design), etc. Enter AOP.

AOP (Aspect Oriented Programming) is a modularity technology that lets us decouple application layers and modules through several implementation options and technologies. Which technology you choose depends on constraints and fit of needs. Some, such as Spring AOP, are easier to start with yet limiting, while others, such as AspectJ, require adding another technology and language but allow for fuller decoupling – though even that is debatable. A small sketch of the idea follows.
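As a minimal sketch, assuming Spring AOP’s @Aspect style (enabled with <aop:aspectj-autoproxy/>) and an illustrative com.example.dao package: the “expose usage statistics for data access objects” requirement from the list above lives in exactly one module, and the DAOs never know.

import java.util.concurrent.atomic.AtomicLong;

import org.aspectj.lang.ProceedingJoinPoint;
import org.aspectj.lang.annotation.Around;
import org.aspectj.lang.annotation.Aspect;

@Aspect
public class DaoUsageStatistics {

    private final AtomicLong calls = new AtomicLong();

    // One pointcut captures the whole design-level requirement: every DAO operation.
    @Around("execution(* com.example.dao..*.*(..))")
    public Object count(ProceedingJoinPoint pjp) throws Throwable {
        calls.incrementAndGet();
        return pjp.proceed(); // business code proceeds untouched: 1:1, not n:n
    }

    public long callCount() { return calls.get(); }
}

This is the 1:1 correspondence in practice – one design requirement, one module – instead of a counter scattered through every DAO.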

Aspects

This next topic will be in my next blog entry..


Posted in Software Development | No Comments »

Encapsulation

Posted by Helena Edelson on 29th June 2008

“Every module in the decomposition is characterized by its knowledge of a design decision which it hides from all others… Its interface or definition was chosen to reveal as little as possible about its inner workings” – On the criteria to be used in decomposing a system into modules – Parnas (1972)


Posted in Software Development | No Comments »

 