Sunday, 9 October 2011

IT Program Management, a Retrospective Analysis

The global IT workspace today is inundated with IT organizations delivering highly complex solutions to enterprise-scale customers. Delivering such solutions generally spawns several parallel projects, so their success depends on a much stronger governance and management infrastructure.

An IT program generally comprises multiple projects running in parallel to deliver a solution, so managing a program is considerably more complex than managing a single project. The ramifications of a program failure also ripple through both the client organization and the solution provider, because it amounts to the failure of multiple projects at once.

Program Governance

An individual project governance structure generally consists of a project manager, a technical specialist capable of integrating technology with business philosophy, and a business sponsor responsible for ensuring that the deliverables are in line with the business goals.

A program governance structure, on the contrary, generally has a steering committee of senior personnel whose accountability and role are often decided by the board of directors, and a program manager entrusted by the steering committee to guide the workforce and ensure that the program achieves its desired results. A set of project managers reporting to the program manager manage the individual parallel projects that form the program.

In short, program management generally requires a three- or four-layered management structure, whereas project management usually needs only one or two layers.

Program’s success

The Role of PMO

The success of a program rests on both a technical foundation and a comprehensive administrative infrastructure. The administrative infrastructure is provided in the form of a PMO, which is responsible for resource loading, facilities management, monitoring expenses against the allocated budget, and creating KPIs on the program's progress.

The Role of the Technical Specialist/Architect

The technical infrastructure is provided by a technical specialist and his team, who oversee the creation of a robust application architecture and data centre operations. Success also depends on the selection of methodologies and best practices in the technical, administrative and business process areas.

The role of Methodologies and Best Practices

Achieving Technical Efficiency & Sustainability
On the technical front, delivering a quality software product requires focus on the application's architecture, artifact design and delivery model. Careful selection and efficient use of industry-standard methodologies and frameworks can bolster the stability of the technical infrastructure and ensure quality product delivery. Frameworks like TOGAF and Zachman provide a robust architectural foundation, whereas development methodologies like CMMI for a waterfall approach and Scrum, RUP, DSDM and Enterprise XP for an agile approach can streamline the development cycle and enable on-time delivery of quality software artifacts. System maintenance and upgrades can be streamlined by implementing service management frameworks like ITIL, which covers service desk management among other service operations.

Achieving Business Process Excellence
The business processes within the client organization also need streamlining. Industry standards and best practices like Six Sigma can provide the necessary impetus in this regard.

Achieving Governance Goals
Some restructuring of the organization's governance framework might also be required; this may well be an assimilation of bureaucratic and matrix structures.

Continuous Monitoring

The program's objectives can be met only through periodic audits and reviews that track progress against the delivery schedule, expenses incurred, payment milestones, etc.

Strict controls are required from a program management perspective to ensure that functional requirements are signed off by the customer and that an effective change control mechanism is in place to reduce scope creep.

Program management metrics, if prepared in adherence to the program's objectives, can significantly help in measuring current progress and optimizing processes to fulfil those objectives.

Building a Product roadmap

Program management should maintain a strategic focus on building a product roadmap, which increases the return on investment of the program.


Hence, as discussed above, efficient program management requires careful evaluation of the strategic goals of the program, a robust management infrastructure, a careful selection of metrics to measure progress and a vision to build a product roadmap.

Wednesday, 3 February 2010

Financial Cryptography and Information security in Financial Services

What is financial cryptography?

Financial cryptography is the use of cryptography in financial transactions. Its foundation rests on the following key parameters, which ensure successful and secure financial transactions:

• Reliability of the secure communication architecture
• Control over user access rights
• Governance of security products

Financial Cryptographic zones in Internet Banking Applications

Cryptography in financial institutions operates within cryptographic zones. For instance, in an Internet banking application, financial cryptography operates in the following zones:

- Account holder's secure login zone (login to the bank's website)
- Bank's web server-to-application server communication and authentication zone
- Bank's application server-to-business domain layer communication zone

Security Risks in internet-based financial/banking applications.

- Spoofed site.

SSL proxies can create spoofed SSL sessions and intercept sensitive data such as the user's credentials. In this scenario the browser will report that the web server's certificate is invalid, but very few internet users recognize this certificate warning as a security risk.

- Vulnerability of data exiting SSL session communication channel.

Once the data comes out of an SSL session communication channel it is in unencrypted form and can be intercepted.

Financial Cryptography in Merchant Banking/Card payment systems

In merchant banking/card payment systems, financial cryptography secures the transaction cycle from the merchant to the acquirer to the card issuer, reducing the risks incurred by the acquiring and card-issuing banks.

Financial Cryptography in ACH and global financial messaging services

Tuesday, 17 November 2009

Secure Communication Using Java Security APIs

What is secure Communication ?

Secure communication between two business entities must ensure the following :

     - Data Integrity
     - Confidentiality
     - Authentication
     - Non-repudiation

Data Integrity

When information is sent by one business entity to another, the communication framework must ensure that the data has not been tampered with or altered in any way.

This is achieved by creating a message digest, i.e. a hash based on the data, and sending it to the recipient along with the data (see the authentication section below for more details).


Confidentiality

Only the intended recipient of the information should be able to read and understand it. Confidentiality is achieved using cryptographic techniques, i.e. converting the plain text into encrypted cipher text using key-based symmetric or asymmetric encryption algorithms.

Symmetric Algorithm

A symmetric algorithm uses the same key for encryption and decryption; this key is referred to as the secret key. Some popular symmetric algorithms are DES, triple DES and IDEA.


Advantages:

  • A symmetric algorithm is faster than an asymmetric algorithm.

  • Hardware implementation is possible, which can result in very high-speed data encryption.


Disadvantages:

  • Both parties must mutually agree on a key.

  • Preserving the secrecy of the key can also pose challenges, as the same key must be known to more than one person, i.e. both the sender and the receiver. So a failure to preserve the secrecy of the key on either side results in a complete breakdown of the security infrastructure.
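A minimal round-trip sketch of the symmetric approach, using the standard JCE Cipher and KeyGenerator engine classes with triple DES ("DESede", one of the algorithms named above); the sample string is illustrative:

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricExample {

    // Encrypts and then decrypts the input with the same secret key,
    // returning the round-tripped plain text.
    public static String roundTrip(String plainText) throws Exception {
        KeyGenerator keyGen = KeyGenerator.getInstance("DESede"); // triple DES
        SecretKey secretKey = keyGen.generateKey();               // the single shared secret key

        Cipher cipher = Cipher.getInstance("DESede");
        cipher.init(Cipher.ENCRYPT_MODE, secretKey);
        byte[] cipherText = cipher.doFinal(plainText.getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, secretKey);              // same key for both directions
        return new String(cipher.doFinal(cipherText), "UTF-8");
    }
}
```

In a real exchange the SecretKey would have to be distributed to both parties, which is exactly the key-agreement problem noted above.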

Asymmetric algorithm

There are two keys involved: a public key and a private key, forming a keypair. Data encrypted with the public key of a keypair can be decrypted using the private key, and vice versa.

The sender uses the recipient's public key to encrypt the information; the recipient then uses his private key to decrypt it. Note that the recipient's private key is never shared with anyone else.
Popular asymmetric algorithms are DSA and RSA.


Advantages:

  • No bottleneck of both parties having to agree on a single key.

  • The security infrastructure depends on one keypair per party, i.e. four separate keys rather than a single shared secret, making the setup more robust.


Disadvantages:

  • Asymmetric encryption and decryption using keypairs is slow, and if large amounts of data are involved it can be time consuming and resource intensive.
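A corresponding sketch of the asymmetric approach with RSA (raw RSA is only suitable for small payloads, such as a symmetric key; the 2048-bit key size is an illustrative choice):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;

import javax.crypto.Cipher;

public class AsymmetricExample {

    // Encrypts with the recipient's public key and decrypts with the
    // matching private key, returning the round-tripped plain text.
    public static String roundTrip(String plainText) throws Exception {
        KeyPairGenerator keyPairGen = KeyPairGenerator.getInstance("RSA");
        keyPairGen.initialize(2048);
        KeyPair keyPair = keyPairGen.generateKeyPair();

        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, keyPair.getPublic());    // sender's side
        byte[] cipherText = cipher.doFinal(plainText.getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, keyPair.getPrivate());   // recipient's side
        return new String(cipher.doFinal(cipherText), "UTF-8");
    }
}
```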


Authentication

There should be some form of proof that the information received carries the stamp of approval of the intended sender. This is achieved by a digital signature from the sender.

A digital signature is an encrypted message digest i.e. an encrypted hash.

A message digest is a hash generated using a hashing algorithm such as MD5 or SHA-1. These algorithms accept input data and generate a hash based on it. MD5 produces a 128-bit hash, whereas SHA-1 produces a 160-bit hash.
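A short sketch of digest generation with the JCA MessageDigest engine class, confirming the output sizes quoted above:

```java
import java.security.MessageDigest;

public class DigestExample {

    // Returns the length in bits of the hash the named algorithm produces.
    public static int digestBits(String algorithm, String data) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        byte[] hash = md.digest(data.getBytes("UTF-8"));
        return hash.length * 8;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(digestBits("MD5", "some data"));   // 128
        System.out.println(digestBits("SHA-1", "some data")); // 160
    }
}
```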

A digital signature of the sender is created by :

  • Generating a message digest as explained above.

  • Then encrypting the message digest using the sender's private key.

How does the digital signature fulfil the authentication requirements ?

The encryption of the hash using the sender's private key provides the stamp of approval from the sender because the private key should only be known to the sender, as per the principles of the asymmetric algorithm security setup. This fulfils authentication requirement.

The hash itself fulfils the data integrity requirement.

To verify, the recipient needs to:

  • Decrypt the encrypted hash.

  • Then regenerate a hash based on the information received from sender.

  • Compare the newly generated hash with the one received as part of the digital signature. If both match then the data has reached the recipient unaltered/untampered.
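The signing and verification steps above can be sketched with the JCA Signature engine class, which performs the hash-then-encrypt and decrypt-then-compare work internally (SHA1withRSA and the key size are illustrative choices):

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureExample {

    // Signs data with the sender's private key, then verifies the signature
    // on the recipient's side using the sender's public key.
    public static boolean signAndVerify(byte[] data) throws Exception {
        KeyPairGenerator keyPairGen = KeyPairGenerator.getInstance("RSA");
        keyPairGen.initialize(2048);
        KeyPair keyPair = keyPairGen.generateKeyPair();

        // Sender: generate the digest and encrypt it with the private key.
        Signature signer = Signature.getInstance("SHA1withRSA");
        signer.initSign(keyPair.getPrivate());
        signer.update(data);
        byte[] signature = signer.sign();

        // Recipient: regenerate the digest and compare it with the decrypted one.
        Signature verifier = Signature.getInstance("SHA1withRSA");
        verifier.initVerify(keyPair.getPublic());
        verifier.update(data);
        return verifier.verify(signature);
    }
}
```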


Non-repudiation

There should also be a means to vouch for the fact that the information and digital signature have come from the original sender and not from someone else fraudulently using the sender's identity.

This can be confirmed by a digital certificate issued by a trusted third party, i.e. a Certificate Authority (CA).

How to obtain a digital certificate ?

In order to get a digital certificate a sender needs to :

  • Generate a keypair.

  • Then send the public key along with some proof of identification to a certificate authority.

  • If the CA is satisfied with the proof of identification supplied, it issues a certificate by signing the sender's public key with the CA's private key.

This certificate is commonly referred to as an X.509 certificate.

What is certificate Chaining ?

If a single certificate authority cannot provide the required trust, certificate chaining can be used, i.e. one CA vouching for another.

-: Java Security APIs for secure communication :-

There are four main APIs for security in Java:

     - Java Cryptography Architecture (JCA)
     - Java Cryptography Extensions (JCE)
     - Java Secure Socket Extensions (JSSE)
     - Java Authentication and Authorization Services (JAAS)

Java Cryptography Architecture (JCA)

Java Cryptography Architecture (JCA) encapsulates the overall architecture of Java's cryptographic concepts and algorithms. JCA spans both the java.security and javax.crypto packages.

Some of the engine classes used by JCA to provide cryptographic concepts are as follows:

     - MessageDigest
     - Signature
     - KeyFactory
     - KeyPairGenerator
     - Cipher

Java Cryptography Extensions (JCE)

Java Cryptography Extensions (JCE) provides software implementations that enable developers to encrypt data, create message digests and perform key management activities.

The JCE APIs cover the following implementations:

     - Symmetric bulk encryption, such as DES, RC2, and IDEA
     - Asymmetric encryption, such as RSA
     - Password-based encryption (PBE)
     - Key generation and key agreement
     - Message Authentication Codes (MAC)

Java Secure Socket Extensions (JSSE)

Java Secure Socket Extensions (JSSE) provides application developers with a framework and an implementation of the SSL and TLS transport layer security protocols. This enables secure data transmission between an application client and server, for example over HTTPS.

Java Authentication and Authorization Services (JAAS)

Java Authentication and Authorization Service (JAAS) enables developers to set up client restrictions and access control over an application's functionality.

This is generally provided by the policies and permissions set up and controlled by the Java sandbox and the JVM.

The JAAS-related classes and interfaces are as follows:

      -: Common classes :-

     - Subject
     - Principal
     - Credential

      -: Authentication classes and interfaces :-

     - LoginContext
     - LoginModule
     - CallbackHandler
     - Callback

      -: Authorization classes :-

     - Policy
     - AuthPermission
     - PrivateCredentialPermission

All of these belong to either the java.security or javax.security.auth packages.

Tuesday, 8 September 2009

J2EE Application Performance Tuning Part 2

-: Caching objects in Hibernate to improve performance in J2EE Applications :-

What is caching?

The general concept of caching is that when an object is first read from external storage, a copy of it is kept in an area referred to as the cache. For subsequent reads the object can be retrieved directly from the cache, which is faster than retrieving it from external storage.

Levels of caching in Hibernate

As a high performance O/R mapping framework, Hibernate supports the caching of persistent objects at different levels.

-<< First Level caching >>-

In Hibernate, objects are by default cached with session scope. This kind of caching is called "first level caching".

-<< Second Level Caching >>-

First level caching doesn't help when the same object needs to be read across different sessions. To enable this, one needs to turn on "second level caching" in Hibernate, i.e. set up object caches that are accessible across multiple sessions.
Second level caching can be applied to classes, to collections and associations, and to database query results.
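As a sketch of how second level caching is switched on (Hibernate 3-era property and mapping-element names, with Ehcache as the provider; the Employee class and table are illustrative):

```xml
<!-- hibernate.cfg.xml: enable the second-level cache and the query cache -->
<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.EhCacheProvider</property>
<property name="hibernate.cache.use_query_cache">true</property>
```

```xml
<!-- mapping file: mark a class (or a collection) as cacheable -->
<class name="Employee" table="EMPLOYEE">
    <cache usage="read-write"/>
    <!-- id and property mappings omitted -->
</class>
```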

Caching frameworks for non-distributed and distributed J2EE application environments.

Hibernate supports many caching frameworks, such as Ehcache, OSCache, SwarmCache, JBoss Cache and Terracotta.
  • In a non-distributed environment Ehcache is a good choice; it is also the default cache provider for Hibernate.

  • In a distributed environment a good choice would be Terracotta, a powerful open source framework that supports distributed caching and provides network-attached memory.

-: Identifying and dealing with memory leaks :-

Memory leaks can occur due to:

    - Logical flaws in the code.
    - System's architecture setup.
    - Application server's incompatibility with third party products.

In a large enterprise-scale application it is not always easy to identify memory leaks, so under certain circumstances one will need to run the application inside a memory profiler. JProfiler is one popular choice.

Some memory leak scenarios caused by erroneous code are as follows:

  • ResultSet and Statement objects created from pooled connections: when the connection is closed it merely returns to the connection pool but does not close the ResultSet or Statement objects.

  • Collection elements not removed after use in the application.

  • Incorrect scoping of variables, i.e. if a variable is needed only within a method but is declared as a member variable of a class, its lifetime is unnecessarily extended to that of the class, which holds up memory for a longer period.

A simple memory leak example follows.

Example 1. Memory leak caused by
    ** collection elements not removed & incorrect scoping of variables **.

Running the following code eventually throws:

java.lang.OutOfMemoryError: Java heap space

This is because memoryLeakingMethod(Map emps) is invoked with a class variable as its method parameter, so the memory used by the map cannot be reclaimed by the garbage collector between method executions unless it is nullified or its elements are removed. Repeated calls to the method therefore fill up the Java heap. (The original fragments have been tidied into compilable form; Employee is a plain placeholder class.)

import java.util.HashMap;
import java.util.Map;

public class MemoryLeakClass {

    // Class-scoped map: every entry added below stays strongly
    // referenced for the lifetime of the MemoryLeakClass instance.
    private HashMap<Integer, Employee> emps = new HashMap<Integer, Employee>();
    private int run;

    public static void main(String[] args) {
        MemoryLeakClass m = new MemoryLeakClass();
        while (true) {
            m.memoryLeakingMethod(m.emps);
            System.gc(); // trying to reclaim the memory used by m.emps,
                         // but not possible because m.emps is a class
                         // variable with instance scope and maintains
                         // strong references to its entries.
        }
        // Eventually: java.lang.OutOfMemoryError: Java heap space
    }

    public void memoryLeakingMethod(Map<Integer, Employee> emps) {
        System.out.println("*** Memory leaking method *** Run: " + run++);
        for (int i = 0; i < 100000; i++) {
            // unique keys on every run, so the map only ever grows
            emps.put(Integer.valueOf(run * 100000 + i), new Employee());
        }
    }
}

class Employee { } // placeholder for a populated Employees object

*** << If the variable scoping cannot be changed, then using a WeakHashMap instead of a HashMap can solve this problem. This is because weakly referenced entries are freed aggressively by the garbage collector, so the System.gc() call in the main method above will reclaim the memory between method executions. >> ***

The following change to the above code will prevent the OutOfMemoryError:

Change: private HashMap<Integer, Employee> emps = new HashMap<Integer, Employee>();
to:     private WeakHashMap<Integer, Employee> emps = new WeakHashMap<Integer, Employee>();

(remember to import java.util.WeakHashMap)

-: WeakHashMap vs HashMap :-

A WeakHashMap is identical to a HashMap in terms of its functionality, except that its entries do not maintain strong references to their keys, so the garbage collector may remove a key from the WeakHashMap and subsequently collect the associated object. In other words, the WeakHashMap behaves like a weakly referenced collection.
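A short demonstration of that behaviour (a sketch: garbage collection timing is JVM-dependent, so the method polls briefly after dropping the key's last strong reference):

```java
import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapDemo {

    // Puts one entry into a WeakHashMap, drops the only strong reference
    // to its key, and reports whether the entry disappeared after GC.
    public static boolean entryCleared() throws InterruptedException {
        Map<Object, String> map = new WeakHashMap<Object, String>();
        Object key = new Object();
        map.put(key, "value");

        key = null;                      // no strong reference to the key remains
        for (int i = 0; i < 50 && !map.isEmpty(); i++) {
            System.gc();                 // request collection; the weak key may now go
            Thread.sleep(10);
        }
        return map.isEmpty();            // true once the entry has been removed
    }
}
```

With a plain HashMap the same sequence would leave the entry in place indefinitely.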

-: Serializable vs Externalizable :-

Serialization can be a slow process if you have a large object graph and the classes in the graph contain a large number of variables. The Serializable interface by default serializes the state of all the classes forming the object graph.

Sometimes it is not necessary to serialize the state of every class/superclass in the object graph. Normally this is handled by declaring the unwanted class variables as transient. But what if the decision needs to be made at runtime? The solution is to replace the Serializable implementation with Externalizable.

The Externalizable interface gives the implementing class full control over which state to keep and which to discard, and this can be decided conditionally at run time in its two methods, readExternal and writeExternal. This complete control over marshalling and unmarshalling can yield improved application performance.

*** A note of caution though: the methods readExternal and writeExternal are public, so one has to consider the security implications of exposing them.
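A sketch of the idea (the Customer class, its fields and the includeToken flag are illustrative inventions, not from the original post):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectInputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;

public class Customer implements Externalizable {

    private String name;
    private String sessionToken;   // expensive/sensitive state we may skip
    private boolean includeToken;  // the run-time decision flag

    public Customer() {}           // public no-arg constructor is mandatory

    public Customer(String name, String sessionToken, boolean includeToken) {
        this.name = name;
        this.sessionToken = sessionToken;
        this.includeToken = includeToken;
    }

    public String getName() { return name; }
    public String getSessionToken() { return sessionToken; }

    @Override
    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(name);
        out.writeBoolean(includeToken);
        if (includeToken) {        // decided at run time: marshal the token or not
            out.writeUTF(sessionToken);
        }
    }

    @Override
    public void readExternal(ObjectInput in) throws IOException {
        name = in.readUTF();
        includeToken = in.readBoolean();
        sessionToken = includeToken ? in.readUTF() : null;
    }

    // Serializes and deserializes in memory, returning the reconstructed copy.
    public static Customer roundTrip(Customer c) throws IOException, ClassNotFoundException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(c);
        out.flush();
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        return (Customer) in.readObject();
    }
}
```

Depending on includeToken, the serialized form either carries the token or omits it entirely.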

Monday, 10 August 2009

J2EE Applications Performance Tuning Part 1

When it comes to enterprise-scale Java applications, there may be severe performance degradation due to software architecture design flaws and the application infrastructure setup.

The performance degradation can occur due to faulty code causing:

        - Memory Leaks
        - Inefficient thread pooling and connection pooling.
        - Leaky Sessions
        - Absence of optimised caching mechanism
        - Improper use of synchronization and of
          Collections framework implementation classes in code.

Memory leaks

Memory leaks occur when faulty application code maintains lingering references to unused objects even after processing completes, preventing the garbage collector from reclaiming the memory they occupy. More and more leaked objects then fill up the heap (especially the tenured space), causing severe performance degradation, and may finally result in an OutOfMemoryError if the JVM cannot allocate memory for a new object instance in the heap.

-:Possible Solutions:-

1. Inspect the growth pattern of the Heap to identify trends.

2. Start the application inside a memory profiler. Execute a request and take a snapshot of the heap, then re-execute the request and take another snapshot. Compare the two snapshots and try to identify live objects that belong to the first request and should not appear in the results of the second execution. You may need to re-run the request a number of times before you can identify the leaked objects.

3. A temporary workaround may be an application server restart. However, solutions 1 and 2 above can be used to identify the objects causing memory leaks so the application can be refactored to resolve the issue permanently.

Inefficient thread pooling and database connection pooling configurations

Improper sizing of the thread execution pool in an application server may result in severe performance degradation, because the thread pool size determines the number of simultaneous requests the application server can process. If the pool is too small, a large number of requests will wait in the queue to be picked up; if it is too large, considerable time is wasted in context switching between threads.

Improper sizing of database connection pooling, e.g. JDBC connection pooling, can also result in severe performance degradation. If the pool is too small, many requests will have to wait for a connection to become available. Alternatively, if the connection pool is too large, application server resources are wasted maintaining a large number of connections and the database is placed under excessive load, resulting in poor database performance.

-:Possible Solutions:-

1. Analyze CPU usage vs thread pool usage percentage.
  • If CPU usage is low but thread pool usage is high, the thread pool is too small and the application is not utilizing the available system resources; the pool size should be increased proportionately.

  • If CPU usage is high but thread pool usage is low, the thread pool is too large and many resources are being spent on context switching between threads; the pool size should be reduced proportionately.

2. Analyze CPU usage vs JDBC connection pool usage percentage.
  • Low CPU usage but high JDBC connection pool utilization indicates the connection pool is too small, leaving database and CPU resources underutilized.

  • High CPU usage but low JDBC connection pool utilization indicates the connection pool is too large and should be reduced in size.
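The tuning knobs described above can be sketched with a standalone java.util.concurrent pool, which exposes the same parameters an application server's pool configuration does (all sizes here are illustrative starting points, not recommendations):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public class PoolConfig {

    // Builds a bounded worker pool; core/max sizes are the values you would
    // adjust after comparing CPU usage with pool usage as described above.
    public static ThreadPoolExecutor newWorkerPool(int coreSize, int maxSize, int queueCapacity) {
        return new ThreadPoolExecutor(
                coreSize,                // threads kept alive even when idle
                maxSize,                 // upper bound once the queue fills up
                60L, TimeUnit.SECONDS,   // idle timeout for the extra threads
                new LinkedBlockingQueue<Runnable>(queueCapacity)); // bounded request queue
    }
}
```

A bounded queue makes the "requests waiting to be picked up" symptom visible and measurable instead of letting the backlog grow without limit.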

Leaky Sessions

A leaky session does not actually leak anything; rather, it consumes memory through session-scoped objects, and that memory is reclaimed only when the session times out. Such sessions can degrade an application's performance through high memory consumption and, if created in large numbers, can even cause an OutOfMemoryError and an application server crash.

-:Possible Solutions:-

1. Increase the heap size to accommodate the sessions causing memory problems.
2. Encourage application users to log off when not using the application, in order to reduce the number of active sessions.
3. Decrease the session timeout interval, if possible, so that sessions expire within a shorter window; this reduces the number of active sessions at any given time and hence memory usage.
4. Refactor the application, if possible, to reduce the information held in session-scoped variables.

Absence of optimised caching mechanism

The absence of an optimised caching mechanism can also result in poor application performance. If an enterprise-scale application lacks an in-memory distributed caching mechanism, its scalability will be severely affected, and over time the increasing transactional load on the system will result in deteriorating performance. Cache clusters with an in-memory distributed caching mechanism can prevent this.

Frameworks like Ehcache and Terracotta provide distributed caching.

They:
  • Support both in-memory and on-disk cache storage.

  • Provide APIs for caching Hibernate, JMS and SOAP/REST web service objects.

  • Enable efficient cache handling using cache managers, cache listeners, cache loaders, cache exception handlers, etc.

Improper use of Synchronization & Collections framework implementation classes in code.

A J2EE application's performance can be severely affected due to improper use of synchronization and inefficient use of the implementation classes of java collections framework.

  • Large synchronized blocks in code can slow down application performance due to lengthy locking periods.

  • Try to avoid using Vector and Hashtable wherever possible and replace them with ArrayList and HashMap; the former synchronize every operation, whether needed or not.
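A sketch of both points together: the lock is held only for the shared-state update, and the unsynchronized ArrayList is safe because every access to it is guarded (the class and method names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class NarrowLocking {

    private final Object lock = new Object();
    private final List<String> audit = new ArrayList<String>(); // unsynchronized; guarded by 'lock'

    public String handle(String request) {
        String result = transform(request);   // expensive work kept OUTSIDE the lock
        synchronized (lock) {                 // lock held only for the shared-state update
            audit.add(result);
        }
        return result;
    }

    public int auditSize() {
        synchronized (lock) {
            return audit.size();
        }
    }

    private String transform(String request) {
        return request.toUpperCase();         // stand-in for real request processing
    }
}
```

Compare this with wrapping the whole of handle() in a synchronized block, which would serialize the expensive transform() call across all threads.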

Wednesday, 1 July 2009

Enterprise Messaging Architecture Design using JMS

When it comes to choosing a messaging solution, one must ensure that the messaging architecture:

  - Is robust
  - Is scalable
  - Supports both point-to-point and publish-subscribe models
  - Efficiently handles a high volume of asynchronous requests
  - Allows seamless integration with an SOA framework

An enterprise messaging architecture that caters for the above can be designed using the following core J2EE design patterns:

  - Message Broker
  - Service Activator
  - Service To Worker
  - Web Service endpoint Proxy

Sample code below uses the Message Broker, Service Activator and Service to Worker J2EE core design patterns:

-- JMSMessageBroker interface

import java.io.Serializable;

import javax.jms.JMSException;
import javax.naming.NamingException;

public interface JMSMessageBroker {

    void sendTextMessageToQueue(String msg) throws NamingException, JMSException;
    void sendObjectMessageToQueue(Serializable msg) throws JMSException, NamingException;
    void receiveFromQueue();
}

--JMSMessageBrokerImpl class

import java.io.Serializable;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.NamingException;

public class JMSMessageBrokerImpl implements JMSMessageBroker {

    private QueueConnectionFactory connectionFactory;
    private JMSServiceLocator jmsServiceLocator; // looks up JNDI-bound JMS resources (not shown)
    private Queue queue;
    private QueueConnection queueConnection;
    private QueueSession queueSession;
    private MessageProducer messageProducer;
    private MessageConsumer messageConsumer;
    private String text;
    private Object obj;

    public JMSMessageBrokerImpl(JMSServiceLocator jmsServiceLocator) {
        this.jmsServiceLocator = jmsServiceLocator;
    }

    public void receiveFromQueue() {
        try {
            connectionFactory = (QueueConnectionFactory) jmsServiceLocator.getQueueConnectionFactory();
            queueConnection = connectionFactory.createQueueConnection();
            queue = jmsServiceLocator.getQueue();
            queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
            messageConsumer = queueSession.createConsumer(queue);
            queueConnection.start(); // delivery does not begin until the connection is started
            Message mesg = messageConsumer.receive();

            if (mesg instanceof TextMessage) {
                text = ((TextMessage) mesg).getText();
            } else if (mesg instanceof ObjectMessage) {
                obj = ((ObjectMessage) mesg).getObject();
            }
        } catch (NamingException e) {
            e.printStackTrace();
        } catch (JMSException e1) {
            e1.printStackTrace();
        }
    }

    public void sendObjectMessageToQueue(Serializable msg) throws JMSException, NamingException {
        connectionFactory = (QueueConnectionFactory) jmsServiceLocator.getQueueConnectionFactory();
        queueConnection = connectionFactory.createQueueConnection();
        queue = jmsServiceLocator.getQueue();
        queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        messageProducer = queueSession.createProducer(queue);

        ObjectMessage objectMessage = queueSession.createObjectMessage(msg);
        messageProducer.send(objectMessage);
    }

    public void sendTextMessageToQueue(String msg) throws NamingException, JMSException {
        connectionFactory = (QueueConnectionFactory) jmsServiceLocator.getQueueConnectionFactory();
        queueConnection = connectionFactory.createQueueConnection();
        queue = jmsServiceLocator.getQueue();
        queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        messageProducer = queueSession.createProducer(queue);

        TextMessage textMessage = queueSession.createTextMessage(msg);
        messageProducer.send(textMessage);
    }
}

--JMSTaskManager interface

public interface JMSTaskManager {

    void processRequest() throws InterruptedException;
}

--JMSTaskManagerImpl class

import java.io.Serializable;

public class JMSTaskManagerImpl implements JMSTaskManager, Serializable {

    private JMSCommandProcessorImpl jmsCommandProcessor;
    private Object businessService;
    private String action;
    private Object[] arguments;

    public JMSTaskManagerImpl(Object businessService, String action, Object[] arguments) {
        this.businessService = businessService;
        this.action = action;
        this.arguments = arguments;
    }

    public void processRequest() throws InterruptedException {
        jmsCommandProcessor = new JMSCommandProcessorImpl();
        jmsCommandProcessor.processRequest(businessService, action, arguments);
    }
}

--JMSCommandProcessor interface

public interface JMSCommandProcessor {

    void processRequest(Object businessService, String action, Object[] arguments) throws InterruptedException;
}

--JMSCommandProcessorImpl class

import java.io.Serializable;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class JMSCommandProcessorImpl implements JMSCommandProcessor, Serializable {

    private Command command;

    public void processRequest(Object businessService, String action, Object[] arguments) throws InterruptedException {
        command = new Command(businessService, action, arguments);
        Thread task = new Thread(command);
        task.start(); // hand the request off to a worker thread
    }
}

class Command implements Runnable {

    private Object businessService;
    private String action;
    private Object[] arguments;

    Command(Object businessService, String action, Object[] arguments) {
        this.businessService = businessService;
        this.action = action;
        this.arguments = arguments;
    }

    public void run() {
        try {
            Class cls = businessService.getClass();
            Object service = cls.newInstance();

            // invoke only the method matching the requested action
            for (Method method : cls.getMethods()) {
                if (method.getName().equals(action)) {
                    method.invoke(service, arguments);
                    break;
                }
            }
        } catch (InstantiationException e) {
            e.printStackTrace();
        } catch (IllegalAccessException e1) {
            e1.printStackTrace();
        } catch (InvocationTargetException e3) {
            e3.printStackTrace();
        }
    }
}


--JMSMessageListener class

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;

public class JMSMessageListener implements MessageListener {

    private JMSTaskManager jmsTaskManager;

    public void onMessage(Message msg) {
        if (msg instanceof ObjectMessage) {
            try {
                jmsTaskManager = (JMSTaskManager) ((ObjectMessage) msg).getObject();
                jmsTaskManager.processRequest(); // service activator: process asynchronously
            } catch (JMSException e) {
                e.printStackTrace();
            } catch (InterruptedException e1) {
                e1.printStackTrace();
            }
        }
    }
}

Thursday, 21 May 2009

Continuous Integration best practices

Continuous integration is a practice which, if incorporated into the software development life cycle, results in an increased ability to spot errors as soon as they are introduced into the system.

This development practice greatly reduces regression bugs in the system and is an inherent part of agile software development methodologies like XP and SCRUM.

Continuous Integration best practices are as follows:

1. When a developer commits code to a version control system such as CVS, a new build should start automatically.
2. If the build is successful, automated tests should run without any manual intervention.
3. If the tests pass, the integration cycle ends; otherwise, check out the code that broke the build and fix it.

Continuous integration can be implemented using the following products:

- CruiseControl
- Hudson
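By way of illustration, steps 1-3 map onto a CruiseControl project definition roughly as follows (a sketch of CruiseControl's config.xml format; the project name, paths and polling intervals are illustrative assumptions):

```xml
<!-- config.xml: poll CVS, rebuild on change, run the automated tests -->
<cruisecontrol>
  <project name="myproject">
    <!-- step 1: detect commits in version control -->
    <modificationset quietperiod="30">
      <cvs localworkingcopy="checkout/myproject"/>
    </modificationset>
    <!-- steps 2-3: trigger the build and the automated tests -->
    <schedule interval="60">
      <ant buildfile="build.xml" target="test"/>
    </schedule>
  </project>
</cruisecontrol>
```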