Tuesday, 17 November 2009

Secure Communication Using Java Security APIs

What is secure communication?

Secure communication between two business entities must ensure the following:

     - Data Integrity
     - Confidentiality
     - Authentication
     - Non-repudiation

Data Integrity

When information is sent by one business entity to another, the communication framework must ensure that the data has not been tampered with or altered in any way.

This is achieved by creating a message digest, i.e. a hash based on the data, and sending it to the recipient along with the data (see the authentication section below for more details).


Confidentiality

Only the intended recipient of the information should be able to read and understand it. Confidentiality is achieved using cryptography, i.e. converting the plain text into encrypted cipher text using key-based symmetric or asymmetric encryption algorithms.

Symmetric Algorithm

A symmetric algorithm uses the same key for both encryption and decryption; this key is referred to as the secret key. Some popular symmetric algorithms are DES, Triple DES and IDEA.


Advantages

  • Symmetric algorithms are faster than asymmetric algorithms.

  • Hardware implementation is possible, which can result in very high-speed data encryption.


Disadvantages

  • Both parties must mutually agree on a key, and the key must be distributed securely.

  • Preserving the secrecy of the key can also pose challenges, because the same key must be known to more than one person, i.e. both the sender and the receiver. A failure to preserve the secrecy of the key on either side results in a complete breakdown of the security infrastructure.
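As a minimal sketch of symmetric encryption with the Java APIs covered later in this post: the example below uses AES rather than the ciphers named above (an assumption on my part; DES and Triple DES work the same way through the Cipher API, AES is simply the modern algorithm bundled with every JDK).

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class SymmetricDemo {

    // Encrypts and then decrypts the text with one shared secret key,
    // returning the decrypted result.
    static String roundTrip(String plainText) throws Exception {
        // Both sender and receiver must hold this same secret key.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(128);
        SecretKey secretKey = keyGen.generateKey();

        Cipher cipher = Cipher.getInstance("AES");
        cipher.init(Cipher.ENCRYPT_MODE, secretKey);              // sender side
        byte[] cipherText = cipher.doFinal(plainText.getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, secretKey);              // receiver side
        return new String(cipher.doFinal(cipherText), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("Confidential payload")); // prints "Confidential payload"
    }
}
```

Note that the whole scheme rests on both sides holding the one `secretKey`, which is exactly the key-distribution problem listed above.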

Asymmetric algorithm

Two keys are involved here: a public key and a private key, together forming a keypair. Data encrypted with the public key of a keypair can be decrypted using the corresponding private key, and vice versa.

The recipient's public key is known to the sender and is used to encrypt the information. The recipient then uses his private key to decrypt the message. Note that the recipient's private key is never shared with anyone else.
Popular asymmetric algorithms are DSA and RSA.


Advantages

  • No bottleneck of both parties having to agree on, and secretly share, a single key.

  • The security infrastructure is based on two keypairs, i.e. four separate keys rather than one shared secret key, making the setup more robust.


Disadvantages

  • Asymmetric encryption and decryption using keypairs is slow; if large amounts of data are involved it can be time-consuming and require a lot of system resources.
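The same round trip, sketched with RSA (one of the asymmetric algorithms named above) via the JCA engine classes; the sender encrypts with the recipient's public key and only the recipient's private key can decrypt:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import javax.crypto.Cipher;

public class AsymmetricDemo {

    // Encrypts with the recipient's public key and decrypts with the matching
    // private key, returning the decrypted text.
    static String roundTrip(String plainText) throws Exception {
        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
        keyGen.initialize(2048);
        KeyPair recipientKeys = keyGen.generateKeyPair();   // recipient's keypair

        Cipher cipher = Cipher.getInstance("RSA");
        cipher.init(Cipher.ENCRYPT_MODE, recipientKeys.getPublic());  // sender side
        byte[] cipherText = cipher.doFinal(plainText.getBytes("UTF-8"));

        cipher.init(Cipher.DECRYPT_MODE, recipientKeys.getPrivate()); // recipient side
        return new String(cipher.doFinal(cipherText), "UTF-8");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("secret order details")); // prints "secret order details"
    }
}
```

In practice the slowness mentioned above is why asymmetric encryption is typically used only to exchange a symmetric secret key, which then encrypts the bulk data.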


Authentication

There should be some form of proof that the information received carries the stamp of approval of the intended sender. This is achieved by a digital signature from the sender.

A digital signature is an encrypted message digest i.e. an encrypted hash.

A message digest is a hash generated using a hashing algorithm such as MD5 or SHA-1. These algorithms accept input data and generate a hash based on that data. MD5 produces a 128-bit hash, whereas SHA-1 produces a 160-bit hash.
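The digest sizes above can be confirmed directly with the MessageDigest engine class (one of the JCA classes listed later in this post):

```java
import java.security.MessageDigest;

public class DigestDemo {

    // Returns the length in bytes of the hash the given algorithm produces.
    static int digestLength(String algorithm, String data) throws Exception {
        MessageDigest md = MessageDigest.getInstance(algorithm);
        return md.digest(data.getBytes("UTF-8")).length;
    }

    public static void main(String[] args) throws Exception {
        System.out.println("MD5:   " + digestLength("MD5", "some business document") * 8 + " bits");
        System.out.println("SHA-1: " + digestLength("SHA-1", "some business document") * 8 + " bits");
        // prints:
        // MD5:   128 bits
        // SHA-1: 160 bits
    }
}
```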

A digital signature of the sender is created by:

  • Generating a message digest as explained above.

  • Then encrypting the message digest using the sender's private key.

How does the digital signature fulfil the authentication requirement?

Encrypting the hash with the sender's private key provides the sender's stamp of approval, because the private key should be known only to the sender, as per the principles of the asymmetric security setup. This fulfils the authentication requirement.

The hash itself fulfils the data integrity requirement.

To verify the signature, the recipient needs to:

  • Decrypt the encrypted hash.

  • Then regenerate a hash based on the information received from sender.

  • Compare the newly generated hash with the one received as part of the digital signature. If both match, then the data has reached the recipient unaltered and untampered.
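The Signature engine class performs both sides of this protocol (hash, encrypt the hash, decrypt and compare) in one API. A sketch, using SHA-1 with RSA:

```java
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.Signature;

public class SignatureDemo {

    // Signs the data with the sender's private key, optionally "tampers" with
    // it in transit, then verifies it with the sender's public key.
    static boolean signAndVerify(String data, boolean tamper) throws Exception {
        KeyPairGenerator keyGen = KeyPairGenerator.getInstance("RSA");
        keyGen.initialize(2048);
        KeyPair senderKeys = keyGen.generateKeyPair();

        // Sender: hash the data and encrypt the hash with the private key.
        Signature signer = Signature.getInstance("SHA1withRSA");
        signer.initSign(senderKeys.getPrivate());
        signer.update(data.getBytes("UTF-8"));
        byte[] digitalSignature = signer.sign();

        if (tamper) {
            data = data + " (altered)"; // simulate modification in transit
        }

        // Recipient: regenerate the hash and compare it with the decrypted one.
        Signature verifier = Signature.getInstance("SHA1withRSA");
        verifier.initVerify(senderKeys.getPublic());
        verifier.update(data.getBytes("UTF-8"));
        return verifier.verify(digitalSignature);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(signAndVerify("pay 100 units", false)); // prints "true"
        System.out.println(signAndVerify("pay 100 units", true));  // prints "false"
    }
}
```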


Non-repudiation

There should also be a means to vouch for the fact that the information and digital signature have come from the original sender and not from someone fraudulently using the sender's identity.

This can be confirmed by a digital certificate issued by a trusted third party, i.e. a Certificate Authority (CA).

How to obtain a digital certificate?

In order to get a digital certificate, a sender needs to:

  • Generate a keypair.

  • Then send the public key along with some proof of identification to a certificate authority.

  • If the CA is satisfied with the proof of identification supplied, it issues a certificate by signing the sender's public key with the CA's own private key.

This certificate is often referred to as an X.509 certificate.

What is certificate chaining?

If just one certificate authority cannot provide the required trust, one can use certificate chaining, i.e. one CA vouching for another.

-: Java Security APIs for secure communication :-

There are four main APIs for security in Java:

     - Java Cryptography Architecture (JCA)
     - Java Cryptography Extensions (JCE)
     - Java Secure Socket Extensions (JSSE)
     - Java Authentication and Authorization Services (JAAS)

Java Cryptography Architecture (JCA)

Java Cryptography Architecture (JCA) encapsulates the overall architecture of Java's cryptography concepts and algorithms. JCA includes both the java.security and javax.crypto packages.

Some of the engine classes used by JCA to provide cryptographic concepts are as follows:

     - MessageDigest
     - Signature
     - KeyFactory
     - KeyPairGenerator
     - Cipher

Java Cryptography Extensions (JCE)

Java Cryptography Extensions (JCE) provides software implementations that enable developers to encrypt data, create message digests and perform key management activities.

The JCE APIs cover the following implementations:

     - Symmetric bulk encryption, such as DES, RC2, and IDEA
     - Asymmetric encryption, such as RSA
     - Password-based encryption (PBE)
     - Key generation and key agreement
     - Message Authentication Codes (MAC)
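As a sketch of the last item in that list, the Mac engine class computes a keyed hash: the same shared key over the same data always yields the same tag, so a mismatch reveals tampering (HMAC-SHA1 is used here as one algorithm the default providers ship with).

```java
import javax.crypto.KeyGenerator;
import javax.crypto.Mac;
import javax.crypto.SecretKey;
import java.util.Arrays;

public class MacDemo {

    // Computes HMAC-SHA1 tags over two payloads with one shared key and
    // reports whether the tags match.
    static boolean tagsMatch(String sent, String received) throws Exception {
        SecretKey key = KeyGenerator.getInstance("HmacSHA1").generateKey();

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(key);
        byte[] senderTag = mac.doFinal(sent.getBytes("UTF-8"));

        // The receiver computes its own tag with the same shared key.
        byte[] receiverTag = mac.doFinal(received.getBytes("UTF-8"));
        return Arrays.equals(senderTag, receiverTag);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(tagsMatch("order #42", "order #42")); // prints "true"
        System.out.println(tagsMatch("order #42", "order #43")); // prints "false"
    }
}
```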

Java Secure Socket Extensions (JSSE)

Java Secure Socket Extensions (JSSE) provides application developers with a framework and an implementation for the SSL and TLS transport layer security protocols. This enables secure data transmission between an application client and server, e.g. over HTTP or FTP.

Java Authentication and Authorization Services (JAAS)

Java Authentication and Authorization Service enables developers to set up client restrictions and access control over an application's functionality.

This is generally enforced through the policies and permissions set up and controlled by the Java sandbox and the JVM.

The JAAS-related classes and interfaces are as follows:

      -: Common classes :-

     - Subject
     - Principal
     - Credential

      -: Authentication classes and interfaces :-

     - LoginContext
     - LoginModule
     - CallbackHandler
     - Callback

      -: Authorization classes :-

     - Policy
     - AuthPermission
     - PrivateCredentialPermission

All of them belong to either the java.security or the javax.security.auth packages.

Tuesday, 8 September 2009

J2EE Application Performance Tuning Part 2

-: Caching objects in Hibernate to improve performance in J2EE Applications :-

What is caching?

The general concept of caching is that when an object is first read from external storage, a copy of it is kept in an area referred to as the cache. For subsequent reads the object can be retrieved from the cache directly, which is faster than retrieving it from external storage.

Levels of caching in Hibernate

As a high performance O/R mapping framework, Hibernate supports the caching of persistent objects at different levels.

-<< First Level caching >>-

In Hibernate, objects are cached with session scope by default. This kind of caching is called "first level caching".

-<< Second Level Caching >>-

First level caching doesn't help when the same object needs to be read across different sessions. To enable this, one needs to turn on "second level caching" in Hibernate, i.e. set up object caches that are accessible across multiple sessions.
Second level caching can be applied to classes, to collections and associations, and to database query results.
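As a sketch of what turning this on looks like in a Hibernate 3-era configuration with the EHCache provider (the property names are from that combination; the `Employee` entity and column names are purely illustrative):

```xml
<!-- hibernate.cfg.xml (fragment) -->
<property name="hibernate.cache.use_second_level_cache">true</property>
<property name="hibernate.cache.provider_class">net.sf.ehcache.hibernate.EhCacheProvider</property>
<property name="hibernate.cache.use_query_cache">true</property>

<!-- In the mapping file, mark an entity as cacheable: -->
<class name="Employee" table="EMPLOYEE">
    <cache usage="read-write"/>
    ...
</class>
```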

Caching frameworks for non-distributed and distributed J2EE application environments.

Hibernate supports many caching frameworks, such as EHCache, OSCache, SwarmCache, JBossCache and Terracotta.
  • In a non-distributed environment the EHCache framework is a good choice; it is also the default cache provider for Hibernate.

  • In a distributed environment a good choice would be Terracotta, a powerful open source framework that supports distributed caching and provides network-attached memory.

-: Identifying and dealing with memory leaks :-

Memory leaks can occur due to:

    - Logical flaws in the code.
    - System's architecture setup.
    - Application server's incompatibility with third party products.

In a large enterprise-scale application it is not always easy to identify memory leaks, so under certain circumstances one will need to run the application inside a memory profiler. "JProfiler" is one that is quite popular.

Some memory leak scenarios caused by erroneous code are as follows:

  • ResultSet and Statement objects created using pooled connections. When the connection is closed it just returns to the connection pool but doesn't close the ResultSet or Statement objects.

  • Collection elements not removed after use in the application.

  • Incorrect scoping of variables, i.e. if a variable is needed only in a method but is declared as a member variable of a class, then its lifetime is unnecessarily extended to that of the class, which will hold up memory for a longer time period.

Some simple memory leak examples follow:

Example 1. Memory leak caused by
    ** collection elements not removed & incorrect scoping of variables **.

// The following code eventually throws:
//
//   java.lang.OutOfMemoryError: Java heap space
//
// This is because MemoryLeakingMethod(HashMap emps) is invoked with a class
// variable as its parameter, so the memory held through that variable cannot
// be reclaimed by the garbage collector between method executions unless it
// is nullified or its elements are removed. Multiple calls to the method
// with different class variables will fill up the Java heap space.

import java.util.HashMap;

public class MemoryLeakClass {

    private HashMap emp01 = new HashMap(), emp02 = new HashMap(), emp03 = new HashMap(); // ...more maps
    private int run;

    public static void main(String[] args) {

        MemoryLeakClass m = new MemoryLeakClass();

        m.MemoryLeakingMethod(m.emp01);

        System.gc();   // Trying to reclaim the memory used by m.emp01, but not
                       // possible because m.emp01 is a class variable with
                       // instance scope and maintains a strong reference.
                       // However, the memory would be reclaimed if a
                       // WeakHashMap were used instead of a HashMap.

        m.MemoryLeakingMethod(m.emp02);
        m.MemoryLeakingMethod(m.emp03);
        // ...after multiple executions:
        // java.lang.OutOfMemoryError: Java heap space
    }


       -: Method: MemoryLeakingMethod(HashMap emps) :-

public void MemoryLeakingMethod(HashMap emps) {

    // The HashMap 'emps' passed to this method is a class variable.

    System.out.println("*** Memory leaking method ***" + " Run: " + run++);

    for (int i = 0; i < 100000; i++) {
        Employees emp = new Employees();   // 'Employees' is the domain object
                                           // being cached (class not shown)

        // ...populate the 'Employees' object...

        // Add the Employees object to the HashMap class variable 'emps';
        // note that the entries are never removed.
        emps.put(new Integer(i), emp);
    }
}
}  // end of MemoryLeakClass


*** << If the variable scoping cannot be changed, then using a WeakHashMap instead of a HashMap can solve this problem. This is because weakly referenced entries get freed aggressively by the garbage collector, so the garbage-collection call in the main method above will reclaim the memory between method executions. >> ***

The following change to the above code will prevent an OutOfMemoryError:

Change: private HashMap emp01,emp02,emp03...;
to:     private WeakHashMap emp01,emp02,emp03...;

-: WeakHashMap vs HashMap :-

A WeakHashMap is identical to a HashMap in terms of its functionality, except that its entries do not maintain strong references to their keys, so the garbage collector may remove a key from the WeakHashMap and subsequently garbage collect the corresponding object. In other words, the WeakHashMap behaves like a weakly referenced object.
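A small demo of the difference. One caveat: `System.gc()` is only a request to the collector, so the sketch below nudges it a few times rather than relying on a single call.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.WeakHashMap;

public class WeakHashMapDemo {

    // Puts one entry into the given map, drops the only strong reference to
    // the key, then repeatedly prompts the garbage collector; returns the
    // final map size.
    static int sizeAfterGc(Map<Object, String> map) throws InterruptedException {
        Object key = new Object();
        map.put(key, "cached value");
        key = null; // for a WeakHashMap, no strong reference to the key remains

        for (int i = 0; i < 50 && !map.isEmpty(); i++) {
            System.gc();       // a request, not a guarantee
            Thread.sleep(10);
        }
        return map.size();
    }

    static int hashMapSizeAfterGc() throws InterruptedException {
        return sizeAfterGc(new HashMap<Object, String>());
    }

    static int weakHashMapSizeAfterGc() throws InterruptedException {
        return sizeAfterGc(new WeakHashMap<Object, String>());
    }

    public static void main(String[] args) throws InterruptedException {
        // The HashMap keeps its entry; the WeakHashMap's entry is collected.
        System.out.println("HashMap entries left:     " + hashMapSizeAfterGc());
        System.out.println("WeakHashMap entries left: " + weakHashMapSizeAfterGc());
    }
}
```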

-: Serializable vs Externalizable :-

Serialization can be a slow process if you have a large object graph and the classes in the object graph contain a large number of variables. The Serializable interface by default serializes the state of all the classes forming the object graph.

Sometimes it may not be a requirement to serialize the state of all the classes/superclasses in the object graph. This can normally be done by declaring the unwanted class variables as transient. But what if this needs to be decided at runtime? The solution is to replace the Serializable implementation with Externalizable.

The Externalizable interface gives the implementing class full control over which state to keep and which to discard, and this can be decided conditionally at run-time using its two methods, readExternal and writeExternal. This complete control over marshalling and unmarshalling at run-time can lead to improved application performance.

*** A note of caution though: the methods readExternal and writeExternal are public, so one has to consider the security aspects of the code.
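A minimal sketch of such a run-time decision (the `EmployeeRecord` class and its `includeSalary` flag are illustrative, not from any particular codebase):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.Externalizable;
import java.io.IOException;
import java.io.ObjectInput;
import java.io.ObjectInputStream;
import java.io.ObjectOutput;
import java.io.ObjectOutputStream;

public class EmployeeRecord implements Externalizable {

    private String name;
    private int salary;
    private boolean includeSalary = true; // a decision that can change at run-time

    public EmployeeRecord() { }           // Externalizable needs a public no-arg constructor

    EmployeeRecord(String name, int salary) {
        this.name = name;
        this.salary = salary;
    }

    public void writeExternal(ObjectOutput out) throws IOException {
        out.writeUTF(name);
        out.writeBoolean(includeSalary);
        if (includeSalary) {
            out.writeInt(salary);         // only the state we chose to keep
        }
    }

    public void readExternal(ObjectInput in) throws IOException, ClassNotFoundException {
        name = in.readUTF();
        if (in.readBoolean()) {
            salary = in.readInt();
        }
    }

    // Serializes a record to bytes, reads it back, and returns the salary
    // that survived the round trip.
    static int salaryAfterRoundTrip(String name, int salary) throws Exception {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream oos = new ObjectOutputStream(bytes);
        oos.writeObject(new EmployeeRecord(name, salary));
        oos.close();

        ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes.toByteArray()));
        return ((EmployeeRecord) ois.readObject()).salary;
    }

    public static void main(String[] args) throws Exception {
        System.out.println(salaryAfterRoundTrip("Ann", 1000)); // prints "1000"
    }
}
```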

Monday, 10 August 2009

J2EE Applications Performance Tuning Part 1

When it comes to enterprise-scale Java applications, there may be severe performance degradation due to software architecture design flaws and the application infrastructure setup.

The performance degradation can occur due to faulty code causing:

        - Memory Leaks
        - Inefficient thread pooling and connection pooling.
        - Leaky Sessions
        - Absence of optimised caching mechanism
        - Improper use of synchronization &
         Collections framework implementation classes in code.

Memory leaks

Memory leaks occur when faulty application code keeps lingering references to unused/unwanted objects even after processing completes, preventing the garbage collector from reclaiming the memory occupied by these objects. More and more leaked objects then fill up the heap (especially the tenured space), causing severe performance degradation of the application and, finally, possibly an OutOfMemoryError if the JVM is unable to allocate enough memory for a new object instance in the heap.

-:Possible Solutions:-

1. Inspect the growth pattern of the Heap to identify trends.

2. Start the application inside a memory profiler. Execute a request and take a snapshot of the heap. Then re-execute the request and take another snapshot. Compare the two snapshots and try to identify live objects that belong to the older request and should not appear in the results of the second execution. You may need to re-run the request a number of times before being able to identify leaked objects.

3. A temporary solution may be an application server restart. However, solutions 1 and 2 above can be used to identify the objects causing memory leaks so the application can be refactored to resolve the issue permanently.

Inefficient thread pooling and database connection pooling configurations

Improper sizing of the thread execution pool in an application server may result in severe performance degradation. This is because the thread pool size determines the number of simultaneous requests that can be processed at one time by the application server. If the pool size is too small, then this will result in a large number of requests waiting in the queue to be picked up. Alternatively if the pool size is too large, then a lot of time will be wasted due to context switching between threads.

Improper sizing of database connection pooling, e.g. JDBC connection pooling, can also result in severe performance degradation. If the pool size is too small, a large number of requests will have to wait due to the unavailability of database connections. Alternatively, if the connection pool is too large, a lot of application server resources will be wasted in maintaining a large number of connections and there will be an excessive load on the database as well, resulting in poor database performance.

-:Possible Solutions:-

1. Analyze CPU usage vs thread pool usage percentage.
  • If CPU usage is low but thread pool usage is high, this indicates that the thread pool is too small and system resources are not being fully utilized by the application. Hence the pool size should be increased proportionately.

  • If CPU usage is high but thread pool usage is low, this indicates that the thread pool is too large and a lot of resources are being spent on context switching between threads. Hence the pool size should be reduced proportionately.

2. Analyze CPU usage vs JDBC connection pool usage percentage.
  • Low CPU usage but high JDBC connection pool utilization indicates connection pool is too small resulting in database and CPU resources being under utilized.

  • High CPU usage but low JDBC connection pool utilization indicates connection pool is too large and needs to be reduced in size.
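Once the analysis above suggests a pool size, it can be pinned down explicitly rather than left to defaults. A sketch using `ThreadPoolExecutor` with a fixed size and a bounded request backlog (the sizes and queue capacity below are placeholders to be tuned, not recommendations):

```java
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class PoolSizingDemo {

    // Runs taskCount short tasks through a pool of poolSize threads and
    // returns how many tasks completed.
    static int runTasks(int poolSize, int taskCount) throws InterruptedException {
        ThreadPoolExecutor pool = new ThreadPoolExecutor(
                poolSize, poolSize,                       // fixed size: tune against CPU usage
                0L, TimeUnit.MILLISECONDS,
                new LinkedBlockingQueue<Runnable>(1000)); // bounded queue of waiting requests

        final AtomicInteger completed = new AtomicInteger();
        for (int i = 0; i < taskCount; i++) {
            pool.execute(new Runnable() {
                public void run() {
                    completed.incrementAndGet(); // stand-in for real request work
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(30, TimeUnit.SECONDS);
        return completed.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(runTasks(4, 100)); // prints "100"
    }
}
```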

Leaky Sessions

A leaky session does not actually leak anything; rather, it holds on to memory through session-scoped objects, causing a memory leak. This memory is eventually reclaimed when the session times out. Such sessions can degrade an application's performance through high memory consumption, and can even result in an OutOfMemoryError and an application server crash if they are created in large numbers.

-:Possible Solutions:-

1. Increase the heap size to accommodate the sessions causing memory problems.
2. Encourage application users to log off when not using the application, in order to reduce the number of active sessions.
3. Decrease the session timeout interval if possible, so that sessions expire within a shorter window; this reduces the number of active sessions at any given time, resulting in less memory usage.
4. Refactor the application if possible to reduce the information held by session scoped variables.

Absence of optimised caching mechanism

The absence of an optimised caching mechanism can also result in poor application performance. If an enterprise-scale application does not have an in-memory distributed caching mechanism, its scalability will be severely affected, and over time the increasing transactional load on the system will deteriorate system performance. Cache clusters with an in-memory distributed caching mechanism can prevent this from happening.

Frameworks like Ehcache or Terracotta allow distributed caching.

They:
  • Support both memory and disk cache storage.

  • Provide APIs for caching Hibernate, JMS, and SOAP/REST web service objects.

  • Enable efficient cache handling using cache managers, cache listeners, cache loaders, cache exception handlers etc.

Improper use of Synchronization & Collections framework implementation classes in code.

A J2EE application's performance can be severely affected due to improper use of synchronization and inefficient use of the implementation classes of java collections framework.

  • Large synchronized blocks in code can slow down application performance due to lengthy locking periods.

  • Try to avoid using Vector and Hashtable wherever possible, replacing them with ArrayList and HashMap (which are unsynchronized, so add external synchronization only where it is actually needed).

Wednesday, 1 July 2009

Enterprise Messaging Architecture Design using JMS

When it comes to choosing a messaging solution, one must ensure that the messaging architecture:

  - Is robust.
  - Is scalable.
  - Supports both point-to-point and publish-subscribe models.
  - Efficiently handles high volumes of asynchronous requests.
  - Allows seamless integration with an SOA framework.

An enterprise messaging architecture that caters for the above can be designed using the following core J2EE design patterns:

  - Message Broker
  - Service Activator
  - Service To Worker
  - Web Service endpoint Proxy

Sample Code below using Message Broker, Service Activator and Service To Worker J2EE core design patterns:

-- JMSMessageBroker interface

import java.io.Serializable;

import javax.jms.JMSException;
import javax.naming.NamingException;

public interface JMSMessageBroker {

    void sendTextMessageToQueue(String msg) throws NamingException, JMSException;
    void sendObjectMessageToQueue(Serializable msg) throws JMSException, NamingException;
    void receiveFromQueue();
}

--JMSMessageBrokerImpl class

import java.io.Serializable;

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.MessageProducer;
import javax.jms.ObjectMessage;
import javax.jms.Queue;
import javax.jms.QueueConnection;
import javax.jms.QueueConnectionFactory;
import javax.jms.QueueSession;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.NamingException;

public class JMSMessageBrokerImpl implements JMSMessageBroker {

private QueueConnectionFactory connectionFactory;
private JMSServiceLocator jmsServiceLocator;
private Queue queue;
private QueueConnection queueConnection;
private QueueSession queueSession;
private MessageProducer messageProducer;
private TextMessage textMessage;
private ObjectMessage objectMessage;
private MessageConsumer messageConsumer;
private Message mesg;
private String text;
private Object obj;

public void receiveFromQueue() {

    try {
        connectionFactory = (QueueConnectionFactory) jmsServiceLocator.getQueueConnectionFactory();
        queueConnection = connectionFactory.createQueueConnection();
        queue = jmsServiceLocator.getQueue();
        queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
        messageConsumer = queueSession.createConsumer(queue);
        queueConnection.start();          // start message delivery before receiving
        mesg = messageConsumer.receive(); // blocks until a message arrives

        if (mesg instanceof TextMessage) {
            text = ((TextMessage) mesg).getText();
        } else if (mesg instanceof ObjectMessage) {
            obj = ((ObjectMessage) mesg).getObject();
        }
    } catch (NamingException e) {
        e.printStackTrace();
    } catch (JMSException e1) {
        e1.printStackTrace();
    }
}

public void sendObjectMessageToQueue(Serializable msg) throws JMSException, NamingException {

    connectionFactory = (QueueConnectionFactory) jmsServiceLocator.getQueueConnectionFactory();
    queueConnection = connectionFactory.createQueueConnection();
    queue = jmsServiceLocator.getQueue();
    queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
    messageProducer = queueSession.createProducer(queue);

    objectMessage = queueSession.createObjectMessage(msg);
    messageProducer.send(objectMessage);
}

public void sendTextMessageToQueue(String msg) throws NamingException, JMSException {

    connectionFactory = (QueueConnectionFactory) jmsServiceLocator.getQueueConnectionFactory();
    queueConnection = connectionFactory.createQueueConnection();
    queue = jmsServiceLocator.getQueue();
    queueSession = queueConnection.createQueueSession(false, Session.AUTO_ACKNOWLEDGE);
    messageProducer = queueSession.createProducer(queue);

    textMessage = queueSession.createTextMessage(msg);
    messageProducer.send(textMessage);
}
}  // end of JMSMessageBrokerImpl

--JMSTaskManager interface

public interface JMSTaskManager {

    void processRequest() throws InterruptedException;
}


--JMSTaskManagerImpl class

import java.io.Serializable;

public class JMSTaskManagerImpl implements JMSTaskManager, Serializable {

    private JMSCommandProcessorImpl jmsCommandProcessor;
    private Object businessService;
    private String action;
    private Object[] arguments;

    public JMSTaskManagerImpl(Object businessService, String action, Object[] arguments) {
        this.businessService = businessService;
        this.action = action;
        this.arguments = arguments;
    }

    public void processRequest() throws InterruptedException {
        jmsCommandProcessor = new JMSCommandProcessorImpl();
        jmsCommandProcessor.processRequest(businessService, action, arguments);
    }
}


--JMSCommandProcessor interface

public interface JMSCommandProcessor {

    void processRequest(Object businessService, String action, Object[] arguments) throws InterruptedException;
}


--JMSCommandProcessorImpl class

import java.io.Serializable;
import java.lang.reflect.InvocationTargetException;
import java.lang.reflect.Method;

public class JMSCommandProcessorImpl implements JMSCommandProcessor, Serializable {

    private Command command;

    public void processRequest(Object businessService, String action, Object[] arguments) throws InterruptedException {

        command = new Command(businessService, action, arguments);
        Thread task = new Thread(command);
        task.start();  // execute the command asynchronously (Service Activator)
    }
}


class Command implements Runnable {

    private Object businessService;
    private String action;
    private Object[] arguments;

    Command(Object businessService, String action, Object[] arguments) {
        this.businessService = businessService;
        this.action = action;
        this.arguments = arguments;
    }

    public void run() {
        try {
            Class cls = this.businessService.getClass();
            Object service = cls.newInstance();

            // Invoke the method matching the requested action on the service.
            for (Method method : cls.getMethods()) {
                if (method.getName().equals(action)) {
                    method.invoke(service, arguments);
                }
            }
        } catch (InstantiationException e) {
            e.printStackTrace();
        } catch (IllegalAccessException e1) {
            e1.printStackTrace();
        } catch (InvocationTargetException e3) {
            e3.printStackTrace();
        }
    }
}


--JMSMessageListener class

import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.ObjectMessage;

public class JMSMessageListener implements MessageListener {

    private JMSTaskManager jmsTaskManager;

    public void onMessage(Message msg) {

        if (msg instanceof ObjectMessage) {
            try {
                // The ObjectMessage payload is the serialized task manager,
                // which carries the command to execute.
                jmsTaskManager = (JMSTaskManager) ((ObjectMessage) msg).getObject();
                jmsTaskManager.processRequest();
            } catch (JMSException e) {
                e.printStackTrace();
            } catch (InterruptedException e1) {
                e1.printStackTrace();
            }
        }
    }
}


Thursday, 21 May 2009

Continuous Integration best practices

Continuous integration is a practice which, if incorporated in the software development life cycle, results in an increased ability to spot errors before they are introduced into the system.

This development practice greatly reduces regression bugs in the system and is an inherent part of agile software development methodologies like XP and SCRUM.

Continuous Integration best practices are as follows:

1. When a developer commits code to a version control system (e.g. CVS), a new build should start automatically.
2. If the build is successful, automated tests should run without any manual intervention.
3. If the tests are successful, the integration cycle ends; otherwise, check out the code that broke the build and fix it.

Continuous integration can be implemented using the following products:

- CruiseControl
- Hudson

Monday, 18 May 2009

Enterprise System Maintenance/Production Support

Enterprise System maintenance and production support, Challenges and possible solutions

Enterprise systems generally have a tiered architecture comprising the following tiers:

-    A presentation layer (e.g. HTML, JSPs)
-    A service layer (e.g. JMS message brokers and web service brokers)
-    A business domain layer (e.g. Java, C++)
-    A persistence layer (e.g. an RDBMS like Oracle, Sybase etc.)

Challenges & Solutions

Requirement of Multi-skilled support personnel

Support personnel must be familiar with multiple software platforms, multiple programming languages, various architectural frameworks and the tools of the trade. It is not always easy to get the right person for the job.

Possible solutions can be in the form of:

a) Using a cross-functional team, although I think this is only a temporary solution and can make it a nightmare to fulfil SLA requirements.
b) Continuous periodic skill upgrade programmes.

Complexity and cost of setting up test environments to mimic production.

It is difficult and not cost effective to provide test environments that can mimic the production setup. This sometimes makes it difficult to replicate production bugs due to differences in configuration and hardware capabilities.

A possible solution is a phased approach to the investigation, and also breaking up the fix and applying it in phases.

Emergence of agile methodologies: a paradigm shift throwing up new challenges

Iterative incremental development techniques can result in an increase in the frequency of maintenance releases, which increases the possibility of regression bugs being introduced in the system.

Possible solution could be pre-release meetings between support and development teams and proper co-ordination between the two teams during production releases.

Complexities encountered in testing integrated distributed system components

Integrating multiple distributed system components across multiple platforms, including legacy code and proprietary software, often presents operational scenarios which are difficult to test, e.g. SSO proxy testing or web service calls to third party software.

A possible solution is some involvement of architects and designers at this stage, to ensure proper use of mock objects.

Skill retention and learning curve challenges

Excessive manpower movement can result in shortfalls in skill retention and knowledge distribution within the team.

The team leader should ensure that proper handover takes place when members leave the project.

Requirement of a tiered application support structure which has its pros and cons

The merit of a tiered application support structure is that it enables faster problem resolution due to segregation of the problem domain across multiple teams (1st line, 2nd line etc.), using a dynamic problem escalation and feedback mechanism.

The demerit of this support structure is that, for certain production problems, one must deal with overlapping responsibilities between teams.

An industry-standard support structure for an enterprise system is shown in the diagram below.

Tuesday, 28 April 2009

Hibernate Vs JDBC Performance

==< The Hibernate advantage over JDBC >==

Concurrency Support

In JDBC there is no built-in check that a user is always updating the latest copy of the data; this check has to be added by the developer.
Hibernate maintains this concurrency check using a version field. It checks this version field in the database table before every update operation.

So, if two users retrieve the same data and modify it, and one of them saves his modification, the version gets updated. When the second user then tries to save his data, Hibernate doesn't allow it, because the data he retrieved has since been modified and his version doesn't match the version in the database.
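A sketch of how the version field is declared in a Hibernate XML mapping (the `Account` entity and column names are illustrative; `<version>` must appear right after `<id>`):

```xml
<class name="Account" table="ACCOUNT">
    <id name="id" column="ID"/>
    <!-- Hibernate increments this on every update and includes it in the
         UPDATE's WHERE clause; a stale version means zero rows updated and
         a StaleObjectStateException for the second user. -->
    <version name="version" column="VERSION"/>
    <property name="balance" column="BALANCE"/>
</class>
```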

Caching and Connection Pooling

In JDBC, caching and connection pooling have to be hand-coded.
Hibernate provides excellent caching support and connection pooling for better application performance.

Transaction Management

In JDBC one has to handle transaction management explicitly in the code.
Hibernate provides built-in transaction management.

Programming Overhead

In JDBC one has to do a lot of coding in the form of SQL queries to handle persistent data in the database.
In Hibernate there is no need to write SQL queries to save and retrieve the data, which reduces programming overhead and development time.

Maintenance Costs

Applications using JDBC contain large amounts of code for handling persistent data. This code is subject to change whenever the database table structure changes, leading to high maintenance costs.

In Hibernate the actual mapping between database tables and program objects is done in an XML descriptor file. So any change to a database table only needs a change in the XML file, resulting in centralized maintenance and lower maintenance costs.
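For example, a Hibernate mapping file of the kind described above might look like this (the class, table, and column names are illustrative):

```xml
<!-- Employee.hbm.xml : illustrative mapping; names are made up for the example -->
<hibernate-mapping>
  <class name="com.example.Employee" table="EMPLOYEE">
    <id name="id" column="EMP_ID">
      <generator class="native"/>
    </id>
    <version name="version" column="VERSION"/>
    <property name="name" column="EMP_NAME"/>
  </class>
</hibernate-mapping>
```

If the EMP_NAME column is renamed, only this file changes; the Employee class itself is untouched.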

Wednesday, 15 April 2009

Compare between EJB3.0 and EJB2.0

Configuration & Performance improvements in EJB3.0 over EJB2.0

1. EJB2.0 uses XML descriptor files to define bean configuration and dependencies, and performs JNDI lookups for object references, which is slow.
EJB3.0 uses POJOs with the newly introduced metadata annotations instead of JNDI lookups and XML deployment descriptor files. This architecture results in better performance.

2. In EJB2.0 one has to write Home and Remote interfaces and also implement standard interfaces like javax.ejb.SessionBean, which requires implementing container callback methods like ejbPassivate, ejbActivate, ejbLoad, ejbStore, etc.
An EJB3.0 bean is a simple POJO and doesn't need to implement Home or Remote interfaces, or standard interfaces like javax.ejb.SessionBean, so there is no need to implement container callback methods like ejbPassivate, ejbActivate, ejbLoad, ejbStore, etc. This results in simpler configuration and better performance.
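The EJB3.0 style can be sketched as follows. Note that the real annotation is javax.ejb.Stateless; a stand-in @Stateless is defined inline here only so the sketch is self-contained and compilable without a Java EE runtime:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for javax.ejb.Stateless, defined inline only to keep the sketch self-contained.
@Retention(RetentionPolicy.RUNTIME)
@interface Stateless {}

// EJB3.0 style: a plain POJO carrying metadata; no Home/Remote interfaces and no
// javax.ejb.SessionBean callback methods (ejbActivate, ejbPassivate, ...) to implement.
@Stateless
public class PayrollBean {
   public double computeNetPay(double gross, double taxRate) {
      return gross * (1 - taxRate);
   }
}
```

In EJB2.0 the same bean would additionally need a Home interface, a Remote or Local interface, and empty implementations of the javax.ejb.SessionBean callbacks.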

Flexibility & Portability improvements in EJB3.0 over EJB2.0

1. EJB2.0 objects are heavyweight.
EJB3.0 entities do not need to implement the interfaces mentioned above, so they are lightweight and easy to convert from a DAO to an entity bean or vice versa.

2. EJB2.0's EJB-QL is not very flexible and has limitations.
EJB3.0 uses a refined EJB-QL which allows multiple levels of joins, making database queries much more flexible.

3. EJB2.0 uses entity beans to access the database.
EJB3.0 supports Java Persistence API for all its data needs which is more generalized and eliminates portability issues.

4. EJB2.0 needs an EJB container to run.
EJB3.0 beans do not need to implement standard interfaces and hence can be loaded and run in an independent JVM, without the need for an EJB container.

5. EJB2.0 has limitations in terms of its pluggability with third party persistence providers.
EJB3.0 can be used with pluggable third party persistence providers.

6. In EJB2.0, security is configured through deployment descriptors.
In EJB3.0, security can be configured through annotations, which simplifies configuration and setup and also reduces overhead.

Thursday, 26 March 2009

Scheduling and thread pooling in Spring tutorial

The Spring Framework contains integration classes for scheduling support and thread pooling.

The scheduling support classes act as wrappers for the Quartz scheduler.

They are as follows:

     - QuartzJobBean
     - SchedulerFactoryBean
     - JobDetailBean
     - SimpleTriggerBean
     - CronTriggerBean

Some of the thread pooling support classes are as follows:

     - ThreadPoolTaskExecutor
     - SimpleAsyncTaskExecutor
     - ConcurrentTaskExecutor

Please see below a simple example using the Spring Framework's Quartz scheduling and thread pooling features:

UML Diagram of example using Spring's scheduling and thread pooling features


Please see the example code below:


import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.springframework.context.ApplicationContext;
import org.springframework.scheduling.quartz.QuartzJobBean;

public class MonitorJob extends QuartzJobBean {

   private int timeout;

   private static final String APPLICATION_CONTEXT_KEY = "applicationContext";

   public void setTimeout(int timeout) {
     this.timeout = timeout;
   }

   protected void executeInternal(JobExecutionContext context) throws JobExecutionException {
     processJobs(context);
   }

   private ApplicationContext getApplicationContext(JobExecutionContext context)
       throws Exception {

     ApplicationContext appCtx =
         (ApplicationContext) context.getScheduler().getContext().get(APPLICATION_CONTEXT_KEY);

     if (appCtx == null) {
       throw new JobExecutionException(
           "No application context available in scheduler context for key \"" + APPLICATION_CONTEXT_KEY + "\"");
     }
     return appCtx;
   }

   private void processJobs(JobExecutionContext context) {

     MonitorProcessDelegate procDelegate = null;

     try {
       ApplicationContext ctx = getApplicationContext(context);
       procDelegate = (MonitorProcessDelegate) ctx.getBean("MonitorProcessDelegate");
     } catch (Exception e) {
       e.printStackTrace();
     }

     System.out.println("Running job monitor");

     if (procDelegate != null) {
       procDelegate.ListActiveProcesses();
       procDelegate.ListBusinessExceptionCases();
     }
   }
}


import org.springframework.core.task.TaskExecutor;

public class MonitorProcessDelegate {

   private class ActiveProcesses implements Runnable {

     public void run() {
       // code for identifying and listing active processes
     }
   }

   private class BusinessExceptionCasesBacklog implements Runnable {

     public void run() {
       // code for identifying and listing business exception cases
     }
   }

   private TaskExecutor taskExecutor;

   public MonitorProcessDelegate(TaskExecutor taskExecutor) {
     this.taskExecutor = taskExecutor;
   }

   public void ListActiveProcesses() {
     taskExecutor.execute(new ActiveProcesses());
   }

   public void ListBusinessExceptionCases() {
     taskExecutor.execute(new BusinessExceptionCasesBacklog());
   }
}

Spring's Dispatcher servlet's configuration file settings


Abstract Factory Pattern tutorial: creating threads

Say, for example:

Finance module uses threads with the following attributes:

ThreadGroup = "Finance"
Priority = 7

Reporting module uses threads with the following attributes:

ThreadGroup = "Report"
Priority = 5

Using the Abstract Factory pattern, it is possible to design a framework that lets developers transparently create threads with module-specific attributes, while maintaining centralized control over module-specific thread properties.

Please see below UML Diagram of the Framework:


Sample code of framework:


public interface AbstractIF {
   public Thread createThread();
}

public interface AbstractFactoryIF {
   public AbstractIF createThread(Runnable runnable);
}

public class ThreadFactory implements AbstractFactoryIF {

   String moduleName;

   public ThreadFactory(String moduleName) {
     this.moduleName = moduleName;
   }

   public AbstractIF createThread(Runnable runnable) {
     if (moduleName.equals("Finance")) {
       return new CreateFinanceThread(moduleName, runnable);
     } else if (moduleName.equals("Report")) {
       return new CreateReportThread(moduleName, runnable);
     }
     return null;
   }
}


public class CreateFinanceThread implements AbstractIF {

   private String moduleName;
   private ThreadGroup threadGroup;
   private Runnable runnable;
   private Thread t;
   private int priority = 7;

   public CreateFinanceThread(String moduleName, Runnable runnable) {
     this.runnable = runnable;
     this.moduleName = moduleName;
   }

   public Thread createThread() {
     threadGroup = new ThreadGroup(moduleName);
     t = new Thread(threadGroup, runnable);
     t.setPriority(priority);   // apply the Finance module's thread priority
     return t;
   }
}


public class CreateReportThread implements AbstractIF {

   private String moduleName;
   private ThreadGroup threadGroup;
   private Runnable runnable;
   private Thread t;
   private int priority = 5;

   public CreateReportThread(String moduleName, Runnable runnable) {
     this.runnable = runnable;
     this.moduleName = moduleName;
   }

   public Thread createThread() {
     threadGroup = new ThreadGroup(moduleName);
     t = new Thread(threadGroup, runnable);
     t.setPriority(priority);   // apply the Report module's thread priority
     return t;
   }
}


public class ProcessDelegate {

   private String moduleName;
   private Thread t;

   public ProcessDelegate(String moduleName) {
     this.moduleName = moduleName;
   }

   public void doproc() {
     AbstractFactoryIF factory = new ThreadFactory(moduleName);
     AbstractIF thread = factory.createThread(new Runnable() {
       public void run() {
         doStuff(moduleName);
       }
     });
     t = thread.createThread();
     t.start();
   }

   public void doStuff(String moduleName) {
     // do module specific stuff here based on the value of moduleName parameter..

     /* The Finance stuff will be run in a thread with:

          - ThreadGroup = "Finance"
          - priority = 7

        The Reporting stuff will be run in a thread with:

          - ThreadGroup = "Report"
          - priority = 5

        But this will be transparent to the developer using the ProcessDelegate class. */
   }
}