
1) What is Hibernate?

Hibernate is a powerful, high-performance object/relational persistence and query service. It lets
users develop persistent classes following object-oriented principles such as association,
inheritance, polymorphism, composition, and collections.

2) What is ORM?

ORM stands for Object/Relational Mapping. It is the automated and transparent persistence of
the objects in a Java application to the tables of a relational database, using metadata that
describes the mapping between the objects and the database. It works by transforming the data
from one representation to another.

3) What does an ORM solution comprise of?

• An API for performing basic CRUD (Create, Read, Update, Delete) operations on objects of persistent classes
• A language or an API for specifying queries that refer to the classes and the properties of classes
• A facility for specifying mapping metadata
• A technique for the ORM implementation to interact with transactional objects to perform dirty checking, lazy association fetching, and other optimization functions

4) What are the different levels of ORM quality?

There are four levels defined for ORM quality.

i. Pure relational
ii. Light object mapping
iii. Medium object mapping
iv. Full object mapping

5) What is a pure relational ORM?

The entire application, including the user interface, is designed around the relational model and
SQL-based relational operations.

6) What is meant by light object mapping?

The entities are represented as classes that are mapped manually to the relational tables. The
data access code is hidden from the business logic using well-known design patterns. This approach
works well for applications with a small number of entities, or for applications with common,
metadata-driven data models. It is the most widely known approach.

7) What is meant by medium object mapping?

The application is designed around an object model. SQL code is generated at build time,
associations between objects are supported by the persistence mechanism, and queries are
specified using an object-oriented expression language. This approach is best suited for medium-sized
applications with some complex transactions, and it is also used when the application has to
work with several different database products.
8) What is meant by full object mapping?

Full object mapping supports sophisticated object modeling: composition, inheritance,
polymorphism and persistence. The persistence layer implements transparent persistence;
persistent classes do not inherit any special base class or have to implement a special interface.
Efficient fetching strategies and caching strategies are implemented transparently to the
application.

9) What are the benefits of ORM and Hibernate?

There are many benefits; the following are the most important ones.

i. Productivity – Hibernate takes much of the persistence burden off the developer by providing
most of the plumbing, letting the developer concentrate on business logic.
ii. Maintainability – Because Hibernate provides most of the persistence functionality, the
application needs fewer lines of code and is easier to maintain. Automated object/relational
persistence reduces the lines of code even further.
iii. Performance – Hand-coded persistence can outperform automated persistence, but not
always. Hibernate applies many optimizations consistently, so automated persistence often
performs as well as, or better than, hand-written code.
iv. Vendor independence – Irrespective of the underlying database, Hibernate provides a much
easier way to develop a cross-platform application.

10) What does Hibernate code look like?

Session session = getSessionFactory().openSession();
Transaction tx = session.beginTransaction();
MyPersistanceClass mpc = new MyPersistanceClass("Sample App");
session.save(mpc);
tx.commit();
session.close();

Session and Transaction are interfaces provided by Hibernate. There are many other
interfaces besides these.

11) What is a Hibernate XML mapping document and how does it look?

To make most things work in Hibernate, the necessary information is usually provided in an XML
document, called an XML mapping document. Among other things, this document defines how the
properties of the user-defined persistence classes map to the columns of the corresponding tables
in the database.

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
"-//Hibernate/Hibernate Mapping DTD 2.0//EN"
"http://hibernate.sourceforge.net/hibernate-mapping-2.0.dtd">

<hibernate-mapping>
<class name="sample.MyPersistanceClass" table="MyPersitaceTable">
<id name="id" column="MyPerId">
<generator class="increment"/>
</id>
<property name="text" column="Persistance_message"/>
<many-to-one name="nxtPer" cascade="all" column="NxtPerId"/>
</class>
</hibernate-mapping>

Everything should be included inside the <hibernate-mapping> tag; it is the root element of an XML mapping document.

12) Show Hibernate overview?

13) What are the core interfaces of the Hibernate framework?

The following five interfaces form the core of the Hibernate framework (a short sketch showing them working together follows the list).

i. Session interface – This is the primary interface used by Hibernate applications. Instances
of this interface are lightweight and inexpensive to create and destroy. Hibernate sessions
are not thread safe.
ii. SessionFactory interface – This factory delivers Session objects to the Hibernate
application. Generally there is a single SessionFactory for the whole application, shared
among all the application threads.
iii. Configuration interface – This interface is used to configure and bootstrap Hibernate.
The application uses an instance of this interface to specify the location of the
Hibernate-specific mapping documents.
iv. Transaction interface – This is an optional interface (the above three interfaces are used in
every application). It abstracts the application code from the underlying transaction
implementation, such as a JDBC or JTA transaction.
v. Query and Criteria interfaces – These interfaces allow the user to perform queries and
also to control how the queries are executed.

14) What are Callback interfaces?

These interfaces are used in the application to receive notifications when certain object events
occur, for example when an object is loaded, saved or deleted. Hibernate applications do not need
to implement these callbacks, but they are useful for implementing certain kinds of generic functionality.

15) What are Extension interfaces?

When the built-in functionality provided by Hibernate is not sufficient, Hibernate lets the user
implement certain interfaces to plug in the desired functionality. These interfaces are called
extension interfaces.

16) What are the Extension interfaces that are there in hibernate?

There are many extension interfaces provided by hibernate.

• ProxyFactory interface – used to create proxies
• ConnectionProvider interface – used for JDBC connection management
• TransactionFactory interface – used for transaction management
• Transaction interface – used for transaction management
• TransactionManagerLookup interface – used in transaction management
• Cache interface – provides caching techniques and strategies
• CacheProvider interface – used to plug a Cache implementation into Hibernate
• ClassPersister interface – provides ORM strategies
• IdentifierGenerator interface – used for primary key generation (a short sketch follows this list)
• Dialect abstract class – provides SQL support for a particular database
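To make the IdentifierGenerator entry concrete, here is a minimal sketch of a custom identifier generator written against the Hibernate 3 API (in Hibernate 2.x the same interfaces live under net.sf.hibernate); the class name and the UUID strategy are only illustrative.

import java.io.Serializable;
import java.util.UUID;

import org.hibernate.HibernateException;
import org.hibernate.engine.SessionImplementor;
import org.hibernate.id.IdentifierGenerator;

// Assigns a random UUID string as the identifier of each new entity.
public class UuidStringGenerator implements IdentifierGenerator {
    public Serializable generate(SessionImplementor session, Object object)
            throws HibernateException {
        return UUID.randomUUID().toString();
    }
}

Such a generator would then be referenced from a mapping file with <generator class="UuidStringGenerator"/> (using its fully qualified class name).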

17) What are different environments to configure hibernate?

There are mainly two types of environments in which the configuration of a Hibernate application
differs.

i. Managed environment – In this kind of environment, database connections, transaction
boundaries, security, and so on are managed for the application. Application servers such as
JBoss, WebLogic and WebSphere provide this kind of environment.
ii. Non-managed environment – This kind of environment provides only a basic configuration
template. Tomcat is one of the best-known examples of this kind of environment.

18) What file extension do you use for a Hibernate mapping file?

The file name should look like this: filename.hbm.xml

The file name itself varies; the extension of these files should be ".hbm.xml".

This is just a convention and it is not mandatory, but it is a best practice to follow this
extension.

19) How do you create a SessionFactory?

Configuration cfg = new Configuration();
cfg.addResource("myinstance/MyConfig.hbm.xml");
cfg.setProperties( System.getProperties() );
SessionFactory sessions = cfg.buildSessionFactory();

First, we create an instance of Configuration and use it to specify the location of the mapping
resources and the configuration properties. The configured instance is then used to create the
SessionFactory by calling buildSessionFactory().

20) What is meant by method chaining?

Method chaining is a programming style supported by many Hibernate interfaces, in which each
call returns the object itself so that calls can be strung together. Some developers find it less
readable than conventional Java code, and it is not mandatory to use this style. This is how a
SessionFactory is created using method chaining.

SessionFactory sessions = new Configuration()
    .addResource("myinstance/MyConfig.hbm.xml")
    .setProperties( System.getProperties() )
    .buildSessionFactory();
21) What does the hibernate.properties file consist of?

This is a properties file that should be placed on the application classpath. When the Configuration
object is created during Hibernate initialization, the application automatically detects and reads
this hibernate.properties file.

hibernate.connection.datasource = java:/comp/env/jdbc/AuctionDB
hibernate.transaction.factory_class = net.sf.hibernate.transaction.JTATransactionFactory
hibernate.transaction.manager_lookup_class = net.sf.hibernate.transaction.JBossTransactionManagerLookup
hibernate.dialect = net.sf.hibernate.dialect.PostgreSQLDialect

22) Where should the SessionFactory be placed so that it can be easily accessed?

In a J2EE environment, if the SessionFactory is bound to JNDI it can be easily accessed and
shared between different threads and the various components that are Hibernate aware. You can
bind the SessionFactory to JNDI by configuring the property hibernate.session_factory_name
in the hibernate.properties file.
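For example, a single entry in hibernate.properties is enough; the JNDI name shown here is only an illustration:

hibernate.session_factory_name = java:hibernate/SessionFactory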

23) What are POJOs?

POJO stands for Plain Old Java Object. These are basic JavaBeans that have getter and setter
methods defined for all of their properties, and they may also contain some business logic related
to those properties. Hibernate works more efficiently with POJOs than with other styles of Java
classes.
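A minimal sketch of such a POJO (the Message class and its properties are only illustrative):

public class Message {

    private Long id;        // identifier property
    private String text;

    public Message() {      // no-argument constructor, as Hibernate expects
    }

    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    public String getText() {
        return text;
    }

    public void setText(String text) {
        this.text = text;
    }
}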

24) What is object/relational mapping metadata?

ORM tools require a metadata format for the application to specify the mapping between classes
and tables, properties and columns, associations and foreign keys, Java types and SQL types. This
information is called the object/relational mapping metadata. It defines the transformation
between the different data type systems and relationship representations.

25) What is HQL?

HQL stands for Hibernate Query Language. Hibernate lets the user express queries in its own
portable SQL-like language, called HQL. It also allows queries to be expressed in native SQL.
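For example, assuming an open Session named 'session' and a mapped Employee class with a 'salary' property (both assumptions made for this sketch), an HQL query could look like this:

List highEarners = session
        .createQuery("from Employee e where e.salary > :minSalary")
        .setParameter("minSalary", Double.valueOf(50000))
        .list();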

26) What are the different types of property and class mappings?

• Typical and most common property mapping:

<property name="description" column="DESCRIPTION" type="string"/>

or

<property name="description" type="string">
    <column name="DESCRIPTION"/>
</property>

• Derived properties:

<property name="averageBidAmount" formula="( select AVG(b.AMOUNT) from BID b where b.ITEM_ID = ITEM_ID )" type="big_decimal"/>

• Controlling inserts and updates:

<property name="name" column="NAME" type="string" insert="false" update="false"/>

27) What is Attribute Oriented Programming?

XDoclet brought the concept of attribute-oriented programming to Java. Before JDK 1.5 the
Java language had no support for annotations, so XDoclet uses the Javadoc tag format
(@attribute) to specify class-, field-, or method-level metadata attributes. These attributes are
used to generate the Hibernate mapping files automatically when the application is built. This
style of programming, driven by attributes, is called attribute-oriented programming.
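As a hedged illustration, a persistent class annotated with XDoclet's Hibernate tags typically looks like the following; the Item class, table and column names are assumptions made for the example.

/**
 * @hibernate.class table="ITEM"
 */
public class Item {

    private Long id;
    private String description;

    /**
     * @hibernate.id generator-class="native" column="ITEM_ID"
     */
    public Long getId() {
        return id;
    }

    public void setId(Long id) {
        this.id = id;
    }

    /**
     * @hibernate.property column="DESCRIPTION"
     */
    public String getDescription() {
        return description;
    }

    public void setDescription(String description) {
        this.description = description;
    }
}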

28) What are the different methods of identifying an object?

There are three methods by which an object can be identified.

i. Object identity – Objects are identical if they reside in the same memory location in the
JVM. This can be checked using the == operator.
ii. Object equality – Objects are equal if they have the same value, as defined by the
equals() method. Classes that don't explicitly override this method inherit the
implementation defined by java.lang.Object, which compares object identity (a sketch of a
typical equals()/hashCode() pair follows this list).
iii. Database identity – Objects stored in a relational database are identical if they represent
the same row or, equivalently, share the same table and primary key value.
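Here is a minimal sketch of an equals()/hashCode() pair based on an immutable business key, which is the usual recommendation for persistent classes; the User class and its 'username' property are assumptions made for the example.

public class User {

    private Long id;          // database identity (surrogate key)
    private String username;  // business key

    public boolean equals(Object other) {
        if (this == other) return true;               // object identity
        if (!(other instanceof User)) return false;
        User that = (User) other;
        // object equality based on the business key
        return username != null && username.equals(that.username);
    }

    public int hashCode() {
        return username == null ? 0 : username.hashCode();
    }

    // getters and setters omitted for brevity
}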

29) What are the different approaches to represent an inheritance hierarchy?

i. Table per concrete class.
ii. Table per class hierarchy (a mapping sketch for this approach follows the list).
iii. Table per subclass.
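As a sketch of the second approach (table per class hierarchy), a mapping with a discriminator column could look like the following; the Payment classes and column names are assumptions made for the example.

<class name="sample.Payment" table="PAYMENT">
    <id name="id" column="PAYMENT_ID">
        <generator class="native"/>
    </id>
    <discriminator column="PAYMENT_TYPE" type="string"/>
    <property name="amount" column="AMOUNT"/>

    <subclass name="sample.CreditCardPayment" discriminator-value="CC">
        <property name="cardNumber" column="CC_NUMBER"/>
    </subclass>
    <subclass name="sample.ChequePayment" discriminator-value="CHEQUE">
        <property name="chequeNumber" column="CHEQUE_NUMBER"/>
    </subclass>
</class>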

30) What are managed associations and Hibernate associations?

Associations that are managed by container-managed persistence are called managed
associations; these are bidirectional associations. Hibernate associations, by contrast, are
unidirectional.

1) Introduction

While working with Hibernate web applications we face performance problems caused by database
traffic, especially when that traffic is very heavy. Hibernate is widely used precisely because of its
high performance, so techniques are needed to preserve that performance, and caching is the best
technique for this problem. In this article we discuss how to improve the performance of Hibernate
web applications using caching.

Caching improves the performance of Hibernate web applications by optimizing how the
application talks to the database. The cache stores data that has already been loaded from the
database, so that the traffic between the application and the database is reduced when the
application needs that data again. Most of the time the application works with the data in the
cache, and the database is accessed only when other data is needed. Because accessing the
database takes much longer than accessing the cache, both the access time and the traffic
between the application and the database are reduced. The cache stores only data related to the
currently running application, so it must be cleared from time to time whenever the application
changes. Here are the contents.

• Introduction.
o First-level cache.
o Second-level cache.
• Cache Implementations.
o EHCache.
o OSCache.
o SwarmCache.
o JBoss TreeCache.
• Caching Strategies.
o Read-only.
o Read-write.
o Nonstrict read-write.
o Transactional.
• Configuration.
• <cache> element.
• Caching the queries.
• Custom Cache.
o Configuration.
o Implementation :: ExampleCustomCache.
• Something about Caching.
o Performance.
o About Caching.
• Conclusion.

Hibernate uses two different caches for objects: the first-level cache and the second-level cache.

1.1) First-level cache

The first-level cache is always associated with the Session object, and Hibernate uses it by
default. Its main effect is to reduce the number of SQL statements Hibernate needs to issue
within a given transaction: instead of writing every modification to the database as it happens,
Hibernate batches the changes and updates the database only at the end of the transaction.
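As a rough sketch of this behaviour (the sessionFactory variable and the Employee entity are assumptions made for the example), loading the same object twice within one Session hits the database only once:

Session session = sessionFactory.openSession();
Transaction tx = session.beginTransaction();

Employee first = (Employee) session.get(Employee.class, "E100");   // SELECT is issued
Employee second = (Employee) session.get(Employee.class, "E100");  // served from the first-level cache

System.out.println(first == second);   // true: the same instance within this Session

tx.commit();
session.close();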

1.2) Second-level cache

The second-level cache is always associated with the SessionFactory object. While running
transactions, it loads objects at the SessionFactory level, so those objects are available to the
entire application rather than being bound to a single user. Since the objects are already loaded in
the cache, whenever a cached object is requested it can be returned without a database round
trip. This is how the second-level cache works. A query-level cache can also be used; it is
discussed later.

2) Cache Implementations
Hibernate supports four open-source cache implementations: EHCache (Easy Hibernate Cache),
OSCache (OpenSymphony Cache), SwarmCache, and JBoss TreeCache. Each cache has different
performance characteristics, memory use, and configuration options.

2.1) EHCache (Easy Hibernate Cache) (org.hibernate.cache.EhCacheProvider)

• Fast and lightweight.
• Easy to use.
• Supports read-only and read/write caching.
• Supports memory-based and disk-based caching.
• Does not support clustering.

2.2) OSCache (OpenSymphony Cache) (org.hibernate.cache.OSCacheProvider)

• A powerful and flexible package.
• Supports read-only and read/write caching.
• Supports memory-based and disk-based caching.
• Provides basic support for clustering via either JavaGroups or JMS.

2.3) SwarmCache (org.hibernate.cache.SwarmCacheProvider)

• A cluster-based cache.
• Supports read-only and nonstrict read/write caching.
• Appropriate for applications that have more read operations than write operations.

2.4) JBoss TreeCache (org.hibernate.cache.TreeCacheProvider)

• A powerful replicated and transactional cache.
• Useful when a true transaction-capable caching architecture is needed.

3) Caching Strategies

An important thing to remember while studying these is that none of the cache providers support
all of the cache concurrency strategies.

3.1) Read-only

• Useful for data that is read frequently but never updated.
• It is simple.
• It is the best performer of all the strategies.

Its advantage is that it is safe to use in a cluster. Here is an example of using the read-only
cache strategy (read-only caching should only be applied to classes that are never modified):

<class name="abc.xyz" mutable="false">
    <cache usage="read-only"/>
    ....
</class>
3.2) Read-write

• Used when the data needs to be updated.
• It has more overhead than a read-only cache.
• In an environment where JTA is not used, the transaction should be completed when
Session.close() or Session.disconnect() is called.
• It should never be used if serializable transaction isolation is required.
• In a JTA environment, the property hibernate.transaction.manager_lookup_class must be
specified so that Hibernate can obtain the JTA TransactionManager.
• To use it in a cluster, the cache implementation must support locking.

Here is an example of using the read-write cache strategy.

<class name="abc.xyz" .... >
    <cache usage="read-write"/>
    ....
    <set name="yuv" ... >
        <cache usage="read-write"/>
        ....
    </set>
</class>

3.3) Nonstrict read-write

• Appropriate if the application only rarely needs to update data.
• In a JTA environment, hibernate.transaction.manager_lookup_class must be specified.
• In other (non-JTA) environments, the transaction is completed when Session.close() or
Session.disconnect() is called.

Here is an example of using the nonstrict read-write cache strategy.

<class name="abc.xyz" .... >
    <cache usage="nonstrict-read-write"/>
    ....
</class>

3.4) Transactional

• Supported only by transactional cache providers such as JBoss TreeCache.
• Can only be used in a JTA environment.

4) Configuration

The hibernate.cfg.xml file is used to configure the cache. A typical configuration file is shown
below.

<hibernate-configuration>
<session-factory>
...
<property name="hibernate.cache.provider_class">
org.hibernate.cache.EhCacheProvider
</property>
...
</session-factory>
</hibernate-configuration>

The name in the <property> tag must be hibernate.cache.provider_class to activate the
second-level cache. The hibernate.cache.use_second_level_cache property can also be used
to activate and deactivate the second-level cache. By default, the second-level cache is
activated and uses EHCache.
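For example, the second-level cache can be deactivated explicitly by adding the following property to hibernate.cfg.xml (shown here only as an illustration):

<property name="hibernate.cache.use_second_level_cache">false</property>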

5) <cache> element

The <cache> element of a class has the following form:

<cache
    usage="caching strategy"
    region="RegionName"
    include="all | non-lazy"/>

• usage (mandatory) specifies the caching strategy: transactional, read-write,
nonstrict-read-write or read-only.
• region (optional) specifies the name of the second-level cache region.
• include (optional) – the value non-lazy specifies that properties of an entity mapped with
lazy="true" may not be cached when attribute-level lazy fetching is enabled.

The <cache> element can appear either in a class mapping or in a collection mapping.

6) Caching the queries

So far we have looked only at caching entities within transactions. Now let us look at caching
queries. If some queries run frequently with the same set of parameters, those queries can be
cached. To enable the query cache, set hibernate.cache.use_query_cache to true and then call
Query.setCacheable(true) on each query that should be cached. Because the underlying data
changes over time, query caching needs two cache regions:

• one for storing the results (it caches identifier values and results of value type only);
• one for storing the most recent updates to the queried tables.

The query cache always uses the second-level cache, and queries are not cached by default. Here
is an example implementation of query caching.

List xyz = abc.createQuery("Query")
    .setEntity("…", ….)
    .setMaxResults(some integer)
    .setCacheable(true)
    .setCacheRegion("region name")
    .list();

We can cache the exact results of a query by setting the hibernate.cache.use_query_cache
property in the hibernate.cfg.xml file to true as follows:

<property name="hibernate.cache.use_query_cache">true</property>

Then, we can use the setCacheable() method on any query we wish to cache.

7) Custom Cache

To understand the relationship between the cache and the application, the cache implementation
must generate statistics about cache usage.

7.1) Custom Cache Configuration

In the hibernate.properties file, set the property:

hibernate.cache.provider_class = examples.ExampleCustomCache.ExampleCustomCacheProvider

7.2) Implementation :: ExampleCustomCache

Here is the implementation of ExampleCustomCache. It uses a Hashtable to store the cached
objects and simple counters to keep the cache statistics.

package examples.ExampleCustomCache;

import java.util.Hashtable;
import java.util.Map;

import net.sf.hibernate.cache.Cache;
import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class ExampleCustomCache implements Cache
{
    public Log log = LogFactory.getLog(ExampleCustomCache.class);
    public Map table = new Hashtable(100);
    int hits, misses, newhits, newmisses, locks, unlocks, remhits, remmisses, clears,
        destroys;

public void statCount(StringBuffer input, String string1, int value)
{
    input.append(string1 + " " + value);
}

public String lStats()
{
    StringBuffer res = new StringBuffer();

    statCount(res, "hits", hits);
    statCount(res, "misses", misses);
    statCount(res, "new hits", newhits);
    statCount(res, "new misses", newmisses);
    statCount(res, "locks", locks);
    statCount(res, "unlocks", unlocks);
    statCount(res, "rem hits ", remhits);
    statCount(res, "rem misses", remmisses);
    statCount(res, "clear", clears);
    statCount(res, "destroy", destroys);

    return res.toString();
}
public Object get(Object key)
{
if (table.get(key) == null)
{
log.info("get " + key.toString () + " missed");
misses++;
} else
{
log.info("get " + key.toString () + " hit");
hits++;
}

return table.get(key);
}

public void put(Object key, Object value)
{
log.info("put " + key.toString ());
if (table.containsKey(key))
{
newhits++;
} else
{
newmisses++;
}
table.put(key, value);
}

public void remove(Object key)
{
log.info("remove " + key.toString ());
if (table.containsKey(key))
{
remhits++;
} else
{
remmisses++;
}
table.remove(key);
}

public void clear()
{
log.info("clear");
clears++;
table.clear();
}

public void destroy()
{
    log.info("destroy");
    destroys++;
}

public void lock(Object key)
{
    log.info("lock " + key.toString());
    locks++;
}

public void unlock(Object key)
{
    log.info("unlock " + key.toString());
    unlocks++;
}

// Remaining methods required by the net.sf.hibernate.cache.Cache interface
// (minimal implementations so that the sketch compiles).
public long nextTimestamp()
{
    return System.currentTimeMillis();
}

public int getTimeout()
{
    return 60000;
}
}

Here is the corresponding cache provider, ExampleCustomCacheProvider.

package examples.ExampleCustomCache;

import java.util.Enumeration;
import java.util.Hashtable;
import java.util.Properties;

import net.sf.hibernate.cache.Cache;
import net.sf.hibernate.cache.CacheProvider;

public class ExampleCustomCacheProvider implements CacheProvider
{
    public Hashtable cacheList = new Hashtable();

    public Hashtable getCacheList()
    {
        return cacheList;
    }

    public String cacheInfo()
    {
        StringBuffer aa = new StringBuffer();
        Enumeration cList = cacheList.keys();

        while (cList.hasMoreElements())
        {
            String cName = cList.nextElement().toString();
            aa.append(cName);

            ExampleCustomCache myCache = (ExampleCustomCache) cacheList.get(cName);

            aa.append(myCache.lStats());
        }

        return aa.toString();
    }

    public ExampleCustomCacheProvider()
    {
    }

    // Called by Hibernate to create a new cache region.
    public Cache buildCache(String regionName, Properties properties)
    {
        ExampleCustomCache nC = new ExampleCustomCache();
        cacheList.put(regionName, nC);
        return nC;
    }

    // Also required by the CacheProvider interface (minimal implementation).
    public long nextTimestamp()
    {
        return System.currentTimeMillis();
    }
}

8) Something about Caching

8.1) Performance

Hibernate provides some metrics for measuring the performance of caching, which are all
described in the Statistics interface API, in three categories:

• Metrics related to general Session usage.
• Metrics related to the entities, collections, queries, and the cache as a whole.
• Detailed metrics related to a particular entity, collection, query or cache region.

8.2) About Caching

• All objects that are passed to the methods save(), update() or saveOrUpdate(), or that are
obtained from load(), get(), list(), iterate() or scroll(), are placed in the cache.
• flush() is used to synchronize the objects with the database, and evict() is used to remove
an object from the cache.
• contains() is used to find out whether an object belongs to the cache.
• Session.clear() is used to remove all objects from the cache.
• If a query needs to force a refresh of its query cache region, call
Query.setCacheMode(CacheMode.REFRESH).

A short usage sketch of these calls is shown below.
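The open 'session' and the Employee entity with identifier "E100" are assumptions made for this sketch.

Employee emp = (Employee) session.get(Employee.class, "E100"); // emp is now in the session cache

boolean cached = session.contains(emp);  // true: the object belongs to the session cache
session.evict(emp);                      // removes this one object from the session cache
session.clear();                         // removes every object from the session cache
session.flush();                         // synchronizes the remaining in-memory state with the database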

9) Conclusion

Caching is a good technique, and Hibernate implements it well to improve performance in web
applications, especially when database traffic is heavy. If caching is implemented correctly, our
applications can run at their maximum capacity. I will cover more about the caching
implementations in my coming articles. Try to get the full coding guidelines before implementing
this.

1) Introduction

This article deals with Hibernate interceptors. Hibernate is an open-source project that provides
an ORM solution. For more information about Hibernate, novice readers are encouraged to read
the article An Introduction to Hibernate on javabeat before reading this article.

Situations may demand performing a set of pre-requisite/post-requisite operations before/after
the core functional logic. In such a case, an interceptor can be used to intercept the existing
business functionality and provide extensible or add-on features. Interceptors provide a pluggable
architecture and are generally callback methods that are called by the framework in response to a
particular set of events/actions, provided they are properly registered and configured. They follow
the standard Interceptor pattern and have various advantages in an application design. They can
be used to monitor the various inputs passed to an application in order to validate them. They
even have the capability to override the core functional logic of the module.

For example, consider an online shopping system that ships goods to the customer's shipping
address upon receiving a request. Suppose there is an enhancement to this application stating that
the request has to be validated because of the increasing amount of spam, and that the customer
should be notified through e-mail (or mobile) upon successful delivery of the goods. These two
enhancements have to be projected into the application's core logic.

A general overview of the core logic will look something like the following:

• Validate the user request
• Ship the goods to the customer
• Notify the customer about the successful delivery

As we can see above, the two enhancements have to be projected into the application's core
logic, which requires code changes. But if the application is properly designed with the notion of
interceptors, the code change can be reduced to a minimum.

2) Interceptors in Hibernate

Hibernate provides an ORM solution for persisting and querying data in the database. A Hibernate
application can be structured so that certain methods are invoked when a particular life-cycle
event occurs. The API of a software product will not always completely satisfy the application's
needs and requirements, and Hibernate is no exception. Therefore, the Hibernate API is designed
to provide a pluggable framework through the notion of interceptors.

In a multi-tiered application, the need for interceptors can arise at any level: at the client level, at
the server level, and even at the persistence level. Imagine an application that saves employee
records in a database, and now the application mandates displaying to the database administrator
the history of inserts and updates.

A simple general overview of the logic looks like the following:

• Insert/update the records in the database
• During the insert/update, maintain the log information in a file

As we can see, this logging information should be maintained whenever an insert/update goes to
the database. Such a logger interceptor can easily be plugged into the application with minimal
code change because of the flexible design of Hibernate.

2.1) Types of Interceptors

Based on their scope, Interceptors in Hibernate can fall under two categories. They are,

• Application-scoped Interceptors
• Session-scoped Interceptors

2.1.1) Application-scoped Interceptor

An application can contain one or more database sessions, represented by the Session interface.
If an application is configured to use a global interceptor, it will affect the persistent objects in all
the sessions. The following code configures a global interceptor:

Configuration configuration = new Configuration();
configuration.setInterceptor(new MyInterceptor());

SessionFactory sessionFactory = configuration.buildSessionFactory();

Session session1 = sessionFactory.openSession();
Employee e1, e2 = null;
// Assume e1 and e2 objects are associated with session1.

Session session2 = sessionFactory.openSession();
User u1, u2 = null;
// Assume u1 and u2 objects are associated with session2.

An application-scoped interceptor can be set by calling the
Configuration.setInterceptor(Interceptor) method. In the above code, we have two different
session objects, 'session1' and 'session2'. Let us assume that the Employee objects e1 and e2 are
associated with session 'session1' and the User objects u1 and u2 are associated with session
'session2'. The configured application-scoped interceptor affects all of the objects (e1, e2, u1 and
u2), even though they are in different sessions.

2.1.2) Session-scoped Interceptor

A session-scoped interceptor will affect all the persistent objects that are associated with that
particular session only. The following code shows how to configure a session-scoped interceptor,

Configuration configuration = new Configuration();

SessionFactory sessionFactory = configuration.buildSessionFactory();

MyInterceptor myInterceptor = new MyInterceptor();
Session session1 = sessionFactory.openSession(myInterceptor);
Employee e1, e2 = null;
// Assume e1 and e2 objects are associated with session 'session1'.

MyAnotherInterceptor myAnotherInterceptor = new MyAnotherInterceptor();
Session session2 = sessionFactory.openSession(myAnotherInterceptor);
User u1, u2 = null;
// Assume u1 and u2 objects are associated with session 'session2'.

From the above code, we can infer that a session-scoped interceptor can be set by calling the
method SessionFactory.openSession(Interceptor). In the above code, we have two different
session objects 'session1' and 'session2' being configured with interceptors MyInterceptor and
MyAnotherInterceptor respectively. So, e1 and e2 objects will be affected by MyInterceptor,
whereas u1 and u2 objects will be affected by MyAnotherInterceptor.

3) Interceptor API

Three interfaces related to interceptors are available in Hibernate, two of which are the classic
interfaces. Lifecycle and Validatable are the classic interfaces (in the org.hibernate.classic
package), whereas the Interceptor interface is available in the org.hibernate package.
The following sections discuss the interceptor interfaces in more detail:

3.1) The 'Validatable' Interface

This classic interface can be implemented by a persistent Java class to validate the state of the
persistent object. It has a single method, Validatable.validate(), whose implementation checks
the validity of the state of the object. Consider the following code,

import java.util.Date;
import org.hibernate.classic.Validatable;
import org.hibernate.classic.ValidationFailure;

public class ProjectDuration implements Validatable {

    private Date startDate;
    private Date endDate;

    // Other code here.

    public void validate() {
        if (startDate.after(endDate)) {
            throw new ValidationFailure(
                "Start Date cannot be greater than the End Date.");
        }
    }
}

The above persistent class ProjectDuration implements the Validatable interface and has a
simple validation rule in the validate() method, stating that the project start date cannot come
after the end date. The Validatable.validate() method will be called by the framework during
the save operation. A save operation can happen whenever Session.save(), Session.update(),
Session.saveOrUpdate() or Session.flush() is invoked.

3.2) The 'Lifecycle' Interface

A persistent object goes through the various phases in its life-cycle. It can be newly created,
persisted in the database, can be loaded at a later-time, will undergo modifications if needed and
finally deleted. The various phases that happen in the life of a persistent object are encapsulated
in the Lifecycle interface. Following are the available methods in the Lifecycle Interface.

• onLoad() – called by the framework when the persistent object is loaded, i.e. when
Session.load() is called.
• onSave() – called by the framework before the save operation, when Session.save() or
Session.saveOrUpdate() is called.
• onUpdate() – called by the framework before any properties of the persistent object are
updated, i.e. when a call to Session.update() is made.
• onDelete() – called before the delete operation, i.e. when a call to Session.delete() is made.

All four methods are passed a Session object, which represents the session with which the
persistent objects are associated. A persistent class can implement this interface to provide any
customization, like the following:

import java.io.Serializable;

import org.hibernate.CallbackException;
import org.hibernate.Session;
import org.hibernate.classic.Lifecycle;

class MyCustomizedPersistentClass implements Lifecycle {

    public boolean onDelete(Session s) throws CallbackException {
        return false;
    }

    public void onLoad(Session s, Serializable id) {
        System.out.println("Loading");
    }

    public boolean onSave(Session s) throws CallbackException {
        return false;
    }

    public boolean onUpdate(Session s) throws CallbackException {
        return false;
    }
}

3.3) The 'Interceptor' Interface

This interface allows the application to provide greater customization when persisting objects. It
even allows the code to modify the state of the persistent object. It has more than 15 different
methods, so the designers of Hibernate provide the concrete EmptyInterceptor class, which
implements the Interceptor interface with default/empty method implementations. Applications
can extend the EmptyInterceptor class instead of implementing the Interceptor interface directly.

Following are the most common operations that are available in the Interceptor interface,

• afterTransactionBegin() – called by the framework when a transaction has been started,
i.e. when Session.beginTransaction() is called.
• beforeTransactionCompletion() – called by the framework when the transaction is about
to end (either committed or rolled back), i.e. when a call is made to Transaction.commit()
or Transaction.rollback().
• afterTransactionCompletion() – called by the framework after the transaction has ended
(committed or rolled back).
• onSave() – this revised onSave() method is passed various information such as the property
names of the entity, their values and their states as arguments, and is called by the framework
during the save operation. A save operation may happen during Session.save(),
Session.saveOrUpdate(), Session.flush() or Transaction.commit() calls.
• onUpdate() – this revised onUpdate() method is passed various information such as the
property names of the entity, their values and their states as arguments, and is called by the
framework when the properties of a persistent object are about to be updated. An update
operation may happen during a call to Session.update(), Session.saveOrUpdate(),
Session.flush() or Transaction.commit().
• onLoad() – this revised onLoad() method is passed various information such as the property
names of the entity, their values and their states as arguments, and is called by the framework
when a persistent object is about to be loaded, i.e. when Session.load() is called.
• onDelete() – this revised onDelete() method is passed various information such as the
property names of the entity, their values and their states as arguments, and is called by the
framework when a persistent object is about to be deleted, i.e. when a call to
Session.delete() is made.

4) Test Application

The following section provides a sample application, which uses the interceptors discussed in the
preceding sections.

4.1) Pre-requisites

The following software/products are needed to run the sample application.

• Java Development Kit (http://java.sun.com/javase/downloads/index.jsp)
• Hibernate 3.2 (http://www.hibernate.org/6.html)
• MySQL Database Server (http://dev.mysql.com/downloads/mysql/6.0.html)
• MySQL Database Driver (http://dev.mysql.com/downloads/connector/j/3.1.html)

Let us have a simple scenario which makes use of the two Interceptors.

The first interceptor, CustomSaveInterceptor, populates the persistent object with additional
values derived from the original values given by the user. A database table called "PlayerNames"
is created with columns "fName", "mName", "lName" and "completeName", representing the first
name, middle name, last name and complete name of a player. The user supplies values only for
fName, mName and lName, not for completeName. The completeName, which is just the
concatenation of fName, mName and lName separated by white space, is taken care of by the
CustomSaveInterceptor.

The second interceptor, LoggerInterceptor, keeps track of all the insertions that are made to the
database table.

PlayerName.java:

package interceptor;

import java.io.Serializable;

import org.hibernate.CallbackException;
import org.hibernate.Session;
import org.hibernate.classic.Lifecycle;
import org.hibernate.classic.Validatable;
import org.hibernate.classic.ValidationFailure;

public class PlayerName implements Validatable, Lifecycle {

    private String firstName;
    private String middleName;
    private String lastName;
    private String completeName;

    private String primaryKey;

    public PlayerName(){
    }
    public String getFirstName() {
        return firstName;
    }

    public void setFirstName(String firstName) {
        this.firstName = firstName;
    }

    public String getLastName() {
        return lastName;
    }

    public void setLastName(String lastName) {
        this.lastName = lastName;
    }

    public String getMiddleName() {
        return middleName;
    }

    public void setMiddleName(String middleName) {
        this.middleName = middleName;
    }

    public String getCompleteName() {
        return completeName;
    }

    public void setCompleteName(String completeName) {
        this.completeName = completeName;
    }

    public String getPrimaryKey() {
        return primaryKey;
    }

    public void setPrimaryKey(String primaryKey) {
        this.primaryKey = primaryKey;
    }

    public void validate() throws ValidationFailure {
        if ((firstName.equals(middleName)) &&
                (middleName.equals(lastName))){
            throw new ValidationFailure(
                "First Name, Middle Name and Last Name cannot be the same");
        }
    }

    public boolean onDelete(Session s) throws CallbackException {
        return false;
    }

    public void onLoad(Session s, Serializable id) {
        System.out.println("Loading");
    }

    public boolean onSave(Session s) throws CallbackException {
        return false;
    }

    public boolean onUpdate(Session s) throws CallbackException {
        return false;
    }
}

The above class PlayerName represents the persistent class that we wish to save to the
database. It has properties firstName, middleName, lastName and completeName, which
represent the first name, middle name, last name and complete name of a player. It also has a
field called primaryKey for storing the primary key value, which is set manually by the
application.

This class implements the validation interface Validatable to do a simple validation of the name
values: the implementation in the validate() method ensures that the first name, middle name
and last name are not all identical. If they are, a ValidationFailure is thrown by the program.

The PlayerName class also implements the Lifecycle interface, and the methods onLoad(),
onSave(), onUpdate() and onDelete() are given default implementations.

CustomSaveInterceptor.java:

Following is the code for the custom save interceptor, which extends the EmptyInterceptor class
and provides the simple logic of updating the persistent object with the complete name value.

package interceptor;

import java.io.Serializable;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class CustomSaveInterceptor extends EmptyInterceptor {

    public boolean onSave(Object entity,
                          Serializable id,
                          Object[] state,
                          String[] propertyNames,
                          Type[] types)
    {
        if (entity instanceof PlayerName){
            PlayerName playerName = (PlayerName) entity;
            String completeName =
                playerName.getFirstName() + " " +
                playerName.getMiddleName() + " " +
                playerName.getLastName();
            playerName.setCompleteName(completeName);
        }

        return super.onSave(entity, id, state, propertyNames, types);
    }
}

The onSave() method is the method of interest here, and we can see that it is passed various
parameters. The entity represents the persistent entity object that is about to be saved. The id
represents the Serializable primary key (which in our case is a simple string object). The state
array holds the values of the properties of the persistent object. The propertyNames array holds
the property names as strings: firstName, middleName, lastName and completeName. Since all
the properties in the PlayerName class are strings, the Type array points to the string type.

The code initially does a precondition check to ensure that the entity is of the PlayerName type,
and then updates the completeName property, which is simply the concatenation of the
firstName, middleName and lastName values.

LoggerInterceptor.java:

package interceptor;

import java.io.Serializable;

import org.hibernate.EmptyInterceptor;
import org.hibernate.type.Type;

public class LoggerInterceptor extends EmptyInterceptor {

    public boolean onSave(Object entity,
                          Serializable id,
                          Object[] state,
                          String[] propertyNames,
                          Type[] types)
    {
        System.out.println("Saving the persistent Object " +
            entity.getClass() + " with Id " + id);
        return super.onSave(entity, id, state, propertyNames, types);
    }
}

The implementation of the LoggerInterceptor class is relatively simple, as this class does
nothing apart from overriding the onSave() method and writing the log information to the
console.

InterceptorTest.java:

package interceptor;

import java.util.List;

import org.hibernate.*;
import org.hibernate.cfg.Configuration;

public class InterceptorTest {

    public static void main(String[] args) {

        Configuration configuration = new Configuration().configure();
        configuration.setInterceptor(new CustomSaveInterceptor());

        SessionFactory sessionFactory = configuration.buildSessionFactory();
        Session session = sessionFactory.openSession(new LoggerInterceptor());
        createPlayerNames(session);
        listPlayerNames(session);
    }

    private static void createPlayerNames(Session session){

        PlayerName rahul = createPlayerName("Rahul", "Sharad", "Dravid", "RSD");
        PlayerName dhoni = createPlayerName("Mahendra", "Singh", "Dhoni", "MSD");
        PlayerName karthik = createPlayerName("Krishnakumar", "Dinesh", "Karthik", "KDK");
        PlayerName same = createPlayerName("Same", "Same", "Same", "SME");

        Transaction transaction = session.beginTransaction();
        try{
            session.save(rahul);
            session.save(dhoni);
            session.save(karthik);

            Transaction innerTransaction = null;
            try{
                innerTransaction = session.beginTransaction();
                session.save(same);
            }catch(Exception exception){
                System.out.println("\n" + exception.getMessage());
            }finally{
                if (innerTransaction.isActive()){
                    innerTransaction.commit();
                }
            }
        }catch(Exception exception){
            System.out.println(exception.getMessage());
            transaction.rollback();
            session.clear();
        }finally{
            if (transaction.isActive()){
                transaction.commit();
            }
        }
        session.flush();
    }

    private static PlayerName createPlayerName(String fName,
            String mName, String lName, String id){
        PlayerName playerName = new PlayerName();
        playerName.setFirstName(fName);
        playerName.setMiddleName(mName);
        playerName.setLastName(lName);
        playerName.setPrimaryKey(id);
        return playerName;
    }

    private static void listPlayerNames(Session session){
        Query query = session.createQuery("From PlayerName");
        List<PlayerName> allPlayers = query.list();
        System.out.println("\n");
        for(PlayerName player : allPlayers){
            listPlayerName(player);
        }
    }

    private static void listPlayerName(PlayerName player){
        StringBuilder result = new StringBuilder();
        result.append("First Name = ").append(player.getFirstName())
              .append(" , Middle Name = ").append(player.getMiddleName())
              .append(" , Last Name = ").append(player.getLastName())
              .append(" , Full Name = ").append(player.getCompleteName());

        System.out.println(result.toString());
    }
}

The above code sets the CustomSaveInterceptor globally by calling
Configuration.setInterceptor(new CustomSaveInterceptor()). The LoggerInterceptor is
configured on a per-session basis by calling SessionFactory.openSession(new
LoggerInterceptor()). The createPlayerNames() method creates some test player objects.
Note that the player object 'same' is created with the first name, middle name and last name all
set to 'Same', which is expected to be caught by the validation interceptor.

All the objects are saved by calling the Session.save() method within a transactional context,
but the save operation for the 'same' object is done within a separate transaction, so that even if
the persistence operation for the 'same' object fails, the rest of the operations won't get affected.
This also illustrates the support for nested transactions.

The code then lists out the player objects fetched from the database by executing a simple query
"FROM PlayerName". The output of the above program will look like this,

Saving the persistent Object class interceptor.PlayerName with Id RSD
Saving the persistent Object class interceptor.PlayerName with Id MSD
Saving the persistent Object class interceptor.PlayerName with Id KDK
First Name, Middle Name and Last Name cannot be the same

First Name = Rahul , Middle Name = Sharad , Last Name = Dravid , Full Name = Rahul Sharad Dravid
First Name = Mahendra , Middle Name = Singh , Last Name = Dhoni , Full Name = Mahendra Singh Dhoni
First Name = Krishnakumar , Middle Name = Dinesh , Last Name = Karthik , Full Name = Krishnakumar Dinesh Karthik

4.2) Hibernate Configuration and Mapping Files

Following are the Hibernate configuration and mapping files that have to be placed on the
sample application's run-time classpath.

Hibernate.cfg.xml:

The Hibernate configuration file (hibernate.cfg.xml) provides configuration parameters to the
application, such as the database URL and the username/password for the database server. Given
below is the configuration file for the sample application.

<?xml version='1.0' encoding='utf-8'?>
<!DOCTYPE hibernate-configuration PUBLIC
    "-//Hibernate/Hibernate Configuration DTD//EN"
    "http://hibernate.sourceforge.net/hibernate-configuration-3.0.dtd">

<hibernate-configuration>
<session-factory>
<property
name="hibernate.connection.driver_class">com.mysql.jdbc.Driver</property>
<property
name="hibernate.connection.url">jdbc:mysql://localhost/dbforhibernate</property>
<property name="hibernate.connection.username">root</property>
<property name="hibernate.connection.password">root</property>
<property name="dialect">org.hibernate.dialect.MySQLDialect</property>

<!-- Mapping files -->
<mapping resource="playername.hbm.xml" />
</session-factory>
</hibernate-configuration>

playername.hbm.xml:

Mapping Files provides mapping information like how a Java class is mapped to the relational
database table. Any number of mapping files can be referenced from an application. Given below
is the playername mapping file used in the sample application.
<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC
    "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
    "http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">

<hibernate-mapping>
<class name="interceptor.PlayerName"
table="PlayerNames">
<id name="primaryKey" column="Id"
type = "string">
<generator class="assigned"/>
</id>
<property name="firstName">
<column name="fName"/>
</property>
<property name="middleName">
<column name="mName"/>
</property>
<property name="lastName">
<column name="lName"/>
</property>
<property name="completeName">
<column name="completeName"/>
</property>
</class>
</hibernate-mapping>

5) Summary

This article started with the definition of interceptors, then described where interceptors fit into
the Hibernate technology. It then explained the differences between an application-scoped
interceptor and a session-scoped interceptor, and discussed the various APIs related to Hibernate
interceptors in detail. Finally, the article concluded with a simple sample application making use of
interceptors.

Integrating Spring Framework with Hibernate ORM Framework

1) Introduction

Hibernate is a powerful technology for persisting data in any kind of application. Spring, on the
other hand, is a dependency injection framework that supports IoC (Inversion of Control). The
beauty of Spring is that it integrates well with most of the prevailing popular technologies. In this
article, we discuss how to integrate Spring with Hibernate. The article assumes that the reader
has a basic understanding of both the Spring and Hibernate frameworks.

If you are new to the Spring and Hibernate frameworks, please read the introductory articles on
Spring and Hibernate before reading this article. Shunmuga Raja's Introduction to Spring
Framework will help you understand the fundamentals of the Spring framework. In another
article, Introduction to Hibernate, published on 12/05/2007, Shunmuga Raja explains what an
ORM framework is and how to start writing a simple Hibernate application.

2) Spring and Hibernate

As a pre-requisite, let us understand the need for such integration before we actually get into the
integration between these two technologies. It is well known that Hibernate is a powerful ORM
tool that sits between the application and the database. It enables the application to access data
from any database in a platform-independent manner, so there is no need for the application to
depend on low-level JDBC details like managing connections and dealing with statements and
result sets. All the details needed to access a particular data source are easily configurable in
XML files. Another good thing is that Hibernate can be coupled with both J2SE and J2EE
applications.

One of the problems with using Hibernate is that the client application that accesses the database
through the Hibernate framework has to depend on the Hibernate APIs such as Configuration,
SessionFactory and Session. These objects end up scattered across the code throughout the
application, and the application code has to maintain and manage them manually. In Spring, on
the other hand, business objects can be made highly configurable with the help of the IoC
container; in simple words, the state of an object can be externalized from the application code.
This means that Hibernate objects can be used as Spring beans and can enjoy all the facilities
that Spring provides.

3) Integration Sample

Instead of looking at the various integration APIs available in the Spring bundle in isolation, let us
study and understand these APIs as we go through the sample code. The following sections cover
the various steps involved in the Spring-Hibernate integration, along with detailed explanations.

3.1) Creating Database

The following sample application uses the MySql database for dealing with data. MySql database
can be downloaded from http://dev.mysql.com/downloads/mysql/5.0.html#downloads. After
installing the database, start the MySql client and create a test database by issuing the following
command,

Create database samples;

Note that the character ';' is the statement terminator for every command. Once the 'samples'
database is created, use the database for creating tables by using the command,

Use samples;

This uses the 'samples' database for the current database session. It means that whatever
operation we do, such as creating tables, will affect the 'samples' database. Now, let us create a
sample table called 'employee', which has four fields: id, name, age and salary. The following
command creates the 'employee' table in the 'samples' database,

create table employee(id varchar(10), name varchar(20), age int(3), salary int(10));

Now an empty table (table with no records within it) is created.


3.2) The Employee class

Now let us create a class called Employee for storing the data fetched from the employee table.
The class is designed so that the column names of the 'employee' table map to variable names in
the Java class with the appropriate data types. The complete code listing for the Employee class
is as follows,

Employee.java

package javabeat.spring.hibernate;

public class Employee {

    private String id;
    private String name;
    private int age;
    private double salary;

    public Employee() {
    }

    public String getId(){
        return id;
    }

    public void setId(String id){
        this.id = id;
    }

    public String getName(){
        return name;
    }

    public void setName(String name){
        this.name = name;
    }

    public int getAge(){
        return age;
    }

    public void setAge(int age){
        this.age = age;
    }

    public double getSalary(){
        return salary;
    }

    public void setSalary(double salary){
        this.salary = salary;
    }

    public String toString(){
        return "Id = " + id + ", Name = " + name + ", Age = "
            + age + ", Salary = " + salary;
    }
}

Note that the toString() method is overridden to give a meaningful display for the employee
object.

3.3) Creating the Hibernate Mapping file

We have created the 'employee' table in the database and a corresponding Java class in the
application layer. However, we haven't yet specified that the 'employee' table should map to the
Java class and that the column names in the 'employee' table should map to the Java variables in
the Employee class. This is where the Hibernate mapping files come into the picture. Let us have
a look at the Hibernate mapping file,

employee.hbm.xml

<?xml version="1.0"?>
<!DOCTYPE hibernate-mapping PUBLIC "-//Hibernate/Hibernate Mapping DTD 3.0//EN"
"http://hibernate.sourceforge.net/hibernate-mapping-3.0.dtd">
<hibernate-mapping>
<class name="javabeat.spring.hibernate.Employee" table="Employee">
<id name="id" column="Id">
<generator class="assigned"/>
</id>

<property name="name">
<column name="Name"/>
</property>
<property name="age">
<column name="Age"/>
</property>
<property name="salary">
<column name="Salary"/>
</property>
</class>
</hibernate-mapping>

Note that the Mapping file is an XML file and its name is employee.hbm.xml. The 'hbm' portion of
the file name stands for Hibernate Mapping. Although it is not necessary to follow this convention,
it makes it easy to figure out what type of XML file this is just by looking at the extension. The
XML conforms to a well-defined DTD, hibernate-mapping-3.0.dtd.

The root element of the mapping file is the hibernate-mapping tag, which can define one or
more mappings. Following it we have the class tag, which defines a mapping between the
database table name and the Java class. The 'name' attribute must point to a fully qualified Java
class name, whereas the 'table' attribute must point to the database table.

The next series of tags defines the mapping of the column names to their Java variable
counterparts. The 'id' tag defines an identifier for a row and is commonly used as the primary
key column. The 'property' tag has an attribute called 'name' which points to the Java variable
name, following which is the name of the column in the database table to which it maps.
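
For comparison, and to see what the mapping file provides outside of Spring, a plain Hibernate
bootstrap would consume it roughly as shown below. This is only a sketch with illustrative
property values; the next section shows how Spring takes over this plumbing,

PlainHibernateBootstrap.java

package javabeat.spring.hibernate;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class PlainHibernateBootstrap {

    public static void main(String[] args) {
        // Build a SessionFactory directly from the mapping file and a few
        // connection properties (the values here mirror the Spring configuration).
        Configuration configuration = new Configuration()
                .addResource("resources/employee.hbm.xml")
                .setProperty("hibernate.dialect", "org.hibernate.dialect.MySQLDialect")
                .setProperty("hibernate.connection.driver_class", "com.mysql.jdbc.Driver")
                .setProperty("hibernate.connection.url", "jdbc:mysql://localhost/samples")
                .setProperty("hibernate.connection.username", "root")
                .setProperty("hibernate.connection.password", "pwForRoot");
        SessionFactory sessionFactory = configuration.buildSessionFactory();

        // Open a Session and fetch a row from the mapped 'employee' table.
        Session session = sessionFactory.openSession();
        Employee employee = (Employee) session.get(Employee.class, "123");
        System.out.println(employee);
        session.close();
        sessionFactory.close();
    }
}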
3.4) Creating the Spring Configuration File

This section deals with configuring the various pieces of information needed by the Spring
Framework. In Spring, business objects are configured in an XML file, and the configured business
objects are called Spring Beans. These Spring Beans are maintained by the IoC container and are
handed to the client application upon request. Let us define a data source as follows,

spring-hibernate.xml

<?xml version="1.0" encoding="UTF-8"?>

<beans xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans
http://www.springframework.org/schema/beans/spring-beans-2.0.xsd">

<bean id="myDataSource" class="org.apache.commons.dbcp.BasicDataSource" >


<property name="driverClassName" value="com.mysql.jdbc.Driver"/>
<property name="url" value="jdbc:mysql://localhost/samples"/>
<property name="username" value="root"/>
<property name="password" value="pwForRoot"/>
</bean>


</beans>

The above bean defines a data source of type 'org.apache.commons.dbcp.BasicDataSource'.
More importantly, it defines the various connection properties that are needed for accessing the
database. For accessing the MySql database, we need the MySql JDBC driver, which can be
downloaded from http://dev.mysql.com/downloads/connector/j/5.1.html. The first property,
driverClassName, should point to the class name of the MySql database driver. The second
property, url, represents the URL string needed to connect to the MySql database. The third and
fourth properties represent the database username and password needed to open a database
session.
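
To make the definition above concrete, the same data source could be constructed
programmatically. The following sketch is only an illustration of what Spring does when it
instantiates the bean and injects the four properties,

DataSourceSketch.java

package javabeat.spring.hibernate;

import org.apache.commons.dbcp.BasicDataSource;

public class DataSourceSketch {

    public static BasicDataSource createDataSource() {
        BasicDataSource dataSource = new BasicDataSource();
        // Each setter call corresponds to one <property> element of the 'myDataSource' bean.
        dataSource.setDriverClassName("com.mysql.jdbc.Driver");
        dataSource.setUrl("jdbc:mysql://localhost/samples");
        dataSource.setUsername("root");
        dataSource.setPassword("pwForRoot");
        return dataSource;
    }
}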

Now, let us define the second Spring Bean, which is the SessionFactory bean. If you have
programmed in Hibernate, you will know that the SessionFactory is responsible for creating
Session objects, through which transactions and data access are performed. The same
SessionFactory has to be configured the Spring way, as follows,

<bean id="mySessionFactory"
class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
<property name="dataSource" ref="myDataSource"/>
<property name="mappingResources">
<list>
<value>./resources/employee.hbm.xml</value>
</list>
</property>
<property name="hibernateProperties">
<value>hibernate.dialect=org.hibernate.dialect.MySQLDialect</value>
</property>
</bean>
To configure the SessionFactoryBean properly, we have provided two mandatory pieces of
information. One is the data source, which contains the details for accessing the database; we
configured this in the previous step and refer to it here using the 'ref' attribute of the
'property' tag. The second is a list of mapping files containing the mapping information between
the database tables and the Java classes. We defined one such mapping file in section 3.3 and
reference it here within the 'list' tag.
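
Since LocalSessionFactoryBean is a Spring FactoryBean, asking the container for the
'mySessionFactory' bean yields a ready-to-use Hibernate SessionFactory rather than the factory
bean itself. The following sketch (not part of the sample application) shows what that looks like,

SessionFactorySketch.java

package javabeat.spring.hibernate;

import org.hibernate.Session;
import org.hibernate.SessionFactory;
import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.xml.XmlBeanFactory;
import org.springframework.core.io.FileSystemResource;

public class SessionFactorySketch {

    public static void main(String[] args) {
        // Load the Spring configuration file shown in this section.
        BeanFactory factory = new XmlBeanFactory(
                new FileSystemResource("./src/resources/spring-hibernate.xml"));

        // getBean() returns the SessionFactory built by LocalSessionFactoryBean.
        SessionFactory sessionFactory = (SessionFactory) factory.getBean("mySessionFactory");

        Session session = sessionFactory.openSession();
        System.out.println("Session opened: " + session.isOpen());
        session.close();
    }
}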

The third important Spring Bean is the HibernateTemplate. It provides a wrapper over low-level
data access and manipulation; specifically, it contains methods for inserting, deleting, updating
and finding data in the database. To configure the HibernateTemplate, the only required property
is the SessionFactory, as shown in the following bean definition,

<bean id="hibernateTemplate"
class="org.springframework.orm.hibernate3.HibernateTemplate">
<property name="sessionFactory">
<ref bean="mySessionFactory"/>
</property>
</bean>
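
Once injected, the HibernateTemplate exposes one-line convenience methods that hide the Session
handling completely. The calls below are only a sketch of the kind of operations it wraps; the
sample application itself uses the callback style shown in the next section,

TemplateUsageSketch.java

package javabeat.spring.hibernate;

import java.util.List;
import org.springframework.orm.hibernate3.HibernateTemplate;

public class TemplateUsageSketch {

    public static void demo(HibernateTemplate hibernateTemplate) {
        // Run an HQL query returning all rows of the mapped Employee class.
        List employees = hibernateTemplate.find("from Employee");
        System.out.println("Number of employees: " + employees.size());

        // Load a single row by its identifier and remove it.
        Employee employee = (Employee) hibernateTemplate.get(Employee.class, "123");
        if (employee != null) {
            hibernateTemplate.delete(employee);
        }
    }
}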

The final bean definition is the DAO class, which is the client-facing class. Since this class is
defined at the application level, it can contain any number of methods for wrapping data access
for the client. Since we know that it is the HibernateTemplate class that interacts with the
database, it is ideal to inject a reference to the HibernateTemplate into the DAO class.

<bean id="employeeDao" class="javabeat.spring.hibernate.EmployeeDao">


<property name="hibernateTemplate">
<ref bean="hibernateTemplate"/>
</property>
</bean>

Note that a reference is made to the EmployeeDao class, which is discussed in the following section.

3.5) Defining the EmployeeDao class

As described earlier, this EmployeeDao class can contain any number of methods that can be
accessed by the clients. There are two design choices for this class. The first is to depend
directly on the HibernateTemplate object injected by the IoC container and use its convenience
methods for data access. The second is to use the Hibernate API itself, through a callback, for
data access. The definition of the class is as follows,

EmployeeDao.java

package javabeat.spring.hibernate;

import java.sql.SQLException;
import org.hibernate.HibernateException;
import org.hibernate.Session;
import org.springframework.orm.hibernate3.HibernateCallback;
import org.springframework.orm.hibernate3.HibernateTemplate;

public class EmployeeDao {

    private HibernateTemplate hibernateTemplate;

    public void setHibernateTemplate(HibernateTemplate hibernateTemplate) {
        this.hibernateTemplate = hibernateTemplate;
    }

    public HibernateTemplate getHibernateTemplate() {
        return hibernateTemplate;
    }

    public Employee getEmployee(final String id) {
        HibernateCallback callback = new HibernateCallback() {
            public Object doInHibernate(Session session)
                    throws HibernateException, SQLException {
                return session.load(Employee.class, id);
            }
        };
        return (Employee) hibernateTemplate.execute(callback);
    }

    public void saveOrUpdate(final Employee employee) {
        HibernateCallback callback = new HibernateCallback() {
            public Object doInHibernate(Session session)
                    throws HibernateException, SQLException {
                session.saveOrUpdate(employee);
                return null;
            }
        };
        hibernateTemplate.execute(callback);
    }
}

This class uses the Hibernate API (particularly the Session object) for data access. To let Spring
invoke the Hibernate API on our behalf, we put the logic that uses the Hibernate API into a
well-defined method of an interface that Spring knows about: the HibernateCallback interface,
whose doInHibernate() method is passed an instance of the Hibernate Session.

Note that we have defined two methods, getEmployee() and saveOrUpdate(), in the EmployeeDao
class. To use the Hibernate API, we placed the code in the HibernateCallback.doInHibernate()
method and instructed Spring to execute it by passing the callback reference to the
HibernateTemplate.execute() method.
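
For comparison, the first design choice mentioned above, relying on the HibernateTemplate
convenience methods instead of the Hibernate API, would shrink the same two operations to a few
lines. The class below is only an alternative sketch, not the listing used by the sample
application,

EmployeeTemplateDao.java

package javabeat.spring.hibernate;

import org.springframework.orm.hibernate3.HibernateTemplate;

public class EmployeeTemplateDao {

    private HibernateTemplate hibernateTemplate;

    public void setHibernateTemplate(HibernateTemplate hibernateTemplate) {
        this.hibernateTemplate = hibernateTemplate;
    }

    public Employee getEmployee(String id) {
        // get() returns null if no row exists for the given identifier.
        return (Employee) hibernateTemplate.get(Employee.class, id);
    }

    public void saveOrUpdate(Employee employee) {
        hibernateTemplate.saveOrUpdate(employee);
    }
}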

3.6) The Client Application

SpringHibernateTest.java

package javabeat.spring.hibernate;

import org.springframework.beans.factory.BeanFactory;
import org.springframework.beans.factory.xml.XmlBeanFactory;
import org.springframework.core.io.FileSystemResource;
import org.springframework.core.io.Resource;
import org.springframework.orm.hibernate3.LocalSessionFactoryBean;

public class SpringHibernateTest {

    public static void main(String[] args) {

        Resource resource = new FileSystemResource(
                "./src/resources/spring-hibernate.xml");
        BeanFactory factory = new XmlBeanFactory(resource);

        Employee employee = new Employee();
        employee.setId("123");
        employee.setName("ABC");
        employee.setAge(20);
        employee.setSalary(15000.00d);

        EmployeeDao employeeDao = (EmployeeDao) factory.getBean("employeeDao");
        employeeDao.saveOrUpdate(employee);

        Employee empResult = employeeDao.getEmployee("123");
        System.out.println(empResult);
    }
}

Finally, we come to the sample client application for accessing the test data. The control flow
goes like this: when BeanFactory.getBean("employeeDao") is called, Spring resolves the
references made in the bean definition of the EmployeeDao bean, which happens to be the
HibernateTemplate object. Spring then initializes the HibernateTemplate, which in turn
references the SessionFactory bean. While constructing the SessionFactory, the data source
information is resolved, along with the mappings between the database tables and the Java
classes.
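
If everything is wired correctly, running SpringHibernateTest should insert the row into the
'employee' table and then print a line similar to the following (inferred from the overridden
toString() method, not captured from an actual run),

Id = 123, Name = ABC, Age = 20, Salary = 15000.0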

4) Conclusion

This article discussed the integration of Spring with Hibernate. It explained the need for such an
integration and outlined the benefits it offers. A detailed, step-by-step sample was then given to
illustrate how the integration works.
