
1.

An abstract class is a class that contains one or more abstract methods, which must be implemented by its subclasses. An interface contains only method declarations and no implementation; the classes that implement an interface must provide method definitions for all of its methods.

2. An abstract class is a class prefixed with the abstract keyword followed by the class definition. An interface is declared with the interface keyword.
3. An abstract class contains one or more abstract methods, whereas an interface contains only abstract methods and final declarations.
4. An abstract class may contain the definitions of some methods, but an interface contains only method declarations; no definitions are provided.
5. Abstract classes are useful when some general behaviour should be implemented in the base class and specialized behaviour should be implemented by the child classes. Interfaces are useful when every property must be implemented by the implementing class.

Interface: (1) Declared with the keyword interface. (2) Cannot have a constructor. (3) By default all methods are public abstract methods. (4) Contains only final (constant) variables.

Abstract class: (1) Declared with the keyword abstract. (2) May or may not have a constructor. (3) May contain both abstract and non-abstract methods. (4) May contain final variables and instance variables.
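A minimal sketch of the difference in code (the Drawable/Shape/Circle names are illustrative, not from the original notes):

interface Drawable {
    int MAX_SIZE = 100;          // implicitly public static final (a constant)
    void draw();                 // implicitly public abstract; no body allowed
}

abstract class Shape implements Drawable {
    private final String name;           // instance variable

    protected Shape(String name) {       // abstract classes may have constructors
        this.name = name;
    }

    abstract double area();              // specialization left to subclasses

    String describe() {                  // common behaviour implemented once
        return name + " with area " + area();
    }
}

class Circle extends Shape {
    private final double r;
    Circle(double r) { super("circle"); this.r = r; }
    double area() { return Math.PI * r * r; }
    public void draw() { System.out.println(describe()); }
}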

2. There are two differences between a static inner class and a non-static inner class.

I) Declaring data members and member methods: a non-static inner class cannot have static data members or static member methods, but a static inner class can have both static and non-static data members and member methods.

II) Creating instances: an instance of a non-static inner class is created with a reference to an object of the outer class in which it is defined; this means it has an enclosing instance. An instance of a static inner class is created with a reference to the outer class itself, not to an object of the outer class; this means it has no enclosing instance. For example:

class A {
    class B {
        // static int x;  -- not allowed here
    }
    static class C {
        static int x;     // allowed here
    }
}

class Test {
    public static void main(String[] args) {
        A o = new A();
        A.B obj1 = o.new B();   // needs an enclosing instance
        A.C obj2 = new A.C();   // no reference to an object of the outer class needed
    }
}

3. Java Final Keyword



A Java variable can be declared using the keyword final. A final variable can be assigned only once. A variable that is declared final and not initialized is called a blank final variable; a blank final variable forces the constructors to initialize it. Java classes declared as final cannot be extended, which restricts inheritance. Methods declared as final cannot be overridden. For methods, private is effectively equivalent to final, but for variables it is not. The values of final parameters cannot be changed after initialization (as a small exercise, work out the implications of final parameters for method overriding). Java local classes can only reference local variables and parameters that are declared final. A visible advantage of declaring a Java variable as static final is that the compiled class can result in faster performance.
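The points above can be summarized in a small sketch (the class and member names are made up for illustration):

final class TaxTable {                        // final class: cannot be extended
    static final double TAX_RATE = 0.18;      // static final constant

    final int year;                           // blank final: must be set in a constructor
    TaxTable(int year) { this.year = year; }

    final int getYear() { return year; }      // final method: cannot be overridden

    void show(final int limit) {              // final parameter: cannot be reassigned
        final int doubled = limit * 2;        // final local variable, assigned once
        Runnable r = new Runnable() {         // a local/anonymous class may only
            public void run() {               // reference final local variables
                System.out.println(doubled);
            }
        };
        r.run();
    }
}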

4.

Polymorphism is the capability of an action or method to do different things based on the object that it is acting upon. In other words, polymorphism allows you to define one interface and have multiple implementations. This is one of the basic principles of object-oriented programming.

Method overriding is an example of runtime polymorphism. A method in a subclass can override a method in its superclass with the same name and signature. The Java virtual machine determines the proper method to call at runtime, not at compile time.

Let's take a look at the following example:

class Animal {
    void whoAmI() { System.out.println("I am a generic Animal."); }
}
class Dog extends Animal {
    void whoAmI() { System.out.println("I am a Dog."); }
}
class Cow extends Animal {
    void whoAmI() { System.out.println("I am a Cow."); }
}
class Snake extends Animal {
    void whoAmI() { System.out.println("I am a Snake."); }
}

class RuntimePolymorphismDemo {
    public static void main(String[] args) {
        Animal ref1 = new Animal();
        Animal ref2 = new Dog();
        Animal ref3 = new Cow();
        Animal ref4 = new Snake();
        ref1.whoAmI();
        ref2.whoAmI();
        ref3.whoAmI();
        ref4.whoAmI();
    }
}

The output is

I am a generic Animal.
I am a Dog.
I am a Cow.
I am a Snake.

5. The two keywords this and super help you explicitly name the field or method that you want. Using this and super you have full control over whether to call a method or field present in the same class or to call one from the immediate superclass. The keyword this is used as a reference to the current object as an instance of the current class. The keyword super also references the current object, but as an instance of the current class's superclass. The this reference is useful in situations where a local variable hides, or shadows, a field with the same name. If a method needs to pass the current object to another method, it can do so using the this reference. Note that the this reference cannot be used in a static context, as static code is not executed in the context of any object.
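A small hedged example of both keywords (Account and SavingsAccount are illustrative names):

class Account {
    protected double balance;

    Account(double balance) {
        this.balance = balance;            // this.balance is the field, balance is the parameter
    }

    void deposit(double amount) {
        this.balance += amount;
    }
}

class SavingsAccount extends Account {
    private final double bonusRate;

    SavingsAccount(double balance, double bonusRate) {
        super(balance);                    // invoke the superclass constructor
        this.bonusRate = bonusRate;
    }

    @Override
    void deposit(double amount) {
        super.deposit(amount + amount * bonusRate);   // reuse the superclass method
    }
}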

6. Encapsulation is hiding unwanted/unexpected/proprietary implementation details from the actual users of an object. For example:

    List<string> list = new List<string>();
    list.Sort();
    /* Which sorting algorithm is used and how it is implemented is of no
       interest to the user who just wants to sort, so it is hidden from
       the user of the list. */

Abstraction is a way of providing generalization and hence a common way to work with objects of great diversity. For example:

    class Aeroplane : IFlyable, IFuelable, IMachine
    {
        // The Aeroplane's design says:
        // an Aeroplane is a flying object,
        // an Aeroplane can be fueled,
        // an Aeroplane is a machine.
    }

    // The code for the pilot or driver of the Aeroplane is not concerned
    // with Machine or Fuel. Hence, pilot code:
    IFlyable flyingObj = new Aeroplane();
    flyingObj.Fly();

    // fighter-pilot related code
    IFlyable flyingObj2 = new FighterAeroplane();
    flyingObj2.Fly();

    // UFO related code
    IFlyable ufoObj = new UFO();
    ufoObj.Fly();

    // All three snippets above are generalized using the IFlyable interface
    // (abstraction): the flying code knows how to fly, irrespective of the
    // concrete type of flying object.

    // Similarly, fuel-related code:
    IFuelable fuelableObj = new Aeroplane();
    fuelableObj.FillFuel();

    IFuelable fuelableObj2 = new Car();   // class Car : IFuelable { }
    fuelableObj2.FillFuel();

    // The fueling code does not need to know what kind of vehicle it is,
    // as long as it can fill fuel.

Unit-2

1.

Thread life cycle in java

This topic covers the thread life cycle in Java. We will take a close look at all of its stages (the original tutorial illustrates them with a block diagram).

New born stage: in this stage the thread has been newly created and has not been started yet. This is the first stage of every thread; before starting a thread, you have to create an instance of the Thread class. From the new born stage a thread can go either to the active stage or directly to the dead stage.

Active stage: this is the stage where the thread executes. A thread can enter the active stage only from the new born stage, by calling the start() method of that thread. Being in the active stage does not mean that the thread is running, because there may be another thread with a higher priority than yours. The active stage contains two sub-stages, Running and Runnable.

Running stage: this is the stage where the thread actually executes. It is the thread scheduler's responsibility to decide which thread should execute first. At a time only one thread can be in this stage; all other active threads stay in the Runnable stage waiting for their execution time.

Runnable stage: this is the active sub-stage where threads are waiting for their turn to execute. You can move a thread from the Running stage to the Runnable stage by using the yield() method of the Thread class.

Blocked stage: this is the stage where a thread is blocked from execution; from this stage a thread can come back to the active stage or be sent directly to the dead stage. The wait(), suspend() and sleep() methods can cause a thread to go into the blocked stage. To come back to the active stage you have to use notify() or notifyAll() if the thread was blocked by wait(), and resume() if it was blocked by suspend(). If the thread was blocked by sleep(), it comes back to the active stage automatically when the given time in milliseconds is over.

Dead stage: this is the last stage of the thread life cycle. A thread enters the dead stage when it has finished execution; you can also explicitly bring a thread to the dead stage from any other stage by using the stop() method of the Thread class.
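The stages can be observed through the Thread.getState() method, whose constant names (NEW, RUNNABLE, TIMED_WAITING, TERMINATED) correspond roughly to the new born, active, blocked and dead stages above. A minimal sketch (note that stop(), suspend() and resume() are deprecated in current Java versions, and the state printed right after start() depends on the scheduler):

public class LifeCycleDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(new Runnable() {
            public void run() {
                try {
                    Thread.sleep(100);              // thread goes into the blocked stage
                } catch (InterruptedException e) { }
            }
        });
        System.out.println(t.getState());           // NEW            (new born stage)
        t.start();
        System.out.println(t.getState());           // typically RUNNABLE (active stage)
        Thread.sleep(50);
        System.out.println(t.getState());           // TIMED_WAITING  (blocked by sleep)
        t.join();                                   // wait for the thread to finish
        System.out.println(t.getState());           // TERMINATED     (dead stage)
    }
}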

2. What is synchronization and why is it important? Describe synchronization in respect to multithreading.

Synchronization is the process of allowing threads to execute one after another.

Synchronization controls the access of multiple threads to shared resources. Without synchronization, one thread can modify a shared variable while another thread is updating the same shared variable, which leads to significant errors.


What is synchronization and why is it important?

Java supports execution of multiple threads, which may cause two or more threads to access the same fields or objects. Synchronization is the process that keeps all concurrent threads in execution in sync. It avoids memory consistency errors caused by an inconsistent view of shared memory. When a method is declared synchronized, the thread executing it holds the monitor for that method's object; if another thread is executing the synchronized method, your thread is blocked until that thread releases the monitor.


Explain the use of synchronization keyword.

When a method in Java needs to be synchronized, the keyword synchronized should be added.

Example:

    public synchronized void increment() {
        x++;
    }

Synchronization does not allow invocation of this synchronized method on the same object until the first thread is done with the object. Synchronization allows having control over the data in the class.


What is synchronization and why is it important? Describe synchronization in respect to multithreading.

Threads communicate by sharing access to fields and objects. However, with threading there is a possibility of thread interference and memory inconsistency. Synchronization is used to prevent this. With synchronization, if an object is visible to more than one thread, all reads and writes to that object's variables are done through synchronized methods.
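A small sketch tying the answers above together, using a shared counter (the class names are illustrative); without the synchronized keyword the final count could be smaller than 20000 because of thread interference:

class Counter {
    private int x = 0;

    public synchronized void increment() { x++; }   // only one thread at a time holds the monitor
    public synchronized int get() { return x; }
}

public class SyncDemo {
    public static void main(String[] args) throws InterruptedException {
        final Counter c = new Counter();
        Runnable task = new Runnable() {
            public void run() {
                for (int i = 0; i < 10000; i++) c.increment();
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get());                // always 20000 with synchronization
    }
}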

Unit-3

1.

Basic I/O

This lesson covers the Java platform classes used for basic I/O. It first focuses on I/O Streams, a powerful concept that greatly simplifies I/O operations. The lesson also looks at serialization, which lets a program write whole objects out to streams and read them back again. Then the lesson looks at file I/O and file system operations, including random access files. Most of the classes covered in the I/O Streams section are in the java.io package. Most of the classes covered in the File I/O section are in the java.nio.file package.

I/O Streams

Byte Streams handle I/O of raw binary data.
Character Streams handle I/O of character data, automatically handling translation to and from the local character set.
Buffered Streams optimize input and output by reducing the number of calls to the native API.
Scanning and Formatting allows a program to read and write formatted text.
I/O from the Command Line describes the Standard Streams and the Console object.
Data Streams handle binary I/O of primitive data type and String values.
Object Streams handle binary I/O of objects.
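As an illustration of buffered character streams, a small sketch that copies a text file line by line (the file names input.txt and output.txt are assumptions):

import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.io.PrintWriter;

public class CopyLines {
    public static void main(String[] args) throws IOException {
        try (BufferedReader in = new BufferedReader(new FileReader("input.txt"));
             PrintWriter out = new PrintWriter("output.txt")) {
            String line;
            while ((line = in.readLine()) != null) {   // read one buffered line at a time
                out.println(line);
            }
        }
    }
}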

File I/O (Featuring NIO.2)

What is a Path? examines the concept of a path on a file system.
The Path Class introduces the cornerstone class of the java.nio.file package.
Path Operations looks at methods in the Path class that deal with syntactic operations.
File Operations introduces concepts common to many of the file I/O methods.
Checking a File or Directory shows how to check a file's existence and its level of accessibility.
Deleting a File or Directory.
Copying a File or Directory.
Moving a File or Directory.
Managing Metadata explains how to read and set file attributes.
Reading, Writing and Creating Files shows the stream and channel methods for reading and writing files.
Random Access Files shows how to read or write files in a nonsequential manner.
Creating and Reading Directories covers API specific to directories, such as how to list a directory's contents.
Links, Symbolic or Otherwise covers issues specific to symbolic and hard links.
Walking the File Tree demonstrates how to recursively visit each file and directory in a file tree.
Finding Files shows how to search for files using pattern matching.
Watching a Directory for Changes shows how to use the watch service to detect files that are added, removed or updated in one or more directories.
Other Useful Methods covers important API that didn't fit elsewhere in the lesson.
Legacy File I/O Code shows how to leverage Path functionality if you have older code using the java.io.File class. A table mapping the java.io.File API to the java.nio.file API is provided.
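A brief sketch of a few of these operations with the java.nio.file API (the file name notes.txt is an assumption):

import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;
import java.util.Arrays;
import java.util.List;

public class PathDemo {
    public static void main(String[] args) throws IOException {
        Path file = Paths.get("notes.txt");                      // a Path
        Files.write(file, Arrays.asList("hello", "world"));      // writing/creating a file
        System.out.println(Files.exists(file));                  // checking a file
        List<String> lines = Files.readAllLines(file, StandardCharsets.UTF_8);  // reading
        System.out.println(lines);
        Files.copy(file, Paths.get("notes-copy.txt"));           // copying a file
        Files.delete(Paths.get("notes-copy.txt"));               // deleting a file
    }
}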

Unit-4

1.

Ports and Sockets

In this section we will introduce the concepts of port and socket.

2.10.1 Ports

Each process that wants to communicate with another process identifies itself to the TCP/IP protocol suite by one or more ports. A port is a 16-bit number, used by the host-to-host protocol to identify to which higher-level protocol or application program (process) it must deliver incoming messages. As some higher-level programs are themselves protocols standardized in the TCP/IP protocol suite, such as TELNET and FTP, they use the same port number in all TCP/IP implementations. Those "assigned" port numbers are called well-known ports and the standard applications well-known services.

The "well-known" ports are controlled and assigned by the Internet Assigned Numbers Authority (IANA) and on most systems can only be used by system processes or by programs executed by privileged users. The assigned "well-known" ports occupy port numbers in the range 0 to 1023. The ports with numbers in the range 1024-65535 are not controlled by the IANA and on most systems can be used by ordinary user-developed programs. Confusion due to two different applications trying to use the same port numbers on one host is avoided by writing those applications to request an available port from TCP/IP. Because this port number is dynamically assigned, it may differ from one invocation of an application to the next.

UDP, TCP and ISO TP-4 all use the same "port principle". (See Figure: UDP, A Demultiplexer Based on Ports, and Figure: TCP Connection.) To the extent possible, the same port numbers are used for the same services on top of UDP, TCP and ISO TP-4.
2.10.2 Sockets

Let us first consider the following terminologies:


A socket is a special type of file handle which is used by a process to request network services from the operating system.

A socket address is the triple {protocol, local-address, local-process}; in the TCP/IP suite, for example, {tcp, 193.44.234.3, 12345}.

A conversation is the communication link between two processes.

An association is the 5-tuple that completely specifies the two processes that comprise a connection: {protocol, local-address, local-process, foreign-address, foreign-process}. In the TCP/IP suite, for example, {tcp, 193.44.234.3, 1500, 193.44.234.5, 21} could be a valid association.

A half-association is either {protocol, local-address, local-process} or {protocol, foreign-address, foreign-process}, which specify each half of a connection.

The half-association is also called a socket or a transport address. That is, a socket is an end point for communication that can be named and addressed in a network.

The socket interface is one of several application programming interfaces (APIs) to the communication protocols. Designed to be a generic communication programming interface, it was first introduced by the 4.2BSD UNIX system. Although it has not been standardized, it has become a de facto industry standard. 4.2BSD allowed two different communication domains: Internet and UNIX. 4.3BSD has added the Xerox Network System (XNS) protocols and 4.4BSD will add an extended interface to support the ISO OSI protocols.
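In Java, these socket concepts map onto java.net.ServerSocket and java.net.Socket. A minimal echo pair as a sketch (port 6000 and the "localhost" address are arbitrary choices for the example):

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.io.PrintWriter;
import java.net.ServerSocket;
import java.net.Socket;

public class EchoServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(6000);    // local half-association {tcp, local-address, 6000}
             Socket conn = server.accept();                   // accept one connection
             BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()));
             PrintWriter out = new PrintWriter(conn.getOutputStream(), true)) {
            out.println("echo: " + in.readLine());
        }
    }
}

class EchoClient {
    public static void main(String[] args) throws Exception {
        try (Socket s = new Socket("localhost", 6000);        // client side gets a dynamically assigned port
             PrintWriter out = new PrintWriter(s.getOutputStream(), true);
             BufferedReader in = new BufferedReader(new InputStreamReader(s.getInputStream()))) {
            out.println("hello");
            System.out.println(in.readLine());
        }
    }
}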
b. proxy server

A proxy server has a variety of potential purposes, including:

To keep machines behind it anonymous, mainly for security.[1]
To speed up access to resources (using caching). Web proxies are commonly used to cache web pages from a web server.[2]
To prevent downloading the same content multiple times (and save bandwidth).
To log / audit usage, e.g. to provide company employee Internet usage reporting.
To scan transmitted content for malware before delivery.
To scan outbound content, e.g., for data loss prevention.

Access enhancement/restriction:

To apply access policy to network services or content, e.g. to block undesired sites.
To access sites prohibited or filtered by your ISP or institution.
To bypass security / parental controls.
To circumvent Internet filtering to access content otherwise blocked by governments.[3]
To allow a web site to make web requests to externally hosted resources (e.g. images, music files, etc.) when cross-domain restrictions prohibit the web site from linking directly to the outside domains.
To allow the browser to make web requests to externally hosted content on behalf of a website when cross-domain restrictions (in place to protect websites from the likes of data theft) prohibit the browser from directly accessing the outside domains.

Types of proxy
A proxy server can be placed on the user's local computer or at various points between the user and the destination servers on the Internet.

A proxy server that passes requests and responses unmodified is usually called a gateway or sometimes a tunneling proxy.
A forward proxy is an Internet-facing proxy used to retrieve content from a wide range of sources (in most cases anywhere on the Internet).
A reverse proxy is (usually) an Internet-facing proxy used as a front-end to control and protect access to a server on a private network, commonly also performing tasks such as load balancing, authentication, decryption or caching.

c. The TCP/IP Model separates networking functions into discrete layers. Each layer performs a specific function and is transparent to the layer above it and the layer below it. Network models are used to conceptualize how networks should work, so that hardware and network protocols can interoperate. The TCP/IP model is one of the two most common network models, the other being the OSI Model. The TCP/IP Model is a different way of looking at networking; because the model was developed to describe TCP/IP, it is the closest model of the Internet, which uses TCP/IP. The TCP/IP network model breaks down into four (4) layers:

Application Layer
Transport Layer
Internet Layer
Network Access Layer

TCP/IP MODEL LAYERS


APPLICATION LAYER
The Application Layer provides the user with the interface to communication. This could be your web browser, e-mail client (Outlook, Eudora or Thunderbird), or a file transfer client. The Application Layer is where your web browser, a telnet, ftp, e-mail or other client application runs. Basically, it is any application that rides on top of TCP and/or UDP and uses a pair of virtual network sockets and a pair of IP addresses. The Application Layer sends data to, and receives data from, the Transport Layer.

TRANSPORT LAYER
The Transport Layer provides the means for the transport of data segments across the Internet Layer. The Transport Layer is concerned with end-to-end (host-to-host) communication. Transmission Control Protocol provides reliable, connection-oriented transport of data between two endpoints (sockets) on two computers that use Internet Protocol to communicate. User Datagram Protocol provides unreliable, connectionless transport of data between two endpoints (sockets) on two computers that use Internet Protocol to communicate. The Transport Layer sends data to the Internet Layer when transmitting and sends data to the Application Layer when receiving.

INTERNET LAYER
The Internet Layer provides connectionless communication across one or more networks, a global logical addressing scheme and packetization of data. The Internet Layer is concerned with network to network communication. The Internet Layer is responsible for packetization, addressing and routing of data on the network. Internet Protocol provides the packetization, logical addressing and routing functions that forward packets from one computer to another. The Internet Layer communicates with the Transport Layer when receiving and sends data to the Network Access Layer when transmitting.

NETWORK ACCESS LAYER


The Network Access Layer provides access to the physical network. This is your network interface card. Ethernet, FDDI, Token Ring, ATM, OC, HSSI, or even Wi-Fi are all examples of network interfaces. The purpose of a network interface is to allow your computer to access the wire, wireless or fiber optic network infrastructure and send data to other computers. The Network Access Layer transmits data on the physical network when sending and transmits data to the Internet Layer when receiving.

All Internet-based applications and their data, whether it is a web browser downloading a web page, Microsoft Outlook sending an e-mail, a file, an instant message, or a Skype video or voice call, are chopped into data segments and encapsulated in Transport Layer Protocol Data Units or PDUs (TCP or UDP segments). The Transport Layer PDUs are then encapsulated in the Internet Layer's Internet Protocol packets. The Internet Protocol packets are then chopped into frames at the Network Access Layer and transmitted across the physical media (copper wires, fiber optic cables or the air) to the next station in the network. The OSI Model uses seven layers, and differs quite a bit from the TCP/IP model. The TCP/IP model does a better job of representing how TCP/IP works in a network, but the OSI Model is still the networking model most technical people refer to during troubleshooting or network architecture discussions. We're going to teach you the TCP/IP model from the top down, beginning with the Application Layer.

d. The User Datagram Protocol (UDP) is one of the core members of the Internet protocol suite, the set of network protocols used for the Internet.

With UDP, computer applications can send messages, in this case referred to as datagrams, to other hosts on an Internet Protocol (IP) network without prior communications to set up special transmission channels or data paths. The protocol was designed by David P. Reed in 1980 and formally defined in RFC 768. UDP uses a simple transmission model with a minimum of protocol mechanism.[1] It has no handshaking dialogues, and thus exposes any unreliability of the underlying network protocol to the user's program. As this is normally IP over unreliable media, there is no guarantee of delivery, ordering or duplicate protection. UDP provides checksums for data integrity, and port numbers for addressing different functions at the source and destination of the datagram. UDP is suitable for purposes where error checking and correction is either not necessary or performed in the application, avoiding the overhead of such processing at the network interface level. Time-sensitive applications often use UDP because dropping packets is preferable to waiting for delayed packets, which may not be an option in a real-time system.[2] If error correction facilities are needed at the network interface level, an application may use the Transmission Control Protocol (TCP) or Stream Control Transmission Protocol (SCTP) which are designed for this purpose. A number of UDP's attributes make it especially suited for certain applications.

It is transaction-oriented, suitable for simple query-response protocols such as the Domain Name System or the Network Time Protocol.
It provides datagrams, suitable for modeling other protocols such as IP tunneling or Remote Procedure Call and the Network File System.
It is simple, suitable for bootstrapping or other purposes without a full protocol stack, such as DHCP and the Trivial File Transfer Protocol.
It is stateless, suitable for very large numbers of clients, such as in streaming media applications, for example IPTV.
The lack of retransmission delays makes it suitable for real-time applications such as Voice over IP, online games, and many protocols built on top of the Real Time Streaming Protocol.
It works well in unidirectional communication, suitable for broadcast information such as in many kinds of service discovery and shared information such as broadcast time or the Routing Information Protocol.
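A hedged Java sketch of UDP's connectionless, no-handshake model using DatagramSocket (port 9876 and the message text are arbitrary):

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;

public class UdpDemo {
    public static void main(String[] args) throws Exception {
        try (DatagramSocket receiver = new DatagramSocket(9876);
             DatagramSocket sender = new DatagramSocket()) {
            byte[] msg = "time?".getBytes("UTF-8");
            sender.send(new DatagramPacket(msg, msg.length,
                    InetAddress.getByName("localhost"), 9876));   // fire and forget: no delivery guarantee

            byte[] buf = new byte[512];
            DatagramPacket packet = new DatagramPacket(buf, buf.length);
            receiver.receive(packet);                             // blocks until a datagram arrives
            System.out.println(new String(packet.getData(), 0, packet.getLength(), "UTF-8"));
        }
    }
}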

2.

The fundamental differences between "GET" and "POST"

The HTML specifications technically define the difference between "GET" and "POST" so that the former means that form data is to be encoded (by a browser) into a URL, while the latter means that the form data is to appear within a message body. But the specifications also give the usage recommendation that the "GET" method should be used when the form processing is "idempotent", and in those cases only. As a simplification, we might say that "GET" is basically for just getting (retrieving) data whereas "POST" may involve anything, like storing or updating data, or ordering a product, or sending E-mail. The HTML 2.0 specification says, in section Form Submission (and the HTML 4.0 specification repeats this with minor stylistic changes): If the processing of a form is idempotent (i.e. it has no lasting observable effect on the state of the world), then the form method should be GET. Many database searches have no visible side-effects and make ideal applications of query forms. If the service associated with the processing of a form has side effects (for example, modification of a database or subscription to a service), the method should be POST. In the HTTP specifications (specifically RFC 2616) the word idempotent is defined as follows: Methods can also have the property of "idempotence" in that (aside from error or expiration issues) the side-effects of N > 0 identical requests is the same as for a single request.

The word idempotent, as used in this context in the specifications, is (pseudo)mathematical jargon (see the definition of "idempotent" in FOLDOC) and should not be taken too seriously or literally here. The phrase "no lasting observable effect on the state of the world" isn't of course very exact either, and isn't really the same thing. Idempotent processing, as defined above, does not exclude fundamental changes, only that processing the same data twice has

the same effect as processing it once. But here, in fact, idempotent processing means that a form submission causes no changes anywhere except on the user's screen (or, more generally speaking, in the user agent's state). Thus, it is basically for retrieving data. If such a form is resubmitted, it might get different data (if the data had been changed meanwhile), but the submission would not cause any update of data or other events. The concept of changes should not be taken too pedantically; for instance, it can hardly be regarded as a change that a form submission is logged into the server's log file. On the other hand, sending E-mail should normally be regarded as "an effect on the state of the world".

The HTTP specifications aren't crystal clear on this, and the section Safe Methods in the HTTP/1.1 specification describes the principles in yet another way. It opens a different perspective by saying that users "cannot be held accountable" for side effects, which presumably means any effect other than mere retrieval: In particular, the convention has been established that the GET and HEAD methods SHOULD NOT have the significance of taking an action other than retrieval. These methods ought to be considered "safe". This allows user agents to represent other methods, such as POST, PUT and DELETE, in a special way, so that the user is made aware of the fact that a possibly unsafe action is being requested. Naturally, it is not possible to ensure that the server does not generate side-effects as a result of performing a GET request; in fact, some dynamic resources consider that a feature. The important distinction here is that the user did not request the side-effects, so therefore cannot be held accountable for them. The concept and its background is explained in the section Allowing input in Tim Berners-Lee's Style Guide for online hypertext. It refers, for more information, to User agent watch points, which emphatically says that GET should be used if and only if there are no side effects. But this line of thought, however logical, is not always practical at present, as we shall see. See also the answer to the question "What is the difference between GET and POST?" in the CGI Programming FAQ by Nick Kew.
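A rough Java illustration of the distinction using HttpURLConnection (example.com, the paths and the form parameters are placeholders): the GET request encodes its data into the URL and should be idempotent, while the POST sends the form data in the message body and may have side effects.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class GetVsPost {
    public static void main(String[] args) throws Exception {
        // Idempotent retrieval: data goes into the URL query string.
        HttpURLConnection get = (HttpURLConnection)
                new URL("http://example.com/search?q=java").openConnection();
        get.setRequestMethod("GET");
        System.out.println("GET status: " + get.getResponseCode());

        // Side effects (e.g. placing an order): data goes into the request body.
        HttpURLConnection post = (HttpURLConnection)
                new URL("http://example.com/order").openConnection();
        post.setRequestMethod("POST");
        post.setDoOutput(true);
        post.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        try (OutputStream body = post.getOutputStream()) {
            body.write("product=42&quantity=1".getBytes("UTF-8"));
        }
        System.out.println("POST status: " + post.getResponseCode());
    }
}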
Unit-5

1.

Type 1 JDBC Driver

JDBC-ODBC Bridge driver. The Type 1 driver translates all JDBC calls into ODBC calls and sends them to the ODBC driver. ODBC is a generic API. The JDBC-ODBC Bridge driver is recommended only for experimental use or when no other alternative is available.

Type 1: JDBC-ODBC Bridge

Advantage
The JDBC-ODBC Bridge allows access to almost any database, since the database's ODBC drivers are already available.

Disadvantages
1. Since the Bridge driver is not written fully in Java, Type 1 drivers are not portable.
2. A performance issue is seen as a JDBC call goes through the bridge to the ODBC driver, then to the database, and this applies even in the reverse process. They are the slowest of all driver types.
3. The client system requires the ODBC installation to use the driver.
4. Not good for the Web.

Type 2 JDBC Driver


Native-API/partly Java driver. The distinctive characteristic of Type 2 JDBC drivers is that they convert JDBC calls into database-specific calls, i.e. the driver is specific to a particular database. Some distinctive characteristics of Type 2 JDBC drivers are shown below. Example: Oracle will have an Oracle native API.

Type 2: Native api/ Partly Java Driver

Advantage
Type 2 drivers typically offer better performance than the JDBC-ODBC Bridge, because the layers of communication (tiers) are fewer than for Type 1 and the driver uses a native API which is database specific.

Disadvantage
1. The native API must be installed on the client system, hence Type 2 drivers cannot be used for the Internet.
2. Like Type 1 drivers, it is not written in the Java language, which creates a portability issue.
3. If we change the database we have to change the native API, as it is specific to a database.
4. Mostly obsolete now.
5. Usually not thread safe.

Type 3 JDBC Driver


All Java/Net-protocol driver. With a Type 3 driver, database requests are passed through the network to a middle-tier server. The middle tier then translates the request to the database. The middle-tier server can in turn use Type 1, Type 2 or Type 4 drivers.

Type 3: All Java/ Net-Protocol Driver

Advantage
1. This driver is server-based, so there is no need for any vendor database library to be present on client machines.
2. This driver is fully written in Java and hence portable. It is suitable for the web.
3. There are many opportunities to optimize portability, performance, and scalability.
4. The net protocol can be designed to make the client JDBC driver very small and fast to load.
5. The Type 3 driver typically provides support for features such as caching (connections, query results, and so on), load balancing, and advanced system administration such as logging and auditing.
6. This driver is very flexible and allows access to multiple databases using one driver.
7. They are the most efficient amongst all driver types.

Disadvantage
It requires another server application to install and maintain. Traversing the recordset may take longer, since the data comes through the backend server.

Type 4 JDBC Driver


Native-protocol/all-Java driver. The Type 4 driver uses Java networking libraries to communicate directly with the database server.

Type 4: Native-protocol/all-Java driver

Advantage
1. The major benefit of using Type 4 JDBC drivers is that they are completely written in Java to achieve platform independence and eliminate deployment administration issues. They are most suitable for the web.
2. The number of translation layers is very small, i.e. Type 4 JDBC drivers do not have to translate database requests to ODBC or a native connectivity interface or pass the request on to another server, so performance is typically quite good.
3. You don't need to install special software on the client or server. Further, these drivers can be downloaded dynamically.

Disadvantage
With type 4 drivers, the user needs a different driver for each database.
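A minimal connection sketch through a pure-Java (Type 4) driver; the JDBC URL below assumes a MySQL driver on the classpath, and the database name, user and password are placeholders:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class Type4Demo {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/testdb";    // vendor-specific Type 4 URL (assumed)
        try (Connection con = DriverManager.getConnection(url, "user", "password");
             Statement stmt = con.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println(rs.getInt(1));             // the driver talks to the database directly over the network
            }
        }
    }
}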

2.

A Statement object is used to send SQL statements to a database. There are actually three kinds of Statement objects, all of which act as containers for executing SQL statements on a given connection: Statement, PreparedStatement, which inherits from Statement, and CallableStatement, which inherits from PreparedStatement. They are specialized for sending particular types of SQL statements: a Statement object is used to execute a simple SQL statement with no parameters, a PreparedStatement object is used to execute a precompiled SQL statement with or without IN parameters, and a CallableStatement object is used to execute a call to a database stored procedure.

The Statement interface provides basic methods for executing statements and retrieving results. The PreparedStatement interface adds methods for dealing with IN parameters; the CallableStatement interface adds methods for dealing with OUT parameters. In the JDBC 2.0 core API, the ResultSet interface has a set of new updater methods (updateInt, updateBoolean, updateString, and so on) and other new related methods that make it possible to update table column values programmatically. This new API also adds methods to the Statement interface

(and the PreparedStatement and CallableStatement interfaces) so that update statements may be executed as a batch rather than singly.

5.1.1 Creating Statement Objects

Once a connection to a particular database is established, that connection can be used to send SQL statements. A Statement object is created with the Connection method createStatement, as in the following code fragment:
Connection con = DriverManager.getConnection(url, "sunny", "");
Statement stmt = con.createStatement();

The SQL statement that will be sent to the database is supplied as the argument to one of the execute methods on a Statement object. This is demonstrated in the following example, which uses the method executeQuery:
ResultSet rs = stmt.executeQuery("SELECT a, b, c FROM Table2");

The variable rs references a result set that cannot be updated and in which the cursor can move only forward, which is the default behavior for ResultSet objects. There are also versions of the method Connection.createStatement that create Statement objects that produce ResultSet objects that are scrollable, that are updatable, and that remain open after a transaction is committed or rolled back.

5.1.2 Executing Statements Using Statement Objects

The Statement interface provides three different methods for executing SQL statements: executeQuery, executeUpdate, and execute. The correct method to use is determined by what the SQL statement produces. The method executeQuery is designed for statements that produce a single result set, such as SELECT statements. The method executeUpdate is used to execute INSERT, UPDATE, or DELETE statements and also SQL DDL (Data Definition Language) statements like CREATE TABLE, DROP TABLE, and ALTER TABLE. The effect of an INSERT, UPDATE, or DELETE statement is a modification of one or more columns in zero or more rows in a table. The return value of executeUpdate is an integer (referred to as the update count) that indicates the number of rows that were affected. For statements such as CREATE TABLE or DROP TABLE, which do not operate on rows, the return value of executeUpdate is always zero.

The method execute is used to execute statements that return more than one result set, more than one update count, or a combination of the two. Because it is an advanced feature that the majority of programmers will never use, it is explained in its own section later in this overview. The Statement methods executeQuery and executeUpdate close the calling Statement object's current result set if there is one open. This means that any processing of the current ResultSet object needs to be completed before a Statement object is re-executed. It should be noted that the PreparedStatement interface, which inherits all of the methods in the Statement interface, has its own versions of the methods executeQuery, executeUpdate and execute. Statement objects do not themselves contain an SQL statement; therefore, one must be provided as the argument to the Statement.execute methods. PreparedStatement objects do not supply an SQL statement as a parameter to these methods because they already contain a precompiled SQL statement. CallableStatement objects, which call one of the DBMS's stored procedures, inherit the PreparedStatement forms of these methods. Supplying an SQL statement to the PreparedStatement or CallableStatement versions of the methods executeQuery, executeUpdate, or execute will cause an SQLException to be thrown.

5.1.3 Statement Completion

When a connection is in auto-commit mode, the statements being executed within it are committed or rolled back when they are completed. A statement is considered complete when it has been executed and all its results have been returned. For the method executeQuery, which returns one result set, the statement is completed when all the rows of the ResultSet object have been retrieved. For the method executeUpdate, a statement is completed when it is executed. In the rare cases where the method execute is called, however, a statement is not complete until all of the result sets or update counts it generated have been retrieved. Some DBMSs treat each statement in a stored procedure as a separate statement; others treat the entire procedure as one compound statement. This difference becomes important when auto-commit is enabled because it affects when the method commit is called. In the first case, each statement is individually committed; in the second, all are committed together.

5.1.4 Retrieving Automatically Generated Keys

Many DBMSs automatically generate a unique key field when a new row is inserted into a table. Methods and constants added in the JDBC 3.0 API make it possible to retrieve these keys, which is a two-step process. First the driver is alerted that it should make the keys available for retrieval. The second step is to access the generated keys by calling the Statement method getGeneratedKeys. The rest of this section explains these two steps more fully.
1. Step One - Tell the driver that it should make automatically generated

keys available for retrieval. This is done when an SQL statement is sent to the DBMS, which for Statement objects is when the statement is executed. Three new versions of the method executeUpdate and three new versions of the method execute signal the driver about making automatically generated keys available. These six new methods take two parameters, the first being in all cases an SQL INSERT statement. The second parameter is either a constant indicating whether to make all generated keys retrievable (Statement.RETURN_GENERATED_KEYS or Statement.NO_GENERATED_KEYS) or an array indicating which specific key columns should be made retrievable. The array elements are either the indexes of the columns to be returned or the names of the columns to be returned. Note that although it is possible to use the method execute for executing a DML (Data Manipulation Language) statement, this method is generally reserved for executing CallableStatement objects that produce multiple return values. For a PreparedStatement object, the SQL statement is sent to the DBMS to be precompiled when the PreparedStatement object is created with one of the Connection.prepareStatement methods. Thus, the driver is notified about making automatically generated keys retrievable via these methods. See the chapter on PreparedStatement objects (page 96) for examples. 2. Step Two - After the driver has been notified about making automatically generated keys available for retrieval, the keys can be retrieved by calling the Statement method getGeneratedKeys. This method returns a ResultSet object, with each row being a generated key. If there are no automatically generated keys, the ResultSet object will be empty.

The following code fragment creates a Statement object and signals the driver that it should be able to return any keys that are automatically generated as a result of executing the statement. The example then retrieves the keys that were generated and prints them out. If there are no generated keys, the printout says that there are none.
String sql = "INSERT INTO AUTHORS (LAST, FIRST, HOME) VALUES " + "'PARKER', 'DOROTHY', 'USA', keyColumn"; int rows = stmt.executeUpdate(sql, Statement.RETURN_GENERATED_KEYS); ResultSet rs = stmt.getGeneratedKeys(); if (rs.next()) { ResultSetMetaData rsmd = rs.getMetaData(); int colCount = rsmd.getColumnCount(); do { for (int i = 1; i <= colCount; i++) { String key = rs.getString(i); System.out.println("key " + i + "is " + key); } } while (rs.next();) } else { System.out.println("There are no generated keys."); }

Instead of telling the driver to make all automatically-generated keys available, it is possible to tell the driver to make particular columns retrievable. The following code fragment uses an array of column indexes (in this case, an array with one element) to indicate which columns with an automatically-generated key should be made available for retrieval.
String sql = "INSERT INTO AUTHORS (LAST, FIRST, HOME) VALUES " + "'PARKER', 'DOROTHY', 'USA', keyColumn"; int [] indexes = {4}; int rows = stmt.executeUpdate(sql, indexes);

The following code fragment shows a third alternative-supplying an array of column names to indicate which ResultSet columns to make available. In this case, the driver is told to make the automatically-generated key in the column AUTHOR_ID retrievable.
String sql = "INSERT INTO AUTHORS (LAST, FIRST, HOME) VALUES " + "'PARKER', 'DOROTHY', 'USA', keyColumn"; String [] keyColumn = {"AUTHOR_ID"}; int rows = stmt.executeUpdate(sql, keyColumn);

5.1.5 Closing Statement Objects

Statement objects will be closed automatically by the Java garbage collector. Nevertheless, it is recommended as good programming practice that they be closed explicitly when they are no longer needed. This frees DBMS resources immediately and helps avoid potential memory problems.

5.1.6 SQL Escape Syntax in Statements

Statement objects may contain SQL statements that use SQL escape syntax. Escape syntax signals the driver that the code within it should be handled differently. When escape processing is enabled (by calling Statement.setEscapeProcessing(true) or RowSet.setEscapeProcessing(true)), the driver will scan for any escape syntax and translate it into code that the particular database understands. This makes escape syntax DBMS-independent and allows a programmer to use features that might not otherwise be available.

An escape clause is demarcated by curly braces and a key word, which indicates the kind of escape clause.
{keyword . . . parameters . . . }

The following keywords are used to identify escape clauses:

escape for LIKE escape characters The percent sign (%) and underscore (_) characters work like wild cards in SQL LIKE clauses (% matches zero or more characters, and _ matches exactly one character). In order to interpret them literally, they can be preceded by a backslash (\), which is a special escape character in strings. One can specify which character to use as the escape character by including the following syntax at the end of a query:
{escape 'escape-character'}

For example, the following query, using the backslash character as an escape character, finds identifier names that begin with an underbar.
stmt.executeQuery("SELECT name FROM Identifiers WHERE Id LIKE '\_%' {escape '\'}");

fn for scalar functions Almost all DBMSs have numeric, string, time, date, system, and conversion functions on scalar values. One of these functions can be used by putting it in escape syntax with the keyword fn followed by the name of the desired function and its arguments. For example, the following code calls the function concat with two arguments to be concatenated:
{fn concat("Hot", "Java")};

The name of the current database user can be obtained with the following syntax:
{fn user()};

Scalar functions may be supported by different DBMSs with slightly different syntax, and they may not be supported by all drivers. Various DatabaseMetaData methods will list the functions that are supported. For example, the method getNumericFunctions returns a comma-separated list of the Open Group CLI names of numeric functions, the method getStringFunctions returns string functions, and so on. The driver will either map the escaped function call into the appropriate syntax or implement the function directly itself. However, a driver is required to implement only those scalar functions that the DBMS supports.

d, t, and ts for date and time literals DBMSs differ in the syntax they use for date, time, and timestamp literals. The JDBC API supports ISO standard format for the syntax of these literals, using an escape clause that the driver must translate to the DBMS representation. For example, a date is specified in a JDBC SQL statement with the following syntax:
{d 'yyyy-mm-dd'}

In this syntax, yyyy is the year, mm is the month, and dd is the day. The driver will replace the escape clause with the equivalent DBMS-specific representation. For example, the driver might replace {d '1999-02-28'} with '28-FEB-99' if that is the appropriate format for the underlying database.

There are analogous escape clauses for TIME and TIMESTAMP:


{t 'hh:mm:ss'} {ts 'yyyy-mm-dd hh:mm:ss.f . . .'}

The fractional seconds (.f . . .) portion of the TIMESTAMP can be omitted.

call or ? = call for stored procedures If a database supports stored procedures, they can be invoked from JDBC with the syntax shown below. Note that the square brackets ([ ]) indicate that what is between them is optional, and they are not part of the syntax.
{call procedure_name[(?, ?, . . .)]}

or, where a procedure returns a result parameter:


{? = call procedure_name[(?, ?, . . .)]}

Input arguments may be either literals or parameters. See the section "Numbering of Parameters" on page 103 for more information. One can call the method DatabaseMetaData.supportsStoredProcedures to see if the database supports stored procedures.

oj for outer joins The syntax for an outer join is:


{oj outer-join}

In this syntax, outer-join has the form


table {LEFT|RIGHT|FULL} OUTER JOIN {table | outer-join} ON search-condition

(Note that the curly braces ({}) in the preceding line indicate that one of the items between them must be used; they are not part of the syntax.) The following SELECT statement uses the escape syntax for an outer join:

Statement stmt = con.createStatement();
ResultSet rs = stmt.executeQuery("SELECT * FROM {oj TABLE1 " +
        "LEFT OUTER JOIN TABLE2 ON DEPT_NO = 003420930}");

Outer joins are an advanced feature and are not supported by all DBMSs; consult the SQL grammar for an explanation of them. JDBC provides three DatabaseMetaData methods for determining the kinds of outer joins a driver supports: supportsOuterJoins, supportsFullOuterJoins, and supportsLimitedOuterJoins. The method Statement.setEscapeProcessing turns escape processing on or off, with the default being on. A programmer might turn it off to cut down on processing time when performance is paramount, but it would normally be turned on. It should be noted that the method setEscapeProcessing does not work for PreparedStatement objects because the statement may have already been sent to the database before it can be called. See page 89, the overview of the PreparedStatement interface, regarding precompilation.

5.1.7 Sending Batch Updates

A Statement object may submit multiple update commands together as a single unit, or batch, to the underlying DBMS. This ability to submit multiple updates as a batch rather than having to send each update individually can improve performance greatly in some situations. The following code fragment demonstrates how to send a batch update to a database. In this example, a new row is inserted into three different tables in order to add a new employee to a company database. The code fragment starts by turning off the Connection object con's auto-commit mode in order to allow multiple statements to be sent together as a transaction. After creating the Statement object stmt, it adds three SQL INSERT INTO commands to the batch with the method addBatch and then sends the batch to the database with the method executeBatch. The code looks like this:
Statement stmt = con.createStatement();
con.setAutoCommit(false);
stmt.addBatch("INSERT INTO employees VALUES (1000, 'Joe Jones')");
stmt.addBatch("INSERT INTO departments VALUES (260, 'Shoe')");
stmt.addBatch("INSERT INTO emp_dept VALUES (1000, '260')");

int [] updateCounts = stmt.executeBatch();

Because the connection's auto-commit mode is disabled, the application is free to decide whether or not to commit the transaction if an error occurs or if some of the commands in the batch fail to execute. For example, the application may not commit the changes if any of the insertions fail, thereby avoiding the situation where employee information exists in some tables but not in others. In the Java 2 platform, a Statement object is created with an associated list of commands. This list is empty to begin with; commands are added to the list with the Statement method addBatch. The commands added to the list must all return only a simple update count. If, for example, one of the commands is a query (a SELECT statement), which will return a result set, the method executeBatch will throw a BatchUpdateException. A Statement object's list of commands can be emptied by calling the method clearBatch on it. In the preceding example, the method executeBatch submits stmt's list of commands to the underlying DBMS for execution. The DBMS executes each command in the order in which it was added to the batch and returns an update count for each command in the batch, also in order. If one of the commands does not return an update count, its return value cannot be added to the array of update counts that the method executeBatch returns. In this case, the method executeBatch will throw a BatchUpdateException. This exception keeps track of the update counts for the commands that executed successfully before the error occurred, and the order of these update counts likewise follows the order of the commands in the batch. In the following code fragment, an application uses a try/catch block, and if a BatchUpdateException is thrown, it retrieves the exception's array of update counts to discover which commands in a batch update executed successfully before the BatchUpdateException object was thrown.
try {
    stmt.addBatch("INSERT INTO employees VALUES (" + "1000, 'Joe Jones')");
    stmt.addBatch("INSERT INTO departments VALUES (260, 'Shoe')");
    stmt.addBatch("INSERT INTO emp_dept VALUES (1000, '260')");
    int [] updateCounts = stmt.executeBatch();
} catch (BatchUpdateException b) {
    System.err.println("Update counts of successful commands: ");
    int [] updateCounts = b.getUpdateCounts();
    for (int i = 0; i < updateCounts.length; i++) {
        System.err.print(updateCounts[i] + " ");
    }
    System.err.println("");
}

If a printout was generated and looked similar to the following, the first two commands succeeded and the third one failed.
Update counts of successful commands: 1 1

JDBC drivers are not required to support batch updates, so a particular driver might not implement the methods addBatch, clearBatch, and executeBatch. Normally a programmer knows whether a driver that he/she is working with supports batch updates, but if an application wants to check, it can call the DatabaseMetaData method supportsBatchUpdates to find out. In the following code fragment, a batch update is used only if the driver supports batch updates; otherwise, each update is sent as a separate statement. The connection's auto-commit mode is disabled so that in either case, all the updates are included in one transaction.
con.setAutoCommit(false);
if (dbmd.supportsBatchUpdates()) {
    stmt.addBatch("INSERT INTO . . .");
    stmt.addBatch("DELETE . . .");
    stmt.addBatch("INSERT INTO . . .");
    . . .
    stmt.executeBatch();
} else {
    System.err.print("Driver does not support batch updates; ");
    System.err.println("sending updates in separate statements.");
    stmt.executeUpdate("INSERT INTO . . .");
    stmt.executeUpdate("DELETE . . .");
    stmt.executeUpdate("INSERT INTO . . .");
    . . .
}
con.commit();

If one of the commands in a batch update fails, the method executeBatch will throw a BatchUpdateException. The BatchUpdateException method getUpdateCounts can be called to get an array of the update counts that were returned. In the previous examples, as soon as a command in a batch failed, the driver stopped processing commands, so the array contained update counts for only those commands that were executed before the first failure. A driver may be implemented so that it continues to process subsequent commands instead of stopping with a failure. In this case, the array of update counts returned by the method getUpdateCounts will contain a value for every command in the batch. The value for a command that failed is Statement.EXECUTE_FAILED.

5.1.8 Giving Performance Hints

The Statement interface contains two methods for giving performance hints to the driver: setFetchDirection and setFetchSize. These methods are also available in the ResultSet interface and do exactly the same thing. The difference is that the Statement methods set the default for all of the ResultSet objects produced by a particular Statement object, whereas the ResultSet methods can be called any time during the life of the ResultSet object to change the fetch direction or the fetch size for that particular ResultSet object only. See the section "Providing Performance Hints" on page 72 for a full discussion of these methods. Both the Statement and ResultSet interfaces have the corresponding get methods: getFetchDirection and getFetchSize. If Statement.getFetchDirection is called before a fetch direction has been set, the value returned is implementation-specific, that is, it is up to the driver. The same is true for the method Statement.getFetchSize.

5.1.9 Executing Special Kinds of Statements

The execute method should be used only when it is possible that a statement may return more than one ResultSet object, more than one update count, or a combination of ResultSet objects and update counts. These multiple possibilities for results, though rare, are possible when one is executing certain stored procedures or dynamically executing an unknown SQL string (that is, unknown to the application programmer at compile time). For example, a user might execute a stored procedure (using a CallableStatement object), and that stored procedure could perform an update, then a select, then an update, then a select, and so on. In more typical situations, someone using a stored procedure will already know what it returns.
3. ResultSetMetaData

You can interrogate JDBC for detailed information about a query's result set using a ResultSetMetaData object. ResultSetMetaData is an interface that is used to find information about the ResultSet returned from an executeQuery call. It contains information about the number of columns, the types of data they contain, the names of the columns, and so on. Two of the most common methods in ResultSetMetaData are getColumnName and getColumnTypeName. These retrieve the name of a column and the name of its associated data type, respectively, each in the form of a String.
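For instance, a small sketch like this one (the class and method names are invented for illustration) prints the name and type of every column in a ResultSet:

import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;

public class MetaDataDemo {
    // Prints the name and SQL type name of each column in the result set.
    static void describe(ResultSet rs) throws SQLException {
        ResultSetMetaData rsmd = rs.getMetaData();
        int columnCount = rsmd.getColumnCount();
        for (int i = 1; i <= columnCount; i++) { // JDBC columns are numbered from 1
            System.out.println(rsmd.getColumnName(i) + " : " + rsmd.getColumnTypeName(i));
        }
    }
}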

DatabaseMetaData

DatabaseMetaData is an interface that can be used to fetch information about the database you are using. Use it to answer questions such as: What catalogs are in the database? What brand of database am I working with? What user name am I connected as?
ex: username = dbmd.getUserName();
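A short illustrative sketch, assuming con is an open Connection (the class and method names are invented for the example):

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.SQLException;

public class DbInfoDemo {
    // Prints a few facts about the database behind the given connection.
    static void printInfo(Connection con) throws SQLException {
        DatabaseMetaData dbmd = con.getMetaData();
        System.out.println("User: " + dbmd.getUserName());
        System.out.println("Database: " + dbmd.getDatabaseProductName()
                + " " + dbmd.getDatabaseProductVersion());
        System.out.println("Driver: " + dbmd.getDriverName());
        System.out.println("Supports batch updates: " + dbmd.supportsBatchUpdates());
    }
}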

Unit - 6

1.

Package javax.servlet

The javax.servlet package contains a number of classes and interfaces that describe and define the contracts between a servlet class and the runtime environment provided for an instance of such a class by a conforming servlet container.

Interface Summary

Filter - A filter is an object that performs filtering tasks on either the request to a resource (a servlet or static content), or on the response from a resource, or both.
FilterChain - A FilterChain is an object provided by the servlet container to the developer giving a view into the invocation chain of a filtered request for a resource.
FilterConfig - A filter configuration object used by a servlet container to pass information to a filter during initialization.
RequestDispatcher - Defines an object that receives requests from the client and sends them to any resource (such as a servlet, HTML file, or JSP file) on the server.
Servlet - Defines methods that all servlets must implement.
ServletConfig - A servlet configuration object used by a servlet container to pass information to a servlet during initialization.
ServletContext - Defines a set of methods that a servlet uses to communicate with its servlet container, for example, to get the MIME type of a file, dispatch requests, or write to a log file.
ServletContextAttributeListener - Implementations of this interface receive notifications of changes to the attribute list on the servlet context of a web application.
ServletContextListener - Implementations of this interface receive notifications about changes to the servlet context of the web application they are part of.
ServletRequest - Defines an object to provide client request information to a servlet.
ServletRequestAttributeListener - A ServletRequestAttributeListener can be implemented by the developer interested in being notified of request attribute changes.
ServletRequestListener - A ServletRequestListener can be implemented by the developer interested in being notified of requests coming in and out of scope in a web component.
ServletResponse - Defines an object to assist a servlet in sending a response to the client.
SingleThreadModel - Deprecated. As of Java Servlet API 2.4, with no direct replacement.

Class Summary

GenericServlet - Defines a generic, protocol-independent servlet.
ServletContextAttributeEvent - This is the event class for notifications about changes to the attributes of the servlet context of a web application.
ServletContextEvent - This is the event class for notifications about changes to the servlet context of a web application.
ServletInputStream - Provides an input stream for reading binary data from a client request, including an efficient readLine method for reading data one line at a time.
ServletOutputStream - Provides an output stream for sending binary data to the client.
ServletRequestAttributeEvent - This is the event class for notifications of changes to the attributes of the servlet request in an application.
ServletRequestEvent - Events of this kind indicate lifecycle events for a ServletRequest.
ServletRequestWrapper - Provides a convenient implementation of the ServletRequest interface that can be subclassed by developers wishing to adapt the request to a Servlet.
ServletResponseWrapper - Provides a convenient implementation of the ServletResponse interface that can be subclassed by developers wishing to adapt the response from a Servlet.

Exception Summary

ServletException - Defines a general exception a servlet can throw when it encounters difficulty.
UnavailableException - Defines an exception that a servlet or filter throws to indicate that it is permanently or temporarily unavailable.
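To make the Filter contract concrete, here is a minimal sketch of a logging filter; the class name and log message are invented for illustration, and such a filter would still need to be declared in the web application's deployment descriptor:

import java.io.IOException;
import javax.servlet.*;

public class LoggingFilter implements Filter {
    private FilterConfig config;

    public void init(FilterConfig config) throws ServletException {
        this.config = config; // configuration passed by the container at startup
    }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        config.getServletContext().log("Request received by " + config.getFilterName());
        chain.doFilter(request, response); // pass the request on to the next filter or the servlet
    }

    public void destroy() {
        // release any resources held by the filter
    }
}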

2.

Architecture Diagram:

The following figure depicts a typical servlet life-cycle scenario.

First the HTTP requests coming to the server are delegated to the servlet container. The servlet container loads the servlet before invoking the service() method. Then the servlet container handles multiple requests by spawning multiple threads, each thread executing the service() method of a single instance of the servlet.

A servlet life cycle can be defined as the entire process from its creation until its destruction. A servlet goes through the following phases:

The servlet is initialized by calling the init() method.
The servlet's service() method is called to process a client's request.
The servlet is terminated by calling the destroy() method.
Finally, the servlet is garbage collected by the garbage collector of the JVM.

Now let us discuss the life cycle methods in detail.

The init() method :


The init method is designed to be called only once. It is called when the servlet is first created, and not called again for each user request. So, it is used for one-time initializations, just as with the init method of applets.

The servlet is normally created when a user first invokes a URL corresponding to the servlet, but you can also specify that the servlet be loaded when the server is first started.

When a user invokes a servlet, a single instance of each servlet gets created, with each user request resulting in a new thread that is handed off to doGet or doPost as appropriate. The init() method simply creates or loads some data that will be used throughout the life of the servlet.

The init method definition looks like this:

public void init() throws ServletException { // Initialization code... }

The service() method :


The service() method is the main method that performs the actual task. The servlet container (i.e. the web server) calls the service() method to handle requests coming from the client (browser) and to write the formatted response back to the client.

Each time the server receives a request for a servlet, the server spawns a new thread and calls service. The service() method checks the HTTP request type (GET, POST, PUT, DELETE, etc.) and calls doGet, doPost, doPut, doDelete, etc. methods as appropriate.

Here is the signature of this method:

public void service(ServletRequest request, ServletResponse response) throws ServletException, IOException{ }

The service() method is called by the container, and the service method invokes the doGet, doPost, doPut, doDelete, etc. methods as appropriate. So you normally do not override the service() method; instead you override either doGet() or doPost() depending on what type of request you receive from the client.

The doGet() and doPost() methods are the most frequently used methods within each service request. Here are the signatures of these two methods.

The doGet() Method


A GET request results from a normal request for a URL or from an HTML form that has no METHOD specified, and it should be handled by the doGet() method.

public void doGet(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { // Servlet code }

The doPost() Method


A POST request results from an HTML form that specifically lists POST as the METHOD, and it should be handled by the doPost() method.

public void doPost(HttpServletRequest request, HttpServletResponse response) throws ServletException, IOException { // Servlet code }

The destroy() method :


The destroy() method is called only once at the end of the life cycle of a servlet. This method gives your servlet a chance to close database connections, halt background threads, write cookie lists or hit counts to disk, and perform other such cleanup activities.

After the destroy() method is called, the servlet object is marked for garbage collection. The destroy method definition looks like this:

public void destroy() {
    // Finalization code...
}
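Putting the life cycle methods together, a minimal servlet might look like the sketch below; the class name, message text, and HTML output are illustrative only:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class LifeCycleServlet extends HttpServlet {
    private String message;

    public void init() throws ServletException {
        // one-time initialization, called once when the servlet is loaded
        message = "Hello from init()";
    }

    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // invoked by service() for every GET request, each on its own thread
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<h1>" + message + "</h1>");
    }

    public void destroy() {
        // clean up resources before the servlet is taken out of service
        message = null;
    }
}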

Session management

Web browsers and e-commerce sites use HTTP to communicate. Since HTTP is a stateless protocol (meaning that each command is executed independently, without any knowledge of the commands that came before it), there must be a way to manage sessions between the browser side and the server side. WebSphere Commerce supports two types of session management: cookie-based and URL rewriting.

Cookie-based session management

When cookie-based session management is used, a message (cookie) containing the user's information is sent to the browser by the Web server. This cookie is sent back to the server when the user tries to access certain pages. By sending back the cookie, the server is able to identify the user and retrieve the user's session from the session database, thus maintaining the user's session. A cookie-based session ends when the user logs off or closes the browser.

Cookie-based session management is secure and has performance benefits. It is secure because it uses an identification tag that only flows over SSL. It offers significant performance benefits because the WebSphere Commerce caching mechanism only supports cookie-based sessions, not URL rewriting. Cookie-based session management is recommended for shopper sessions.

If you are not using URL rewriting and you want to ensure that users have cookies enabled on their browsers, check Cookie acceptance test on the Session Management page of Configuration Manager. This informs shoppers that if their browser does not support cookies, or if they have turned off cookies, they need a browser that supports cookies to browse the WebSphere Commerce site.

For security reasons, cookie-based session management uses two types of cookies:

Non-secure session cookie

Used to manage session data. Contains the session ID, negotiated language, current store, and the shopper's preferred currency when the cookie is constructed. This cookie can flow between the browser and server over either an SSL or a non-SSL connection. There are two types of non-secure session cookies:

A WebSphere Application Server session cookie is based on the servlet HTTP session standard. WebSphere Application Server cookies persist to memory or to the database in a multinode deployment. For more information, search for "session management" in the WebSphere Application Server Information Center.

A WebSphere Commerce session cookie is internal to WebSphere Commerce and does not persist to the database.

To select which type of cookie to use, select WCS or WAS for the Cookie session manager parameter on the Session Management page of Configuration Manager.

Secure authentication cookie

Used to manage authentication data. An authentication cookie flows over SSL and is time-stamped for maximum security. This is the cookie used to authenticate the user whenever a sensitive command is executed, for example, the DoPaymentCmd, which asks for a user's credit card number. There is minimal risk that this cookie could be stolen and used by an unauthorized user. Authentication code cookies are always generated by WebSphere Commerce whenever cookie-based session management is in use. Both the session and authentication code cookies are required to view secure pages. For cookie errors, the CookieErrorView is called under the following circumstances:

The user has logged in from another location with the same Logon ID.
The cookie became corrupted, was tampered with, or both.
Cookie acceptance is set to "true" and the user's browser does not support cookies.

URL rewriting

With URL rewriting, all links that are returned to the browser or that get redirected have the session ID appended to them. When the user clicks these links, the rewritten form of the URL is sent to the server as part of the client's request. The servlet engine recognizes the session ID in the URL and saves it for obtaining the proper object for this user.

To use URL rewriting, HTML files (files with .html or .htm extensions) cannot be used for links; JSP files must be used for display purposes. A session with URL rewriting expires when the shopper logs off.

Note: WebSphere Commerce dynamic caching and URL rewriting cannot interoperate. With URL rewriting turned on, you need to disable WebSphere Commerce dynamic caching.

The administrator can choose to support either cookie-based session management only, or both cookie-based and URL rewriting session management. If WebSphere Commerce supports only cookie-based session management, shoppers' browsers must be able to accept cookies. If both cookie-based and URL rewriting are selected, WebSphere Commerce first attempts to use cookies to manage sessions; if the shopper's browser is set to not accept cookies, then URL rewriting is used.
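In plain servlet API terms, the container does most of this work. The sketch below (class name, attribute name, and link target are assumptions made for the example) obtains the session, which is backed by a cookie when the browser accepts one, and wraps links in encodeURL so that URL rewriting is applied automatically when cookies are unavailable:

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

public class SessionDemoServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        // Creates a new session (and session cookie) if one does not exist yet.
        HttpSession session = request.getSession(true);

        Integer visits = (Integer) session.getAttribute("visits");
        visits = (visits == null) ? Integer.valueOf(1) : Integer.valueOf(visits.intValue() + 1);
        session.setAttribute("visits", visits);

        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<p>Visits this session: " + visits + "</p>");
        // encodeURL appends the session ID only when the client does not accept cookies.
        out.println("<a href=\"" + response.encodeURL("sessionDemo") + "\">reload</a>");
    }
}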

4.

Interface ServletContext

public interface ServletContext

Defines a set of methods that a servlet uses to communicate with its servlet container, for example, to get the MIME type of a file, dispatch requests, or write to a log file.

There is one context per "web application" per Java Virtual Machine. (A "web application" is a collection of servlets and content installed under a specific subset of the server's URL namespace, such as /catalog, and possibly installed via a .war file.) In the case of a web application marked "distributed" in its deployment descriptor, there will be one context instance for each virtual machine. In this situation, the context cannot be used as a location to share global information (because the information won't be truly global); use an external resource like a database instead.

The ServletContext object is contained within the ServletConfig object, which the Web server provides to the servlet when the servlet is initialized.
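For instance, a servlet might use its context roughly as follows; the class name, init parameter name, and file path are placeholders for illustration:

import java.io.IOException;
import javax.servlet.ServletContext;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class ContextDemoServlet extends HttpServlet {
    public void doGet(HttpServletRequest request, HttpServletResponse response)
            throws ServletException, IOException {
        ServletContext context = getServletContext(); // from the ServletConfig given at init time
        context.log("ContextDemoServlet invoked");                  // write to the container's log
        String mime = context.getMimeType("/catalog/index.html");   // e.g. "text/html"
        String adminEmail = context.getInitParameter("adminEmail"); // context-wide init parameter
        response.setContentType("text/plain");
        response.getWriter().println("MIME type: " + mime + ", admin: " + adminEmail);
    }
}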

HOW-TO: Handling cookies using the java.net.* API

Author: Ian Brown spam@hccp.org

This is a brief overview of how to retrieve cookies from HTTP responses and how to return cookies in HTTP requests to the appropriate server using the java.net.* APIs.

What are cookies?
Retrieving cookies from a response
Setting a cookie value in a request
Setting multiple cookie values in a request
Sample code

What are cookies?

Cookies are small strings of data of the form name=value. These are delivered to the client via the header variables in an HTTP response. Upon receiving a cookie from a web server, the client application should store that cookie, returning it to the server in subsequent requests. For greater detail, see the Netscape specification: http://wp.netscape.com/newsref/std/cookie_spec.html

Retrieving cookies from a response:
1. Open a java.net.URLConnection to the server:

URL myUrl = new URL("http://www.hccp.org/cookieTest.jsp");
URLConnection urlConn = myUrl.openConnection();
urlConn.connect();

2. Loop through the response headers looking for cookies: Since a server may set multiple cookies in a single response, we will need to loop through the response headers, looking for all headers named "Set-Cookie".

String headerName = null;
for (int i = 1; (headerName = urlConn.getHeaderFieldKey(i)) != null; i++) {
    if (headerName.equals("Set-Cookie")) {
        String cookie = urlConn.getHeaderField(i);
        ...
    }
}

3. Extract the cookie name and value from the cookie string: The string returned by the getHeaderField(int index) method is a series of name=value pairs separated by semi-colons (;). The first name/value pairing is the actual data string we are interested in (i.e. "sessionId=0949eeee22222rtg" or "userId=igbrown"); the subsequent name/value pairings are meta-information that we would use to manage the storage of the cookie (when it expires, etc.).

cookie = cookie.substring(0, cookie.indexOf(";"));
String cookieName = cookie.substring(0, cookie.indexOf("="));
String cookieValue = cookie.substring(cookie.indexOf("=") + 1, cookie.length());

This is basically it. We now have the cookie name (cookieName) and the cookie value (cookieValue).

Setting a cookie value in a request:


1. Create a java.net.URLConnection to the server (values must be set prior to calling the connect method):

URL myUrl = new URL("http://www.hccp.org/cookieTest.jsp");
URLConnection urlConn = myUrl.openConnection();

2. Create a cookie string:

String myCookie = "userId=igbrown";

3. Add the cookie to the request: Using the setRequestProperty(String name, String value) method, add a property named "Cookie", passing the cookie string created in the previous step as the property value.

urlConn.setRequestProperty("Cookie", myCookie);

4. Send the cookie to the server: To send the cookie, simply call connect() on the URLConnection for which we have added the cookie property:

urlConn.connect();

Setting multiple cookie values in a request:


1. Perform the same steps as in the previous section (Setting a cookie value in a request), replacing the single-valued cookie string with something like the following:

String myCookies = "userId=igbrown; sessionId=SID77689211949; isAuthenticated=true";

This string contains three cookies (userId, sessionId, and isAuthenticated). Separate cookie name/value pairs with "; " (semicolon and whitespace).

Note that you cannot set multiple request properties using the same name, so calling the setRequestProperty("Cookie", someCookieValue) method again will just overwrite any previously set value.
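Pulling the pieces above together, a rough end-to-end sketch might look like the following; it reuses the illustrative URL and cookie handling from the earlier steps and is not guaranteed to match any particular server:

import java.net.URL;
import java.net.URLConnection;

public class CookieDemo {
    public static void main(String[] args) throws Exception {
        // First request: collect any cookies the server sets.
        URL myUrl = new URL("http://www.hccp.org/cookieTest.jsp");
        URLConnection urlConn = myUrl.openConnection();
        urlConn.connect();

        StringBuilder cookies = new StringBuilder();
        String headerName;
        for (int i = 1; (headerName = urlConn.getHeaderFieldKey(i)) != null; i++) {
            if ("Set-Cookie".equals(headerName)) {
                String cookie = urlConn.getHeaderField(i);
                int semi = cookie.indexOf(";");
                if (semi != -1) {
                    cookie = cookie.substring(0, semi); // keep only the name=value part
                }
                if (cookies.length() > 0) {
                    cookies.append("; ");
                }
                cookies.append(cookie);
            }
        }

        // Second request: send the collected cookies back to the server.
        URLConnection secondConn = myUrl.openConnection();
        secondConn.setRequestProperty("Cookie", cookies.toString());
        secondConn.connect();
        System.out.println("Sent cookies: " + cookies);
    }
}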
