CHAPTER 1
INTRODUCTION
Expertise:
Our teams combine cutting-edge technology skills with rich domain expertise. Equally important, they share a strong customer orientation: they start by listening to the customer and focus on solutions that serve current requirements while anticipating future needs.
In the proposed system, the following methods are used to improve retrieval efficiency:
• We implemented our models in a CBIR system for a specific application domain, the retrieval of coats of arms. Altogether 19 features were implemented, including a color histogram and symmetry features.
• Content-based image retrieval uses the visual contents of an image, such as color, shape, texture, and spatial layout, to represent and index the image.
CHAPTER 2
PROBLEM STATEMENT
In the existing system, the CBIR method faces a number of disadvantages in image retrieval. The main disadvantages arise in the medical field: medical image description is an important problem in content-based medical image retrieval, and a hierarchical medical image semantic feature description model is currently proposed according to the main sources of semantic features. Hence we propose a new algorithm to overcome the limitations of the existing system, in which images were first annotated with text and then searched using a text-based approach from traditional database management systems.
CHAPTER 3
Since the general assumption is that every user's need is different (Kurita and Kato 1993) and time-varying, the database cannot adopt a fixed clustering structure; the total number of classes and the class memberships are not available beforehand, since these are assumed to be user-dependent and time-varying as well. Of course, these rather extreme assumptions can be relaxed to a chosen degree in a real-world application.
A typical scenario for relevance feedback in content-based image retrieval is as follows:
Step 1. The machine presents an initial set of retrieval results for the user's query;
Step 2. The user provides a judgment on the currently displayed images as to whether, and to what degree, they are relevant or irrelevant to her/his request;
Step 3. The machine learns from the feedback, returns a refined set of results, and the process repeats until the user is satisfied.
Small sample issue. The number of training examples is small (typically fewer than 20 per round of interaction) relative to the dimension of the feature space (from dozens to hundreds, or even more), while the number of classes is large for most real-world image databases. For such small sample sizes, some existing learning machines such as support vector machines (SVMs) (Vapnik 1995) cannot give stable or meaningful results (Tieu and Viola 2000; Zhou and Huang 2001), unless more training samples can be elicited from the user (Tong and Chang 2001).
Real-time requirement. Finally, since the user is interacting with the machine in real time, the algorithm must be sufficiently fast and, where possible, avoid heavy computations over the whole dataset.
It is not the intention of this section to list all existing techniques but rather to point out the major variants and compare their merits. We would emphasize that, under the same notion of relevance feedback, different methods may have adopted different assumptions or problem settings and are thus not directly comparable.
Content-based Image Retrieval (CBIR) deals with finding images based on their content, without using any additional textual information. CBIR has been investigated for quite some time, with the main focus on system-centered methods [10]. In textual information retrieval, it was shown that relevance feedback, i.e. having a user judge retrieved documents and using these judgments to refine the search, can lead to a significant performance improvement [8]. In image retrieval (content-based, as well as using textual information), relevance feedback has been discussed in some papers, but in contrast to the text retrieval domain, where the Rocchio relevance feedback method can be considered a reasonable and well-established baseline, no standard method is defined. A survey on relevance feedback techniques for image retrieval up to 2002 was presented in [12]. Most approaches use the marked images as individual queries and combine the retrieval results. More recent approaches follow a query-instance-based approach [4] or use support vector machines to learn a two-class classifier. The approach presented here is similar to the approach presented in [4], because it also follows a nearest-neighbor search for each query image, but
instead of using only the best-matching query/database image combination, we consider all query images (positive and negative) jointly. The nearest-neighbor approach is appealing because, nowadays, most CBIR systems are based on nearest-neighbor searches on arbitrary vectorial features rather than restricting the features to sparse binary feature spaces as used in the GNU Image Finding Tool (GIFT) [6]; therefore the technique can easily be integrated into most image retrieval systems. Here, we present a technique that has both advantages: the individual queries (i.e. positive and negative images from relevance feedback) are considered independently and in a nearest-neighbor-like manner, but machine-learning techniques are applied to optimize the search criterion for the nearest neighbors.
The learning step consists of tuning the weights of the distance function used in the nearest-neighbor search. The learning of weights for nearest-neighbor searches (using the L2 norm) was investigated in [7], where a weight for each component of the vectors to be compared is determined from labeled training data. In contrast to that work, here we use the L1 norm, which is known to be better suited for histograms. Additionally, the amount of available training data here is much smaller than normally used for distance learning [7].
Relevance Feedback
In the following, we present our new method for relevance feedback and compare it to the relevance score [4] and Rocchio's method [8]. The methods presented can use negative feedback, but also work without it. The number of feedback images is not limited, so a user is free to mark as many images as relevant or non-relevant as he likes; a user who marks many images is therefore likely to obtain better results than a user who marks only a few.
In our setup, given an initial query, the user is presented with the 20 top-ranked images from the database and can choose to mark images as relevant, non-relevant, or not mark them at all. These judgments are then sent back to the retrieval system, which can use the provided information to refine the results and to provide the user with a new set of (hopefully) better results. In each iteration of relevance feedback, the system can use machine learning techniques to refine its similarity measure and thus improve the results. In Section 3, we present the new distance-learning technique, which can be used in combination with all retrieval schemes.
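The following sketch illustrates one such feedback iteration: database images are re-ranked by their weighted L1 distances to the marked positive and negative examples. The scoring rule (ratio of nearest positive to nearest negative distance) is a simplification for illustration and is not the exact relevance score of [4].

    using System;
    using System.Collections.Generic;

    // Illustrative sketch of one relevance-feedback iteration: every database
    // image is scored by its distance to the nearest positive example divided
    // by its distance to the nearest negative example, then the images are
    // re-ranked by that score (smaller = more relevant). The scoring rule is a
    // simplification for demonstration purposes.
    public static class FeedbackRanker
    {
        public static List<int> Rerank(double[][] database,
                                       List<double[]> positives,
                                       List<double[]> negatives,
                                       double[] weights)
        {
            double[] scores = new double[database.Length];
            for (int i = 0; i < database.Length; i++)
            {
                double dPos = NearestDistance(database[i], positives, weights);
                double dNeg = negatives.Count > 0
                    ? NearestDistance(database[i], negatives, weights)
                    : 1.0;
                scores[i] = dPos / (dNeg + 1e-9);
            }

            List<int> order = new List<int>();
            for (int i = 0; i < database.Length; i++)
                order.Add(i);
            order.Sort(delegate(int x, int y) { return scores[x].CompareTo(scores[y]); });
            return order;
        }

        // Weighted L1 distance to the closest example in the given list.
        private static double NearestDistance(double[] image,
                                               List<double[]> examples,
                                               double[] weights)
        {
            double best = double.MaxValue;
            foreach (double[] example in examples)
            {
                double d = 0.0;
                for (int i = 0; i < image.Length; i++)
                    d += weights[i] * Math.Abs(image[i] - example[i]);
                best = Math.Min(best, d);
            }
            return best;
        }
    }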
CHAPTER 4
SYSTEM ANALYSIS
4.1 Introduction
After analyzing the requirements of the task to be performed, the next step is to analyze the problem and understand its context. The first activity in this phase is studying the existing system, and the second is understanding the requirements and domain of the new system. Both activities are equally important, but the first serves as the basis for producing the functional specifications and, in turn, the successful design of the proposed system. Understanding the properties and requirements of a new system is difficult and requires creative thinking; understanding an existing, running system is also difficult, and an improper understanding of the present system can lead to a diversion from the solution.
Computer systems have become more complex and often (especially with the advent of Service-Oriented Architecture) link multiple traditional systems potentially supplied by different software vendors. To manage this level of complexity, a number of system development life cycle (SDLC) models have been created: waterfall, fountain, spiral, build-and-fix, rapid prototyping, and synchronize-and-stabilize.
Some agile and iterative proponents confuse the term SDLC with sequential or "more traditional" processes; however, SDLC is an umbrella term for all methodologies for the design, implementation, and release of software.
In project management, a project has both a life cycle and a "systems development life cycle," during which a number of typical activities occur. The project life cycle (PLC) encompasses all the activities of the project, while the systems development life cycle focuses on realizing the product.
Requirements
Software Requirements:
• Operating system : Windows XP Professional
• Front End : Microsoft Visual Studio .Net 2008
• Coding Language : C#.
CHAPTER 5
FEASIBILITY REPORT
Is there sufficient support for the project from management and from users? If the current system is well liked and used to the extent that people cannot see reasons for change, there may be resistance.
Are the current business methods acceptable to the users? If they are not, users may welcome a change that brings about a more operational and useful system.
Have the users been involved in the planning and development of the project? Early involvement reduces the chances of resistance to the system and, in general, increases the likelihood of a successful project.
Since the proposed system helps reduce the hardships encountered in the existing manual system, the new system was considered operationally feasible.
Evaluating the technical feasibility is the trickiest part of a feasibility study. This is because, at this point, not much of the detailed design of the system is available, making it difficult to assess issues such as performance and costs (on account of the kind of technology to be deployed). A number of issues have to be considered while doing a technical analysis.
CHAPTER 6
6.1 Introduction
Purpose: The main purpose of this document is to give a general insight into the analysis and requirements of the existing system or situation and to determine the operating characteristics of the system.
Scope: This document plays a vital role in the system development life cycle (SDLC), as it describes the complete requirements of the system. It is meant for use by the developers and will be the baseline during the testing phase. Any changes made to the requirements in the future will have to go through a formal change approval process.
Output Definition
Input Design
Input design is a part of the overall system design. The main objectives during input design are as given below:
• To produce a cost-effective method of input.
• To achieve the highest possible level of accuracy.
• To ensure that the input is acceptable to and understood by the user.
Input Stages:
The main input stages can be listed as below:
• Data recording
• Data transcription
• Data conversion
• Data verification
• Data control
• Data transmission
• Data validation
• Data correction
Input types:
It is necessary to determine the various types of inputs. Inputs can be categorized as follows:
• External inputs, which are prime inputs for the system.
• Internal inputs, which are user communications with the system.
• Operational inputs, which are the computer department's communications to the system.
• Interactive inputs, which are inputs entered during a dialogue.
Input media:
At this stage a choice has to be made about the input media. To decide on the input media, consideration has to be given to:
• Type of input
• Flexibility of format
• Speed
• Accuracy
• Verification methods
• Rejection rates
• Ease of correction
• Storage and handling requirements
• Security
• Ease of use
• Portability
Keeping in view the above description of the input types and input media, it can be said that most of the inputs are internal and interactive. As the input data is to be keyed in directly by the user, the keyboard can be considered the most suitable input device.
Error Avoidance
At this stage care is taken to ensure that the input data remains accurate from the stage at which it is recorded up to the stage at which it is accepted by the system. This can be achieved only by means of careful control each time the data is handled.
Error Detection
Even though every effort is made to avoid the occurrence of errors, a small proportion of errors is still likely to occur. These errors can be discovered by using validations to check the input data.
Data Validation
Procedures are designed to detect errors in the data at a lower level of detail. Data validations have been included in the system in almost every area where there is a possibility for the user to commit errors. The system will not accept invalid data: whenever invalid data is keyed in, the system immediately prompts the user, who has to key in the data again, and the system accepts the data only if it is correct. Validations have been included where necessary; a simple sketch of such a check is shown below.
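The following is an illustrative sketch only: the field and the rule (a non-empty numeric value) are assumptions for demonstration, not the project's actual validation rules.

    using System.Windows.Forms;

    // Illustrative sketch: a simple validation routine that rejects invalid
    // input, prompts the user, and forces the value to be keyed in again.
    // The field and the numeric rule are assumptions for demonstration.
    public static class InputValidator
    {
        public static bool ValidateNumericField(TextBox field, string fieldName)
        {
            double value;
            if (field.Text.Trim().Length == 0 || !double.TryParse(field.Text, out value))
            {
                MessageBox.Show("Invalid value for " + fieldName + ". Please re-enter.",
                                "Validation Error", MessageBoxButtons.OK, MessageBoxIcon.Warning);
                field.Focus();
                return false;
            }
            return true;
        }
    }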
The system is designed to be user friendly. In other words, the system has been designed to communicate effectively with the user, and it has been designed with popup menus.
CHAPTER 7
SOFTWARE PROFILE
The pre-coded solutions that form the framework's Base Class Library cover a large
range of programming needs in a number of areas, including user interface, data access,
database connectivity, cryptography, web application development, numeric
algorithms, and network communications. The class library is used by programmers,
who combine it with their own code to produce applications.
Programs written for the .NET Framework execute in a software environment that
manages the program's runtime requirements. Also part of the .NET Framework, this
runtime environment is known as the Common Language Runtime (CLR). The CLR
provides the appearance of an application virtual machine so that programmers need
not consider the capabilities of the specific CPU that will execute the program. The CLR
also provides other important services such as security, memory management, and
exception handling. The class library and the CLR together compose the .NET
Framework.
Interoperability
Because interaction between new and older applications is commonly required,
the .NET Framework provides means to access functionality that is implemented in
programs that execute outside the .NET environment. Access to COM components is
provided in the System.Runtime.InteropServices and System.EnterpriseServices
namespaces of the framework; access to other functionality is provided using the
P/Invoke feature.
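As a brief, hedged illustration of the P/Invoke feature mentioned above, the following sketch calls an unmanaged Win32 function (MessageBox in user32.dll) from managed C# code; the message text is arbitrary.

    using System;
    using System.Runtime.InteropServices;

    // Illustrative sketch: calling an unmanaged Win32 function from managed
    // code through the P/Invoke feature described above.
    class NativeDemo
    {
        [DllImport("user32.dll", CharSet = CharSet.Auto)]
        private static extern int MessageBox(IntPtr hWnd, string text, string caption, uint type);

        static void Main()
        {
            // Displays a native message box via the unmanaged user32.dll entry point.
            MessageBox(IntPtr.Zero, "Hello from unmanaged code", "P/Invoke demo", 0);
        }
    }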
Simplified Deployment
Installation of computer software must be carefully managed to ensure that it
does not interfere with previously installed software, and that it conforms to security
requirements. The .NET framework includes design features and tools that help address
these requirements.
Security
The design is meant to address some of the vulnerabilities, such as buffer
overflows, that have been exploited by malicious software. Additionally, .NET provides
a common security model for all applications.
Portability
The design of the .NET Framework allows it to theoretically be platform
agnostic, and thus cross-platform compatible. That is, a program written to use the
framework should run without change on any type of system for which the framework is
implemented. Microsoft's commercial implementations of the framework cover
Windows, Windows CE, and the Xbox 360. In addition, Microsoft submits the
specifications for the Common Language Infrastructure (which includes the core class
libraries, Common Type System, and the Common Intermediate Language), the C#
language, and the C++/CLI language to both ECMA and the ISO, making them available
as open standards. This makes it possible for third parties to create compatible
implementations of the framework and its languages on other platforms.
Architecture
Assemblies
The intermediate CIL code is housed in .NET assemblies. As mandated by
specification, assemblies are stored in the Portable Executable (PE) format, common on
the Windows platform for all DLL and EXE files. The assembly consists of one or more
files, one of which must contain the manifest, which has the metadata for the assembly.
The complete name of an assembly (not to be confused with the file name on disk) contains its simple text name, version number, culture, and public key token. The public key token is a unique hash generated when the assembly is compiled; thus two assemblies with the same public key token are guaranteed to be identical from the point of view of the framework. A private key, known only to the creator of the assembly, can also be specified; it is used for strong naming and guarantees that a new version of the assembly comes from the same author (this is required to add an assembly to the Global Assembly Cache).
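As an illustration, the short sketch below prints the four-part display name of the assembly containing System.String; on .NET Framework 2.0/3.5 this is typically something like "mscorlib, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089" (the exact version depends on the installed runtime).

    using System;

    // Prints the full display name (simple name, version, culture, public key
    // token) of the assembly that contains System.String.
    class AssemblyNameDemo
    {
        static void Main()
        {
            Console.WriteLine(typeof(string).Assembly.FullName);
        }
    }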
Metadata
All CIL code is self-describing through .NET metadata. The CLR checks the metadata to ensure that the correct method is called. Metadata is usually generated by language compilers, but developers can create their own metadata through custom attributes. Metadata contains information about the assembly and is also used to implement the reflective programming capabilities of the .NET Framework.
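The hedged sketch below shows such a custom attribute: the attribute and class names (ReviewedByAttribute, ImageIndexer) are invented for illustration, and the custom metadata is read back through reflection.

    using System;

    // Illustrative sketch: a custom attribute adds developer-defined metadata
    // to an assembly element; it can later be read back through reflection.
    [AttributeUsage(AttributeTargets.Class)]
    public class ReviewedByAttribute : Attribute
    {
        private readonly string reviewer;
        public ReviewedByAttribute(string reviewer) { this.reviewer = reviewer; }
        public string Reviewer { get { return reviewer; } }
    }

    [ReviewedBy("Q.A. Team")]
    public class ImageIndexer
    {
        static void Main()
        {
            // Reads the custom metadata back at run time.
            object[] attrs = typeof(ImageIndexer).GetCustomAttributes(typeof(ReviewedByAttribute), false);
            if (attrs.Length > 0)
                Console.WriteLine(((ReviewedByAttribute)attrs[0]).Reviewer);
        }
    }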
Security
.NET has its own security mechanism with two general features: Code Access
Security (CAS), and validation and verification. Code Access Security is based on
evidence that is associated with a specific assembly. Typically the evidence is the source
of the assembly (whether it is installed on the local machine or has been downloaded
from the intranet or Internet). Code Access Security uses evidence to determine the
permissions granted to the code. Other code can demand that calling code is granted a
specified permission. The demand causes the CLR to perform a call stack walk: every
assembly of each method in the call stack is checked for the required permission; if any
assembly is not granted the permission a security exception is thrown.
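As a hedged illustration of such a demand (not a prescription from this project), the following sketch asks the CLR to verify that all callers have read access to a file path before a sensitive operation; the class and method names, and the use of FileIOPermission, are chosen only for demonstration.

    using System.Security.Permissions;

    // Illustrative sketch: demanding a Code Access Security permission. The
    // Demand call triggers the CLR stack walk described above; if any caller
    // lacks read access to the (absolute) path, a SecurityException is thrown.
    public static class SecureReader
    {
        public static void DemandReadAccess(string absolutePath)
        {
            FileIOPermission permission =
                new FileIOPermission(FileIOPermissionAccess.Read, absolutePath);
            permission.Demand();
        }
    }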
When an assembly is loaded the CLR performs various tests. Two such tests are
validation and verification. During validation the CLR checks that the assembly contains
valid metadata and CIL, and whether the internal tables are correct. Verification is not so
exact. The verification mechanism checks to see if the code does anything that is 'unsafe'.
The algorithm used is quite conservative; hence occasionally code that is 'safe' does not
pass. Unsafe code will only be executed if the assembly has the 'skip verification'
permission, which generally means code that is installed on the local machine.
.NET Framework uses appdomains as a mechanism for isolating code running in
a process. Appdomains can be created and code loaded into or unloaded from them
independent of other appdomains. This helps increase the fault tolerance of the application, as faults or crashes in one appdomain do not affect the rest of the application.
Appdomains can also be configured independently with different security privileges.
This can help increase the security of the application by isolating potentially unsafe code.
The developer, however, has to split the application into subdomains; it is not done by
the CLR.
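A minimal, hedged sketch of working with a separate application domain is shown below; the domain name is arbitrary, and the place where code would be loaded is only indicated in a comment.

    using System;

    // Illustrative sketch: creating, using, and unloading a separate
    // application domain, as described above.
    class AppDomainDemo
    {
        static void Main()
        {
            AppDomain sandbox = AppDomain.CreateDomain("Sandbox");
            Console.WriteLine("Created domain: " + sandbox.FriendlyName);

            // Code could be loaded into the sandbox here, e.g. with
            // sandbox.CreateInstanceAndUnwrap(assemblyName, typeName).

            AppDomain.Unload(sandbox);   // Faults in the sandbox do not affect this domain.
        }
    }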
Class library
Namespaces in the BCL:
• System
• System.CodeDom
• System.Collections
• System.Diagnostics
• System.Globalization
• System.IO
• System.Resources
• System.Text
• System.Text.RegularExpressions
Microsoft .NET Framework includes a set of standard class libraries. The class library is organized in a hierarchy of namespaces. Most of the built-in APIs are part of either the System.* or Microsoft.* namespaces. The class library encapsulates a large number of common functions, such as file reading and writing, graphic rendering, database interaction, and XML document manipulation, among others. The .NET class libraries are available to all .NET languages. The .NET Framework class library is divided into two parts: the Base Class Library and the Framework Class Library. The Base Class Library (BCL) includes a small subset of the entire class library and is the core set of classes that serve as the basic API of the Common Language Runtime. The classes in mscorlib.dll and some of the classes in System.dll and System.Core.dll are considered to be a part of the BCL. The BCL classes are available in the .NET Framework as well as in its alternative implementations, including the .NET Compact Framework, Microsoft Silverlight, and Mono. The Framework Class Library (FCL) is a superset of the BCL classes and refers to the entire class library that ships with the .NET Framework. It includes an expanded set of libraries, including WinForms, ADO.NET, ASP.NET, Language Integrated Query, Windows Presentation Foundation, and Windows Communication Foundation, among others. The FCL is much larger in scope than the standard libraries of languages like C++, and comparable in scope to the standard libraries of Java.
Memory management
The .NET Framework CLR frees the developer from the burden of managing
memory (allocating and freeing up when done); instead it does the memory management
itself. To this end, the memory allocated to instantiations of .NET types (objects) is done
contiguously from the managed heap, a pool of memory managed by the CLR. As long
as there exists a reference to an object, which might be either a direct reference to an
object or via a graph of objects, the object is considered to be in use by the CLR. When
there is no reference to an object, and it cannot be reached or used, it becomes garbage.
However, it still holds on to the memory allocated to it. .NET Framework includes a
garbage collector which runs periodically, on a separate thread from the application's
thread, that enumerates all the unusable objects and reclaims the memory allocated to
them.
The .NET Garbage Collector (GC) is a non-deterministic, compacting, mark-and-
sweep garbage collector. The GC runs only when a certain amount of memory has been
used or there is enough pressure for memory on the system. Since it is not guaranteed
when the conditions to reclaim memory are reached, the GC runs are non-deterministic.
Each .NET application has a set of roots, which are pointers to objects on the managed
heap (managed objects). These include references to static objects and objects defined as
local variables or method parameters currently in scope, as well as objects referred to by
CPU registers. When the GC runs, it pauses the application, and for each object referred
to in the root, it recursively enumerates all the objects reachable from the root objects
and marks them as reachable. It uses .NET metadata and reflection to discover the objects encapsulated by an object, and then recursively walks them. It then enumerates all the objects on the heap (which were initially allocated contiguously) using reflection. All objects not marked as reachable are garbage; this is the mark phase. Since the memory held by garbage is not of any consequence, it is considered free space. However, this leaves chunks of free space between objects which were initially contiguous. The objects are then compacted together, using memcpy to copy them over to the free space to make them contiguous again. Any reference to an object invalidated by moving the object is updated by the GC to reflect the new location. The application is resumed after the garbage collection is over.
The GC used by .NET Framework is actually generational. Objects are assigned
a generation; newly created objects belong to Generation 0. The objects that survive a
garbage collection are tagged as Generation 1, and the Generation 1 objects that survive
another collection are Generation 2 objects. The .NET Framework uses up to Generation
2 objects. Higher generation objects are garbage collected less frequently than lower
generation objects. This helps increase the efficiency of garbage collection, as older
objects tend to have a larger lifetime than newer objects. Thus, by removing older (and
thus more likely to survive a collection) objects from the scope of a collection run, fewer
objects need to be checked and compacted.
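The short sketch below illustrates the generational behavior; forcing a collection with GC.Collect is done here purely for demonstration and is not normally recommended.

    using System;

    // Illustrative sketch: observing the generational behavior described above.
    class GenerationDemo
    {
        static void Main()
        {
            object o = new object();
            Console.WriteLine(GC.GetGeneration(o));  // 0: newly allocated

            GC.Collect();                            // forces a collection (demo only)
            Console.WriteLine(GC.GetGeneration(o));  // typically 1 after surviving once
        }
    }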
Versions
Microsoft started development on the .NET Framework in the late 1990s
originally under the name of Next Generation Windows Services (NGWS). By late 2000
the first beta versions of .NET 1.0 were released.
7.2 C#.NET
One important thing to make clear is that C# is a language in its own right. Although it is designed to generate code that targets the .NET environment, it is not itself part of .NET. There are some features that are supported by .NET but not by C#, and you might be surprised to learn that there are actually features of the C# language that are not supported by .NET, such as some instances of operator overloading.
However, since the C# language is intended for use with .NET, it is important for us
to have an understanding of this Framework if we wish to develop applications in C#
effectively.
At first sight this might seem a rather long-winded compilation process. Actually,
this two-stage compilation process is very important, because the existence of the
Microsoft Intermediate Language (managed code) is the key to providing many of the
benefits of .NET. Let's see why.
Platform Independence
First, it means that the same file containing byte code instructions can be placed
on any platform; at runtime the final stage of compilation can then be easily
accomplished so that the code will run on that particular platform. In other words, by
compiling to Intermediate Language we obtain platform independence for .NET, in much
the same way as compiling to Java byte code gives Java platform independence.
You should note that the platform independence of .NET is only theoretical at
present because, at the time of writing, .NET is only available for Windows. However,
porting .NET to other platforms is being explored (see for example the Mono project, an
effort to create an open source implementation of .NET, at http://www.go-mono.com/).
Performance Improvement
Although we previously made comparisons with Java, IL is actually a bit more ambitious than Java byte code. Significantly, IL is always Just-In-Time compiled, whereas Java byte code was often interpreted. One of the disadvantages of Java was that, on execution, the process of translating from Java byte code to native executable resulted in a loss of performance (except in more recent cases, where Java is JIT-compiled on certain platforms).
Instead of compiling the entire application in one go (which could lead to a slow
start-up time), the JIT compiler simply compiles each portion of code as it is called (just-
in-time). When code has been compiled once, the resultant native executable is stored
until the application exits, so that it does not need to be recompiled the next time that
portion of code is run. Microsoft argues that this process is more efficient than compiling
the entire application code at the start, because of the likelihood that large portions of any application code will not actually be executed in any given run. Using the JIT compiler, such code will never get compiled.
This explains why we can expect that execution of managed IL code will be
almost as fast as executing native machine code. What it doesn't explain is why
Microsoft expects that we will get a performance improvement. The reason given for this
is that, since the final stage of compilation takes place at run time, the JIT compiler will
know exactly what processor type the program will run on. This means that it can
optimize the final executable code to take advantage of any features or particular
machine code instructions offered by that particular processor.
Traditional compilers will optimize the code, but they can only perform
optimizations that will be independent of the particular processor that the code will run
on. This is because traditional compilers compile to native executable before the
software is shipped. This means that the compiler doesn't know what type of processor
the code will run on beyond basic generalities, such as that it will be an x86-compatible
processor or an Alpha processor. Visual Studio 6, for example, optimizes for a generic
Pentium machine, so the code that it generates cannot take advantage of hardware
features of Pentium III processors. On the other hand, the JIT compiler can do all the
optimizations that Visual Studio 6 can, and in addition to that it will optimize for the
particular processor the code is running on.
Language Interoperability
We have seen how the use of IL enables platform independence and how JIT compilation should improve performance. However, IL also facilitates language interoperability. Simply put, you can compile to IL from one language, and this compiled code should then be interoperable with code that has been compiled to IL from another language.
Intermediate Language
From what we learned in the previous section, Intermediate Language obviously
plays a fundamental role in the .NET Framework. As C# developers, we now understand
that our C# code will be compiled into Intermediate Language before it is executed
(indeed, the C# compiler only compiles to managed code). It makes sense, then, that we
should now take a closer look at the main characteristics of IL, since any language that
targets .NET would logically need to support the main characteristics of IL too.
You should note that some languages compatible with .NET, such as VB.NET,
still allow some laxity in typing, but that is only possible because the compilers behind the scenes ensure that type safety is enforced in the emitted IL.
Although enforcing type safety might initially appear to hurt performance, in
many cases this is far outweighed by the benefits gained from the services provided
by .NET that rely on type safety. Such services include:
• Language Interoperability
• Garbage Collection
• Security
• Application Domains
Garbage Collection
The garbage collector is .NET's answer to memory management, and in particular to the question of what to do about reclaiming memory that running applications ask for. Up until now there have been two techniques used on the Windows platform for deallocating memory that processes have dynamically requested from the system:
• Make the application code do it all manually
• Have objects maintain reference counts
The .NET runtime relies on the garbage collector instead. This is a program whose
purpose is to clean up memory. The idea is that all dynamically requested memory is
allocated on the heap (that is true for all languages, although in the case of .NET, the
CLR maintains its own managed heap for .NET applications to use). Every so often,
when .NET detects that the managed heap for a given process is becoming full and
therefore needs tidying up, it calls the garbage collector. The garbage collector runs
through variables currently in scope in your code, examining references to objects stored
on the heap to identify which ones are accessible from your code – that is to say which
objects have references that refer to them. Any objects that are not referred to are
deemed to be no longer accessible from your code and can therefore be removed. Java
uses a similar system of garbage collection to this.
Security
.NET can really excel in terms of complementing the security mechanisms
provided by Windows because it can offer code-based security, whereas Windows only
really offers role-based security.
Role-based security is based on the identity of the account under which the
process is running, in other words, who owns and is running the process. Code-based
security on the other hand is based on what the code actually does and on how much the
code is trusted. Thanks to the strong type safety of IL, the CLR is able to inspect code
before running it in order to determine required security permissions. .NET also offers a
mechanism by which code can indicate in advance what security permissions it will
require to run.
The importance of code-based security is that it reduces the risks associated with
running code of dubious origin (such as code that you've downloaded from the Internet).
For example, even if code is running under the administrator account, it is possible to use
code-based security to indicate that that code should still not be permitted to perform
certain types of operation that the administrator account would normally be allowed to
do, such as read or write to environment variables, read or write to the registry, or to
access the .NET reflection features.
Namespaces
Namespaces are the way that .NET avoids name clashes between classes. They
are designed, for example, to avoid the situation in which you define a class to represent
a customer, name your class Customer, and then someone else does the same thing (quite
a likely scenario – the proportion of businesses that have customers seems to be quite
high).
A namespace is no more than a grouping of data types, but it has the effect that
the names of all data types within a namespace automatically get prefixed with the name
of the namespace. It is also possible to nest namespaces within each other. For example,
most of the general-purpose .NET base classes are in a namespace called System. The
base class Array is in this namespace, so its full name is System.Array.
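A minimal sketch of nesting namespaces is shown below; the namespace and class names (CbirProject.Retrieval.Customer) are invented for illustration.

    // Illustrative sketch: nested namespaces prefix every contained type, so
    // the full name of this class is CbirProject.Retrieval.Customer. The names
    // are assumptions chosen for demonstration only.
    namespace CbirProject
    {
        namespace Retrieval
        {
            public class Customer
            {
                public string Name;
            }
        }
    }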
Creating .NET Applications Using C#
C# can be used to create console applications: text-only applications that run in a
DOS window. You'll probably use console applications when unit testing class libraries,
and for creating Unix/Linux daemon processes. However, more often you'll use C# to
create applications that use many of the technologies associated with .NET. In this
section, we'll give you an overview of the different types of application that you can
write in C#.
Windows Control
Although Web Forms and Windows Forms are developed in much the same way,
you use different kinds of controls to populate them. Web Forms use Web Controls, and
Windows Forms use Windows Controls.
A Windows Control is a lot like an ActiveX control. After a Windows control is implemented, it compiles to a DLL that must be installed on the client's machine. In fact,
the .NET SDK provides a utility that creates a wrapper for ActiveX controls, so that they
can be placed on Windows Forms. As is the case with Web Controls, Windows Control
creation involves deriving from a particular class, System.Windows.Forms.Control.
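As a hedged illustration of deriving from System.Windows.Forms.Control, the sketch below defines a trivial custom control that simply paints its client area; the control name and the drawing logic are assumptions for demonstration.

    using System.Drawing;
    using System.Windows.Forms;

    // Illustrative sketch: a custom Windows Control derived from
    // System.Windows.Forms.Control, as described above.
    public class ColorSwatchControl : Control
    {
        protected override void OnPaint(PaintEventArgs e)
        {
            base.OnPaint(e);
            // Fill the control's client area with a solid color.
            using (SolidBrush brush = new SolidBrush(Color.SteelBlue))
            {
                e.Graphics.FillRectangle(brush, ClientRectangle);
            }
        }
    }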
Windows Services
A Windows Service (originally called an NT Service) is a program that is
designed to run in the background in Windows NT/2000/XP (but not Windows 9x).
Services are useful where you want a program to be running continuously and ready to
respond to events without having been explicitly started by the user. A good example
would be the World Wide Web Service on web servers, which listens out for web
requests from clients.
It is very easy to write services in C#. There are .NET Framework base classes
available in the System.ServiceProcess namespace that handle many of the boilerplate
tasks associated with services, and in addition, Visual Studio .NET allows you to create a
C# Windows Service project, which starts you out with the Framework C# source code
for a basic Windows service.
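As a hedged illustration (not the exact template output), the skeleton below shows the shape of such a service built on System.ServiceProcess.ServiceBase; the service name and the empty OnStart/OnStop bodies are placeholders.

    using System.ServiceProcess;

    // Illustrative sketch: a minimal Windows Service built on
    // System.ServiceProcess.ServiceBase. Names and behavior are placeholders.
    public class ImageIndexService : ServiceBase
    {
        public ImageIndexService()
        {
            ServiceName = "ImageIndexService";
        }

        protected override void OnStart(string[] args)
        {
            // Begin background work here (e.g. start a worker thread).
        }

        protected override void OnStop()
        {
            // Stop background work and release resources here.
        }

        static void Main()
        {
            ServiceBase.Run(new ImageIndexService());
        }
    }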
CHAPTER 8
SYSTEM DESIGN
8.1 Introduction
Software design sits at the technical kernel of the software engineering process
and is applied regardless of the development paradigm and area of application. Design is
the first step in the development phase for any engineered product or system. The
designer’s goal is to produce a model or representation of an entity that will later be
built. Once system requirements have been specified and analyzed, system design is the first of the three technical activities (design, code, and test) required to build and verify software.
The importance of design can be stated with a single word: quality. Design is the place where quality is fostered in software development. Design provides us with representations of software that can be assessed for quality. Design is the only way that we can accurately translate a customer's view into a finished software product or system. Software design serves as a foundation for all the software engineering steps that follow. Without a strong design we risk building an unstable system – one that will be difficult to test, and one whose quality cannot be assessed until the last stage.
During design, progressive refinement of the data structures, program structure, and procedural details is developed, reviewed, and documented. System design can be viewed from either a technical or a project management perspective. From the technical point of view, design comprises four activities: architectural design, data structure design, interface design, and procedural design.
MODULE DESCRIPTION:
1) RGB Projections:
The RGB color model is an additive color model in which red, green, and blue light are added together in various ways to reproduce a broad array of colors. The name of the model comes from the initials of the three additive primary colors: red, green, and blue. The main purpose of the RGB color model is the sensing, representation, and display of images in electronic systems, as well as in conventional photography. In this module, the RGB projections are used to measure the image content vertically and horizontally; a sketch of the idea is shown below.
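A minimal sketch of computing such projections follows; summing per-pixel intensity over rows and columns is an assumption about how the module works, and Bitmap.GetPixel is used only to keep the example short.

    using System.Drawing;

    // Illustrative sketch of the RGB projection idea: pixel intensities are
    // summed along rows and columns to obtain vertical and horizontal
    // projection profiles of the image. Bitmap.GetPixel is slow but keeps the
    // example simple; the real module may use a different scheme.
    public static class RgbProjections
    {
        public static void Compute(Bitmap image, out double[] horizontal, out double[] vertical)
        {
            horizontal = new double[image.Height];  // one value per row
            vertical = new double[image.Width];     // one value per column

            for (int y = 0; y < image.Height; y++)
            {
                for (int x = 0; x < image.Width; x++)
                {
                    Color c = image.GetPixel(x, y);
                    double intensity = (c.R + c.G + c.B) / 3.0;
                    horizontal[y] += intensity;
                    vertical[x] += intensity;
                }
            }
        }
    }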
2) Image Utility:
CHAPTER 9
UML DIAGRAMS
CHAPTER 10
SYSTEM TESTING
10.1 Introduction
Software testing is a critical element of software quality assurance and represents
the ultimate review of specification, design and coding. In fact, testing is the one step in
the software engineering process that could be viewed as destructive rather than
constructive.
A strategy for software testing integrates software test case design methods into a
well-planned series of steps that result in the successful construction of software. Testing
is the set of activities that can be planned in advance and conducted systematically. The
underlying motivation of program testing is to affirm software quality with methods that can be applied economically and effectively to both large- and small-scale systems.
A strategy for software testing may also be viewed in the context of the spiral. Unit testing begins at the vertex of the spiral and concentrates on each unit of the software as implemented in source code. Testing progresses by moving outward along the spiral to integration testing, where the focus is on the design and the construction of the software architecture. Taking another turn outward on the spiral, we encounter validation testing, where requirements established as part of software requirements analysis are validated against the software that has been constructed. Finally we arrive at system testing, where the software and other system elements are tested as a whole.
[Figure: levels of testing – unit testing, module testing, integration testing, system testing, acceptance testing, and user testing.]
Unit testing focuses verification effort on the smallest unit of software design, the module. The unit testing carried out here is white-box oriented, and for some modules the steps are conducted in parallel.
3. Conditional testing
In this part of the testing, each condition was tested for both its true and false outcomes, and all the resulting paths were tested, so that every path that may be generated by a particular condition is traced to uncover any possible errors.
4. Dataflow testing
This type of testing selects the paths of the program according to the locations of definitions and uses of variables. This kind of testing was used only where local variables were declared. The definition-use chain method was used in this type of testing. It was particularly useful in nested statements.
5. Loop Testing
In this type of testing, all the loops were tested at all possible limits. The following exercise was adopted for all loops (a minimal sketch follows the list):
• All the loops were tested at their limits, just above them, and just below them.
• All the loops were skipped at least once.
• For nested loops, the innermost loop was tested first, working outwards.
• For concatenated loops, the values of dependent loops were set with the help of the connected loop.
• Unstructured loops were restructured into nested or concatenated loops and tested as above.
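The sketch below illustrates the exercise on a hypothetical loop-based routine (Sum); both the routine and the specific checks are assumptions for demonstration.

    using System.Diagnostics;

    // Illustrative sketch of the loop-testing exercise: a loop-based routine is
    // exercised at its limits, just below them (loop skipped entirely), and in
    // a typical case. The Sum routine is an assumption for demonstration.
    public static class LoopTests
    {
        public static int Sum(int[] values)
        {
            int total = 0;
            for (int i = 0; i < values.Length; i++)
                total += values[i];
            return total;
        }

        public static void Run()
        {
            Debug.Assert(Sum(new int[0]) == 0);             // loop skipped entirely
            Debug.Assert(Sum(new int[] { 5 }) == 5);        // exactly one pass
            Debug.Assert(Sum(new int[] { 1, 2, 3 }) == 6);  // typical case
        }
    }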
Each unit has been separately tested by the development team itself, and all the inputs have been validated.
CHAPTER 11
SAMPLE SCREENS
CONCLUSION
The classification framework for CBIR is studied, and powerful classification techniques for the information retrieval context are selected. The main contributions concern, first, the boundary correction that makes the retrieval process more robust and, second, the introduction of a new criterion for image selection that better represents the CBIR objective of database ranking. Other improvements, such as batch processing and a speed-up process, are proposed and discussed. Our strategy leads to a fast and efficient active learning scheme to retrieve query concepts from a database online. Experiments on large databases show that it gives very good results in comparison to several other active strategies. The framework introduced in this article may be extended. We are currently working on kernel functions for object-class retrieval, based on bags of features: each image is no longer represented by a single global vector, but by a set of vectors.
REFERENCES
[3] E. Chang, B. T. Li, G. Wu, and K. Goh, “Statistical learning for effective visual
information retrieval,” in Proc. IEEE Int. Conf. Image Processing, Barcelona, Spain,
Sep. 2003, pp. 609–612.
[5] J. Peng, B. Bhanu, and S. Qing, “Probabilistic feature relevance learning for content-
based image retrieval,” Comput. Vis. Image Understand., vol. 75, no. 1-2, pp. 150–164,
Jul.-Aug. 1999.
[6] N. Doulamis and A. Doulamis, “A recursive optimal relevance feedback scheme for CBIR,” presented at the Int. Conf. Image Processing, Thessaloniki, Greece, Oct. 2001.
[8] O. Chapelle, P. Haffner, and V. Vapnik, “SVMs for histogram-based image classification,” IEEE Trans. Neural Netw., vol. 10, pp. 1055–1064, 1999.
[10] S.-A. Berrani, L. Amsaleg, and P. Gros, “Recherche approximative de plus proches voisins: Application à la reconnaissance d’images par descripteurs locaux,” Tech. Sci. Inf., vol. 22, no. 9, pp. 1201–1230, 2003.
[11] N. Najjar, J. Cocquerez, and C. Ambroise, “Feature selection for semi-supervised learning applied to image retrieval,” in Proc. IEEE Int. Conf. Image Processing, Barcelona, Spain, Sep. 2003, vol. 2, pp. 559–562.
[12] X. Zhu, Z. Ghahramani, and J. Lafferty, “Semi-supervised learning using Gaussian fields and harmonic functions,” presented at the Int. Conf. Machine Learning, 2003.
APPENDIX-A
(D. Mahidhar and M. V. S. N. Maheswar, “A Novel Approach for Retrieving an Image Using CBIR,” IJCSIT, vol. 02, no. 06, September 2010, pp. 2160–2162)