
1. INTRODUCTION

1.1 ABOUT THE PROJECT


The first computers, invented many decades ago, were room-sized machines with stand-alone processing units capable of only simple mathematical calculations. In the years since, computers have improved enormously, and connectivity has become the biggest challenge facing the technology. Today even a simple workplace contains many interconnected computers, printers, scanners, servers and similar devices, and in private homes personal computers are connected to other computers and to smart home electronic systems. This connectivity is provided by networks, and with networks, communication and data transfer have become easier than ever before.

As technology improved, networks grew from connections between a few computers into interconnections of many networks, which is what the term Internet describes. Such a complex network demands thorough management, so network management has been a necessity since the formation of the Internet.

Network management refers to monitoring and controlling a network and every device connected to it. Network monitoring is responsible for checking the connectivity of the devices in the network, detecting malicious activity, and many other tasks, all with the goal of providing a healthy network with high performance.

In a small network, such as a home network, network management may not be a significant concern. For large organizations, however, a smooth, healthy, high-performance network is a priority. Without good network management an organization can lose a large share of its profit, and in the worst case poor management can lead to bankruptcy. Banks, airlines, libraries and many other organizations would be unable to provide their promised services if their networks failed. Each organization must therefore keep its network up and healthy in order to deliver the services it has promised.

Thus, a smooth and healthy network is now a top priority for organizations, and achieving it requires good management of the network.

1.2 ORGANIZATION PROFILE

NOYCE Cyber Solutions (NOYCE) provides enterprise-level web design, web development and maintenance services in and outside India, with its development center in the Software Technology Park, Palakkad, and global delivery centers all over India. Our expert web designers and developers have accomplished a wide range of website projects and portals, from simple CMS sites to complex customized web portals, across many business domains.

Every website we design is custom built according to the specific needs of our clients. Our website designers are skilled at delivering value for the digital branding of our clients' online profiles, and we always make sure our clients get an impressive and effective online presence that meets global standards. Our website design and maintenance service ensures stability and keeps our clients' critical Internet services running. Noyce is an established website development brand in Kerala, known for designing affordable, professional websites for companies all over India.

As a website design company, we help small, medium and large businesses develop successful websites that set our clients apart from the rest and ultimately help them win customers from all over the world through their online presence.

In the modern, fast-growing digital world, a website can represent a company globally. It therefore has to be designed to international standards, with modern features, and must be better than the competitors' websites. This is the service we offer to global companies. A website is a powerful digital tool that can improve your marketability on the web, and we can promote your website, and your business, with an impressive logo, strong website design and informative content.

Blog Website Design / Profile Building / Article-Based Websites / News Portals: Blogs can generate huge traffic to any site while simultaneously increasing exposure to potential clients and customers across the web. Noyce can help you build a high-quality, interactive blog website that attracts global attention.

Portfolio websites: We help individuals, celebrities, artists and photographers promote, exhibit and sell their artwork or masterpieces online in an interactive way. A website not only gives them a brand but also helps showcase their talent on the global stage.

Business Websites (Flash / Non-Flash), UI Design: Noyce always delivers high value for its clients by building high-quality, attractive, feature-rich websites. We feel a website is one of the most powerful media in today's world for any business to grow successfully. We design each client's website to empower the business, we measure our clients' business growth by developing and improving their online presence, and we create a unique look and feel that becomes a symbol of your business quality at an international standard.

E-commerce website design: E-commerce portals, digital stores and online sales and marketing portals are now in great demand; every business is looking for a virtual store to do business online. We develop world-class e-commerce applications from scratch as well as with open-source tools such as Magento, OpenCart and osCommerce. The e-commerce websites we have developed have seen tremendous growth in the past few years thanks to factors such as attractive themes and innovative extensions and plugins. At Noyce we design websites with unique designs and user-friendly graphical interfaces, making the user experience as comfortable as it can be.

Logo Designing: A logo is the first brand mark of a company and says a lot about its vision, mission and business model. It announces the company's presence and helps build its digital brand. Designing a good logo is therefore not a simple task and requires considerable expert involvement: a clear idea of the business domain, the values of the brand, and an understanding of the target consumer. The basic steps we follow in the logo design process are formulating the concept, making initial sketches, developing the concept, finalizing the logo, and deciding the theme colors and format.

Our Team
Empowering Growth

Our people are our greatest asset and biggest differentiator. They are passionate about
results, and also believe in having a lot of fun along the way.

That enthusiasm does not take away from the focus on work, however. Our people are passionate about delivering results to clients. Everyone at Noyce is direct and straightforward, even if that means telling the uncomfortable truth. We are ambitious and impatient for success, and yet down-to-earth and approachable.

In short, Noyce people are not only the kind of people you would love to work with; they are also the people you would want to socialise with outside work. We encourage you to take every opportunity to interact with our people and witness the vibrancy of the office in person.

Quality Pledge

We are committed to a very aggressive attitude towards quality and customer service, primarily because we want to be ranked as the best in our business. Quality is not just another goal; it is our basic strategy for survival and future growth.

PRIORITY

Our customers demand and deserve a high-quality product, and it is our responsibility to give them what they want. If we don't, they will find someone who can. If customer requirements are unclear, it is our job to seek a better understanding of their requirements and specifications. If we fail at any time, we must determine what went wrong and make sure it does not happen again.

OBJECTIVES

Our quality objectives are to furnish high-quality products, on time, and at the lowest cost. Attaining these objectives leads to customer satisfaction, enhanced performance at the application level, and ongoing improvements in process efficiency. Once an objective is achieved, it should be recognized and then reset to stimulate further quality improvement. To reach our objectives we must maintain a constant focus on quality, with full dedication, commitment and teamwork.

VISION

Our journey is Total Quality Management: fully satisfying our customers' requirements through a process of continuous improvement. It is critical to understand that Total Quality Management is not a short-term program. It is a long-term commitment aimed at continuously improving the way we work, providing a safe work environment, managing our business processes, and selecting and retaining suppliers. It is our goal to position the company for market expansion, thereby providing improved job security and quality of life for all.

QUALITY FIRST

It must be clearly understood that we will not allow quality to take second place behind cost or schedule. All employees have the right to question their supervisor's decisions or actions if they feel that quality is being compromised.

PRODUCT

Noyce is a leading customized web application development company with expertise in application development using open-source technology. We have a combined experience of more than 25 years in web application development, including retail sites, matrimony websites, trading websites, classified websites and content management websites.

We develop scalable and robust web applications from scratch on platforms such as Drupal, WordPress, Joomla, OpenCart and Magento.

We also develop custom themes and plugins for WordPress, Joomla and OpenCart, and web applications with PHP frameworks such as CodeIgniter and CakePHP. With our experience in web application development for publishing houses and corporate businesses, we are capable of building performance-driven, responsive, scalable web applications for cloud and traditional servers.

APPLICATION DEVELOPMENT

At Noyce we develop unique web-based applications that serve all the needs of our esteemed clients from several business domains. We have a team of experienced, enthusiastic, result-oriented and dedicated people who deliver quality web application development. We develop platform-independent web applications that run on all kinds of platforms and serve the exact needs of clients.

We follow the standards and guidelines of software application development, phase by phase.

Our team delivers the highest-quality work in back-end integration and front-end presentation when building any web application for an SME: we comprehend the demands of your web application, study your competitors' strategies, and develop a unique, tailor-made web application that matches all your requirements.

We are capable of developing scalable, creative, stable and trustworthy web application solutions to complement your most complicated business needs. Web developers at Noyce are committed to helping our clients by using the latest web technology, with the objective of helping you gain income, improve performance, reduce maintenance costs, enhance productivity and boost end-user satisfaction.

Noyce Cyber Solutions specializes in web application development and associated services such as e-commerce web development (shopping cart development) and SEO (Search Engine Optimisation) in Bangalore, India.

MOBILE READY WEB SITES

We build responsive mobile interfaces for websites, optimizing them for touch and small-screen devices.

With more and more people accessing the web from handheld devices, a mobile-enabled website is no longer an option but a necessity. The lack of a mobile-optimized site can even cost you loyal visitors because of the painful user experience your site offers when accessed from a mobile device.

At Noyce Cyber Solutions we build mobile interfaces for websites and help them run on all major handheld devices with enhanced usability and user experience.

Web Design
- Responsive Website Design
- PSD to XHTML Conversion
- User Interface Design
- Website Redesign
- Flash Web Design

Web Development
- Drupal Development
- WordPress Development
- eCommerce Development
- PHP Development
- Website Maintenance

Corporate Identity
- Brand Identity Design
- Corporate Stationery Design
- Marketing Collaterals Design
- Flash & PowerPoint Presentations
- Corporate Audio & Video Presentations

Web Marketing
- Search Engine Optimization
- Social Media Optimization
- Search Engine Marketing
- Web Analytics

1.3. SYSTEM SPECIFICATION

1.3.1 HARDWARE SPECIFICATION

Processor : i3 and above

RAM : 2 GB and above

HDD : 500 GB and above

1.3.2 SOFTWARE SPECIFICATION

Operating System : Windows XP/8/10

Programming Language : C# (.NET)

IDE : Microsoft Visual Studio .NET

1.3.3 SOFTWARE DESCRIPTION

Introducing the .NET Framework

The .NET Framework is such a comprehensive platform that it can be a little difficult to describe. I have heard it described as a development platform, an execution environment and an operating system, among other things. In fact, in some ways each of these descriptions is accurate, if not sufficiently precise.

The software industry has become much more complex since the introduction of the Internet. Users have become both more sophisticated and less sophisticated at the same time. (I suspect not many individual users have undergone both metamorphoses, but as a body of users this has certainly happened.) Folks who had never touched a computer five years ago now comfortably include the Internet in their daily lives, while the technophile or professional computer user has become much more advanced, as have their expectations of software.

It is this collective expectation of software that drives our industry. Each time software developers create a successful new idea, they raise user expectations for the next new feature. In a way this has been true for years, but now software developers face the added challenge of addressing the Internet and Internet users in many applications that in the past were largely unconnected. It is this new challenge that the .NET Framework directly addresses.

Code in a Highly Distributed World

Software that addresses the Internet must be able to communicate. However, the Internet is not just about communication; that assumption has led the software industry down the wrong path in the past. Communication is simply the base requirement for software in an inter-networked world.

Beyond communication, other features must be established. These include security, binary composability and modularity (which I will discuss shortly), scalability and performance, and flexibility. Even these just scratch the surface, but they are a good start.

Here are some features that users will expect in the near future. Users will begin to expect to run code served by a server without being limited to the abilities (or physical display window) of a browser. They will begin to expect websites and server-side code to compose themselves of data and functionality from various vendors, giving the end user flexible one-stop shopping. They will expect their data and information both to be secured and to roam from site to site so that they do not have to type it in over and over again. These are tall orders, and these are the types of requirements addressed by the .NET Framework.

It is not possible for the requirements of the future to be addressed by a new programming language alone, or by a new library of tools and reusable code. It is also not practical to require everyone to buy a new operating system that addresses the Internet directly. This is why the .NET Framework is at once a development environment, an execution environment and an operating system.

One challenge for software in a highly distributed environment (like the Internet) is the fact that many components are involved, with different needs in terms of technology. For example, client software such as a browser or custom client has different needs than a server object or database element. Developers creating large systems often have to learn a variety of programming environments and languages just to create a single product.

Automatic Memory Management

The Common Language Runtime does more for your C# and .NET managed executables than just JIT-compile them. The CLR offers automatic thread management, security management and, perhaps most importantly, memory management.

Memory management is an unavoidable part of software development. Commonly, memory management is implemented, to one degree or another, by the application itself. It is this sheer commonality, combined with its potential complexity, that makes memory management better suited as a system service.

Here are some simple things that can go wrong in software.

· Your code can reference a data block that has not been initialized. This can cause instability and erratic behavior in your software.

· Software may fail to free a memory block after it is finished with the data. Memory leaks can cause an application or an entire system to fail.

· Software may reference a memory block after it has been freed.

There may be other memory-management-related bugs, but the great majority fall into one of these main categories. Developers are increasingly taxed with complex requirements, and the mundane task of managing the memory for objects and data types can be tedious. Furthermore, when executing component code from an untrusted source (perhaps across the Internet) in the same process as your main application code, you want to be absolutely certain that the untrusted code cannot obtain access to the memory holding your data. These needs create the necessity for automatic memory management for managed code.

All programs running under the .NET Framework, or Common Language Runtime, allocate memory from a managed heap. The managed heap is maintained by the CLR and is used for all memory resources, including the space required to create instances of objects as well as the memory required for data buffers, strings, collections, stacks and caches. The managed heap knows when a block of data is referenced by your application (or by another object in the heap), in which case that object is left alone. But as soon as a block of memory becomes unreferenced, it is subject to garbage collection. Garbage collection is an automatic part of the processing of the managed heap and happens as needed.

Your code will never explicitly clean up, delete or free a block of memory, so it is impossible to leak memory. Memory is considered garbage only when it is no longer referenced by your code, so it is impossible for your code to reference a block of memory that has already been freed or garbage collected. Finally, because the managed heap is a pointer-less environment (at least from your managed code's point of view), the code verifier can make it impossible for managed code to read a block of memory that has not been written to first.

The managed heap makes all three of the major memory-management bugs an impossibility.
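To make this concrete, here is a minimal C# sketch (an illustration, not code from the original project) of allocation on the managed heap with no explicit free; GC.GetTotalMemory(true) forces a collection before measuring, so the buffers allocated in the loop are reclaimed automatically:

    using System;

    class GcDemo
    {
        static void Main()
        {
            long before = GC.GetTotalMemory(true);

            // Allocate thousands of buffers and never free them explicitly;
            // each one becomes garbage as soon as the next iteration begins.
            for (int i = 0; i < 10000; i++)
            {
                byte[] buffer = new byte[1024];
                buffer[0] = 1;
            }

            // Passing true forces a full collection before measuring.
            long after = GC.GetTotalMemory(true);
            Console.WriteLine("Heap before: {0} bytes, after: {1} bytes", before, after);
        }
    }

Run under the CLR, the heap size after the loop stays close to the starting size, even though the code never released anything itself.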

Language Concepts and the CLR

Managed code runs with the constant maintenance of the Common Language Runtime.
The CLR provides memory management, type management, security and threading. In this
respect, the CLR is a runtime environment. However, unlike typical runtime environments,
managed code is not tied to any particular programming language.

You have most likely heard of C# (pronounced See-Sharp), a new programming language built specifically to write managed software targeting the .NET Framework. However, C# is by no means the only language you can use to write managed code. In fact, any compiler developer can choose to make their compiler generate managed code; the only requirement is that the compiler emits an executable comprised of valid IL and metadata.

At this time Microsoft is shipping five language compilers/assemblers with the .NET Framework: C#, Visual Basic, C++, JScript and IL. (Yes, you can write managed code directly in IL, though this will be as uncommon as writing assembly language programs is today.) In addition to the five languages shipping with the framework, Microsoft will release a Java compiler that generates managed applications that run on the CLR.

In addition to Microsoft's language compilers, third parties are producing compilers for over 20 computer languages, all targeting the .NET Framework. You will be able to write managed applications in your favorite languages, including Eiffel, Perl, COBOL and Java, amongst others.

Language agnosticism is really cool. Your Perl scripts will now be able to take advantage of the same object libraries that you use in your C# applications. Meanwhile, your friends and coworkers will be able to use your reusable components whether or not they use the same programming language as you. This division of runtime engine, API (Application Programmer Interface) and language syntax is a real win for developers.

The CLR does not need to know (nor will it ever know) anything about any computer language other than IL. All managed software is compiled down to IL instructions and metadata, and these are the only things the CLR deals with. This is important because it makes every computer language an equal citizen from the point of view of the CLR: by the time JIT compilation occurs, your program is nothing but logic and metadata.

IL itself is geared towards object oriented languages. However, compilers for procedural
or scripted languages can easily produce IL to represent their logic.

Advanced Topics for the Interested

If you are one of those who simply must know some of the details, then this section is for you. But if you are looking for a practical, brief overview of the .NET Framework, you can skip the rest of this section and come back to it when you have more time.

Specifically, I am going to explain JIT compilation and garbage collection in more detail.

The first time a managed executable references a class or type (such as a structure, interface, enumerated type or primitive type), the system must load the code module, or managed module, that implements the type. At the point of loading, the JIT compiler creates method stubs in native machine language for every member method in the newly loaded class. These stubs include nothing but a jump into a special function in the JIT compiler.

Once the stub functions are created, the system fixes up any method calls in the referencing code to point to the new stubs. At this time no JIT compilation of the type's code has occurred. However, if a managed application references a managed type, it is likely to call methods on that type (in fact, it is almost inevitable).

When one of the stub functions is called, the JIT compiler looks up the source code (IL and metadata) in the associated managed module and builds native machine code for the function on the fly. It then replaces the stub with a jump to the newly JIT-compiled function. The next time this same method is called, it executes at full speed, without any need for compilation or extra steps.

The good thing about this approach is that the system never wastes time JIT compiling methods that won't be called during this run of your application.

Finally, when a method is JIT compiled, any types it references are checked by the CLR to see whether they are new to this run of the application. If this is indeed the first time a type has been referenced, the whole process starts over again for that type. This is how JIT compilation progresses throughout the execution of a managed application.

Take a deep breath, and exhale slowly, because now I am going to switch gears and
discuss the garbage collector.

Garbage collection is a process that takes time. The CLR must halt all or most of the threads in your managed application while garbage buffers and garbage objects are cleaned out of the managed heap. Performance is important, so it helps to understand the garbage collection process.

Garbage collection is not an active process. It is passive, and happens only when there is not enough free memory to fulfill an instruction to create a new instance of an object or memory buffer. When that happens, a garbage collection occurs in an attempt to find enough free memory.

When garbage collection occurs, the system finds all objects referenced by local (stack) variables and global variables. These objects are not garbage, because they are referenced by your running threads. The system then searches the referenced objects for further object references; these objects are also not garbage, because they too are referenced. This continues until the last referenced object is found. All other objects are garbage and are released.
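A small sketch (illustrative only, not project code) shows this reachability rule in action: once the last strong reference is cleared, the object is garbage, which a WeakReference can reveal after a forced collection:

    using System;

    class ReachabilityDemo
    {
        static void Main()
        {
            object data = new byte[4096];
            WeakReference weak = new WeakReference(data);  // tracks the object without keeping it alive

            data = null;                   // last strong reference gone: the buffer is now garbage
            GC.Collect();                  // force a collection (for demonstration only)
            GC.WaitForPendingFinalizers();

            Console.WriteLine("Still alive: {0}", weak.IsAlive);  // typically False
        }
    }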
Object Oriented Code Reuse

Code reuse has been a goal for computer scientists for decades. Part of the promise of object oriented programming is flexible and advanced code reuse. The CLR is a platform designed from the ground up to be object oriented, and therefore to promote all of the goals of object oriented programming.

Today, most software is written nearly from scratch. The unique logic of most applications can usually be described in several brief statements, and yet most applications include many thousands or millions of lines of custom code to achieve their goals. This cannot continue forever.

In the long run the software industry will simply have too much software to write to be
writing every application from scratch. Therefore systematic code reuse is a necessity.

Rather than go into a lengthy explanation of why object orientation and code reuse are difficult but necessary, I would like to mention some of the rich features of the CLR that promote object oriented programming.

· The CLR is an object oriented platform from IL up. IL itself includes many instructions
for dealing with memory and code as objects.

· The CLR promotes a homogeneous view of types, where every data type in the system,
including primitive types, is an object derived from a base object type called System.Object. In
this respect literally every data element in your program is an object and has certain consistent
properties.

· Managed code has rich support for object oriented constructs such as interfaces,
properties, enumerated types and of course classes. All of these code elements are collectively
referred to as types when referring to managed code.

· Managed code introduces new object oriented constructs including custom attributes,
advanced accessibility, and static constructors (which allow you to initialize types, rather than
instances of types) to help fill in the places where other object oriented environments fall short.

· Managed code can make use of pre-built libraries of reusable components. These libraries are called managed assemblies and are the basic building block of binary composability. (Reusable components are packaged in files called assemblies; technically, even a managed executable is a managed assembly.)

· Binary composability allows your code to use other objects seamlessly, without the necessity to have or compile source code from the third-party code. (This is largely possible due to the rich descriptions of code maintained in the metadata.)

· The CLR has very strong versioning ability. Even though your applications will be composed of many objects published in many different assemblies (files), they will not suffer from versioning problems as new versions of the various pieces are installed on a system. The CLR knows enough about an object to know exactly which version of it is needed by a particular application.

These features and more build upon and extend previous object oriented platforms. In
the long run object oriented platforms like the .NET Framework will change the way
applications are built. Moving forward, a larger and larger percentage of the new code that you
write will directly relate to the unique aspects of your application. Meanwhile, the standard bits
that show up in many applications will be published as reusable and extendible types.

Class Library

Now that you have a taste of the goals and groundwork laid by the CLR and managed code, let's taste the fruits they bear. The Framework Class Library is the first step toward the end solution of component-based applications. If you like, you can use it like any other library or API; that is to say, you can write applications that use the objects in the FCL to read files, display windows and do various tasks. But to exploit the true possibilities, you can extend the FCL towards your application's needs and then write a very thin layer that is just "application code". The rest is reusable types and extensions of reusable types.

The FCL is a class library; however, it has been designed for extensibility and composability. This is advanced reuse. Take, for example, the stream classes in the FCL. The designers of the FCL could have defined file streams and network streams and been done with it. Instead, all stream classes are derived from a base class called System.IO.Stream. The FCL defines two main kinds of streams: streams that communicate with devices (such as files, networks and memory), and streams whose devices are other instances of stream-derived classes. These abstracted streams can be used for IO formatting, buffering, encryption, data compression, Base-64 encoding, or just about any other kind of data manipulation.

The result of this kind of design is a simple set of classes with a simple set of rules that can be combined in a nearly infinite number of ways to produce the desired effect. Meanwhile, you can derive your own stream classes, which can be composed along with the classes that ship with the Framework Class Library. The following sketch demonstrates streams and FCL composability in general.

ADO.NET

ADO.NET provides consistent access to data sources such as Microsoft SQL Server, as
well as data sources exposed via OLE DB and XML. Data-sharing consumer applications can
use ADO.NET to connect to these data sources and retrieve, manipulate, and update data.

ADO.NET cleanly factors data access from data manipulation into discrete components that can be used separately or in tandem. ADO.NET includes .NET data providers for connecting to a database, executing commands and retrieving results. Those results are either processed directly or placed in an ADO.NET DataSet object in order to be exposed to the user in an ad hoc manner, combined with data from multiple sources, or remoted between tiers. The ADO.NET DataSet object can also be used independently of a .NET data provider to manage data local to the application or sourced from XML.

Need for ADO.NET

As application development has evolved, new applications have become loosely coupled, based on the Web application model. More and more of today's applications use XML to encode data to be passed over network connections. Web applications use HTTP as the fabric for communication between tiers, and therefore must explicitly handle maintaining state between requests. This new model is very different from the connected, tightly coupled style of programming that characterized the client/server era, in which a connection was held open for the duration of the program's lifetime and no special handling of state was required.

In designing tools and technologies to meet the needs of today's developer, Microsoft
recognized that an entirely new programming model for data access was needed, one that is built
upon the .NET Framework. Building on the .NET Framework ensured that the data access
technology would be uniform—components would share a common type system, design
patterns, and naming conventions.

ADO.NET was designed to meet the needs of this new programming model:
disconnected data architecture, tight integration with XML, common data representation with the
ability to combine data from multiple and varied data sources, and optimized facilities for
interacting with a database, all native to the .NET Framework.

Leverage Current ADO Knowledge

Microsoft's design for ADO.NET addresses many of the requirements of today's application development model. At the same time, the programming model stays as similar as possible to ADO, so current ADO developers do not have to start from scratch learning a brand-new data access technology. ADO.NET is an intrinsic part of the .NET Framework without seeming completely foreign to the ADO programmer.

ADO.NET coexists with ADO. While most new .NET applications will be written using ADO.NET, ADO remains available to the .NET programmer through .NET COM interoperability services, and there are many similarities as well as differences between the two.

ADO.NET provides first-class support for the disconnected, n-tier programming environment for which many new applications are written. The concept of working with a disconnected set of data has become a focal point of the programming model. The ADO.NET solution for n-tier programming is the DataSet.

XML Support

XML and data access are intimately tied—XML is all about encoding data, and data
access is increasingly becoming all about XML. The .NET Framework does not just support
Web standards—it is built entirely on top of them.
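A brief sketch of that tie between the DataSet and XML (the table contents and file name are illustrative): the same DataSet can be written out or inspected as plain XML with no extra machinery:

    using System;
    using System.Data;

    class XmlRoundTrip
    {
        static void Main()
        {
            DataTable table = new DataTable("Device");
            table.Columns.Add("IP", typeof(string));
            table.Rows.Add("192.168.1.10");

            DataSet ds = new DataSet("Network");
            ds.Tables.Add(table);

            ds.WriteXml("network.xml");      // persist the data as XML
            Console.WriteLine(ds.GetXml());  // or inspect the same XML in memory
        }
    }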

C – SHARP (C#)

That's a lot to assimilate, but all of that was just the runtime engine, the foundation. There are thousands of classes in the C# "framework classes", so I can't even begin to introduce you to what is in the framework; the best I can do is give you an idea of why you should take the trouble to learn it.

The framework classes constitute the runtime library that all .NET languages and applications share. For portability between Delphi for Windows and Delphi for .NET you can just stick to the Delphi RTL wrappings of various framework features. However, to really take advantage of .NET, you should make an effort to learn the framework classes. Beyond what that can do for today's projects, learning the framework classes is what will make you a .NET programmer who can find work in any .NET shop on the planet. ["Learn once, work anywhere."]

You've probably all seen the dog and pony shows where .NET turns all the complexity of XML, SOAP and WSDL into straightforward remote calls that pass objects between systems. This is great stuff, but there's a lot more to the framework classes than web services. .NET includes cryptography classes, Perl-compatible regex classes, and a great suite of collection classes that goes light years beyond TList.

One thing to note is that even though C# is easy for Delphi programmers to read, you don't have to learn C# to learn the framework classes. Microsoft does not currently provide source for the library code, so you can't Ctrl+Click on TObject.ToString and see the implementation, any more than you can Ctrl+Click on CreateCompatibleDC() in Delphi for Windows.

This is the Future

Historically, the Windows API has been a set of 'flat' function calls. If you were feeling
particularly charitable, you could say it was "object like", in that you created an object (like a
window or a font) and then kept passing the "handle" to various routines that manipulated it. Of
course, few people have ever been particularly willing to be quite so charitable. Learning the
Windows API was always a slow and frustrating exercise, and almost all Windows code
manipulates the flat API from behind various layers of incompatible object-oriented wrappers.
Knowing MFC didn't help much with Delphi and vice versa.

Moreover, if you weren't working in C or C++, you were always working at a disadvantage. When a new API came out, you'd either have to take the time to translate the headers and maybe write some wrapper classes yourself, or you'd have to wait for someone else to do it. Either way, there was always the danger that a translation might be wrong in some way: the pad bytes are off, an optional parameter might be required, a routine might be declared with the wrong calling convention, and so on.

All these problems disappear with .NET and the framework classes. The framework is object-oriented from top to bottom. There are no more "handles" to pass to an endless set of flat functions; you work with a window or a font by setting properties and calling methods. Just like Delphi, of course, but now this is the native API, not a wrapper. The classes are organized into hierarchical namespaces, which reduce the endless searching through alphabetical lists of function names. Looking for file functions? System.IO is a pretty logical place to look. Want a hash table like in Perl? System.Collections has a pretty nice one.
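For instance, a short illustrative snippet (not from any particular source) combines the two namespaces just mentioned, building a Perl-style hash of file sizes:

    using System;
    using System.Collections;
    using System.IO;

    class NamespaceTour
    {
        static void Main()
        {
            Hashtable sizes = new Hashtable();  // the Perl-style hash table

            // File information comes from System.IO, the logical place to look.
            foreach (string path in Directory.GetFiles("."))
                sizes[Path.GetFileName(path)] = new FileInfo(path).Length;

            foreach (DictionaryEntry entry in sizes)
                Console.WriteLine("{0} = {1} bytes", entry.Key, entry.Value);
        }
    }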

Finally, Microsoft promises that all future APIs will be released as CLS-compliant parts of the framework class library. This means that your Delphi for .NET programs can use a new API the day it is released, without having to do any header translation, and without any danger that the header translation might be wrong.

You might be skeptical about that promise. Perhaps you remember that COM was once touted as Windows' object-oriented future. This is a sensible attitude, but .NET is a lot better than COM ever was. Most people's first COM experiments produced a sort of stunned disbelief at just how complex Microsoft had managed to make something as simple as object orientation. Most people's first .NET experiments leave them pleasantly surprised that something this good could have come from the same company that gave us COM and the Windows API.

VISUAL C# .NET OVERVIEW:

- Strong C++ heritage, immediately familiar to C++ and Java developers
- Allows C-style memory management and pointers
- First component-oriented language in the C family: properties, methods, indexers, delegates, events
- Design-time and runtime attributes
- Enables one-stop programming: no header files, no IDL
- Embeddable in ASP.NET

Component-Oriented

What defines a component?
- Properties, methods, events
- Design-time and runtime information
- Integrated help and documentation

Component support in C#:
- First-class language support, not naming patterns, adapters, etc.
- Not external files
- Easy to build and consume

Comparison to Visual Basic

Syntactic differences:
- Visual Basic is NOT case sensitive
- In C# but not in Visual Basic: pointers, shift operators, inline documentation, overloaded operators, unsigned integers
- In Visual Basic but not in C#: Select Case, interface implementation, dynamic arrays, modules, optional parameters

Need for C#

Existing languages are powerful, so why do we need another language?
- Important features are spread out over multiple languages. Example: one must choose between pointers (C++) and garbage collection (Java).
- Old languages plus new features give poor syntax: garbage collection in C++? Event-driven GUIs in Java?

Goals of C#

Give developers a single language with:
- A full suite of powerful features
- A consistent and simple syntax

Increase developer productivity through:
- Type safety
- Garbage collection
- Exceptions
- Leverage of existing skills

Support component-oriented programming:
- First-class support for component concepts such as properties, events and attributes (see the sketch after this list)

Provide a unified and extensible type system:
- Everything can be treated as an object

Build a foundation for future innovation:
- A concise language specification
- Standardization

Design of C#

- Derived from the features and syntax of other languages:
  - The safety of Java
  - The ease of Visual Basic
  - The power of C++
- Uses the .NET Framework
- Plus several unique features
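A minimal C# sketch (the class and member names are hypothetical) of those first-class component concepts: a property, an event and an attribute declared directly in the language, with no naming patterns or external files:

    using System;

    class Thermostat
    {
        private int temperature;

        public event EventHandler Overheated;            // event: consumers subscribe with +=

        [Obsolete("Use TemperatureCelsius instead.")]    // attribute: design-time/runtime metadata
        public int Temp
        {
            get { return temperature; }
        }

        public int TemperatureCelsius                    // property: field-like syntax, method-like control
        {
            get { return temperature; }
            set
            {
                temperature = value;
                if (value > 100 && Overheated != null)
                    Overheated(this, EventArgs.Empty);
            }
        }
    }

    class Demo
    {
        static void Main()
        {
            Thermostat t = new Thermostat();
            t.Overheated += delegate { Console.WriteLine("Too hot!"); };
            t.TemperatureCelsius = 120;   // the setter raises the event
        }
    }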

2. SYSTEM STUDY
System analysis is performed to determine whether it is feasible to design an information system based on the policies and plans of the organization and on user requirements, and to eliminate the weaknesses of the present system. This chapter discusses the existing system and the proposed system, and highlights the system requirements.

2.1 EXISTING SYSTEM

The existing system is maintained manually. A manually maintained system is complex and complicated: there are many chances of losing data, and the work is neither effective nor efficient. Manual operation has always made maintaining records complicated for organizations. In the existing system:

- We cannot monitor who is accessing which files at any given time.
- It is difficult to find out which files have been updated or renamed.

2.1.1 DRAWBACKS OF EXISTING SYSTEM

- Maintenance of the system is very difficult.
- Employees' attention is needed to maintain the system.
- There is a possibility of inaccurate results.
- User friendliness is very low.
- Processing the activities consumes more time.

2.2 PROPOSED SYSTEM
The major focus of the proposed system, besides the monitoring tasks, is ease of use, so that it is useful to all users, even those without professional networking knowledge. The application monitors and checks the client devices connected to the network and hardware characteristics such as the cable connection and adapter type. Beyond that, it provides several other features for monitoring the entire network. The system is therefore a good choice for small and medium-sized organizations and enterprises that want to manage and monitor their networks, and its ease of use also makes it suitable for practicing and learning network monitoring.

The primary purpose of implementing this project is to provide a simple-to-use network monitoring system that contains all the necessary functionality. The system implemented for this thesis can be used by students and novice users, so it can also serve educational and training purposes. Besides these purposes, the main focus of the proposed system is network security and management: monitoring and sniffing packets allows the administrator to control the security of the entire network. In a nutshell, the main purpose of implementing the proposed network monitoring system is to provide an easy-to-use tool, aimed at small and medium-sized organizations as well as at students for training.

2.2.1 BENEFITS OF PROPOSED SYSTEM:


- Fully secured
- Role-based access
- Ease of maintenance
- Notification about modifications

3. SYSTEM DESIGN AND DEVELOPMENT
3.1 INPUT DESIGN
The input design is the link between the information system and the user. It comprises the specifications and procedures for data preparation and the steps necessary to put transaction data into a usable form for processing. This can be achieved either by having the computer read data from a written or printed document, or by having people key the data directly into the system. The design of input focuses on controlling the amount of input required, controlling errors, avoiding delay, avoiding extra steps and keeping the process simple. The input is designed to provide security and ease of use while retaining privacy. Input design considered the following questions:

- What data should be given as input?
- How should the data be arranged or coded?
- What dialog should guide the operating personnel in providing input?
- What methods should be used for preparing input validations, and what steps should be followed when errors occur?

OBJECTIVES

1. Input design is the process of converting a user-oriented description of the input into a computer-based system. This design is important to avoid errors in the data input process and to show the management the correct direction for getting accurate information from the computerized system.

2. It is achieved by creating user-friendly screens for data entry that can handle large volumes of data. The goal of designing input is to make data entry easier and free from errors. The data entry screen is designed so that all data manipulations can be performed; it also provides record-viewing facilities.

3. When data is entered, it is checked for validity. Data can be entered with the help of screens, and appropriate messages are displayed when needed so that the user is never left in a maze. The objective of input design is thus to create an input layout that is easy to follow, as in the validation sketch below.
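A hedged sketch of input validation for this project's main input, the network system's IP address (the prompt and messages are illustrative), using the framework's IPAddress.TryParse:

    using System;
    using System.Net;

    class InputValidation
    {
        static void Main()
        {
            Console.Write("Enter network system IP address: ");
            string input = Console.ReadLine();

            IPAddress address;
            if (IPAddress.TryParse(input, out address))
                Console.WriteLine("Accepted: {0}", address);
            else
                Console.WriteLine("Invalid IP address, please re-enter.");  // message guides the operator
        }
    }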

3.2 OUTPUT DESIGN
A quality output is one that meets the requirements of the end user and presents the information clearly. In any system, the results of processing are communicated to the users and to other systems through outputs. In output design, it is determined how the information is to be displayed for immediate need, as well as the hard copy output. Output is the most important and direct source of information for the user, and efficient, intelligent output design improves the system's relationship with the user and supports decision-making.

1. Designing computer output should proceed in an organized, well-thought-out manner; the right output must be developed while ensuring that each output element is designed so that people find the system easy and effective to use. When analysts design computer output, they should identify the specific output needed to meet the requirements.

2. Select methods for presenting information.

3. Create documents, reports or other formats that contain information produced by the system.

The output form of an information system should accomplish one or more of the following objectives:

- Convey information about past activities, current status or projections of the future.
- Signal important events, opportunities, problems or warnings.
- Trigger an action.
- Confirm an action.

3.3 DATABASE DESIGN
Databases are normally implemented using a package called a Database Management System (DBMS). Each particular DBMS has somewhat unique characteristics, and as such, general techniques for database design are limited. One of the most useful methods of analyzing the data required by the system for the data dictionary was developed from research into relational databases, particularly the work of E. F. Codd. This method of analyzing data is called normalization. Unnormalized data is converted into normalized data in three stages, each of which has a procedure to follow.

NORMALIZATION:

The first stage of normalization reduces the data to its first normal form by removing repeating items and showing them as separate records that include the key fields of the original record.

The next stage, reduction to second normal form, checks that in each record that is in first normal form, all items are entirely dependent on the key of the record. If a data item depends not on the key of the record but on another data item, it is removed together with its key to form another record. This is repeated until each record contains only data items that are entirely dependent on the key of their record.

The final stage of the analysis, reduction to third normal form, involves examining each record that is in second normal form to see whether any items are mutually dependent. Any such items are removed to a separate record, leaving one of them behind in the original record and using it as the key of the newly created record.
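As a brief illustration (a generic textbook example, not data from this project), consider an unnormalized order record Order(OrderNo, Date, {ItemCode, Description, Qty}) with a repeating item group. First normal form moves the repeating group into its own record keyed by (OrderNo, ItemCode). Second normal form then removes Description, which depends only on ItemCode rather than on the whole key, into a separate Item(ItemCode, Description) record. Finally, if the order record also carried (CustomerNo, CustomerName), third normal form would move CustomerName into a Customer(CustomerNo, CustomerName) record, because it depends on CustomerNo, another data item, rather than on the record's key.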

BUSINESS MODELING:

The information flow among business functions is modeled in a way that answers the following questions: What information drives the business process? What information is generated? Who generates it? Where does the information go? Who processes it?

DATA MODELING:

The information flow defined in the business modeling phase is refined into a set of data objects needed to support the business. The characteristics (called attributes) of each object are identified, and the relationships between these objects are defined.

PROCESS MODELING:

The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting or retrieving data objects.

3.4. SYSTEM DEVELOPMENT

3.4.1. DESCRIPTION OF MODULES


1. System Authentication
2. Get Shared Folders
3. Local Machine and Remote Machine
4. Log Manager
5. Folder Watcher

System Authentication:
In this module the user types the IP address of the network system and logs in to monitor file access on the corresponding system.

Get Shared Folders:

The shared folder view shows which files are shared in the network, along with the file path, description and status on the corresponding network system.
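One plausible way to retrieve this information in .NET (a sketch under stated assumptions, not necessarily the project's exact implementation) is a WMI query against the Win32_Share class; "." stands for the local machine, and a remote system's address could be substituted given suitable credentials:

    using System;
    using System.Management;   // add a reference to System.Management.dll

    class SharedFolders
    {
        static void Main()
        {
            ManagementObjectSearcher searcher = new ManagementObjectSearcher(
                @"\\.\root\cimv2",
                "SELECT Name, Path, Description, Status FROM Win32_Share");

            // Each result is one shared folder on the target machine.
            foreach (ManagementObject share in searcher.Get())
                Console.WriteLine("{0}  {1}  {2}  {3}",
                    share["Name"], share["Path"], share["Description"], share["Status"]);
        }
    }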

Local Machine and Remote Machine:

A remote machine is any system connected to the LAN. The remote machine provides all the features present on the local machine: shared folders, current session, accessed folders and folder watcher.

Log Manager:
The corresponding network system's IP address, username, access time, idle time and remote OS are displayed in the Current Session view. The Accessed Folder view shows the system username and the shared folders accessed.

Folder Watcher:
The watcher module displays which folders have been created, deleted and renamed on each network system. The File menu is used to refresh the view and to exit the application; the Settings menu provides an option to clear the history.
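This behaviour maps naturally onto the .NET FileSystemWatcher class. The following sketch (the UNC path is hypothetical, and the real module may differ) reports created, deleted and renamed items in a watched share:

    using System;
    using System.IO;

    class FolderWatcherSketch
    {
        static void Main()
        {
            // Watch a shared folder; the path here is only an example.
            FileSystemWatcher watcher = new FileSystemWatcher(@"\\192.168.1.10\Shared");

            watcher.Created += delegate(object s, FileSystemEventArgs e)
                { Console.WriteLine("Created: " + e.FullPath); };
            watcher.Deleted += delegate(object s, FileSystemEventArgs e)
                { Console.WriteLine("Deleted: " + e.FullPath); };
            watcher.Renamed += delegate(object s, RenamedEventArgs e)
                { Console.WriteLine("Renamed: " + e.OldFullPath + " -> " + e.FullPath); };

            watcher.IncludeSubdirectories = true;
            watcher.EnableRaisingEvents = true;   // start receiving notifications

            Console.WriteLine("Watching... press Enter to exit.");
            Console.ReadLine();
        }
    }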

4. TESTING AND IMPLEMENTATION

4.1 SOFTWARE TESTING


Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. Testing is vital to the system.

Errors can be injected at any stage during development. During testing, the program is executed and checked for correctness. A series of tests is performed on the proposed system before it is delivered to the user.

UNIT TESTING

In unit testing, testing is performed on each module, which is why it is also known as module testing. This testing was carried out during the programming stage itself, and all the modules were found to work satisfactorily with regard to their expected output. Unit testing is a method by which individual units of source code are tested to determine whether they are fit for use. A unit is the smallest testable part of an application; in procedural programming a unit may be an individual function or procedure. Unit tests are created by programmers or, occasionally, by white-box testers.

Unit test cases embody characteristics that are critical to the success of the unit. These
characteristics can indicate appropriate/inappropriate use of a unit as well as negative behaviors
that are to be trapped by the unit. A unit test case, in and of itself, documents these critical
characteristics, although many software development environments do not rely solely upon code
to document the product in development. Unit testing provides a sort of living documentation of
the system. Developers looking to learn what functionality is provided by a unit and how to use it
can look at the unit tests to gain a basic understanding of the unit API.
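A small example of such living documentation, written here against a hypothetical IpValidator helper and assuming the NUnit framework (any similar xUnit-style tool would work the same way):

    using NUnit.Framework;

    [TestFixture]
    public class IpValidatorTests
    {
        [Test]
        public void AcceptsValidAddress()
        {
            Assert.IsTrue(IpValidator.IsValid("192.168.1.10"));
        }

        [Test]
        public void RejectsGarbage()
        {
            Assert.IsFalse(IpValidator.IsValid("not-an-ip"));
        }
    }

    // The unit under test: the smallest testable part of the application.
    public static class IpValidator
    {
        public static bool IsValid(string text)
        {
            System.Net.IPAddress dummy;
            return System.Net.IPAddress.TryParse(text, out dummy);
        }
    }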

ACCEPTANCE TESTING

Acceptance testing is black-box testing performed on a system (e.g. software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery. It is also known as functional testing, black-box testing, release acceptance, QA testing, application testing, confidence testing, final testing, validation testing, or factory acceptance testing.

Acceptance testing generally involves running a suite of tests on the completed system. Each individual test, known as a case, exercises a particular operating condition of the user's environment or feature of the system, and results in a pass or fail (Boolean) outcome. There is generally no degree of success or failure. The test environment is usually designed to be identical, or as close as possible, to the anticipated user's environment, including its extremes. Each test case must be accompanied by test case input data or a formal description of the operational activities to be performed (or both), intended to thoroughly exercise the specific case, together with a formal description of the expected results.

TYPES OF ACCEPTANCE TESTING

Typical types of acceptance testing include the following:

USER ACCEPTANCE TESTING


This may include factory acceptance testing, i.e. the testing done by users at the factory before the system is moved to its own site, after which site acceptance testing may be performed by the users at the site.

OPERATIONAL ACCEPTANCE TESTING


Also known as operational readiness testing, this refers to the checking done to a system
to ensure that processes and procedures are in place to allow the system to be used and
maintained.

CONTRACT AND REGULATION ACCEPTANCE TESTING


In contract acceptance testing, a system is tested against acceptance criteria as
documented in a contract, before the system is accepted. In regulation acceptance testing, a
system is tested to ensure it meets governmental, legal and safety standards.

ALPHA AND BETA TESTING
Alpha testing takes place at developers' sites, and involves testing of the operational
system by internal staff, before it is released to external customers. Beta testing takes place at
customers' sites, and involves testing by a group of customers who use the system at their own
locations and provide feedback, before the system is released to other customers. The latter is
often called “field testing”.

INTEGRATION TESTING

One module can have an adverse effect on another, and functions that work separately may not produce the desired results when combined. Integration testing is a systematic technique for constructing the program structure and conducting tests to uncover errors associated with interfaces. All the modules are combined in this testing step and the entire program is tested as a whole. The errors uncovered are corrected before the next testing step.

BLACK BOX TESTING

The black-box approach is a testing method in which test data are derived from the functional requirements without regard to the final program structure, because only the functionality of the software is of concern. In black-box testing, the functionality is determined solely by observing the outputs produced for the corresponding inputs. In this testing, various inputs were exercised and the outputs were compared with the expected results.

WHITE BOX TESTING

White-box testing is a software testing approach predicated on close examination of procedural detail. It provides test cases that exercise specific conditions and loops. White-box testing was carried out in order to guarantee that:

- All independent paths within a module were exercised at least once.
- All logical decisions were exercised on both their true and false sides.

VALIDATION TESTING

Computer input procedures are designed to detect errors in the data at a lower level of detail than is within the capability of the control procedures. Validation succeeds when the software functions in the manner that can reasonably be expected by the customer.

4.2. IMPLEMENTATION
The implementation phase focuses on how the engineer develops the system. It deals with how data is to be structured, how procedural details are to be implemented, how interfaces are characterized, how the design will be translated into a programming language, and how the testing will be performed. The methods applied during the development phase vary, but three specific technical tasks should always occur:

- Software design
- Code generation
- Software testing

The system group is charged with the responsibility of developing a new system that meets the requirements, and with the design and development of the new information system. The sources of these study facts are a variety of users at all levels throughout the organization.

Stages of Development of a System

- Feasibility assessment
- Requirement analysis
- External design
- Architectural design
- Detailed design
- Coding
- Debugging
- Maintenance
Feasibility Assessment
In the feasibility stage the problem was defined, criteria for choosing a solution were developed, possible solutions were proposed, the costs and benefits of the system were estimated, and a course of action was recommended.

Requirement Analysis
During requirement analysis, high-level requirements, such as the capabilities the system must provide in order to solve the problem, were identified. The functional and performance requirements specified during the initial planning were elaborated and made more specific in order to characterize the features the proposed system will incorporate.

External Design
External design of any software development involves conceiving, planning out and specifying the externally observable characteristics of the software product. These characteristics include user displays, report formats, external data sources, data links and functional characteristics.

Internal Design: Architectural and Detailed Design

Internal design involves conceiving, planning out and specifying the internal structure and processing details, in order to record the design decisions and to be able to indicate why certain alternatives were chosen in preference to others. This phase also includes elaboration of the test plans and provides blueprints for the implementation, testing and maintenance activities. The work products of internal design are the architectural structure specification, the details of the algorithms, the data structures and the test plan. In architectural design the conceptual view is refined.
Detailed Design
Detailed design involves specifying the algorithmic details concerned with data representation, the interconnections among data structures, and the packaging of the software product. This phase emphasizes semantic issues more and syntactic details less.

Coding
This phase involves the actual programming, i.e., translating the detailed design into source code using an appropriate programming language.

Debugging
This stage is concerned with removing errors from the programs and making them completely error free.

Maintenance
During this stage the systems are loaded and put into use. They are also modified according to the requirements of the user. These modifications include making enhancements to the system and removing problems.

5. CONCLUSION & FUTURE ENHANCEMENT

5.1 CONCLUSION
Existing tools cannot be used by beginners, as they are designed for professional users with a strong understanding of networking. Based on research into related work, this project attempted to provide a simple but functional network monitoring system that is useful for professional and novice users alike. The primary purpose of implementing this project was to provide a simple-to-use network monitoring system containing most of the necessary functionality. Besides this, the main focus of the proposed system is network security and management: monitoring and sniffing packets allows the administrator to control the security of the entire network. The system implemented for this thesis can be used by students and novice users, so the application can also serve educational and training purposes.

5.2 SCOPE FOR FUTURE ENHANCEMENT

As future work, some features could be added to this application, such as statistical reporting on the monitoring results and data transfer using the File Transfer Protocol (FTP). Additionally, this application, which was designed to run on the Windows operating system, could be rewritten in another programming language for use on other operating systems.

APPENDICES

A. USE CASE DIAGRAM

[Use case diagram: the USER interacts with the Network System through the following use cases: enter network IP address; view shared folders and details; get username, folder name and access time; get details of created, deleted and renamed folders; clear log history.]
B. FLOW CHART

C. SAMPLE INPUT
Main Form

Shared Folder Form

Current Session

Accessed Folder Form

D. SAMPLE OUTPUT

Watcher Form

Renamed Folder Details Form

Refresh Time Menu option

Menu Option Form

