
CHAPTER-I

INTRODUCTION

1.INTRODUCTION
Cloud computing turns IT resources into an on-demand utility for the business sector. On-demand cloud services offer flexibility by allowing users to instantly obtain the resources they need, such as online databases, computing power, and data storage. Customers may use phones, laptops, and desktop computers to access these services, as shown in Fig. 1. Cloud storage delivers data storage and management capabilities by keeping files in the cloud, and it can also assist in processing data, since it is able to offer a number of services at once. The cloud helps to save resources and to cut down on the costs associated with storing data by lowering communication costs and increasing accessibility. Despite these considerable advantages, people remain leery of entrusting their data to the cloud because of the numerous concerns about privacy and data protection. Uploading data to a cloud server can compromise users' privacy in many ways, since the cloud service provider cannot be regarded as fully trustworthy.
In order to protect the privacy of their data stored in the cloud, individuals usually encrypt their information before uploading it. However, general-purpose encryption methods make subsequent data processing complex. Attribute-based encryption (ABE) is a promising way to overcome this challenge. Sahai and Waters introduced ABE in 2005 [2] as a data confidentiality tool offering fine-grained access control, and it is widely regarded as an effective means of encrypting data stored in the cloud. With ABE, the data owner (DO) can manage data access even when the data are made available to many people: the DO embeds an access policy that regulates data access, so that only users whose attributes match the policy can read the uploaded data. A user who does not satisfy the access structure cannot learn the data contents. Consider, for example, data access control inside a business. The CEO of an organisation may want to send a very sensitive file through the cloud to the managers of the sales, planning, and R&D departments. To do so, he/she can use the ABE system: first, he/she encrypts the file and specifies an access policy that grants access only to the managers of those three departments. The encrypted file and the access structure are then uploaded to the cloud server (CS). The file is accessible only to the three department managers, and they are the only ones who learn its contents; everyone else, even with access to the ciphertext, remains ignorant of what it contains.
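To make the CEO example concrete, the short sketch below models that access policy in plain Java as "Manager AND (Sales OR Planning OR R&D)" and checks whether a user's attributes satisfy it. The Policy class, attribute names, and departments are hypothetical illustrations of the access-structure idea only; no ABE cryptography is performed here.

import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Hypothetical illustration of the access structure "Manager AND (Sales OR Planning OR R&D)".
// It only decides whether a user's attributes satisfy the policy; it performs no encryption.
public class AccessPolicyDemo {

    static class Policy {
        private final String requiredRole;
        private final Set<String> allowedDepartments;

        Policy(String requiredRole, String... departments) {
            this.requiredRole = requiredRole;
            this.allowedDepartments = new HashSet<>(Arrays.asList(departments));
        }

        // Satisfied only when the role matches AND the department is one of those listed.
        boolean isSatisfiedBy(String role, String department) {
            return requiredRole.equals(role) && allowedDepartments.contains(department);
        }
    }

    public static void main(String[] args) {
        Policy policy = new Policy("Manager", "Sales", "Planning", "R&D");

        System.out.println(policy.isSatisfiedBy("Manager", "R&D"));     // true  -> may read the file
        System.out.println(policy.isSatisfiedBy("Manager", "Finance")); // false -> ciphertext stays opaque
        System.out.println(policy.isSatisfiedBy("Clerk", "Sales"));     // false
    }
}

In a real CP-ABE deployment the same check is enforced cryptographically: a secret key embedding the attributes "Clerk" and "Sales" simply fails to decrypt the ciphertext.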

The vast majority of proposals using ABE have a very positive impact on secure data exchange. Nevertheless, most of them completely overlook the personal privacy of the DO and the other users. To enable simple retrieval of data, the access policy is always sent along with the ciphertext, and in some constructions this allows sensitive information to be gleaned by outsiders. Imagine that a patient wants to tell his or her doctor and certain family members about a diagnosis, but does not want everyone to know. Fig. 2 shows that if the patient encrypts the PHR using a standard method, an attacker can still learn basic information about the users even though the data itself remains unreadable. Because the access policy mentions "cardiopathy" and "DC hospital," a malevolent third party could infer that the patient suffers from a heart condition and is being treated at a DC hospital. Thus, a primary issue is to protect the confidentiality of the shared data while simultaneously preserving the privacy of the users involved.
1.1 SCOPE OF THE PROJECT
CP-ABE with a hidden access policy maintains data confidentiality while also keeping user privacy intact. This project introduces a privacy-preserving CP-ABE scheme with an efficient verification mechanism, intended to address the issues outlined above. The proposed scheme achieves selective security under the decisional n-BDHE and decision linear assumptions, and the findings bear out the advantages of the proposed approach.
1.2 EXISTING SYSTEM
 First, Nishide et al. [8] developed a scheme that partially hides the access policy by splitting each attribute into two parts, a name and a value, and concealing the value. Because the policy is hidden, the adversary learns nothing about the users; still, the scheme incurs a significant computational cost. In 2009, Waters proposed a CP-ABE system based on the dual system encryption technique [7], which opened up a new privacy-preserving option for CP-ABE.
 Lai et al. then utilised this technique to construct two hidden-policy CP-ABE (HP-CP-ABE) schemes, both of which are fully secure. The first of the two supports only AND gates, whereas the second offers a more expressive access structure based on the linear secret-sharing scheme (LSSS). However, in both schemes the sizes of the secret keys and the ciphertexts grow linearly with the number of attributes, which increases the storage and computation overhead.
 Next, Rao et al. proposed a fully secure HP-CP-ABE scheme. Compared with the earlier schemes, the secret key size and ciphertext size in this method are constant, so it is more efficient. However, although the construction is compact, it only handles AND gates, which makes it inflexible. Zhang et al. utilised the method of Abdalla et al. to build a hierarchical HP-CP-ABE system that generates constant-size secret keys and provides fast decryption.
 Recently, Huang et al. showed how HP-CP-ABE secret keys can be made constant in size while reducing the computation cost. However, the scheme only achieves selective security, which is a comparatively weak security model. An issue that must not be overlooked is that none of the strategies described above helps users with decryption: because the policy is hidden, users may have to spend time trying every conceivable attribute combination before a message is decoded. Users therefore need a way to determine easily and accurately whether they can decrypt a ciphertext.
 To reduce the computational load on users, Zhang et al. proposed an HP-CP-ABE scheme with an authority verification step, whose user verification capability lets end users confirm that they are legitimate users. However, privacy leakage was later discovered in its match phase.
Disadvantages
 In the existing work, the system either cannot scale as more users join or struggles to keep its overheads small.
 Existing encryption schemes are restricted to a few fixed key sizes, such as 256-bit or 512-bit keys.

1.3 PROPOSED SYSTEM
 A new HP-CP-ABE scheme is constructed that supports authority verification while protecting user privacy and preserving data confidentiality.
 To spare users unnecessary computation during decryption, an authority identification (verification) mechanism lets a user quickly check whether they have permission to decrypt and then successfully do so (a toy sketch of this check follows this list).
 In the new approach, the size of private keys remains constant regardless of the number of attributes, so both transmission and storage cost less.
 In addition, the proposal establishes anonymity in a concise way through a hybrid sequence of games.
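The authority identification check referenced above can be pictured with the toy sketch below. It is not the proposed scheme's actual construction: a plain SHA-256 hash stands in for the real pairing-based verification value, and the attribute names are invented. The point is only the workflow of a cheap local check before the expensive decryption step.

import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.util.TreeSet;

// Toy "check before you decrypt" workflow; the real HP-CP-ABE scheme uses
// pairing-based verification values, not a plain hash over attribute strings.
public class MatchPhaseDemo {

    // Commitment published alongside the ciphertext (placeholder for the scheme's verification value).
    static String commit(TreeSet<String> attributes) throws Exception {
        MessageDigest md = MessageDigest.getInstance("SHA-256");
        byte[] digest = md.digest(String.join(",", attributes).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (byte b : digest) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    public static void main(String[] args) throws Exception {
        TreeSet<String> policyAttributes = new TreeSet<>();
        policyAttributes.add("Doctor");
        policyAttributes.add("Cardiology");
        String published = commit(policyAttributes); // shipped with the ciphertext

        TreeSet<String> myAttributes = new TreeSet<>();
        myAttributes.add("Doctor");
        myAttributes.add("Cardiology");

        // Cheap local check: only attempt the costly decryption when the values match.
        if (published.equals(commit(myAttributes))) {
            System.out.println("Attributes match - proceed to decryption.");
        } else {
            System.out.println("No match - skip the expensive decryption attempt.");
        }
    }
}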
Advantages
 The system is safer because the data are kept secret from anyone unauthorised to see them, including the cloud server.
 Since ciphertext cannot be shared and decrypted across users of different groups, the system is more secure.

CHAPTER-II
SYSTEM ANALYSIS

2.SYSTEM ANALYSIS
2.1 PRELIMINARY INVESTIGATION
A good way to begin the creation of a project is to build a platform that is simple to use and able to send and receive messages, and that also includes a search engine, contact book, and entertainment such as games. Once our project guide and the organisation give it the go-ahead, the project starts with a preliminary study. The exercise is divided into three sections:
 Request Clarification
 Feasibility Study
 Request Approval
REQUEST CLARIFICATION
After getting permission from the project guide and the organisation
for the project request, the request must be thoroughly reviewed to identify
the systems that will be necessary.
Our project's primary goal is to provide a system for users who are employed by the business and communicate with one another over the Local Area Network (LAN). In today's hectic life, people expect everything to be supplied ready-made, and the rise of the Internet has led to the rapid development of portals.
FEASIBILITY ANALYSIS
Preliminary investigation concluded that the system request is feasible, provided it can be carried out within the available time and budget. Several aspects must be examined:
 Operational Feasibility
 Economic Feasibility
 Technical Feasibility
Operational Feasibility
The examination of a system's operational feasibility focuses on how well the system will work in practice. The proposed system relieves much of the strain on the project manager and helps him monitor his project's progress properly. Automating these routine manual tasks will certainly decrease the time and effort they take. According to this assessment, the system can be implemented.

Economic Feasibility
An evaluation of the monetary rationale for a computing project is known as economic feasibility or cost-benefit analysis. The hardware was already available when the project started, so the project costs are quite low. Employees in any branch of the company may use this tool whenever they are connected to the local area network, and the company will use its existing resources to build the Virtual Private Network. Therefore, the proposal is commercially viable.
Technical Feasibility
In Roger S. Pressman's view, technical feasibility refers to the evaluation of the organisation's technical resources. To run the system properly, the company requires computers that can connect to the Internet and the intranet and that have web browsers. The system was created to be platform-independent and is developed with the aid of Java Server Pages, JavaScript, HTML, SQL Server, and WebLogic Server. The technical aspects of the plan have been examined: the system has already been designed and can be built using the existing resources.
REQUEST APPROVAL
Not every request is worth pursuing. So many projects are requested that only a handful can be completed, so projects that are both feasible and desirable must be prioritised. The cost, priority, completion time, and required personnel are gathered to decide where a request should be placed on the project list. Development may begin once approval based on these criteria has been obtained.
SYSTEM DESIGN AND DEVELOPMENT
INPUT DESIGN
Developers must pay close attention to input design throughout the software creation cycle, since it plays a crucial part in the program's whole life. Data entry is designed to provide the best possible data for the application. The inputs must be designed well so that any mistakes arising during data entry are limited. The input forms include validation controls that perform limit, range, and other related checks, as explained in software engineering concepts. Most of the modules have an input screen, and when the user makes an error, error messages help advise him so that incorrect entries are not produced; the module design section covers this in more detail. Input design refers to the transformation of human input into a machine-readable format.

The input design aims to make data entry simpler and to reduce mistakes; the quality of the input design largely dictates the input error rate. The program was made with the user in mind: during processing, the cursor is placed at the location where data must be entered, which the form design ensures, and in some situations the user is also given the opportunity to pick a suitable input from several options. Every piece of data has to be validated. Error messages are shown if a user enters any wrong information, and the user may go on to the next pages only after completing the inputs on the current page.
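As a concrete illustration of the limit and range validations described above, the small helper below checks a text field and a numeric field on the server side. The field rules, lengths, and messages are hypothetical examples, not the project's actual validation rules.

// Hypothetical server-side validation helper; bounds and messages are examples only.
public class FieldValidator {

    // Text field: must not be blank and must fit the declared size.
    public static String validateText(String value, int maxLength) {
        if (value == null || value.trim().isEmpty()) {
            return "This field cannot be left blank.";
        }
        if (value.length() > maxLength) {
            return "Input exceeds the allowed length of " + maxLength + " characters.";
        }
        return null; // null means the value is acceptable
    }

    // Numeric field: digits 0-9 only, within an allowed range.
    public static String validateNumber(String value, int min, int max) {
        if (value == null || !value.matches("\\d+")) {
            return "Only the digits 0-9 are allowed here.";
        }
        int n = Integer.parseInt(value);
        if (n < min || n > max) {
            return "Value must lie between " + min + " and " + max + ".";
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(validateText("", 30));          // error message
        System.out.println(validateNumber("12a", 1, 120)); // error message
        System.out.println(validateNumber("45", 1, 120));  // null (valid)
    }
}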

OUTPUT DESIGN
In the end, the project manager and the team members need an easy way to communicate, and that communication comes through the system's output. In short, the VPN enables project leaders to manage their clients by letting them create new clients, assign them new projects, maintain project records, and provide each client with folder-level access according to the projects assigned to them. Once a project is finished, the client may be given a new project. At the very beginning, the procedures for verifying users are established: a new user may register on their own, but the administrator must validate the new user and allocate projects. When the application is launched for the first time, the server must be started and then accessed through a browser. The server functions as the administrator of the network, and the connected computers act as its clients. The system is designed to be very accessible and easy to comprehend, even for new users.
2.2 FUNCTIONAL REQUIREMENTS
The admin must log in with a valid user name and password in order to access the system. After successfully logging in, the admin may perform a variety of activities, such as: view all users, view all owners, add spam filter words, view all spam filter words in one place, check the spam inbox, view all products, view all purchase details, view all user account details, view all purchase requests, view all purchase history, view the spam behaviour of every user, view all criminals who have signed up for the service, and view the cyber security report and the results for all account types.

The administrator has access to a list of all registered users. In this section, the administrator may see each user's information, such as their name, email address, and mailing address, and may approve the users. There are n owners in the system. Before beginning any activities, an owner must complete the registration process, after which their information is kept in the database. Once registration is complete, the owner must log in using the approved user name and password. After a successful login, the owner can perform activities such as viewing his or her profile, adding products, viewing uploaded products, viewing all purchased products, viewing the total bill, viewing cybercrime victims and perpetrators, checking the reliability of an online product, and recovering any lost data.
2.3 NON -FUNCTIONAL REQUIREMENTS:
 Requirements: we have a little wiggle room.
 Robustness is the ability to withstand stress or pressure, and maintainability means the system is made easier to maintain by anticipating needed repairs; our project offers both.
 Reliability is the capacity to keep fulfilling its duties under adverse conditions; our project offers this as well.
 The size of an application has a big impact: a small application is more efficient. The data file we have generated is 5.05 megabytes in size.
 Speed also matters; the code runs fast since the number of lines is small.
 Power consumption is very significant in battery-powered devices, and the battery life may be set at the requirements stage, since customers cannot be expected to provide unlimited power for such devices. Because fewer lines of code are executed, the CPU takes less time and therefore needs less power.

SYSTEM REQUIREMENTS
H/W System Configuration:-

Software Requirements:

CHAPTER-III
SYSTEM DESIGN

3.SYSTEM DESIGN
3.1 DATA FLOW DIAGRAM:
 The DFD is also known as a bubble chart. It is a graphical representation that may be used to show how a system is built in terms of the input data, the processing carried out on those data, and the resulting output.
 The data flow diagram (DFD) is one of the most essential modelling tools. It is used to model the system's components: the processes, the data they use, the external parties that interact with the system, and the information that flows within the system.
 A DFD shows how information moves through the system and how it is modified by a series of transformations; it is a graphical technique that depicts information flow between input and output.
 DFDs may be used to describe a system at any level of abstraction, and a DFD may be partitioned into levels that represent increasing information flow and growing functional detail.

Fig 3.1 Data Flow Diagram

3.2 UML DIAGRAMS
3.2.1 Sequence Diagram
A sequence diagram is a kind of interaction diagram that shows how processes operate with one another and in what order. It is a construct of a message sequence chart. Sequence diagrams, which depict processes graphically, are also known as event diagrams, event scenarios, and timing diagrams.

Fig 3.2 UML Diagrams

3.2.2 Use case diagram
A use case diagram in the Unified Modeling Language (UML) is a kind of behavioural diagram produced from use-case analysis. It gives a visual picture of the system's functionality, showing the actors, their goals, and the dependencies between the use cases. A use case diagram illustrates which functions of the system are performed for each actor, and it makes it possible to show who plays which role in the system.
Use case: Cloud Server

Fig 3.3 Use Case: Cloud Server

Use case: Authority

Fig 3.4 Use case: Authority

Fig 3.5 Use Case Owner

3.3 ER DIAGRAMS

Fig 3.6 ER Diagrams Flow Chart User

Fig 3.7 ER Diagrams Flow Cloud Server

Fig 3.8 ER Diagrams Flow Authority

Fig 3.9 ER Diagrams Flow Chart Owner

3.4 DATABASE TABLES
3.4.1 Owner
3.4.2 User

3.4.3 Transaction

3.4.4 Patient details

3.5 SYSTEM ARCHITECTURE

CHAPTER-IV
IMPLEMENTATION

4.IMPLEMENTATION
DATA OWNER
This module includes certain departmental assignments and job classifications,
and they need verification before uploading files to the cloud server and performing
other activities. Log in and look at your profile. Attach encrypted patient information
(including name, address, email, illness, age, gender, phone number, etc.) with PID,
and then encrypt PID. (Note: if you encrypt PID, all of the patient's other information
will be hidden as well.) patient's name information. to provide permissions to the
department and career of each user All the patient information submitted to your
online portal may be searched and sorted by date and time. Check every aspect of the
access control with date and time shown.
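The symmetric part of "upload encrypted patient details" can be pictured with the generic AES-GCM sketch below. This is only an assumption-level illustration of encrypting a record before upload; in the actual system the symmetric key would itself be protected by the HP-CP-ABE layer, and the record fields shown are invented.

import java.nio.charset.StandardCharsets;
import java.security.SecureRandom;
import java.util.Base64;
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import javax.crypto.spec.GCMParameterSpec;

// Generic AES-GCM sketch of "encrypt the patient record before upload".
// In the real scheme the AES key would be wrapped under the HP-CP-ABE access policy.
public class PatientRecordEncryptor {

    public static void main(String[] args) throws Exception {
        String patientRecord = "PID=1007;name=J.Doe;disease=cardiopathy;age=54"; // invented sample data

        // Per-record symmetric key.
        KeyGenerator keyGen = KeyGenerator.getInstance("AES");
        keyGen.init(256);
        SecretKey key = keyGen.generateKey();

        // Fresh 12-byte IV for GCM.
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, key, new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(patientRecord.getBytes(StandardCharsets.UTF_8));

        // Only the IV and ciphertext would be uploaded to the cloud server.
        System.out.println("Upload: " + Base64.getEncoder().encodeToString(iv)
                + ":" + Base64.getEncoder().encodeToString(ciphertext));
    }
}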
CLOUD SERVER
The cloud server authorises owners and users and performs activities such as: viewing patient information in decrypted form, viewing access-control details, viewing every transaction (uploads, downloads, and searches) in the transaction list, viewing the date, time, request, and response data for secret key management, viewing a chart of how many diseases have recurred, viewing patient rank in a chart, and viewing the number of patients whose data were leaked through an improper secret key.
AUTHORITY
To start, this module implements activities such as logging in, viewing secret key request details, granting or revoking access through secret keys, and generating secret keys. The authority also uses the module to identify and blacklist attackers who exploit key vulnerabilities.
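The grant/revoke/blacklist workflow just described can be sketched as a simple in-memory bookkeeping class. This is a hypothetical illustration only: a random token stands in for the real HP-CP-ABE secret key, and no cryptographic key generation takes place.

import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;
import java.util.UUID;

// Hypothetical in-memory sketch of the authority workflow: grant a key, revoke it,
// and blacklist attackers. A random token stands in for the real ABE secret key.
public class AuthorityDemo {

    private final Map<String, String> issuedKeys = new HashMap<>(); // user -> key token
    private final Set<String> blacklist = new HashSet<>();

    public String grantKey(String user) {
        if (blacklist.contains(user)) {
            throw new IllegalStateException(user + " is blacklisted and cannot receive a key.");
        }
        String token = UUID.randomUUID().toString();
        issuedKeys.put(user, token);
        return token;
    }

    public void revokeKey(String user) {
        issuedKeys.remove(user);
    }

    public void blacklistAttacker(String user) {
        blacklist.add(user);
        issuedKeys.remove(user); // a misused key is withdrawn immediately
    }

    public static void main(String[] args) {
        AuthorityDemo authority = new AuthorityDemo();
        System.out.println("Key granted to doctor1: " + authority.grantKey("doctor1"));
        authority.blacklistAttacker("mallory");
        authority.revokeKey("doctor1");
    }
}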
END USER
The end user registers with a department (Cardiology, Nephrology, etc.) and a profession (Doctor, Nurse, Surgeon, etc.) and then logs in to the cloud to perform various operations. Users can view their profiles and search for patient details using a search tool. To display patient files and information, a user must request access control and be granted the corresponding secret key; only users authorised in this way can locate and download a file, while it remains hidden from everyone else.

SAMPLE CODE
Homepage
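The original homepage source is not reproduced here, so the following is a minimal stand-in sketch: a servlet that renders a homepage with links to the four login pages of the system. The URL patterns and page names are assumptions and would be mapped in web.xml.

import java.io.IOException;
import java.io.PrintWriter;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Minimal homepage servlet sketch; it would be mapped to a URL such as /home in web.xml.
// The page text and link targets are illustrative placeholders.
public class HomePageServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        response.setContentType("text/html");
        PrintWriter out = response.getWriter();
        out.println("<html><head><title>Privacy-Preserving CP-ABE Portal</title></head><body>");
        out.println("<h1>Secure PHR Sharing in the Cloud</h1>");
        out.println("<ul>");
        out.println("<li><a href='ownerLogin.jsp'>Data Owner Login</a></li>");
        out.println("<li><a href='userLogin.jsp'>End User Login</a></li>");
        out.println("<li><a href='authorityLogin.jsp'>Authority Login</a></li>");
        out.println("<li><a href='serverLogin.jsp'>Cloud Server Login</a></li>");
        out.println("</ul>");
        out.println("</body></html>");
    }
}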

CHAPTER-V
SYSTEM TESTING

5.SYSTEM TESTING
TESTING METHODOLOGIES
The following are the Testing Methodologies:

Unit Testing
Unit testing focuses on verification at the level of modules, the smallest units of software design. It attempts to cover every path in a module's control structure so as to discover as many errors as possible, and it is designed to ensure that each module works correctly on its own, hence the name unit testing. The module interfaces are validated to ensure that they conform to the design specifications. In this testing, each module is tested separately and its behaviour is checked against the design specification; every critical path is verified for accurate outcomes, and every error-handling scenario is put through rigorous testing.
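As an illustration of module-level testing, the JUnit 4 test below exercises a small numeric-field validation rule in isolation; the rule and its bounds are example assumptions, not the project's real test cases.

import static org.junit.Assert.assertEquals;
import static org.junit.Assert.assertNull;
import org.junit.Test;

// Illustrative JUnit 4 unit test: each test checks one behaviour of a small routine in isolation.
public class FieldValidatorTest {

    // Copy of the rule under test: digits only, value between 1 and 120.
    private String validateNumber(String value) {
        if (value == null || !value.matches("\\d+")) {
            return "Only the digits 0-9 are allowed here.";
        }
        int n = Integer.parseInt(value);
        return (n < 1 || n > 120) ? "Value must lie between 1 and 120." : null;
    }

    @Test
    public void acceptsDigitsInRange() {
        assertNull(validateNumber("45"));
    }

    @Test
    public void rejectsNonDigits() {
        assertEquals("Only the digits 0-9 are allowed here.", validateNumber("12a"));
    }

    @Test
    public void rejectsOutOfRangeValues() {
        assertEquals("Value must lie between 1 and 120.", validateNumber("500"));
    }
}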
Integration Testing
Integration testing addresses the difficulties that arise when verifying and constructing the program as its modules are combined. Once the modules are integrated, a set of rigorous tests is performed. The goal is to take the unit-tested modules and build a program structure that matches the design.
The following are the types of Integration Testing:
1. Top Down Integration
Top-down integration is an incremental approach to constructing the program structure. Beginning with the main program module, the modules in the control hierarchy are integrated as one moves downwards. A module may be incorporated into the structure in either a depth-first or a breadth-first manner. Testing starts at the main module, and the stubs are gradually replaced by the actual modules.

2. Bottom-up Integration
This approach starts building and testing with the lower-level modules. Because modules are integrated from the bottom up, the processing required by lower-level modules is always available and the need for stubs is eliminated. The stages of the bottom-up integration strategy are as follows: the modules are grouped into clusters that perform particular software functions; a driver (a control program for test cases) is written to coordinate test-case input and output; the cluster is tested; and the drivers are then removed and the clusters are combined, moving upwards in the program structure. In short, bottom-up integration tests each module on its own, connects the modules together, and then verifies their combined operation.
User Acceptance Testing
User acceptance is one of the biggest keys to a successful system. The system under development was kept in contact with prospective users throughout its development to ensure there would be no hiccups during use. It was designed to be approachable, with intuitive features, so that even someone new to the system can figure it out easily.
Output Testing
Next comes output testing of the proposed system, since there is no use in a system that does not provide the required output in the required format. Asking users about the format they need determines how the system's outputs should be produced or displayed. The output format is therefore considered in two forms: printed and on-screen.
Validation Checking
Validation checks are performed on the following fields.
Text Field:
A text field must contain no more than the number of characters allowed for its size. Tables in the system include several text fields that accept both numbers and letters, and every time a mistake is made an error message is displayed.
Numeric Field:
Only the digits 0 to 9 are allowed in a numeric field, and error warnings pop up whenever anybody tries to enter any other character. The modules are examined to see what they do and whether they behave accurately, and each module is put through tests with sample data.
All components were evaluated separately and then combined to form a single system. Testing means running the program with real data in order to find flaws in the program's output, and it should be carried out so that each requirement is exercised separately. A successful test reveals the faults in the system for both valid input data and input data that does not apply to the problem at hand.
Preparation of Test Data
All of the above testing is carried out using suitable test data, and preparing this test data is a critical part of system testing. Once the test data are prepared, the system is tested using them; errors found during testing are handled by the same procedure described above, and corrections are made accordingly.
Using Live Test Data:
Live test data come from sources inside the organisation. After constructing a portion of the system, programmers or analysts ask the end users to enter a predefined set of data while doing their day-to-day tasks; the results are then used as a partial test of the system. Some analysts and programmers instead pull information from existing data files and enter the data themselves. It is difficult to obtain live data in sufficient quantity to conduct thorough tests. The live data entered should be typical, so that the system can give an accurate indication of how well it will perform under ordinary processing; however, live data are generally incomplete, since only a limited range of values gets entered, and this bias towards normal scenarios leaves out the very cases most likely to cause system failure.
Using Artificial Test Data:
Test data are produced solely for testing, since they can cover all possible combinations of formats and values. Artificial data, easily created with utility software from the information systems department, make it possible to test every logic path and control route inside the program. The most effective testing uses artificial test data produced by someone other than the program's developers; often an independent group of testers designs a testing strategy based on the requirements. The software requirement specification described everything required of the "Virtual Private Network" package, and the package was approved after satisfying those needs.
USER TRAINING
Whenever a new system is created, user training is needed so that users understand how the system works and can utilise it effectively. To demonstrate the normal operation of the project to its prospective users, its everyday functions were demonstrated. Even with little or no prior understanding of computers, the people who will use this system will have no difficulty figuring out how to operate it.

MAINTENANCE
Maintenance covers fixing mistakes of all kinds, in both coding and design. Because the user's needs were specified precisely throughout the process of system creation, future maintenance problems are reduced. The system was constructed to fulfil as many user needs as possible, and requirement-based addition of features remains feasible as technology advances. The code and design are both straightforward and comprehensible, which makes maintenance simpler.

TESTING STRATEGY :
The approach to system testing is a well-planned sequence of stages that incorporates test-case design techniques into the development of the software. Our testing approach produced the expected results because it covered test planning, test case design, test execution, and the evaluation of the results. When testing software it is important to run tests of varying granularity: some tests are basic and confirm that a small section of code has been implemented properly, while others check that key system operations fulfil user needs. The final check of the specifications, design, and code is carried out through software testing, which evaluates whether they conform to one another. The system is therefore tested many times before the finished system is tested with the customers.

CHAPTER-VI
OUTPUT SCREENS

6.OUTPUT SCREENS

Server Login

Owner Login

Owner Register

User Login

User Registration

Authority Login

CHAPTER-VII
CONCLUSION AND FUTURE IMPLEMENTATION

7.CONCLUSIONS
We have presented a privacy-preserving CP-ABE scheme in the standard model. The benefits of the scheme over other methods are clear: for example, private keys are constant in size and ciphertexts are short; in fact, decrypting a message requires only four computations. Selective security and anonymity are proved through a sequence of hybrid games. In the model described, we show that the security of the scheme reduces to the decisional n-BDHE and decision linear (DL) assumptions. The scheme also supports authority verification without exposing any information.
The current system supports only "AND" gates and achieves only selective security, which is a comparatively weak security model. Building a powerful, fully secure HP-CP-ABE scheme that gives users more expressive access control is already planned as future work.

SOFTWARE ENVIRONMENT
Client Server Over view:
A server that helps a lot of clients in a programme may have many problems,
and Client Server has introduced more controversies than solutions, and more excess
hype than facts. There is quite a bit of attention directed towards this technology, due
to its industry events and publications. Client Servers is being aggressively pushed by
computer manufacturers like as IBM and DEC, who claim that it is the future of their
businesses. A study conducted on the DBMS magazine's readership found that over
three-quarters of the respondents were interested in the client-server solution. The
jump in the amount of money that was invested in client server development tools
(from $200 million in 1992 to $1.2 billion in 1996).
Although client/server solutions can be complicated, the basic idea underlying them is simple and strong. A client is a program that has its own resources but also connects to databases and calls on other servers to handle particular functions. Middleware is the software that sits between the client and the server and handles their interaction.
Conventional clients are computers (PCs or workstations) linked to a more powerful computer, workstation, midrange, or mainframe server. Some machines may be configured to operate as both clients and servers: to fulfil the initial client request, a server may itself have to call on another server.
In the client-server model, the client's requirements are met by an abstract server, which makes the user independent of the location and type of the data. With suitable middleware installed, a user at a desktop computer can retrieve and modify data held in local and remote databases on one or more servers. Client-server database features such as support for heterogeneous table joins allow the database layer to work with products from many different database vendors.
What is a Client Server
Client-server and file-server systems are the two main approaches in use today, and it is important to distinguish between them. Both share resources for exchanging information over a network, yet there is a difference: a file server simply makes files available to other programs over the network on request, whereas a client-server system provides full relational database functionality, such as SQL access, record modification, insertion, deletion, and relational integrity. Client-server middleware provides a very flexible interface between the client and the server, one that can accommodate changing requirements and new conditions.
Why Client Server
The client-server approach aims to resolve an issue that has existed since the dawn of computing: how to distribute computing, data production, and data storage resources so as to obtain departmental and enterprise-wide data processing results in the most cost-effective way. Choices were very restricted during the mainframe era: the CPU and the data (cards, tapes, drums, and later disks) were both located in a single central unit. Information was limited to departmental reports that came out on an infrequent schedule and were generated by batch jobs, and the company was governed by a powerful information services department. The rest of the business played only a minor part, asking for more regular reports and supplying the hand-written forms that fed the company's central data banks. Early systems of that era are described as "SLAVE-MASTER" class technology.
Front end or User Interface Design
The whole user interface is built in a browser-specific environment with the aim of realising the distributed concept through an Intranet-based architecture. Browser-specific design elements are created using HTML standards, and the dynamic behaviour of the pages relies on Java Server Pages.
Communication or Database Connectivity Tier
The communications architecture was developed around the standards of Servlets and Enterprise Java Beans, and the database is accessed through Java Database Connectivity. The rules of three-tier design receive significant attention, which provides greater cohesion and lower coupling so that operations remain effective.

Importance of Java to the Internet
Java's influence on the Internet has been tremendous, because Java expands the universe of objects that can move about freely in cyberspace. In a network, the server transmits two broad categories of items to the personal computer: passive information and dynamic, active programs. Dynamic, self-executing programs raise serious concerns, since they can allow unauthorised access and introduce instability; Java manages to solve those issues and, in doing so, opens the way to the applet, a new kind of program.
Java is used to build two different kinds of programs:
Applications and applets: an application runs directly on a computer under its operating system, much like a program created in C or C++; what is critical here is Java's ability to build applets. An applet is a mini-application designed to do a specific task that runs inside a Java-enabled web browser on the Internet. Much like an image, an applet is a small program downloaded over the network, but the difference is that it is an intelligent program, not just a media file: it can react to user input and change dynamically.
Features Of Java
Security
Downloading even a seemingly mundane program can introduce a computer virus. Before Java, most users downloaded executable files rarely and checked them for viruses before running them; even so, many users worried about accidentally infecting their devices. Another kind of harmful program must also be guarded against: one that gathers private information such as credit card details, bank account balances, and passwords. Java provides a "firewall" between a networked application and your PC to resolve both of these issues, and Java-compatible web browsers let you download Java applets securely without fear of viral infection or malicious intent.
Portability
Since any computer type, including smartphones, may be linked to the Internet,
executable code for distribution needs to be portable.

The same mechanism that ensures security also makes it easier to move programs between machines; Java's answer to these two problems is both elegant and efficient.
Byte code
One of Java's key design decisions is that the output of the Java compiler is byte code rather than native executable code, and this is what makes both security and portability possible. Byte code is a highly optimised set of instructions designed to be executed by the Java Virtual Machine (JVM), the Java run-time system; in essence, the JVM is an interpreter for byte code. Translating a Java program into byte code opens it up to many operating environments: any Java application can execute on a particular machine as long as a run-time package (the JVM) has been created for it.
Although Java was created to be interpreted, there is nothing in Java that prevents byte code from being compiled into native code on the fly. Sun has produced a byte code compiler known as the Just-In-Time (JIT) compiler, which is part of the JVM and converts byte code into executable native code on demand, piece by piece. It is not possible to compile an entire Java program into executable code in one pass, because Java performs various run-time checks that can only be done at run time; instead, the JIT compiles code as it is needed during execution.
Java Virtual Machine (JVM)
The Java Virtual Machine exists independently of the Java language itself and is one of the most significant pieces of Java technology; it can be embedded in a web browser or an operating system, whichever is most convenient. Java code is verified when it is loaded onto a computer: a class loader ensures during loading that code produced by the compiler cannot damage the machine, and byte code verification takes place after loading to confirm that the code is well formed. Byte code verification is absolutely essential to running Java programs safely.
Overall Description
Figure: the development process of a Java program (Java source compiled to a .class byte code file)
Byte code is Java's intermediate representation of a program: it is the code that is actually executed. Java source code is kept in .java files; the Java compiler, javac, works on these files and generates a byte code file with the .class extension, which holds the byte code for the application. The byte code is subsequently loaded by the Java Virtual Machine, which interprets and runs it.
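Even the smallest program goes through exactly this cycle. The file below is compiled with "javac HelloWorld.java", which produces HelloWorld.class, and the byte code is then run with "java HelloWorld".

// HelloWorld.java
//   javac HelloWorld.java  -> produces HelloWorld.class (byte code)
//   java HelloWorld        -> the JVM loads, verifies, and interprets the byte code
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello from the Java Virtual Machine!");
    }
}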
Java architectural considerations
Java's design allows for efficient and secure programming that performs well.
Byte code compiled for the Java Virtual Machine (which is subsequently interpreted
on each platform by the run-time environment) gives Java the ability to run
everywhere. Java is a versatile programming language that can fetch code from other
computers to run. That might be a computer that's in the same room or a computer
across the globe.
Creating code
The Java compiler generates machine code, in the form of byte code, for an imaginary machine that it models: the Java Virtual Machine (JVM). The byte code is intended to be executed by the JVM, which was developed precisely to solve the problem of portability. The code is written for one specific machine that every platform must then emulate, and that machine is the Java Virtual Machine.
Compiling and interpreting Java Source Code

When the byte code file is loaded at run time, the Java interpreter makes it appear to be executing on a Java Virtual Machine, while in reality the underlying hardware may be a Windows 95 PC with an Intel Pentium chip, a machine running Solaris, or a Macintosh running the Apple operating system, possibly sending and receiving Java applets and source code over the Internet.
Java was designed to be easy for the experienced developer to understand and use; even a professional C++ developer will find learning Java simpler. Java borrows its syntax and many object-oriented concepts, such as classes and inheritance, from C++, while most of the difficult ideas in C++ have either been left out of Java or are approached in a simpler and more practical way. For many operations, Java provides just a few widely accepted ways of doing things.
Object-Oriented: Java was never intended to be source-code compatible with any other language, so the Java team was able to design freely. One result is an approach to objects that is clear, useful, and pragmatic: simple types such as integers are kept as high-performance non-objects, while more complex entities are treated as objects.
Robust
Web-based systems present an exceedingly demanding programming environment, since the software has to work in many different settings. The designers of Java therefore put emphasis on the ability to build dependable software. Java is a type-safe language that performs checks both at compile time and at run time. Java also removes the problems of manual memory management and deallocation, which are handled automatically by the run-time system through garbage collection, and it requires programs to handle run-time errors themselves, which encourages well-written, robust programs.
JAVASCRIPT
Netscape Corporation created JavaScript, a script-based computer language
that is often used on websites. JavaScript has been known by a variety of names,
including Live Script, and its current moniker is an indication of its similarity to Java.
The JavaScript language can be used to build a wide variety of web applications, with both client-side and server-side parts. Programmers use it to write code that runs in the context of a web page inside a browser, while on the server side programs can receive information from a web browser and adjust the page that is returned to reflect that information.
JavaScript is used for Web programming on both the client and the server,
although we prefer it on the client side, since it is supported by the majority of
browsers. JavaScript is simple to learn, nearly as easy as HTML, and statements
written in JavaScript may be embedded into HTML pages using script tags.
Hyper Text Markup Language
Hypertext Markup Language (HTML), the common language of the World
Wide Web (WWW), is the backbone of web design; it enables users to construct pages
with both text and images, and to link to other websites and pages (Hyperlinks).
HTML is used in digital publishing to connect text and graphics into hypertexts, a form of presentation whose purpose is to let readers navigate within a body of interconnected material. It is based on SGML (Standard Generalized Markup Language, ISO Standard 8879), specially adapted to hypertext and Web applications. Hypertexts allow readers to move around web content quickly and in a non-linear fashion, making them great tools for web designers, and they let us pick and choose how we absorb information according to our own preferences. The markup language consists of elements, each marked by delimiters (tags), which may contain text or other elements. Hyperlinks lead to different documents or to other sections of the same page and are shown as highlighted or emphasised words.
HTML documents can be displayed on any host computer, regardless of where they are located, and this portability makes HTML useful in many ways, since it works on just about any platform. Tags make HTML documents more aesthetically pleasing. HTML tags are not case-sensitive, so there is no need to worry about upper and lower case. Graphics, fonts, sizes, colours, and many other features may be used to make a document look interesting. Anything in the file that is not part of a tag is treated as document content.

Basic HTML Tags :

ADVANTAGES
HTML documents are small, and therefore simple to transmit, because they carry very little formatting overhead. HTML is independent of all platforms, and case is ignored in HTML tags.

Java Database Connectivity
What Is JDBC?
JDBC is a Java API for executing SQL statements. (As a point of interest, JDBC is a trademarked name and is not an acronym, although it is often thought of as standing for Java Database Connectivity.) It consists of a set of classes and interfaces written in the Java programming language. JDBC provides a standard API for application and database developers and makes it possible to write database applications entirely in Java.
Using JDBC, it is simple to send SQL statements to virtually any relational database. A single JDBC application can connect to any number of databases, issue SQL queries, and receive the results, all within the same program. Combining Java with JDBC lets a programmer write an application once and run it anywhere.
What Does JDBC Do?
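In short, JDBC makes it possible to establish a connection with a database, send SQL statements to it, and process the results. The sketch below shows that typical pattern; the connection URL, credentials, table, and column names are placeholders rather than this project's actual database details.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

// Typical JDBC usage: open a connection, send an SQL statement, walk the results.
// The URL, credentials, and table name below are placeholders.
public class PatientQuery {

    public static void main(String[] args) throws Exception {
        String url = "jdbc:mysql://localhost:3306/phrdb"; // placeholder connection string
        try (Connection con = DriverManager.getConnection(url, "dbuser", "dbpassword");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT pid, disease FROM patient_details WHERE department = ?")) {
            ps.setString(1, "Cardiology");
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("pid") + " -> " + rs.getString("disease"));
                }
            }
        }
    }
}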

JDBC versus ODBC and other APIs


Microsoft's ODBC API is currently one of the most widely used interfaces for accessing relational databases and has a very broad installed base; it offers connectivity to virtually all databases on almost all platforms. So why not simply use ODBC from Java? You can, through the JDBC-ODBC Bridge discussed later, but the question "why do you need JDBC at all?" has several answers.
ODBC is not suited to direct use from Java because it uses a C-style interface, and calls from Java into C/C++ native code raise problems of security, implementation, robustness, and the automatic portability of applications.
A literal translation of the ODBC C API into a Java API would also be undesirable. For instance, ODBC makes heavy use of pointers, including the notoriously error-prone generic pointer "void *", whereas Java has no pointers at all. JDBC can be thought of as ODBC translated into an object-oriented interface that is natural for Java programmers.

ODBC is also hard to learn: it mixes simple and advanced features together and has complex options even for simple queries. JDBC, by contrast, was designed to keep simple things simple while still allowing more sophisticated capabilities where they are required.
Finally, a Java API such as JDBC is necessary to enable a "pure Java" solution. With ODBC, the ODBC driver manager and drivers must be installed manually on every client machine. When JDBC drivers are written entirely in Java, however, JDBC code is automatically installable, portable, and secure on all Java platforms, from network computers and mobile devices to mainframes.

Multi-Level Systems
JDBC supports both two-tier and three-tier models for database access. In the two-tier model, a Java applet or application talks directly to the database; this requires a JDBC driver that can communicate with the particular database management system being accessed. The user's SQL statements are delivered to the database, and the results of those statements are sent back to the user. The database may be located on another machine to which the user is connected via a network. This is called a client/server configuration: the user's machine is the client, and the machine housing the database is the server. The network can be an intranet (for example, an internal corporate network) or the Internet.

In the three-tier model, commands are sent to a middle tier of services, which forwards SQL statements to the database. The database executes the SQL statements and sends the results back to the middle tier, which then delivers them to the user. MIS directors find the three-tier model very appealing because the middle tier makes it possible to keep control over data access and over the kinds of updates that may be made. Another advantage is that the middle tier can offer users a simpler, higher-level API that it translates into the appropriate low-level calls. In many cases the three-tier architecture also brings benefits for particular applications.
Until recently, the middle tier was usually written in languages such as C or C++ for speed. With the arrival of optimising compilers that translate Java byte code into efficient machine code, implementing the middle tier in Java has become practical, bringing with it Java's robustness, multithreading, and security features. JDBC is what makes database access from a Java middle tier possible.
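A hedged sketch of such a Java middle tier is shown below: a servlet receives the user's request, forwards an SQL statement to the database over JDBC, and returns the result to the client. The connection string, credentials, and table are placeholders, and the servlet would be mapped in web.xml.

import java.io.IOException;
import java.io.PrintWriter;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Middle-tier sketch for the three-tier model: accept the request, query the
// database over JDBC, and return the rows. All database details are placeholders.
public class PatientSearchServlet extends HttpServlet {

    @Override
    protected void doGet(HttpServletRequest request, HttpServletResponse response)
            throws IOException {
        String department = request.getParameter("department");
        response.setContentType("text/plain");
        PrintWriter out = response.getWriter();
        try (Connection con = DriverManager.getConnection(
                     "jdbc:mysql://localhost:3306/phrdb", "dbuser", "dbpassword");
             PreparedStatement ps = con.prepareStatement(
                     "SELECT pid FROM patient_details WHERE department = ?")) {
            ps.setString(1, department);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    out.println(rs.getString("pid"));
                }
            }
        } catch (Exception e) {
            response.sendError(HttpServletResponse.SC_INTERNAL_SERVER_ERROR, e.getMessage());
        }
    }
}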
JDBC-ODBC Bridge
Whenever feasible, a pure Java JDBC driver should be used instead of the Bridge plus an ODBC driver. A pure Java driver completely eliminates ODBC configuration on the client, and it also removes the risk that an error in the native code brought in by the Bridge could damage the Java VM.
What Is the JDBC-ODBC Bridge?
The JDBC-ODBC Bridge implements JDBC operations by translating them into ODBC operations; to ODBC it appears as a normal application program. The Bridge supports ODBC drivers, so it can connect to any database for which an ODBC driver is available. The Bridge is implemented as the sun.jdbc.odbc Java package and contains a native library used to access ODBC. It was developed jointly by Intersolv and JavaSoft.
Java Server Pages (JSP)
Java Server Pages (JSP) is a simple yet powerful technology for creating and maintaining dynamically generated web pages. Because it is built on the Java programming language, JSP offers proven portability, open standards, and a mature, reusable component architecture that can serve a variety of applications. The Java Server Pages architecture makes it possible to separate the creation of content from its presentation. Apart from easing maintenance, this separation lets web team members concentrate on their own areas of expertise: web page designers can focus on layout while web application developers concentrate on programming, without interfering with one another's work.
Steps in the execution of a JSP Application:
1. The client sends a request for a JSP file to the web server by naming the JSP file in the form element of an HTML page.
2. The request is passed on to the JavaWebServer for processing. The JavaWebServer receives the request on the server side and, if it is a request for a .jsp file, hands the request to the JSP engine.
3. The JSP engine is a program that recognises the tags in a JSP page and transforms them into a servlet program, which is stored on the server side. This servlet is loaded into memory and executed, and the result is returned to the JavaWebServer and then sent back to the client.
JDBC connectivity
 The JDBC (Java Database Connectivity) API of the J2EE platform provides database-independent connectivity to tabular data sources, which gives application component providers a range of benefits in the world of database technology.
Tomcat 6.0 web server
Tomcat is an open source web server created by the Apache Group. It is the servlet container used by the Java Servlet and Java Server Pages technologies, whose specifications are developed by Sun, the creator of Java, under the Java Community Process. Unlike full application servers such as BEA's WebLogic, Tomcat handles only web application components (servlets and JSP pages) rather than every component type. To run a web application built with Java Server Pages (JSP) or servlets, a servlet container such as Tomcat or JRun must be installed.

REFERENCES
[1]. P. P. Kumar, P. S. Kumar, and P. J. A. Alphonse, “Attribute based encryption in cloud computing: A survey, gap analysis, and future directions,” J. Netw. Comput. Appl., vol. 108, pp. 37–52, 2018.
[2]. A. Sahai and B. Waters, “Fuzzy identity-based encryption,” in Proc. 24th Annu. Int. Conf. Theory Applications Cryptographic Techn., May 2005, vol. LNCS 3494, pp. 457–473.
[3]. K. Emura, A. Miyaji, A. Nomura, K. Omote, and M. Soshi, “A ciphertext-policy attribute-based encryption scheme with constant ciphertext length,” in Proc. 5th Int. Conf. Inf. Security Practice Experience, Apr. 2009, pp. 13–23.
[4]. J. Han, W. Susilo, Y. Mu, and J. Yan, “Privacy-preserving decentralized key-
policy attribute-based Encryption,” IEEE Trans. Parallel Distrib. Syst., vol. 23,
no. 11, pp. 2150–2162, Nov. 2012.
[5]. S. Wang, J. Zhou, J. K. Liu, J. Yu, J. Chen, and W. Xie, “An efficient file
hierarchy attribute-based encryption scheme in cloud computing,” IEEE Trans.
Inf. Forensics Secur., vol. 11, no. 6, pp. 1256–1277, Jun. 2016.
[6]. A. Lewko and B. Waters, “Decentralizing attribute-based encryption,” in Proc.
30th Annu. Int. Conf. Theory Appl. Cryptographic Techn.: Advances
Cryptology, May 2011, pp. 568–588.
[7]. B. Waters, “Dual system encryption: Realizing fully secure IBE and HIBE under simple assumptions,” in Proc. 29th Annu. Int. Cryptology Conf. Advances Cryptology, Aug. 2009, pp. 619–636.
[8]. T. Nishide, K. Yoneyama, and K. Ohta, “Attribute-based encryption with
partially hidden encryptor-specified access structures,” in Proc. Appl.
Cryptogr.Netw. Security, Jun. 2008, vol. LNCS 5037, pp. 111–129.
[9]. J. Lai, X. Zhou, R. H. Deng, and Y. Li, “Fully secure ciphertext-policy hiding CP-ABE,” in Proc. 6th ACM Symp. Inf. Comput. Commun. Secur., 2011, pp. 24–39.
[10]. J. Lai, X. Zhou, R. H. Deng, Y. Li, and K. Chen, “Expressive CP-ABE with partially hidden access structures,” in Proc. 7th ACM Symp. Inf. Comput. Commun. Secur., May 2012, pp. 18–19.

[11]. B. Waters, “Ciphertext-policy attribute-based encryption: An expressive, efficient, and provably secure realization,” in Proc. 14th Int. Conf. Practice Theory Public Key Cryptography, Mar. 2011, pp. 53–70.
[12]. Y. S. Rao and R. Dutta, “Recipient anonymous ciphertext-policy attribute
based encryption,” in Proc. 9th Int. Conf. Inf. Sys. Secur., Dec. 2013, pp. 329–
344.
[13]. L. Zhang, Q. Wu, Y. Mu, and J. Zhang, “Privacy-preserving and secure sharing
of PHR in the cloud,” J. Med. Syst., vol. 40, pp. 1–13, 2016.
[14]. M. Abdalla, D. Catalano, and D. Fiore,“Verifiable random functions: Relations
to identity-based key encapsulation and new constructions,” J. Cryptol., vol.
27, pp. 544–593, 2014.
[15]. C. Huang, K. Yan, S. Wei, G. Zhang, and D. H. Lee, “Efficient anonymous attribute-based encryption with access policy hidden for cloud computing,” in Proc. IEEE Int. Conf. Progress Inform. Comput., Dec. 2017, pp. 266–270.
[16]. Y. Zhang, X. Chen, J. Li, D. Wong, and H. Li, “Anonymous attribute-based encryption supporting efficient decryption test,” in Proc. 8th ACM Symp. Inf. Comput. Commun. Secur., May 2013, pp. 511–516.
[17]. J. Li, H. Wang, Y. Zhang, and J. Shen, “Ciphertext-policy attribute-based
encryption with hidden access policy and testing,” KSII Trans. Internet Inf.
Syst., vol. 10, no. 7, pp. 3339–3352, Jul. 2016.
[18]. H. Cui, R. H. Deng, G. Wu, and J. Lai, “An efficient and expressive
Ciphertext-policy attribute-based encryption scheme with partially hidden
access structures,” in Proc. 10th Int. Conf. Prov. Secur., Nov. 2016, pp. 19–38.
[19]. F. Khan, H. Li, L. Zhang, and J. Shen, “An expressive hidden access policy
CP-ABE,” in Proc. IEEE 2nd Int. Conf. Data Sci. Cyberspace, Jun. 2017, pp.
26–29.
[20]. Y. Zhang, Z. Dong, and R. H. Deng, “Security and privacy in smart health:
Efficient policy-hiding attribute-based access control,” IEEE Int. Things J.,
vol. 5, no. 3, pp. 2130–2145, Jun. 2018.
[21]. A. Lewko, T. Okamoto, A. Sahai, K. Takashima, and B. Waters, “Fully secure
functional encryption: Attribute-based encryption and (hierarchical) inner
product encryption,” in Proc. 29th Annu. Int. Conf. Theory Appl.
Cryptographic Techn., 2010, pp. 62–91.
[22]. T. Okamoto and K. Takashima, “Adaptively attribute-hiding (hierarchical)
inner product encryption,” in Proc. 31st Annu. Int. Conf. Theory Appl.
Cryptographic Techn., May 2012, pp. 591–608.
[23]. T. V. X. Phuong, G. Yang, and W. Susilo, “Hidden ciphertext policy attribute-
based encryption under standard assumptions,” IEEE Trans. Inf. Forensics
Secur., vol. 11, no. 1, pp. 35–45, Jan. 2015.
[24]. X. Boyen and B. Waters, “Anonymous hierarchical identity-based encryption (without random oracles),” in Proc. 26th Annu. Int. Conf. Advances Cryptology, Aug. 2006, pp. 290–307.
[25]. J. H. Park and H. L. Dong, “Anonymous HIBE: Compact construction over prime-order groups,” IEEE Trans. Inf. Theory, vol. 59, no. 4, pp. 2531–2541, Apr. 2013.
[26]. J. H. Seo, T. Kobayashi, M. Oukubo, and K. Suzuki, “Anonymous hierarchical identity-based encryption with constant size ciphertexts,” in Proc. Int. Conf. Practice Theory Public Key Cryptography, Mar. 2009, vol. 5443, pp. 215–234.
[27]. F. Li and W. Wu, Pairing-Based Cryptography. Beijing, China: Science Press,
2014.

