OF
BACHELOR OF ENGINEERING
(COMPUTER ENGINEERING)
SUBMITTED BY
is a bonafide student of this institute and the work has been carried out by him/her under
the supervision of Prof. Pinjarkar N.R., and it is approved for the partial fulfillment
of the requirements of Savitribai Phule Pune University for the award of the degree of
Bachelor of Engineering (Computer Engineering).
Place :
Date:
ACKNOWLEDGEMENT
I would like to take this opportunity to thank my internal project guide, Prof. Pinjarkar
N.R., for giving me all the help and guidance I needed. I am really grateful for their
kind support; their valuable suggestions were very helpful.
In the end, our special thanks go to all the staff for providing various resources, such as
a laboratory with all the needed software platforms and a continuous Internet connection,
for our project.
Anil Singh
Dipti Bhalerao
Kajal Wagh
Punam Bhagwat
ABSTRACT
Collaborative computing uses multiple data servers to jointly complete data analysis,
e.g., statistical analysis and inference. One major obstruction lies in privacy concerns,
which are directly associated with nodes' participation and the fidelity of received
data. Existing privacy-preserving paradigms for cloud computing and distributed data
aggregation only provide nodes with homogeneous privacy protection, without consideration
of nodes' diverse trust degrees toward different data servers. We propose a two-phase
framework that computes the average value while preserving heterogeneous privacy for
nodes' private data. The new challenge is that, while meeting the privacy requirements,
we should guarantee that the proposed framework has the same computation accuracy
as existing privacy-aware solutions. In this paper, nodes obtain heterogeneous
privacy protection in the face of different data servers through one-shot noise perturbation.
Based on the definition of KL privacy, we derive the analytical expressions of the
privacy-preserving degrees (PPDs) and quantify the relation between different PPDs.
Then, we obtain the closed-form expression of computation accuracy. Furthermore, to
provide an efficient solution to the conflict between privacy and accuracy, we adopt an
incentive mechanism to compensate nodes' privacy loss and achieve optimized computation
accuracy when data servers have fixed budgets. Finally, extensive simulations are
conducted to verify the obtained theoretical results.
Index Terms: Collaborative computing, average consensus, privacy preservation, incentive mechanism.
Contents
LIST OF FIGURES 5
LIST OF TABLES 5
LIST OF ABBREVIATIONS 5
1 Introduction 10
1.1 Introduction of respective area . . . . . . . . . . . . . . . . . . . . . . 11
1.2 Motivation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
1.3 Problem Definition . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
2 Literature Survey 13
4 System Design 23
4.1 System Architecture . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
4.2 UML Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.2.1 Use Case Diagram . . . . . . . . . . . . . . . . . . . . . . . . 25
4.2.2 Class Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . 25
5 Other Specification 27
5.1 Algorithms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.2 Advantages . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
6 Project Plan 29
6.0.1 Reconciled Estimates . . . . . . . . . . . . . . . . . . . . . . . 31
6.0.2 Project Resources . . . . . . . . . . . . . . . . . . . . . . . . . 32
6.1 Risk Management w.r.t. NP Hard analysis . . . . . . . . . . . . . . . . 33
6.1.1 Risk Identification . . . . . . . . . . . . . . . . . . . . . . . . 33
6.1.2 Risk Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.1.3 Overview of Risk Mitigation, Monitoring, Management . . . . 36
6.2 Project Schedule . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.2.1 Project task set . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.2.2 Timeline Chart . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.3 Team Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.3.1 Team structure . . . . . . . . . . . . . . . . . . . . . . . . . . 39
7 Software Implementation 40
7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
7.1.1 Structured Programming . . . . . . . . . . . . . . . . . . . . . 41
7.1.2 Functional Programming . . . . . . . . . . . . . . . . . . . . . 41
7.1.3 Programming style . . . . . . . . . . . . . . . . . . . . . . . . 42
7.1.4 Coding Guidelines . . . . . . . . . . . . . . . . . . . . . . . . 42
7.1.5 Software Documentation . . . . . . . . . . . . . . . . . . . . . 43
7.1.6 Software Implementation Challenges . . . . . . . . . . . . . . 44
8 Project Implementation 45
8.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
8.2 Tools and Technologies Used . . . . . . . . . . . . . . . . . . . . . . . 46
8.2.1 Hardware Resources Required . . . . . . . . . . . . . . . . . . 46
8.2.2 Software Resources Required . . . . . . . . . . . . . . . . . . 46
8.2.3 Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
8.2.4 Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
8.3 Verification and Validation for Acceptance . . . . . . . . . . . . . . . . 49
9 Software Testing 50
9.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
9.2 TYPES OF SOFTWARE TESTING . . . . . . . . . . . . . . . . . . . 51
10 Results 52
10.1 Screen shots . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
13 Plagiarism Report 75
13.1 ABSTRACT PLAGIARISM REPORT . . . . . . . . . . . . . . . . . . 76
List of Figures
6.1 KLOC . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
6.2 Risk Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.3 Risk Probability definitions . . . . . . . . . . . . . . . . . . . . . . . . 35
6.4 Risk Impact definitions . . . . . . . . . . . . . . . . . . . . . . . . . . 35
6.5 Risk 1 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.6 Risk 2 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.7 Risk 3 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.8 Risk 4 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.9 Risk 5 . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.10 Timeline Chart . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.11 Team Organization . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
8.1 V-Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
CHAPTER 1
INTRODUCTION
1.1 Introduction of respective area
In the era of the Internet of Things (IoT), more and more valuable data is generated in
local smart devices, e.g., smartphones, smartwatches, and even smart meters and refrigerators
[2]. Drawing support from the cloud computing framework, data servers can aggregate
these data for in-depth analysis, such as statistical inference, personalized recommendation,
and health monitoring. However, the amounts of data generated in IoT are
unprecedented, which brings extremely large communication, computation and storage
costs for data servers. Moreover, the data produced by smart devices is generally
geo-distributed [3]. In particular, for cross-border data sources and servers, direct data
communication may be prohibited by national policies. Hence, it is impractical for a
single server to fulfill such data aggregation and analysis.
A promising alternative computation framework is collaborative computing, which assigns
the aggregation and computation tasks to multiple data servers and instructs the servers
to collaboratively complete the data analysis. Such a computation framework
significantly mitigates the resource bottlenecks of a single server. Edge computing
[4] is a representative paradigm of collaborative computing. Also, some social networking
service companies, such as Facebook and Twitter, have deployed multiple servers
in different nations and utilized these servers to jointly underpin advanced tasks [5], [6].
Nevertheless, the data generated in IoT often contains users' sensitive information, such
as location, medical data and energy consumption [7]–[9]. Moreover, privacy issues
occur frequently; e.g., it is reported that the private information of about 50 million
Facebook users has been disclosed [10]. Many countries formulate privacy policies to
restrict data servers from making use of users' private data, and some privacy requirements
specification methods have been proposed to map the privacy policies to a formal
language in description logic to ensure consistency between these policies and data usage
[11], [12]. Therefore, privacy has become an urgent concern for data aggregation and
analysis [13].
In this paper, we view all users in IoT as generalized nodes which possess private data.
Multiple geo-distributed data servers first aggregate data from different groups of nodes
(corresponding to distinct locations), and then collaboratively complete statistical analysis.
For a group of nodes, data servers are categorized into two types of data processors,
which have different permissions to the nodes' data. The server collecting their private data
is the first type, while other servers having no direct connection with them are viewed
as the second type. The second type of data processors is less trustworthy, since they
have no permission to directly access the nodes' data. This brings a new critical concern for
nodes when releasing private data: how to ensure that data processors with distinct
trust degrees receive data versions with different privacy protection. That is, the
heterogeneous privacy-preserving problem should be further investigated in collaborative
computing.
1.2 Motivation
• Incentive mechanisms are used to motivate users in crowdsensing systems to par-
ticipate in executing sensing tasks.
1.3 Problem Definition
Collaborative computing uses multiple data servers to jointly complete data analysis,
e.g., statistical analysis and inference. One major obstruction for it lies in privacy con-
cern, which is directly associated with node’s participation and the fidelity of received
data.
CHAPTER 2
LITERATURE SURVEY
X. Wang, J. He, P. Cheng, and J. Chen, “Privacy preserving average consensus
with different privacy guarantee,” in Proc. IEEE Annu. Amer. Control Conf.,
2018, pp. 5189–5194
The privacy-preserving average consensus problem concerns protecting participants'
initial states from being disclosed while achieving average consensus. In the existing
literature, most works consider providing a privacy guarantee for agents when all the other
participants are viewed as the same type of privacy attacker. However, for an agent,
not all the other participants are untrustworthy. The agent can set a weaker privacy
demand against these credible participants to obtain a potential utility improvement. In this
paper, we consider a network composed of several groups. Within each group,
agents treat participants within and outside the group as two types of attackers. Then,
a private average consensus algorithm (PACA) is proposed to provide different privacy
protection against participants within and outside the group, by perturbing agents'
initial states with random noises. Based on the definition of KL privacy, we prove that
PACA preserves distinct degrees of privacy guarantee against agents within and outside
the group. Moreover, the convergence accuracy is analyzed according to the deviation
between the final value and the true average. Finally, simulations are conducted to
evaluate the performance of PACA. [1]
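The one-shot noise-perturbation idea described above can be sketched in a few lines of Java. This is an illustrative simplification, not the authors' PACA algorithm: the Gaussian noise model, the class and method names, and the single averaging step are all assumptions made for the example.

```java
import java.util.Random;

// Illustrative sketch of one-shot noise perturbation for private averaging.
// Not the paper's PACA algorithm; names and the noise model are assumptions.
public class NoisyAverage {
    // Each node releases x_i + n_i, where n_i ~ N(0, sigma^2); a larger
    // sigma corresponds to a stronger privacy-preserving degree (PPD).
    public static double[] perturb(double[] states, double sigma, long seed) {
        Random rng = new Random(seed);
        double[] out = new double[states.length];
        for (int i = 0; i < states.length; i++) {
            out[i] = states[i] + sigma * rng.nextGaussian();
        }
        return out;
    }

    // A server averages the perturbed values; with zero-mean noise the
    // expected average equals the true average, and the deviation shrinks
    // as the number of participating nodes grows.
    public static double average(double[] values) {
        double sum = 0;
        for (double v : values) sum += v;
        return values.length == 0 ? 0 : sum / values.length;
    }

    public static void main(String[] args) {
        double[] states = {10.0, 20.0, 30.0, 40.0};
        double trueAvg = average(states);
        double noisyAvg = average(perturb(states, 1.0, 42L));
        System.out.println("true = " + trueAvg + ", noisy = " + noisyAvg);
    }
}
```

Since the noise is zero-mean, accuracy degrades gracefully as the noise variance (the privacy knob) grows, which is the trade-off the paper quantifies.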
J. Gubbi, R. Buyya, S. Marusic, and M. Palaniswami, "Internet of things (IoT):
A vision, architectural elements, and future directions," Future Gener. Comput.
Syst., vol. 29, no. 7, pp. 1645–1660, 2013.
Ubiquitous sensing enabled by Wireless Sensor Network (WSN) technologies cuts across
many areas of modern day living. This offers the ability to measure, infer and under-
stand environmental indicators, from delicate ecologies and natural resources to urban
environments. The proliferation of these devices in a communicating–actuating net-
work creates the Internet of Things (IoT), wherein sensors and actuators blend seam-
lessly with the environment around us, and the information is shared across platforms
in order to develop a common operating picture (COP). Fueled by the recent adaptation
of a variety of enabling wireless technologies such as RFID tags and embedded
sensor and actuator nodes, the IoT has stepped out of its infancy and is the next
revolutionary technology in transforming the Internet into a fully integrated Future Internet.
As we move from www (static pages web) to web2 (social networking web) to web3
(ubiquitous computing web), the need for data-on-demand using sophisticated intuitive
queries increases significantly. This paper presents a Cloud centric vision for worldwide
implementation of Internet of Things. The key enabling technologies and application
domains that are likely to drive IoT research in the near future are discussed. A Cloud
implementation using Aneka, which is based on interaction of private and public Clouds
is presented. We conclude our IoT vision by expanding on the need for the convergence of
WSN, the Internet and distributed computing, directed at the technological research
community. [2]
W. Shi, J. Cao, Q. Zhang, Y. Li, and L. Xu, “Edge computing: Vision and chal-
lenges,” IEEE Internet Things J., vol. 3, no. 5, pp. 637–646, Oct. 2016.
The proliferation of Internet of Things (IoT) and the success of rich cloud services have
pushed the horizon of a new computing paradigm, edge computing, which calls for
processing the data at the edge of the network. Edge computing has the potential to
address the concerns of response time requirement, battery life constraint, bandwidth
cost saving, as well as data safety and privacy. In this paper, we introduce the definition
of edge computing, followed by several case studies, ranging from cloud offloading to
smart home and city, as well as collaborative edge to materialize the concept of edge
computing. Finally, we present several challenges and opportunities in the field of edge
computing, and hope this paper will gain attention from the community and inspire
more research in this direction.[4]
A. Thusoo et al., “Data warehousing and analytics infrastructure at facebook,” in
Proc. ACM SIGMOD Int. Conf. Manage. Data, 2010, pp. 1013– 1020.
Scalable analysis on large data sets has been core to the functions of a number of teams
at Facebook - both engineering and nonengineering. Apart from ad hoc analysis of
data and creation of business intelligence dashboards by analysts across the company,
a number of Facebook’s site features are also based on analyzing large data sets. These
features range from simple reporting applications like Insights for the Facebook Adver-
tisers, to more advanced kinds such as friend recommendations. In order to support this
diversity of use cases on the ever increasing amount of data, a flexible infrastructure
that scales up in a cost effective manner, is critical. We have leveraged, authored and
contributed to a number of open source technologies in order to address these require-
ments at Facebook. These include Scribe, Hadoop and Hive which together form the
cornerstones of the log collection, storage and analytics infrastructure at Facebook. In
this paper we will present how these systems have come together and enabled us to
implement a data warehouse that stores more than 15PB of data (2.5PB after compres-
sion) and loads more than 60TB of new data (10TB after compression) every day. We
discuss the motivations behind our design choices, the capabilities of this solution, the
challenges that we face in day today operations and future capabilities and improve-
ments that we are working on.[5]
G. Lee, J. Lin, C. Liu, A. Lorek, and D. Ryaboy, “The unified logging infrastruc-
ture for data analytics at twitter,” Proc. VLDB Endowment, vol. 5, no. 12, pp.
1771–1780, 2012.
There has been a substantial amount of work on large-scale data analytics using Hadoop-based
platforms running on large clusters of commodity machines. A less-explored topic
is how those data, dominated by application logs, are collected and structured to begin
with. In this paper, we present Twitter’s production logging infrastructure and its evo-
lution from application-specific logging to a unified “client events” log format, where
messages are captured in common, well-formatted, flexible Thrift messages. Since most
analytics tasks consider the user session as the basic unit of analysis, we pre-materialize
“session sequences”, which are compact summaries that can answer a large class of
common queries quickly. The development of this infrastructure has streamlined log
collection and data analysis, thereby improving our ability to rapidly experiment and
iterate on various aspects of the service.[6]
CHAPTER 3
SOFTWARE REQUIREMENTS
SPECIFICATION
3.1 Introduction
In the era of the Internet of Things (IoT), more and more valuable data is generated
in local smart devices, e.g., smartphones, smartwatches, and even smart meters and
refrigerators [2]. Drawing support from the cloud computing framework, data servers
can aggregate these data for in-depth analysis, such as statistical inference, personalized
recommendation, and health monitoring. However, the amounts of data generated in
IoT are unprecedented, which brings extremely large communication, computation and
storage costs for data servers. Moreover, the data produced by smart devices is generally
geo-distributed [3]. In particular, for cross-border data sources and servers, direct data
communication may be prohibited by national policies. Hence, it is impractical for a
single server to fulfill such data aggregation and analysis.
A promising alternative computation framework is collaborative computing, which assigns
the aggregation and computation tasks to multiple data servers and instructs the servers
to collaboratively complete the data analysis. Such a computation framework
significantly mitigates the resource bottlenecks of a single server. Edge computing [4] is a
representative paradigm of collaborative computing. Also, some social networking
service companies, such as Facebook and Twitter, have deployed multiple servers in
different nations and utilized these servers to jointly underpin advanced tasks [5], [6].
Nevertheless, the data generated in IoT often contains users' sensitive information,
such as location, medical data and energy consumption [7]–[9]. Moreover, privacy issues
occur frequently; e.g., it is reported that the private information of about 50 million
Facebook users has been disclosed [10]. Many countries formulate privacy policies to
restrict data servers from making use of users' private data, and some privacy
requirements specification methods have been proposed to map the privacy policies to a
formal language in description logic to ensure consistency between these policies
and data usage [11], [12]. Therefore, privacy has become an urgent concern for
data aggregation and analysis [13].
In this paper, we view all users in IoT as generalized nodes which possess private
data. Multiple geo-distributed data servers first aggregate data from different
groups of nodes (corresponding to distinct locations), and afterward collaboratively
complete statistical analysis. For a group of nodes, data servers are categorized
into two types of data processors, which have different permissions to the nodes'
data. The server collecting their private data is the first type, while
other servers having no direct connection with them are viewed as the second
type. The second type of data processors is less trustworthy, since they have
no permission to directly access the nodes' data. This brings a new critical
concern for nodes when releasing private data: how to ensure that data processors
with distinct trust degrees receive data versions with different privacy
protection. That is, the heterogeneous privacy-preserving problem should be
further investigated in collaborative computing.
• To know the hardware and software requirements of the proposed system.
3.3.3 Logout
The user must be logged into our system. This function is used when a logged-in
user finishes his/her job and wants to log out so that no one can misuse his/her
username. The system will state that the user has been logged out successfully.
• 24 X 7 availability.
• Flexible service based architecture will be highly desirable for future extension.
3.4.4 Software Quality Attributes
Our software has many quality attributes, which are given below:
• Availability: This software is freely available to all users and is easy for everyone
to access.
• Maintainability: After the deployment of the project, if any error occurs, it
can be easily maintained by the software developer.
• Reliability: The better the performance of the software, the higher its
reliability.
• User Friendly: Since the software is a GUI application, the output generated is
very user-friendly in its behavior.
• Security: Users are authenticated using many security phases, so reliable security
is provided.
• Standards for fonts, icons, button labels, images, color schemes, field tabbing
sequences, commonly used controls, and the like.
• Standard buttons, functions, or navigation links that will appear on every screen,
such as a help button.
• Shortcut keys.
3.5 System Requirements
3.5.1 Database Requirements
• Server: Apache Tomcat
• Scripts: JavaScript.
• RAM: 2 GB(min).
• Monitor: SVGA
Figure 3.1: Incremental Model
1. Requirement analysis: In the first phase of the incremental model, the product
analysis experts identify the requirements, and the system's functional requirements
are understood by the requirement analysis team. This phase plays a crucial role in
developing software under the incremental model.
2. Design and Development: In this phase of the incremental model of the SDLC, the
design of the system functionality and the development method are completed. Whenever
the software adds new functionality, the incremental model goes through the design and
development phase again.
3. Testing: In the incremental model, the testing phase checks the performance of each
existing function as well as the additional functionality. In the testing phase, various
methods are used to test the behavior of each task.
4. Implementation: The implementation phase enables the coding of the development
system. It involves the final coding of the design produced in the design and development
phase, and the functionality is tested in the testing phase. After completion of this
phase, the working of the product is enhanced and upgraded up to the final system
product.
Advantages of the Incremental Model:
• More flexible.
3.7 Mathematical Model
Let S = {I, R, P, O}
Where,
I = Set of inputs for our system
R = Set of rules that are applied while processes are performed
P = Set of processes
O = Set of outputs
I = {I1, I2, I3, I4}
I1 = Add user information
I2 = Provide file information
I3 = Provide key
R = {R1, R2}
R1 = Get proper display
R2 = Find out proper information
P = {P1, P2, P3}
P1 = Validation of required details
P2 = Upload data properly
P3 = Distribute data properly
O = {O1, O2}
O1 = Data uploaded
O2 = Data accessed properly
CHAPTER 4
SYSTEM DESIGN
4.1 System Architecture
nodes will report perturbed data with PPDs corresponding to the rewards. Specifically,
to minimize the computation deviation under a fixed incentive budget, each data server
computes the compensation and PPD for each node by solving an optimization problem.
attributes, operations (or methods), and the relationships among objects. The class
diagram is the main building block of object-oriented modelling. In the diagram, classes
are represented with boxes that contain three compartments: the top compartment contains
the name of the class, printed in bold and centered, with the first letter capitalized;
the middle compartment contains the attributes of the class, left-aligned with the first
letter lowercase; the bottom compartment contains the operations the class can execute,
also left-aligned with the first letter lowercase.
CHAPTER 5
OTHER SPECIFICATION
5.1 Algorithms
AES Algorithm
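Section 5.1 names AES as the algorithm used. A minimal encrypt/decrypt round trip with the standard javax.crypto API might look like the sketch below. The class and method names are our own illustrative choices, and ECB mode is used only to keep the demo short; a real deployment should use an authenticated mode such as AES/GCM with a fresh IV per message.

```java
import javax.crypto.Cipher;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

// Minimal AES round-trip sketch using the standard javax.crypto API.
// ECB/PKCS5Padding is for demonstration only; prefer AES/GCM in practice.
public class AesDemo {
    // Generates a fresh 128-bit key, encrypts the plaintext, decrypts it
    // again, and reports whether the round trip recovered the original bytes.
    public static boolean roundTrip(String plaintext) throws Exception {
        KeyGenerator kg = KeyGenerator.getInstance("AES");
        kg.init(128);
        SecretKey key = kg.generateKey();

        Cipher enc = Cipher.getInstance("AES/ECB/PKCS5Padding");
        enc.init(Cipher.ENCRYPT_MODE, key);
        byte[] cipherText = enc.doFinal(plaintext.getBytes(StandardCharsets.UTF_8));

        Cipher dec = Cipher.getInstance("AES/ECB/PKCS5Padding");
        dec.init(Cipher.DECRYPT_MODE, key);
        byte[] recovered = dec.doFinal(cipherText);

        return Arrays.equals(plaintext.getBytes(StandardCharsets.UTF_8), recovered);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(roundTrip("uploaded file data")); // true
    }
}
```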
5.2 Advantages
• Simple and easy to understand and use.
• Easy to manage due to the rigidity of the model. Each phase has specific deliver-
ables and a review process.
• Works well for smaller projects where requirements are very well understood.
• Clearly defined stages.
5.3 Applications
• Security of Data.
Using this software, we can secure the data uploaded to the system.
CHAPTER 6
PROJECT PLAN
Here, a prediction is made about the size of the total project. Effective software
project estimation is one of the most challenging and important activities in software
development; once you have an estimated size of your product, you can derive the effort
estimate. Software project estimation has four basic steps:
• Ample resources with required expertise are available to support the product.
Waterfall Model - Advantages
The advantages of waterfall development are that it allows for departmentalization and
control. A schedule can be set with deadlines for each stage of development, and a
product can proceed through the development process model phases one by one.
Development moves from concept, through design, implementation, testing, installation,
and troubleshooting, and ends up at operation and maintenance. Each phase of development
proceeds in strict order.
Some of the major advantages of the Waterfall Model are as follows:
• Simple and easy to understand and use
• Easy to manage due to the rigidity of the model. Each phase has specific deliver-
ables and a review process.
• Phases are processed and completed one at a time.
• Works well for smaller projects where requirements are very well understood.
• Clearly defined stages.
• Well understood milestones.
• Easy to arrange tasks.
• Process and results are well documented.
6.0.1.2 Time Estimates
Efforts: Effort is the work done by each person, measured in person-months. It is
calculated using the formula
E = 3.2(KLOC)^1.05 = 3.2(4.2)^1.05
Development time:
D = E/N
D = 3.82 months
Development time for the project:
Requirement analysis requires 3 months.
Implementation and testing require 3.82 months.
Total duration for completion of the project: D = 6.82 months.
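The effort and duration formulas above can be coded directly. The coefficients 3.2 and 1.05 and the inputs KLOC = 4.2 and N = 4 team members are taken from this chapter; the values the formula actually produces may differ from the rounded figures quoted in the text, so treat this as a sketch of the calculation rather than a reproduction of the numbers.

```java
// Sketch of the effort/duration estimate used in this chapter.
// Coefficients 3.2 and 1.05 are the COCOMO-style constants quoted above.
public class CocomoEstimate {
    // E = 3.2 * (KLOC)^1.05, in person-months.
    public static double effort(double kloc) {
        return 3.2 * Math.pow(kloc, 1.05);
    }

    // D = E / N, in months, where N is the team size (as used in the text).
    public static double duration(double effortPersonMonths, int teamSize) {
        return effortPersonMonths / teamSize;
    }

    public static void main(String[] args) {
        double e = effort(4.2); // KLOC = 4.2 from this chapter
        System.out.printf("E = %.2f person-months, D = %.2f months%n",
                e, duration(e, 4));
    }
}
```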
3. Software Version: JDK 1.7 or above
4. Tools: Eclipse
5. Front End: JSP
6. Database: MySQL
This problem is NP-Hard, and using this system we can treat it as an NP-Complete problem.
Figure 6.3: NP-hard and NP-complete
a serious threat. Risk can even destroy the whole setup, but as the project is to be
made, we have to take risks.
For risk identification, a review of the scope document, requirements specifications
and schedule is done. Answers to a questionnaire revealed some risks. Each risk is
categorized as per the categories mentioned in. Please refer to Table 2.1 for all the risks.
You can refer to the following risk identification questionnaire.
1 Have top software and customer managers formally committed to support the
project?
3 Are requirements fully understood by the software engineering team and its cus-
tomers?
6 Does the software engineering team have the right mix of skills?
9 Do all customer/user constituencies agree on the importance of the project and on
the requirements for the system/product to be built?
6.1.3 Overview of Risk Mitigation, Monitoring, Management
Following are the details for each risk.
Table 6.7: Risk 3
• Task 1: Requirement Analysis (Base Paper Explanation).
6.3.1 Team structure
The team structure for the project is identified. Roles are defined.
CHAPTER 7
SOFTWARE IMPLEMENTATION
7.1 Introduction
7.1.1 Structured Programming
In the process of coding, the lines of code keep multiplying, and thus the size of the
software increases. Gradually, it becomes next to impossible to remember the flow of
the program. If one forgets how the software and its underlying programs, files, and
procedures are constructed, it becomes very difficult to share, debug and modify the
program. The solution to this is structured programming. It encourages the developer
to use subroutines and loops instead of simple jumps in the code, thereby bringing
clarity to the code and improving its efficiency. Structured programming also helps the
programmer reduce coding time and organize code properly.
Structured programming states how the program shall be coded. Structured program-
ming uses three main concepts: Top-down analysis -
A software is always made to perform some rational work. This rational work is known
as problem in the software parlance. Thus it is very important that we understand how
to solve the problem. Under top-down analysis, the problem is broken down into small
pieces where each one has some significance. Each problem is individually solved and
steps are clearly stated about how to solve the problem.
Modular Programming -
While programming, the code is broken down into smaller groups of instructions. These
groups are known as modules, sub-programs or subroutines. Modular programming is
based on the understanding of top-down analysis. It discourages jumps using 'goto'
statements in the program, which often make the program flow non-traceable. Jumps
are prohibited and the modular format is encouraged in structured programming.
Structured Coding -
In reference to top-down analysis, structured coding sub-divides the modules into
further smaller units of code in the order of their execution. Structured programming
uses control structures, which control the flow of the program, whereas structured
coding uses control structures to organize its instructions in definable patterns.
First-class and higher-order functions -
These functions can accept other functions as arguments or return
other functions as results.
Pure functions -
These functions do not include destructive updates; that is, they do not affect any I/O
or memory, and if they are not in use, they can easily be removed without hampering the
rest of the program.
Recursion -
Recursion is a programming technique where a function calls itself and repeats the pro-
gram code in it unless some pre-defined condition matches. Recursion is the way of
creating loops in functional programming.
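The recursion technique just described can be illustrated with a small Java example. The factorial function here is our own illustrative choice, not something prescribed by the project:

```java
// Recursion replaces an explicit loop: the function calls itself until the
// pre-defined base condition (n == 0) is reached.
public class RecursionDemo {
    public static long factorial(int n) {
        if (n < 0) throw new IllegalArgumentException("n must be non-negative");
        return (n == 0) ? 1L : n * factorial(n - 1); // base case, then recurse
    }

    public static void main(String[] args) {
        System.out.println(factorial(5)); // 120
    }
}
```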
Strict evaluation -
It is a method of evaluating the expression passed to a function as an argument.
Functional programming has two types of evaluation methods, strict (eager) and non-strict
(lazy). Strict evaluation always evaluates the expression before invoking the function;
non-strict evaluation does not evaluate the expression unless it is needed.
λ-calculus -
Most functional programming languages use λ-calculus as their type system. λ-expressions
are executed by evaluating them as they occur.
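The strict versus non-strict distinction above can be sketched in Java using `Supplier` to delay evaluation; the method and field names are illustrative:

```java
import java.util.function.Supplier;

// Strict vs non-strict (lazy) evaluation of an argument, sketched in Java.
public class EvaluationDemo {
    static int calls = 0;
    static int expensive() { calls++; return 42; }

    // Strict: the argument is evaluated before the call, whether used or not.
    static int strictPick(boolean use, int value) { return use ? value : 0; }

    // Non-strict (lazy): the Supplier is only invoked when the value is needed.
    static int lazyPick(boolean use, Supplier<Integer> value) {
        return use ? value.get() : 0;
    }

    public static void main(String[] args) {
        strictPick(false, expensive());     // expensive() still runs
        lazyPick(false, () -> expensive()); // expensive() is skipped
        System.out.println("calls = " + calls); // only the strict call evaluated it
    }
}
```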
Functions − This defines how functions should be declared and invoked, with and
without parameters.
Variables − This mentions how variables of different data types are declared and
defined.
Comments − This is one of the important coding components, as the comments included
in the code describe what the code actually does, along with all other associated
descriptions. This section also helps in creating help documentation for other developers.
license updating, etc.
CHAPTER 8
PROJECT IMPLEMENTATION
8.1 Introduction
Our work is a special type of query processing on encrypted data, which has gained
much focus recently, especially in the setting of cloud computing, where the data owner
outsources his data to the cloud. Existing works primarily concern the following aspects:
traditional SQL queries, textual queries, range search, top-k queries, and k-NN queries.
In this section, we mainly review some recent achievements on secure k-NN queries.
In previous works, some schemes have been proposed to solve secure k-NN computation
on encrypted cloud data. These can be mainly classified into two categories, based on
whether the key of the data owner is shared with others or not: key-sharing schemes and
key-confidentiality schemes.
4. Hard disk - 20 GB
Tools:
1) Documentation: Texmaker 4.4.1
2) Diagram: StarUML 2.5
3) Testing: Manual Testing
4) IDE: Eclipse
8.2.3 Technologies
• JSP
JavaServer Pages (JSP) is a server-side programming technology that enables a
dynamic, platform-independent method for building Web-based applications. JSP has
access to the entire family of Java APIs, including the JDBC API to access enterprise
databases.
JavaServer Pages (JSP) is a technology for developing web pages that support
dynamic content, which helps developers insert Java code in HTML pages by making
use of special JSP tags, most of which start with <% and end with %>.
A JavaServer Pages component is a type of Java servlet that is designed to
fulfill the role of a user interface for a Java web application. Web developers
write JSPs as text files that combine HTML or XHTML code, XML elements,
and embedded JSP actions and commands.
Using JSP, you can collect input from users through web page forms, present
records from a database or another source, and create web pages dynamically.
JSP tags can be used for a variety of purposes, such as retrieving information
from a database or registering user preferences, accessing JavaBeans components,
passing control between pages and sharing information between requests, pages
etc.
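A minimal JSP page might look as follows. This is a hypothetical sketch (the file and parameter names are our own), showing HTML mixed with Java via scriptlet and expression tags:

```jsp
<%-- hello.jsp (hypothetical): HTML combined with embedded Java --%>
<html>
  <body>
    <h2>Welcome</h2>
    <%-- Expression tag: evaluates a Java expression and prints it --%>
    <p>Server time: <%= new java.util.Date() %></p>
    <% // Scriptlet tag: embedded Java code reading a form input
       String user = request.getParameter("user");
       if (user != null) { %>
         <p>Hello, <%= user %>!</p>
    <% } %>
  </body>
</html>
```

When the page is requested, the container compiles it into a servlet; the static HTML is emitted as-is and the embedded Java runs on each request.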
Advantages of JSP:
• Compared with pure servlets: it is more convenient to write (and modify) regular
HTML than to have many println statements that generate the HTML.
• Compared with static HTML: regular HTML cannot contain dynamic information,
whereas JSP can.
8.2.4 Tools
• Eclipse
Java 8 support including language enhancements, search and refactoring,
Quick Assist and Clean Up to migrate anonymous classes to lambda expressions
and back, and new formatter options for lambdas.
The Eclipse workbench provides a new dark theme which includes syntax
highlighter settings for several programming languages.
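The anonymous-class-to-lambda migration that Eclipse's Quick Assist automates looks like the following sketch (a minimal example of our own, not project code):

```java
import java.util.Comparator;

// Before/after view of the migration Eclipse performs for Java 8.
public class LambdaMigration {

    // Before: anonymous inner class implementing Comparator.
    static Comparator<String> anonymousVersion() {
        return new Comparator<String>() {
            @Override
            public int compare(String a, String b) {
                return Integer.compare(a.length(), b.length());
            }
        };
    }

    // After: the equivalent Java 8 lambda expression.
    static Comparator<String> lambdaVersion() {
        return (a, b) -> Integer.compare(a.length(), b.length());
    }

    public static void main(String[] args) {
        // Both versions compare strings by length identically.
        System.out.println(anonymousVersion().compare("hi", "hello")); // negative
        System.out.println(lambdaVersion().compare("hi", "hello"));    // negative
    }
}
```

Because `Comparator` is a functional interface (one abstract method), the two forms are interchangeable, which is what makes the automated refactoring safe.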
• MySQL
MySQL is a fast, easy-to-use RDBMS used by many small and big businesses.
MySQL is developed, marketed, and supported by MySQL AB, a Swedish company.
Advantages of MySQL:
– MySQL works very quickly and works well even with large data sets.
– MySQL is very friendly to PHP, the most appreciated language for web
development.
CHAPTER 9
SOFTWARE TESTING
9.1 Introduction
Software testing is an activity aimed at evaluating an attribute or capability of a pro-
gram or system and determining that it meets its required results. It is more than just
running a program with the intention of finding faults. Every project is new, with dif-
ferent parameters, and no single yardstick may be applicable in all circumstances. This
is a unique and critical area with altogether different problems. Although critical to
software quality and widely deployed by programmers and testers, software testing still
remains an art, due to limited understanding of the principles of software. The difficulty
stems from the complexity of software. The purpose of software testing can be quality
assurance, verification and validation, or reliability estimation. Testing can be used as
a generic metric as well. Software testing is a trade-off between budget, time, and quality.
• Automated Testing
Automation testing, which is also known as Test Automation, is when the tester
writes scripts and uses software to test the software. This process involves au-
tomation of a manual process. Automation testing is used to re-run, quickly and
repeatedly, the test scenarios that were performed manually.
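The idea can be sketched in plain Java. A real project would use a framework such as JUnit; this standalone version (class and method names are our own hypothetical example) only illustrates a re-runnable test scenario:

```java
// Test-automation sketch: a script re-runs the same scenario on every
// build and reports pass/fail, replacing a manual check.
public class AutoTestRunner {

    // Code under test (hypothetical example function).
    static int add(int a, int b) {
        return a + b;
    }

    // Re-runnable test scenario: executed identically on every run.
    static boolean testAdd() {
        return add(2, 3) == 5 && add(-1, 1) == 0;
    }

    public static void main(String[] args) {
        System.out.println(testAdd() ? "PASS" : "FAIL");
    }
}
```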
CHAPTER 10
RESULTS
10.1 Screen shots
CHAPTER 11
DEPLOYMENT AND MAINTENANCE
11.1 Installation and Un-installation
Installation is the process of assembling the hardware components for your tool so
that the automation can take place smoothly. The software installation part is easy:
the user has to install the JDK (Java Development Kit) and MySQL for the database.
http://www.oracle.com/technetwork/java/javase/downloads/jdk7-downloads-1880260.html
The Java Platform, Standard Edition Development Kit (JDK) is a development
environment for building applications, applets, and components using the Java
programming language.
Install Eclipse on Your Windows
Download Eclipse Software:
Now we need to download the Eclipse software, which is available from the Eclipse
website. You will see two versions of Eclipse: Eclipse Standard and Eclipse IDE for
Java EE Developers.
Extract the downloaded Eclipse zip folder to install Eclipse. Here we do not get
any setup file to install; all we have to do is extract the zip folder and configure the
eclipse.ini file to start using it. You can extract the Eclipse folder to any location you
want. Once you extract the folder, open it; inside, you will see the eclipse.ini file.
Open that file.
Open Eclipse, set the workspace where you want to store the program files, set the
JDK path, and create a project.
11.2 User help
Running the System
Step One: Open Eclipse and open MySQL.
Step Two: Run the project and open it in a browser.
CHAPTER 12
CONCLUSIONS & FUTURE WORK
We proposed a two-phase computation framework to instruct data servers to aggregate
nodes' private data and compute the average value collaboratively. More importantly,
through one-shot noise perturbation, nodes obtain heterogeneous privacy guarantees
against different types of privacy violators. We also derived the closed-form expres-
sions of the two PPDs and the computation accuracy of the proposed framework. Then,
in order to efficiently resolve the conflict between privacy and accuracy, we devised an
incentive mechanism for data servers, which provides optimal computation accuracy
when servers have a fixed incentive budget. Lastly, extensive simulations verified the
obtained theoretical results.
For future investigations, we will further analyze the effect of the number of nodes re-
porting data on computation accuracy, which is beneficial to the co-design of an incentive
scheme to motivate nodes' participation. Also, to simultaneously maximize the rewards
for nodes and minimize the computation error for the server, a game-theory-based
incentive mechanism will be investigated. We are also interested in evaluating
the performance of the proposed privacy-preserving collaborative computation frame-
work in a real data center environment.
ANNEXURE A
LABORATORY ASSIGNMENTS ON
PROJECT ANALYSIS OF ALGORITHMIC
DESIGN
To develop the problem under consideration and justify its feasibility using the concepts
of the knowledge canvas and the IDEA Matrix. Case studies for the IDEA Matrix and
knowledge canvas model are given in this book. The IDEA Matrix is represented in the
following form. The knowledge canvas represents the identification of an opportunity
for a product. Feasibility is represented with respect to the business perspective.
A.1 THEORY
A.1.1 FEASIBILITY STUDY
A feasibility study is carried out to select the best system that meets performance re-
quirements. The main aim of the feasibility study activity is to determine whether it
would be financially and technically feasible to develop the product. The feasibility
study activity involves the analysis of the problem and collection of all relevant infor-
mation relating to the product, such as the different data items which would be input to
the system, the processing required to be carried out on these data, the output data
required to be produced by the system, as well as various constraints on the behavior of
the system.
Technical Feasibility - This is concerned with specifying the equipment and software
that will successfully satisfy the user requirements. The technical needs of the system
may vary.
Economic Feasibility - Economic analysis is the most frequently used technique for
evaluating the effectiveness of a proposed system. More commonly known as cost/benefit
analysis, the procedure is to determine the benefits and savings that are expected from
a proposed system and compare them with the costs.
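The comparison described above reduces to simple arithmetic. The sketch below (all figures and names hypothetical, for illustration only) totals expected benefits and savings and compares them with costs:

```java
// Cost/benefit sketch for economic feasibility analysis.
public class CostBenefit {

    // Feasible when expected benefits plus savings exceed costs.
    static boolean isEconomicallyFeasible(double benefits, double savings,
                                          double costs) {
        return (benefits + savings) > costs;
    }

    public static void main(String[] args) {
        // Hypothetical figures: 50000 + 20000 = 70000 > 60000.
        System.out.println(isEconomicallyFeasible(50000, 20000, 60000)); // true
    }
}
```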
Operational Feasibility - This is mainly related to human, organizational, and political
aspects.
ANNEXURE B
LABORATORY ASSIGNMENTS ON
PROJECT QUALITY AND RELIABILITY
TESTING OF PROJECT DESIGN
It should include assignments such as
• Use of above to draw functional dependency graphs and relevant Software mod-
eling methods, techniques including UML diagrams or other necessities using
appropriate tools.
• Testing of the project problem statement using generated test data (using mathe-
matical models, GUI, and function-testing principles, if any), selection and appropriate
use of testing tools, and testing of UML diagrams' reliability. Also write test cases
[black-box testing] for each identified function. You can use Mathematica or an
equivalent open-source tool for generating test data.
ANNEXURE C
PROJECT PLANNER
Figure C.1: Timeline Chart (I) (Actual Days)
Indicates Milestone:
M1: Requirement Gathering
M2: UML diagrams
ANNEXURE D
REVIEWERS COMMENTS OF PAPER
SUBMITTED
(At-least one technical paper must be submitted in Term-I on the project design in the
conferences/workshops in IITs, Central Universities or UoP Conferences or equivalent
International Conferences Sponsored by IEEE/ACM)
1. Paper Title: Privacy Preserving for Efficient Incentive Mechanism with Collabo-
rative Computing
Sinhgad Technical Education Society's
SMT. KASHIBAI NAVALE COLLEGE OF ENGINEERING, PUNE
DEPARTMENT OF COMPUTER ENGINEERING
In Association with
THE INSTITUTION OF ENGINEERS (INDIA)
ICINC-2021
CERTIFICATE OF APPRECIATION
This is to certify that Anil Singh, Dipti Bhalerao, Kajal Wagh, and Punam Bhagwat
have each presented the paper titled "Privacy Preserving for Efficient Incentive
Mechanism with Collaborative Computing".
Co-ordinator | Co-ordinator | Convener | Vice-Principal | Principal
CHAPTER 13
PLAGIARISM REPORT
13.1 ABSTRACT PLAGIARISM REPORT
Plagiarism: 0% | Unique: 100% | Plagiarized Sentences: 0 | Unique Sentences: 11
Shared processing utilizes various information workers to together finish information examination, e.g., factual
investigation and deduction. One significant deterrent for it lies in privacy concern, which is straightforwardly connected
with hub's interest and the loyalty of got information. Existing privacypreserving ideal models for distributed computing
and appropriated information conglomeration just give hubs homogeneous privacy insurance without thought of hubs'
various trust degrees to various information workers. We propose a twophase system that processes the normal worth
while preserving heterogeneous privacy for hubs' private information. The new test is that in the reason of meeting
privacy necessities, we ought to ensure the proposed system has a similar calculation exactness with existing privacy
mindful arrangements. In this paper, hubs get heterogeneous privacy security despite various information workers
through oneshot commotion annoyance. In light of the meaning of KL privacy, we infer the insightful articulations of the
privacy preserving degrees (PPDs) and evaluate the connection between various PPDs. At that point, we get the shut
structure articulation of calculation precision. Moreover, an effective motivation component is proposed to accomplish
improved calculation precision at the point when information workers have fixed spending plans. At long last, broad
recreations are directed to check the got hypothetical outcomes. To give an effective answer for the contention, we
embrace a motivator component to remunerate hubs' privacy misfortune.