
Bon Maharaj Engineering College

(Faculty of Computer Science and Engineering)

Zero-Knowledge protocols, Plagiarism Checking and Information Security


Analysis and Possibilities

For the award of the degree of

Master of Technology (M. Tech.)

Submitted by

SHIVANSHI SRIVASTAVA

(Roll Number: 1536710502)

CANDIDATE DECLARATION

I hereby declare that this thesis entitled Zero-Knowledge protocols,


Plagiarism Checking and Information Security Analysis and Possibilities is my
own work, that I have not presented it elsewhere for examination purposes and that I
have not used any source or aids other than those stated. I have marked verbatim and
indirect quotations as such.

It is being presented solely by me for the award of the degree of Master of


Technology in CSE, submitted in the Department of CSE, Bon Maharaj Engineering
College, Vrindavan (Affiliated to Dr. A. P. J. Abdul Kalam Technical University,
Uttar Pradesh, Lucknow) and is carried out under the guidance of Prof. Sunil Kumar
Verma (Head of Department) of Computer Science and Engineering, Bon Maharaj
Engineering College, Vrindavan.

Date: Shivanshi Srivastava

Roll. No.: 1136710502

CERTIFICATE

This is to certify that Shivanshi Srivastava has carried out the work embodied
in this thesis entitled Zero-Knowledge protocols, Plagiarism Checking and
Information Security Analysis and Possibilities under my guidance and
supervision during the academic session 2017-18 in fulfillment of the requirement for
M. Tech. Final Year C.S.E. from Bon Maharaj Engineering College. The work in this
thesis is original; I am completely satisfied with it, and wish her all success in her
future life.

Mr. S.K.VERMA MR. SUKRIT GOSWAMI

(Head of Dept. - CSE) (PROJECT COORDINATOR)

ACKNOWLEDGEMENT

I wish to express my deepest gratitude to Prof. S. K. Verma, Faculty of Computer


Science and Engineering at Bon Maharaj Engineering College, Vrindavan for
encouraging me to take up this thesis work, providing me with all the necessary inputs
throughout this phase and directions to complete it. I sincerely thank Prof. Sukrit
Goswami, for his constant guidance, ideas, opinions, encouragement and most
importantly his time for successful completion of my master thesis.

I would also like to take this opportunity to express my appreciation towards


the library services in college and the Internet, which have been of great help in
researching various topics with their vast resources.

My heartfelt thanks to my family and friends for their unparalleled support and
encouragement throughout the journey of this master program!

ABSTRACT

Behind every research work, there are months of experimentation, hard work,
exploration and brainstorming. Information security, here too, is important and
viable. Moreover, every scholar checks whether his/her research work shows any kind
of plagiarism or not.

During the process, the thesis or research work is subjected to a vulnerable
environment. The verifier, i.e. the tool, may or may not steal the information, and in
the worst case could ruin the person's whole career or hard work. Any confidential or
research-based document subjected to plagiarism analysis passes through the same
situation. This whole scenario forms the basis of my thesis.

As the thesis proceeds, we will analyze different methods for plagiarism detection
and multiple zero-knowledge protocols, and examine how we can improve plagiarism
checking for everyone. The thesis will also offer some suggestions to be implemented
along with the current protocols, in order to reduce or eliminate the vulnerabilities
related to information misuse, theft, or unauthorized access.

Plagiarism and plagiarism checkers are both threats to academic integrity.
However, checking for plagiarism is important, as it detects whether a work is original
or not. So we can only try to enhance the process of plagiarism checking, instead of
avoiding it.

The motive of this research work is to make plagiarism checking a fair practice
for every scholar and every owner of original work. It will help in recognizing
original works and ensure confidentiality and information security for everyone.

CONTENTS

Candidate Declaration .......................................................................................................ii

Certificate ........................................................................................................................ iii

Acknowledgements ........................................................................................................... iv

Abstract ............................................................................................................................... 5

Contents .............................................................................................................................. 6

1. Foreword ..................................................................................................................... 8

2. Plagiarism - Introduction ........................................................................................... 9

3. Reasons for Dishonesty .............................................................................................. 10

4. How Plagiarism Affects the Original Creators and promotes Cheating


Culture ............................................................................................................................. 11

5. Methods of Plagiarism Detection ............................................................................ 12

6. Approaches of Plagiarism Detection ....................................................................... 14

7. Methods for Plagiarism Detection ......................................................................... 15

8. Software Plagiarism Detection Tools ...................................................................... 16

9. Protocols for Plagiarism Detection ......................................................................... 19

10. Working of Plagiarism Detection tools ................................................................... 22

11. Zero-Knowledge Protocols Investigation ............................................................ 24

11.1 Introduction ................................................................................................... 24

11.2 Zero knowledge Terminology ....................................................................... 25

11.3 Proof of Knowledge ...................................................................................... 27

11.4 Proof of Identity ............................................................................................ 29

11.5 Feige-Fiat-Shamir Proof of Identity .............................................................. 30

11.6 Guillou-Quisquater Proof of Identity ............................................................ 31

11.7 Schnorr's Mutual Digital Signature Authentication ...................................... 32

11.8 Hamiltonian Cycles ....................................................................................... 32

11.9 Case Study: Smart Cards ............................................................................... 33

11.10 Conclusion ................................................................................................. 35

12. Role of Zero-Knowledge Protocols in Plagiarism Checking ................................ 36

12.1 Interactive Proof System ............................................................................... 36

12.2 Chances of Vulnerability............................................................................... 37

13. Improving Plagiarism Detection Protocols ............................................................ 38

14. Benefits of increasing Information Security during Plagiarism checks .............. 40

15. Conclusion ................................................................................................................. 41

1. Foreword

Information security is important for every academic work, scholarly article,
confidential document and other secret papers. While checking the uniqueness of the
above-mentioned documents, we unknowingly allow the tools to read the information
present in them. Such tools open up paths for intrusion and theft very easily.

To avoid this situation, zero-knowledge protocols are introduced. Not every
commonly used tool is guaranteed to implement zero-knowledge protocols, but the
approach has provided a positive way to overcome information security threats to
some extent.

Our area of research will revolve around:

Plagiarism
Importance of Plagiarism Detection
How Plagiarism Affects the Original Creators and Promotes a Cheating Culture
Tools for Plagiarism Detection
Protocols for Plagiarism Detection
Zero-Knowledge Protocols Investigation
Role of Zero-Knowledge Protocols in Plagiarism Checking
Improving Plagiarism Detection Protocols
Benefits of Increasing Information Security during Plagiarism Checks
Conclusion

2. Plagiarism - Introduction

Plagiarism, as per the dictionary, is described as

The act of appropriating the literary composition of another author, or


excerpts, ideas, or passages therefrom, and passing the material off as one's own
creation.1

It is considered a way of stealing other people's ideas and representing them
as one's own work. The method is used by many writers, students and scholars, and in
every industry, just to save effort.

Plagiarism is not a legal term and is especially used in the online world.
However, it is sometimes also used in courts or lawsuits in place of the term
copyright infringement, when someone tries to represent someone else's intellectual
property as his/her own creation. The phenomenon can occur in any creative
designing, writing or research field, due to the availability of someone's work in the
public domain.

In a broader sense, plagiarism is taking the writings or literary concepts (a


plot, characters, words) of another and selling and/or publishing them as one's own
product. Quotes which are brief or are acknowledged as quotes do not constitute
plagiarism. The actual author can bring a lawsuit against the plagiarist for
appropriation of his/her work, and recover the profits. Normally plagiarism is not a crime,
but it can be used as the basis of a fraud charge or copyright infringement, if prior
creation can be proved.2

The practice of copy-pasting has affected every section of academic and
non-academic industries. The Internet has made copying easy, and hence it is one of
the major promoters of academic plagiarism. Discouraging students from working and
researching, the online landscape allows them to easily enter any public directory.
However, the whole responsibility lies with the researchers themselves, in how they
make use of the Internet.

1
West's Encyclopedia of American Law, edition 2. S.v. "Plagiarism." Retrieved August 2, 2017 from
http://legal-dictionary.thefreedictionary.com/Plagirism
2
West's Encyclopedia of American Law, edition 2. S.v. "Plagiarism." Retrieved August 2, 2017 from
http://legal-dictionary.thefreedictionary.com/Plagirism

To a large extent, it is a way of knowingly compromising information security,
and hence it is termed academic dishonesty and an electronic crime.

The content formulated through the hard work of honest people, who spend
most of their time and energy creating masterpieces, is undone by just a few
Google searches. This indirectly violates their basic rights and ultimately discourages
the creation of good and genuine content.3

3. Reasons for Dishonesty

Cheating is a global phenomenon; only the method changes according to the region
and context. Many people, even high performers, indulge in copying. The reasons they
give for opting for this method relate to the quantity of work:

The overwhelming tasks


Lack of time
Lack of motivation
Lack of guidance
Exciting scholarships
Urge to maintain a good social status
Easy availability of helping material

3
Plagiarism Checker X, blog post "Why plagiarism worth your concern". Retrieved August 1, 2017
from https://plagiarismcheckerx.com/blog/why-plagiarism-worth-your-concern/

4. How Plagiarism Affects the Original Creators and promotes
Cheating Culture

There are many reasons why plagiarism should be immediately detected and the
practitioner punished. Some of these are:

No recognition for the actual owner of the work

Non-talented people qualify for graduate or post-graduate programs without
making any sensible effort

These people, when employed, perform badly due to their lack of practical
knowledge

It reduces the chance of the student learning about the research he/she
should have carried out

It removes geniuses from the market, at the same time making it tough to
distinguish real talent from fake

Importance of Plagiarism Detection

The reasons are self-explanatory:

It identifies the real authors or inventors

It allows recognition for the actual creators

It increases the talented workforce in the market while filtering out the others

5. Methods of Plagiarism Detection

To combat cheating, it is necessary to make the plagiarism statistics more "optimistic".

A few methods for this are:

Assigning tests and essays that are cheat-resistant

Professors of upper-level courses usually have less to fear when it comes to


plagiarism not only because most of the students have chosen to enroll in the classes
for their majors or areas of interest (as opposed to being forced to take a class to fulfill
a core requirement), but also because the assignments aren't based on
regurgitating class notes or numbers. Many professors encourage students to use their
books or notecards, or even allow students to revise essays a few times, before they
receive a final grade because it is the nature in which they are applying specific
theories or concepts that is being tested, which isn't something that can easily be
copied from another source. If a student were to plagiarize, it would be easier to spot
and follow-up on, simply by asking a few questions that the author of the test or essay
would be able to answer.

Using plagiarism-detection software

Several companies have seized the opportunity to assist professors in their


quest to eliminate cheating. Plagiarism Software can easily detect content that has
been plagiarized from the web and can even be customized to review text books and
original content (like course materials). Some teachers require students to run their
essays through these tools and turn in the subsequent report with their essay to prove
they have taken the time to eliminate any copied content. However, as discussed
above, this software might not deter students who believe their cheating tactics to be
"too good" to be detected.

Proctoring

It's no longer up to the professor to prove the identity of a student who is


taking an online test. Several companies now specialize in online identity
confirmation techniques, often using a webcam to verify that a student is who he or
she says s/he is, thus helping to eliminate cheating in online classes.

Virtual Student Monitors

Especially effective in online courses, universities can work together with


teachers to have a virtual student monitoring program, where "fake" student profiles,
monitored by university officials, "attend" classes to keep an eye and ear on any
activity that might lead to cheating among the other students.

Guilt

Appealing to their students' sense of fairness and reminding the class of the
negative effects of their classmates' cheating actually can make a difference,
especially if the professor employs old-fashioned, tried-and-true methods of grading,
like grading on a curve so that the student with the highest score on a test or paper sets
the standard for all other students. Students are less likely to cheat or, at least, more
likely to cheat less, if they know they could face consequences from their classmates
in addition to their professor.

6. Approaches of Plagiarism Detection

Manual detection

Done manually by humans, this approach is suitable for lecturers and teachers
checking students' assignments, but it is ineffective and impractical for a large
number of documents: it is uneconomical, requires great effort, and wastes time.

Automatic detection (Computer assisted detection)

There are many software tools used in automatic plagiarism detection,
like Turnitin, Edutie, PlagiServe, the Glatt Plagiarism Self-Detection program (GPSD)
and many more. This thesis compares the various software tools for detecting
plagiarism.

7. Methods for Plagiarism Detection

Plagiarism detection software can be classified into four main categories4, namely:

Online or remote search tools


Stand-alone desktop software
Web search engines
Subscription databases.

Online or remote search tools

Developing web systems for plagiarism detection overcomes machine
capability problems, facilitates the availability of the system to many users, and
extends the search for plagiarized resources to the World Wide Web easily. Turnitin is
the best-known commercial plagiarism detection system, to which many universities
in the UK and USA subscribe. It uses an enormous database drawn from the Internet
and previous student works, which is compared with the query document. Plagiserve
is another online plagiarism detection service.

Stand-alone desktop software

Stand-alone software is developed to be installed on computers. EVE (the
Essay Verification Engine) is a desktop application, but it has the capability to make
a large number of searches on the Internet to locate matches between sentences in the
query document and suspected websites. Thus, in order for EVE to work, the machine
must be connected to the Internet. Other examples include CopyCatch Gold, the Glatt
Plagiarism Screening Program (GPSP) and Word Check Keyword DP.

Web search engines

Internet search engines such as Google, AltaVista, and Yahoo can be used as an
alternate method to detect suspected plagiarism without the need to download
software or register for a detection service. Examples of such systems include Google,
AltaVista, LookSmart, Amazon and many more.

Subscription databases

4
R. Symons, "Teaching and Learning Committee Plagiarism Detection software Report," University of
Sydney, 2003.

Finally, subscription databases of scholarly and popular literature, which include
abstracts or full texts of articles, may be searched, particularly if assignments relate to
the location of such material. These are specialized databases to which institutions
subscribe and which are not available on Google or other common search engines.
They include online encyclopaedias, periodical indexes, business directories and other
resources, broken down by subject to help users find exactly what they need.

8. Software Plagiarism Detection Tools

1. TURNITIN

Turnitin.com uses digital fingerprinting to match submitted papers against internet
resources and against an in-house database of previously submitted papers. The
website provides online tutorials in the use of the service for both lecturers and
students, with a free trial period of one month. Turnitin.com has the highest rate of
detection amongst subscription detection tools. Papers can be submitted individually
by either student or lecturer. All papers are archived for future checking, a feature
which is particularly useful if copying of previous students' papers is suspected.
Reports are provided within 24-48 hours and show similarities of the submitted text to
other sources.

2. PLAGISERVE

Plagiserve.com is a free service which searches the internet for duplicates of


submitted papers, analyses them, and provides evidence of plagiarism to the lecturer.
It has an extensive database of 90,000 papers, essays and Cliff Notes study guides,
and papers from all known paper mills. Reports are generated in 12 hours. The service
is only available through its website, and papers must be submitted in one batch.

3. GLATT PLAGIARISM SCREENING PROGRAM (GPSP)

The Glatt Plagiarism Screening Program (GPSP) evaluates a student's knowledge
of their own writing by producing a test whereby every fifth word of the student's
paper is eliminated and replaced with a blank which the student has to fill in.
Accuracy and speed in replacing the blanks are evaluated against a proprietary
database, and a probability score is returned immediately. Useful for detecting
plagiarism where the original source cannot be located through other means such as
internet search engines and other plagiarism detection services, its limitations lie in
not being able to identify the source of the suspect text and in the requirement for
students to sit a test.
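The interrogative idea behind GPSP can be illustrated with a minimal Python sketch that blanks out every fifth word and scores a respondent's answers. The function names and the simple accuracy score are invented for this illustration; GPSP's actual test generation and its proprietary probability model are not public.

```python
def make_cloze(text, gap=5):
    """Blank out every `gap`-th word; return the test text and the answers."""
    words = text.split()
    answers = {}
    for i in range(gap - 1, len(words), gap):
        answers[i] = words[i]
        words[i] = "_____"
    return " ".join(words), answers

def score(answers, responses):
    """Fraction of blanks filled in correctly (case-insensitive). GPSP also
    weighs response speed; that is omitted here."""
    correct = sum(1 for i, w in answers.items()
                  if responses.get(i, "").strip().lower() == w.strip().lower())
    return correct / len(answers) if answers else 0.0

sample = ("Plagiarism is the act of appropriating the literary composition "
          "of another author and passing the material off as one's own creation.")
test_text, answers = make_cloze(sample)
print(test_text)
print("Score for a perfect respondent:", score(answers, answers))
```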

4. COPYCATCH GOLD

CopyCatch Gold is stand-alone desktop software which can be either installed on


a single PC or on a network. It detects collusion between students by checking
similarities between words and phrases within work submitted by one group of
students.

5. EVE2 - ESSAY VERIFICATION ENGINE

EVE2 is a Windows-based system, installed on individual workstations. It is not
easily installed on servers. Papers are submitted by cutting and pasting plain text,
Microsoft Word, or WordPerfect documents into a text box. The program then
searches internet resources for matching text. Reports are provided within a few
minutes, highlighting suspect text and indicating the percentage of the paper that is
plagiarized.

6. GPLAG

GPlag was developed by Chao Liu, Chen Chen and Jiawei Han at the University of
Illinois at Urbana-Champaign in 2006. GPlag detects plagiarism by mining program
dependence graphs (PDGs). A PDG is a graphic representation of the data and
control dependencies within a procedure. The PDGs developed from the original
program and a suspect program are then checked for copying via graph
isomorphism. In order to make GPlag scalable to large programs, a statistical lossy
filter is proposed to prune the plagiarism search space.

7. JPLAG

JPlag was developed by Guido Malpohl at the University of Karlsruhe. In 1996 it
started out as a student research project, and a few months later it evolved into a first
online system. In 2005 JPlag was turned into a web service by Emeric Kwemou and
Moritz Kroll. JPlag converts programs into token strings that represent the structure of
the program, and can therefore be considered as using a structure-based approach.

For comparing two token strings JPlag uses the "Greedy String Tiling" algorithm as
proposed by Michael Wise, but with different optimizations for better efficiency. JPlag
is a system that finds similarities among multiple sets of source code files.

JPlag currently supports Java, C#, C, C++, Scheme and natural language text. JPlag
has a powerful graphical interface for presenting its results. It takes as input a set of
programs, compares these programs pairwise (computing for each pair a total
similarity value and a set of similarity regions), and provides as output a set of HTML
pages that allow for exploring and understanding the similarities found in detail.
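JPlag's engine proper uses Greedy String Tiling over these token strings. As a rough, simplified stand-in, the following sketch collapses identifiers and literals into generic tokens and measures similarity with Python's standard difflib; it only illustrates why renaming variables does not hide structural similarity, and is not JPlag's actual algorithm.

```python
import re
from difflib import SequenceMatcher

KEYWORDS = {"if", "else", "for", "while", "return", "int"}

def tokenize(source):
    """Crude structural tokenizer: keep keywords and punctuation, collapse
    identifiers and literals. (JPlag's front ends are language-aware.)"""
    tokens = []
    for tok in re.findall(r"\w+|[{}();=+\-*/<>]", source):
        if tok in KEYWORDS:
            tokens.append(tok)
        elif tok.isdigit():
            tokens.append("NUM")    # every number becomes the same token
        elif tok.isidentifier():
            tokens.append("ID")     # every identifier becomes the same token
        else:
            tokens.append(tok)
    return tokens

def similarity(a, b):
    """A total similarity value in [0, 1] for two token strings."""
    return SequenceMatcher(None, tokenize(a), tokenize(b)).ratio()

original = "int sum = 0; for (int i = 0; i < n; i = i + 1) { sum = sum + i; }"
renamed  = "int total = 0; for (int k = 0; k < m; k = k + 1) { total = total + k; }"
print(similarity(original, renamed))   # 1.0: renaming does not hide structure
```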

8. MOSS

Moss is an acronym for Measure Of Software Similarity. Moss was developed in
1994 at Stanford University by Aiken et al. It is provided as a web service that
can be accessed using a script. The MOSS submission script works for Unix/Linux
platforms and may work under Windows with Cygwin. To measure similarity
between documents, Moss compares standardized versions of the documents:
Moss uses a document fingerprinting algorithm called winnowing. Document
fingerprinting is a technique that divides a document into contiguous substrings,
called k-grams, with k being picked by the user.

Every k-gram is hashed, and a subset of all the k-gram hashes is selected as the
document's fingerprint. Moss is an automatic system for determining the similarity of
programs. Moss can currently analyse code written in the following languages: C,
C++, Java, C#, Python, Visual Basic, JavaScript, FORTRAN, ML, Haskell, Lisp,
Scheme, Pascal, Modula-2, Ada, Perl, TCL, MATLAB, VHDL, Verilog, Spice, MIPS
assembly, 8086 assembly and HCL2.
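The winnowing step described above can be sketched in a few lines of Python. The k-gram size, window size, and hash function below are illustrative choices for this example, not MOSS's actual parameters.

```python
def kgrams(text, k=5):
    """All contiguous k-character substrings of the normalized text."""
    text = "".join(text.lower().split())
    return [text[i:i + k] for i in range(len(text) - k + 1)]

def winnow(text, k=5, w=4):
    """Winnowing: hash every k-gram, then keep the minimum hash in each
    window of w consecutive hashes. The selected (position, hash) pairs
    form the document's fingerprint."""
    hashes = [hash(g) & 0xFFFFFFFF for g in kgrams(text, k)]
    fingerprint = set()
    for i in range(len(hashes) - w + 1):
        window = hashes[i:i + w]
        j = min(range(w), key=lambda idx: window[idx])
        fingerprint.add((i + j, window[j]))
    return fingerprint

def overlap(doc_a, doc_b):
    """Fraction of doc_a's fingerprint hashes also present in doc_b's."""
    fa = {h for _, h in winnow(doc_a)}
    fb = {h for _, h in winnow(doc_b)}
    return len(fa & fb) / len(fa) if fa else 0.0

print(overlap("the quick brown fox jumps over the lazy dog",
              "the quick brown fox leaps over the lazy dog"))
```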

9. SIM

SIM is a software similarity tester for programs written in C, Java, Pascal,
Modula-2, Lisp, Miranda, and for natural language. It was developed in 1989 by Dick
Grune at the VU University Amsterdam. The process SIM uses to detect similarities is
to tokenize the source code first, then to build a forward reference table that can be
used to detect the best matches between newly submitted files and the texts they need
to be compared to. SIM detects similarities between programs by evaluating their
correctness, style, and uniqueness.

10. GOOGLE

Google is an American multinational corporation specializing in Internet-related
services and products. These include online advertising technologies, search, cloud
computing, and software. It can be used as an alternate method to detect suspected
plagiarism.

9. Protocols for Plagiarism Detection

The types of algorithms that practitioners predominantly deploy to detect
plagiarism in natural language productions divide into two broad categories:
corpus-based and interrogatory. The two categories reflect different approaches and
philosophies.

The corpus-based approach encourages the global submission of documents that


respond to writing assignments. Most typically, faculty, as a course requirement, have
students submit their work to a plagiarism detection service just prior to the
submission deadline. Faculty then examine the works that have been identified as
meeting criteria for academic dishonesty, and those that are not found to be false
positives are further investigated.

This approach is characterized by universality of application and by the placing of


computer analysis before human scrutiny. The interrogative approach is based on the
assumption that plagiarism is revealed when a student is demonstrably unfamiliar
with a suspected submission. A standard implementation has a two-tier form.

Faculty read student submissions and, in the course of that reading, identify
suspect documents. Under the supervision of the faculty member, software removes
words from the suspected document, replaces them with blanks, and prompts the
student to fill in the blanks. The program determines familiarity from the speed and
accuracy of the responses. This approach inverts the corpus-based approach. It is
selective rather than comprehensive. It gives priority to human judgment. It situates
plagiarism in human conduct rather than in impersonal pattern finding.

At the highest level of abstraction, corpus-based plagiarism detection software
takes as input a suspect document and an archived corpus of authenticated documents,
compares the suspect document to the corpus, and outputs passages that the suspect
document shares with the corpus and a measure of the likelihood that the author
plagiarized material.

At a more operational level, a corpus-based protocol is implemented in several


ways. TurnItIn.com, the most high-profile company in the field, employs document
source analysis to generate digital fingerprints of documents, those submitted for
authentication, those in the archive, and those available through ProQuest, and it
complements the search of its in-house archive with searches of the World Wide Web
(iParadigms, 2007).

It must be noted that documents present neither in ProQuest nor on the Web are
beyond the reach of TurnItIn.com. The Essay Verification Engine, which employs the
World Wide Web as its corpus, uses techniques to exhaustively target the sites that
are the most likely sources of plagiarized material and compares their content with the
suspect work (CANexus, 2007).

Glatt Plagiarism Services exemplifies an interrogative approach to plagiarism


detection. Faculty have students submit a work suspected of being plagiarized to the
Glatt Plagiarism Screening Program, which is free standing, non-Web-based software.
The program replaces every fifth word of the suspected paper with a standard size
blank, and the student is then prompted to supply the missing words (Glatt, 2007).
The number of correct responses, the amount of time intervening between responses,
and various other factors are considered in calculating a plagiarism probability index.

Although the programs mentioned above illustrate different approaches to


plagiarism detection, they share a common characteristic. They are deployed by
commercial, proprietary organizations that guard their intellectual property and that
keep private any quality control studies that they conduct. These are attributes that
clash with the free inquiry and the openness associated with institutions of higher
education. Faculty members who employ these programs do not know in detail how
they work, nor do they have any means of judging their efficacy.

Niezgoda and Way (2006) provide a partial corrective for these deficits in their
description of SNITCH, Spotting and Neutralizing Internet Cheaters. As the name
suggests, SNITCH uses Web pages on the Internet as its corpus, and Niezgoda and
Way, in providing some insight into the program, implicitly provide insight into other
programs in its category.

The authors provide a detailed exposition of the corpus-based algorithm that they
use to detect plagiarism in scientific and technological writing, an arena of discourse
that poses particular problems for plagiarism detection, and they provide data
regarding the effectiveness of their algorithm as measured against an oracle and
against a competitor, the Essay Verification Engine, Eve 2.

SNITCH employs a sliding window technique that determines the average length
of words within a succession of windows (Niezgoda and Way, 2006). SNITCH reads
a specified number of words, determines the average number of letters per word, and
associates that average with the current window. The procedure is repeated, moving
the window forward one word with each iteration until the end of the document is
reached.

Windows are ranked from highest weight to lowest, and the top-ranked windows
are then submitted to search engines to detect matches. SNITCH's success rate for
detecting plagiarism in actual student submissions ranges from 40% for papers with
minimal plagiarism to 63% for papers with a high level of plagiarism, and it produced
no false positives (Niezgoda and Way, 2006). SNITCH outperformed Eve 2, a
commercial program, in detecting plagiarism in submissions with low actual
plagiarism, 40% compared to 12%, and its performance against submissions with a
high level of plagiarism, 63%, was almost indistinguishable from that of Eve 2 at
63%. In terms of revealing what the program does and how well it does it, Niezgoda
and Way provide a model approach, one that contrasts with the secrecy of commercial
software.
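Since the authors spell out the weighting and ranking steps, they are easy to sketch. The following Python fragment assumes, as the description above states, that a window's weight is simply the average word length within it; SNITCH's query generation against search engines is omitted.

```python
def window_weights(text, window_size=10):
    """Average word length for each window of `window_size` consecutive
    words, sliding forward one word at a time."""
    words = text.split()
    weights = []
    for i in range(len(words) - window_size + 1):
        window = words[i:i + window_size]
        weights.append((sum(len(w) for w in window) / window_size,
                        " ".join(window)))
    return weights

def top_windows(text, n=3, window_size=10):
    """The n highest-weighted windows, which would then be submitted
    to a search engine to detect matches."""
    return sorted(window_weights(text, window_size), reverse=True)[:n]

essay = ("the interrogative approach presupposes demonstrable unfamiliarity "
         "whereas corpus based detection prioritizes impersonal comprehensive "
         "pattern finding over selective human judgment in every case")
for weight, phrase in top_windows(essay):
    print(round(weight, 2), phrase)
```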

10. Working of Plagiarism Detection tools

Curious about how the current plagiarism detection tools work, I decided to start
conversations with the support teams of several such famous tools' websites.

From the conversations, it was clear that they do not store the information
present in the documents checked through them.

11. Zero-Knowledge Protocols Investigation

Zero knowledge protocols play an important role in the domain of cryptography.


They allow for one party, the verifier, to identify and authenticate another party, the
prover. At the end of this process, the verifier has only assessed the validity of the
prover's assertion and leaves with zero knowledge.

The next five sections deal with the major zero knowledge protocols. Both
abstract and concrete examples are explored in order to further one's understanding of
these protocols. The following section deals with proving the existence of
Hamiltonian cycles while maintaining zero knowledge. The final section explores a
case study of smart cards and how the implementation of zero knowledge protocols
can increase their security. The chapter concludes by highlighting the significance of
the applications of zero knowledge as the demand for information security continues
to increase in the future.

11.1 Introduction

Zero knowledge proofs show a statement to be true without revealing anything


other than the veracity of the statement to be proven. The word "proof" is not used in
the traditional mathematical sense (e.g. a proof of the Pythagorean Theorem). Instead,
a zero knowledge proof is a protocol in which a series of interactions takes place
between two parties, the verifier and the prover.[7] Throughout these interactions,
four essential laws must remain unbroken in order to classify the system as zero
knowledge.[1] The prover cannot learn anything from the verifier. The prover cannot
cheat the verifier. The verifier cannot cheat the prover.[9] The verifier cannot pretend
to be the prover to any third party. At the end of the interaction, the verifier has only
determined the validity of the prover's assertion. Thus, the verifier leaves the system
with zero knowledge.

Zero knowledge protocols function as useful constructs for analyzing theoretical
cryptographic situations, but they also function as practical tools for constructing
secure systems. The main incentive for using zero knowledge protocols over more
commonly used protocols, such as the RSA public-key cryptographic protocol and the
symmetric protocol families, lies in their computational requirements: they require
only a fraction of the computing power used by public-key protocols. Zero
knowledge allows a protocol to be split into an iterative process of many light
transactions rather than one "heavy" transaction.[1] For many systems, zero
knowledge protocols are practical and efficient, and therefore the economical protocol
of choice.

In the modern era of online auctions, e-voting, and internet banking, a
certain paranoia is healthy to ensure the legitimacy and security of information.[2]
Certain procedures must be followed to foil malicious individuals' attempts at
acquiring unauthorized knowledge. Zero knowledge procedures involve a series of
proving and verifying between two parties. Because zero knowledge protocols yield
nothing beyond the validity of an assertion, they make excellent tools for ensuring
information privacy.

11.2 Zero knowledge Terminology

Due to its abstract, often metaphorical manifestation in cryptography textbooks,


zero knowledge terminology must be understood before it can be used to analyze
problems in cryptography.

Though cryptographic protocols are commonly viewed as existing only between


two parties, a total of four can exist in some cases. Cryptographers give these parties
alliterative titles for convenience. The parties are: Peggy the Prover, Victor the
Verifier, Eve the Eavesdropper, and Maggie the Malice. Peggy the Prover makes a
claim that she wants to prove to Victor the Verifier without actually revealing the
information or secret behind the claim. Victor puts Peggy through a series of questions
to assess the validity of Peggy's claim. Victor does not learn Peggy's secret and cannot
masquerade as Peggy to a third party. Eve the Eavesdropper is listening to Peggy and
Victor's conversation. She tries to replay the conversation to a third party, but is
unsuccessful at convincing them of her claim. Maggie the Malice is also listening to
the protocol traffic and tries to manipulate the conversation by modifying, destroying,
and sending extra messages.[1] These names are widely used throughout
cryptography literature and will be used for the remainder of this paper.

The interactions between Peggy and Victor involve three elements: the
secret, the accreditation, and the problem. The secret is a piece of information that
Peggy knows. It can be any piece of information that is useful (e.g. password, an

algorithm). The accreditation is the system of building confidence with each iteration
of the protocol.[1] With each iteration, a successful proof by Peggy increases Victor's
confidence by a power of 2. However, if Peggy is unsuccessful at her proof, Victor's
confidence is reduced to 0 and the entire protocol fails. The problem is Victor's
method of accreditation. His problem asks for one of the multiple solutions to Peggy's
claim. If Peggy does possess the secret, she will always be able to correctly answer
Victor's problem. However, if Peggy does not possess the secret, she will only be able
to correctly answer Victor's problem a fraction of the time (depending on the exact
protocol). Thus, Victor is only able to verify that Peggy holds a secret, if she can
always solve his problem for enough rounds of his accreditation.

Protocol identification schemes must be both "complete" and "sound" to be


classified as being zero knowledge. Complete means that "if the user who tries to
identify himself follows the protocol, then the identification is surely successful".[3]
Completeness holds that an honest verifier will always be convinced of a true
statement by an honest prover.[6] Soundness means that "nobody can identify himself
as somebody else".[3] Soundness holds that a cheating prover can convince an honest
verifier that a false statement is true only with a small probability.[9] If a system is both
complete and sound, and no additional knowledge is gained by either party, the
system can be classified as zero knowledge.

Zero knowledge protocols' practicality extends only to problems that are of the
complexity class NP. NP (nondeterministic polynomial time) complexity means that
"there exists a (polynomial in the length of the input) bound on the number of steps
in each possible run of the machine".[5] If one-way functions exist, any problem in
NP has a zero knowledge proof. Thus, Peggy must be limited to polynomial time: if
Peggy were any more powerful, it would be trivial to demonstrate her knowledge, as
she could already calculate it in every case.[1]

Various levels of zero knowledge exist. There is perfect zero knowledge,


statistical zero knowledge, and computational zero knowledge. In perfect zero
knowledge, Victor can create a phony list of problem solutions which is statistically
identical to Peggy's list. His list is indistinguishable from Peggy's, and therefore, he
cannot prove that he knows the secret to any third-party.[6] Statistical zero knowledge
states that Peggy and Victor's lists are statistically similar and that their inconsistency

is negligible. Computational zero knowledge, also referred to as general zero
knowledge, states that Peggy and Victor's lists are computationally
indistinguishable.[5] Although computational zero knowledge is the most common
form of zero knowledge, examples of perfect zero knowledge and statistical zero
knowledge have been found to exist.

11.3 Proof of Knowledge

Five major zero knowledge protocols are used, each ensuring information
privacy throughout the verification process in its own way. The first (and most
concrete) of such protocols is the Proof of Knowledge. It states that "if Peggy has a
non-negligible chance of making Victor accept, then Peggy can also compute the
secret, from which knowledge is being proved".[1] The definition of this protocol
raises the need to distinguish between two terms that are often misused
interchangeably. Knowledge is very different from information. Knowledge is related
to computational difficulty of publicly known objects, whereas information relates
mainly to objects on which only partial information is known.[4]

Many examples exist for illustrating the Proof of Knowledge. Imagine the
following situation. Peggy claims that she can count the leaves of a big maple tree in a
few seconds without revealing to Victor her method of calculation and without
revealing the number of leaves. To test Peggy's claim, Victor designs a protocol.
When Peggy closes her eyes, Victor either pulls off a leaf or does nothing. Then
Peggy opens her eyes and tells Victor what he did.

Determining whether or not this situation is zero knowledge requires the


definition of what makes a system zero knowledge.[3] First, the system is complete
because if Peggy truly possesses such an ability, she could recognize a difference in
the number of leaves because Victor pulled one off. Or, she could compute that the
number of leaves has remained unchanged and that Victor has done nothing. The
system is also sound because if Peggy doesn't know the secret, she will still have a
50% chance of guessing the answer to Victor's problem on the first try. Because this
system is both complete and sound, it is zero knowledge. With a few rounds of
positive accreditation, Victor should be convinced that Peggy knows the secret. Let us
assume that Victor can afford an error probability of (1/2)^40 in this validation
process. After only 40 rounds of accreditation, he is convinced that Peggy knows the
secret. If he puts Peggy through 100 rounds or 100,000 rounds, the chance of him
making an error is nearly 0. This is modeled by the following equation:

Pr(error) = (1/2)^n,

where n is the number of rounds of accreditation. Given enough rounds of
accreditation, it would be extremely difficult for a cheating Peggy to fool Victor.

One of the most well known examples of the Proof of Knowledge is Jean-Jacques
Quisquater's "magical cave" allegory. The cave has a magical door deep
inside which opens only upon the utterance of a secret word, which Peggy claims to
know. Victor will pay $10 million for the secret word, but he must be
convinced that Peggy knows it. Peggy cannot simply reveal the word to
him, as a dishonest Victor could rescind his monetary offer and leave with both the
secret and all of his money. Victor and Peggy decide to construct a zero knowledge
system so that Peggy can prove to Victor that she knows the secret without actually
telling it to him.[7] They devise the following scheme [7]:

Victor will wait outside the cave while Peggy enters. She chooses either path A
or path B at random while Victor is not looking. Then Victor enters the cave as far
as the fork and announces the path by which he wants Peggy to return. If Peggy
knows the secret word, she will be able to return by either path A or path B regardless
of which path she chose initially. If Victor announces the same path through which
Peggy initially chose to enter, she simply turns around and exits via the same path. If
Victor announces the path that Peggy did not choose, she whispers the secret word
and returns along the desired path.[7] Thus, the system is complete. If Peggy is lying
and does not know the secret word, then she will only be able to return along the
correct path if Victor announces the same path that she chose. The probability of this
happening is 1/2, and with multiple rounds of accreditation, Victor should grow
increasingly confident about whether or not Peggy truly knows the secret. Thus, the
system is sound. Because the system is both complete and sound, it is zero
knowledge. Peggy can successfully prove to Victor that she knows the secret word
without actually telling him what it is. Therefore, this system exemplifies the Proof
of Knowledge protocol.
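The accreditation arithmetic can be checked with a short simulation. This sketch models only the chance structure of the cave protocol, not a cryptographic implementation: an honest Peggy always passes, while a cheating Peggy passes n rounds with probability (1/2)^n.

```python
import random

def cave_round(peggy_knows_secret):
    """One round: Peggy enters by a random path; Victor demands a return
    path. Without the secret word, Peggy can comply only if the demanded
    path happens to match the one she entered by."""
    entered = random.choice("AB")
    demanded = random.choice("AB")
    return peggy_knows_secret or entered == demanded

def run_protocol(peggy_knows_secret, rounds):
    """Victor accepts only if Peggy passes every round."""
    return all(cave_round(peggy_knows_secret) for _ in range(rounds))

trials = 10000
cheater_wins = sum(run_protocol(False, 10) for _ in range(trials))
print("honest Peggy accepted:", run_protocol(True, 40))   # always True
print("cheating Peggy accepted in", cheater_wins, "of", trials,
      "trials")   # about trials * (1/2)**10, i.e. roughly 10
```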

11.4 Proof of Identity

The next zero knowledge protocol, the Proof of Identity, ensures that nobody
can masquerade as Peggy or Victor to any third party.[1] This protocol is used to
solve the cryptographic problem of "Mafia" man-in-the-middle attacks, which is often
modeled by the Chess Grandmaster Problem. The malicious user (Maggie the
Malice) wants to prove that she is better than the champion chess player in her
city, Bob. She sets up two separate, yet concurrent, online matches with both the local
champion and Grandmaster Garry Kasparov. She lets Kasparov move first and relays
his move to the other game with Bob. She then relays Bob's move back to Garry
Kasparov. Maggie is acting as the "man-in-the-middle" and has essentially set up a
match between Kasparov and Bob, where she has access to both boards in play.
Eventually, Maggie will relay Kasparov's checkmate to Bob and be crowned the new
city champ. As expected, Maggie (playing as Bob) will lose to Kasparov. This
scenario raises the security issue of the malicious Maggie having access to the
intermediate stages of accreditation. Cryptographers have decided that the best
countermeasure to this attack would be to impose a time limit for replies, in the hope
that there is not enough time for Maggie to relay the communications.[3]

Another way to thwart a man-in-the-middle Maggie is by using a pair of
security keys. Suppose Alice and Bob share a secret key, and Alice wants to prove the
legitimacy of her identity to Bob. They devise the following scheme. Alice sends Bob
her public identity (her name, Alice). Bob sends Alice a challenge c. Alice encrypts c
and sends E(c) back to Bob. Bob then decrypts E(c) with D(E(c)), and if he obtains c,
he can verify that he is communicating with Alice.[3] This protocol is especially
useful in circumventing Eve's and Maggie's attempts at learning the secret. Suppose the
secret is

E(p) = p + 20.

It follows that

D(p) = p - 20.

Now suppose that Bob sends Alice the challenge

c = 5,

and that this message is also heard by Eve and Maggie.

Alice performs E(c) and returns 25 to Bob.

Upon intercepting the value of E(c), Eve and Maggie can only guess at what the
secret function E really is; it is possible that they are fooled into believing, for
example, that E(p) = 5p, which also maps the challenge 5 to 25.

When Bob receives 25, he applies D to determine that D(25) = 5 = c.

Throughout this process, Eve is prevented from learning the secret, and sending the
preliminary public identity helps prevent Maggie from tampering with the
accreditation process. Therefore, this process preserves zero knowledge, and
demonstrates the protocol of the Proof of Identity.
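The toy scheme above can be written out directly. This sketch uses the example secret E(p) = p + 20; a real deployment would of course use a proper cipher rather than an additive shift.

```python
import random

SHIFT = 20                      # the shared secret from the example above

def E(p):                       # Alice's encryption with the shared key
    return p + SHIFT

def D(p):                       # Bob's matching decryption
    return p - SHIFT

def bob_verifies(encrypt):
    """Bob issues a random challenge and checks that the response
    round-trips through his decryption function."""
    c = random.randint(1, 100)
    response = encrypt(c)       # sent over the (insecure) channel
    return D(response) == c

print(bob_verifies(E))                  # True: Alice knows the secret
print(bob_verifies(lambda p: 5 * p))    # an impostor guessing E(p) = 5p:
                                        # almost always False
```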

11.5 Feige-Fiat-Shamir Proof of Identity

Cryptographers Uriel Feige, Amos Fiat, and Adi Shamir developed the
Feige-Fiat-Shamir Proof of Identity in 1988. It is the best known zero knowledge
proof of identity protocol.

The Feige-Fiat-Shamir Proof relies on the difficulty of finding square roots
of quadratic residues (squares mod n). This protocol is one of the rare examples of
a perfect zero knowledge proof. The proof is split into two steps, the
pre-calculation phase and the identification phase.

In the pre-calculation phase, Peggy chooses two random prime numbers p and q.
She keeps these two numbers as her secret. Then she computes n, the product of p and
q. She randomly chooses a number s in {1, ..., n-1} that is co-prime to n. Two numbers
are co-prime if they do not share any common positive factor other than 1. Her public
key is v = s^2 (mod n). Peggy then chooses a random number r in {1, ..., n-1}. She
computes x = r^2 (mod n) and sends the values n, v, and x to the verifier. This
signifies the start of the identification phase. Victor chooses a number B in {0, 1}
and sends it to Peggy.

If B = 0, Peggy sends y = r to the verifier. If B = 1, then Peggy sends
y = (r · s) mod n. Finally, Victor determines whether or not y^2 = x · v^B (mod n),
where v = s^2 (mod n). The Feige-Fiat-Shamir Protocol can be demonstrated more
clearly by following an example run:

Let p = 5 and q = 7.

Then, n = 35.

Peggy secretly chooses s = 16.

She calculates v = 16^2 mod 35, which equals 11.

Suppose Victor only requires two rounds of accreditation of the protocol in order to
accept Peggy's claim.

Peggy randomly selects r = 10.

She sends x = 10^2 mod 35 = 30 to Victor.

Victor randomly selects B = 0 and sends it to Peggy. She returns y = r = 10 to Victor.

He verifies that 10^2 mod 35 = 30 = x. In the second round, Peggy randomly selects
r = 20. She sends x = 20^2 mod 35 = 15 to Victor and he randomly selects B = 1, and
sends it back to Peggy.

She returns y = (20 · 16) mod 35 = 5 to Victor. He verifies that 5^2 = 25 = (15 · 11) mod 35.

After two rounds of successful accreditation, Victor is able to verify Peggy's claim
using the Feige-Fiat-Shamir Proof of Identity.
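The worked example can be verified mechanically. The sketch below reproduces both rounds and also shows why a prover who commits to x = r^2 without knowing s fails whenever the challenge is B = 1.

```python
def ffs_verify(n, v, x, B, y):
    """Victor's check: y^2 must equal x * v^B (mod n)."""
    return (y * y) % n == (x * pow(v, B, n)) % n

n = 35                          # n = 5 * 7
s = 16                          # Peggy's secret
v = (s * s) % n                 # public key: 16^2 mod 35 = 11

r = 10                          # round 1: challenge B = 0
x = (r * r) % n                 # commitment x = 30
print(ffs_verify(n, v, x, 0, r))             # True, with y = r = 10

r = 20                          # round 2: challenge B = 1
x = (r * r) % n                 # commitment x = 15
print(ffs_verify(n, v, x, 1, (r * s) % n))   # True, with y = 320 mod 35 = 5

r = 12                          # a cheater who committed x = r^2 can only
x = (r * r) % n                 # answer B = 0; she fails when B = 1
print(ffs_verify(n, v, x, 1, r))             # False
```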

11.6 Guillou-Quisquater Proof of Identity

The Guillou-Quisquater Proof of Identity, invented by Louis Guillou and
Jean-Jacques Quisquater, is an improvement on the Feige-Fiat-Shamir Proof of
Identity. The Feige-Fiat-Shamir Protocol is weak in that it requires multiple iterations
between Peggy and Victor and in that Peggy needs a large amount of memory.

Guillou-Quisquater optimizes this protocol by using longer computations. Take
for example the following situation. Peggy wants to authenticate her identity to
Victor. She holds a public certificate J and a private certificate S, such that
J · S^v ≡ 1 (mod n).

Similar to the Feige-Fiat-Shamir Protocol, n is the product of the two primes p
and q. The number v is the public key, chosen such that the greatest common divisor
of n and v is 1; thus, n and v are said to be co-prime. First, Peggy chooses a random
number r. She computes x = r^v (mod n). Peggy sends both x and J (the public
certificate) to Victor. He chooses a random number e in {1, ..., v} and sends it to
Peggy. She computes y = r · S^e (mod n) and sends it to Victor. He computes
J^e · y^v (mod n) and verifies that the result is equal to x and different from 0. By
using this longer procedure of computation, Victor reduces Peggy's cheating
probability. This is modeled by the following equation:

Pr = (1/v)^t,

where Pr is the probability of a cheating Peggy passing, v is the number of possible
challenges in each round, and t is the number of rounds of accreditation. Because v is
much larger than 2, the number of rounds required for Victor to achieve a certain
level of confidence is reduced exponentially. Thus, the Guillou-Quisquater Proof of
Identity serves as an improvement to the Feige-Fiat-Shamir Protocol, because it
reduces the number of rounds of accreditation by demanding more complex
computations, using both a public and a private key.
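One Guillou-Quisquater round can be sketched with tiny illustrative parameters chosen so that J · S^v ≡ 1 (mod n) holds; a real deployment would use an RSA-sized modulus.

```python
def gq_round(n, v, J, S, r, e):
    """One round with public values (n, v, J) and Peggy's secret S."""
    x = pow(r, v, n)                    # Peggy's commitment
    y = (r * pow(S, e, n)) % n          # response to challenge e in {1..v}
    check = (pow(J, e, n) * pow(y, v, n)) % n
    return x, y, check == x and x != 0

# Toy parameters: n = 5 * 7, v = 7, public certificate J = 4,
# secret S = 9, since 4 * 9^7 = 36 ≡ 1 (mod 35).
n, v, J, S = 35, 7, 4, 9
print(gq_round(n, v, J, S, r=2, e=3))   # (23, 23, True)
```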

11.7 Schnorr's Mutual Digital Signature Authentication

Schnorr's identification and authentication scheme can be used for digital signatures
by replacing Victor with a hash function. A digital signature is simply a mathematical
scheme to demonstrate the authenticity of a digital document. If Victor is replaced by
a cryptographically secure hash function, most zero knowledge protocols can be
turned into digital signatures. They are implemented as follows. Peggy creates a
number of problems and uses the hash function as a virtual Victor.

The inputs of the hash function are just the message and problems presented to Victor.
Using these inputs guarantees that neither the message nor the problems can be
altered without making the signature void. Also, the output of the hash function is
completely random and unpredictable.

Thus, Peggy cannot try to change the inputs to the hash in her favor to try and get
values which would allow her to cheat. The receiving end of the protocol can
calculate the hash function itself and check that Peggy returns the correct solutions in
order to determine that the digital signature is valid. Schnorr's Mutual Digital
Signature Authentication is the last of the five prominent zero knowledge protocols.
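One standard formulation of this construction can be sketched as follows; the group parameters are toy values and insecure, and only the shape of the commit-challenge-response transformation is the point.

```python
import hashlib

# Toy group (not secure): g = 2 has order q = 11 modulo p = 23.
p, q, g = 23, 11, 2
x = 7                           # Peggy's private key
y = pow(g, x, p)                # public key

def H(message, r):
    """The hash function plays the virtual Victor: it issues the
    challenge e, and neither party can steer its output."""
    digest = hashlib.sha256(f"{message}|{r}".encode()).hexdigest()
    return int(digest, 16) % q

def sign(message, k):
    r = pow(g, k, p)            # commitment, as in the interactive protocol
    e = H(message, r)           # challenge from the virtual Victor
    s = (k - x * e) % q         # response using the secret key
    return e, s

def verify(message, e, s):
    r = (pow(g, s, p) * pow(y, e, p)) % p   # reconstruct the commitment
    return H(message, r) == e

e, s = sign("the signed document", k=5)
print(verify("the signed document", e, s))   # True
print(verify("a tampered document", e, s))   # False
```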

11.8 Hamiltonian Cycles

A Hamiltonian cycle for a graph is a path through the graph that visits every
node exactly once before returning to its start. As the size of the graph increases, the
difficulty of calculating its Hamiltonian cycle increases as well, and the problem is
therefore classified as having NP complexity. The most popular example of a zero
knowledge Hamiltonian cycle protocol consists of Peggy, who tries to prove that she
knows the Hamiltonian cycle for a certain graph, and Victor, who is to determine
whether or not Peggy knows that secret, namely the graph's Hamiltonian cycle.

Peggy gives Victor a permuted version of the original graph. Victor, in return,
asks either for a proof that the graph is a permutation of the original graph, or for
Peggy to show the Hamiltonian cycle of the permuted graph. Either of these
problems can be answered easily from the original data, but being able to respond to
both possible requests requires Peggy to truly know the secret (the Hamiltonian cycle
of the graph).

This protocol can be better illustrated with a concrete example. Consider a


fictional city "Orbiville" which recently updated its subway system with its own
Hamiltonian path.[2] Suppose Peggy claims to know the Hamiltonian cycle of
Orbiville's subway system.

Peggy first permutes the subway graph to generate a permuted graph with a
correspondingly permuted Hamiltonian cycle. Depending on Victor's challenge, she
reveals either the permutation or the permuted cycle, so Victor can verify that the
cycle visits each station exactly once and exists in the permuted subway graph. After
multiple rounds of accreditation (with a new permuted graph each time), Victor will
be assured of the existence of a Hamiltonian cycle, without actually knowing the
cycle itself. This system is complete because an honest Peggy will be able to solve
Victor's problem every time. The system is sound because a cheating Peggy will only
be able to solve the problem half of the time (either the permutation or the path would
be incorrect). Because the Hamiltonian cycle system is both complete and sound, it is
zero knowledge.

11.9 Case Study: Smart Cards

Zero knowledge protocols are often discussed in a theoretical sense and not in a
practical sense. However, zero knowledge protocols do have a variety of practical
applications. They are used to ensure secure data transactions during identification
and authentication. The following case study illustrates how using zero knowledge
protocols increases the security of smart cards.

Smart cards, or Integrated Circuit Cards (ICCs), are small pocket-sized cards
with embedded integrated circuits which are used to process the input and output of
data. They are most commonly used as ATM, SIM, health, and national ID cards, but
have grown increasingly popular for their ability to store certificates during web
browsing.

Since their introduction into the technological market, Smart cards have
experienced major breaches in security. They are often encrypted using simple
cryptographic techniques, making them easily decrypted by pirates. Pirates have
developed efficient techniques to reverse-engineer smart card CPUs and their
memories. These techniques include: using nonstandard programming voltages to
clear code protection fuses, magnetic scanning of currents throughout the integrated
circuits, and acid washing the chip one layer at a time. Fortunately, smart cards are
not yet used widely enough to cause any major organized criminal activity. In the
future, smart card applications will need public key and zero knowledge protocols and
solutions to circumvent such malicious activity.

The first step to solving smart card security issues is to use a light zero
knowledge protocol. The protocol should mandate that each round is completed
within a very short time limit (so that a Mafia man-in-the-middle attack will fail) and
that a dictionary or pre-calculated table based brute force attack is not feasible.
Assume that the smart card only has 36 bytes of RAM available to work on the
protocol and that some of this space must be reserved for other use. Each key should
therefore be about 8 bytes in length. Suppose the intruder has 5 orders of magnitude
more processing power than the given system. Even if the brute-force calculation is
fast, the combination of a 64 bit key and a time limit would foil even the fastest
computers. Although intruders can try and anticipate Feige-Fiat-Shamir protocols
with pre-calculated prime number tables before launching their attack, simply
changing the system's public key values periodically would make an intrusion
infeasible and would effectively negate any attack. Thus, zero knowledge protocols
prove to have practical applications in solving cryptographic problems in current and
future technological systems.

11.10 Conclusion

Zero knowledge proofs are both fascinating and useful concepts. They are
convincing yet return nothing beyond the validity of a claim. Zero knowledge
protocols ensure that after reading a proof, the verifier cannot perform any
computational task that he could not perform before. Thus, the integrity and privacy
of information are maintained. Because zero knowledge proofs force malicious parties
to act according to a predetermined protocol, they have vast applications in the
domain of cryptography.

The need to effectively understand and implement zero knowledge protocols is
increasing as internet capabilities continue to expand at a rapid rate. Because
zero knowledge protocols are relatively easy to implement but difficult to foil,
they make excellent constructs for solving cryptographic security issues.
Although the various zero knowledge protocols themselves will remain the same,
they will find unforeseen applications in the years to come.

12.Role of Zero-Knowledge Protocols in Plagiarism Checking

Protocols used in plagiarism checking should employ zero-knowledge
algorithms. Consider what can happen in the current scenario:

12.1 Interactive Proof System

An interactive proof for a decision problem is the following verification protocol
(a toy instance is sketched in code after the list):

1. There are two participants, a prover and a verifier.

2. The proof consists of a specified number of rounds.

3. At the beginning of the proof, both participants get the same input.

4. In each round, the verifier challenges the prover, and the prover responds to the
challenge.

5. Both the verifier and the prover can perform some private computation (each is
modeled as a randomized Turing machine).

6. At the end, the verifier states whether or not he was convinced.
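
To make these six steps concrete, here is a minimal runnable sketch of a classic
instance: the zero-knowledge interactive proof that two graphs are isomorphic.
The graphs, the secret permutation sigma, and the round count are illustrative
assumptions, not part of any plagiarism-checking system.

import random

def relabel(perm, graph):
    """Apply vertex permutation perm (a dict) to a graph given as frozenset edges."""
    return {frozenset((perm[u], perm[v])) for u, v in graph}

n = 4
G0 = {frozenset(e) for e in [(0, 1), (1, 2), (2, 3), (3, 0)]}  # a 4-cycle
sigma = {0: 2, 1: 3, 2: 0, 3: 1}      # the prover's secret isomorphism
G1 = relabel(sigma, G0)               # step 3: both parties see G0 and G1

def one_round() -> bool:
    pi = dict(zip(range(n), random.sample(range(n), n)))
    H = relabel(pi, G0)               # prover commits to a random relabeling of G0
    b = random.randrange(2)           # step 4: the verifier's challenge bit
    if b == 0:
        tau = pi                      # reveal pi, so tau(G0) = H
    else:
        inv = {v: k for k, v in sigma.items()}
        tau = {u: pi[inv[u]] for u in range(n)}   # tau = pi o sigma^(-1), so tau(G1) = H
    # Step 6: the verifier checks the response; a cheater fails half the rounds.
    return relabel(tau, G0 if b == 0 else G1) == H

print(all(one_round() for _ in range(20)))   # convinced after 20 rounds

Whichever challenge bit arrives, the prover reveals a uniformly random
permutation, so the verifier learns nothing about sigma itself.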

In such a situation, if the verifier is corrupted or compromised, the prover will be
badly affected too: there is a strong chance of the prover being attacked, or of the
submitted information being stolen.

[Figure: How zero-knowledge systems work]

12.2 Chances of Vulnerability

Even when a zero-knowledge protocol is in use, there is a real chance that our
information will be compromised. One such scenario is shown below.

[Figure: example vulnerability scenario]

13.Improving Plagiarism Detection Protocols

To improve plagiarism detection, the following steps can be taken:

Perfect zero-knowledge protocols should be introduced.

Every plagiarism checker should be required to implement perfect
zero-knowledge protocols while checking for plagiarism.

For that, the prover itself should be capable of verifying the uniqueness of its
document.

The network itself should be well protected through cryptography.

A protocol which does not reveal any information is as follows:

[Figure: the protocol - illustration missing in source]

Security is ensured because:

[Figure: security argument - illustration missing in source]
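
In the absence of the original diagram, the following Python sketch shows one
hedged approximation of the idea: the author and the checker compare only
salted fingerprints of overlapping word shingles, so the checker can estimate
textual overlap without ever receiving the document itself. The function names,
shingle size, and salt handling are assumptions made for illustration; a true
perfect zero-knowledge protocol would reveal even less than these fingerprints
do.

import hashlib
import secrets

def fingerprints(text: str, salt: bytes, n: int = 5) -> set[bytes]:
    """Salted hashes of all n-word shingles in the text."""
    words = text.lower().split()
    return {
        hashlib.sha256(salt + " ".join(words[i:i + n]).encode()).digest()
        for i in range(max(len(words) - n + 1, 1))
    }

salt = secrets.token_bytes(16)           # fresh shared salt per comparison session
submitted = fingerprints("the quick brown fox jumps over the lazy dog", salt)
corpus_doc = fingerprints("a quick brown fox jumps over the lazy cat", salt)

# The checker sees only hashes, never the text, yet can score similarity.
overlap = len(submitted & corpus_doc) / max(len(submitted), 1)
print(f"estimated overlap: {overlap:.0%}")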
14.Benefits of increasing Information Security during Plagiarism
checks

There are many advantages to such security enhancement:

It keeps the ownership rights of the creator safe.

It promotes research.

By not plagiarizing, people will:

feel more confident when tackling exams

feel confident in seminar discussions

know how to express an academic opinion, backed up by strong information
sources

be better able to answer questions at their project viva or presentation

be competent in handling literature searches for major coursework projects

develop their subject knowledge

gain academic credibility (and thus gain credibility with future employers)

have pride in their work: it is all theirs

apply for jobs with confidence, knowing that they will not be exposed as
incompetent in basic information-handling skills

not be flagged by Turnitin, and so will avoid losing marks or being disciplined

15.Conclusion

The strictness applied to plagiarism checkers should match the strictness applied
to the people who copy. By not regulating the tools used in plagiarism checking,
we are all giving them a chance to steal research work easily. These tools, on the
other hand, can sell or distribute someone's work even before he or she claims
ownership.

This thought motivates the study of zero-knowledge proofs, and reveals the
importance of the existence of perfect zero-knowledge proofs. Such algorithms
would be able to state whether the checked content is original or not, without
yielding, or even learning, any part of the information.

Information security will reach its highest level when we are able to integrate
plagiarism-checking practices with perfect zero-knowledge procedures.

References:

West's Encyclopedia of American Law, edition 2. S.v. "Plagiarism." Retrieved
August 2, 2017 from http://legal-dictionary.thefreedictionary.com/Plagirism

Plagiarism Checker X, blog post "Why plagiarism worth your concern."
Retrieved August 1, 2017 from https://plagiarismcheckerx.com/blog/why-
plagiarism-worth-your-concern/

Arwin, C. and S. M. M. Tahaghoghi (2006). Plagiarism Detection Across
Programming Languages. ACM International Conference Proceeding Series,
vol. 171. Proceedings of the 29th Australasian Computer Science Conference,
Hobart, Australia, vol. 48, pp. 277-286.

Association for Computing Machinery (2006). ACM Policy and Procedures
on Plagiarism. Available at
http://www.acm.org/pubs/plagiarism%20policy.html. Accessed November 6,
2007.

Boisvert, R. F. and M. J. Irwin (2006). ACM Policy on Plagiarism:
Plagiarism On the Rise. Communications of the ACM, vol. 49, no. 6, pp. 23-24.

Bowers, W. J. (1964). Student Dishonesty and Its Control in College. New
York: Bureau of Applied Social Research, Columbia University.

Brimble, M. and P. Stevenson-Clarke (2005). Perceptions of the Prevalence
and Seriousness of Academic Dishonesty in Australian Universities.
Australian Educational Researcher, vol. 32, no. 3, pp. 19-44, Dec 2005. (EJ743503)

CANexus (2007). Eve 2: Essay Verification Engine.
http://www.canexus.com/eve/index.shtml. Accessed November 5, 2016.

Day, C. and J. Horgan (2005). Patterns of Plagiarism. Proceedings of the
36th SIGCSE Technical Symposium on Computer Science Education, pp. 383-387.

eCheat.com (2007). Capital Punishment Persuasive Essay.
http://www.echeat.com/essay.php?t=33395. Accessed November 7, 2016.

Florida State University Center for Teaching and Learning (2007). FSU's
Information for Turnitin.com. Available at
http://learningforlife.fsu.edu/ctl/explore/bestPractices/docs/turnitin.pdf.
Accessed November 7, 2016.

Gibaldi, J. (1998). MLA Style Manual and Guide to Scholarly Publishing. 2nd
ed. New York: MLA.

Glatt Plagiarism Services (2007). http://plagiarism.com/. Accessed
November 5, 2007.

iParadigms (2007). http://www.plagiarism.org/technology.html. Accessed
November 5, 2016.

McCabe, D. and L. Trevino (1993). Academic Dishonesty: Honor Codes and
Other Contextual Influences. Journal of Higher Education, vol. 64, no. 5,
pp. 522-538, Sep-Oct 1993.

Niezgoda, S. and T. Way (2006). SNITCH: A Software Tool for Detecting
Cut and Paste Plagiarism. Proceedings of the 37th Special Interest Group on
Computer Science Education, pp. 51-55. New York: Association for
Computing Machinery.

Pennsylvania State University Senate Committee on Computing and
Information Systems (2006). Turnitin: A Tool to Assess Student Plagiarism.
Available at http://www.psu.edu/dept/cew/TurnitinFinalReportFS.doc.
Accessed November 6, 2016.

Prechelt, L., Malpohl, G., and Philippsen, M. (2002). Finding Plagiarisms
Among a Set of Programs with JPlag. Journal of Universal Computer Science,
vol. 8, issue 11, pp. 1016-1038.

Robelen, E. (2007). Online Anti-Plagiarism Service Sets Off Court Fight.
Education Week, May 9, 2007, vol. 26, issue 36.

Sheard, J., M. Dick, S. Markham, I. Macdonald, and M. Walsh (2002).
Cheating and Plagiarism: Perceptions and Practices of First Year IT Students.
ACM SIGCSE Bulletin, Proceedings of the 7th Annual Conference on
Innovation and Technology in Computer Science Education (ITiCSE '02),
vol. 34, issue 3, pp. 183-187.

UCLA Office of Instructional Development (2007). Exploring the
Controversy Surrounding the Use of TurnItIn.
http://www.oid.ucla.edu/training/trainingarticles/turnitin/turnitin-2. Accessed
November 6, 2016.

Asim M., Hussam M., and Vaclav S. (2011). "Overview and Comparison of
Plagiarism Detection Tools," Dateso, pp. 161-172.

Symons, R. (2003). "Teaching and Learning Committee Plagiarism Detection
Software Report," University of Sydney.

Chao, L. (2006). "GPLAG: Detection of Software Plagiarism by Program
Dependence Graph Analysis," in The 12th ACM SIGKDD International
Conference on Knowledge Discovery and Data Mining, Philadelphia, PA,
USA, pp. 872-881.

Neill, C. J. and G. Shanmuganthan (2004). "A Web-enabled Plagiarism
Detection Tool," IEEE IT Professional, vol. 6, no. 5, pp. 19-23.

Jasper, P. (2001). "Turnitin.com," Library Journal, 126(8), p. 138, May 2001.

[Online]. Available: http://www.plagiserve.com.

[Online]. Available: http://www.copycatch.freeserve.co.uk.

University of Birmingham (2002). "Report on Plagiarism Workshops: Show
and Tell - Tackling the Causes of Collusion and Plagiarism," University of
Birmingham, Birmingham, UK.

Devlin, M. (2002). "Plagiarism Detection Software: How Effective Is It?,"
Assessing Learning in Australian Universities.

[Online]. Available: http://wwwipd.ira.uka.de/jplag/.

[Online]. Available: http://theory.stanford.edu/~aiken/moss/.

Burrows, S., S. M. M. Tahaghoghi, and J. Zobel (2007). "Efficient Plagiarism
Detection for Large Code Repositories," Software: Practice and Experience,
37(2), pp. 151-175.

Gitchell, D. and N. Tran (1999). "Sim: A Utility for Detecting Similarity in
Computer Programs," in Proceedings of the Technical Symposium on
Computer Science Education, pp. 266-270.

[Online]. Available: http://en.wikipedia.org/wiki/Google.

[Online]. Available: https://unicheck.com/wiki/academic-integrity/

Aronsson, Hannu. "Network Security: Zero Knowledge and Small Systems."
TKK - TML, Helsinki University of Technology. Web. 19 Feb. 2010.
<http://www.tml.tkk.fi/Opinnot/Tik-110.501/1995/zeroknowledge.html>.

Chazelle, Bernard. "The Security of Knowing Nothing." Nature 446 (2007). 26
Apr. 2007. Web.

Giani, Annarita. "Identification with Zero Knowledge Protocols." Thesis.
University of California, Berkeley, 2001. SANS Institute, 2001. Web. 20 Feb.
2010.

Goldreich, Oded. Foundations of Cryptography: Basic Tools. Cambridge, U.K.:
Cambridge UP, 2001. Print.

Goldreich, Oded. Zero-Knowledge Twenty Years After Its Invention. Thesis.
Weizmann Institute of Science, 2004. Print.

Hoffstein, Jeffrey. An Introduction to Mathematical Cryptography. New York:
Springer, 2008. Print.

Mohr, Austin. "A Survey of Zero Knowledge Proofs with Applications to
Cryptography." Thesis. Southern Illinois University at Carbondale. Web.
<http://www.austinmohr.com/work/files/zkp.pdf>.

Pass, Rafael. "Lecture 18: Zero-Knowledge Proofs." Lecture. Cornell
University, Ithaca. 26 Mar. 2009. Web.

Rosen, Alon. Concurrent Zero-Knowledge: With Additional Background by
Oded Goldreich (Information Security and Cryptography). New York:
Springer, 2006. Print.

Weis, Steve. "Lecture 3: Zero-Knowledge Proofs Continued." Lecture.
Massachusetts Institute of Technology, Cambridge. 12 Feb. 2003. Web.

_____________
