Submitted by
SHIVANSHI SRIVASTAVA
CANDIDATE DECLARATION
CERTIFICATE
This is to certify that Shivanshi Srivastava has carried out the work embodied
in this thesis entitled Zero-Knowledge Protocols, Plagiarism Checking and
Information Security Analysis and Possibilities under my guidance and
supervision during the academic session 2017-18, in fulfillment of the requirements for
M.Tech. Final Year C.S.E. from Bon Maharaj Engineering College. The work in this
thesis is original; I am completely satisfied with it and wish her all
success in future life.
ACKNOWLEDGEMENT
My heartfelt thanks to my family and friends for their unparalleled support and
encouragement throughout the journey of this master's program!
ABSTRACT
Plagiarism and plagiarism checkers are both threats to academic integrity.
However, checking for plagiarism is important, as it detects whether a work is
original. We can therefore only try to enhance the process of plagiarism checking,
rather than avoid it.

The motive of this research work is to make plagiarism checking a fair
practice for every scholar and owner of original work. It will help in recognizing
original works and ensure confidentiality and information security for everyone.
CONTENTS
Acknowledgements ........................................................................................................... iv
Abstract ............................................................................................................................... 5
Contents .............................................................................................................................. 6
1. Foreword ..................................................................................................................... 8
11.10 Conclusion ................................................................................................. 35
1. Foreword

Plagiarism
Importance of Plagiarism Detection
How Plagiarism Affects the Original Creators and Promotes Cheating Culture
Tools for Plagiarism Detection
Protocols for Plagiarism Detection
Zero-Knowledge Protocols Investigation
Role of Zero-Knowledge Protocols in Plagiarism Checking
Improving Plagiarism Detection Protocols
Benefits of Increasing Information Security during Plagiarism Checks
Conclusion
2. Plagiarism - Introduction

Plagiarism is not a legal term and is used especially in the online world.
However, it is sometimes also used in courts or lawsuits in place of the term
copyright infringement, when someone tries to present someone else's intellectual
property as his or her own creation. The phenomenon can occur in any creative
design, writing or research field, because people's work is readily available in
the public domain.

The practice of copy-pasting has affected every section of the academic and
non-academic industries. The internet has made copying easy and is hence one of
the major promoters of academic plagiarism: instead of working and researching
themselves, students can easily lift material from any public directory. However,
the responsibility ultimately lies with the researchers themselves and how they
make use of the internet.
1. West's Encyclopedia of American Law, edition 2. S.v. "Plagiarism." Retrieved August 2, 2017 from http://legal-dictionary.thefreedictionary.com/Plagirism
To a large extent, it is a way of knowingly compromising information security,
and hence it is termed academic dishonesty and an electronic crime.
Content formulated through the hard work of honest people, who spend most of
their time and energy creating masterpieces, is ruined through just a few
Google searches. This indirectly violates their basic rights and discourages
the creation of good and genuine content.3
3. Reasons of Dishonesty

Cheating is a global phenomenon; only the method changes according to the region
and context. Many people, even high performers, indulge in copying.
The reason they give for opting for this method is the sheer quantity of work.
3. Plagiarism Checker X, blog post "Why plagiarism worth your concern". Retrieved August 1, 2017 from https://plagiarismcheckerx.com/blog/why-plagiarism-worth-your-concern/
4. How Plagiarism Affects the Original Creators and Promotes
Cheating Culture

There are many reasons why plagiarism should be detected immediately and the
practitioner punished. Some of these are:
5. Methods of Plagiarism Detection
Proctoring
Virtual Student Monitors
Guilt
Appealing to students' sense of fairness and reminding the class of the
negative effects of their classmates' cheating actually can make a difference,
especially if the professor employs old-fashioned, tried-and-true methods of grading,
like grading on a curve so that the student with the highest score on a test or paper
sets the standard for all other students. Students are less likely to cheat, or at
least likely to cheat less, if they know they could face consequences from their
classmates in addition to their professor.
6. Approaches of Plagiarism Detection

Manual detection

Done manually by a human, this approach is suitable for lecturers and teachers
checking students' assignments, but it is ineffective and impractical for a large
number of documents; it is also uneconomical, requiring great effort and wasting time.

There are many software tools used in automatic plagiarism detection, such as
Turnitin, Edutie, PlagiServe, the Glatt Plagiarism Self-Detection program (GPSD)
and many more. This thesis compares these software tools for detecting plagiarism.
7. Methods for Plagiarism Detection

Plagiarism detection software can be classified into four main categories, namely:4

Internet search engines such as Google, AltaVista and Yahoo can be used as an
alternative method to detect suspected plagiarism without the need to download
software or register for a detection service. Examples of such systems include
Google, AltaVista, LookSmart, Amazon and many more.
Subscription databases
4. R. Symons, "Teaching and Learning Committee Plagiarism Detection Software Report," University of Sydney, 2003.
Finally, subscription databases of scholarly and popular literature, which include
abstracts or full texts of articles, may be searched, particularly if assignments
relate to the location of such material. These are specialized, subscription-only
databases that are not available through Google or other common search engines.
They include online encyclopaedias, periodical indexes, business directories and
other resources, broken down by subject to help you find exactly what you need.
1. TURNITIN
2. PLAGISERVE
engines and other plagiarism detection services, its limitations lie in not being able to
identify the source of the suspect text and the requirement for students to sit a test.
4. COPYCATCH GOLD
6. GPLAG
GPlag was developed by Chao Liu, Chen Chen and Jiawei Han at the University of
Illinois at Urbana-Champaign in 2006. GPlag detects plagiarism by mining program
dependence graphs (PDGs). A PDG is a graphic representation of the data and
control dependencies within a procedure. The PDGs developed from the original
program and the modified program are then checked for copying via graph
isomorphism. In order to make GPlag scalable to large programs, a statistical
lossy filter is proposed to prune the plagiarism search space.
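GPlag's actual mining pipeline is not reproduced here, but the core question it asks, whether two dependence graphs remain isomorphic after identifiers are renamed, can be sketched in a few lines. The function name and toy graphs below are illustrative inventions, not GPlag internals; a brute-force check like this only works for very small graphs:

```python
from itertools import permutations

def are_isomorphic(edges_a, nodes_a, edges_b, nodes_b):
    """Brute-force graph isomorphism test for small directed graphs.

    PDG-based detection asks whether the dependence graph of a suspect
    procedure is (near-)isomorphic to that of an original. Real tools
    use far more scalable algorithms; this sketch only illustrates the
    underlying question.
    """
    if len(nodes_a) != len(nodes_b) or len(edges_a) != len(edges_b):
        return False
    nodes_a, nodes_b = list(nodes_a), list(nodes_b)
    for perm in permutations(nodes_b):
        mapping = dict(zip(nodes_a, perm))
        # Does relabeling graph A with this mapping reproduce graph B?
        if {(mapping[u], mapping[v]) for (u, v) in edges_a} == set(edges_b):
            return True
    return False

# A 3-node dependence graph and a renamed copy of it (a "plagiarized" PDG).
original = {("read", "compute"), ("compute", "write")}
renamed = {("load", "calc"), ("calc", "store")}
match = are_isomorphic(original, {"read", "compute", "write"},
                       renamed, {"load", "calc", "store"})
```

Renaming every variable leaves the dependence structure intact, which is exactly why PDG comparison survives disguises that defeat plain text matching.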
7. JPLAG
JPlag is a system that finds similarities among multiple sets of source code files.
For comparing two token strings, JPlag uses the "Greedy String Tiling" algorithm as
proposed by Michael Wise, but with different optimizations for better efficiency.
JPlag currently supports Java, C#, C, C++, Scheme and natural-language text. JPlag
has a powerful graphical interface for presenting its results. It takes as input a
set of programs, compares these programs pairwise (computing for each pair a total
similarity value and a set of similarity regions), and provides as output a set of
HTML pages that allow exploring and understanding the similarities found in detail.
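As a rough illustration of the tiling idea (not JPlag's optimized implementation, which adds Karp-Rabin hashing), a minimal Greedy String Tiling over token lists might look like the sketch below; the token streams and the `min_match` threshold are invented for the example:

```python
def greedy_string_tiling(a, b, min_match=3):
    """Simplified Greedy String Tiling over two token sequences.

    Repeatedly finds the longest common unmarked substring, marks it
    as a tile, and stops when no match of at least `min_match` tokens
    remains. Returns tiles as (index_in_a, index_in_b, length).
    """
    marked_a, marked_b = [False] * len(a), [False] * len(b)
    tiles = []
    while True:
        best = []                    # maximal matches found this pass
        max_len = min_match - 1
        for i in range(len(a)):
            for j in range(len(b)):
                k = 0
                while (i + k < len(a) and j + k < len(b)
                       and a[i + k] == b[j + k]
                       and not marked_a[i + k] and not marked_b[j + k]):
                    k += 1
                if k > max_len:
                    max_len, best = k, [(i, j, k)]
                elif k == max_len and k >= min_match:
                    best.append((i, j, k))
        if max_len < min_match:
            break
        for i, j, k in best:
            if any(marked_a[i:i + k]) or any(marked_b[j:j + k]):
                continue             # overlaps a tile marked earlier this pass
            for t in range(k):
                marked_a[i + t] = marked_b[j + t] = True
            tiles.append((i, j, k))
    return tiles

tokens_a = "BEGIN ASSIGN CALL ASSIGN RETURN END".split()
tokens_b = "BEGIN ASSIGN CALL ASSIGN PRINT END".split()
tiles = greedy_string_tiling(tokens_a, tokens_b)
coverage = sum(k for _, _, k in tiles) / len(tokens_a)
```

Working on tokens rather than raw characters is what makes the comparison robust against renamed identifiers and reformatting.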
8. MOSS
Moss is an automatic system for determining the similarity of programs. Every
k-gram of a document is hashed, and a subset of all the k-gram hashes is selected
as the document's fingerprint. Moss can currently analyse code written in the
following languages: C, C++, Java, C#, Python, Visual Basic, JavaScript, FORTRAN,
ML, Haskell, Lisp, Scheme, Pascal, Modula2, Ada, Perl, TCL, Matlab, VHDL, Verilog,
Spice, MIPS assembly, a8086 assembly and HCL2. Moss is also provided as an
Internet service.
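The k-gram fingerprinting step can be sketched with the winnowing scheme Moss is based on: hash every k-gram, then keep the minimum hash in each sliding window of hashes. The parameter values and helper below are illustrative choices, not Moss's actual internals:

```python
import hashlib

def winnow(text, k=5, window=4):
    """Document fingerprinting by winnowing.

    Hash every k-gram of the normalized text, slide a window over the
    hash sequence, and keep the minimum hash in each window. The
    selected hashes form the document's fingerprint; hashes shared by
    two fingerprints indicate shared text.
    """
    text = "".join(text.lower().split())   # normalize whitespace and case
    hashes = [
        int(hashlib.sha1(text[i:i + k].encode()).hexdigest(), 16) % (1 << 32)
        for i in range(len(text) - k + 1)
    ]
    fingerprint = set()
    for i in range(len(hashes) - window + 1):
        fingerprint.add(min(hashes[i:i + window]))
    return fingerprint

fp1 = winnow("the quick brown fox jumps over the lazy dog")
fp2 = winnow("a quick brown fox jumps over a sleepy cat")
overlap = len(fp1 & fp2) / len(fp1 | fp2)   # Jaccard similarity of fingerprints
```

A useful property of winnowing is its guarantee: any shared passage of at least window + k - 1 characters contributes at least one common fingerprint hash, while storing far fewer hashes than one per k-gram.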
9. SIM
to be compared to. SIM detects similarities between programs by evaluating their
correctness, style, and uniqueness.
10. GOOGLE
Faculty read student submissions and, in the course of that reading, identify
suspect documents. Under the supervision of the faculty member, software removes
words from the suspected document, replaces them with blanks, and prompts the
student to fill in the blanks. The program determines familiarity from the speed
and accuracy of the responses. This approach inverts the corpus-based approach:
it is selective rather than comprehensive, it gives priority to human judgment,
and it situates plagiarism in human conduct rather than in impersonal pattern finding.
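The fill-in-the-blanks step just described can be sketched as follows. The blanking interval and the exact-match scoring rule are hypothetical choices for illustration, not taken from any real tool:

```python
def make_cloze(text, every=5):
    """Blank out every Nth word of a suspect passage.

    Returns the cloze text and the list of removed words. A true
    author should be able to restore most blanks quickly; the
    threshold a grader applies to the score is a policy decision.
    """
    words = text.split()
    answers = []
    for idx in range(every - 1, len(words), every):
        answers.append(words[idx])
        words[idx] = "_____"
    return " ".join(words), answers

def score(responses, answers):
    """Fraction of blanks restored exactly (case-insensitive)."""
    hits = sum(r.strip().lower() == a.strip().lower()
               for r, a in zip(responses, answers))
    return hits / len(answers) if answers else 1.0

cloze, answers = make_cloze(
    "the author removes selected words and asks the student to restore them")
```

A real system would also time the responses, since speed (not just accuracy) is part of the familiarity signal the text describes.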
At the highest level of abstraction, corpus-based plagiarism detection software
takes as input a suspect document and an archived corpus of authenticated documents,
compares the suspect document to the corpus, and outputs passages that the suspect
document shares with the corpus and a measure of the likelihood that the author
plagiarized material.
It must be noted that documents present neither in ProQuest nor on the Web are
beyond the reach of TurnItIn.com. The Essay Verification Engine, which employs the
World Wide Web as its corpus, uses techniques to exhaustively target the sites that
are the most likely sources of plagiarized material and compares their content with
the suspect work (CANexus, 2007).
Niezgoda and Way (2006) provide a partial corrective for these deficits in their
description of SNITCH, Spotting and Neutralizing Internet Cheaters. As the name
suggests, SNITCH uses Web pages on the Internet as its corpus, and Niezgoda and
Way, in providing some insight into the program, implicitly provide insight into other
programs in its category.
The authors provide a detailed exposition of the corpus-based algorithm that they
use to detect plagiarism in scientific and technological writing, an arena of
discourse that poses particular problems for plagiarism detection, and they provide
data regarding the effectiveness of their algorithm as measured against an oracle
and against a competitor, the Essay Verification Engine (Eve 2) (E-Leader Bangkok, 2008).
SNITCH employs a sliding window technique that determines the average length
of words within a succession of windows (Niezgoda and Way, 2006). SNITCH reads
a specified number of words, determines the average number of letters per word, and
associates that average with the current window. The procedure is repeated, moving
the windows forward one word with each iteration until the end of the document is
reached.
Windows are ranked from highest weight to lowest, and the top-ranked windows
are then submitted to search engines to detect matches. SNITCH's success rate for
detecting plagiarism in actual student submissions ranges from 40% for papers with
minimal plagiarism to 63% for papers with a high level of plagiarism, and it produced
no false positives (Niezgoda and Way, 2006). SNITCH outperformed Eve 2, a
commercial program, in detecting plagiarism in submissions with low actual
plagiarism, 40% compared to 12%, and its performance against submissions with a
high level of plagiarism, 63%, was almost indistinguishable from that of Eve 2 at
63%. In terms of revealing what the program does and how well it does it, Niezgoda
and Way provide a model approach, one that contrasts with the secrecy of commercial
software.
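The sliding-window weighting described above can be sketched in a few lines. The window size, sample text and ranking cutoff below are illustrative, not SNITCH's actual parameters:

```python
def rank_windows(text, window_size=8, top_n=3):
    """SNITCH-style sliding window over average word length.

    Reads `window_size` words at a time, weights each window by the
    average number of letters per word, advances one word per
    iteration, and returns the highest-weighted windows: these are
    the phrases a detector would submit to a search engine.
    """
    words = text.split()
    windows = []
    for i in range(len(words) - window_size + 1):
        chunk = words[i:i + window_size]
        avg_len = sum(len(w) for w in chunk) / window_size
        windows.append((avg_len, " ".join(chunk)))
    windows.sort(key=lambda w: w[0], reverse=True)
    return windows[:top_n]

sample = ("we went to the shop and then a sophisticated heterogeneous "
          "computational plagiarism detection methodology was presented here")
top = rank_windows(sample)
```

The intuition is that runs of unusually long words often mark pasted-in technical prose, so they are the most profitable phrases to feed to a search engine.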
10.Working of Plagiarism Detection Tools

Curious about how the current plagiarism detection tools work, I decided to start
conversations with the support teams of the websites of some famous tools.

From the conversations, it was clear that they are not storing the information
present in the documents checked through them.
11.Zero-Knowledge Protocols Investigation
The next five sections deal with the major zero knowledge protocols. Both
abstract and concrete examples are explored in order to further one's understanding
of these protocols. The following section deals with proving the existence of
Hamiltonian cycles while maintaining zero knowledge. The final section explores a
case study of smart cards and how the implementation of zero knowledge protocols
can increase their security. The chapter concludes by highlighting the significance
of the applications of zero knowledge as the demand for information security
continues to increase into the future.
11.1 Introduction
knowledge allows a protocol to be split into an iterative process of many "light"
transactions rather than one "heavy" transaction.[1] For many systems, applying zero
knowledge protocols is practical and efficient, and therefore the economical
protocol of choice.

The interactions between Peggy and Victor involve three elements: the
secret, the accreditation, and the problem. The secret is a piece of information that
Peggy knows. It can be any piece of useful information (e.g. a password or an
algorithm). The accreditation is the system of building confidence with each iteration
of the protocol.[1] With each iteration, a successful proof by Peggy increases Victor's
confidence by a power of 2. However, if Peggy is unsuccessful at her proof, Victor's
confidence is reduced to 0 and the entire protocol fails. The problem is Victor's
method of accreditation. His problem asks for one of the multiple solutions to Peggy's
claim. If Peggy does possess the secret, she will always be able to correctly answer
Victor's problem. However, if Peggy does not possess the secret, she will only be able
to correctly answer Victor's problem a fraction of the time (depending on the exact
protocol). Thus, Victor is only able to verify that Peggy holds a secret if she can
always solve his problem for enough rounds of his accreditation.
Zero knowledge protocols' practicality extends only to problems that are of the
complexity class NP. NP (nondeterministic polynomial time) complexity means that
"there exists a (polynomial in the length of the input) bound on the number of steps
in each possible run of the machine".[5] If one-way functions exist, any problem in
NP has a zero knowledge proof.[1] Thus, Peggy must be limited to polynomial time:
if Peggy were any more powerful, it would be trivial to demonstrate her knowledge, as
she could already calculate it in every case.[1]
is negligible. Computational zero knowledge, also referred to as general zero
knowledge, states that Peggy and Victor's lists are computationally
indistinguishable.[5] Although computational zero knowledge is the most common
form of zero knowledge, examples of perfect zero knowledge and statistical zero
knowledge have been found to exist.
Five major zero knowledge protocols are used, each ensuring information
privacy throughout the verification process in its own way. The first (and most
concrete) of such protocols is the Proof of Knowledge. It states that "if Peggy has a
non-negligible chance of making Victor accept, then Peggy can also compute the
secret, from which knowledge is being proved".[1] The definition of this protocol
raises the need to distinguish between two terms that are often misused
interchangeably. Knowledge is very different from information. Knowledge is related
to computational difficulty of publicly known objects, whereas information relates
mainly to objects on which only partial information is known.[4]
Many examples exist for illustrating the Proof of Knowledge. Imagine the
following situation. Peggy claims that she can count the leaves of a big maple tree in a
few seconds without revealing to Victor her method of calculation and without
revealing the number of leaves. To test Peggy's claim, Victor designs a protocol.
When Peggy closes her eyes, Victor either pulls off a leaf or does nothing. Then
Peggy opens her eyes and tells Victor what he did.
of accreditation, he is convinced that Peggy knows the secret. Whether he puts Peggy
through 100 rounds or 100,000 rounds, the chance of him making an error is nearly 0.
This is modeled by the equation Pr(error) = (1/2)^n, where n is the number of rounds
of accreditation.
One of the most well-known examples of the Proof of Knowledge is Jean-
Jacques Quisquater's "magical cave" allegory. The cave has a magical door deep
inside which opens only upon the utterance of a secret word, which Peggy claims to
know. Victor will pay $10 million for the secret word, but he must be
convinced that Peggy knows it. Peggy cannot simply reveal the word to
him, as a dishonest Victor could rescind his monetary offer and leave with both the
secret and all of his money. Victor and Peggy decide to construct a zero knowledge
system so that Peggy can prove to Victor that she knows the secret without actually
telling it to him.[7] They devise the following scheme [7]:
Victor will wait outside the cave while Peggy enters. She chooses either path A
or path B at random while Victor is not looking. Then Victor enters the cave as far
as the fork and announces the path by which he wants Peggy to return. If Peggy
knows the secret word, she will be able to return by either path A or path B,
regardless of which path she chose initially. If Victor announces the same path
through which Peggy initially chose to enter, she simply turns around and exits via
the same path. If Victor announces the path that Peggy did not choose, she whispers
the secret word and returns along the desired path.[7] Thus, the system is complete.
If Peggy is lying and does not know the secret word, then she will only be able to
return along the correct path if Victor announces the same path that she chose. The
probability of this happening is 1/2, and with multiple rounds of accreditation,
Victor should grow increasingly confident about whether or not Peggy truly knows
the secret. Thus, the system is sound. Because the system is both complete and
sound, it is zero knowledge. Peggy can successfully prove to Victor that she knows
the secret word without actually telling him what it is. Therefore, this system
exemplifies the Proof of Knowledge protocol.
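The cave allegory can be simulated directly, which also makes the soundness argument concrete: a cheating Peggy survives each round only when Victor happens to name the path she already took, i.e. with probability 1/2. The simulation below is a toy sketch; the seeds and round counts are arbitrary:

```python
import random

def run_cave_protocol(peggy_knows_word, rounds, rng):
    """Simulate Quisquater's magical-cave Proof of Knowledge.

    Each round, Peggy enters by a random path and Victor demands a
    random exit path. An honest Peggy (who knows the word) can always
    exit by the demanded path; a cheating Peggy is caught whenever
    Victor names the path she did not take.
    """
    for _ in range(rounds):
        peggy_path = rng.choice("AB")
        victor_demand = rng.choice("AB")
        if not peggy_knows_word and victor_demand != peggy_path:
            return False   # caught: she cannot pass the magic door
    return True

rng = random.Random(42)
honest = run_cave_protocol(True, 20, rng)   # an honest prover always passes
cheater_caught = sum(
    not run_cave_protocol(False, 20, random.Random(seed))
    for seed in range(200)
)
```

After 20 rounds a cheater survives with probability 2^-20, which is why essentially all 200 simulated cheaters above are caught.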
11.4 Proof of Identity
The next zero knowledge protocol, the Proof of Identity, ensures that nobody
can masquerade as Peggy or Victor to any third party.[1] This protocol is used to
solve the cryptographic problem of "Mafia" man-in-the-middle attacks, which is often
modeled by the Chess Grandmaster Problem. The malicious user (Maggie the
Malice) wants to prove that she is better than the champion chess player in her
city, Bob. She sets up two separate, yet concurrent, online matches with both the local
champion and Grandmaster Garry Kasparov. She lets Kasparov move first and relays
his move to the other game with Bob. She then relays Bob's move back to
Kasparov. Maggie is acting as the "man in the middle" and has essentially set up a
match between Kasparov and Bob, where she has access to both boards in play.
Eventually, Maggie will relay Kasparov's checkmate to Bob and be crowned the new
city champion. As expected, Maggie (playing as Bob) will lose to Kasparov. This
scenario raises the security issue of the malicious Maggie having access to the
intermediate stages of accreditation. Cryptographers have decided that the best
countermeasure to this attack is to impose a time limit on replies, in the hope
that there is not enough time for Maggie to relay the communications.[3]
As a toy example, suppose E(p) = p + 20; it follows that E^-1(p) = p - 20.
Suppose the secret is s = 5, so Peggy sends E(s) = 25,
and that this message is also heard by Eve and Maggie.
Upon intercepting the value of E(s), Eve and Maggie can only guess what the
secret s really is; it is possible that they are fooled into believing an incorrect
value. Throughout this process, Eve is prevented from learning the secret, and
sending the preliminary public identity helps prevent Maggie from tampering with the
accreditation process. Therefore, this process preserves zero knowledge and
demonstrates the protocol of the Proof of Identity.
Cryptographers Uriel Feige, Amos Fiat, and Adi Shamir developed the Feige-
Fiat-Shamir Proof of Identity in 1988. It is the best-known zero knowledge
proof of identity protocol.

In the pre-calculation phase, Peggy chooses two random prime numbers (p, q).
She keeps these two numbers as her secret. Then she computes n, the product of p
and q. She randomly chooses a number s in {1, ..., n-1} that is co-prime to n; two
numbers are co-prime if they do not share any common positive factor other than 1.
The number s is her secret, and v = s^2 (mod n) is its public value. Peggy then
chooses a random number r in {1, ..., n-1}, computes the commitment x = r^2 (mod n),
and sends the values (n, v, and x) to the verifier. This signifies the start of the
identification phase. Victor chooses a number B in {0, 1} and sends it to Peggy.
She replies with y = r * s^B (mod n), and Victor accepts the round if
y^2 = x * v^B (mod n).
Let p = 5 and q = 7. Then n = 35. Let Peggy's secret be s = 16, which is co-prime
to 35, so that v = 16^2 mod 35 = 11.
Suppose Victor only requires two rounds of accreditation of the protocol in order to
accept Peggy's claim.

In the first round, Peggy randomly selects r = 10 and sends x = 10^2 mod 35 = 30 to
Victor. Victor selects B = 0, so Peggy returns y = r = 10.
He verifies that 100 mod 35 = x = 30. In the second round, Peggy randomly selects r
= 20. She sends x = 20^2 mod 35 = 15 to Victor and he randomly selects B = 1, and
sends it back to Peggy.
She returns y = (20 * 16) mod 35 = 5 to Victor. He verifies that 25 = (15 * 11) mod 35.
After two rounds of successful accreditation, Victor is able to verify Peggy's claim
using the Feige-Fiat-Shamir Proof of Identity.
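These accreditation rounds can be replayed in code. The sketch below follows the simplified single-secret variant discussed in the text (public value v = s^2 mod n, challenge bit B) with the same toy numbers (p = 5, q = 7, n = 35, secret s = 16); real deployments use much larger moduli and several secrets in parallel:

```python
def fiat_shamir_round(n, s, r, b):
    """One accreditation round of simplified Fiat-Shamir identification.

    Peggy's secret is s, with public value v = s^2 mod n. She commits
    x = r^2 mod n; Victor sends challenge bit b; she answers
    y = r * s^b mod n; Victor accepts iff y^2 = x * v^b (mod n).
    """
    v = pow(s, 2, n)                # public value derived from the secret
    x = pow(r, 2, n)                # Peggy's commitment
    y = (r * pow(s, b, n)) % n      # Peggy's response to challenge b
    accepted = pow(y, 2, n) == (x * pow(v, b, n)) % n
    return x, y, v, accepted

n, s = 35, 16                       # n = 5 * 7; s is co-prime to 35
x1, y1, v, ok1 = fiat_shamir_round(n, s, r=10, b=0)
x2, y2, _, ok2 = fiat_shamir_round(n, s, r=20, b=1)
```

Note that a fresh random r is required every round: reusing r across both challenge bits would let an eavesdropper recover s from the two responses.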
is equal to x and different from 0. By using this longer procedure of computation,
Victor reduces Peggy's cheating probability. This is modeled by the equation
Pr = (1/c)^n, where Pr is the probability of Peggy cheating without being detected,
c is the total number of possible computations of the protocol, and n is the number
of rounds of accreditation. Because c > 2, the number of rounds required for Victor
to achieve a certain level of confidence is reduced exponentially. Thus, the
Guillou-Quisquater Proof of Identity serves as an improvement on the
Feige-Fiat-Shamir protocol, because it reduces the number of rounds of accreditation
by demanding more complex computations using both a public and a private key.
Schnorr's identification and authentication scheme can be used for digital signatures
by replacing Victor with a hash function. A digital signature is simply a mathematical
scheme to demonstrate the authenticity of a digital document. If Victor is replaced by
a cryptographically secure hash function, most zero knowledge protocols can be
turned into digital signatures. They are implemented as follows. Peggy creates a
number of problems and uses the hash function as a virtual Victor.
The inputs of the hash function are just the message and problems presented to Victor.
Using these inputs guarantees that neither the message nor the problems can be
altered without making the signature void. Also, the output of the hash function is
completely random and unpredictable.
Thus, Peggy cannot try to change the inputs to the hash in her favor to try and get
values which would allow her to cheat. The receiving end of the protocol can
calculate the hash function itself and check that Peggy returns the correct solutions in
order to determine that the digital signature is valid. Schnorr's Mutual Digital
Signature Authentication is the last of the five prominent zero knowledge protocols.
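The hash-as-Victor construction can be sketched with a toy Schnorr-style signature. The tiny group parameters (p = 23, q = 11, g = 2, where g has order q mod p), key and message below are illustrative only and offer no real security:

```python
import hashlib

P, Q, G = 23, 11, 2     # toy group: g = 2 has order q = 11 modulo p = 23

def h(*parts):
    """The hash function that plays Victor's role (challenge generator)."""
    data = "|".join(str(p) for p in parts).encode()
    return int(hashlib.sha256(data).hexdigest(), 16) % Q

def sign(message, x, k):
    """Peggy commits r = g^k; the hash answers the 'challenge'.

    k must be a fresh random value in [1, q-1] for every signature.
    """
    r = pow(G, k, P)
    e = h(message, r)           # challenge comes from the hash, not Victor
    s = (k + x * e) % Q
    return e, s

def verify(message, e, s, y):
    """Recompute the commitment from (e, s) and re-ask the hash."""
    r_v = (pow(G, s, P) * pow(y, (Q - e) % Q, P)) % P    # g^s * y^-e mod p
    return h(message, r_v) == e

x = 7                    # Peggy's private key
y = pow(G, x, P)         # public key y = g^x mod p
e, s = sign("pay Bob 5", x, k=3)
valid = verify("pay Bob 5", e, s, y)
```

The verification works because g^s * y^-e = g^(k + xe) * g^(-xe) = g^k, so an honest signer's commitment is recomputed exactly and the hash returns the same challenge.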
A Hamiltonian cycle for a graph is a path through the graph that passes through
every node exactly once. As the size of the graph increases, the difficulty of
calculating its Hamiltonian cycle increases as well, and the problem is therefore
classified as having NP complexity. The most popular example of a zero knowledge
Hamiltonian cycle proof consists of Peggy, who tries to prove that she knows the
Hamiltonian cycle for a certain graph, and Victor, who is to determine whether or
not Peggy knows the secret, i.e. the graph's Hamiltonian cycle.
Peggy gives Victor a permuted version of the original graph. Victor, in return,
asks either for a proof that the graph is a permutation of the original graph, or for
Peggy to show the Hamiltonian cycle for the permuted graph. Either of these
problems can be calculated easily from the original data, but being able to respond to
both of these possible requests requires Peggy to truly know the secret (the
Hamiltonian cycle of the graph).
Peggy first permutes the graph A to generate a permuted graph B with a new
Hamiltonian cycle C. By revealing only the permuted graph B, Victor can verify that
C visits each node exactly once and uses only edges that exist in the graph. After
multiple rounds of accreditation (with a new permuted graph each time), Victor will
be assured of the existence of a Hamiltonian cycle, without actually knowing the
cycle itself. This system is complete because an honest Peggy will be able to solve
Victor's problem every time. The system is sound because a cheating Peggy will only
be able to solve the problem half of the time (either the permutation or the path
would be incorrect). Because the Hamiltonian cycle system is both complete and
sound, it is zero knowledge.
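One round of this permutation protocol can be sketched as follows; the graph, cycle and random seeds are invented for the example, and the commitment step (hiding the permuted graph until Victor has chosen) is omitted for brevity:

```python
import random

def zk_hamiltonian_round(edges, cycle, challenge, rng):
    """One round of the zero-knowledge Hamiltonian-cycle proof.

    Peggy relabels the graph with a random permutation. Challenge 0:
    she reveals the permutation and Victor checks that relabeling the
    original graph reproduces the permuted one (always true for an
    honest Peggy: completeness). Challenge 1: she reveals only the
    cycle inside the relabeled graph, and Victor checks it visits
    every node once using only edges of the relabeled graph.
    """
    nodes = sorted({u for e in edges for u in e})
    perm = list(nodes)
    rng.shuffle(perm)
    relabel = dict(zip(nodes, perm))
    permuted_edges = {frozenset({relabel[u], relabel[v]}) for u, v in edges}

    if challenge == 0:
        recomputed = {frozenset({relabel[u], relabel[v]}) for u, v in edges}
        return permuted_edges == recomputed
    permuted_cycle = [relabel[u] for u in cycle]
    closed = permuted_cycle + [permuted_cycle[0]]
    ok_nodes = sorted(permuted_cycle) == sorted(nodes)
    ok_edges = all(frozenset({a, b}) in permuted_edges
                   for a, b in zip(closed, closed[1:]))
    return ok_nodes and ok_edges

# A square with one diagonal; Hamiltonian cycle 0-1-2-3-0.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
cycle = [0, 1, 2, 3]
rng = random.Random(7)
results = [zk_hamiltonian_round(edges, cycle, c, rng) for c in (0, 1, 0, 1)]
```

Because each round uses a fresh permutation, the cycle Victor sees in one round cannot be combined with the permutation revealed in another, which is what keeps the proof zero knowledge.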
Zero knowledge protocols are often discussed in a theoretical sense and not in a
practical sense. However, zero knowledge protocols do have a variety of practical
applications. They are used to ensure secure data transactions during identification
and authentication. The following case study illustrates how using zero knowledge
protocols increases the security of smart cards.
Smart cards, or Integrated Circuit Cards (ICCs), are small pocket-sized cards
with embedded integrated circuits which are used to process the input and output of
data. They are most commonly used as ATM, SIM, health, and national ID cards, but
have grown increasingly popular for their ability to store certificates during web
browsing.
Since their introduction into the technological market, smart cards have
experienced major breaches in security. They are often encrypted using simple
cryptographic techniques, making them easy for pirates to decrypt. Pirates have
developed efficient techniques to reverse-engineer smart card CPUs and their
memories. These techniques include using nonstandard programming voltages to
clear code-protection fuses, magnetic scanning of currents throughout the integrated
circuits, and acid-washing the chip one layer at a time. Fortunately, smart cards are
not yet used widely enough to cause any major organized criminal activity. In the
future, smart card applications will need public-key and zero knowledge protocols
and solutions to circumvent such malicious activity.
The first step to solving smart card security issues is to use a light zero
knowledge protocol. The protocol should mandate that each round is completed
within a very short time limit (so that a Mafia man-in-the-middle attack will fail) and
that a dictionary or pre-calculated table based brute force attack is not feasible.
Assume that the smart card only has 36 bytes of RAM available to work on the
protocol and that some of this space must be reserved for other use. Each key should
therefore be about 8 bytes in length. Suppose the intruder has 5 orders of magnitude
more processing power than the given system. Even if the brute-force calculation is
fast, the combination of a 64 bit key and a time limit would foil even the fastest
computers. Although intruders can try and anticipate Feige-Fiat-Shamir protocols
with pre-calculated prime number tables before launching their attack, simply
changing the system's public key values periodically would make an intrusion
infeasible and would effectively negate any attack. Thus, zero knowledge protocols
prove to have practical applications in solving cryptographic problems in current and
future technological systems.
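The 64-bit-key claim above can be checked with quick back-of-the-envelope arithmetic. The attacker speed below is a hypothetical figure, assuming the legitimate system performs one protocol computation per millisecond and the attacker is five orders of magnitude faster:

```python
# Back-of-the-envelope check of the 64-bit key claim.
key_bits = 64
guesses_per_second = 10**8            # assumed attacker speed (hypothetical)
keyspace = 2**key_bits                # total keys to try in a brute-force search
seconds_per_year = 60 * 60 * 24 * 365
years_to_search = keyspace / guesses_per_second / seconds_per_year
# Roughly 5,800 years to exhaust the keyspace, even before any per-round
# time limit is enforced; periodically rotating the public key values
# resets the search entirely, as the text suggests.
```

This is why the combination of an 8-byte key, a per-round time limit and periodic key rotation is presented as sufficient to foil even a much faster intruder.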
11.10 Conclusion
Zero knowledge proofs are both fascinating and useful concepts. They are
convincing, yet return nothing beyond the validity of a claim. Zero knowledge
protocols ensure that after reading a proof, the verifier cannot perform any
computational task that he could not perform before. Thus, the integrity and privacy
of information are maintained. Because zero knowledge proofs force malicious parties
to act according to a predetermined protocol, they have vast applications in the
domain of cryptography.
12.Role of Zero-Knowledge Protocols in Plagiarism Checking
It is required for the protocols, being used in plagiarism checking, to use zero-
knowledge algorithms. See what could be happening in the current scenario -
3. In the beginning of the proof both participants get the same input.
4. In each round, the verier challenges the prover, and the prover responds to the
challenge.
5. Both the verier and the prover can perform some private computation (they are
both modeled as a randomized Turing machine).
How do zero-knowledge systems work?

Even if we are using a zero-knowledge protocol, there is a good chance that our
information is compromised. The scenario could be like this:
13.Improving Plagiarism Detection Protocols
14.Benefits of increasing Information Security during Plagiarism
checks
- gain academic credibility (and thus gain credibility with future employers)
- apply for jobs with confidence, knowing that you won't be discovered as
incompetent in basic information-handling skills
- not be detected by Turnitin, and so avoid losing marks or being disciplined
15.Conclusion
Information security will be at its highest level when we are able to integrate
plagiarism checking practices with perfect zero-knowledge procedures.
References:
Day, C. and J. Horgan (2005). Patterns of Plagiarism. Proceedings of the
36th SIGCSE Technical Symposium on Computer Science Education, pp. 383-387.
Florida State University Center for Teaching and Learning (2007). FSU's
Information for Turnitin.com. Available at
http://learningforlife.fsu.edu/ctl/explore/bestPractices/docs/turnitin.pdf.
Accessed November 7, 2016.
Gibaldi, J. (1998) MLA Style Manual and Guide to Scholarly Publishing. 2nd
ed. New York: MLA.
Available at http://www.psu.edu/dept/cew/TurnitinFinalReportFS.doc.
Accessed November 6, 2016.
Asim M., Hussan M., Vaclav S., "Overview and comparison of Plagiarism
detection tools," Dateso, pp. 161-172, 2011.
Jasper P, "Turnitin.com," Library Journal 126(8), p. 138, May, 2001.
Chazelle, Bernard. "The security of knowing nothing." Nature 446 (2007). 26
Apr. 2007. Web.