Locality-sensitive hashing

In computer science, locality-sensitive hashing (LSH) is an algorithmic technique that hashes similar input
items into the same "buckets" with high probability.[1] (The number of buckets is much smaller than the
universe of possible input items.)[1] Since similar items end up in the same buckets, this technique can be
used for data clustering and nearest neighbor search. It differs from conventional hashing techniques in that
hash collisions are maximized, not minimized. Alternatively, the technique can be seen as a way to reduce
the dimensionality of high-dimensional data; high-dimensional input items can be reduced to low-
dimensional versions while preserving relative distances between items.

Hashing-based approximate nearest-neighbor search algorithms generally use one of two main categories of
hashing methods: either data-independent methods, such as locality-sensitive hashing (LSH); or data-
dependent methods, such as locality-preserving hashing (LPH).[2][3]

Definitions
An LSH family[1][4][5] $\mathcal{F}$ is defined for

a metric space $\mathcal{M} = (M, d)$,
a threshold $r > 0$,
an approximation factor $c > 1$,
and probabilities $p_1$ and $p_2$.

This family $\mathcal{F}$ is a set of functions $h : M \to S$ that map elements of the metric space to buckets $s \in S$. An LSH family must satisfy the following conditions for any two points $p, q \in M$ and any hash function $h$ chosen uniformly at random from $\mathcal{F}$:

if $d(p, q) \le r$, then $h(p) = h(q)$ (i.e., p and q collide) with probability at least $p_1$,

if $d(p, q) \ge cr$, then $h(p) = h(q)$ with probability at most $p_2$.

A family is interesting when $p_1 > p_2$. Such a family $\mathcal{F}$ is called $(r, cr, p_1, p_2)$-sensitive.

Alternatively[6] it is defined with respect to a universe of items U that have a similarity function $\phi : U \times U \to [0, 1]$. An LSH scheme is a family of hash functions H coupled with a probability distribution D over the functions such that a function $h \in H$ chosen according to D satisfies the property that $\Pr_{h \in H}[h(a) = h(b)] = \phi(a, b)$ for any $a, b \in U$.

Locality-preserving hashing

A locality-preserving hash is a hash function f that maps points in a metric space $\mathcal{M} = (M, d)$ to a scalar value such that

$d(p, q) < d(q, r) \Rightarrow |f(p) - f(q)| < |f(q) - f(r)|$

for any three points $p, q, r \in M$.


In other words, these are hash functions where the relative distance between the input values is preserved in
the relative distance between the output hash values; input values that are closer to each other will produce
output hash values that are closer to each other.

This is in contrast to cryptographic hash functions and checksums, which are designed to have random
output difference between adjacent inputs.
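
To make the property concrete, here is a minimal Python sketch (with made-up one-dimensional data) that checks the locality-preserving condition over all ordered triples of a finite point sample:

    import itertools

    def is_locality_preserving(f, points, dist):
        # Check that d(p,q) < d(q,r) implies |f(p)-f(q)| < |f(q)-f(r)|
        # for every ordered triple drawn from the sample.
        for p, q, r in itertools.permutations(points, 3):
            if dist(p, q) < dist(q, r) and not abs(f(p) - f(q)) < abs(f(q) - f(r)):
                return False
        return True

    # A one-dimensional example: the identity map trivially preserves locality.
    points = [0.0, 1.5, 3.0, 7.25]
    print(is_locality_preserving(lambda x: x, points, lambda a, b: abs(a - b)))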

The first family of locality-preserving hash functions was devised as a way to facilitate data pipelining in
implementations of parallel random-access machine (PRAM) algorithms that use universal hashing to
reduce memory contention and network congestion.[7][8]

Locality preserving hashes are related to space-filling curves.

Amplification

Given a $(d_1, d_2, p_1, p_2)$-sensitive family $\mathcal{F}$, we can construct new families $\mathcal{G}$ by either the AND-construction or OR-construction of $\mathcal{F}$.[1]

To create an AND-construction, we define a new family $\mathcal{G}$ of hash functions g, where each function g is constructed from k random functions $h_1, \ldots, h_k$ from $\mathcal{F}$. We then say that for a hash function $g \in \mathcal{G}$, $g(x) = g(y)$ if and only if $h_i(x) = h_i(y)$ for all $i = 1, \ldots, k$. Since the members of $\mathcal{F}$ are independently chosen for any $g \in \mathcal{G}$, $\mathcal{G}$ is a $(d_1, d_2, p_1^k, p_2^k)$-sensitive family.

To create an OR-construction, we define a new family $\mathcal{G}$ of hash functions g, where each function g is constructed from k random functions $h_1, \ldots, h_k$ from $\mathcal{F}$. We then say that for a hash function $g \in \mathcal{G}$, $g(x) = g(y)$ if and only if $h_i(x) = h_i(y)$ for one or more values of i. Since the members of $\mathcal{F}$ are independently chosen for any $g \in \mathcal{G}$, $\mathcal{G}$ is a $(d_1, d_2, 1 - (1 - p_1)^k, 1 - (1 - p_2)^k)$-sensitive family.
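
A minimal sketch in Python (with illustrative probabilities) of how the two constructions transform collision probabilities; note how the AND-construction drives $p_2$ toward 0 while the OR-construction drives $p_1$ toward 1, and composing the two (AND inside OR) yields the k-and-L scheme used in the nearest neighbor search algorithm below:

    def and_construction(p: float, k: int) -> float:
        # All k independent hash functions must collide: p becomes p^k.
        return p ** k

    def or_construction(p: float, k: int) -> float:
        # At least one of k independent hash functions must collide:
        # p becomes 1 - (1 - p)^k.
        return 1 - (1 - p) ** k

    # Example: start from a (d1, d2, 0.8, 0.4)-sensitive family.
    p1, p2, k = 0.8, 0.4, 4
    print(and_construction(p1, k), and_construction(p2, k))  # 0.4096 vs 0.0256
    print(or_construction(p1, k), or_construction(p2, k))    # ~0.998 vs ~0.870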

Applications
LSH has been applied to several problem domains, including:

Near-duplicate detection[9]
Hierarchical clustering[10][11]
Genome-wide association study[12]
Image similarity identification
VisualRank
Gene expression similarity identification
Audio similarity identification
Nearest neighbor search
Audio fingerprint[13]
Digital video fingerprinting
Physical data organization in database management systems[14]
Training fully connected neural networks[15][16]
Computer security[17]

Methods

Bit sampling for Hamming distance

One of the easiest ways to construct an LSH family is by bit sampling.[5] This approach works for the Hamming distance over d-dimensional vectors $\{0, 1\}^d$. Here, the family $\mathcal{F}$ of hash functions is simply the family of all the projections of points on one of the $d$ coordinates, i.e., $\mathcal{F} = \{h : \{0, 1\}^d \to \{0, 1\} \mid h(x) = x_i \text{ for some } i \in \{1, \ldots, d\}\}$, where $x_i$ is the $i$th coordinate of $x$. A random function $h$ from $\mathcal{F}$ simply selects a random bit from the input point. This family has the following parameters: $p_1 = 1 - R/d$, $p_2 = 1 - cR/d$. That is, any two vectors $x, y$ with Hamming distance at most $R$ collide under a random $h$ with probability at least $p_1$. Any $x, y$ with Hamming distance at least $cR$ collide with probability at most $p_2$.
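
A minimal sketch of this family in Python (made-up vectors; in practice the single-bit hash is amplified by concatenation, as in the nearest neighbor search algorithm below):

    import random

    def sample_bit_hash(d: int, rng: random.Random):
        # Draw a random h from the bit-sampling family: h(x) = x[i]
        # for a coordinate i chosen uniformly at random.
        i = rng.randrange(d)
        return lambda x: x[i]

    rng = random.Random(0)
    x = [0, 1, 1, 0, 1, 0, 0, 1]   # two vectors at Hamming distance 1
    y = [0, 1, 1, 0, 1, 0, 0, 0]
    h = sample_bit_hash(len(x), rng)
    # With d = 8 and distance 1, x and y collide with probability 1 - 1/8.
    print(h(x) == h(y))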

Min-wise independent permutations

Suppose U is composed of subsets of some ground set of enumerable items S and the similarity function of interest is the Jaccard index J. If $\pi$ is a permutation on the indices of S, for $A \subseteq S$ let $h(A) = \min_{a \in A} \pi(a)$. Each possible choice of $\pi$ defines a single hash function h mapping input sets to elements of S.

Define the function family H to be the set of all such functions and let D be the uniform distribution. Given two sets $A, B \subseteq S$, the event that $h(A) = h(B)$ corresponds exactly to the event that the minimizer of $\pi$ over $A \cup B$ lies inside $A \cap B$. As h was chosen uniformly at random, $\Pr[h(A) = h(B)] = J(A, B)$ and $(H, D)$ define an LSH scheme for the Jaccard index.
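
A minimal sketch of this scheme in Python (toy sets; true random permutations are used, which is only feasible here because the ground set is tiny, as discussed below):

    import random

    def minhash_signatures(sets, ground_set, num_hashes, seed=0):
        # Each random permutation pi of the ground set yields one hash
        # function h(A) = min over a in A of pi(a).
        rng = random.Random(seed)
        items = sorted(ground_set)
        sigs = {name: [] for name in sets}
        for _ in range(num_hashes):
            pi = {item: rank for rank, item in enumerate(rng.sample(items, len(items)))}
            for name, a in sets.items():
                sigs[name].append(min(pi[x] for x in a))
        return sigs

    sets = {"A": {1, 2, 3, 4}, "B": {2, 3, 4, 5}}
    sigs = minhash_signatures(sets, ground_set=range(1, 6), num_hashes=500)
    est = sum(a == b for a, b in zip(sigs["A"], sigs["B"])) / 500
    print(est)  # close to the true Jaccard index |A ∩ B| / |A ∪ B| = 3/5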

Because the symmetric group on n elements has size n!, choosing a truly random permutation from the full symmetric group is infeasible for even moderately sized n. Because of this fact, there has been significant work on finding a family of permutations that is "min-wise independent" — a permutation family for which each element of the domain has equal probability of being the minimum under a randomly chosen $\pi$. It has been established that a min-wise independent family of permutations is at least of size $\operatorname{lcm}(1, 2, \ldots, n) \ge e^{n - o(n)}$,[18] and that this bound is tight.[19]

Because min-wise independent families are too big for practical applications, two variant notions of min-wise independence have been introduced: restricted min-wise independent permutation families, and approximate min-wise independent families. Restricted min-wise independence is the min-wise independence property restricted to certain sets of cardinality at most k.[20] Approximate min-wise independence differs from the property by at most a fixed ε.[21]

Open source methods

Nilsimsa Hash
Nilsimsa is a locality-sensitive hashing algorithm used in anti-spam efforts.[22] The goal of Nilsimsa is to
generate a hash digest of an email message such that the digests of two similar messages are similar to each
other. The paper suggests that Nilsimsa satisfies three requirements:

1. The digest identifying each message should not vary significantly for changes that can be
produced automatically.
2. The encoding must be robust against intentional attacks.
3. The encoding should support an extremely low risk of false positives.

Testing performed in the paper on a range of file types identified the Nilsimsa hash as having a significantly
higher false positive rate when compared to other similarity digest schemes such as TLSH, Ssdeep and
Sdhash.[23]

TLSH

TLSH is a locality-sensitive hashing algorithm designed for a range of security and digital forensic
applications.[17] The goal of TLSH is to generate hash digests for messages such that low distances
between digests indicate that their corresponding messages are likely to be similar.

An implementation of TLSH is available as open-source software.[24]
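
For example, a hedged sketch using the py-tlsh Python bindings (assuming the package is installed; TLSH needs roughly 50 bytes or more of sufficiently varied input to produce a digest at all):

    import tlsh  # assumes the py-tlsh package: pip install py-tlsh

    # TLSH requires a minimum input length and enough byte-level variety.
    msg1 = b"Locality-sensitive hashing places similar inputs into the same buckets, maximizing collisions between near-duplicate messages."
    msg2 = b"Locality-sensitive hashing puts similar inputs into the same buckets, maximizing collisions between near-duplicate messages!"

    d1, d2 = tlsh.hash(msg1), tlsh.hash(msg2)
    print(tlsh.diff(d1, d2))  # low scores indicate similar messages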

Random projection

The random projection method of LSH due to Moses Charikar,[6] called SimHash (also sometimes called arccos[25]), uses an approximation of the cosine distance between vectors. The technique was used to approximate the NP-complete max-cut problem.[6]

The basic idea of this technique is to choose a random hyperplane (defined by a normal unit vector r) at the outset and use the hyperplane to hash input vectors.

Given an input vector v and a hyperplane defined by r, we let $h(v) = \operatorname{sgn}(v \cdot r)$. That is, $h(v) = \pm 1$ depending on which side of the hyperplane v lies. This way, each possible choice of a random hyperplane r can be interpreted as a hash function $h(v)$.

[Figure: $\frac{\theta(u, v)}{\pi}$ is proportional to $1 - \cos(\theta(u, v))$ on the interval $[0, \pi]$.]

For two vectors u, v with angle $\theta(u, v)$ between them, it can be shown that $\Pr[h(u) = h(v)] = 1 - \frac{\theta(u, v)}{\pi}$, which is proportional to $1 - \cos(\theta(u, v))$; in fact the ratio between the two is always within a factor of .87856.[6][26] This means the probability of the two vectors being on the same side of the random hyperplane is approximately proportional to the cosine distance between them.
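
A minimal SimHash sketch in NumPy (made-up vectors; each of 256 random hyperplanes contributes one sign bit, and the fraction of agreeing bits estimates the collision probability above):

    import numpy as np

    def simhash_family(dim: int, num_planes: int, seed: int = 0):
        # Each row of planes is a random normal vector r defining one
        # hash function h(v) = sgn(v . r).
        rng = np.random.default_rng(seed)
        planes = rng.standard_normal((num_planes, dim))
        return lambda v: planes @ v >= 0  # which side of each hyperplane

    h = simhash_family(dim=3, num_planes=256)
    u = np.array([1.0, 0.2, 0.0])
    v = np.array([0.9, 0.3, 0.1])
    agreement = np.mean(h(u) == h(v))  # estimates 1 - theta(u, v) / pi
    theta = np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))
    print(agreement, 1 - theta / np.pi)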

Stable distributions

The hash function[27] $h_{\mathbf{a}, b}(\mathbf{v}) = \left\lfloor \frac{\mathbf{a} \cdot \mathbf{v} + b}{r} \right\rfloor$ maps a d-dimensional vector $\mathbf{v}$ onto the set of integers. Each hash function in the family is indexed by a choice of random $\mathbf{a}$ and $b$, where $\mathbf{a}$ is a d-dimensional vector with entries chosen independently from a stable distribution and $b$ is a real number chosen uniformly from the range [0, r]. For a fixed $\mathbf{a}, b$ the hash function $h_{\mathbf{a}, b}$ is given by $h_{\mathbf{a}, b}(\mathbf{v}) = \left\lfloor \frac{\mathbf{a} \cdot \mathbf{v} + b}{r} \right\rfloor$.
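
A minimal sketch of one such hash function in NumPy, using the Gaussian distribution (which is 2-stable, so collisions reflect Euclidean distance; the quantization width r is an illustrative choice):

    import numpy as np

    def pstable_hash(dim: int, r: float, seed: int = 0):
        # h(v) = floor((a . v + b) / r) with entries of a drawn from the
        # 2-stable Gaussian distribution and b uniform on [0, r].
        rng = np.random.default_rng(seed)
        a = rng.standard_normal(dim)
        b = rng.uniform(0, r)
        return lambda v: int(np.floor((a @ v + b) / r))

    h = pstable_hash(dim=4, r=2.0)
    print(h(np.array([0.1, 0.2, 0.3, 0.4])), h(np.array([0.1, 0.2, 0.3, 0.5])))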

Other construction methods for hash functions have been proposed to better fit the data.[28] In particular k-means hash functions are better in practice than projection-based hash functions, but without any theoretical guarantee.

Semantic hashing

Semantic hashing is a technique that attempts to map input items to addresses such that closer inputs have
higher semantic similarity.[29] The hashcodes are found via training of an artificial neural network or
graphical model.

Algorithm for nearest neighbor search


One of the main applications of LSH is to provide a method for efficient approximate nearest neighbor search algorithms. Consider an LSH family $\mathcal{F}$. The algorithm has two main parameters: the width parameter k and the number of hash tables L.

In the first step, we define a new family $\mathcal{G}$ of hash functions g, where each function g is obtained by concatenating k functions $h_1, \ldots, h_k$ from $\mathcal{F}$, i.e., $g(p) = [h_1(p), \ldots, h_k(p)]$. In other words, a random hash function g is obtained by concatenating k randomly chosen hash functions from $\mathcal{F}$. The algorithm then constructs L hash tables, each corresponding to a different randomly chosen hash function g.

In the preprocessing step we hash all n d-dimensional points from the data set S into each of the L hash tables. Given that the resulting hash tables have only n non-zero entries, one can reduce the amount of memory used per each hash table to $O(n)$ using standard hash functions.

Given a query point q, the algorithm iterates over the L hash functions g. For each g considered, it retrieves the data points that are hashed into the same bucket as q. The process is stopped as soon as a point within distance $cR$ from q is found.
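
A compact sketch of the whole scheme in Python, using bit sampling over binary vectors as the base family (illustrative parameters; the query returns the first stored point within the target Hamming distance, or None, since LSH is probabilistic and no collision is guaranteed):

    import random
    from collections import defaultdict

    class LSHIndex:
        def __init__(self, dim, k, L, seed=0):
            rng = random.Random(seed)
            # One hash table per function g; each g concatenates k
            # randomly sampled coordinates (the bit-sampling family).
            self.gs = [[rng.randrange(dim) for _ in range(k)] for _ in range(L)]
            self.tables = [defaultdict(list) for _ in range(L)]

        def _key(self, g, v):
            return tuple(v[i] for i in g)

        def insert(self, v):
            for g, table in zip(self.gs, self.tables):
                table[self._key(g, v)].append(v)

        def query(self, q, radius):
            # Scan the L buckets that q falls into, stopping at the first
            # stored point within the target Hamming distance.
            for g, table in zip(self.gs, self.tables):
                for v in table[self._key(g, q)]:
                    if sum(a != b for a, b in zip(q, v)) <= radius:
                        return v
            return None

    index = LSHIndex(dim=8, k=3, L=5)
    index.insert((0, 1, 1, 0, 1, 0, 0, 1))
    print(index.query((0, 1, 1, 0, 1, 0, 0, 0), radius=2))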

Given the parameters k and L, the algorithm has the following performance guarantees:

preprocessing time: $O(nLkt)$, where t is the time to evaluate a function $h \in \mathcal{F}$ on an input point p;
space: $O(nL)$, plus the space for storing data points;
query time: $O(L(kt + dnP_2^k))$;
the algorithm succeeds in finding a point within distance $cR$ from q (if there exists a point within distance R) with probability at least $1 - (1 - P_1^k)^L$;

For a fixed approximation ratio $c = 1 + \epsilon$ and probabilities $P_1$ and $P_2$, one can set $k = \left\lceil \frac{\log n}{\log 1/P_2} \right\rceil$ and $L = \lceil P_1^{-k} \rceil = O(n^{\rho} P_1^{-1})$, where $\rho = \frac{\log P_1}{\log P_2}$. Then one obtains the following performance guarantees:

preprocessing time: $O(n^{1+\rho} P_1^{-1} kt)$;
space: $O(n^{1+\rho} P_1^{-1})$, plus the space for storing data points;
query time: $O(n^{\rho} P_1^{-1} (kt + d))$;
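
A small sketch of this parameter choice in Python (illustrative values for n, $P_1$ and $P_2$):

    import math

    def choose_parameters(n: int, p1: float, p2: float):
        # k = ceil(log n / log(1/p2)) makes P2^k about 1/n; then
        # L = ceil(p1^-k) = O(n^rho / p1) with rho = log(1/p1)/log(1/p2).
        k = math.ceil(math.log(n) / math.log(1 / p2))
        L = math.ceil(p1 ** -k)
        rho = math.log(1 / p1) / math.log(1 / p2)
        return k, L, rho

    print(choose_parameters(n=1_000_000, p1=0.8, p2=0.4))  # (16, 36, ~0.24)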

Improvements

When t is large, it is possible to reduce the hashing time from $O(n^{\rho})$. This was shown by[30] and[31], which gave query-time and space bounds in which the cost of evaluating hash functions is largely decoupled from the $n^{\rho}$ factor.

It is also sometimes the case that the factor $1/P_1$ can be very large. This happens for example with Jaccard similarity data, where even the most similar neighbor often has a quite low Jaccard similarity with the query. In[32] it was shown how to reduce the query time's dependence on $1/P_1$ (not including hashing costs) and similarly the space usage.

See also
Bloom filter
Curse of dimensionality
Feature hashing
Fourier-related transforms
Geohash
Multilinear subspace learning
Principal component analysis
Random indexing[33]
Rolling hash
Singular value decomposition
Sparse distributed memory
Wavelet compression

References
1. Rajaraman, A.; Ullman, J. (2010). "Mining of Massive Datasets, Ch. 3" (http://infolab.stanford.
edu/~ullman/mmds.html).
2. Zhao, Kang; Lu, Hongtao; Mei, Jincheng (2014). Locality Preserving Hashing (https://ojs.aaa
i.org/index.php/AAAI/article/view/9133/8992). AAAI Conference on Artificial Intelligence.
Vol. 28. pp. 2874–2880.
3. Tsai, Yi-Hsuan; Yang, Ming-Hsuan (October 2014). "Locality preserving hashing". 2014
IEEE International Conference on Image Processing (ICIP). pp. 2988–2992.
doi:10.1109/ICIP.2014.7025604 (https://doi.org/10.1109%2FICIP.2014.7025604). ISBN 978-
1-4799-5751-4. ISSN 1522-4880 (https://www.worldcat.org/issn/1522-4880).
S2CID 8024458 (https://api.semanticscholar.org/CorpusID:8024458).
4. Gionis, A.; Indyk, P.; Motwani, R. (1999). "Similarity Search in High Dimensions via Hashing"
(http://people.csail.mit.edu/indyk/vldb99.ps). Proceedings of the 25th Very Large Database
(VLDB) Conference.
5. Indyk, Piotr.; Motwani, Rajeev. (1998). "Approximate Nearest Neighbors: Towards Removing
the Curse of Dimensionality." (http://people.csail.mit.edu/indyk/nndraft.ps). Proceedings of
30th Symposium on Theory of Computing.
6. Charikar, Moses S. (2002). "Similarity Estimation Techniques from Rounding Algorithms".
Proceedings of the 34th Annual ACM Symposium on Theory of Computing. pp. 380–388.
CiteSeerX 10.1.1.147.4064 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.147.
4064). doi:10.1145/509907.509965 (https://doi.org/10.1145%2F509907.509965).
7. Chin, Andrew (1991). Complexity Issues in General Purpose Parallel Computing (https://per
ma.cc/E47H-WCVP) (DPhil). University of Oxford. pp. 87–95.
8. Chin, Andrew (1994). "Locality-Preserving Hash Functions for General Purpose Parallel
Computation" (http://unclaw.com/chin/scholarship/hashfunctions.pdf) (PDF). Algorithmica.
12 (2–3): 170–181. doi:10.1007/BF01185209 (https://doi.org/10.1007%2FBF01185209).
S2CID 18108051 (https://api.semanticscholar.org/CorpusID:18108051).
9. Das, Abhinandan S.; et al. (2007), "Google news personalization: scalable online
collaborative filtering", Proceedings of the 16th International Conference on World Wide
Web: 271, doi:10.1145/1242572.1242610 (https://doi.org/10.1145%2F1242572.1242610),
ISBN 9781595936547, S2CID 207163129 (https://api.semanticscholar.org/CorpusID:20716
3129).
10. Koga, Hisashi; Tetsuo Ishibashi; Toshinori Watanabe (2007), "Fast agglomerative
hierarchical clustering algorithm using Locality-Sensitive Hashing", Knowledge and
Information Systems, 12 (1): 25–53, doi:10.1007/s10115-006-0027-5 (https://doi.org/10.100
7%2Fs10115-006-0027-5), S2CID 4613827 (https://api.semanticscholar.org/CorpusID:4613
827).
11. Cochez, Michael; Mou, Hao (2015), "Twister Tries: Approximate Hierarchical Agglomerative
Clustering for Average Distance in Linear Time" (https://jyx.jyu.fi/bitstream/123456789/4653
7/1/cochezmousigmod15finalcameraready.pdf) (PDF), Proceeding SIGMOD '15
Proceedings of the 2015 ACM SIGMOD International Conference on Management of Data:
505–517, doi:10.1145/2723372.2751521 (https://doi.org/10.1145%2F2723372.2751521),
ISBN 9781450327589, S2CID 14414777 (https://api.semanticscholar.org/CorpusID:144147
77).
12. Brinza, Dumitru; et al. (2010), "RAPID detection of gene–gene interactions in genome-wide
association studies", Bioinformatics, 26 (22): 2856–2862, doi:10.1093/bioinformatics/btq529
(https://doi.org/10.1093%2Fbioinformatics%2Fbtq529), PMC 3493125 (https://www.ncbi.nlm.
nih.gov/pmc/articles/PMC3493125), PMID 20871107 (https://pubmed.ncbi.nlm.nih.gov/2087
1107)
13. dejavu - Audio fingerprinting and recognition in Python (https://github.com/worldveil/dejavu),
2018-12-19
14. Aluç, Güneş; Özsu, M. Tamer; Daudjee, Khuzaima (2018), "Building self-clustering RDF
databases using Tunable-LSH", The VLDB Journal, 28 (2): 173–195, doi:10.1007/s00778-
018-0530-9 (https://doi.org/10.1007%2Fs00778-018-0530-9), S2CID 53695535 (https://api.s
emanticscholar.org/CorpusID:53695535)
15. Chen, Beidi; Medini, Tharun; Farwell, James; Gobriel, Sameh; Tai, Charlie; Shrivastava,
Anshumali (2020-02-29). "SLIDE : In Defense of Smart Algorithms over Hardware
Acceleration for Large-Scale Deep Learning Systems". arXiv:1903.03129 (https://arxiv.org/a
bs/1903.03129) [cs.DC (https://arxiv.org/archive/cs.DC)].
16. Chen, Beidi; Liu, Zichang; Peng, Binghui; Xu, Zhaozhuo; Li, Jonathan Lingjie; Dao, Tri;
Song, Zhao; Shrivastava, Anshumali; Re, Christopher (2021), "MONGOOSE: A Learnable
LSH Framework for Efficient Neural Network Training" (https://openreview.net/forum?id=wW
K7yXkULyh), International Conference on Learning Representation
17. Oliver, Jonathan; Cheng, Chun; Chen, Yanggui (2013). TLSH - a locality sensitive hash (http
s://www.academia.edu/7833902). 4th Cybercrime and Trustworthy Computing Workshop.
pp. 7–13. doi:10.1109/CTC.2013.9 (https://doi.org/10.1109%2FCTC.2013.9). ISBN 978-1-
4799-3076-0.
18. Broder, A.Z.; Charikar, M.; Frieze, A.M.; Mitzenmacher, M. (1998). "Min-wise independent
permutations" (http://www.cs.princeton.edu/~moses/papers/minwise.ps). Proceedings of the
Thirtieth Annual ACM Symposium on Theory of Computing. pp. 327–336.
CiteSeerX 10.1.1.409.9220 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.409.
9220). doi:10.1145/276698.276781 (https://doi.org/10.1145%2F276698.276781). Retrieved
2007-11-14.
19. Takei, Y.; Itoh, T.; Shinozaki, T. "An optimal construction of exactly min-wise independent
permutations". Technical Report COMP98-62, IEICE, 1998.
20. Matoušek, J.; Stojakovic, M. (2002). "On Restricted Min-Wise Independence of
Permutations" (http://citeseer.ist.psu.edu/689217.html). Preprint. Retrieved 2007-11-14.
21. Saks, M.; Srinivasan, A.; Zhou, S.; Zuckerman, D. (2000). "Low discrepancy sets yield
approximate min-wise independent permutation families" (http://citeseer.ist.psu.edu/saks99l
ow.html). Information Processing Letters. 73 (1–2): 29–32. CiteSeerX 10.1.1.20.8264 (https://
citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.20.8264). doi:10.1016/S0020-
0190(99)00163-5 (https://doi.org/10.1016%2FS0020-0190%2899%2900163-5). Retrieved
2007-11-14.
22. Damiani; et al. (2004). "An Open Digest-based Technique for Spam Detection" (http://spdp.d
i.unimi.it/papers/pdcs04.pdf) (PDF). Retrieved 2013-09-01.
23. Oliver; et al. (2013). "TLSH - A Locality Sensitive Hash" (https://www.academia.edu/783390
2/TLSH_-A_Locality_Sensitive_Hash). 4th Cybercrime and Trustworthy Computing
Workshop. Retrieved 2015-06-04.
24. "TLSH" (https://github.com/trendmicro/tlsh). GitHub. Retrieved 2014-04-10.
25. Alexandr Andoni; Indyk, P. (2008). "Near-Optimal Hashing Algorithms for Approximate
Nearest Neighbor in High Dimensions". Communications of the ACM. 51 (1): 117–122.
CiteSeerX 10.1.1.226.6905 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.226.
6905). doi:10.1145/1327452.1327494 (https://doi.org/10.1145%2F1327452.1327494).
S2CID 6468963 (https://api.semanticscholar.org/CorpusID:6468963).
26. Goemans, Michel X.; Williamson, David P. (1995). "Improved approximation algorithms for
maximum cut and satisfiability problems using semidefinite programming". Journal of the
ACM. Association for Computing Machinery (ACM). 42 (6): 1115–1145.
doi:10.1145/227683.227684 (https://doi.org/10.1145%2F227683.227684). ISSN 0004-5411
(https://www.worldcat.org/issn/0004-5411).
27. Datar, M.; Immorlica, N.; Indyk, P.; Mirrokni, V.S. (2004). "Locality-Sensitive Hashing Scheme
Based on p-Stable Distributions" (http://theory.csail.mit.edu/~mirrokni/pstable.ps).
Proceedings of the Symposium on Computational Geometry.
28. Pauleve, L.; Jegou, H.; Amsaleg, L. (2010). "Locality sensitive hashing: A comparison of
hash function types and querying mechanisms" (http://hal.inria.fr/inria-00567191/en/).
Pattern Recognition Letters. 31 (11): 1348–1358. doi:10.1016/j.patrec.2010.04.004 (https://d
oi.org/10.1016%2Fj.patrec.2010.04.004).
29. Salakhutdinov, Ruslan; Hinton, Geoffrey (2008). "Semantic hashing" (https://doi.org/10.101
6%2Fj.ijar.2008.11.006). International Journal of Approximate Reasoning. 50 (7): 969–978.
doi:10.1016/j.ijar.2008.11.006 (https://doi.org/10.1016%2Fj.ijar.2008.11.006).
30. Dahlgaard, Søren, Mathias Bæk Tejs Knudsen, and Mikkel Thorup. "Fast similarity
sketching." (https://arxiv.org/pdf/1704.04370) 2017 IEEE 58th Annual Symposium on
Foundations of Computer Science (FOCS). IEEE, 2017.
31. Christiani, Tobias. "Fast locality-sensitive hashing frameworks for approximate near
neighbor search." (https://arxiv.org/pdf/1708.07586) International Conference on Similarity
Search and Applications. Springer, Cham, 2019.
32. Ahle, Thomas Dybdahl. "On the Problem of $p_1^{-1}$ in Locality-Sensitive Hashing."
International Conference on Similarity Search and Applications. Springer, Cham, 2020.
33. Gorman, James, and James R. Curran. "Scaling distributional similarity to large corpora." (ht
tps://aclanthology.org/P06-1046.pdf) Proceedings of the 21st International Conference on
Computational Linguistics and the 44th annual meeting of the Association for Computational
Linguistics. Association for Computational Linguistics, 2006.

Further reading
Samet, H. (2006) Foundations of Multidimensional and Metric Data Structures. Morgan
Kaufmann. ISBN 0-12-369446-9
Indyk, Piotr; Motwani, Rajeev; Raghavan, Prabhakar; Vempala, Santosh (1997). "Locality-
preserving hashing in multidimensional spaces". Proceedings of the twenty-ninth annual
ACM symposium on Theory of computing. STOC '97. pp. 618–625.
CiteSeerX 10.1.1.50.4927 (https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.50.49
27). doi:10.1145/258533.258656 (https://doi.org/10.1145%2F258533.258656). ISBN 978-0-
89791-888-6. S2CID 15693787 (https://api.semanticscholar.org/CorpusID:15693787).
Chin, Andrew (1994). "Locality-preserving hash functions for general purpose parallel
computation" (http://www.unclaw.com/chin/scholarship/hashfunctions.pdf) (PDF).
Algorithmica. 12 (2–3): 170–181. doi:10.1007/BF01185209 (https://doi.org/10.1007%2FBF0
1185209). S2CID 18108051 (https://api.semanticscholar.org/CorpusID:18108051).

External links
Alex Andoni's LSH homepage (http://web.mit.edu/andoni/www/LSH/index.html)
LSHKIT: A C++ Locality Sensitive Hashing Library (https://lshkit.sourceforge.net/)
A Python Locality Sensitive Hashing library that optionally supports persistence via redis (htt
ps://github.com/simonemainardi/LSHash)
Caltech Large Scale Image Search Toolbox (https://web.archive.org/web/20101203074412/
http://www.vision.caltech.edu/malaa/software/research/image-search/): a Matlab toolbox
implementing several LSH hash functions, in addition to Kd-Trees, Hierarchical K-Means,
and Inverted File search algorithms.
Slash: A C++ LSH library, implementing Spherical LSH by Terasawa, K., Tanaka, Y (https://gi
thub.com/salviati/slash)
LSHBOX: An Open Source C++ Toolbox of Locality-Sensitive Hashing for Large Scale
Image Retrieval, Also Support Python and MATLAB. (https://github.com/RSIA-LIESMARS-W
HU/LSHBOX)
SRS: A C++ Implementation of An In-memory, Space-efficient Approximate Nearest
Neighbor Query Processing Algorithm based on p-stable Random Projection (https://github.c
om/DBWangGroupUNSW/SRS)
TLSH open source on Github (https://github.com/trendmicro/tlsh)
JavaScript port of TLSH (Trend Micro Locality Sensitive Hashing) bundled as node.js
module (https://github.com/idealista/tlsh-js)
Java port of TLSH (Trend Micro Locality Sensitive Hashing) bundled as maven package (htt
ps://github.com/idealista/tlsh)
