A Concurrent Implementation of Linear Hashing

Huxia Shi
Department of Computer Science and Engineering, York University 4700 Keele Street, Toronto, Ontario, Canada, M3J 1P3

Abstract. Traditional hashing algorithms have fixed hash function ranges. Although they provide efficient access to the indices of static databases, they are not appropriate for dynamic databases because of their inability to handle data size growth and shrinkage. Linear hashing, a dynamic hashing algorithm, addresses this main deficiency of traditional hashing. This paper presents a concurrent access solution for linear hashing. The efficiency of this solution has been compared with that of a sequential access solution. Both solutions are implemented in Java¹. Some properties of these implementations have been checked by means of the model checker Java PathFinder² (JPF for short).

1 Introduction

Linear hashing is a dynamic hashing algorithm, which is of interest to the database community. Hash tables have been widely used to save indices for relatively static databases. However, they are not appropriate for dynamic databases. If there are not enough buckets in the hash table, each bucket of the hash table will contain a lot of indices when the database size is dramatically increased. The efficiency of the hash table in retrieving data is impaired in this case. On the contrary, if the hash table is created with a large number of buckets, it is a waste of memory space if the database always contains a small set of data or shrinks from a large size to a small size. Dynamic hashing techniques address this problem of traditional hash tables. The number of buckets can be adjusted in dynamic hashing according to the change in the data size. Therefore, it is well suited for dynamic database applications.

Concurrent access is an important aspect of linear hashing. As the main application of linear hashing is in databases, it is normal that one user is deleting some indices while another user is adding some indices at the same time. The requirement of providing multiple users with acceptable response times motivates the research on efficient concurrent algorithms for linear hashing. This paper presents a concurrent solution [1]. Its performance is discussed and analyzed by comparing it with a sequential solution. Both solutions are implemented in Java. These implementations and their verification using JPF are briefly explained.

The rest of the paper is organized as follows. In Section 2, we discuss related work. Section 3 is a general introduction to the linear hashing technique and its operations.
¹ http://java.sun.com
² http://javapathfinder.sourceforge.net/

Section 4 explains one concurrent solution [1] for linear hashing. In Section 5, the Java implementations of this solution and of a sequential solution are presented. Section 6 describes the verification of both implementations using JPF. Section 7 discusses the experimental results about the performance. We summarize this paper in Section 8.

2 Related work

Linear hashing is a dynamic hash table algorithm invented by Witold Litwin in 1980 [2]. Other dynamic hashing algorithms include extendible hashing [3], exponential hashing [4], and dynamic hashing [5]. The techniques used in this concurrent solution are similar to the algorithms investigated for B-trees [6] [7] and binary search trees [8] [9].

3 Linear Hashing

In this section, we outline the data structure and the operations of a linear hash table.

When a linear hash table is initialized, N primary buckets with contiguous logical addresses are created. The initial hash function h_0 : k → {0, . . . , N − 1} maps a key to the id of a primary bucket. Each primary bucket has a maximal capacity of b keys. When a new key is inserted into the linear hash table, we first calculate the primary bucket id and then put the key into the target bucket. When the number of keys in a primary bucket exceeds its maximal capacity b, a bucket named an overflow bucket is created and linked to the end of this primary bucket. More than one overflow bucket may be created when more keys are added; each newly created overflow bucket is linked to the last overflow bucket. We call a primary bucket and all of its overflow buckets a bucket chain. Primary buckets without overflow buckets are called length one bucket chains. All keys in a bucket chain are saved in order.

When more and more keys are inserted into an initialized linear hash table, some bucket chains may contain a long list of overflow buckets. This is not efficient for the find operation. The split operation is used at this time to expand the number of bucket chains and thus reduce their length. The first split operation is applied to the first bucket chain. We choose a new hash function h_1(k) such that for any key value k, either h_1(k) = h_0(k) or h_1(k) = h_0(k) + N. This hash function is applied to all keys in the first bucket chain and maps them to two values, 0 or N. The keys with the hash value 0 are left in place; the other keys are put into a new bucket chain. The number of bucket chains in this linear hash table is increased to N + 1 by adding the new bucket chain at the end. Figure 1 shows the result of splitting the first bucket chain. The further split operations are applied to the next bucket chains one by one. After the bucket chain at position N − 1 is split, the next split operation moves back to the first bucket chain with a higher level hash function h_2. To summarize, there are three areas in the linear hash table, as shown in Figure 1.
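As a concrete illustration of the first split, the following toy program redistributes the keys of the first bucket chain between chain 0 and the new chain at position N, using h_0(k) = k mod N and h_1(k) = k mod 2N. The key values and N = 2 are assumptions for illustration, not taken from the paper.

import java.util.ArrayList;
import java.util.List;

public class FirstSplitDemo {
    public static void main(String[] args) {
        int n = 2;                                   // N initial bucket chains, h0(k) = k mod N
        List<Integer> chain0 = List.of(0, 16, 24, 32, 2, 26);   // keys currently in chain 0
        List<Integer> stays = new ArrayList<>();
        List<Integer> moves = new ArrayList<>();
        for (int k : chain0) {
            if (k % (2 * n) == 0) stays.add(k);      // h1(k) = 0 = h0(k): the key stays in chain 0
            else moves.add(k);                       // h1(k) = h0(k) + N: the key moves to the new chain
        }
        System.out.println("chain 0: " + stays + ", new chain " + n + ": " + moves);
    }
}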

Fig. 1. The split result of the first bucket chain (the chains before next and from position 2^level·N onward are addressed by h_{level+1}(k); the middle area is addressed by h_level(k))

A variable named next is introduced to save the position of the bucket chain to be split next. The next and level variables are called the root variables. The hash function h_level maps any key to a value ranging from 0 to 2^level·N − 1. The relation between h_level and h_{level+1} is h_{level+1}(k) = h_level(k) or h_{level+1}(k) = h_level(k) + 2^level·N. The boundaries of the three areas in a linear hash table can be determined by the next and level variables, as shown in Table 1. The hash function h_level is used in the middle area. The other two areas use the hash function h_{level+1}.

      | Lower boundary | Upper boundary         | Hash function
area1 | 0              | next − 1               | h_{level+1}
area2 | next           | 2^level·N − 1          | h_level
area3 | 2^level·N      | 2^level·N + next − 1   | h_{level+1}

Table 1. The properties of the three areas

The root variables are updated by a split operation as illustrated in the following pseudocode:

next ← (next + 1) mod (N · 2^level)
if next = 0 then level ← level + 1 endif

The merge operation is opposite to the splitting. It merges the bucket chain at position next − 1 and the one at the end of the table into a new bucket chain. The merged new bucket chain is used to replace the one at position next − 1, and the last bucket chain is deleted. If the next value is not zero, it is decreased by 1. Otherwise, the level value is decreased by 1 and next moves to 2^level·N − 1:

if next = 0 then level ← level − 1 endif
next ← (next − 1) mod (N · 2^level)

The other three operations of a linear hash table are find, insert and delete. The find operation checks if a key exists in a linear hash table. It first locates the target bucket chain by the hash function and then checks the key data in this bucket chain. Because of the special three-areas data structure explained above, the location procedure is changed to two steps.
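The root-variable updates above can be rendered directly in runnable Java. The following is a sketch under the assumption N = 2; it is not the paper's code.

class RootVariables {
    int level = 0, next = 0;                       // the root variables
    final int n = 2;                               // N initial bucket chains (assumed)

    void splitUpdate() {                           // next <- (next + 1) mod (N * 2^level)
        next = (next + 1) % (n * (1 << level));
        if (next == 0) level = level + 1;          // a whole round of splits has finished
    }

    void mergeUpdate() {                           // inverse of splitUpdate
        // Note: calling this on the initial table (level == 0, next == 0) drives next
        // below zero; see the discussion of the merge problem in Section 6.
        if (next == 0) level = level - 1;
        next = Math.floorMod(next - 1, n * (1 << level));
    }
}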

In the first step, h_level is used to calculate a bucket chain position. The next step compares this value with the lower boundary of the middle area. If the calculated position is higher than or equal to next, it is accepted as the valid target bucket id. Otherwise, h_{level+1} is used to get the target bucket id.

The insert operation adds a key. The delete operation removes a key from a linear hash table. Both the delete and the insert operations use the same location procedure as the find operation to get the target bucket id.

4 Concurrent Solution

The concurrent solution in [1] is illustrated in this section. It is based on three different locks, whose compatibility is shown in Table 2.

Lock request \ Existing lock | Read lock | Selective lock | Exclusive lock
Read lock                    | yes       | yes            | no
Selective lock               | yes       | no             | no
Exclusive lock               | no        | no             | no

Table 2. All locks used in the concurrent solution

This concurrent solution allows the find and the split operations to work simultaneously. However, it introduces a data race problem. The find operation first reads the level value and then uses it to find the target bucket chain. The level value may be updated by a split process before it is used to calculate the target bucket chain. To handle this problem, a variable locallevel is added into each bucket chain. This variable saves the correct level value used in this bucket chain. It is redundant information if all operations are serialized. How to use this variable is illustrated in the following subsections.

Now the linear hashing algorithm has three variables. Integers level and next are the root variables. The variable bucketChainList is an array containing all bucket chains. A bucket chain object contains an integer variable locallevel and a sequence of buckets.

level, next : Integer
bucketChainList : Array<BucketChain>

4.1 Find operation

In the find operation, a read lock is first added on the root variables. After this read lock is successfully added, a second read lock is put on the target bucket chain. Then the find operation releases the first lock on the root variables and reads data in the target bucket chain. Finally it releases the second lock.
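The basic locking sequence of the find operation can be sketched as follows. This is an illustrative, self-contained sketch rather than the paper's code: a JDK read/write lock stands in for the three-mode Lock class (the find path only needs read locks), and the obsolete-level check described next is omitted.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.locks.ReentrantReadWriteLock;

class FindSketch {
    static class Chain {
        final ReentrantReadWriteLock lock = new ReentrantReadWriteLock(); // stands in for the chain's Lock object
        volatile int localLevel;                       // used by the obsolete-level check (omitted below)
        volatile List<Integer> keys = new ArrayList<>();
    }

    final ReentrantReadWriteLock rootLock = new ReentrantReadWriteLock(); // protects level and next
    volatile int level = 0, next = 0;
    volatile Chain[] chains;
    final int n = 2;                                   // N initial bucket chains (assumed)

    FindSketch() {
        chains = new Chain[n];
        for (int i = 0; i < n; i++) chains[i] = new Chain();
    }

    boolean find(int key) {
        rootLock.readLock().lock();                    // first read lock: the root variables
        int pos = Math.floorMod(key, (1 << level) * n);                    // h_level(k)
        if (pos < next) pos = Math.floorMod(key, (1 << (level + 1)) * n);  // h_{level+1}(k)
        Chain target = chains[pos];
        target.lock.readLock().lock();                 // second read lock: the target bucket chain
        rootLock.readLock().unlock();                  // release the root lock before reading data
        try {
            return target.keys.contains(key);
        } finally {
            target.lock.readLock().unlock();           // finally release the second lock
        }
    }
}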

However, the level value used by the find operation may be changed by a simultaneous split operation. Such an obsolete level can be identified by comparing it with the locallevel variable in the target bucket chain. If the find procedure detects this problem, it continues to increase the level value and locate the new bucket chain until the correct one is found. This procedure of probing the correct target chain is shared with the insert and delete operations.

4.2 Insert and delete operations

The insert and delete operations are similar. They first add a read lock on the root variables and then selectively lock the target bucket chain. After the second lock is successfully added, the first lock is released and the target bucket chain is updated. Finally the second lock is released. If there are several insert and delete operation requests on the same bucket chain, these operations must work serially. The first insert or delete process puts a selective lock on the target bucket chain. Before this selective lock is released, other insert or delete processes cannot successfully add their selective locks. To allow simultaneous accesses to the same target bucket chain with the find processes, the insert and delete operations do not change the bucket sequence in the bucket chain directly. They create a bucket sequence with the updated result. Then the old sequence is replaced with the newer one.

4.3 Split operation

The split operation adds a selective lock first on the root variables and then on the bucket chain pointed to by next. In the next step, the bucket chain pointed to by next is split, and the value of the next variable is increased. Finally, both selective locks we added at the beginning are released. The split operation and a find operation can work in parallel at the same bucket chain. However, splitting, inserting, and deleting on the same bucket chain must run serially, because all of them put selective locks on the target bucket chain.

4.4 Merge operation

The merge operation is the only one which uses an exclusive lock. Consequently, it cannot work concurrently with any other operations on the same bucket chain. First the root variables are exclusively locked and updated. Then, the lock on the root variables is degraded to selective to allow other find processes, which do not access the same bucket chains, to continue. Then the merge operation adds exclusive locks on the two bucket chains which are going to be merged. After both locks are successfully added, the two bucket chains are merged, and all locks are released.

5 Java Implementations

Two Java implementations are presented in this section. The first one is based on the concurrent solution shown in the previous section. It is called the concurrent implementation.
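Before turning to the classes, the copy-then-swap update described in Section 4.2 can be illustrated with a minimal sketch: the writer builds a new bucket sequence and replaces the reference in one step, so concurrent find threads keep reading the old sequence. The names and structure here are assumptions, not the paper's code.

import java.util.ArrayList;
import java.util.List;

class BucketChainSketch {
    private volatile List<Integer> keys = new ArrayList<>(); // stands in for the bucket sequence

    boolean find(int key) {                  // readers just follow the current reference
        return keys.contains(key);
    }

    void insert(int key) {                   // the caller is assumed to hold the selective lock
        List<Integer> copy = new ArrayList<>(keys);
        copy.add(key);                       // update the copy, not the live sequence
        keys = copy;                         // one reference swap replaces the old sequence
    }

    void delete(int key) {                   // the caller is assumed to hold the selective lock
        List<Integer> copy = new ArrayList<>(keys);
        copy.remove(Integer.valueOf(key));
        keys = copy;
    }
}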

Another implementation is rather simple. All operations in the second implementation are forced to be serialized. Thus, it is called the sequential implementation. It is used to verify the performance of the former. Both implementations use the same parameters, shown in Table 3.

N          | 2
h_level(k) | k mod 2^(level+1)

Table 3. The parameters for the two implementations

5.1 Concurrent implementation

The classes of the concurrent implementation are shown in Figure 2.

Fig. 2. The class diagram of the concurrent implementation (LinearHashTable, ThreeLockLinearHashTable, Node, Lock, LocalLevelBucketChain, BucketChain, and Bucket)

The Lock class encapsulates the logic of how these locks cooperate with each other. The readLockNum, selectiveLockNum, and exclusiveLockNum attributes record the number of the existing locks on a lockable object.
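One way these counters can enforce the compatibility rules of Table 2 is sketched below; the behaviour of the actual Lock class methods is described next, and all names and signatures here are assumptions rather than the paper's code.

public class ThreeModeLock {
    public enum Mode { READ, SELECTIVE, EXCLUSIVE }

    private int readers, selective, exclusive;   // the three lock counters

    private boolean compatible(Mode m) {
        if (m == Mode.READ)      return exclusive == 0;                    // reads coexist with reads and a selective lock
        if (m == Mode.SELECTIVE) return selective == 0 && exclusive == 0;  // at most one selective lock, no exclusive lock
        return readers == 0 && selective == 0 && exclusive == 0;           // an exclusive lock excludes everything
    }

    public synchronized void requestLock(Mode m) throws InterruptedException {
        while (!compatible(m)) {
            wait();                            // the requesting thread goes to the waiting queue
        }
        add(m, 1);
    }

    public synchronized void releaseLock(Mode m) {
        add(m, -1);
        notifyAll();                           // wake up waiting requesters
    }

    public synchronized void degradeLock(Mode from, Mode to) {
        add(from, -1);                         // e.g. EXCLUSIVE to SELECTIVE in the merge operation
        add(to, 1);
        notifyAll();
    }

    private void add(Mode m, int d) {
        if (m == Mode.READ) readers += d;
        else if (m == Mode.SELECTIVE) selective += d;
        else exclusive += d;
    }
}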

The Lock class has three methods. The requestLock method uses a loop to add a lock. If the lock is successfully added, the requestLock method increases the lock number and exits the loop. Otherwise, the requesting thread goes to the waiting queue. When this thread is woken up later, the loop in the requestLock method will try to add the lock again. The releaseLock method releases an added lock by reducing the lock number. The degradeLock method has two parameters, fromType and toType. It reduces the number of the lock whose type is fromType and increases the number of the toType lock. All of these methods are synchronized to avoid the possible errors caused by concurrent accesses to the lock number attributes. A Lock object is associated with either the root variables or a bucket chain.

The LinearHashTable is an interface which defines the operations of a linear hash table. The concurrent implementation of this interface is the ThreeLockLinearHashTable class. It contains the root variables and a bucket chain list. The integer attributes level and next in the ThreeLockLinearHashTable class are the root variables. The rootLock attribute is a Lock object, and it is used to protect the root variables. An array named bucketChainList in the ThreeLockLinearHashTable class saves a sequence of Node elements. Each Node object has a LocalLevelBucketChain object and a Lock object associated with this bucket chain. The LocalLevelBucketChain class inherits from the BucketChain class. In comparison with the BucketChain class, the LocalLevelBucketChain class has one more attribute, named localLevel. A BucketChain object contains a sequence of Bucket objects. The first one in this sequence is the primary bucket. A Bucket object is composed of an integer array with a fixed size b and a reference to the next Bucket. The find, insert, delete, split, and merge methods in the ThreeLockLinearHashTable class implement the detailed logic of the concurrent operations on a linear hash table, which has been illustrated in the previous section.

The ThreeLockLinearHashTable, Node, LocalLevelBucketChain, BucketChain, and Bucket classes implement the java.io.Serializable interface. Thus, a linear hash table represented by the ThreeLockLinearHashTable class can be saved onto the hard disk, or loaded from the hard disk files. The Lock class does not implement the java.io.Serializable interface because it is runtime information and need not be persisted.

5.2 Sequential implementation

Figure 3 is the class diagram of the sequential implementation of a linear hash table. The SequentialLinearHashTable class implements the LinearHashTable interface. No Lock object is used in this implementation. The find, insert, delete, split, and merge methods in the SequentialLinearHashTable class are synchronized. Therefore, they have to run serially. Similar to the concurrent implementation, the SequentialLinearHashTable, BucketChain, and Bucket classes implement the java.io.Serializable interface to allow a linear hash table to be saved onto the hard disk.
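A minimal sketch of this sequential variant is shown below (assumed names, with the split and merge logic omitted); every public operation is synchronized, so all operations run serially.

import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

public class SequentialSketch implements Serializable {
    private int level = 0, next = 0;                         // root variables
    private final List<List<Integer>> bucketChainList = new ArrayList<>();
    private final int n = 2;                                 // N initial bucket chains

    public SequentialSketch() {
        for (int i = 0; i < n; i++) bucketChainList.add(new ArrayList<>());
    }

    public synchronized boolean find(int key) { return chainFor(key).contains(key); }

    public synchronized void insert(int key)  { chainFor(key).add(key); }                    // split omitted

    public synchronized void delete(int key)  { chainFor(key).remove(Integer.valueOf(key)); } // merge omitted

    private List<Integer> chainFor(int key) {
        int pos = Math.floorMod(key, (1 << level) * n);
        if (pos < next) pos = Math.floorMod(key, (1 << (level + 1)) * n);
        return bucketChainList.get(pos);
    }
}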

Fig. 3. The class diagram of the sequential implementation (LinearHashTable, SequentialLinearHashTable, BucketChain, and Bucket)

6 Verification

JPF is used to verify the above two Java implementations. Four properties of these implementations are checked. They are the freedom of deadlock, the freedom of data races, the importance of the locks, and the consistency of the number of locks. The last two properties are only checked in the concurrent implementation because they are related to the locks. The verification was conducted on a Linux server with one dual core CPU. The maximal memory assigned to JPF is 2.5G. The JRE version is 1.6.0.

Three different types of threads are used in the verification. They are the insert, delete, and find threads. The threads with different types share the same test data. Thus they have chances to collide at the same bucket chains. In order to cover a wider test range, the threads with the same type use different test data. For example, suppose there are two insert threads and one delete thread in a verification test. One insert thread adds the integers from 1 to 6, while another insert thread works with the integers from 7 to 12. The delete thread tries to remove the integers from 1 to 6 at the same time.

Because JPF consumes a lot of memory in the test, we can only use a very limited amount of test data. The number of test data items manipulated by a thread is fixed. It is chosen to be 6 because a smaller number cannot introduce any interesting problem and for a bigger number JPF runs out of memory. The maximal size of a bucket, b, is chosen to be 2 in the verification. Only a small bucket size makes it possible to trigger the split and merge operations with a tiny amount of test data. If there is no insert thread in a verification test, the linear hash table will be initialized by adding all of the test data used in the delete and find threads.
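The shape of such a verification test can be sketched as follows, reusing the SequentialSketch stand-in from Section 5. This is an assumed driver for illustration, not the paper's actual test harness.

public class VerificationDriverSketch {
    public static void main(String[] args) throws InterruptedException {
        SequentialSketch table = new SequentialSketch();     // or the concurrent variant
        Thread insert1 = new Thread(() -> { for (int k = 1; k <= 6; k++) table.insert(k); });
        Thread insert2 = new Thread(() -> { for (int k = 7; k <= 12; k++) table.insert(k); });
        Thread delete1 = new Thread(() -> { for (int k = 1; k <= 6; k++) table.delete(k); });
        insert1.start(); insert2.start(); delete1.start();   // the threads share one table and collide on chains
        insert1.join();  insert2.join();  delete1.join();
    }
}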

The reason for this special treatment is that the find and delete operations on an empty linear hash table cannot generate any interesting problem.

6.1 Deadlock

The freedom of deadlock is verified in both the concurrent and the sequential implementations. Different combinations of the three thread types are tested. No code is changed and the default JPF properties are used in this verification. The test results are listed in Table 4 and Table 5. Figure 4 compares the state space of the two Java implementations. The verification of the concurrent implementation has the out of memory problem when the number of threads is increased to three. Therefore, this implementation is only verified with a very limited number of threads.

An array index out of bounds exception is found by JPF in the deadlock verification. The analysis of the exception stack trace shows that it occurs when a merge operation is performed on a linear hash table with the initial root variables. The root variables next and level are both initialized to 0. After the merge operation, next becomes −1, which points out of the bucket chain array. This problem occurs in some rare cases. For example, when an insert operation, which creates an overflow event on a linear hash table with the initial root variables, finds it is necessary to invoke a split operation, it sets a local boolean variable to true and then releases all of the locks it holds. If this thread can continue, it will check the local variable and call the split operation. However, a delete thread can run on the same bucket chain before the split operation is started. This delete thread can remove the overflow bucket introduced by the above insert thread, and thus generates an underflow event. The consequent merge operation causes the array index out of bounds problem. This problem is not mentioned in [1]. There is no handling of this possible issue in the pseudocode presented in [1].

Our proposed solution is to check the values of the root variables after the merge operation adds the exclusive lock on the root variables. If both of the root variables are zero, the merge action will not be performed, and this operation will be finished by releasing the exclusive lock on the root variables. This solution is very simple. However, it has a flaw: some merge operations are discarded just because they are scheduled at an inappropriate time. A possible complete solution is that the merge thread keeps waiting on the lock of the root variables and retries the merge operation when it is notified later, until it finds appropriate root variables. The first approach is used in our work because the second one brings a big change to the algorithm in [1]. After fixing this problem, no other error is found in the deadlock verifications.
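The first fix can be sketched as follows, reusing the ThreeModeLock sketch from Section 5; the names are assumptions and the regular merge path is elided.

class MergeGuardSketch {
    final ThreeModeLock rootLock = new ThreeModeLock();
    int level = 0, next = 0;                                 // root variables

    void merge() throws InterruptedException {
        rootLock.requestLock(ThreeModeLock.Mode.EXCLUSIVE);
        if (level == 0 && next == 0) {
            // Nothing can be merged: continuing would drive next to -1, so this
            // merge request is simply discarded, as discussed above.
            rootLock.releaseLock(ThreeModeLock.Mode.EXCLUSIVE);
            return;
        }
        // Otherwise: update next and level, degrade the root lock to selective,
        // exclusively lock and merge the two affected bucket chains, and finally
        // release all locks (elided in this sketch).
    }
}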

Insert threads | Delete threads | Find threads | Time     | New state number
1 | 1 | 0 | 00:02:16 | 494229
0 | 1 | 1 | 00:00:56 | 203312
1 | 0 | 1 | 00:00:58 | 219901
2 | 0 | 0 | 00:02:32 | 535261
0 | 2 | 0 | 00:02:30 | 519308
0 | 0 | 2 | 00:05:33 | 1181728
1 | 1 | 1 | 12:21:31 | Out of memory
0 | 2 | 1 | 12:42:30 | Out of memory
0 | 1 | 2 | 13:54:28 | Out of memory
2 | 1 | 0 | 12:45:10 | Out of memory
1 | 2 | 0 | 11:36:48 | Out of memory
1 | 0 | 2 | 12:27:10 | Out of memory
2 | 0 | 1 | 13:12:00 | Out of memory

Table 4. The deadlock verification of the concurrent implementation

Insert threads | Delete threads | Find threads | Time     | New state number
1 | 1 | 0 | 00:00:03 | 733
0 | 1 | 1 | 00:00:02 | 476
1 | 0 | 1 | 00:00:02 | 1159
2 | 0 | 0 | 00:00:02 | 655
0 | 2 | 0 | 00:00:03 | 1219
0 | 0 | 2 | 00:00:02 | 841
1 | 1 | 1 | 00:00:38 | 136972
0 | 2 | 1 | 00:00:11 | 29065
0 | 1 | 2 | 00:00:31 | 90986
2 | 1 | 0 | 00:00:14 | 35861
1 | 2 | 0 | 00:00:11 | 29862
1 | 0 | 2 | 00:01:30 | 328336
2 | 0 | 1 | 00:00:18 | 58655
2 | 2 | 0 | 00:11:35 | 1898391
0 | 2 | 2 | 00:18:17 | 3556731
2 | 0 | 2 | 00:11:20 | 2172770
2 | 2 | 2 | 10:01:53 | Out of memory

Table 5. The deadlock verification of the sequential implementation

Fig. 4. The state space of the two implementations (new state number, logarithmic scale, for the thread combinations 1I/1D, 1D/1F, 1I/1F, 2I, 2D, and 2F)

6.2 Data Race

The data race problem is checked in both Java implementations. The test method is the same as in the deadlock verification. Different combinations of the different thread types are tried. The jpf.listener = gov.nasa.jpf.tools.PreciseRaceDetector attribute is added into the local jpf.properties file to enable the data race detection. No data race is found in the tests of the sequential implementation. JPF detects a data race on the root variable level when it verifies the concurrent implementation. It is an expected result. The concurrent solution allows the split operation to update the root variables while other operations read the root variables at the same time. The local level technique explained in Section 4 can handle this problem.

6.3 Importance of the locks

In this verification, all three locks are shown to be essential to the concurrent implementation. Three new classes are added in this test. All of them extend the ThreeLockLinearHashTable class. The LackReadLinearHashTable class does not use the read lock, by overriding the find method in the superclass. Similarly, the selective and exclusive locks are commented out in the LackSelectiveLinearHashTable and LackExclusiveLinearHashTable classes, respectively.

When a delete thread and a find thread run at the same time on the concurrent implementation without the read lock, JPF reports a null pointer exception. Because the find thread does not add a read lock on the target bucket chain, this bucket chain can be set to null by a parallel merge operation just before the find thread accesses it. Similar null pointer exceptions are reported by JPF when the selective and the exclusive locks are disabled.

6.4 Number of locks

We want to check that the numbers of the different locks are always correct when multiple threads access the concurrent implementation in parallel. Two new classes are added in this verification. The readLockNum, selectiveLockNum, and exclusiveLockNum attributes in the Lock2 class are changed to public. Other parts of the Lock2 class are the same as in the Lock class. The ThreeLockWithAssert class is generally the same as the ThreeLockLinearHashTable class, except that it uses the Lock2 class instead of the Lock class and has the assert clauses of Table 6 added. The number of locks is verified after a lock is successfully acquired and before it is released. If the number of a lock is found to be incorrect, JPF will report an uncaught exception error thrown by the assert clauses.

Read lock      | exclusiveLockNum == 0
Selective lock | selectiveLockNum == 1, exclusiveLockNum == 0
Exclusive lock | readLockNum == 0, selectiveLockNum == 0, exclusiveLockNum == 1

Table 6. The assert clauses for the three locks

We can only run two threads on the concurrent implementation in this test. When the number of threads is increased to three or more, JPF runs out of memory; this issue is the same as the one in the verification of the deadlock. JPF also raises a lot of warnings about the unprotected field accesses on the readLockNum, selectiveLockNum, and exclusiveLockNum attributes. Because in the real implementation these attributes are private and all methods that access them are synchronized, we are not interested in this possible problem. No other problem is reported by JPF.

7 Experiments of Performance

The experiments on the performance were conducted on a Linux server with four dual core CPUs. The JRE version is 1.6.0. The maximal size of a bucket, b, is chosen to be 400 to avoid creating too many files in the experiments involving the hard disk IOs.

Figure 5 compares the performance of the two implementations. In this test, all operations are in memory, and no hard disk IO is involved. The total task of this test is to insert ten million different integers and delete five million of them. The insert and delete operations run simultaneously. A thread number t means there are t insert threads and t delete threads at the same time. All workloads are equally distributed in the insert and delete threads.
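Returning to Section 6.4, the assert clauses of Table 6 can be rendered as the following sketch; this is assumed code in the spirit of Lock2 and ThreeLockWithAssert, not the paper's classes. When run on a plain JVM, assertions must be enabled with -ea.

class Lock2Sketch {
    public int readLockNum, selectiveLockNum, exclusiveLockNum; // made public for the checks
    // requestLock, releaseLock and degradeLock would maintain these counters as in
    // the lock sketch of Section 5; they are omitted here.

    void checkAfterAcquire(String mode) {            // called right after a lock is acquired
        if (mode.equals("read")) {
            assert exclusiveLockNum == 0;
        } else if (mode.equals("selective")) {
            assert selectiveLockNum == 1 && exclusiveLockNum == 0;
        } else { // exclusive
            assert readLockNum == 0 && selectiveLockNum == 0 && exclusiveLockNum == 1;
        }
    }
}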

Fig. 5. The performance of the concurrent insert and delete operations without the hard disk writing (time usage in minutes versus the number of threads)

From this experimental result, we find that the performance of the two implementations is similar. In the single thread case, the performance of the concurrent implementation is worse than that of the sequential one. However, in the multiple threads cases, the concurrent implementation is slightly better than the sequential one. It means that the benefit we get from the parallel accesses does not obviously exceed its overhead. Another feature of the concurrent implementation is that its performance is rather stable when the number of threads is increased. On the contrary, the performance of the sequential implementation becomes worse with the increasing number of threads.

In real applications, a hash table is saved onto the hard disk. During the running of a system, the changes in a hash table may need to be written back to the hard disk files. Figures 6 and 7 give the performance comparisons of the two implementations in which the hard disk IOs are added. The test of Figure 6 only has the insert operations; in total, four hundred thousand integers are inserted. The test of Figure 7 inserts the same number of integers, and deletes half of them in the delete threads running at the same time. The workloads are equally distributed in each thread. The disk IOs are the main costs in these tests. The root variables and each bucket chain have their own storage files. In the concurrent implementation, when there are changes in a linear hash table, only the affected files are updated. When a thread is doing the disk IO, other threads can continue to add the locks and prepare the data in the memory, and then wait for the disk IO. The sequential implementation works in another way: all operations work in serialized mode. When a thread is writing to the hard disk, other threads can not do any operations in memory, even if these operations are not related to the part which is being saved.

Fig. 6. The performance of the concurrent insert operations with the hard disk writing (time usage in minutes versus the number of threads)

Fig. 7. The performance of the concurrent insert and delete operations with the hard disk writing (time usage in minutes versus the number of threads)

8 Conclusion

Linear hashing is a dynamic hashing algorithm. It allows the adjustment of the hashing function range according to the growth or shrinkage of the stored data. This paper introduces a high level concurrent access algorithm for linear hashing. Then, the Java implementations of this algorithm and of a sequential solution are presented. Both implementations are checked by means of JPF. The freedom of deadlock and data races is verified with a limited number of threads. The correctness of the number of locks is also checked by JPF in the concurrent implementation. The performance of both implementations is compared. The concurrent implementation does not show any explicit advantage in the performance when all operations are in memory. If the changes in a linear hash table are written back to the hard disk, the performance of the concurrent implementation is slightly better than that of the sequential one in the multiple threads mode.

References

1. Ellis, C.S.: Concurrency in linear hashing. ACM Transactions on Database Systems 12(2) (June 1987) 195–217
2. Litwin, W.: Linear hashing: A new tool for file and table addressing. In: Proceedings of the 6th Conference on Very Large Data Bases, Montreal, Canada, IEEE Computer Society (October 1980) 213–223
3. Fagin, R., Nievergelt, J., Pippenger, N., Strong, H.R.: Extendible hashing - a fast access method for dynamic files. ACM Transactions on Database Systems 4(3) (September 1979) 315–344
4. Lomet, D.B.: Bounded index exponential hashing. ACM Transactions on Database Systems 8(1) (March 1983) 136–165
5. Larson, P.: Dynamic hashing. BIT Numerical Mathematics 18(2) (June 1978) 184–201
6. Lehman, P.L., Yao, S.B.: Efficient locking for concurrent operations on B-trees. ACM Transactions on Database Systems 6(4) (December 1981) 650–670
7. Kwong, Y.S., Wood, D.: A new method for concurrency in B-trees. IEEE Transactions on Software Engineering 8(3) (May 1982) 211–222
8. Kung, H.T., Lehman, P.L.: Concurrent manipulation of binary search trees. ACM Transactions on Database Systems 5(3) (September 1980) 354–382
9. Ellis, C.S.: Concurrent search and insertion in AVL trees. IEEE Transactions on Computers C-29(9) (September 1980) 811–817
