
jdbm's patch --- bug fix and performance improvement

From: Toshiki Murata <mura@kansai.oki.co.jp>
Date: Thu, 25 Jun 1998 12:22:05 +0900
To: www-jigsaw@w3.org
Message-Id: <19980625122205Z.mura@kansai.oki.co.jp>
Hi,
My colleague Tsuyoshi Fukui <tfukui@kansai.oki.co.jp> and I have fixed
some bugs in jdbm and made several performance improvements.

fixed bugs:
1. jdbm can now evict old LRUEntry objects.

   Previously, jdbm could not evict any LRUEntry:
   the LRUList was not actually an LRU,
   so jdbm kept every piece of data it had ever stored in memory.
   As a result, an OutOfMemoryError occurred when large amounts of data
   were stored.
   Now jdbm's LRU mechanism works correctly,
   so jdbm can handle data of any size without an OutOfMemoryError!
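
   The behavior this restores can be sketched in modern Java (an
   illustration only, using java.util.LinkedHashMap's access-order mode
   rather than jdbm's hand-rolled LRUList; the class and field names here
   are made up):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Illustrative bounded LRU cache: once cacheSize entries are loaded,
// the least-recently-used entry is evicted instead of letting the
// cache grow without limit (the behavior that caused OutOfMemoryError).
class BucketCache<K, V> extends LinkedHashMap<K, V> {
    private final int cacheSize;

    BucketCache(int cacheSize) {
        super(16, 0.75f, true); // true = access order, i.e. LRU order
        this.cacheSize = cacheSize;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        return size() > cacheSize; // evict the least-recently-used entry
    }
}
```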

2. LRUList's instance variables are now hidden.
   No other class can access the internal data of LRUList.
   (This was a breeding ground for bugs.)
   A getSize() method has been added, and
   jdbm#loadBucket(int) now uses LRUList#getSize()
   instead of jdbm#loaded_buckets.

3. jdbm can now close the jdbm file.
   A jdbm#close() method has been added.
   On Windows 95/NT, a file that has not been closed cannot be deleted,
   so jdbm#reorganize() did not work on Windows 95/NT,
   and used jdbm files could not be deleted without restarting Java.
   Now jdbm files can be reorganized, moved, renamed, and deleted
   even while Jigsaw is running.
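
   The close-before-delete rule can be shown with a plain
   RandomAccessFile (a sketch only; replace() and its arguments are
   made-up names, not jdbm's API):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Illustrative: on Windows 95/NT a file cannot be deleted while a
// handle to it is still open, so the handle must be closed first.
class CloseBeforeDelete {
    static boolean replace(File dbFile, RandomAccessFile fd, File tmpFile)
        throws IOException
    {
        fd.close();              // without this, delete() fails on Windows
        if (!dbFile.delete())
            return false;
        return tmpFile.renameTo(dbFile); // install the reorganized file
    }
}
```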

4. jdbm uses RandomAccessFile#readFully(byte[]) instead of #read(byte[]).
   readFully(byte[]) always reads the whole buffer,
   but read(byte[]) may return before the buffer is full.
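
   The difference can be demonstrated with a helper equivalent to what
   readFully(byte[]) guarantees (a sketch; the stream subclass in the
   test exists only to force short reads):

```java
import java.io.EOFException;
import java.io.IOException;
import java.io.InputStream;

// read(byte[]) may fill only part of the buffer and return a short
// count; readFully-style code must loop until the buffer is full.
class ReadFullyHelper {
    static void readFully(InputStream in, byte[] b) throws IOException {
        int off = 0;
        while (off < b.length) {
            int n = in.read(b, off, b.length - off);
            if (n < 0)
                throw new EOFException("stream ended before buffer was full");
            off += n; // accumulate partial reads
        }
    }
}
```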

performance improvements:
1. Created a new class: FastByteArrayInputStream.
   It provides readInt() and read(byte[]) methods,
   which are much faster than DataInputStream's methods:
   they skip the end-of-file checks and read the data in one step.
   (In jdbm, every bucket is smaller than the buffer size,
    so no EOF check is necessary.)
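
   The decode readInt() performs is the same big-endian layout that
   DataInputStream uses, just without per-byte EOF checks (an
   illustrative standalone version; the real method reads from the
   stream's buf/pos fields, as the patch below shows):

```java
// Big-endian int decode, as in FastByteArrayInputStream.readInt():
// four bytes are combined directly, with no end-of-file checking,
// because every jdbm bucket fits inside the buffer.
class IntDecode {
    static int readInt(byte[] buf, int pos) {
        int ch1 = buf[pos]     & 0xFF;
        int ch2 = buf[pos + 1] & 0xFF;
        int ch3 = buf[pos + 2] & 0xFF;
        int ch4 = buf[pos + 3] & 0xFF;
        return (ch1 << 24) + (ch2 << 16) + (ch3 << 8) + ch4;
    }
}
```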

2. Added two methods to FastByteArrayOutputStream.
   writeInt(int) and write(byte[]) have been added;
   they are much faster than DataOutputStream's methods,
   because they skip the checks and write the data in one step.
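
   writeInt(int) is the mirror image: it splits the int into four
   big-endian bytes with no bounds checking (an illustrative standalone
   version; the real method writes into the stream's buf/count fields):

```java
// Big-endian int encode, as in FastByteArrayOutputStream.writeInt():
// the value is split into four bytes, most significant first.
class IntEncode {
    static void writeInt(byte[] buf, int pos, int v) {
        buf[pos]     = (byte) (v >>> 24);
        buf[pos + 1] = (byte) (v >>> 16);
        buf[pos + 2] = (byte) (v >>> 8);
        buf[pos + 3] = (byte) v;
    }
}
```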

3. jdbm, jdbmBucket, and jdbmBucketElement now use
   FastByteArray{Input,Output}Stream directly
   instead of Data{Input,Output}Stream.
   As a result, jdbm reads and writes data 5 times faster.

4. All debug statements have been removed.
   The jdbm#trace(String) method wasted a lot of time,
   because the arguments of trace(String) were always evaluated,
   even when debug was false.
   For example, the following code in jdbm#splitBucket(int, jdbmBucket),
       trace("splited b0="+a0) ;
       trace("splited b1="+a1) ;
   always built string representations of the jdbmBucket objects,
   wasting a great deal of time and space.
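
   The cost comes from Java evaluating the argument expression before
   the call, even when trace() then does nothing. A small sketch (the
   names here are illustrative, not jdbm's):

```java
// Demonstrates why unguarded trace() calls are expensive: the String
// argument is built before trace() runs, even when debug is false.
class TraceCost {
    static boolean debug = false;
    static int evaluations = 0;

    static String expensive() {   // stands in for "splited b0=" + a0
        evaluations++;
        return "bucket state";
    }

    static void trace(String msg) {
        if (debug)
            System.out.println("jdbm: " + msg);
    }

    static void unguarded() { trace(expensive()); }            // always pays
    static void guarded()   { if (debug) trace(expensive()); } // pays only if debugging
}
```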

others:
1. jdbm can now read read-only files.
   A jdbm(File file, boolean isReadOnly) constructor has been added.
   If isReadOnly is true, jdbm can read a read-only file,
   and jdbm never modifies the file.
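
   The open-mode logic the new constructor adds can be sketched with
   RandomAccessFile directly (OpenMode is a made-up name; the "r"/"rw"
   choice and the missing-file check follow the patch below):

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;

// Sketch of the constructor's open-mode choice: "r" for read-only,
// "rw" otherwise; a missing file cannot be opened read-only because
// there is nothing to create.
class OpenMode {
    static RandomAccessFile open(File file, boolean isReadOnly)
        throws IOException
    {
        if (isReadOnly && !file.exists())
            throw new IOException("cannot find jdbm file.");
        return new RandomAccessFile(file, isReadOnly ? "r" : "rw");
    }
}
```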


jdbm is now a much more useful DBMS:
it can handle data of any size,
and I believe its overall performance is about 6 times faster.

The following is a patch for the src/classes/org/w3c/tools/dbm directory of Jigsaw 2.0b1.

Best regards,
Toshiki Murata.
--
                                     --------------------------------------
                                                             Toshiki Murata
                                                          Kansai Laboratory
                                            Oki Electric Industry Co., Ltd.
                                                      mura@kansai.oki.co.jp
                                     --------------------------------------

diff -c -r -N dbm.orig/FastByteArrayInputStream.java dbm/FastByteArrayInputStream.java
*** dbm.orig/FastByteArrayInputStream.java	Thu Jan  1 09:00:00 1970
--- dbm/FastByteArrayInputStream.java	Thu May 14 16:54:21 1998
***************
*** 0 ****
--- 1,24 ----
+ package org.w3c.tools.dbm ;
+ 
+ import java.io.* ;
+ 
+ class FastByteArrayInputStream extends ByteArrayInputStream {
+     FastByteArrayInputStream(byte buf[]) {
+ 	super(buf);
+     }
+     public int readInt() {
+ 	int ch1 = buf[pos++] & 0xFF;
+ 	int ch2 = buf[pos++] & 0xFF;
+ 	int ch3 = buf[pos++] & 0xFF;
+ 	int ch4 = buf[pos++] & 0xFF;
+ 	int val
+ 	     = (ch1 << 24) + (ch2 << 16) + (ch3 << 8) + (ch4 << 0);
+ 	return val;
+     }
+     public int read(byte b[]) {
+ 	int len = b.length;
+ 	System.arraycopy(buf, pos, b, 0, len);
+ 	pos += len;
+ 	return len;
+     }
+ }
diff -c -r -N dbm.orig/FastByteArrayOutputStream.java dbm/FastByteArrayOutputStream.java
*** dbm.orig/FastByteArrayOutputStream.java	Wed Apr 10 22:56:19 1996
--- dbm/FastByteArrayOutputStream.java	Thu May 14 16:54:21 1998
***************
*** 10,14 ****
--- 10,25 ----
  class FastByteArrayOutputStream extends ByteArrayOutputStream {
      FastByteArrayOutputStream(byte buf[]) {
  	this.buf = buf ;
+     }
+     public void writeInt(int v) {
+         buf[count++] = (byte)(v >>> 24);
+         buf[count++] = (byte)(v >>> 16);
+         buf[count++] = (byte)(v >>>  8);
+         buf[count++] = (byte)(v >>>  0);
+     }
+     public void write(byte b[]) {
+ 	int len = b.length;
+ 	System.arraycopy(b, 0, buf, count, len);
+ 	count += len;
      }
  }
diff -c -r -N dbm.orig/jdbm.java dbm/jdbm.java
*** dbm.orig/jdbm.java	Wed Oct  8 17:49:46 1997
--- dbm/jdbm.java	Thu May 14 16:54:22 1998
***************
*** 27,36 ****
   */
  
  class LRUList {
!     LRUEntry head = null ;
!     LRUEntry tail = null ;
      
!     synchronized void removeEntry (LRUEntry lru) {
  	if ( lru == head ) {
  	    head     = lru.next ;
  	    lru.next = null ;
--- 27,37 ----
   */
  
  class LRUList {
!     private LRUEntry head = null ;
!     private LRUEntry tail = null ;
!     private int size = 0 ;
      
!     protected synchronized void removeEntry (LRUEntry lru) {
  	if ( lru == head ) {
  	    head     = lru.next ;
  	    lru.next = null ;
***************
*** 48,53 ****
--- 49,55 ----
  	    lru.next      = null ;
  	    lru.prev      = null ;
  	}
+ 	size-- ;
      }
  
      private final synchronized void atTop (LRUEntry lru) {
***************
*** 60,65 ****
--- 62,68 ----
  	lru.prev  = null ;
  	head.prev = lru ;
  	head      = lru ;
+ 	size++ ;
  	return ;
      }
  
***************
*** 85,90 ****
--- 88,94 ----
  	head      = lru ;
  	if ( tail == null )
  	    tail = head ;
+ 	size++ ;
  	return lru ;
      }
      
***************
*** 120,129 ****
--- 124,137 ----
  	}
      }
  
+     protected synchronized int getSize() {
+ 	return size;
+     }
  
      LRUList () {
  	this.head = null ;
  	this.tail = null ;
+ 	this.size = 0 ;
      }
  }
  
***************
*** 225,230 ****
--- 233,242 ----
       */
      int diridx[] = null ;
      /**
+      * Is opened in read only mode?
+      */
+     boolean isReadOnly = false ;
+     /**
       * IO buffer, for all read/write operations.
       */
      private byte buffer[] = null;
***************
*** 240,250 ****
       * List of loaded buckets.
       */
      private LRUList list = null ;
!     /**
!      * Number of loaded buckets.
!      */
!     private int loaded_buckets = 0 ;
! 	    
      protected final void trace(String msg) {
  	if ( debug )
  	    System.out.println("jdbm: "+msg) ;
--- 252,258 ----
       * List of loaded buckets.
       */
      private LRUList list = null ;
! 
      protected final void trace(String msg) {
  	if ( debug )
  	    System.out.println("jdbm: "+msg) ;
***************
*** 271,281 ****
  	throws IOException
      {
  	jdbmBucket select = null ;
! trace("split bucket: " + bucket.fileptr) ;
  	while (bucket.count == bucket_elems) {
  	    // Remove bucket to be split from LRU list, and free the bucket:
  	    list.removeBucket(bucket) ;
! 	    loaded_buckets-- ;
  	    markAvailable(bucket.fileptr, block_size) ;
  	    // Get two new buckets (should be allocated through the cache):
  	    int a0 = allocateSpace(block_size) ;
--- 279,291 ----
  	throws IOException
      {
  	jdbmBucket select = null ;
! //trace("split bucket: " + bucket.fileptr) ;
  	while (bucket.count == bucket_elems) {
  	    // Remove bucket to be split from LRU list, and free the bucket:
  	    list.removeBucket(bucket) ;
! 
! 	    unloadBucket();
! 
  	    markAvailable(bucket.fileptr, block_size) ;
  	    // Get two new buckets (should be allocated through the cache):
  	    int a0 = allocateSpace(block_size) ;
***************
*** 284,292 ****
  	    jdbmBucket b1   = new jdbmBucket(this, a1, -1) ;
  	    LRUEntry   lru0 = list.addEntry(b0) ;
  	    LRUEntry   lru1 = list.addEntry(b1) ;
! trace("splited b0="+a0) ;
! trace("splited b1="+a1) ;
! 	    loaded_buckets += 2 ;
  	    // Compute new bits, split the bucket:
  	    int newbits = bucket.bits + 1 ;
  	    b0.bits = newbits ;
--- 294,301 ----
  	    jdbmBucket b1   = new jdbmBucket(this, a1, -1) ;
  	    LRUEntry   lru0 = list.addEntry(b0) ;
  	    LRUEntry   lru1 = list.addEntry(b1) ;
! //trace("splited b0="+a0) ;
! //trace("splited b1="+a1) ;
  	    // Compute new bits, split the bucket:
  	    int newbits = bucket.bits + 1 ;
  	    b0.bits = newbits ;
***************
*** 325,335 ****
  	    int dir_end    = (dir_start1+1) << (dir_bits-newbits) ;
  	    dir_start1     = (dir_start1 << (dir_bits - newbits)) ;
  	    int dir_start0 = dir_start1 - (dir_end - dir_start1) ;
! trace("updating dir from "+dir_start0+" to "+dir_start1) ;
  	    for (int i = dir_start0 ; i < dir_start1 ; i++) {
  		diridx[i] = a0 ;
  	    }
! trace("updating dir from "+dir_start1+" to "+dir_end) ;
  	    for (int i = dir_start1 ; i < dir_end ; i++) {
  		diridx[i] = a1 ;
  	    }
--- 334,344 ----
  	    int dir_end    = (dir_start1+1) << (dir_bits-newbits) ;
  	    dir_start1     = (dir_start1 << (dir_bits - newbits)) ;
  	    int dir_start0 = dir_start1 - (dir_end - dir_start1) ;
! //trace("updating dir from "+dir_start0+" to "+dir_start1) ;
  	    for (int i = dir_start0 ; i < dir_start1 ; i++) {
  		diridx[i] = a0 ;
  	    }
! //trace("updating dir from "+dir_start1+" to "+dir_end) ;
  	    for (int i = dir_start1 ; i < dir_end ; i++) {
  		diridx[i] = a1 ;
  	    }
***************
*** 345,356 ****
      }
  
      /**
       * Save the database header into the provided buffer.
       * @param out The data output stream to save the header to.
       * @exception IOException If some IO error occured.
       */
  
!     private void saveHeader (DataOutputStream out) 
  	throws IOException
      {
  	out.writeInt(block_size) ;
--- 354,374 ----
      }
  
      /**
+      * Save and close the database.
+      * @exception IOException If some IO error occured.
+      */
+     public void close() throws IOException {
+ 	save();
+ 	fd.close();
+     }
+ 
+     /**
       * Save the database header into the provided buffer.
       * @param out The data output stream to save the header to.
       * @exception IOException If some IO error occured.
       */
  
!     private void saveHeader (FastByteArrayOutputStream out) 
  	throws IOException
      {
  	out.writeInt(block_size) ;
***************
*** 374,380 ****
       * @exception IOException If some IO Error occurs.
       */
  
!     private void restoreHeader (DataInputStream in) 
  	throws IOException
      {
  	this.block_size   = in.readInt() ;
--- 392,398 ----
       * @exception IOException If some IO Error occurs.
       */
  
!     private void restoreHeader (FastByteArrayInputStream in) 
  	throws IOException
      {
  	this.block_size   = in.readInt() ;
***************
*** 433,443 ****
      void saveBucket(jdbmBucket bucket)
  	throws IOException
      {
! 	DataOutputStream out = (new DataOutputStream
! 				(new FastByteArrayOutputStream(buffer))) ;
  	bucket.save(out) ;
! 	fd.seek(bucket.fileptr) ;
! 	fd.write(buffer) ;
      }
      
      /**
--- 451,463 ----
      void saveBucket(jdbmBucket bucket)
  	throws IOException
      {
! 	FastByteArrayOutputStream out = 
! 				new FastByteArrayOutputStream(buffer) ;
  	bucket.save(out) ;
! 	if (!isReadOnly) {
! 	    fd.seek(bucket.fileptr) ;
! 	    fd.write(buffer) ;
! 	}
      }
      
      /**
***************
*** 446,452 ****
       * @exception IOException If some IO error occured.
       */
  
!     private void saveDirectory(DataOutputStream out) 
  	throws IOException
      {
  	for (int i = 0 ; i < diridx.length ; i++)
--- 466,472 ----
       * @exception IOException If some IO error occured.
       */
  
!     private void saveDirectory(FastByteArrayOutputStream out) 
  	throws IOException
      {
  	for (int i = 0 ; i < diridx.length ; i++)
***************
*** 459,465 ****
       * @exception IOException If some IO error occured.
       */
  
!     private void restoreDirectory(DataInputStream in)
  	throws IOException
      {
  	this.diridx = new int[dir_size];
--- 479,485 ----
       * @exception IOException If some IO error occured.
       */
  
!     private void restoreDirectory(FastByteArrayInputStream in)
  	throws IOException
      {
  	this.diridx = new int[dir_size];
***************
*** 479,486 ****
      void markAvailable(int ptr, int size) {
  	// Some data space will indeed leak here, the db should be reorganized
  	// Fix suggested by Glen Diener <grd@atg.andor.com>
! 	if ( avail_count + 1 >= avail_size.length )
! 	    return;
  	header_changed = true ;
  	// Keep the list sorted:
  	for (int i = 0 ; i < avail_count ; i++) {
--- 499,507 ----
      void markAvailable(int ptr, int size) {
  	// Some data space will indeed leak here, the db should be reorganized
  	// Fix suggested by Glen Diener <grd@atg.andor.com>
! 	if ( avail_count + 1 >= avail_size.length ) {
! 	    removeAvailable(0);
! 	}
  	header_changed = true ;
  	// Keep the list sorted:
  	for (int i = 0 ; i < avail_count ; i++) {
***************
*** 545,551 ****
  
      protected int allocateSpace (int size) {
  	header_changed = true ;
! trace("allocateSpace: avail_count="+avail_count) ;
  	// Look in our own avail list:
  	for (int i = 0 ; i < avail_count ; i++) {
  	    if ( avail_size[i] >= size )
--- 566,572 ----
  
      protected int allocateSpace (int size) {
  	header_changed = true ;
! //trace("allocateSpace: avail_count="+avail_count) ;
  	// Look in our own avail list:
  	for (int i = 0 ; i < avail_count ; i++) {
  	    if ( avail_size[i] >= size )
***************
*** 585,594 ****
  	if ( fileptr < 0 )
  	    // No more space in suggested bucket, try our own pool
  	    fileptr = allocateSpace(size) ;
! trace("write: @"+fileptr) ;
! 	fd.seek(fileptr) ;
! 	fd.write(key) ;
! 	fd.write(data) ;
  	return fileptr ;
      }
  
--- 606,617 ----
  	if ( fileptr < 0 )
  	    // No more space in suggested bucket, try our own pool
  	    fileptr = allocateSpace(size) ;
! //trace("write: @"+fileptr) ;
! 	if (!isReadOnly) {
! 	    fd.seek(fileptr) ;
! 	    fd.write(key) ;
! 	    fd.write(data) ;
! 	}
  	return fileptr ;
      }
  
***************
*** 600,610 ****
      byte[] readKey (jdbmBucketElement el)
  	throws IOException
      {
! trace("read: @"+el.fileptr) ;
  	byte key[] = new byte[el.key_size] ;
  	fd.seek(el.fileptr) ;
! 	if (fd.read(key) != el.key_size) 
  	    throw new RuntimeException ("invalid key read.") ;
  	return key ;
      }
  
--- 623,636 ----
      byte[] readKey (jdbmBucketElement el)
  	throws IOException
      {
! //trace("read: @"+el.fileptr) ;
  	byte key[] = new byte[el.key_size] ;
  	fd.seek(el.fileptr) ;
! 	try {
! 	    fd.readFully(key);
! 	} catch (IOException e) {
  	    throw new RuntimeException ("invalid key read.") ;
+ 	}
  	return key ;
      }
  
***************
*** 618,625 ****
      {
  	byte data[] = new byte[el.data_size] ;
  	fd.seek(el.fileptr+el.key_size) ;
! 	if (fd.read(data) != el.data_size)
  	    throw new RuntimeException ("invalid data read.") ;
  	return data ;
      }
  
--- 644,654 ----
      {
  	byte data[] = new byte[el.data_size] ;
  	fd.seek(el.fileptr+el.key_size) ;
! 	try {
! 	    fd.readFully(data);
! 	} catch (IOException e) {
  	    throw new RuntimeException ("invalid data read.") ;
+ 	}
  	return data ;
      }
  
***************
*** 638,644 ****
  	if ( bucket.modified ) 
  	    saveBucket(bucket) ;
  	// Remove it from directory cache:
- 	loaded_buckets-- ;
  	return bucket ;
      }
  
--- 667,672 ----
***************
*** 653,672 ****
      {
  	jdbmBucket bucket = null ;
  	// Should we remove an entry from the cache:
! 	if ( loaded_buckets >= cache_size ) {
! trace("*** removing bucket from cache !") ;
  	    bucket = unloadBucket() ;
  	} else {
! trace("*** filling cache.") ;
! 	    loaded_buckets++ ;
  	    bucket = new jdbmBucket(this, fileptr, -1) ;
  	}
  	// Seek to the appropriate location, and restore:
  	fd.seek((long) fileptr) ;
! 	if (fd.read(buffer, 0, buffer.length) != buffer.length) 
  	    throw new IOException ("invalid read length.") ;
! 	jdbmBucket.restore(new DataInputStream
! 			   (new ByteArrayInputStream(buffer))
  			   , fileptr
  			   , bucket) ;
  	// Put this bucket in our cache, and return it:
--- 681,701 ----
      {
  	jdbmBucket bucket = null ;
  	// Should we remove an entry from the cache:
! 	if ( list.getSize() >= cache_size ) {
! //trace("*** removing bucket from cache !") ;
  	    bucket = unloadBucket() ;
  	} else {
! //trace("*** filling cache.") ;
  	    bucket = new jdbmBucket(this, fileptr, -1) ;
  	}
  	// Seek to the appropriate location, and restore:
  	fd.seek((long) fileptr) ;
! 	try {
! 	    fd.readFully(buffer, 0, buffer.length);
! 	} catch (IOException e) {
  	    throw new IOException ("invalid read length.") ;
! 	}
! 	jdbmBucket.restore(new FastByteArrayInputStream(buffer)
  			   , fileptr
  			   , bucket) ;
  	// Put this bucket in our cache, and return it:
***************
*** 800,857 ****
      {
  	// Write the header if needed:
  	if ( header_changed ) {
! trace ("saving header.") ;
! 	    DataOutputStream out = (new DataOutputStream
! 				    (new FastByteArrayOutputStream(buffer))) ;
  	    saveHeader(out) ;
! 	    fd.seek(0) ;
! 	    fd.write(buffer) ;
  	    header_changed = false ;
  	}
  	// Write the directory if needed:
  	if ( dir_changed ) {
! trace ("saving directory.") ;
  	    byte dir_buffer[] = new byte[dir_size*4];
! 	    DataOutputStream out = (new DataOutputStream
! 				    (new FastByteArrayOutputStream(dir_buffer))); 
  	    saveDirectory(out) ;
! 	    fd.seek(dir_adr) ;
! 	    fd.write(dir_buffer) ;
  	    dir_changed = false ;
  	}
  	// Write any modified bucket
  	list.saveModified(this) ;
      }
  
      public jdbm (File file) 
  	throws IOException
      {
  	boolean exists = file.exists() ;
! 	// Open the file, and write options:
  	this.file   = file ;
! 	this.fd     = new RandomAccessFile(file, "rw") ;
  	this.buffer = new byte[block_size] ;
  	this.list   = new LRUList() ;
  	if ( exists ) {
  	    // Restore the data base state:
  	    // Restore its header infos:
  	    fd.seek(0) ;
! 	    if (fd.read(buffer) != buffer.length) 
  		throw new IOException("unable to restore DB header.") ;
! 	    restoreHeader(new DataInputStream
! 			  (new ByteArrayInputStream(buffer)));
  	    // Restore the directory:
  	    fd.seek(dir_adr) ;
  	    byte dir_buffer[] = new byte[dir_size*4];
! 	    fd.readFully(dir_buffer);
! 	    if (fd.read(buffer) != buffer.length) 
  		throw new IOException("unable to restore DB directory.");
! 	    restoreDirectory(new DataInputStream
! 			     (new ByteArrayInputStream(dir_buffer))) ;
  	    // Initialize other fields:
  	    int dir_size        = (1<<dir_bits) ;
- 	    this.loaded_buckets = 0 ;
  	} else {
  	    // Create a new DBM file
  	    this.block_size   = BLOCK_SIZE ;
  	    this.dir_bits     = DIR_BITS ;
--- 829,907 ----
      {
  	// Write the header if needed:
  	if ( header_changed ) {
! //trace ("saving header.") ;
! 	    FastByteArrayOutputStream out
! 			= new FastByteArrayOutputStream(buffer) ;
  	    saveHeader(out) ;
! 	    if (!isReadOnly) {
! 		fd.seek(0) ;
! 		fd.write(buffer) ;
! 	    }
  	    header_changed = false ;
  	}
  	// Write the directory if needed:
  	if ( dir_changed ) {
! //trace ("saving directory.") ;
  	    byte dir_buffer[] = new byte[dir_size*4];
! 	    FastByteArrayOutputStream out
! 			= new FastByteArrayOutputStream(dir_buffer) ; 
  	    saveDirectory(out) ;
! 	    if (!isReadOnly) {
! 		fd.seek(dir_adr) ;
! 		fd.write(dir_buffer) ;
! 	    }
  	    dir_changed = false ;
  	}
  	// Write any modified bucket
  	list.saveModified(this) ;
      }
  
+ 
      public jdbm (File file) 
  	throws IOException
      {
+ 	this(file, false);
+     }
+ 
+     public jdbm (File file, boolean isReadOnly) 
+ 	throws IOException
+     {
  	boolean exists = file.exists() ;
! 	// Open the file, and write/readonly options:
  	this.file   = file ;
! 	this.isReadOnly = isReadOnly;
! 	if (isReadOnly) {
! 	    this.fd     = new RandomAccessFile(file, "r") ;
! 	} else {
! 	    this.fd     = new RandomAccessFile(file, "rw") ;
! 	}
  	this.buffer = new byte[block_size] ;
  	this.list   = new LRUList() ;
  	if ( exists ) {
  	    // Restore the data base state:
  	    // Restore its header infos:
  	    fd.seek(0) ;
! 	    try {
! 		fd.readFully(buffer);
! 	    } catch (IOException e) {
  		throw new IOException("unable to restore DB header.") ;
! 	    }
! 	    restoreHeader(new FastByteArrayInputStream(buffer));
  	    // Restore the directory:
  	    fd.seek(dir_adr) ;
  	    byte dir_buffer[] = new byte[dir_size*4];
! 	    try {
! 		fd.readFully(dir_buffer);
! 	    } catch (IOException e) {
  		throw new IOException("unable to restore DB directory.");
! 	    }
! 	    restoreDirectory(new FastByteArrayInputStream(dir_buffer)) ;
  	    // Initialize other fields:
  	    int dir_size        = (1<<dir_bits) ;
  	} else {
+ 	    if (isReadOnly) {
+ 		throw new IOException("cannot found jdbm file.");
+ 	    }
  	    // Create a new DBM file
  	    this.block_size   = BLOCK_SIZE ;
  	    this.dir_bits     = DIR_BITS ;
***************
*** 868,874 ****
  		throw new RuntimeException ("block_size can't match dir_size");
  	    // Setup the cache and allocate the directory:
  	    this.cache_size     = CACHE_SIZE ;
- 	    this.loaded_buckets = 1 ;
  	    this.diridx         = new int[dir_size] ;
  	    int bucket_adr      = 2*block_size ;
  	    LRUEntry b          = list.addEntry(new jdbmBucket(this
--- 918,923 ----
***************
*** 883,901 ****
  	    this.avail_count  = 0 ;
  	    this.next_block   = 4 ;
  	    // Write back these configuration options:
! 	    DataOutputStream out = null ;
  	    // Block 0: the header
! 	    out = new DataOutputStream(new FastByteArrayOutputStream(buffer));
  	    saveHeader (out) ;
  	    fd.seek(0) ;
  	    fd.write(buffer) ;
  	    // Block 1: the directory
! 	    out = new DataOutputStream(new FastByteArrayOutputStream(buffer)) ;
  	    saveDirectory(out);		
  	    fd.seek(dir_adr) ;
  	    fd.write(buffer) ;
  	    // Block 2: the initial bucket
! 	    out = new DataOutputStream(new FastByteArrayOutputStream(buffer));
  	    b.bucket.save(out) ; 
  	    fd.seek(2*block_size) ;
  	    fd.write(buffer) ;
--- 932,950 ----
  	    this.avail_count  = 0 ;
  	    this.next_block   = 4 ;
  	    // Write back these configuration options:
! 	    FastByteArrayOutputStream out = null ;
  	    // Block 0: the header
! 	    out = new FastByteArrayOutputStream(buffer);
  	    saveHeader (out) ;
  	    fd.seek(0) ;
  	    fd.write(buffer) ;
  	    // Block 1: the directory
! 	    out = new FastByteArrayOutputStream(buffer);
  	    saveDirectory(out);		
  	    fd.seek(dir_adr) ;
  	    fd.write(buffer) ;
  	    // Block 2: the initial bucket
! 	    out = new FastByteArrayOutputStream(buffer);
  	    b.bucket.save(out) ; 
  	    fd.seek(2*block_size) ;
  	    fd.write(buffer) ;
***************
*** 1011,1016 ****
--- 1060,1066 ----
  	    }
  	    // Save new database:
  	    clean.save();
+ 	    clean.close();
  	    if ( trace )
  		System.out.println("reorganization done ("
  				   + (System.currentTimeMillis()-time)
***************
*** 1022,1030 ****
  	} finally {
  	    if ( tmpfile != null ) {
  		// Success:
- 		file.delete();
- 		tmpfile.renameTo(file);
  		try {
  		    ret = new jdbm(file);
  		} catch (IOException ex) {
  		    ex.printStackTrace();
--- 1072,1081 ----
  	} finally {
  	    if ( tmpfile != null ) {
  		// Success:
  		try {
+ 		    close();
+ 		    file.delete();
+ 		    tmpfile.renameTo(file);
  		    ret = new jdbm(file);
  		} catch (IOException ex) {
  		    ex.printStackTrace();
diff -c -r -N dbm.orig/jdbmBucket.java dbm/jdbmBucket.java
*** dbm.orig/jdbmBucket.java	Sat Aug 10 00:16:41 1996
--- dbm/jdbmBucket.java	Thu May 14 16:54:22 1998
***************
*** 75,85 ****
  	int len = Math.min(a.length, s.length) ;
  	for (int i = 0 ; i < len ; i++) {
  	    if ( a[i] != s[i] ) {
! db.trace("array doesn't start with.") ;
  		return false ;
  	    }
  	}
! db.trace("array matches.") ;
  	return true ;
      }
      
--- 75,85 ----
  	int len = Math.min(a.length, s.length) ;
  	for (int i = 0 ; i < len ; i++) {
  	    if ( a[i] != s[i] ) {
! //db.trace("array doesn't start with.") ;
  		return false ;
  	    }
  	}
! //db.trace("array matches.") ;
  	return true ;
      }
      
***************
*** 108,114 ****
       * @exception IOException If some IO error occured.
       */
  
!     void save (DataOutputStream out) 
  	throws IOException
      {
  	out.writeInt(bits) ;
--- 108,114 ----
       * @exception IOException If some IO error occured.
       */
  
!     void save (FastByteArrayOutputStream out) 
  	throws IOException
      {
  	out.writeInt(bits) ;
***************
*** 130,136 ****
       * @exception IOException If some IO error occured.
       */
  
!     static jdbmBucket restore (DataInputStream in
  			       , int fileptr
  			       , jdbmBucket into)
  	throws IOException
--- 130,136 ----
       * @exception IOException If some IO error occured.
       */
  
!     static jdbmBucket restore (FastByteArrayInputStream in
  			       , int fileptr
  			       , jdbmBucket into)
  	throws IOException
***************
*** 162,168 ****
  	int hloc = iloc ;
  	while (true) {
  	    jdbmBucketElement el = elements[iloc] ;
! db.trace("lookup: at "+iloc+" "+el) ;
              if ( el.hashval == -1 )
  		return null ;
  	    if ((el.hashval == hashval) && arrayStartsWith(key,el.keystart)) {
--- 162,168 ----
  	int hloc = iloc ;
  	while (true) {
  	    jdbmBucketElement el = elements[iloc] ;
! //db.trace("lookup: at "+iloc+" "+el) ;
              if ( el.hashval == -1 )
  		return null ;
  	    if ((el.hashval == hashval) && arrayStartsWith(key,el.keystart)) {
diff -c -r -N dbm.orig/jdbmBucketElement.java dbm/jdbmBucketElement.java
*** dbm.orig/jdbmBucketElement.java	Wed Apr 10 22:56:26 1996
--- dbm/jdbmBucketElement.java	Thu May 14 16:54:22 1998
***************
*** 47,53 ****
      int fileptr = -1 ;
  
  
!     static final jdbmBucketElement restore (DataInputStream in
  					    , jdbmBucketElement into) 
  	throws IOException
      {
--- 47,53 ----
      int fileptr = -1 ;
  
  
!     static final jdbmBucketElement restore (FastByteArrayInputStream in
  					    , jdbmBucketElement into) 
  	throws IOException
      {
***************
*** 60,73 ****
  	
      }
  
!     static final jdbmBucketElement restore (DataInputStream in) 
  	throws IOException
      {
  	jdbmBucketElement el = new jdbmBucketElement() ;
  	return restore (in, el) ;
      }
  
!     void save (DataOutputStream out)
  	throws IOException
      {
  	out.writeInt(hashval) ;
--- 60,73 ----
  	
      }
  
!     static final jdbmBucketElement restore (FastByteArrayInputStream in)
  	throws IOException
      {
  	jdbmBucketElement el = new jdbmBucketElement() ;
  	return restore (in, el) ;
      }
  
!     void save (FastByteArrayOutputStream out)
  	throws IOException
      {
  	out.writeInt(hashval) ;
Received on Wednesday, 24 June 1998 23:22:00 GMT
