After my comments about Axion, I got an email from
Rod Waldhoff, one of the Axion
developers, asking me to join the axion-dev
list to discuss my issues.
I posted a summary of what I was seeing,
and some speculation about possible causes. Within 24 hours a fix
was in CVS. That's service! Even better, as Rod noted:
inserting 5,000 rows: ~3,589 rows/sec
inserting 10,000 rows: ~6,309 rows/sec
inserting 50,000 rows: ~9,498 rows/sec
inserting 100,000 rows: ~10,892 rows/sec
inserting 200,000 rows: ~11,300 rows/sec

Not only does this show 33 times the throughput in the 5,000 row test, but the throughput gets significantly better as the number of inserts per transaction increases (approaching some limit, of course). The 200,000 row tests show more than 3 times the throughput of the 5,000 row tests, in contrast to the "before" results Nick experienced (a ~25% decrease in throughput between 4,500 and 10,000 rows).
All I need to do is have a play.
I have actually done some benchmarking of the insert speed of various embedded Java databases. I'll try to write that up sometime this
weekend. From the numbers Rod was getting, Axion should be very competitive.
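For what it's worth, here's a rough sketch of the kind of harness I mean: plain JDBC, with all the inserts wrapped in a single transaction and throughput reported as rows/sec. The driver class, JDBC URL, and table are placeholders rather than anything Axion-specific; swap in whichever embedded database you want to measure.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.Statement;

// Rough harness for timing bulk inserts inside a single transaction.
// The driver class, JDBC URL and table definition are placeholders --
// substitute whichever embedded database you want to measure.
public class InsertBenchmark {

    public static void main(String[] args) throws Exception {
        Class.forName("org.example.EmbeddedDriver");   // placeholder driver class
        String url = "jdbc:example:benchdb";           // placeholder JDBC URL
        int rows = args.length > 0 ? Integer.parseInt(args[0]) : 5000;

        try (Connection conn = DriverManager.getConnection(url)) {
            try (Statement st = conn.createStatement()) {
                st.executeUpdate("CREATE TABLE bench (id INT, payload VARCHAR(32))");
            }

            conn.setAutoCommit(false); // one transaction for the whole batch of inserts

            long start = System.currentTimeMillis();
            try (PreparedStatement ps =
                     conn.prepareStatement("INSERT INTO bench (id, payload) VALUES (?, ?)")) {
                for (int i = 0; i < rows; i++) {
                    ps.setInt(1, i);
                    ps.setString(2, "row-" + i);
                    ps.executeUpdate();
                }
            }
            conn.commit();
            long elapsedMillis = System.currentTimeMillis() - start;

            // Guard against a zero elapsed time on very small runs.
            double seconds = Math.max(elapsedMillis, 1) / 1000.0;
            System.out.printf("inserting %,d rows: ~%,.0f rows/sec%n", rows, rows / seconds);
        }
    }
}

Varying the row count (5,000 up to 200,000, say) is what exposes the per-transaction effect Rod describes above.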