Well, I finally got down to running a benchmark myself (I'd been hoping to avoid it).
Methodology
===========
The code I used is reproduced at the end of this post. I used Jeremy's test1, test2, and test3 routines, with modifications to let me control the iteration count and to apply a restriction of "DiscrepancyID > -1". I didn't use test4, because it merely fetches the same value repeatedly from an already open recordset. It seems to me that a competent programmer wouldn't even consider using DCount() if he has the value at hand like this (though he should move it to a variable rather than search the rst.Fields collection repeatedly).
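For orientation before the results, here is a minimal sketch of the shape of the three routines as I understand them. The table name (tblTest) and the local variable names are my own assumptions; the actual code at the end of this post is authoritative.

Code:
' Test1: DCount() with the restriction
Sub Test1(lngIterations As Long)
    Dim i As Long, lngCount As Long
    For i = 1 To lngIterations
        lngCount = DCount("*", "tblTest", "DiscrepancyID > -1")
    Next i
End Sub

' Test2: a fresh Database object (via CurrentDb) plus a Count() query, every pass
Sub Test2(lngIterations As Long)
    Dim i As Long, lngCount As Long
    Dim db As DAO.Database, rst As DAO.Recordset
    For i = 1 To lngIterations
        Set db = CurrentDb()
        Set rst = db.OpenRecordset( _
            "SELECT Count(*) FROM tblTest WHERE DiscrepancyID > -1")
        lngCount = rst.Fields(0)
        rst.Close
    Next i
End Sub

' Test3: the same Count() query against an already existing Database object
Sub Test3(lngIterations As Long, db As DAO.Database)
    Dim i As Long, lngCount As Long
    Dim rst As DAO.Recordset
    For i = 1 To lngIterations
        Set rst = db.OpenRecordset( _
            "SELECT Count(*) FROM tblTest WHERE DiscrepancyID > -1")
        lngCount = rst.Fields(0)
        rst.Close
    Next i
End Sub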
My PC is a 400 MHz Pentium III, 128 MB RAM, running Windows ME and Access 97 for this test. Background processes were allowed to run during the test, but System Monitor reports a static background CPU utilization of 1%, so I don't think this introduced any significant error.
My table initially consisted of 32 rows with two columns:
DiscrepancyID, Long Integer, set to a number from 1 to 32
Text1, Text(255), set to String(255, "x")

The table had no index defined on it.
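(In Jet DDL terms, the starting table is equivalent to something like the following; the table name is my assumption.)

Code:
Sub MakeTestTable()
    Dim db As DAO.Database
    Set db = CurrentDb()
    ' Two columns, no index, as described above
    db.Execute "CREATE TABLE tblTest (DiscrepancyID LONG, Text1 TEXT(255))"
End Sub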
For successive benchmarks, I kept enlarging the table (using Copy and Paste Append) to produce row counts an order of magnitude higher, and reduced the iteration counts to keep the test timings in the same general range. I recompiled and saved the code after each iteration count change. I did not compact the database. I then ran the TestThemAll procedure once in the hopes of preloading everything possible into memory, to eliminate aberrations caused by disk I/O overhead. Finally, I ran the TestThemAll procedure (forward order) twice, followed immediately by the TestThemAllReverse procedure (backward order) twice.
As in Jeremy's tests, timing was done with the VBA Now() function, which has a resolution of 1 second. This yields an error margin of plus or minus just under 1 second, introduced by the lack of synchronization between the clock ticking over and the moments the test starts (when I hit F5) and ends.
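In rough outline, the harness looked something like this (a sketch under the same naming assumptions as above; the hardcoded iteration count shown is the one used for the 32-row table):

Code:
Sub TestThemAll()
    Dim db As DAO.Database
    Dim dtStart As Date
    Set db = CurrentDb()    ' opened once, ahead of time, for Test3's benefit

    dtStart = Now()
    Call Test1(5000)
    Debug.Print "Test1: " & DateDiff("s", dtStart, Now()) & " sec"

    dtStart = Now()
    Call Test2(5000)
    Debug.Print "Test2: " & DateDiff("s", dtStart, Now()) & " sec"

    dtStart = Now()
    Call Test3(5000, db)
    Debug.Print "Test3: " & DateDiff("s", dtStart, Now()) & " sec"
End Sub

' TestThemAllReverse runs Test3, then Test2, then Test1,
' to control for ordering effects.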
Other than keeping my hands off the mouse and keyboard while the tests were running, nothing was done to control for other error sources. However, the benchmark results, as you will see below, were remarkably consistent across all tests, so it appears that there were no significant sources of error.
Benchmark Results
=================
Code:
(all timings in seconds)

Table record count = 32, Iterations = 5000
                         Test1  Test2  Test3
Run #1, forward order       25     36     18
Run #2, forward order       25     36     18
Run #3, backward order      24     37     18
Run #4, backward order      25     36     18

Table record count = 320, Iterations = 3000
                         Test1  Test2  Test3
Run #1, forward order       18     25     14
Run #2, forward order       18     25     14
Run #3, backward order      18     25     14
Run #4, backward order      19     25     14

Table record count = 3,200, Iterations = 1500
                         Test1  Test2  Test3
Run #1, forward order       27     31     25
Run #2, forward order       27     31     26
Run #3, backward order      27     31     25
Run #4, backward order      27     31     25

Table record count = 32,000, Iterations = 300
                         Test1  Test2  Test3
Run #1, forward order       44     45     44
Run #2, forward order       43     45     44
Run #3, backward order      44     45     44
Run #4, backward order      43     45     44
(At this point I set the Text1 column of the table to Null, compacted the database, ran TestThemAll to reestablish starting conditions, and ran all tests again.)
Table record count = 32,000, Iterations = 300
                         Test1  Test2  Test3
Run #1, forward order       35     35     35
Run #2, forward order       35     35     35
Run #3, backward order      35     35     35
Run #4, backward order      35     35     35
Error Analysis
==============
The results were highly consistent for any given algorithm and benchmark: repeated runs of the same test never differed by more than 1 second, which is within the predicted margin of error, so there are no statistically significant deviations. This gives a very high level of confidence in the measurements.
Conclusions
===========
1. For large numbers of queries against tiny tables, running DCount() is about 30% faster than creating both a Database object and a Recordset object using a Count() query (25 s vs. 36 s on the 32-row table).
2. For large numbers of queries against tiny tables, creating a Recordset object using a Count() query and an already existing Database object is about 25% faster than running DCount() (18 s vs. 25 s on the 32-row table).
3. As the table row count increases, the algorithms tend to converge toward the same speed.
Corollary conclusion:
CurrentDb() is a relatively slow function. If you use it often, it's better to create a global Database variable at application startup and reference that instead.
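For illustration, the pattern I mean looks like this (a sketch; the module layout and names are mine):

Code:
' In a standard module:
Public gdb As DAO.Database

' Call once at application startup (e.g. from the startup form's Open event):
Sub InitGlobals()
    Set gdb = CurrentDb()
End Sub

' Thereafter, reference gdb wherever you would have called CurrentDb():
'     Set rst = gdb.OpenRecordset( _
'         "SELECT Count(*) FROM tblTest WHERE DiscrepancyID > -1")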
Additional notes:
1. In my first preconditioning run, I had accidentally left the restriction out of Test3 (which takes advantage of an already open Database object); the two query forms are shown after this list. The timing for that test was 16 seconds, only 2 seconds less than the runs with the restriction. If Jet could satisfy an unrestricted Count(*) without touching the rows, the gap should have been far larger, so this suggests that either Jet has no optimization for Count(*), or it's only minimally effective, at least on a small table.
2. I noticed during my test runs that my HD light barely flickered. I think it's safe to conclude that Jet does, in fact, cache database pages.
3. The last benchmark, with shorter rows, suggests that as the row size decreases, all three algorithms get faster, and also that the algorithms tend to converge toward the same speed. However, more tests with additional row sizes would be needed to establish a confidence level in these hypotheses.
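For reference, the two query forms from note 1 differ only in the WHERE clause (table and column names as assumed in the sketches above):

Code:
' Accidental first preconditioning run (no restriction):
Const SQL_UNRESTRICTED = "SELECT Count(*) FROM tblTest"
' All reported runs (with restriction):
Const SQL_RESTRICTED = _
    "SELECT Count(*) FROM tblTest WHERE DiscrepancyID > -1"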
Rick Sprague
Want the best answers? See faq181-2886
To write a program from scratch, first create the universe. - Paraphrased from Carl Sagan