timeGetTime vs. QueryPerformanceCounter
(OP)
Something I was curious about as I was trying to get some good program timing code working:
I notice these two values are nowhere near in sync. (This is after I take QPC / (QPF / 1000), of course.) I was wondering if anyone knew the difference between the two numbers and where they come from.
Is QPC/QPF somehow hardware dependent, compared to timeGetTime being software based? Or is there some other explanation?
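In case it helps, here is roughly what I'm doing (a stripped-down sketch, not my actual code):
CODE
#include <stdio.h>
#include <windows.h>
#pragma comment(lib, "winmm.lib")   /* timeGetTime comes from winmm */

int main(void)
{
    LARGE_INTEGER qpc, qpf;
    DWORD tgt;

    QueryPerformanceFrequency(&qpf);
    QueryPerformanceCounter(&qpc);
    tgt = timeGetTime();

    /* Both printed in milliseconds, yet the numbers come out quite different. */
    printf("QPC / (QPF / 1000): %I64d\n", qpc.QuadPart / (qpf.QuadPart / 1000));
    printf("timeGetTime()     : %lu\n", (unsigned long)tgt);
    return 0;
}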
RE: timeGetTime vs. QueryPerformanceCounter
1. Yes, the value QPF returns is hardware dependent (but it's not the CPU frequency!).
2. timeGetTime has 1 millisecond precision by default, which is too low for short intervals (see the side-by-side sketch after the code below).
3. To measure short time intervals with QPF/QPC calls:
CODE
...
LARGE_INTEGER
    f,  /* QPF result: performance-counter ticks per second */
    t0, /* start counter value */
    t1; /* end counter value */
double micros; /* elapsed time in microseconds, hence the 1e6 constant */
...
if (QueryPerformanceFrequency(&f))
{
    printf("Performance frequency is %I64d\n", f.QuadPart);
    if (QueryPerformanceCounter(&t0))
    {
        /* ** code being timed goes here ** */
        if (QueryPerformanceCounter(&t1))
        {
            micros = (double)(t1.QuadPart - t0.QuadPart) * 1e6
                     / (double)f.QuadPart;
            printf("Elapsed time is %g microseconds\n", micros);
        }
    }
}
else
    printf("*** QPF failed.\n");
...
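To make point 2 concrete, here is a rough side-by-side sketch (error checks omitted, not production code) that times the same interval both ways. timeGetTime only advances in whole milliseconds, while the QPC/QPF figure resolves to fractions of a millisecond:
CODE
#include <stdio.h>
#include <windows.h>
#pragma comment(lib, "winmm.lib")   /* timeGetTime comes from winmm */

int main(void)
{
    LARGE_INTEGER f, c0, c1;
    DWORD m0, m1;
    double micros;

    QueryPerformanceFrequency(&f);

    m0 = timeGetTime();
    QueryPerformanceCounter(&c0);

    Sleep(3);                        /* stand-in for the code being timed */

    QueryPerformanceCounter(&c1);
    m1 = timeGetTime();

    micros = (double)(c1.QuadPart - c0.QuadPart) * 1e6 / (double)f.QuadPart;

    printf("timeGetTime : %lu ms (whole milliseconds only)\n",
           (unsigned long)(m1 - m0));
    printf("QPC/QPF     : %.1f microseconds\n", micros);
    return 0;
}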
RE: timeGetTime vs. QueryPerformanceCounter