
process-timing question

Status
Not open for further replies.

DoraC (Programmer), May 7, 2002
Hi,

I need to time a process within my c++ program, to the finest/lowest level possible (i.e. the smallest fraction of a second possible).

I am running my program on a Sun Solaris UNIX machine and compiling it with g++, and I can't get what should(?) be simple code to return a meaningful answer. Example:
Code:
#include <iostream>
#include <ctime>    // for time(), difftime()
using namespace std;
// ...

int main( int argc, char* argv[] )
{
    // ...
    time_t t1 = time(0);
//-- code to be timed goes here... should take about 
//-- .001 seconds or so...
    time_t t2 = time(0);

    cout << "elapsed time: ." 
         << difftime(t2, t1) 
         << "." << endl;

    return 0;
}

When I run this, I invariably get
elapsed time: .0.
even if I put a "sleep(2)" etc. in the process...

This is very frustrating. I've tried various permutations with time_t t = clock(), etc., and they all produce the same result.

Any help would be greatly appreciated, as I'm just about at my wit's end.

Thank you,
dora
 
time() returns its value in whole seconds. You can't get a fractional value from it, so any interval shorter than a second will show up as 0.
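If you want to stick with clock(), keep in mind that it counts CPU time in ticks of CLOCKS_PER_SEC, so you have to divide to get seconds, and time spent in sleep() won't register at all. A minimal sketch using only standard library calls:
Code:
#include <iostream>
#include <ctime>
using namespace std;

int main(void)
{
    clock_t c1 = clock();

    //-- CPU-bound code to be timed goes here...
    volatile double x = 0;
    for (long i = 0; i < 1000000; ++i)
        x += i * 0.5;

    clock_t c2 = clock();

    // clock() counts CPU ticks; divide by CLOCKS_PER_SEC to get seconds.
    cout << "CPU time: "
         << double(c2 - c1) / CLOCKS_PER_SEC
         << " sec" << endl;

    return 0;
}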
 
Take a look at gettimeofday() instead of time(); it will give you up to microsecond resolution. The actual resolution is system-dependent, but it may do what you need:

Code:
#include <iostream>
#include <sys/time.h>
using namespace std;

int main(void)
{
  struct timeval tv;
  double t1, t2;

  gettimeofday(&tv, NULL);
  // Cast to double so tv_sec * 1000000 doesn't overflow a 32-bit long.
  t1 = (double)tv.tv_sec * 1000000.0 + tv.tv_usec;

  cout << tv.tv_sec << ":"
       << tv.tv_usec
       << "\n";

  //-- code to be timed goes here... should take about
  //-- .001 seconds or so...

  gettimeofday(&tv, NULL);
  t2 = (double)tv.tv_sec * 1000000.0 + tv.tv_usec;

  cout << tv.tv_sec << ":"
       << tv.tv_usec
       << "\n";

  cout << "elapsed time: "
       << t2 - t1
       << " usec\n";

  return 0;
}

Good luck.
 


Assuming you don't mind including assembly code in your project, you can write an assembly function that reads the CPU cycle counter directly from the chip. The counter is incremented once every CPU cycle: on a 1 GHz chip that is once per nanosecond, and on a 3 GHz chip it is three times per nanosecond.

It gives finer granularity than you can get with gettimeofday().

The only drawback is that it only works on Pentium-class chips (586, 686, Pentium 2, 3, and 4). It should work on AMD x86 chips as well, but I haven't verified that.

Also, translating the cycle count into actual nanoseconds depends on the clock speed of the chip you are running on, so it is different for each chip speed.

Use your favorite search engine and look for RDTSC.
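For example, with GCC on an x86 machine, something along these lines reads the counter (a sketch using GCC inline assembly; it assumes an x86 CPU, so it won't help on a SPARC Solaris box):
Code:
#include <iostream>
using namespace std;

// Read the CPU's 64-bit time-stamp counter (x86 only, GCC inline asm).
inline unsigned long long rdtsc(void)
{
    unsigned int lo, hi;
    __asm__ __volatile__("rdtsc" : "=a"(lo), "=d"(hi));
    return ((unsigned long long)hi << 32) | lo;
}

int main(void)
{
    unsigned long long c1 = rdtsc();

    //-- code to be timed goes here...

    unsigned long long c2 = rdtsc();

    // This is a cycle count; divide by the CPU clock rate (in Hz) to get seconds.
    cout << "elapsed cycles: " << c2 - c1 << "\n";
    return 0;
}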



 
If you're using bash, and you don't have a requirement for the timing to occur inside a C++ program, you could use the shell's built-in timing facilities.

e.g.
Code:
$ time sleep 2

real    0m2.269s
user    0m0.002s
sys     0m0.004s
 