1) What level of accuracy do you need?
2) Is interpolation OK for in-between values?
3) Are your sines/cosines in radians, grades, or degrees?
We used to use LUTs for tomographic reconstruction in the early 80s, and the difference was significant: without LUTs it took 7 days, with LUTs 20 minutes - and that was on a Data General Eclipse. In the late 80s we used array processors without LUTs, and the best we could get was 40 minutes.
A sine/cosine takes approximately 120-130 cycles on a Pentium. Logs (base 2, base 10, and natural), in comparison, take only 3-5 cycles, so it is probably worth using LUTs for sines/cosines but not for logs. This may differ on other architectures. If the architecture you're using doesn't have a floating-point unit, it really would be worth considering LUTs.
Setting one up is simple. Say you are working in degrees and want an accuracy of 0.01 degree:
Code:
#include <math.h>

#define SINMAX 9000                     /* 90 degrees at 0.01-degree steps */

double sinlut[SINMAX + 1];

/* Fill the table with sin() of every angle from 0 to 90 degrees
   in 0.01-degree increments. Call this once at startup. */
void SinCreate(void)
{
    int i;
    double angle, angleinc;

    angleinc = 3.1415926535 / 2.0 / SINMAX;   /* step size in radians */
    for (i = 0, angle = 0.0; i <= SINMAX; ++i, angle += angleinc)
    {
        sinlut[i] = sin(angle);
    }
}
Using it is quite simple, unless you wish to interpolate for in-between values (see the sketch after this function):
Code:
/* Look up sin() of an angle given in degrees.
   Assumes 0 <= degrees <= 90 and that SinCreate() has been called. */
double Sin(double degrees)
{
    int ix;

    ix = (int)(degrees * 100.0);    /* 0.01-degree steps -> table index */
    return sinlut[ix];
}
As Salem says, if you do a speed comparison on a modern desktop you won't see any difference. If you use it on something without a floating-point unit, the difference is significant.
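If you want to check that on your own machine, a rough timing loop is enough. This is only a sketch: the iteration count and angle pattern are arbitrary, and it assumes the SinCreate()/Sin() snippets above are in the same file:
Code:
#include <stdio.h>
#include <time.h>

/* Rough timing comparison: library sin() versus the table lookup.
   The accumulator stops the compiler from optimising the loops away. */
int main(void)
{
    const int N = 10000000;             /* arbitrary iteration count */
    double acc = 0.0;
    clock_t t0, t1;
    int i;

    SinCreate();

    t0 = clock();
    for (i = 0; i < N; ++i)
        acc += sin((i % 9000) * 0.01 * 3.1415926535 / 180.0);
    t1 = clock();
    printf("library sin : %.2fs (check %f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, acc);

    acc = 0.0;
    t0 = clock();
    for (i = 0; i < N; ++i)
        acc += Sin((i % 9000) * 0.01);
    t1 = clock();
    printf("table Sin   : %.2fs (check %f)\n",
           (double)(t1 - t0) / CLOCKS_PER_SEC, acc);

    return 0;
}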