There have been a few times when I've found it useful to define what is essentially a bit array. For example, I'll define a variable like char flags[100]; and treat it as an array of 800 bits. This is useful if you want to send a large number of binary flags over a network.
Accessing a single bit in this array does involve a bit of math that might not be immediately obvious to someone reading your code, so here are a few #define's you can use.
Here, x is your char array and i is your index into the array. So if you want to look at bit index 9 (the 10th bit) in your array, the bit you want is in the second byte ( x[1] ), one bit over ( 1<<1 ). In general, you take the index and divide it by 8 ( x[i/8] or x[i>>3] ) to get the byte you want, then mod the index by 8 ( (i%8) or (i&7) ) to get the proper bit index within the byte, and take 2 to that power ( 1<<(i&7) ) to create the bitmask that strips out the bit you want ( x[i>>3] & (1<<(i&7)) ).
The set of macros above assumes little-endian bit ordering within each byte of the array. If you've defined your own array, this shouldn't matter. If you're working with someone else's array, however, for example when reading a monochrome TIFF file, you may need to use big-endian bit ordering. The #define's below use big endian.