All good points.
The technical answer to the question is RAID 0, but it's a really bad idea. If a single drive fails you lose the entire array: everything written since the last backup is gone, and the server is unavailable to users until you rebuild and restore. If you're using 250 GB drives, you will need only 4 of them.
RAID 5 is a better answer, because it is the cheapest way to get a redundant array of a given size (unless a simple 2-disk mirror will do). However, RAID 5 isn't suitable for every situation because of its relatively poor write performance; all forms of parity RAID suffer from this, since each write also requires updating parity. If you're using 250 GB drives, you will need 5 of them plus a RAID controller that supports RAID 5 (most servers have this already).
RAID 10 offers much better write performance, but at twice the cost in disks. It is also much more fault tolerant. With 250 GB drives you would need 8 drives to build this array, but it could survive up to 4 drive failures and keep running, provided no two failed drives are in the same mirrored pair.
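The drive counts above can be sketched as a quick calculation. This is a rough sketch assuming a 1 TB usable target and the 250 GB drives from the examples; the function name and the simple capacity rules (RAID 0 keeps everything, RAID 5 loses one drive to parity, RAID 10 loses half to mirroring) are mine, not from any particular vendor's sizing tool:

```python
import math

def drives_needed(usable_gb, drive_gb, level):
    """Rough drive count for a target usable capacity at a given RAID level."""
    if level == "raid0":    # striping only: all capacity is usable
        return math.ceil(usable_gb / drive_gb)
    if level == "raid5":    # one drive's worth of capacity goes to parity
        return math.ceil(usable_gb / drive_gb) + 1
    if level == "raid10":   # mirroring: half the raw capacity is usable
        return 2 * math.ceil(usable_gb / drive_gb)
    raise ValueError(f"unknown RAID level: {level}")

for level in ("raid0", "raid5", "raid10"):
    print(level, drives_needed(1000, 250, level))
```

Running it reproduces the 4 / 5 / 8 drive counts quoted above for a 1 TB array.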
Normally when I size an array I have to look at how it's going to be used. Will it be used for an application that is I/O intensive (databases and email for example), or will the application just need lots of space (file server)? Will the I/O operations be largely reads, writes, or an even mix of both? Assuming the storage controller provides enough bandwidth to the disks, how many disks will you need to reach the required performance levels? Disk space is cheap, but to get more performance you have to add more spindles (disks) and stripe the data across them. All of that needs to be factored into the equation.
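The performance side of that sizing exercise can also be sketched. This is a back-of-the-envelope estimate only: the write-penalty figures (4 backend I/Os per write for RAID 5, 2 for RAID 10) are the commonly cited rules of thumb, and the 150 IOPS-per-disk figure is an illustrative assumption, not a measured number for any specific drive:

```python
import math

# Rule-of-thumb backend I/Os generated per host write at each RAID level.
WRITE_PENALTY = {"raid0": 1, "raid5": 4, "raid10": 2}

def spindles_for_iops(target_iops, read_fraction, level, iops_per_disk=150):
    """Estimate disk (spindle) count needed to hit a host IOPS target.

    Reads cost one backend I/O; writes cost WRITE_PENALTY[level] backend I/Os.
    iops_per_disk is an assumed per-spindle figure, not a vendor spec.
    """
    penalty = WRITE_PENALTY[level]
    backend_iops = target_iops * (read_fraction + (1 - read_fraction) * penalty)
    return math.ceil(backend_iops / iops_per_disk)

# Example: a 2000 IOPS workload that is 70% reads.
print("raid5 :", spindles_for_iops(2000, 0.7, "raid5"))
print("raid10:", spindles_for_iops(2000, 0.7, "raid10"))
```

The point of the sketch is the one made above: for write-heavy workloads the RAID 5 parity penalty forces you to add spindles well beyond what capacity alone would require, which is where RAID 10 earns back some of its cost.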