The answer is, it depends.
For most applications, the onboard RAID controller will be sufficient. However, if you are considering running RAID 5 with the onboard controller, you should first make sure that the controller performs parity calculations (called XOR operations) in hardware, rather than software. The XOR computations used in parity-based RAID are compute-intensive, so if the system has to do them in software it will a) eat up CPU cycles on the server's main processor, and b) be slower than it would be if the computations were done in hardware. So if you are going with RAID 5 and your onboard controller doesn't do hardware parity/XOR, I would recommend a dedicated RAID 5 card that does.
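To make the parity idea concrete, here is a minimal Python sketch (purely illustrative; a real controller works on whole disk stripes, not three tiny byte strings) showing that XOR parity lets you rebuild any single lost piece of data:

```python
# Three hypothetical data stripes (made-up values for illustration).
d0 = bytes([0x01, 0x02, 0x03])
d1 = bytes([0x10, 0x20, 0x30])
d2 = bytes([0x0A, 0x0B, 0x0C])

# Parity is the byte-wise XOR of all data stripes -- this is the
# computation an onboard controller may have to do in software.
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# If one stripe (say d1) is lost, XOR-ing the survivors with the
# parity recovers it exactly.
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(d0, d2, parity))
assert rebuilt == d1
```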
Now, the part where I try to talk you out of using RAID 5 altogether. In the past, RAID 5 was the most popular form of RAID used on servers, but that is now changing. The main reason for RAID 5's popularity is that it gives you redundancy in the event of a disk failure while still letting you use nearly all of the disk space on your hard disks (you lose only the equivalent of one disk to parity data; a quick capacity sketch follows the list below). When disk space was more expensive, this was an unbeatable deal. However, nowadays disk space is quite cheap. In most cases, a RAID 5 array will have far more disk space than the server applications are likely to use, for two simple reasons:
1. In a RAID array, you get better performance the more disks you use (more disks equals more disk space).
2. The smallest hard disks that you can buy today are much larger than they used to be (larger disks equals more disk space).
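To put numbers on the parity overhead mentioned above, here is a quick capacity sketch (the disk size is a made-up figure):

```python
disk_tb = 2  # hypothetical per-disk capacity in TB

# In RAID 5, one disk's worth of space goes to parity; the usable
# fraction grows as you add disks.
for n in (3, 4, 6, 8):
    usable = (n - 1) * disk_tb
    print(f"{n} disks: {usable} TB usable ({(n - 1) / n:.0%} of raw)")
```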
Now, there is a performance problem with RAID 5 that a lot of people either don't know about or don't talk about, and it is related to the parity calculations mentioned above. When using a RAID 5 array, disk read performance is usually quite good. This is because instead of reading a whole file from a single disk, you read pieces of the file (stripes) simultaneously from multiple disks, so you get the entire file faster than if it had been stored on a single disk.
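A toy sketch of the striping itself (a simplified round-robin layout; real stripe sizes and layouts vary by controller):

```python
def stripe(data: bytes, n_disks: int, chunk: int = 4) -> list[bytearray]:
    """Deal chunk-sized stripes of a file round-robin across n disks."""
    disks = [bytearray() for _ in range(n_disks)]
    for i in range(0, len(data), chunk):
        disks[(i // chunk) % n_disks] += data[i:i + chunk]
    return disks

# Each disk ends up holding roughly a third of the file, so all three
# can be read at once -- this is where RAID 5's read speed comes from.
print(stripe(b"the quick brown fox jumps over", 3))
```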
However, any time you write to or modify data on a RAID 5 array, the affected stripes have to be read from disk, the new data has to be merged in, the parity has to be recalculated, and then both the data stripes and the parity have to be written back to disk. As you can imagine, this can take quite a long time.
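In practice, for a small write a typical controller avoids re-reading the whole stripe by using a "read-modify-write": read the old data block and old parity, XOR the change into the parity, and write both back. That is still four I/Os per logical write, which is the RAID 5 write penalty. A minimal sketch of the arithmetic, assuming this generic behavior:

```python
def updated_parity(old_data: bytes, new_data: bytes,
                   old_parity: bytes) -> bytes:
    """new_parity = old_parity XOR old_data XOR new_data -- the other
    data disks never need to be read, but one logical write still
    costs two reads plus two writes (the RAID 5 write penalty)."""
    return bytes(p ^ o ^ n
                 for p, o, n in zip(old_parity, old_data, new_data))

# Sanity check: updating one stripe this way matches recomputing
# parity from scratch over all stripes.
d0, d1 = bytes([0x01, 0x02]), bytes([0x10, 0x20])
parity = bytes(a ^ b for a, b in zip(d0, d1))
new_d1 = bytes([0x11, 0x22])
assert updated_parity(d1, new_d1, parity) == \
       bytes(a ^ b for a, b in zip(d0, new_d1))
```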
So in summary, RAID 5 gets its high read speeds from the data being striped across multiple disks, and its slow write speeds from the parity calculations (which are what make it fault tolerant to begin with).
If you are building a server that is going to be primarily read-only, or has a very high ratio of reads to writes (like a web server), then RAID 5 is probably OK. But if your server will be doing a roughly equivalent number of reads and writes (like many database or mail servers, and even file servers) then RAID 5 can be a performance bottleneck.
Instead, I recommend RAID 10 (also called RAID 1+0). In RAID 10, your disks are first grouped into mirrored pairs, and then data is striped across those mirrored sets. Because the data is striped, you will see a significant increase in both read and write performance over a single disk, and since there are no parity calculations you will see much higher write performance than you would with RAID 5. Even with small numbers of disks (4-8), RAID 10 can offer read performance nearly on par with RAID 5 and much higher write performance.
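To see why RAID 10 writes are cheap, here is a toy mapping from a logical stripe number to the disks that store it (a simplified model; real controllers vary in layout):

```python
def raid10_targets(stripe_no: int, n_disks: int) -> tuple[int, int]:
    """Return the mirrored pair of disks holding a given stripe.
    Disks are paired (0,1), (2,3), ... and stripes rotate across
    the pairs."""
    pair = stripe_no % (n_disks // 2)
    return (2 * pair, 2 * pair + 1)

# A write touches exactly two disks -- no parity to read or compute.
for s in range(4):
    print(f"stripe {s} -> disks {raid10_targets(s, 6)}")
```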
Not only that, RAID 5 is far less fault tolerant than RAID 10. For example, in RAID 5 you can lose a single disk without any problems, but if you lose two disks simultaneously (or lose a second disk before the first is replaced and the array is rebuilt), you have lost all of the data on the array. In RAID 10, you can lose multiple disks without losing any data, as long as both disks in the same mirrored set do not fail. You could theoretically lose half of the disks in a RAID 10 array and, depending on which disks failed, the array could still function. And the more disks you add to the array, the lower the probability that multiple drive failures will cost you all of your data.
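That last claim is easy to quantify: if exactly two disks fail at random, a RAID 10 array is lost only when both failures land in the same mirrored pair, which happens with probability 1/(n-1); a RAID 5 array is lost every time. A quick simulation to illustrate (the array sizes are arbitrary):

```python
import random

def raid10_lost(n_disks: int) -> bool:
    """Fail two random distinct disks; the array is lost only if they
    are the two halves of one mirrored pair (0,1), (2,3), ..."""
    a, b = random.sample(range(n_disks), 2)
    return a // 2 == b // 2

for n in (4, 8, 16):
    trials = 100_000
    lost = sum(raid10_lost(n) for _ in range(trials))
    print(f"{n} disks: ~{lost / trials:.1%} simulated "
          f"(exact: {1 / (n - 1):.1%})")
```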
Furthermore, when you are rebuilding a RAID 5 array after a disk failure, the performance of the array is severely degraded. This is because the RAID controller has to use all of that parity data in conjunction with the data that wasn't lost in order to recreate the missing data. This is not only compute-intensive, it is extremely I/O intensive (much like writing to or modifying data on a RAID 5 array, only on a much larger scale, since it always involves all disks simultaneously). In a RAID 10 array, by contrast, the "missing" data doesn't have to be reconstructed; it is simply copied from the surviving disk of the mirrored set to the replacement disk. That makes the rebuild much less I/O and compute intensive, which translates into less performance degradation during the rebuild, as well as potentially faster rebuilds.
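The difference in rebuild I/O is also easy to put numbers on (a hypothetical 8-disk array of 2 TB drives, just to show the scale):

```python
n_disks, disk_tb = 8, 2

# RAID 5: every surviving disk is read in full so the missing disk
# can be XOR-ed back into existence, then the spare is written.
raid5_io = (n_disks - 1) * disk_tb + disk_tb    # 16 TB moved

# RAID 10: only the surviving half of one mirrored pair is read,
# then copied straight to the spare.
raid10_io = disk_tb + disk_tb                   # 4 TB moved

print(f"RAID 5 rebuild:  ~{raid5_io} TB of I/O")
print(f"RAID 10 rebuild: ~{raid10_io} TB of I/O")
```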
So from my perspective, RAID 5 doesn't make a whole lot of sense for most applications. If you need to store large amounts of static data (archive systems or web sites), it is a good option. But for storing and accessing dynamic data, RAID 10 is a much better option.