First of all, in a data warehouse environment, you will want to use a big db_block_size (at least 8k). The maximum is platform-dependent, but I would go even higher than 8k if you can. The idea is this: since you use the DB for data warehousing, each query will retrieve lots of information, contrary to an OLTP application, where you get little data at a time. So, by having big blocks, each I/O retrieves more data. You specify the block size at database creation time; you cannot change it afterwards (you have to re-create the database).
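For example, you would put something like this in your init.ora before running the database creation; the 16k value is just an illustration, and you should check the maximum your platform supports:

```ini
# init.ora fragment -- example value only, set BEFORE creating the database
db_block_size = 16384   # 16k blocks: fewer, larger I/Os for big scans
```

Remember, once the database is created with this block size, the only way to change it is to export everything and re-create the database.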
Allocate most of the available memory to the DB_BLOCK_BUFFERS parameter. On the other hand, you can have a small SHARED_POOL_SIZE, since your queries should differ from one another (I assume). The library cache is where parsed statements reside. In an OLTP environment, the same query gets executed many times, so it is useful to keep a parsed version of it and save parse time. But in a DW environment, this buys you little.
If you have few users connected to the instance, I would also give a boost to the SORT_AREA_SIZE parameter. This is the amount of memory allocated to each process connected to Oracle for sorting purposes. The more memory you allow, the less sorting will spill to disk.
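Putting these memory parameters together, a DW instance's init.ora might look something like the sketch below. The sizes are made-up illustrations, not recommendations; tune them to your machine's RAM and user count:

```ini
# Example init.ora fragment -- all values are illustrative only
db_block_buffers = 50000      # buffer cache = 50000 blocks * db_block_size bytes
shared_pool_size = 20000000   # keep it modest: little statement reuse in a DW
sort_area_size   = 10000000   # per-process sort memory, so mind how many users connect
```

Note that SORT_AREA_SIZE is allocated per process, so with many concurrent users the total can add up quickly.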
It is also critical that you create your users with a temporary tablespace other than SYSTEM (which is the default), located on another disk, hopefully!
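A quick sketch of the idea, assuming you have already created a tablespace called TEMP on its own disk (the user name and password here are made up):

```sql
-- Assumes a tablespace TEMP already exists on a separate disk.
CREATE USER dw_user IDENTIFIED BY some_password
  DEFAULT TABLESPACE users
  TEMPORARY TABLESPACE temp;   -- keeps disk sorts out of SYSTEM
```

For users that already exist, ALTER USER with the same TEMPORARY TABLESPACE clause does the job.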
There are probably lots of other things, but these are the main guidelines.
Feel free to ask more specific questions.
Jean