...and to answer your second question, "It Depends..."
Several things can affect the 'speed' with which your database runs, but here are some general rules:
1) More memory and a faster CPU always help; the more, the merrier. I have run databases with over a million records on a single PC, but made sure we had plenty of RAM. CPU speed is not as critical nowadays, in the world of multi-gigahertz Pentiums and AMD processors. I'd rather have an extra GB of RAM than an extra GHz of CPU speed...
2) Make good use of indexes when you can, but don't over-index. Every time a record is added or changed, each index on that table has to be updated, which adds overhead. (There's a quick sketch of adding an index right after this list.)
3) Watch out for joins that aren't really necessary (the second sketch below shows a common example).
4) Keep record lengths as short as possible. Several small tables are usually better than one large table, so normalization will help, to a certain degree.
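To put point 2 in concrete terms, here's a rough sketch of adding a single index on a field you search or join on a lot. "Orders" and "CustomerID" are placeholder names, so swap in your own:

Sub AddCustomerIndex()
    ' Sketch only: index the field you search/join on most,
    ' but resist indexing every column "just in case".
    ' Jet DDL executed against the current database via DAO.
    CurrentDb.Execute "CREATE INDEX idxCustomerID ON Orders (CustomerID)", dbFailOnError
End Sub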
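And here's the kind of join point 3 is warning about: joining to a lookup table just to filter on a key the main table already stores. Again, the table and field names are made up for illustration:

Sub ShowJoinExample()
    Dim strSlow As String
    Dim strFast As String

    ' Joins to Customers only to filter on CustomerID, which Orders already stores:
    strSlow = "SELECT Orders.* FROM Orders " & _
              "INNER JOIN Customers ON Orders.CustomerID = Customers.CustomerID " & _
              "WHERE Customers.CustomerID = 42"

    ' The join buys you nothing here; filter on Orders directly:
    strFast = "SELECT Orders.* FROM Orders WHERE Orders.CustomerID = 42"

    Debug.Print strSlow
    Debug.Print strFast
End Sub

Paste either statement into the SQL view of a query and compare; the second one returns the same rows with less work.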
If you have a Front End/Back End situation, and the data tables are kept on a network volume, remember that ALL data sorting and retrieval is done on the local PC - the network has to ship the data to the local PC for any query, sort, etc. This is the biggest bugaboo with splitting a database. Access (Jet) is not a client/server engine like SQL Server, which can run a query on the server and pass only the needed rows back to the local PC.
So in some cases the speed issue will be network-dependent more than anything else; much of the "wait time" is I/O between your local PC and the network drive. The only way around this is to be careful how you structure your application, so you reduce the number of times large tables have to be sent down the pipe. You can also save a bit of time, especially with a shared, networked database, by making sure the database is compacted on a regular basis (a sketch of that is below).
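One habit that helps here: don't bind forms or recordsets to whole tables across the network when a restrictive WHERE clause will do. A rough sketch, assuming a linked back-end table and placeholder names ("Orders", "CustomerID"):

Sub LoadOneCustomer(lngCustID As Long)
    Dim rs As DAO.Recordset

    ' Pull only the rows you need; combined with an index on CustomerID,
    ' this keeps the amount of the back-end file Jet has to read to a minimum.
    Set rs = CurrentDb.OpenRecordset( _
        "SELECT * FROM Orders WHERE CustomerID = " & lngCustID, dbOpenSnapshot)

    Do While Not rs.EOF
        ' ... work with the fields in rs here ...
        rs.MoveNext
    Loop

    rs.Close
    Set rs = Nothing
End Sub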
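Compacting has to happen while nobody else has the file open, and Jet compacts into a new file, so a typical sketch (the paths are placeholders) looks something like this:

Sub CompactBackEnd()
    Const strSrc As String = "\\Server\Share\Data.mdb"
    Const strTmp As String = "\\Server\Share\Data_compact.mdb"

    ' Fails if anyone has the database open or if the temp file already exists.
    DBEngine.CompactDatabase strSrc, strTmp

    ' Keep the old copy as a backup, then put the compacted file in its place.
    Name strSrc As strSrc & ".bak"
    Name strTmp As strSrc
End Sub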
"For a successful technology, reality must take precedence over public relations, for Nature cannot be fooled." - Richard P. Feynman