My point, from the other discussion, is the same principle: retrieving a whole table over the network is never going to be as fast as viewing a table locally, even if the local database is only Access. The transfer is limited by factors such as network bandwidth and driver efficiency, and the network itself could be suffering from collisions or other problems. Have you run any diagnostics on your network?
A 100 Mbit network is probably about 50x slower than a local hard drive. Also, when you view a table locally, you don't always "open" the whole table; you only retrieve the parts that you are viewing at the time. But when you view a table or query remotely, you have to wait for the complete result set to arrive at your client before you can work with it. That is why it is better to work with queries that generate small result sets. Why would you need to retrieve the complete table into your application anyway? Why have a remote database at all, if you're working with a complete dataset locally? The point is to only work with the data you need at any one time.
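For example, compare these two queries (just a sketch; the "orders" table and its columns are hypothetical placeholders):

    -- pulls every row in the table across the network
    SELECT * FROM orders;

    -- asks the server to do the filtering and send back only what you need
    SELECT order_id, customer_name, order_total
      FROM orders
     WHERE order_date >= '2002-01-01'
     ORDER BY order_date
     LIMIT 50;

The second query keeps the work on the server, where it belongs, and only the 50 rows you can actually look at travel over the wire.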
I just opened PGAdminII on my Windows 2000 workstation, and connected to PostgreSQL on my FreeBSD Unix server, and retrieved a table with 6500 records. It took about 6 seconds to deliver the data. If I make the same query on the server console, I get the results in less than one second. As this table grows larger, the difference in retrieval time will increase drastically.
So, in short: if the results are fast when you query from the server console, then your bottleneck is in the network, the driver, or your application, and I think it's most likely the network. Try limiting your query results (paging through them with "previous <-> next" links is a good solution), and see what happens.
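For the paging approach, PostgreSQL's LIMIT and OFFSET clauses make this straightforward. A sketch, again using the hypothetical "orders" table, with 50 rows per page:

    -- page 1
    SELECT order_id, customer_name, order_total
      FROM orders
     ORDER BY order_id
     LIMIT 50 OFFSET 0;

    -- page 2 is the same query with OFFSET 50;
    -- in general, page N uses OFFSET (N - 1) * 50

Your application's "next" button just re-runs the query with the offset bumped up by the page size, so each click only moves 50 rows over the network.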
"Now, this might cause some discomfort..."
(