I am executing two nearly identical queries; the only difference is that the second one selects all columns from a table, while the first selects all columns except one. (The excluded column can be any column larger than 3 bytes.) A single row is approximately 320 bytes and the table has about 60,000 records.

When I select all records using the first query, it takes around 20 seconds to get the result set; when I execute the second one, it takes hours. Looking at the CPU activity, I noticed that the first query pegs the CPU at 100% for some 3-4 seconds before returning the result set, while the second one reaches 100% only for a moment and then drops back to normal, with regular "bursts" of activity every 10 or so seconds until, hours later, it finally returns the result set. There is more than enough virtual memory available on both machines (client and server).

What could cause such a small difference in the size of the result set to produce such a big difference in execution time?
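
For reference, here is a minimal sketch of the shape of the two queries and how I am timing them. The table and column names (`mytable`, `col_a`, `col_b`, `col_c`) are placeholders, and I am using Python's built-in `sqlite3` module only as a stand-in driver for illustration; the real database and driver are different:

```python
import time
import sqlite3  # stand-in driver for illustration; the actual database differs

# Hypothetical connection and schema, used only to show the query shapes.
conn = sqlite3.connect("example.db")
cur = conn.cursor()

def time_query(sql):
    """Run a query, fetch the entire result set, and report elapsed wall-clock time."""
    start = time.perf_counter()
    cur.execute(sql)
    rows = cur.fetchall()  # pull the full result set to the client
    elapsed = time.perf_counter() - start
    print(f"{len(rows)} rows in {elapsed:.1f} s")

# Query 1: all columns except one (finishes in ~20 seconds for me)
time_query("SELECT col_a, col_b FROM mytable")

# Query 2: all columns, including the extra one (takes hours)
time_query("SELECT col_a, col_b, col_c FROM mytable")
```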