Indexes in general can be imagined as a very tall stack of hierarchical GREATER-THAN / LESS-THAN branches. To find a particular node, you start at the top and do a GREATER/LESS plunge down the tree.
What happens if the tree is originally built in perfect alphabetical order? Then every node has a GREATER branch and an EMPTY LESSER branch. To find anything, you'll need to scan the entire stack!
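Here's a toy sketch of that worst case. The `Node` class and helper names are mine, not from any real database engine, and a real index page holds many keys per node, but it shows how sorted-order inserts turn the tree into a stack:

```python
# Hypothetical sketch: a one-key-per-node GREATER/LESSER tree.
class Node:
    def __init__(self, key):
        self.key = key
        self.lesser = None    # keys that sort before this one
        self.greater = None   # keys that sort after this one

def insert(root, key):
    """Plunge down the tree and hang the new key off the bottom."""
    if root is None:
        return Node(key)
    if key < root.key:
        root.lesser = insert(root.lesser, key)
    else:
        root.greater = insert(root.greater, key)
    return root

def reads_to_find(root, key):
    """Count how many nodes we touch during the GREATER/LESS plunge."""
    reads, node = 0, root
    while node is not None:
        reads += 1
        if key == node.key:
            break
        node = node.lesser if key < node.key else node.greater
    return reads

# Build the tree in perfect alphabetical order:
root = None
for name in ["alice", "bob", "carol", "dave", "erin", "frank"]:
    root = insert(root, name)

# Every LESSER branch is empty, so finding the last entry
# means touching every node -- a full scan.
print(reads_to_find(root, "frank"))   # prints 6 (6 reads for 6 records)
```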
Clustered indexes attempt to address this problem, not only by frequent rebalancing: each node also contains dozens of downward pointers, so each "read" grabs dozens of branches. The key to clusters is that each node is kept balanced and sorted ... even if the database needs to stop for a second and rebuild an entire branch to keep the index balanced.
Using clustered indexes, statistical analysis suggests that a single record, in a database of a billion records, can usually be found in SEVEN or fewer reads.
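The arithmetic behind that SEVEN is just a logarithm: if every node fans out to F children, a balanced tree over N records is about log-base-F of N levels deep. The fan-out of 20 below is my assumption for "dozens of pointers"; real index pages often hold far more:

```python
import math

def reads_needed(n_records, fan_out):
    """Depth of a balanced tree where every node has fan_out children."""
    return math.ceil(math.log(n_records, fan_out))

# With ~20 downward pointers per node (assumed), a billion records
# are reachable in about seven reads.
print(reads_needed(1_000_000_000, 20))   # prints 7
```

Compare that with the degenerate binary case, where a billion sorted-order inserts could mean up to a billion reads.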
Clustered indexes are harder for the database to build and harder to maintain, but far superior in speed of access.
Well, that's my 2 cents worth. (smile)