Craig,
Yes, the technique used in VFP 7.0 was meant to leave more empty slots in the index leaves, so that more records could be added before the tree "limbs" had to be rebalanced. Since there is more exposure to corruption during the rebalancing phase, it was thought that rebalancing less often might reduce (one) source of index corruption.
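Just to make the fill-factor idea concrete, here's a toy sketch (Python, for compactness - the leaf capacity and key counts are made up, and this is nothing like the real CDX page layout). It bulk-loads sorted keys into fixed-size leaves at different fill factors, then counts how many leaf splits a batch of random inserts triggers:

import bisect, random

LEAF_CAPACITY = 64  # made-up page capacity; real CDX pages differ

def bulk_load(keys, fill_factor):
    # Pack sorted keys into leaves, each filled only to fill_factor.
    per_leaf = max(1, int(LEAF_CAPACITY * fill_factor))
    return [keys[i:i + per_leaf] for i in range(0, len(keys), per_leaf)]

def insert(leaves, key):
    # Put the key in the leaf that covers it; split the leaf on overflow.
    idx = bisect.bisect_left([leaf[-1] for leaf in leaves], key)
    idx = min(idx, len(leaves) - 1)
    leaf = leaves[idx]
    bisect.insort(leaf, key)
    if len(leaf) > LEAF_CAPACITY:  # overflow -> split into two leaves
        mid = len(leaf) // 2
        leaves[idx:idx + 1] = [leaf[:mid], leaf[mid:]]
        return 1
    return 0

def simulate(fill_factor, n_initial=10_000, n_inserts=2_000, seed=1):
    rng = random.Random(seed)
    keys = sorted(rng.sample(range(10_000_000), n_initial))
    leaves = bulk_load(keys, fill_factor)
    splits = sum(insert(leaves, rng.randrange(10_000_000))
                 for _ in range(n_inserts))
    return len(leaves), splits

for f in (1.00, 0.75, 0.50):
    n_leaves, splits = simulate(f)
    print(f"fill={f:.2f}: {n_leaves} leaves, {splits} splits during inserts")

The tradeoff shows up immediately: packing the leaves full gives the smallest file but splits constantly as records are added, while leaving slack defers the splits at the cost of more leaves - i.e. a bigger index file.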
Note: The indexes weren't really different in structure, only in how they were initially filled and in some of the algorithms that did the rebalancing. It was these changes that affected the size of the indexes, and some reported that SEEKs were also slower because they required more physical I/Os to trace down the deeper links.
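The depth effect is easy to ballpark: the pages read per SEEK is roughly the height of the tree, which grows as log(record count) / log(keys per page), so halving the effective keys per page adds a level sooner. A back-of-the-envelope in the same vein (the numbers are illustrative only):

import math

def seek_page_reads(n_keys, keys_per_page):
    # Pages touched per SEEK is about the B-tree height (root to leaf).
    return max(1, math.ceil(math.log(n_keys) / math.log(keys_per_page)))

n = 10_000_000
for keys_per_page in (100, 50):  # half-full pages halve the effective fanout
    print(keys_per_page, "keys/page =>",
          seek_page_reads(n, keys_per_page), "page reads per SEEK")

On 10 million keys that's 4 reads per SEEK at 100 keys/page versus 5 at 50 - one extra physical I/O on every uncached lookup.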
Of course the worst case was for those who had very large index files - 1+ GB in VFP 6.0 (and earlier). In 7.0, people were reporting that their .CDX files were going over the 2 GB file limit, which stopped applications from running the moment they tried to recreate the indexes. Also, in some cases there wasn't enough room on the file servers for all the "new" bloated index files - even when each was below 2 GB. E.g., we have one app that has 160+ .CDX files - imagine each one growing 50% to 500% (and more). That's a lot of extra disk space required, more physical I/Os, and of course more time to back up!
Rick