Indeed, as I also mentioned already, THISFORM or THIS or any other dependency on objects, variables etc. not coming from the table you index should never be part of an index. If at all, such an index should be a temporary, separate IDX file, which needs the syntax [tt]INDEX ON expression TO some.IDX[/tt].
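If you really wanted such a throwaway index, it would look roughly like this (just a sketch reconstructing the kind of form-driven condition discussed here; the file name tmphours is only an example, and as said, even this is discouraged):
Code:
Select HoursEmployeesB
Index On Date To tmphours For Date >= ThisForm.DateFrom .And. Date <= ThisForm.DateTo
* tmphours.IDX is a standalone file: it does NOT stay current with the data
* and is only valid for the form values at the moment it was created.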
Most important: Every INDEX command reads all records and transforms them, in the sort order of the expression result, into an index tag, which technically is a tree structure with fast access to siblings, whose leaf nodes point to record numbers. Needless to say, since INDEX alone reads ALL data, it does nothing to help you read only the part of the data you really want. An index only makes your queries run fast when it already exists at the time you use it. Creating it is a costly routine and rather slows things down. With a low number of records you're better off not indexing at all, if you don't keep that nature of indexes in mind.
You'd rather index Date only once and then use that index. That means you remove the INDEX ON command from your code; an index stored as a tag of a CDX index file is as permanent a part of the table as its fields. So instead of spending the time to create the index every time, you create it once, which saves much more time. So no, Nasibs code is not a nice one.
You should also have an index on EmpID to further optimize the filter.
Code:
Select HoursEmployeesB
* Set Order To Date && I named the tag Date, but whatever tag name now exists, you don't need to set any order; SET FILTER will decide on its own which index(es) to use for optimization.
Set Filter To Date >= ThisForm.DateFrom .And. Date <= ThisForm.DateTo .And. EmpID = AllTrim(ThisForm.Combo1.Value)
Browse
[tt]Date >= ThisForm.DateFrom[/tt] is optimized with the Date index
[tt]Date <= ThisForm.DateTo[/tt] is optimized with the Date index, too
[tt]EmpID = AllTrim(ThisForm.Combo1.Value)[/tt] is optimized with the EmpID index, which you also need to create (see the sketch below).
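Creating those two tags is a one-time job. A minimal sketch, assuming you can open the table exclusively (normally you'd do this in the table designer or in a table setup/maintenance routine, not in running application code):
Code:
Use HoursEmployeesB Exclusive
Index On Date Tag Date
Index On EmpID Tag EmpID
Use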
In short: Create indexes in table management/creation routines or manually in the table designer. Don't do that within running application code; indexes are permanent structural parts of your data just like the fields and the data itself, and indexes update together with changing data without any further ado from you or your code. Look for any INDEX commands in your code and rather turn them into comments; instead, create the index once and for all.

Think about atomic (single field) indexes, mainly on the most interesting fields of your table to filter on, which are mainly ID fields and, for date filtering, also date fields. You would also index currencies for prices or loans to get fast results for a certain price or loan range, but stop creating indexes on the fly for every complex expression. Create them just once in the table designer, like you also only create the table fields once.

To INDEX will always mean to read no less than all data, unless you INDEX ... FOR some condition; then this index only contains a partial set of record numbers, and its creation itself is optimized, if it can be. You need the atomic indexes on single fields for this process too, which again hints at them being much more useful. INDEX within an application is only acceptable for creating a sort order on cursors, for indexing views or any other freshly created cursor, view cursor or table, and when it's acceptable to go through all data once and use that index many times, at least within the current application session. Otherwise INDEXing wastes time.
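For example, indexing a cursor you just created is one of the acceptable cases (a sketch; the cursor and tag names are just examples, assuming a VFP version supporting READWRITE cursors):
Code:
Select * From HoursEmployeesB Into Cursor crsHours Readwrite
Select crsHours
Index On EmpID Tag EmpID && this tag only lives as long as the cursor does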
Further steps to make much better use of indexes will be going for SQL queries, which can use index combinations in a much more efficient way than FILTER or BROWSE or any FOR-clause-optimized, workarea-related xBase command can. SQL builds bitmaps of the single/partial WHERE conditions, then BITANDs/BITORs all those bitmaps and thereby gets a set of records forming the full result to fetch, while any FOR clause is evaluated and optimized only to find the first or next matching row, not the full result set. That only has better performance in the corner case where the result set is huge and positioning on the first result row is much faster than fetching the full result set.
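As a taste of that, the filter above could become a query like this (just a sketch; the variable and cursor names are only examples, and the form values are copied into variables first so the query itself only references fields and variables):
Code:
ldFrom  = ThisForm.DateFrom
ldTo    = ThisForm.DateTo
lcEmpID = AllTrim(ThisForm.Combo1.Value)

Select * From HoursEmployeesB ;
   Where Date Between m.ldFrom And m.ldTo ;
   And EmpID = m.lcEmpID ;
   Into Cursor crsResult
Select crsResult
Browse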
When BROWSEing a filtered or indexed table you also only get the impression you are faster than SQL, because unless a result set is small, finding the first result row or the few first rows displayed in a browse is always faster and thus seems faster overall, while it just postpones fetching the further result rows. With tables growing to more than 10,000 rows, scrolling through a filtered workarea or through a key value can get sluggish, because with each scroll down or up you trigger another portion of the postponed work of finding the next/previous rows.
The transition is a lot of work, but for a start you'd better index your tables on some single fields, then use SET FILTER or BROWSE FOR to get your results. Even though that's not ideal, it most probably fits your usage of INDEX to limit the shown records. VFP will keep the CDX file indexes current, and they are used at least for the FOR/filter condition optimization, so there's no need to INDEX again and again. You can then aim for better performance in the sections where your application is slowing down with more data and go SQL, unless you are on a very old legacy FoxPro version not allowing SQL at all.
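A BROWSE FOR variant of the same result, if you prefer that over SET FILTER (again just a sketch, with the same assumptions as the filter code above):
Code:
Select HoursEmployeesB
Browse For Date >= ThisForm.DateFrom .And. Date <= ThisForm.DateTo ;
   .And. EmpID = AllTrim(ThisForm.Combo1.Value)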
Bye, Olaf.