Hi Mike Lewis,
Mike Lewis said:
In this case, the m.dot here:
Code:
m.IDENTIFIER = ALLTRIM(UPPER(TSULAT.STATUS))
serves no purpose, as there can be no doubt that IDENTIFIER is a variable (but it would do no harm to leave the m.dot in place).
Not so fast. On a line of its own, [tt]IDENTIFIER = ALLTRIM(UPPER(TSULAT.STATUS))[/tt] would be an assignment; IDENTIFIER therefore wouldn't be read, and m. would bring no benefit here, as VFP only ever looks for a variable of that name on the left side of an assignment, never a field.
But this is in a FOR condition, where = means comparison, not assignment. In a comparison both sides, LHS and RHS, can be fields or expressions involving fields. That's the tricky part of a tool like Thor's adding mdots, by the way.
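A minimal sketch of the difference, using the names from your code (SCAN here just stands in for whatever command the FOR clause belongs to):
Code:
* On a line of its own, = is an assignment; the LHS must be a variable:
IDENTIFIER = ALLTRIM(UPPER(TSULAT.STATUS))

* In a FOR condition, = is a comparison; here an unprefixed
* IDENTIFIER could also be resolved as a field, so m. matters:
SCAN FOR m.IDENTIFIER = ALLTRIM(UPPER(TSULAT.STATUS))
   * ...
ENDSCAN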
You can easily fall for this LHS/RHS way of reading a condition when you strictly program by the rule to always write field/variable comparisons in the order field = variable, with the field as LHS and the variable as RHS, and similarly with the order of fields in a join condition. That makes sense with the default non-EXACT matching the = operator does. It's not wrong to ask for variable = field, but then you have to be careful that the variable is as long as the field, or you get .F.: inexact comparisons only compare up to the length of the RHS, and the result is automatically .F. when the LHS is shorter than the RHS.
Or to say it in simpler words: since string comparison in VFP is a "starts with" comparison, a shorter string can never start with a longer string. The only case where part of a string is ignored is not a whitespace case, but the case of the right string being shorter than the left string. In that case the left string can continue with whitespace, but also with anything else, which makes this comparison a bit questionable if you're not used to it being a "starts with" comparison.
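To illustrate the "starts with" behavior (with the default SET EXACT OFF):
Code:
SET EXACT OFF          && the default setting
? "Smith " = "Smith"   && .T. - left starts with right, trailing blank ignored
? "Smithson" = "Smith" && .T. - left continues with anything else, still .T.
? "Smi" = "Smith"      && .F. - a shorter string can't start with a longer one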
Mandy,
I see you have a solution, so there's nothing to add for this case; just note I'd also vote for using a LOCATE. I think (I don't know it, it's just a strong assumption) the conditions you write are for finding a single record, because it makes no sense to make multiple control value assignments; the user would only ever see the last assignment.
So in very short: Look for a single record?
Use SEEK if an index exists that is sufficient for the search condition; conditions of the kind field = value, where field is indexed, usually qualify.
Use LOCATE if multiple indexes could be used for the individual parts of a more complex condition. It also works in any case, even without indexes, just not optimized.
Use SCAN only to process all records, or at least a group of records. A SCAN with an EXIT in the loop before ENDSCAN can act the same as a LOCATE, and if you're sure you'll only find one record you can even do without the EXIT, but in either case a LOCATE is the simpler tool to use.
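To put the three side by side as a sketch (the table, tag, and variable names are made up for illustration):
Code:
* SEEK: needs an active index whose key matches the search value
USE customers ORDER TAG custid
SEEK m.lcWantedId
IF FOUND()
   * ... use the single found record
ENDIF

* LOCATE: works with or without indexes, Rushmore uses them if present
LOCATE FOR custid = m.lcWantedId AND active
IF FOUND()
   * ... use the single found record
ENDIF

* SCAN: process a whole group of records
SCAN FOR active
   * ... process each record
ENDSCAN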
I think all this and your use of NOOPTIMIZE comes from a previous thread where you tried to learn from a discussion following the solution to your question. It's usually NOT a good idea to use NOOPTIMIZE. Only in very rare cases do you know in advance that trying to find a Rushmore optimization for the FOR condition is a waste of time. It usually isn't a waste, even for complex search conditions, as Rushmore makes use of indexes for the individual parts of a condition that combines several terms with boolean AND/OR/NOT operators.
For finding single records you could always also use SEEK, if you prepare it with an index on whatever complex condition you have, but that would be a waste of effort, because such an index can only ever be used for that one special condition.
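For example (hypothetical field and variable names), you could build an index matching a two-field condition and SEEK it:
Code:
* Build an index expression matching the complex search condition ...
INDEX ON UPPER(lastname) + UPPER(firstname) TAG fullname
SET ORDER TO fullname
SEEK UPPER(m.lcLast) + UPPER(m.lcFirst)
* ... but this tag only ever helps for exactly this search expression.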
There is a curious special case where indexing with a FOR condition means you create a small index that only covers some records of a DBF, aka a filtered index. The index creation itself, more precisely the FOR condition telling which records are put into the index and which not, is Rushmore optimized if the condition allows it. In very short, that just means the FOR condition of any command having one is Rushmore optimized.
Why filtered indexes are usually avoided is that they hide the fact that SET ORDER to such an index does not only set a sort order; at the same time it filters the DBF workarea. This filtering affects the records seen by a BROWSE or SCAN/ENDSCAN even when they don't have a FOR condition of their own, just like a SET FILTER does. That makes creating a filtered index an alternative to a SCAN FOR, when you know you'll scan this group of records repeatedly; you just always need to be aware that using a filtered index means filtering the DBF to the records that made it into the index. An advantage is that the index maintains itself: records that no longer fulfill the FOR condition after an UPDATE drop out of the index, and new or modified records that do fulfill it get in, so such an index has good reusability, too. The only bad thing is that the filtering is hidden in the index definition, so the filtering effect can surprise someone who only sees the SET ORDER and doesn't know it also acts as a SET FILTER because it's a filtered index.
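As a sketch of that hidden filter effect (names invented for illustration):
Code:
* A filtered index: only records with active = .T. get into the tag
INDEX ON orderdate TAG activeord FOR active
SET ORDER TO activeord
* From now on BROWSE, SCAN/ENDSCAN, SKIP etc. only see active records,
* just as if SET FILTER TO active had been issued.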
If you don't understand that last bit about creating filtered indexes, never mind. It's not necessary to use that mechanism anyway. You could actually also file it under "does more harm than good, even when you know what you're doing".
But if you have questions about when to use SEEK, LOCATE, and SCAN, please ask for anything still unclear.
Chriss