Oftentimes simply any number other than -1 will be treated as a false value. If the app isn't critical, you might just play with the number and see what happens.
False is simply what is NOT true. The following will work nicely.
If DDEenabled is defined as a Boolean, then DDEenabled = False.
DDEenabled = 0.
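For example, a minimal sketch, assuming DDEenabled is declared as an ordinary Boolean variable rather than a control property:

Dim DDEenabled As Boolean
DDEenabled = False   ' the idiomatic way
DDEenabled = 0       ' equivalent: 0 coerces to False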
The reason that TRUE is -1 harks back to the days when assembler was king: -1 simply loaded a register with &HFFFF, which ensured that any OR operation would, if I remember correctly, force a "branch condition reset" (BCR).
Robert Berman
Data Base consultant
Vulcan Software Services
thornmastr@earthlink.net
I believe that in Visual BASIC, the truth value true is specified by the Boolean literal 'True' (case-insensitive); the truth value false is specified by the Boolean literal 'False'. Not <Boolean-variable> yields the opposite truth value. If such is the case, the underlying (numeric) representation of True and False is irrelevant except when crossing type boundaries.
If you use True and False in VB you can't go wrong. To make DDEenabled = -1 false, say DDEenabled = False. If you then say Print DDEenabled, it will say False.
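A quick sketch of that behaviour, again assuming DDEenabled is an ordinary Boolean variable:

Dim DDEenabled As Boolean
DDEenabled = -1              ' -1 coerces to True
DDEenabled = False
Debug.Print DDEenabled       ' prints: False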
xbafge has a point. To me, -1 is a numeric value, not a Boolean one. In VB code you could use -1 as 'true', and 1 just as well, but eventually you'll run into confusion, because
-1 does not equal 1,
while True equals True.
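A short sketch of that confusion (the routine and variable names are only illustrative): any non-zero number passes an If test, but only -1 compares equal to True.

Sub TruthTest()
    Dim n As Integer
    n = 1
    If n Then Debug.Print "non-zero passes the If test"
    If n = True Then Debug.Print "never printed: True is -1, and 1 <> -1"
End Sub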
True Booleans can only have 2 values, which is the equivalent of a single bit. VB's native numeric type is a Long. Storing signed integers in two's-complement form results in -1 being stored with every bit set to 1 (&HFFFFFFFF for a 32-bit Long). Zero is stored as &H0, so all bits are set to zero.
The Not operator (at processor level) inverts the bit pattern, so Not 0 returns -1. Once False is assigned the value zero, True becomes -1.
There is a processor-level flag which is set when a register contains zero, and a fast test is available to check that zero flag. Therefore, although False will be stored as zero and tested for as zero, True will be stored with all bits set (-1) but tested as any non-zero value.
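For illustration, a few lines along those lines (just a sketch to drop into any test routine; Not here is VB's bitwise Not):

Debug.Print Not 0        ' -1: every bit inverted
Debug.Print Not -1       ' 0
Debug.Print CBool(42)    ' True: any non-zero value converts to True
If 42 Then Debug.Print "any non-zero value passes an If test"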
________________________________________________________________
If you want to get the best response to a question, please check out FAQ222-2244 first
'If we're supposed to work in Hex, why have we only got A fingers?'
johnwm has put forth a powerful and lucid reason for the *implementor* of a high-level language to represent the Boolean value 'false' by 0 and 'true' by any non-zero number, especially -1. But what is the point of abstraction if you revert to basics when the abstract mechanisms are serving the very purpose they were developed for?
E.g.: in C/C++ many people think the value of a null pointer is 0, but in fact it may be any invalid address. In the construct int *ip = 0, the 0 is not the number 0 but the null pointer literal. An implementor of a C compiler is free to implement the null pointer literal 0 as, say, the bit pattern 2,000,000,000, as long as that is not the address of any valid object.