Realistically, this is a simple, straightforward bit of code, so it's 99.99% certain the compiler will already handle it in the most efficient way possible. The people who write compilers are really, really good at assembly and low-level algorithm writing, and you won't out-do them.
The only way you can out-do the standard features of a language is if you can identify something in your data or your application that is non-standard, and optimise to handle your personal non-standard case. Note that this is the exact opposite of the normal advice, which is to keep things general so they can be reused.
For example, if you have a stream of real numbers like this, are they related to each other in any way? If they're a stream of measured temperatures, where you expect long periods over Xmax but no sudden changes, you may not need to check every number; you could check only every 100th number, and then scan backwards and forwards from a change to find exactly the point it went over Xmax. I doubt this is your application, but it illustrates the general idea.
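To make that concrete, here's a minimal sketch in C++ of the sparse-sampling trick. The function name, the `stride` parameter, and the assumption that the readings already sit in a `std::vector` are all mine for illustration, not taken from your question:

```cpp
#include <cstddef>
#include <vector>

// Sketch only, not a drop-in solution. Assumes the readings change slowly,
// so the value can't jump over x_max and back within one stride (if it can,
// the coarse scan will miss that excursion entirely).
// Returns the index of the first value exceeding x_max, or -1 if none.
long findFirstOver(const std::vector<double>& readings, double x_max,
                   std::size_t stride = 100)
{
    for (std::size_t i = 0; i < readings.size(); i += stride) {
        if (readings[i] > x_max) {
            // This coarse sample is over the threshold; scan backwards to
            // find where the run of over-threshold values actually began.
            std::size_t j = i;
            while (j > 0 && readings[j - 1] > x_max)
                --j;
            return static_cast<long>(j);
        }
    }
    return -1; // never exceeded x_max
}
```

In the common case this touches roughly one sample in a hundred, and only pays for a fine-grained scan in the one stride where the crossing happened.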
Always check whether you actually need to do what you're doing at all, and if so, whether you've already done it (or nearly done it) somewhere else.
And do make sure you've turned off all possible safety checks on your code. No range checking, no stack checking, etc. (but of course only after you know the code is rock-solid and reliable). Extra checks do take time.
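As a (contrived) illustration of what "extra checks" costs, here's a hedged C++ sketch: the first version does a bounds-checked access and a debug assertion on every element, the second does neither. The function names and loop bodies are invented for this example; `assert` is a standard macro that is compiled out when you build with `-DNDEBUG`:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Per-element checks: .at() bounds-checks every access (and can throw),
// and assert() runs a debug-only sanity test on every iteration.
int countOverChecked(const std::vector<double>& readings, double x_max)
{
    int count = 0;
    for (std::size_t i = 0; i < readings.size(); ++i) {
        assert(readings.at(i) == readings.at(i)); // crude NaN check, removed by -DNDEBUG
        if (readings.at(i) > x_max)
            ++count;
    }
    return count;
}

// Same loop with the checks stripped out: plain iteration, no bounds check,
// no assertion. Only do this once you trust the data and the code.
int countOverUnchecked(const std::vector<double>& readings, double x_max)
{
    int count = 0;
    for (double r : readings) {
        if (r > x_max)
            ++count;
    }
    return count;
}
```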
You really need to think big about optimisation. As an example, the leaps in strength of chess-playing programs haven't happened because someone optimised an inner loop. They've happened because someone devised a new algorithm, realising that much the same inner loop only needs to run ten times rather than a thousand.