Before we can assess the restriction, we have to understand why the SQL Server design team decided that UDFs must be deterministic. Is it just an expression of some deeply held but unrealistic idealism about functions? Or are there larger issues?
I can certainly understand why most functions should be deterministic. I'd be irritated if floor(1.234) gave 1 sometimes and 2 other times. But GetDate() is a perfect example of a function where nondeterminism is the entire point. I'd be just as irritated if it always gave 'Jan 23, 8267 16:24:58.997'.
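For concreteness, here's the plain version of such a function, which SQL Server refuses to create (the name is just a placeholder):
Code:
CREATE FUNCTION dbo.Blah ()
RETURNS datetime
AS
BEGIN
    -- Rejected as things stand: GetDate() is nondeterministic, and UDFs must be deterministic.
    RETURN GetDate()
END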
Can UDFs be used in computed table columns? That could explain it. But if so, instead of banning nondeterminism in UDFs outright, there should be a provision for writing either kind. Something like:
Code:
-- Hypothetical syntax: NONDETERMINISTIC is the proposed keyword, not real T-SQL.
CREATE FUNCTION dbo.Blah () NONDETERMINISTIC
RETURNS datetime
AS
BEGIN
    RETURN GetDate()
END
And instead of saying "Invalid use of 'getdate' within a function.", it might say "Use of 'getdate' within a function requires function creation keyword 'NONDETERMINISTIC'."
As I demonstrated with the Windows Script Host object, though, you could still break determinism if any access to objects is allowed. And that's a problem, because some objects do have functions or properties that are in fact deterministic (regexp operations, maybe), yet there's no conceivable way for SQL Server to verify determinism in code that lives outside the server.
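To make that loophole concrete, here's roughly the shape of the trick I mean. This is only a sketch: the sp_OA* OLE Automation procedures and FileSystemObject's GetTempName method are real, but the specific object and function name here are illustrative rather than exactly what I used earlier.
Code:
CREATE FUNCTION dbo.OutsideCode ()
RETURNS varchar(255)
AS
BEGIN
    DECLARE @obj int, @name varchar(255)
    -- Hand the work to arbitrary COM code. GetTempName returns a different
    -- random file name on every call, so this function is nondeterministic,
    -- and SQL Server has no way to know that by inspecting the T-SQL.
    EXEC sp_OACreate 'Scripting.FileSystemObject', @obj OUT
    EXEC sp_OAMethod @obj, 'GetTempName', @name OUT
    EXEC sp_OADestroy @obj
    RETURN @name
END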
And so what if I choose to use a nondeterministic UDF in a computed column? It's my responsibility to make sure my database behaves the way I want it to. If I use such a thing, it should be assumed I understand the potential side effects: values changing unexpectedly, or not changing when expected; joins against the column failing or matching the wrong rows; updates or deletes missing rows or touching too many; and so on.
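For what it's worth, a computed column referencing a UDF is ordinary SQL Server syntax, and the engine already refuses to index a computed column whose expression it can't prove deterministic, so the risk is at least partly contained there. A sketch, assuming such a function could be created at all (table and column names are made up):
Code:
CREATE TABLE dbo.Orders (
    OrderID   int NOT NULL,
    -- Computed column defined via a UDF. If dbo.Blah() were allowed to be
    -- nondeterministic, this value could change every time it is evaluated
    -- (exactly the surprises listed above), and SQL Server would rightly
    -- refuse to index it.
    StampedAt AS dbo.Blah()
)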
My instinct says that requiring UDFs to be deterministic is more about protecting uneducated users from confusing results than about any dire database-integrity requirement. But I really don't know.