As far as I know, it can't be done with a generic split function.
A little research shows that for SQL 2000 you're right. I was surprised to find that in SQL 2005, that's no longer true. Quoting from Erland's page you linked to above:
In SQL 2000, it was not possible to call a table-valued function and pass a table column as parameter, but SQL 2005 adds the APPLY operator that permits you to do this. Here is a script that creates the table above and then unpacks it with a query:
Code:
CREATE TABLE models (modelid char(4)      NOT NULL,
                     -- other columns like modelname etc.
                     colours varchar(200) NOT NULL,
                     CONSTRAINT pk_models PRIMARY KEY (modelid))
go
INSERT models (modelid, colours)
SELECT 'A200', 'Blue, Green, Magenta, Red'
UNION
SELECT 'A220', 'Blue, Green, Magenta, Red, White'
UNION
SELECT 'A230', 'Blue, Green, Magenta, Red, Cyan, White, Black'
UNION
SELECT 'B130', 'Brown, Orange, Red'
UNION
SELECT 'B150', 'Yellow, Brown, Orange, Red'
go
SELECT m.modelid, t.str AS colour
FROM models m
CROSS APPLY iter_charlist_to_tbl(m.colours, ',') AS t
ORDER BY m.modelid, t.str
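The iter_charlist_to_tbl function in that query comes from Erland's article. His actual code differs in the details, but a minimal multi-statement sketch of the same loop-based idea (returning a column named str so the query above works) would look something like this:
Code:
-- minimal sketch of Erland's iter_charlist_to_tbl idea;
-- see his article for the real function
CREATE FUNCTION iter_charlist_to_tbl (
   @list      varchar(8000),
   @delimiter char(1)
)
RETURNS @tbl TABLE (listpos int IDENTITY(1, 1) NOT NULL,
                    str     varchar(4000)      NOT NULL)
AS
BEGIN
   DECLARE @pos int, @nextpos int, @valuelen int
   SET @pos = 1
   SET @nextpos = 1
   WHILE @nextpos > 0
   BEGIN
      -- find the next delimiter; CharIndex returns 0 when none is left
      SET @nextpos = CharIndex(@delimiter, @list, @pos)
      SET @valuelen = CASE WHEN @nextpos > 0 THEN @nextpos
                           ELSE Len(@list) + 1
                      END - @pos
      -- LTrim strips the space after each comma in lists like 'Blue, Green'
      INSERT @tbl (str)
      VALUES (LTrim(Substring(@list, @pos, @valuelen)))
      SET @pos = @nextpos + 1
   END
   RETURN
END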
And regarding option 5 in my FAQ, there was a comment in there saying "This first column will drastically hurt performance." Take that column out and rerun your test.

I have never needed that first column, so it's more efficient for me to just leave the TokenID column out.
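For reference, here is roughly what the inline version without TokenID looks like. This is a sketch of the dbo.SplitNumbers I benchmark below; the actual FAQ function may differ in its details:
Code:
-- sketch: inline version with a delimiter parameter and no TokenID column;
-- requires an auxiliary Numbers table (Num = 1 .. 8000)
CREATE FUNCTION SplitNumbers (
   @InputText varchar(8000),
   @Delimiter char(1)
)
RETURNS TABLE
AS
RETURN (
   SELECT
      Value = Substring(@InputText, Num,
                 CharIndex(@Delimiter, @InputText + @Delimiter, Num) - Num)
   FROM Numbers
   WHERE Substring(@Delimiter + @InputText, Num, 1) = @Delimiter
     AND Num <= Datalength(@InputText) + 1
)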
I'll offer a version of the function in the FAQ that inserts into a table variable with an identity column; it can't be inline, it has to be multi-statement. I also took out the delimiter parameter to make the comparison more exact.
Code:
CREATE FUNCTION SplitNumbers2 (
   @InputText varchar(8000)
)
RETURNS @Values TABLE (
   TokenID int identity(1, 1) primary key clustered,
   Value   varchar(100) -- change data type of output as you please
)
AS
BEGIN
   -- requires an auxiliary Numbers table (Num = 1 .. 8000)
   INSERT @Values (Value)
   SELECT
      Value = Substring(@InputText, Num,
                 CharIndex(',', @InputText + ',', Num) - Num)
   FROM Numbers
   WHERE Substring(',' + @InputText, Num, 1) = ','
     AND Num <= Datalength(@InputText) + 1
   RETURN
END
And here is a comparison of the three methods: the one you posted, Method 5 from my FAQ (without the TokenID column), and this new function, run as three separate batches.
Full testing would use varying lengths of @t, and would not just select the results but use them in a join to another table, so take the following with a grain of salt. I ran the three together 50 times, each in its own batch, then averaged their values from SQL Profiler. That's also going to be somewhat inaccurate because the timer can't report durations between 1ms and 12ms (or was it 16ms?).
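For what it's worth, each timing run looked something like this (the value of @t here is made up, and each batch redeclares it since variables don't survive a go):
Code:
-- hypothetical test string; real runs should vary the length of @t
DECLARE @t varchar(8000)
SET @t = 'Blue,Green,Magenta,Red,Cyan,White,Black'
SELECT * FROM dbo.FieldNameSplitter(@t)
go
DECLARE @t varchar(8000)
SET @t = 'Blue,Green,Magenta,Red,Cyan,White,Black'
SELECT * FROM dbo.SplitNumbers(@t, ',')
go
DECLARE @t varchar(8000)
SET @t = 'Blue,Green,Magenta,Red,Cyan,White,Black'
SELECT * FROM dbo.SplitNumbers2(@t)
go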
[tt]
                                            CPU   Reads  Writes  Duration
select * from dbo.FieldNameSplitter(@t)   30.1  2116.7       0      48.4
select * from dbo.SplitNumbers(@t, ',')    6.6    40         0      23.2
select * from dbo.SplitNumbers2(@t)       31    3837.1       0      49.7
[/tt]
Keep in mind the comment that performance of the middle function could be affected because the optimizer doesn't know what to expect from it, whereas with the other two it knows the specs of the output table ahead of time.
Clearly more testing is in order. But I wouldn't care to go to the lengths Erland already has--he says fixed-length is superior to everything else, including XML.
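(For the curious, the fixed-length idea he tests goes roughly like this: pad every element to a known width so there is no delimiter to search for, then slice by position. A sketch, with a made-up width and sample list; see his article for the real treatment.)
Code:
-- @list holds elements each padded to exactly @itemlen characters
DECLARE @list varchar(8000), @itemlen int
SET @itemlen = 10
SET @list = 'Blue      Green     Magenta   '  -- hypothetical sample

SELECT Value = RTrim(Substring(@list, (Num - 1) * @itemlen + 1, @itemlen))
FROM Numbers
WHERE Num <= Datalength(@list) / @itemlen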
All,
Here is Erland's "famous article" that I originally read two or more years ago. George already posted the version for SQL Server 2005 above.
It is well worth reading both articles, as he goes into great depth. A LOT is covered, including recompilation, execution plans, collation, sorting, and indexes, and you'll undoubtedly learn at least one important thing. If you want to be knowledgeable, you can't miss this stuff.