
SQL 2008 Weird string or binary would be truncated

Status
Not open for further replies.

MartinCroft (Programmer) · Jun 12, 2003 · 151 · GB
Hi

This one's a little weird: existing code that runs on a SQL 2000 box fails when run against a SQL 2008 server. Here's the scenario. SQL basically thinks the row size being inserted is bigger than the column being inserted into, and it's not.

A query that joins several tables together returns a Source column from table A. This column is VARCHAR(250) in the source table but only VARCHAR(50) in the destination table. I know this is not best practice and is actually the fix for this issue; what I want to know is why it is doing what it is doing. The query returns a result set of 200 rows, and if you take either MAX(LEN(Source)) or MAX(DATALENGTH(Source)) over that result you get 45, 5 less than the destination column width.

If you just run a query against table A with only the WHERE clause elements from the query (so no other tables joined) and take the MAX values again, it comes out as 78. If you try inserting the main query (all joins used) into a table variable, it will fail with a VARCHAR(77) column but work with VARCHAR(78). If you take the MAX again after the VARCHAR(78) insert works, it shows as 45.

The upshot of this is that SQL is assuming the row size will be 78, but in the query it's actually 45.
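For reference, the two measurements described above look something like this. All table, column, and filter names here are made up stand-ins for the poster's schema (dbo.TableA for table A, Source for the column, etc.):

Code:
-- Max actual data length in the full joined query: 45 in the scenario above
SELECT MAX(LEN(a.Source))        AS MaxLen,
       MAX(DATALENGTH(a.Source)) AS MaxBytes
FROM   dbo.TableA a
       JOIN dbo.TableB b ON b.AId = a.Id
WHERE  a.SomeFilter = 1

-- Same measurement against table A alone, with only the WHERE clause:
-- 78 in the scenario above, i.e. it includes rows the joins filter out
SELECT MAX(LEN(Source)) AS MaxLen
FROM   dbo.TableA
WHERE  SomeFilter = 1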

If you use LEFT(Source, 50) this works, as does changing the column to VARCHAR(50); it's the fact that SQL can get this wrong that's worrying. Has anyone seen this behaviour in SQL 2008?
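The two workarounds mentioned, sketched with the same hypothetical names (both LEFT and CAST to a shorter VARCHAR guarantee the value fits the destination width):

Code:
-- Workaround 1: truncate explicitly so the insert can never overflow
INSERT INTO dbo.Destination (Source)
SELECT LEFT(a.Source, 50)
FROM   dbo.TableA a
       JOIN dbo.TableB b ON b.AId = a.Id
WHERE  a.SomeFilter = 1

-- Workaround 2: cast the selected column down to the destination width
INSERT INTO dbo.Destination (Source)
SELECT CAST(a.Source AS VARCHAR(50))
FROM   dbo.TableA a
       JOIN dbo.TableB b ON b.AId = a.Id
WHERE  a.SomeFilter = 1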

Additionally, I tried creating a new source table and populating it from the existing one; the new table had no indexes on it, then I created a clustered and a non-clustered index on the Source column, thinking this was stats related. I also rebuilt stats with FULLSCAN. The next step I plan is to try it on another server, but I can't replicate it without using the full query, i.e. just using a single table.
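For anyone wanting to try the same stats rebuild, the syntax is (table name hypothetical):

Code:
-- Rebuild all statistics on the source table with a full scan
UPDATE STATISTICS dbo.TableA WITH FULLSCAN

-- Or refresh statistics across the whole database
EXEC sp_updatestats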

Also tried LTRIM and RTRIM with no difference.

Anyone come across this before?
 
I just tried this..

Code:
create table a (c1 varchar(20) not null, c2 varchar(20) not null)
create table b (c1 varchar(25) not null)
insert into a values ('a test', 'to see what might')
-- concatenated value is 23 characters, so it fits in varchar(25)
insert into b select c1 + c2 from a

create table c (c1 varchar(250))
insert into c values ('this is a test to see...')
-- source column is wider (250) than the destination (25), but the data fits
insert into b select * from c

create table d (c1 char(250))
insert into d values ('this is a test to see...')

-- LEN ignores the trailing blanks that char(250) pads with
select LEN(c1) from d

declare @t table (c1 varchar(25))
insert into @t select c1 + c2 from a
insert into @t select * from c
-- trailing blanks are trimmed on conversion, so even char(250) inserts cleanly
insert into @t select * from d

and had no problem..
In every case the column(s) in the source were wider than the destination, while the data in the source was actually smaller, e.g. column = 250 but data = 23.

I tried with both permanent tables and temp tables (table variables).

It would be interesting to see which row is causing the error in your insert.

Depending on the number of rows you might want to set up a loop and look at the data that is causing it to fail.
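Rather than a row-by-row loop, a set-based check against the destination width will usually surface the culprit directly. Again, the names here are hypothetical stand-ins for the poster's query:

Code:
-- List any rows from the joined query whose Source exceeds the
-- destination column's VARCHAR(50) limit
SELECT a.Id, a.Source, LEN(a.Source) AS SourceLen
FROM   dbo.TableA a
       JOIN dbo.TableB b ON b.AId = a.Id
WHERE  a.SomeFilter = 1
  AND  LEN(a.Source) > 50
ORDER BY SourceLen DESC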
 
I think I saw similar threads on MSDN. Essentially, even if there are no records in the final result that break the limit, you can still get the error if the original table has a bigger column. So using LEFT(..., 50) is the best and bullet-proof solution for such problems.

 