
Faster, faster!


innmedia (Programmer)
Mar 24, 2000
Hi all,

I'm trying to see how to speed up a rather large and lengthy Select Into statement.

The SQL creates a large table out of dozens, possibly hundreds, of smaller tables. They are all related on a single key field. Each smaller table has one to perhaps a dozen fields. The final table can easily bump up against the 1024 field limit.

Every table is indexed on the key field.

The front end is Access. The long SQL statement is built in VBA, then run using the ADO connection's Execute method.

Each small table could have thousands of rows.
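
Roughly, the generated statement looks like this (a sketch; the table and field names are made up for illustration):

[tt]SELECT t1.KeyField, t1.Amount1, t2.Amount2
INTO dbo.BigResultTable
FROM dbo.Small1 t1
INNER JOIN dbo.Small2 t2 ON t2.KeyField = t1.KeyField
-- one more JOIN per small table, each contributing its fields[/tt]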

I don't know that there is much to do to speed it up. It can take hours, depending on just how much source data (and source tables) there are. This always changes, since even the small tables are created on the fly from a different part of the processing.

I tinkered with turning this into a stored procedure, but it is not possible -- the procedure would need to behave like an SQL Insert Into template, which would then take hundreds of parameters. No good.

Perhaps I could take the long SQL statement and create it as a stored procedure, but does this save any time? Isn't all the processing on the server anyway, via the ADO connection? Nothing runs in Access; Access just assembles the SQL string.

Another issue: I would like to provide feedback to the user that something is happening. I don't believe there is any way to hook into the long-running processing to give feedback on the Access form or the Access status bar. The best I have done is display a message that the processing started at Now() and that it could take a long time.


Any ideas on all this!??

Thanks!
 
I'd be tempted to write this as a procedure. Although running the SQL from Access does run it on the server, SQL Server most likely has to develop an execution plan each time, which may add significant time to the overall processing of a complex job.

A stored procedure's execution plan is cached in memory and can be re-used the next time; depending on complexity, this could save a lot of time.

Also, you can batch the processes within your script and return errors or messages to the client with @@ERROR, RAISERROR, or PRINT.
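
For example, something along these lines inside the script (a sketch; the object names are placeholders):

[tt]INSERT INTO dbo.BigResultTable (KeyField, Amount1)
SELECT KeyField, Amount1
FROM dbo.Small1

IF @@ERROR <> 0
    RAISERROR ('Load from Small1 failed', 16, 1)
ELSE
    PRINT 'Small1 loaded'[/tt]

With ADO, PRINT output and low-severity RAISERROR messages come back through the Connection object's InfoMessage event (if I remember right), which could double as the progress feedback asked about above.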
 
If I were stuck with this system, yes, I would write it as a procedure.

However, you need to consider redesigning this system. You should not be creating permanent tables on the fly; in fact, users should never be allowed to have create table rights at all. Your processes should be using table variables or temp tables instead. And select queries on joined tables don't have the record limit, so if this data is not being permanently stored in a table (which would surprise me, since you are using SELECT INTO), why not just run a select query for the results? Then you won't have to worry about the record limit and what happens if you reach it.

Table variables in particular would speed up the process as the inserts into them are not logged.

Questions about posting. See faq183-874
 
Hi,

thanks for the responses.

It's for a small specialized company. Just a few users, a group of specialists. One is the president of the company. There is no need or justification for controlling table creation rights.

The process creates the smaller tables and deletes them at the end when the large table is created. Users don't see or know really about these temp tables.

A simple description of what happens is that some data points in a large source table become fields in the new large table. Kind of like a crosstab, but not really.

For example, imagine these three fields and rows:

[tt]STORE      FRUIT     AMOUNT
Buy Here   Apples    15
My Store   Apples    22
The Store  Bananas   28
Buy Here   Bananas   19
My Store   Bananas   12
Buy Here   Cherries  14[/tt]

This becomes...

[tt]STORE      APPLES  BANANAS  CHERRIES
Buy Here   15      19       14
The Store  0       28       0
My Store   22      12       0[/tt]

This is like a cross tab, but there are typically dozens of fields in the source, and there can be hundreds of thousands of records.

There is more to it than that, but it is too complex to describe here. Basically, though, there are many options for sorting, how to name fields in the new table, etc.


There is no issue with a record limit. The issue is with a field limit. The processing creates many new fields. The new table always has many more fields than the source; in fact, it is (using this example) a product of stores times types of fruit.
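
In SQL terms, I believe the fruit example above boils down to something like this (FruitSales is a made-up name for the source table):

[tt]SELECT STORE,
       SUM(CASE WHEN FRUIT = 'Apples'   THEN AMOUNT ELSE 0 END) AS APPLES,
       SUM(CASE WHEN FRUIT = 'Bananas'  THEN AMOUNT ELSE 0 END) AS BANANAS,
       SUM(CASE WHEN FRUIT = 'Cherries' THEN AMOUNT ELSE 0 END) AS CHERRIES
FROM dbo.FruitSales
GROUP BY STORE[/tt]

except that in the real process the list of output fields is never known in advance.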

What are table "variables"? Are these in-memory data stores? I am talking about many megabytes of data.

Thanks!
 
Table variables are much faster and yes they reside in memory.

And a table created in a database and then dropped is not a temp table. Temp tables are created in the tempdb database and have names starting with # or ##. By creating these things in the user database, you are in trouble if two people try to use the same table name for different purposes at the same time.
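
For illustration, the two kinds look like this (mytemp is just a placeholder name):

[tt]CREATE TABLE #mytemp (KeyField int)   -- local temp table, visible only to this connection
CREATE TABLE ##mytemp (KeyField int)  -- global temp table, visible to all connections[/tt]

Both live in tempdb and are cleaned up automatically when the connection closes.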

There is a lot of database overhead associated with creating, populating, and dropping tables. It will also make your transaction log grow to a tremendous size unless you are dealing with this through proper transaction log backups.

What you need to do is some research on methods of creating cross-tab queries. There are a couple of FAQs on this subject here which use real temp tables, but I haven't looked at them closely enough to see whether table variables could be used instead.

The size of a company is irrelevant to whether users should have direct access to tables or the ability to create database objects. What is relevant is that they do not have the specialized knowledge needed to avoid accidentally harming the database. Further, you open the database up to potential harm from any disgruntled employee about to quit. But hey, if you want to risk your career, who am I to argue?

But you wanted things to be faster and I'm telling you that you are using a slow method which is also a security problem. So there is literally no reason to continue using that method. You gain NOTHING from it that cannot be done both faster and more securely another way.

Questions about posting. See faq183-874
 
SQL Sister -
Thanks for the help. I will look into the # and ## for temp tables. Great -- I hope they have ample memory on the server. I do not work there; I am just on the project to build their processing. Their one priority has always been SPEED!

Thanks!
 
SQLSister, table variables are not necessarily kept only in memory... they will use the disk if they grow large enough. But you make good points about them not being logged as transactions, which should definitely improve performance.

innmedia, to use a table variable, instead of

[tt]CREATE TABLE #t (
field int)[/tt]

you do

[tt]DECLARE @t TABLE (
field int)[/tt]

Then use it just like a regular or temp table. The exceptions are that you cannot SELECT INTO them, only INSERT INTO, and you must use aliases in every query:

[tt]SELECT @t.field FROM @t[/tt]

doesn't work, but

[tt]SELECT T.field FROM @t T[/tt]

does work.
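
So loading one of the source tables into a table variable would look something like this (Small1 being a placeholder name):

[tt]DECLARE @t TABLE (
KeyField int,
Amount int)

INSERT INTO @t (KeyField, Amount)
SELECT KeyField, Amount
FROM dbo.Small1[/tt]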


Take a gander at Dynamic Cross-Tabs/Pivot Tables.
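
The gist of the dynamic approach is to build the column list from the data itself and then execute the string. A rough sketch against the fruit example (FruitSales is a made-up name, and this assumes the whole statement fits in varchar(8000)):

[tt]DECLARE @sql varchar(8000)
SET @sql = ''

-- build one CASE expression per distinct fruit
SELECT @sql = @sql + ', SUM(CASE WHEN FRUIT = ''' + FRUIT
            + ''' THEN AMOUNT ELSE 0 END) AS [' + FRUIT + ']'
FROM (SELECT DISTINCT FRUIT FROM dbo.FruitSales) f

SET @sql = 'SELECT STORE' + @sql + ' FROM dbo.FruitSales GROUP BY STORE'

EXEC (@sql)[/tt]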

-------------------------------------
A sacrifice is harder when no one knows you've made it.
 