
SQL 7 transaction log


TheJFM (Programmer)
Jun 13, 2002
Does SQL 7 have a database setting that is equivalent to SQL 2000's simple mode?

Thanks

** Don't reach for the moon when you have the stars. ;-) TheJFM **
 
Thanks for the reply

But does this have the same effect, i.e. stopping the transaction log from growing uncontrollably?

My problem is that I have a very large data import. I do not need a rollback or restore if the import fails.

The transaction log grows and grows until it swallows all available disc space.

I tried limiting the size of the log and setting the database option 'truncate log on checkpoint' to True. The job fails because the transaction log fills before the import completes.

When I used SQL 2000, I was sure that setting the database to the simple recovery model avoided this problem. How do I resolve it in SQL 7?

Thanks

** Don't reach for the moon when you have the stars. ;-) TheJFM **
 
Does SQL Server 7 use the same ShrinkLog method? Do you have a fixed T-Log size? If it is fixed, would it hurt performance that much if you let it grow automatically?

Then after your import you could fire off a stored procedure or run a job to shrink the T-Log:

USE Sprint
GO
-- TRUNCATEONLY releases the unused space at the end of the file;
-- the target size (2000, in MB) is ignored when TRUNCATEONLY is specified
DBCC SHRINKFILE (sprint_log, 2000, TRUNCATEONLY)
GO
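
If the shrink seems to have no effect because active log records sit at the end of the file, truncating the inactive portion first usually helps. A minimal sketch, reusing the Sprint names above (safe here only because no restore of the import is needed):

BACKUP LOG Sprint WITH TRUNCATE_ONLY
GO
DBCC SHRINKFILE (sprint_log, 2000)
GO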

I am just curious about some of this info.

Thanks,

rocco
 
My problem is that the data import is about 5 GB, and the source is such that I have to run a one-off import.

The transaction log swells to 5 GB too, so I use a total of 10 GB, which takes all the space I have remaining on the disc.

Ideally I need a situation where the log is not being written to at all.

** Don't reach for the moon when you have the stars. ;-) TheJFM **
 
RobertT687 has the right idea. However, do you have a fixed transaction log size? I mentioned setting it to auto-grow above, but I did not fully understand the space dilemma.
If it is set to auto-grow, the T-Log will keep growing until your total space is used up (EVEN if Truncate Log on Checkpoint is set to ON). The reason for this is the following:

1. If the log file can grow automatically, the file is increased in size. If the file cannot be expanded, the log writer restarts at the beginning of the file.
2. If the log file size is fixed, the log writer restarts writing at the beginning of the file.

In both cases, if there are transactions that cannot be overwritten, an error occurs for the user(s) that issued them.

So when you choose the option Truncate Log on Checkpoint, any committed transaction entry can be overwritten; otherwise it must have been archived first.
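
To make that concrete: SQL 7 has no recovery models, so the closest match to SQL 2000's simple model is a pair of database options set through sp_dboption. A minimal sketch, reusing the Sprint database name from the example above:

EXEC sp_dboption 'Sprint', 'trunc. log on chkpt.', 'true'
EXEC sp_dboption 'Sprint', 'select into/bulkcopy', 'true'

The first option truncates committed log records at every checkpoint; the second allows minimally logged bulk operations such as SELECT INTO and bcp.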

Hope this helps you!

Also, I found this information about bulk inserts (if that is your method of import). Sorry, it is for 2000, but maybe there is some similarity to 7.0; a sketch follows the list below.

In order for the transaction not to be fully logged:

- The recovery model is simple or bulk-logged.
- The target table is not being replicated.
- The target table does not have any triggers.
- The target table has either 0 rows or no indexes.
- The TABLOCK hint is specified.
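
As a sketch of what a minimally logged import could look like in 2000 (hypothetical file and table names):

BULK INSERT dbo.ImportStaging
FROM 'C:\import\bigfile.txt'
WITH (
    TABLOCK,               -- table lock, required for minimal logging
    FIELDTERMINATOR = ',',
    ROWTERMINATOR = '\n',
    BATCHSIZE = 50000      -- commit every 50,000 rows so the log can clear between batches
)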

Rocco
 
That is a great help, though it confirms that I DO have a problem here.


** Don't reach for the moon when you have the stars. ;-) TheJFM **
 
Suggest you do the import in batches to keep the log from growing uncontrollably. Terry has an FAQ on this:
How to Use Batch Size to Speed Mass Updates, Inserts and Deletes
faq183-3141
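
Not the FAQ itself, but the general shape of the technique looks something like this (hypothetical table names; each pass commits its batch, so a checkpoint can clear the log between passes when truncate-on-checkpoint is on):

SET ROWCOUNT 50000               -- cap each pass at 50,000 rows
WHILE 1 = 1
BEGIN
    INSERT INTO dbo.Target (ID, Payload)
    SELECT s.ID, s.Payload
    FROM dbo.SourceStaging s
    WHERE NOT EXISTS (SELECT 1 FROM dbo.Target t WHERE t.ID = s.ID)

    IF @@ROWCOUNT = 0 BREAK      -- nothing left to copy
    CHECKPOINT                   -- let the truncate-on-checkpoint option clear committed records
END
SET ROWCOUNT 0                   -- restore the default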
 
Thanks for that and the pointer to the FAQ. Certainly useful for SQL-to-SQL imports.

My problem is that I am importing from a non-SQL data source and cannot realistically import in batches using the technique described.

** Don't reach for the moon when you have the stars. ;-) TheJFM **
 