
Import writes to 2nd .dbf datafile first


tman138
Nov 27, 2001
I am running Oracle 8.1.6 on HP-UX 11.0. I was importing a dump file from my production server into my test machine. I originally thought I could consolidate the three 500MB datafiles (prod1.dbf, prod2.dbf, and prod3.dbf) into a single 1.5GB file. After two failed attempts at importing into the single larger file (also named prod1.dbf), I decided to drop the single .dbf file and recreate the three datafiles in their original sizes and locations, the same as on the production server. I deleted the files in Unix and used Oracle's Storage Manager (from Enterprise Manager) to remove and recreate the tablespace and datafiles.

When I begin the dump import, prod2.dbf is the first file to begin filling. Once it is full, the import begins filling prod3.dbf; once that is full, the import halts and crashes with I/O errors. Ownership and groups are correct, as Oracle created the files. I thought perhaps there were write errors, since the first time I built them I let the import run overnight, so today I started clean by again dropping the prod tablespace, deleting the datafiles, and recreating them. I shut down and restarted the database and verified that all tablespace objects were dropped before attempting the import again... with the same results. I cannot find any information on a similar phenomenon. Any help?
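
For reference, what I did through Storage Manager amounts to roughly the following (paths are illustrative):
Code:
-- Drop the consolidated tablespace and its contents:
drop tablespace prod including contents;
-- (then remove the old .dbf files at the Unix level)

-- Recreate the original three 500MB files:
create tablespace prod
  datafile '/u01/oradata/PROD/prod1.dbf' size 500m,
           '/u01/oradata/PROD/prod2.dbf' size 500m,
           '/u01/oradata/PROD/prod3.dbf' size 500m;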
 
I had a similar experience where datafile #2 was written first, then file #1, and file #3 last. (I never had the time to investigate the cause.)

If your OS allows files >2GB, create a single 3GB file and, after the import, resize it.
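
Purely as an illustration (adjust the path and sizes to your own layout):
Code:
-- One oversized file to receive the import...
create tablespace prod
  datafile '/u01/oradata/PROD/prod1.dbf' size 3000m;

-- ...then shrink it afterwards (it will only shrink down to the
-- high-water mark of used blocks):
alter database datafile '/u01/oradata/PROD/prod1.dbf' resize 1500m;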

[ponder]

----------------------------------------------------------------------------
The person who says it can't be done should not interrupt the person doing it. -- Chinese proverb
 
Tman,

We do not have granular control over the choices that Oracle makes for writing to datafiles.

Out of curiosity, why does it matter which file Oracle writes to and in which order?

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)

Do you use Oracle and live or work in Utah, USA?
Then click here to join Utah Oracle Users Group on Tek-Tips.
 
It wouldn't matter a bit to me, as long as it wrote to them all. In my previous two attempts (I'm attempting a third), prod2.dbf and prod3.dbf filled, and then the import failed.
 
Tman,

When I want Oracle to write to multiple files in an "even-handed" fashion, the method that has always proven effective and "fair" is to create (as an example) three empty files of perhaps 10MB each, set to AUTOEXTEND ON with a maximum of 2GB each. When you run the import, Oracle should write to the three files in a "rotary" scheme, growing each of them in even turns.
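
A sketch of that setup (file names, paths, and sizes are just examples):
Code:
create tablespace prod
  datafile '/u01/oradata/PROD/prod1.dbf' size 10m
             autoextend on next 10m maxsize 2000m,
           '/u02/oradata/PROD/prod2.dbf' size 10m
             autoextend on next 10m maxsize 2000m,
           '/u03/oradata/PROD/prod3.dbf' size 10m
             autoextend on next 10m maxsize 2000m;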

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)

Do you use Oracle and live or work in Utah, USA?
Then click here to join Utah Oracle Users Group on Tek-Tips.
 
Is there any requirement that the datafile names, or the number of datafiles, in the exporting database match those of the importing database?
 
Santa,
I have two tablespaces and had already imported the larger one into its original file structure. So I decided to try your method on the second, smaller tablespace, creating the datafiles at 10MB each with AUTOEXTEND ON. The import began and filled the first datafile to 10MB, then hung for several hours on a large table once the second datafile had reached 6.8MB. I had to kill the import process, and even that hung up. It seemed as if AUTOEXTEND was not working. Do all of the database's datafiles have to be set to AUTOEXTEND? Also, where does Oracle hold the information on AUTOEXTEND status? I tried V_$DATAFILE, V_$DATAFILE_HEADER, and DBA_DATA_FILES but can't find how to tell whether it is actually turned on.
 
You may use this query to confirm whether or not a file is AUTOEXTENDing and what its increments and maximums are (both expressed in database blocks):
Code:
select * from sys.filext$
     FILE#  MAXEXTEND        INC
---------- ---------- ----------
         3     256000       1280
         1     256000       1280
         2     256000       1280
         4     256000       1280
         5     256000       1280
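
The data dictionary should expose the same information in a friendlier form; if your version has these columns in DBA_DATA_FILES (8i should), you can also check:
Code:
select file_id, tablespace_name, autoextensible,
       increment_by, maxbytes
from   sys.dba_data_files;
-- increment_by is in blocks; maxbytes is in bytes
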
To monitor file growth and autoextend settings, I use this script:
Code:
set echo on
REM **************************************************************
REM David L. Hunt (file author) distributes this and other
REM files/scripts for educational purposes only, to illustrate the
REM use or application of various computing techniques. Neither the
REM author nor Dasages, LLC, makes any warranty regarding this
REM script's fitness for any industrial application or purpose nor is
REM there any claim that this or any similarly-distributed scripts
REM are error free or should be used for any purpose other than
REM illustration.
REM **************************************************************
set echo off
set linesize 165
set pagesize 40
set heading on
col fname heading "Filename" format a60
col fnum heading "#" format 999
col ts heading "Tablespace|Name" format a15
col tb heading "Total|Potential|File Size" format 999,999,999,999
col cb heading "Total|Current|File Size" like tb
col used heading "Bytes Used" like tb
col free heading "Potential|Bytes Free" like tb
col autoext heading "Auto|Ext." format a4
col percentfree heading "% Free|of|Pot.|Total|Bytes" format 999
break on report
compute sum of tb cb used free on report
spool TablespaceUsage.txt
select     substr(tablespace_name,1,15) ts
   ,d.file_id fnum
   ,decode(e.file#,null,d.bytes,(e.maxextend * blksize)) tb
   ,d.bytes cb
   ,decode(d.bytes,0,0,d.bytes-nvl(freebytes,0)) used
   ,decode(e.file#,null,d.bytes,(e.maxextend * blksize))
    -decode(d.bytes,0,0,d.bytes-nvl(freebytes,0)) free
   ,trunc(((decode(e.file#,null,d.bytes,(e.maxextend * blksize))
     -decode(d.bytes,0,0,d.bytes-nvl(freebytes,0)))/
      decode(e.file#,null,d.bytes,(e.maxextend * blksize))
         ),2) * 100 percentfree
   ,decode(e.file#,null,'No','Yes') autoext
   ,''''||substr(file_name,1,55)||'''' fname
from       sys.dba_data_files d
   ,(select file_id,sum(bytes) freebytes
           from sys.dba_free_space
           group by file_id) f
   ,sys.filext$ e
   ,v$datafile v
   ,(select value blksize from v$parameter
           where name = 'db_block_size') b
where      d.file_id=f.file_id(+)
  and      d.file_id=e.file#(+)
  and      v.file#=d.file_id
order by tablespace_name,creation_time
/
spool off
prompt
prompt Note: Physical disk limitations may prevent an AUTOEXTEND-ing file from reaching its potential
prompt
prompt Wrote spool file "TablespaceUsage.txt".
prompt

Here is some sample output from the script:
Code:
                                                                                    % Free
                                                                                        of
                               Total           Total                                  Pot.
Tablespace                 Potential         Current                       Potential  Total Auto
Name              #        File Size       File Size      Bytes Used      Bytes Free  Bytes Ext. Filename
--------------- ---- --------------- --------------- --------------- --------------- ------ ---- ----------------------------------------------------
DATA1              4   2,097,152,000      83,886,080      19,005,440   2,078,146,560     99 Yes  'D:\DHUNT\ORACLEDATA\DBDATA\DHUNT\DHUNTDATA101.DBF'
DATA2              5   2,097,152,000     241,172,480         917,504   2,096,234,496     99 Yes  'D:\DHUNT\ORACLEDATA\DBDATA\DHUNT\DHUNTDATA201.DBF'
RBS                2   2,097,152,000      20,971,520          65,536   2,097,086,464     99 Yes  'D:\DHUNT\ORACLEDATA\DBDATA\DHUNT\DHUNTRBS01.DBF'
SYSTEM             1   2,097,152,000     163,577,856     157,401,088   1,939,750,912     92 Yes  'D:\DHUNT\ORACLEDATA\DBDATA\DHUNT\DHUNTSYS01.DBF'
TEMP               3   2,097,152,000      10,485,760           8,192   2,097,143,808     99 Yes  'D:\DHUNT\ORACLEDATA\DBDATA\DHUNT\DHUNTTEMP01.DBF'
                     --------------- --------------- --------------- ---------------
sum                   10,485,760,000     520,093,696     177,397,760  10,308,362,240

Note: Physical disk limitations may prevent an AUTOEXTEND-ing file from reaching its potential

Wrote spool file "TablespaceUsage.txt".
Let us know your findings and whether any of this helps.
I have never seen a delay when autoextending. I would look in your alert<SID>.log file for clues as to what could be causing your troubles.
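
If you are not sure where that file lives, the alert log is written to the directory named by BACKGROUND_DUMP_DEST:
Code:
select value
from   v$parameter
where  name = 'background_dump_dest';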

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)

Do you use Oracle and live or work in Utah, USA?
Then click here to join Utah Oracle Users Group on Tek-Tips.
 
Thanks for the script.
I believe the problem was due to the redo logs being full.
I first got "Thread 1 cannot allocate new log",
then
ORA-00600: internal error code, arguments: [12235], [], [], [], [], [], [], []
and that is where it hung.
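
For anyone who hits the same thing, the mode change I made boils down to this (from memory, so treat it as a sketch):
Code:
-- connected as SYSDBA/internal; the database must be mounted but not open
shutdown immediate
startup mount
alter database noarchivelog;
alter database open;

-- confirm the mode
select log_mode from v$database;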
After shutting down the database and changing to NOARCHIVELOG mode, I think all is in order again. One last question and I'll leave you alone:
As far as datafiles go, is there any advantage to creating multiple datafiles as opposed to a single file, provided the total size needed is < 2GB?
 
Tman,

When I allocate files, I personally allocate one datafile on each of my available application file systems and set each file to AUTOEXTEND up to 2GB. There is no penalty for allocating multiple datafiles: they occupy no more space than they would if the data were consolidated into one file, and spreading the data across multiple files can actually help reduce I/O contention.
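
For example (mount points are only an illustration):
Code:
alter tablespace prod
  add datafile '/u02/oradata/PROD/prod2.dbf' size 10m
      autoextend on next 10m maxsize 2000m;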

[santa]Mufasa
(aka Dave of Sandy, Utah, USA)

Do you use Oracle and live or work in Utah, USA?
Then click here to join Utah Oracle Users Group on Tek-Tips.
 