Adding a Parallel hint instead of a Star Transformation gets the time down to 6 seconds:
/*+ PARALLEL (f_exp_det, 6) */
Explain Plan:
Operation Object Name Rows Bytes Cost Object Node In/Out PStart PStop
SELECT STATEMENT Optimizer Mode=ALL_ROWS 9 M 473792...
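For reference, this is roughly where the hint sits in the statement - a cut-down sketch with only a few of the columns, and the join column on d_asset_class is an assumption:

SELECT /*+ PARALLEL (f_exp_det, 6) */
       f_exp_det.adjust_flag,
       f_exp_det.deal_id,
       f_exp_det.approved_approach_id
  FROM f_exp_det,
       d_asset_class
 WHERE f_exp_det.asset_class_id = d_asset_class.asset_class_id;  -- assumed join column
-- the hint asks for a degree of 6 on the full scan of the fact table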
taupirho - changing the where clause as suggested made no difference.
thargtheslayer - I didn't use autotrace, I used Explain Plan functionality within TOAD (not sure if that calls utlxpls).
I deleted bits of the plan to make it more readable, I'll paste it in full below now:
Operation Object...
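If anyone wants to reproduce the plan outside TOAD, DBMS_XPLAN does the same job as utlxpls on 9iR2 and later. A minimal sketch - the cut-down SELECT is just a stand-in for the real query:

EXPLAIN PLAN FOR
SELECT /*+ STAR_TRANSFORMATION */
       f_exp_det.adjust_flag
  FROM f_exp_det;

-- formats the plan just explained, including the PStart/PStop partition columns
SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);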
I would have thought a combination of hitting the partitions and using the bitmap indexes was possible. If not, I'm open to suggestions as to how to tune this query.
At the moment it takes 29 secs to return the 1st 100 rows. Not exactly light-speed!
I've also tried joining to a larger dimension table to try to avoid the optimiser choosing to do a full table scan. This dimension has 15 million+ rows.
SELECT /*+ STAR_TRANSFORMATION */
f_exp_det.adjust_flag,
f_exp_det.deal_id,
f_exp_det.approved_approach_id...
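Two things are worth confirming before the optimiser will even consider a star transformation: the parameter that enables it, and bitmap indexes on the fact table's dimension-key columns. A quick sanity check along these lines (standard parameter and dictionary view, nothing specific to this schema):

-- star transformation must be enabled for the session (or at instance level)
ALTER SESSION SET star_transformation_enabled = 'TRUE';

-- each dimension-key column on the fact table should carry a bitmap index
SELECT index_name, index_type
  FROM user_indexes          -- use ALL_INDEXES if the table sits in another schema
 WHERE table_name = 'F_EXP_DET'
   AND index_type = 'BITMAP';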
I have the following query:
SELECT /*+ STAR_TRANSFORMATION */
f_exp_det.adjust_flag,
f_exp_det.deal_id,
f_exp_det.approved_approach_id,
f_exp_det.asset_class_id,
d_asset_class.exposure_type,
d_asset_class.exposure_type_desc...
Tried with one record. When I removed all the special characters, it loaded with no problem.
I don't think changing the fields to VARCHAR2 is going to fix the problem, since it seems the problem lies in the characters in the adjacent field.
I am trying to load some data into the following table:
CREATE TABLE IMP_ECUST
(Source VARCHAR2(1),
Sales_Level_2 VARCHAR2(255),
Sales_Level_3 VARCHAR2(255),
Sales_Level_4 VARCHAR2(255),
Sales_Level_5 VARCHAR2(255),
Sales_Level_6 VARCHAR2(255),
CSC_Global_ID INTEGER,
CSC_Global_Name...
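One thing that may be worth ruling out, assuming the target database character set turns out to be multibyte (which the NLS discussion elsewhere in the thread suggests it might be): with the default BYTE length semantics an accented character can occupy two bytes, so a value of 255 characters or fewer can still overflow VARCHAR2(255). Switching a column to CHAR semantics is a quick test - sales_level_2 below is just an example column:

-- size the column in characters rather than bytes (repeat per affected column)
ALTER TABLE imp_ecust MODIFY (sales_level_2 VARCHAR2(255 CHAR));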
Thanks for the reply, however, this does not quite solve the problem.
What I am after is those with duplicate first_name & email address, regardless of contact_id.
As you are grouping by both contact_id and first_name_email, your query will only pick out duplicates within contact_id...
I have two tables GCR_CONTACTS and GCR_EADDRESSES , which are linked on contact_id.
I need to select the first 500000 contact_id's of those contacts where there are duplicates on first name & email address (they can be different case, and have extra spaces).
I have written this query:
select...
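For what it's worth, one way to express that requirement is to normalise case and spaces before grouping, then keep only the groups that occur more than once. A sketch only - EMAIL_ADDRESS is a guessed column name on GCR_EADDRESSES, so substitute the real one:

SELECT contact_id
  FROM (SELECT c.contact_id,
               COUNT(*) OVER (PARTITION BY UPPER(TRIM(c.first_name)),
                                            UPPER(TRIM(e.email_address))) AS dup_count
          FROM gcr_contacts   c,
               gcr_eaddresses e                      -- linked on contact_id
         WHERE c.contact_id = e.contact_id)
 WHERE dup_count > 1          -- the name/email pair appears more than once
   AND ROWNUM <= 500000;      -- cap at the first 500000 matching contact_ids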
This has indeed solved my problem. I can use the Convert function in the SQL and it does the trick. I will also talk to those with the necessary authority to look at changing the datatypes.
Thank you all for your help :)
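For anyone finding this later, the pattern is along these lines - the character set names are placeholders for the actual source and destination sets, not a statement of what this database uses:

-- re-encode the stored text into the character set the extract file needs
SELECT contact_id,
       CONVERT(address_line_1, 'WE8ISO8859P1', 'UTF8') AS address_line_1
  FROM cccadm.gcr_postal_ss
 WHERE contact_id = 15942373;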
I've already done DUMP(address_line_1) in an earlier post. However
select length(address_line_1)
from cccadm.gcr_postal_ss
where contact_id = 15942373
Gives...
LENGTH(ADDRESS_LINE_1)
----------------------
19
Which confirms that ° is only one character.
Not sure how...
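A quick way to see characters and bytes side by side is LENGTH against LENGTHB:

SELECT LENGTH(address_line_1)  AS char_count,   -- 19: the degree sign counts as one character
       LENGTHB(address_line_1) AS byte_count    -- 20: it occupies two bytes, per the DUMP output
  FROM cccadm.gcr_postal_ss
 WHERE contact_id = 15942373;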
No differences I am afraid, as the following shows:
select dump(address_line_1)
from cccadm.gcr_postal_ss
where contact_id = 15942373
DUMP(ADDRESS_LINE_1)
--------------------
Typ=1 Len=20: 86,73,65,32,68,69,73,32,83,65,76,73,67,73,32,78,194,176,51,56
select dump('VIA DEI SALICI N°38') from...
I've checked the NLS_LANG environment variable in the registry and it is: AMERICAN_AMERICA.WE8ISO8859P1
This character set includes both of the unusual characters, Â and °, which means this is the correct setting.
This means unfortunately my problem is not resolved.
Thanks for the reply, however:
1) I think the NLS setting is correct as UTF8. Here are the settings anyway:
PARAMETER VALUE
NLS_LANGUAGE AMERICAN
NLS_TERRITORY AMERICA
NLS_CURRENCY $
NLS_ISO_CURRENCY AMERICA...
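The setting that actually governs how the text is stored is the database character set rather than the session language or territory. It can be read straight from the data dictionary:

-- the database (and national) character set, independent of the client NLS_LANG
SELECT parameter, value
  FROM nls_database_parameters
 WHERE parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');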
I am currently extracting data into a file using the following script:
set linesize 1180
set pagesize 0
set head off
set term off
set wrap off
column CONTACT_ID format 9999999999
column FIRST_NAME format a78
column LAST_NAME format a78
column JOB_TITLE format a78
column ADDRESS_LINE_1 format...
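The rest of the script presumably spools the query out to the file; the usual shape of that tail end is sketched below (the file name, table and column list are placeholders, not the actual extract):

spool contacts_extract.txt

select contact_id, first_name, last_name, job_title, address_line_1
  from cccadm.gcr_postal_ss;

spool off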