Adding a PARALLEL hint instead of the STAR_TRANSFORMATION hint gets the time down to 6 seconds:
/*+ PARALLEL (f_exp_det, 6) */
Explain Plan:
Operation                                 Object Name  Rows  Bytes      Cost  Object Node  In/Out  PStart  PStop
SELECT STATEMENT Optimizer Mode=ALL_ROWS               9 M   473792...
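For reference, a minimal sketch of how that hint sits in the statement (assuming f_exp_det is the table name as well as the alias, and reusing columns from the query fragments further down):

SELECT /*+ PARALLEL(f_exp_det, 6) */
       f_exp_det.adjust_flag,
       f_exp_det.deal_id
FROM   f_exp_det;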
taupirho - changing the WHERE clause as suggested made no difference.
thargtheslayer - I didn't use autotrace; I used the Explain Plan functionality within TOAD (not sure whether that calls utlxpls).
I deleted bits of the plan to make it more readable; I'll paste it in full below now:
Operation Object...
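As an aside, the standalone way to produce the same plan outside TOAD (and the modern replacement for the utlxpls.sql script) is DBMS_XPLAN; a sketch, with the query body abbreviated:

EXPLAIN PLAN FOR
SELECT /*+ STAR_TRANSFORMATION */ f_exp_det.adjust_flag
FROM   f_exp_det;

SELECT * FROM TABLE(DBMS_XPLAN.DISPLAY);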
I would have thought a combination of hitting the partitions and using the bitmap indexes was possible. If not, I'm open to suggestions as to how to tune this query.
At the moment it takes 29 seconds to return the first 100 rows. Not exactly light-speed!
I've also tried joining to a larger dimension table to try to avoid the optimiser choosing a full table scan. This dimension has 15 million+ rows.
SELECT /*+ STAR_TRANSFORMATION */
f_exp_det.adjust_flag,
f_exp_det.deal_id,
f_exp_det.approved_approach_id...
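One thing worth verifying (my suggestion, not something confirmed in the thread): the STAR_TRANSFORMATION hint is only honoured when star transformation is enabled for the session or instance. For example:

-- The hint requests the transformation, but the optimizer will only
-- perform it if this parameter allows it.
ALTER SESSION SET star_transformation_enabled = 'TRUE';

-- Check the current setting:
SELECT value FROM v$parameter WHERE name = 'star_transformation_enabled';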
I have the following query:
SELECT /*+ STAR_TRANSFORMATION */
f_exp_det.adjust_flag,
f_exp_det.deal_id,
f_exp_det.approved_approach_id,
f_exp_det.asset_class_id,
d_asset_class.exposure_type,
d_asset_class.exposure_type_desc...
I tried it with one record. When I removed all the special characters, it loaded with no problem.
I don't think changing the fields to VARCHAR2 is going to fix the problem, since it seems the problem lies in the characters in the adjacent field.
I am trying to load some data into the following table:
CREATE TABLE IMP_ECUST
(Source VARCHAR2(1),
Sales_Level_2 VARCHAR2(255),
Sales_Level_3 VARCHAR2(255),
Sales_Level_4 VARCHAR2(255),
Sales_Level_5 VARCHAR2(255),
Sales_Level_6 VARCHAR2(255),
CSC_Global_ID INTEGER,
CSC_Global_Name...
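Since the load only fails on rows containing special characters, it may be worth checking that the database character set can represent the incoming data; a diagnostic sketch:

-- Compare these against the encoding of the file being loaded.
SELECT parameter, value
FROM   nls_database_parameters
WHERE  parameter IN ('NLS_CHARACTERSET', 'NLS_NCHAR_CHARACTERSET');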
Thanks for the reply; however, this does not quite solve the problem.
What I am after is those with duplicate first_name & email address, regardless of contact_id.
As you are grouping by both contact_id and first_name_email, your query will only pick out duplicates within contact_id...
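For what it's worth, a sketch of the grouping I would expect, assuming the name column is GCR_CONTACTS.first_name and the address column is GCR_EADDRESSES.email (both column names are my guesses), normalising case and trimming spaces before comparing:

SELECT UPPER(TRIM(c.first_name)) AS norm_name,
       UPPER(TRIM(e.email))      AS norm_email,
       COUNT(*)                  AS dup_count
FROM   gcr_contacts c
JOIN   gcr_eaddresses e ON e.contact_id = c.contact_id
-- TRIM only strips leading/trailing spaces; embedded spaces would
-- need REPLACE(..., ' ', '') instead.
GROUP  BY UPPER(TRIM(c.first_name)), UPPER(TRIM(e.email))
HAVING COUNT(*) > 1;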
I have two tables, GCR_CONTACTS and GCR_EADDRESSES, which are linked on contact_id.
I need to select the first 500000 contact_ids of those contacts where there are duplicates on first name & email address (the values can differ in case and contain extra spaces).
I have written this query:
select...
This has indeed solved my problem. I can use the CONVERT function in the SQL and it does the trick. I will also talk to those with the necessary authority about changing the datatypes.
Thank you all for your help :)
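For anyone who lands here later: Oracle's CONVERT takes the string, the destination character set, and optionally the source character set. The character-set names below are illustrative; the table and contact_id come from the queries in this thread:

SELECT CONVERT(address_line_1, 'US7ASCII', 'WE8ISO8859P1')
FROM   cccadm.gcr_postal_ss
WHERE  contact_id = 15942373;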
I've already done DUMP(address_line_1) in an earlier post. However:
select length(address_line_1)
from cccadm.gcr_postal_ss
where contact_id = 15942373
Gives...
LENGTH(ADDRESS_LINE_1)
----------------------
19
Which confirms that ° is only one character.
Not sure how...
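For completeness, the DUMP check referred to above would look something like this; format 1016 returns the byte values in hex along with the character set name:

SELECT DUMP(address_line_1, 1016)
FROM   cccadm.gcr_postal_ss
WHERE  contact_id = 15942373;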