Thursday, October 08, 2009

http://asktom.oracle.com/tkyte/

This page will be published and republished over the next couple of days and into the future. It will contain pointers to the material that was previously hosted on http://asktom.oracle.com/tkyte. I'll be reformatting those pages to fit into blogspot here and host the data supporting the pages in my "files" tab on asktom. After that is all done, I'll be doing an update to the questions and answers on asktom to map to the new URLs for everything.

So - beware - you will see a flurry of activity on my blog for a bit. It won't be anything new, but it might be something you didn't see before...

Selected Utilities/Frequently Asked Questions

NOTE: please do not comment on this page; comments here will never show up. If you want to ask a question, that is what http://asktom.oracle.com/ is for. Thanks!

221 Comments:

Blogger Kalyani said....

Hi Tom,

Could you please recommend a reference book for Oracle Web toolkit.

Thanks,
SS.

Sun Oct 11, 11:36:00 AM EDT  

Anonymous Anonymous said....

Hi tom,

I'd love to see your hash join presentation URL reposted :)

http://asktom.oracle.com/tkyte/hj/hj.html

Regards

Wed Oct 14, 05:42:00 AM EDT  

Anonymous Anonymous said....

And this too please..

http://asktom.oracle.com/tkyte/article1/autotrace.html

Wed Oct 14, 04:01:00 PM EDT  

Blogger Haneda said....

We are moving our Data Mart from Oracle to Netezza because Oracle seems to be performing very slowly for large volumes of data (a table with 500+ million rows).

Thu Oct 15, 04:06:00 PM EDT  

Blogger Thomas Kyte said....

@Haneda

first - 500,000,000 rows is pretty small (probably). Actually, the number 500 million 'rows' is absolutely meaningless.

what is that? 500mb? 500gb? 5tb? whatever

In any case, I doubt it is more than 5tb - which means "it is small"

If your approach is to simply throw hardware at it - then you should consider at least evaluating Exadata, as it would allow you to leverage what you have already built without changing a thing in your application (although, I'm pretty sure this is a design problem, not a hardware problem, given the small sizes involved)

I doubt that you spent any time sizing the existing hardware.

I have many doubts that you (not you personally, your company) spent much time thinking about the design.

if you just want to wave the magic "we got hardware" flag - you might seriously consider asking your current Oracle contact to prototype your system on Exadata (it does what Netezza does, and then some).

Fri Oct 16, 11:26:00 AM EDT  

Blogger Haneda said....

Thanks a lot Tom.

I work in Risk technology at Bank of America. I have been working with Oracle for more than 15 years. I am not too happy to jump into Netezza and start coding in NZSQL and migrating thousands of lines of PL/SQL code to NZPL/SQL. It appears to me that our management has already made the decision to move to Netezza. I guess Oracle should do a much better job of marketing Exadata, which I am sure is as good as the Netezza appliance.

Mon Oct 19, 01:25:00 PM EDT  

Blogger Thomas Kyte said....

@Haneda

About the marketing comment.

If your management even remotely followed Oracle, Exadata would be something they had heard of - we premiered it last year at Oracle OpenWorld (2008) and it has been on the tip of the tongue of most in the Oracle community since.


The problem is as I see it - if you have thousands of lines of code, I've a feeling "you done did it wrong and netezza isn't going to change a thing, you are doing slow by slow processing and all of the hardware on the planet isn't going to do a thing for you"


500,000,000 row table - my laptop has come close to processing data on that volume (assuming a reasonable row width, since 500 million rows is a rather meaningless figure by itself). IF you use set based operations.

but if you try to process it procedurally, slow by slow, like it was in a file - look up here, look up there, look up a million times everywhere - it will remain slow.

I'd be interested to hear what happens - unless you massively re-architect - nothing will change
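The set-based point above can be sketched as follows (a hypothetical illustration; the tables big_table and summary_table and their columns are invented names):

```sql
-- Slow-by-slow: one lookup and one update per row - this is what stays slow
BEGIN
    FOR rec IN (SELECT id, amount FROM big_table) LOOP
        UPDATE summary_table
           SET total = total + rec.amount
         WHERE id = rec.id;
    END LOOP;
END;
/

-- Set-based: the same work in a single SQL statement, one pass over the data
MERGE INTO summary_table s
USING (SELECT id, SUM(amount) amount
         FROM big_table
        GROUP BY id) b
ON (s.id = b.id)
WHEN MATCHED THEN
    UPDATE SET s.total = s.total + b.amount;
```

The second form lets the database aggregate and apply the changes in bulk instead of issuing millions of individual statements.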

Mon Oct 19, 01:45:00 PM EDT  

Blogger Esteban said....

hi Tom!
I work with Oracle 10g R2 and need users to be able to post comments about news items.
Is there a portlet that does this?

Wed Oct 21, 10:00:00 AM EDT  

Blogger Phiri Lighton said....

Hi Tom,

I am going through Expert Oracle Database Architecture. Could you kindly post the contents of what was on the link below:

http://asktom.oracle.com/~tkyte/alert_arch.html

Thanks,
LP

Sun Nov 01, 05:10:00 AM EST  

Blogger Thomas Kyte said....

http://asktom.oracle.com/~tkyte/alert_arch.html

Sun Nov 01, 07:46:00 AM EST  

Blogger sujatha said....

Hi Tom,

I want to find out which Oracle user created a tablespace, without auditing having been enabled. Is that possible?

Thanks,
Sujatha.Ch

Mon Nov 02, 05:23:00 AM EST  

Blogger Thomas Kyte said....

@sujatha

this isn't really the appropriate place to ask a question - asktom.oracle.com would be, otn.oracle.com -> forums would be.

we don't track that information in the dictionary.

Mon Nov 02, 08:22:00 AM EST  

Blogger adi said....

hi tom,
this is adi........

i want to know which way is the better one. to use the bind variables or not to use them.
i have a situation like this:
sql_stmt := 'select count(*)
' || 'from LX_AUTH_RESPONSE ' ||
'where CC_ERR_CODE =:1 ' || 'and CVV_RESP =:2 ' ||
'and BOOKING_SOURCE IS NOT NULL';

open cur_err_code for sql_stmt
using v_err_code, v_cvv_code;

here can i use bind variables or not.
u said all the pl/sql variables are bound.
please help me.........

Thu Nov 05, 04:47:00 AM EST  

Blogger Thomas Kyte said....

@adi

couple of things

first, "U" isn't available, "U" is dead as far as I know. Look it up, it is true.

http://en.wikipedia.org/wiki/U_of_Goryeo

second, you probably do NOT need to execute that query. when I see "select count(*) from ..." - that indicates to me that someone is looking to see if there is a row to process and if so - they process it. The *smart* way to do that is......

JUST TRY TO PROCESS IT, if it isn't there - so be it, you'll figure that out really fast. Counting records is not smart, it is inefficient, and it should always be avoided.


third - there is no reason in your example to use dynamic sql at all. The comment I've made about PL/SQL and binding is "if you use STATIC SQL it is impossible to bind incorrectly, it happens automatically and you need do nothing special; it is only when you use DYNAMIC SQL in plsql that you need to think about binding correctly". so, I would say a) do not run this query, it is a count(*), it is a waste. b) do not run this query using dynamic sql, it isn't needed and is less efficient than static sql.

lastly, yes, if you persist in this course of action and you a) do run the query and b) do use dynamic sql - you will absolutely want to use bind variables as you have in your example. That is a "no brainer"
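As a sketch (not part of the original answer), the same query written as static SQL binds automatically; the two sample values given to the variables here are invented:

```sql
DECLARE
    l_cnt      NUMBER;
    v_err_code lx_auth_response.cc_err_code%TYPE := '00';  -- sample value
    v_cvv_code lx_auth_response.cvv_resp%TYPE    := 'M';   -- sample value
BEGIN
    -- static SQL: v_err_code and v_cvv_code become bind variables
    -- automatically, no string concatenation needed
    SELECT COUNT(*)
      INTO l_cnt
      FROM lx_auth_response
     WHERE cc_err_code    = v_err_code
       AND cvv_resp       = v_cvv_code
       AND booking_source IS NOT NULL;
END;
/
```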

Thu Nov 05, 06:54:00 AM EST  

Anonymous Andrew said....

Hi Tom,

I attended your PL/SQL presentation at the St. Louis Oracle Users Group. You had provided several tips on tuning SQL procedures. We have a procedure which processes EDI orders that was running between an hour and a half and two hours depending on volume. I used your tips and modified the procedure accordingly. I benchmarked processing one day's EDI orders using the original procedure and the modified version. The original version ran in 1 hour and 40 minutes, and the modified version completed in 26 minutes.

Thanks for those tips!
Andrew

Thu Nov 12, 04:28:00 PM EST  

Blogger up_s said....

Hi Tom,

I wanted to know: how do we delete from a view without using INSTEAD OF triggers?

I'll be obliged if you could help me with this.

Thank You.
Upasna

Tue Nov 24, 11:51:00 AM EST  

Blogger Thomas Kyte said....

@up_s

if you have a simple view whereby the delete is unambiguous - you can just delete from the view.

if you have a complex view - whereby we cannot figure out what it means to delete from it - you have to program it via an instead of trigger.

actually, I would say if yours is the second case, do NOT delete from that view; be unambiguous in your code and delete from the base tables.
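A minimal sketch of the two cases (the table, view, and column names are invented for illustration):

```sql
-- Simple view: the delete is unambiguous, so you can delete from the view
CREATE OR REPLACE VIEW active_emps AS
    SELECT empno, ename FROM emp WHERE status = 'ACTIVE';

DELETE FROM active_emps WHERE empno = 7369;   -- really deletes from EMP

-- Complex view (e.g. a join of EMP and DEPT): the delete is ambiguous, so
-- either program an INSTEAD OF trigger - or, better, skip the view and
-- delete from the base table directly
DELETE FROM emp WHERE empno = 7369;
```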

Tue Nov 24, 12:09:00 PM EST  

Anonymous Anonymous said....

Hi Tom

I've got a tab separated file with double quotes around character fields (including nulls). However, on number fields there are no double quotes (including nulls).
I've tried using the following syntax. This loads the character strings but for number fields, it skips the null values thereby displacing the data.

LOAD DATA
INFILE 'hello.txt'
TRUNCATE
INTO TABLE hello
replace
FIELDS TERMINATED BY X'9' OPTIONALLY ENCLOSED BY '"'
TRAILING NULLCOLS

Please can you let me know an easy way round this.

Also I don't have a specification for this file and I have to assume that anything with double quotes is a string and anything that doesn't have double quotes is a number.

Many Thanks

Wed Dec 02, 11:31:00 AM EST  

Blogger Sanjay said....

We have a package developed using Oracle 11g & Apex 3.1. We would like to add a licence key so that usage cannot continue beyond a certain date or exceed a certain number of concurrent users.

Any pointers, samples, suggestions, comments would be very welcome.

Thanks

Sanjay

Fri Dec 04, 10:52:00 AM EST  

Blogger Anand said....

Hi Tom,

Kindly post the link for

http://asktom.oracle.com/tkyte/hj/hj.exe

Regards,
Anand

Sun Dec 13, 02:17:00 AM EST  

Blogger StartServiceCtrlDispatcher failed with err 1063. said....

Hi Tom,

I set up Data Guard on Oracle9i.
I failed over to the standby database; after fixing the primary database, I tried to switch the new primary back over to standby,
but I got this error:



SQL> alter database commit to switchover to physical standby with session shutdown;
alter database commit to switchover to physical standby with session shutdown

*
ERROR at line 1:
ORA-16014: log 1 sequence# 17 not archived, no available destinations
ORA-00312: online log 1 thread 1: 'C:\STY\REDO01.LOG'


The information in alert log is


ARCH: noswitch archival of thread 1, sequence 17
ARCH: End-Of-Redo archival of thread 1 sequence 17
ARCH: Evaluating archive log 1 thread 1 sequence 17
ARCH: Archiving not possible: No standby destinations
ARCH: Failed to archive log 1 thread 1 sequence 17
ORA-16014 signalled during: alter database commit to switchover to physical st...
Tue Dec 15 14:11:25 2009
ARCH shutting down
Tue Dec 15 14:11:25 2009
ARC1: Archival stopped

Please help me
I appreciate in advanced

Regards,
Reza

Tue Dec 15, 02:36:00 PM EST  

Blogger vijay said....

can u plz explian how to recover database with missing archiev logs


I'm clonning a database from hotbackup (bcv) without online redo. I'm
also missing some archive logs.


SQL> recover database using backup controlfile until cancel;
ORA-00279: change 5366589940848 generated at 06/19/2009 14:27:00 needed
for
thread 1
ORA-00289: suggestion :
/env330/Noradb/sanddb/rdbms_11g/rdbms_11g_base/product/11.1.0/dbs/arch1_304174_506509196.dbf
ORA-00280: change 5366589940848 for thread 1 is in sequence #304174

Specify log: {<RET>=suggested | filename | AUTO | CANCEL}
ORA-00280: change 5366589940848 for thread 1 is in sequence #304174


I have archive logs with sequence #304178 but no archive log with sequence #304174.

Wed Dec 16, 12:13:00 AM EST  

Blogger Thomas Kyte said....

@Vijay

"U" isn't available, "U" is dead as far as I know. Look it up, it is true.

http://en.wikipedia.org/wiki/U_of_Goryeo

(just a word of advice, using 'text' speak makes you sound like a 12 year old)


Please think about this - if you are missing 304174 - you cannot recover beyond 304173. Isn't that obvious?

How could you just "skip" a redo log file?

Wed Dec 16, 07:12:00 AM EST  

Anonymous Anonymous said....

In Dutch, 'U' is a more polite variation of 'you' (like thou).


:)


remi@

Wed Dec 16, 01:41:00 PM EST  

Blogger rmei said....

Forgot to mention, 'U' is the only word in Dutch that has a length of only one character.

http://nl.wikipedia.org/wiki/U_%28letter%29

:)


All the best, Tom.

remi, the netherlands.

Wed Dec 16, 01:46:00 PM EST  

Anonymous AHA said....

Hi Tom,

I have a requirement to check if a file exists on the Oracle Application Server or not in a pl/sql procedure itself. Is there a way to do this? All I know is that I could do it if it was on the database server itself. But why not on the OAS? Can you please help?

Thanks in advance,
AHA

Sun Dec 20, 02:28:00 AM EST  

Blogger Thomas Kyte said....

@Aha

you cannot, for the same reason that I cannot tell if you have a certain file on your desktop computer.

If I could, it would be very virus-like, wouldn't it? PLSQL runs in the server - the database server. It can see only that which the database server machine can see.

So, just like code running in your application server cannot read the database file system, the database cannot read your application servers file system.

Unless you mount the directory on the database server of course, then it could.

Sun Dec 20, 04:12:00 AM EST  

Blogger Jay said....

Hi Tom,

I would like to know

1. What happens internally when we issue the command ALTER TABLE ... MOVE PARTITION ... TABLESPACE ...?

2. Will the above command work in a RAC environment?

Regards

Jay

Tue Dec 22, 12:30:00 AM EST  

Blogger Thomas Kyte said....

@Jay

the proper place to ask questions would be http://asktom.oracle.com

1) we read old partition segment, we format new, compact, clean blocks, we write them to a new partition segment.

When we've read every old block and written every new block, we drop the old segment and point to the new segment as if it were the old one.

Basically, it "moved the partition"

2) yes
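For illustration (all names invented): the move is plain DDL, but because it rewrites the segment the rowids change, so the affected index partitions are left unusable and need a rebuild afterwards:

```sql
ALTER TABLE sales MOVE PARTITION p_2009 TABLESPACE new_ts;

-- the corresponding local index partition (and any global indexes) will be
-- UNUSABLE after the move and must be rebuilt
ALTER INDEX sales_idx REBUILD PARTITION p_2009;
```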

Tue Dec 22, 08:38:00 AM EST  

Blogger chidambaram said....

Tom ,
Good Morning,
Looking for this link
http://asktom.oracle.com/~tkyte/Misc/su.html .
Could you please provide .
Thank you
CT

Wed Dec 23, 06:12:00 AM EST  

Blogger chidambaram said....

Hi Tom,
Kindly ignore my previous request , found it in this blog itself .
Thank you
CT

Wed Dec 23, 06:18:00 AM EST  

Blogger santu said....

HI tom i have heard many things abt you and assure you 'll surely help me?
can u tell me
without using merge statement how could we
achieve the result of merge statement?

Sun Dec 27, 12:47:00 PM EST  

Blogger Thomas Kyte said....

@Santu

If you've heard many things about me, you should be expecting to be made fun of then

http://tkyte.blogspot.com/2006/01/im-speak.html

Anyone that uses the writing abilities of a "12 year old on facebook" - chatting with their friends - should.

abt?
u? "U" isn't available, "U" is dead as far as I know. Look it up, it is true.

http://en.wikipedia.org/wiki/U_of_Goryeo



But in short, if you want to insert/update and delete in a SINGLE DML statement, you'll be using merge.

If you want to do "merge" without using "merge", well, you won't - you would need multiple statements (an insert, an update and a delete) to accomplish that.
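A sketch of that single statement (target, source, and their columns are invented names) - MERGE can update, conditionally delete, and insert in one pass:

```sql
MERGE INTO target t
USING source s
   ON (t.id = s.id)
WHEN MATCHED THEN
    UPDATE SET t.val = s.val
    DELETE WHERE (s.val IS NULL)   -- matched rows meeting this are removed
WHEN NOT MATCHED THEN
    INSERT (id, val) VALUES (s.id, s.val);
```

Doing the same without MERGE takes (at least) an UPDATE, a DELETE, and an INSERT.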

Sun Dec 27, 05:30:00 PM EST  

Blogger santu said....

Dear
tom ,it was really a rejoice moment
after hearing from you.if u hurt u please pardon me for that.Really i am
a big fan of you.The service u r doning is great and you are really the guru.i got my answer regarding merge statements.
at last i wish a bunch of Thanks 2 u
and for great service.

Mon Dec 28, 12:38:00 PM EST  

Blogger santu said....

Hi
tom good morning..
i think it(without using merge achieve the result set of merge) will be achievable
using pl/sql.
But it too complex writing the same or through rowid and changed blocks can be done. There are some front end application program
like Power Builder Pipe function does the same.i am little bit confused can u help me regarding this with a suitable example?
please elaborate it.

Mon Dec 28, 12:43:00 PM EST  

Blogger santu said....

Again tom i am extermly sorry
for the charting habit but i think
you will not mind for that.i'll try
to avoid it BUt would you please help me regarding that with a exampilary manner,or publish any answer on your Archieve contents.
or send me any link which help me.
i will be always obliged for that.

Mon Dec 28, 12:59:00 PM EST  

Blogger Thomas Kyte said....

@Santu

The only and best advice I can give you for this is....

"but i think you will not mind for that."

Is entirely false. When people see you speaking like a 12 year old texting with their friends, such as this:

"The service u r doning is great"

they immediately downgrade their opinion of you.


As for "how to do merge without merge", I'll leave that as an exercise for you to develop. For the record, I don't reinvent wheels - I use what is available.

I question the motives of a coder that wants to procedurally do what can be done in a single SQL statement.

There is no reason to write code to do that which already exists.

So, my final answer will be simply:


USE MERGE

Mon Dec 28, 03:53:00 PM EST  

Blogger santu said....

Hi
toms
thanks for your reply
i got my answer.

Tue Dec 29, 08:47:00 AM EST  

Blogger santu said....

Hi
tom i have a another querry
please help me.
how could i confirm wheather
a pl/sql table is empty or not?

Tue Dec 29, 08:51:00 AM EST  

Blogger Thomas Kyte said....

@Santu

This is not really the place for questions and answers... That would be asktom


but the documentation can be simply amazing sometimes for finding out simple things like this...

My first guess where to look would be the PL/SQL User's Guide and Reference...

From the table of contents I found Using collection methods which in turn led me to

Using COUNT which shows how to count the number of elements in a collection/table...
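In short, the COUNT collection method answers the question directly - a minimal sketch:

```sql
DECLARE
    TYPE num_tab IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
    l_tab num_tab;
BEGIN
    IF l_tab.COUNT = 0 THEN
        dbms_output.put_line('the PL/SQL table is empty');
    END IF;
END;
/
```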

Tue Dec 29, 09:06:00 AM EST  

Anonymous Mohannad said....

Dear Tom,
I was searching for a subject on Ask Tom, which led me to a thread where you were discussing the use of implicit cursors, as they are faster than explicit ones. It was a surprise to me, because since I started writing SQL I was told never to use SELECT INTO (by my instructor, my team leader, ...); they claimed it was no longer recommended in Oracle! So after all those years I found by chance that they were wrong, as you had proven. At this moment I feel disappointed, as it is possible that many of our programming habits are wrong, or not optimal. So is there any place to see what is not recommended, or what is better or faster?

Sun Jan 03, 09:25:00 AM EST  

Blogger Thomas Kyte said....

@Mohannad

Do it the way I've always done it.

Take nothing at face value, take no one's "word for it". Things change - advice you receive today might have been valid 10 years ago - but not anymore. Constantly re-evaluate.

Look at the page you reference - regarding implicit/explicit cursors. How did I convince you? By example - by something you can take and evaluate on your own. By showing you something that you can ask questions about - by giving you a thing that you can try to pick apart and say 'yeah, but what about if.....'


If I just wrote, in bold italic:

Explicit cursors for select into operations are worse than implicit cursors in PL/SQL.


And someone else just wrote, in bold italic with underlines:

Implicit cursors for select into operations are worse than explicit cursors in PL/SQL.


You would be stuck - you might take the second bit of "advice" because they used underlines in addition to bold and italics!


But, since I said:

Implicit cursors in PLSQL do this .....

This is necessary because of .....

And - they perform at least as well as, and typically better than, explicit cursors as demonstrated here: .....


You now have some compelling evidence that you can test on your version of Oracle, under your conditions, using your operating system, to see if I am correct - or not.


And remember, things change, the advice given to you for version 8i would be/could be/might be different than the advice given to you for version 9i and in 10g it'll change again and in 11g it'll become obsolete once more and need revision.


If things didn't change, if you were supposed to be doing things the way we did things in version 6 in 1989 - why would the other releases even exist....
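For reference, the two cursor forms under discussion look like this (assuming the classic SCOTT.EMP demo table); the point above is to benchmark them yourself rather than take anyone's word for it:

```sql
DECLARE
    l_ename emp.ename%TYPE;
    CURSOR c IS SELECT ename FROM emp WHERE empno = 7369;
BEGIN
    -- implicit: Oracle opens, fetches (raising NO_DATA_FOUND or
    -- TOO_MANY_ROWS as appropriate) and closes for you
    SELECT ename INTO l_ename FROM emp WHERE empno = 7369;

    -- explicit: the same work spelled out by hand
    OPEN c;
    FETCH c INTO l_ename;
    CLOSE c;
END;
/
```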

Sun Jan 03, 10:51:00 AM EST  

Anonymous Anonymous said....

Hello, can you tell me how to use CASE or DECODE in counting the number of employees who have switched jobs? The data is in two tables, emp and job_history; only the employee id is common to both.

Mon Jan 04, 06:21:00 AM EST  

Blogger Gary said....

Hi Tom,

I am attempting to design an Oracle Application Express app that will allow me to select a client side text file, read in the contents, and automatically write each line as a record into an oracle table.

I do not want to store the whole file in a CLOB. I do not want my users to have to log into APEX and use the export tool.

I do not want to export the client file to the server first.

I do not have access to TEXT_IO as our installation of Oracle 10g does not include Oracle Forms.

Any assist you can render would be greatly appreciated.

Gary

Tue Jan 05, 12:01:00 PM EST  

Blogger Thomas Kyte said....

@Gary,

that is a lot of "i do not wants" there, you are going to have to compromise somewhere - or the answer is "you won't be doing it then"


If the client has a file - the client MUST get it to the server if the server wants to read/process it (it would be very virus like if the server could reach out and just grab that file, wouldn't it).

You will create an APEX form that does a file upload, you will load the file into a lob in the document table you set up for your application. You can then - during the processing of that page - access the LOB that was uploaded, read it using dbms_lob, process it - do whatever you want with it - and then delete it

But you will store it in the database for a moment as a LOB, so your code on the server side can read it and process it.

You will do that or you will be using something else that runs natively on the client to process the file - there is no other way, the file MUST be transmitted from the client machine to where the code that wants to process that file resides (be it on the middle tier or database side - you will be moving the file from the client)

Unless you write a native application that actually runs on the client.
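A rough sketch of the server-side processing step described above (every name here - the docs table, its columns, target_table, the document id - is invented; the real upload lands wherever your APEX file-upload item puts it):

```sql
DECLARE
    l_doc_id CONSTANT NUMBER := 123;  -- id of the just-uploaded row (hypothetical)
    l_clob   CLOB;
    l_pos    PLS_INTEGER := 1;
    l_nl     PLS_INTEGER;
BEGIN
    -- read the staged LOB
    SELECT file_content INTO l_clob FROM docs WHERE doc_id = l_doc_id;

    -- walk newline to newline, inserting one row per line
    LOOP
        l_nl := dbms_lob.instr(l_clob, chr(10), l_pos);
        EXIT WHEN nvl(l_nl, 0) = 0;
        INSERT INTO target_table (line_text)
        VALUES (dbms_lob.substr(l_clob, l_nl - l_pos, l_pos));
        l_pos := l_nl + 1;
    END LOOP;
    -- (a final line with no trailing newline would need one extra read)

    DELETE FROM docs WHERE doc_id = l_doc_id;  -- done with the staged copy
END;
/
```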

Tue Jan 05, 12:16:00 PM EST  

Anonymous Anonymous said....

Can you tell me where this article is?
http://asktom.oracle.com/~tkyte/article1/index.html

Wed Jan 06, 09:33:00 AM EST  

Blogger Thomas Kyte said....

you can get it here


http://asktom.oracle.com/~tkyte/article1/index.html

Wed Jan 06, 10:44:00 AM EST  

Blogger Gary said....

Hi Tom,

First of all, thanks for your help with my question on loading data from a client side file into Oracle. I bit the bullet and I did as you suggested.

Now, I’m trying to come up with a way to pass the username to my APEX app when I call it from Infoview so the user won't need to log in.

We don't have SSO yet, so I can't use that, and I'm not concerned with security here, as the user will only have access to his own schema. Thus, no password is required.

I think that I might be able to use OWA_COOKIE.GET for this. I’ve read the white paper on how it works but questions remain.

The function call as used in the APEX login for example looks like this:

c := owa_cookie.get('LOGIN_USERNAME_COOKIE');

Q1:
Is the string, 'LOGIN_USERNAME_COOKIE', actually a cookie that has that name, and exists in Temporary Internet Files on the client?

I could not locate it with a search.

Q2:

Is the string a pointer to an actual cookie? If so, then where is the path/cookiename set to it?

Thanks for your help,

Gary

Fri Jan 08, 09:19:00 AM EST  

Blogger Thomas Kyte said....

@Gary

and I'm not concerned with security here, as the user will only have access to his own schema. Thus, no password is required.


which means no username is required either as I (as an end user) would be able to tell you I am *anyone* I want to tell you I am.


Cookies are an internet standard - they could be session based, or persistent (stored in a cookies.txt file on the client machine). Either way - It doesn't matter, on the server side, you don't really care where the client stores it.

Probably, if it is credentials, and if it comes from a properly coded application it would be:

a) encrypted (so you, the client, cannot mess with it)

b) non-persistent (NOT stored in a file, only session based)

c) available ONLY to the domains that are known to need it

Meaning, if they did it right, you won't have access to it, you are not part of their framework.


But since you say:

and I'm not concerned with security here, as the user will only have access to his own schema. Thus, no password is required.

I don't even see why you need a username, as you could not really count on it being a true username - if there is no validation of the credentials, I would just set it to whatever I wanted to in the client and trick you.

Fri Jan 08, 09:23:00 AM EST  

Blogger Gary said....

Hi Tom,

The reason I need to supply the username is so that my APEX app will access the correct user's schema in oracle. The APEX workspace is set up with multiple user schemas.

If I can't supply it with a cookie, perhaps there is another way to pass the username?

I just need to know at this point how to make my APEX app grab the username auto-magically when it opens.

Thanks,

Gary

Fri Jan 08, 02:28:00 PM EST  

Blogger Thomas Kyte said....

@Gary

path of least resistance is to have a DAD per schema, so that unqualified references go to a DAD set up for that unprotected schema. So, you just look at whatever you want to figure out who it is and use owa_util.redirect_url to fling them over.


Else, you can maybe use "alter session set current_schema = ...." to set up the default schema name.

Fri Jan 08, 02:53:00 PM EST  

Blogger Gary said....

Hi Tom,

Thanks for your continuing assistance.

The problem I'm having is that I want to allow multiple users to open my app from Infoview and have my APEX app automatically know who they are without the need to log in.

I'm sorry, I should have mentioned that I'm running APEX via mod_plsql on a separate server, not embedded in Oracle, so the app does not know who is calling it w/o a login. I could do it with an SSO server, but we don't have one yet.

I need some other way to let APEX know who the calling user is, like parsing the username from the URL. But, even if I figure out how to embed the username in a URL, I have no clue how to grab it in APEX.

Fri Jan 08, 03:14:00 PM EST  

Blogger Thomas Kyte said....

@Gary

that will be a question for "infoview", you'd need to ask them something like

"I have an application, it is a web application, it would like to know what the username as you see it is - how can I make that available to me"

They might say:

a) cannot be done, it's encrypted
b) it is hidden in this cookie, but we only send it to domains X, Y and Z (and you are on domain a)
c) it is buried in this URL, if you use http_referer in the 'cgi' environment - you'll have it
d) something else

Things like a and b will mean "it cannot be gotten", things like c and d will mean you can do it, but they have to give you a clue.

Fri Jan 08, 03:22:00 PM EST  

Blogger Thomas Kyte said....

@Gary again

and how to grab it in APEX will be the result of C or D; you'll need to hear what they say and then use the appropriate APIs to get it (e.g. if they say "cookie, sent to all domains, name of cookie is FOO", then the owa_cookie package will be useful)

Fri Jan 08, 03:23:00 PM EST  

Blogger g3g6 said....

I am looking for the installation files for old Oracle 7.3.4. Can anybody help me?

Thu Jan 14, 06:06:00 AM EST  

Blogger Thomas Kyte said....

@g3g6

obvious licensing issues aside, you'd need to be a tad more specific.

eg: what old piece of hardware running what really old OS are you going to be using.

Not that I have the software - just saying that if you expect to get any responses, you'd really need to be more forthcoming with details.

Thu Jan 14, 09:27:00 AM EST  

Anonymous Anonymous said....

Please help me. I am stranded on a desert island in the middle of the Sahara. I am only allowed to read one SQL book to become the greatest SQL master of the entire universe. Mwahh ahh ahh. What is that book?

Truth: My job changed and they dumped a bunch of database tasks on me. I know absolutely nothing about Oracle, SQL, etc. Even if I don't learn much, I still need to be able to talk "shop-talk" with the existing I.T. department so I don't look like a complete idiot. Just a partial idiot is OK with me.

I would be forever grateful if you could provide me with the best SQL books out there. And I know you must have been asked this question a million times before, but this time you will be helping somebody save their job.

Fri Jan 15, 05:04:00 PM EST  

Blogger Thomas Kyte said....

@Anonymous

for SQL starters, perhaps Joe Celko's "SQL for Dummies"

For getting to understand the thing you are using - the 11gR2 or above Concepts guide (available free on otn.oracle.com). Regardless of the version of Oracle you have, read the Concepts guide for the current release.

Fri Jan 15, 06:46:00 PM EST  

Blogger g3g6 said....

Hi Tom,

we use old COMPAQ server with NT 4.0 server as OS.

We need to migrate an old application (Lekar, from www.gubi.cz) which uses Oracle 7.3.4. However, after all these years the source media are unreadable, and the author doesn't have the media either.

Wed Jan 20, 03:41:00 AM EST  

Blogger sagar said....

Hi Tom,

Your Answers were superb.
I have one question; please help.

We had an issue with the job queue, and the job queue processes got bounced.

After bouncing the job queue, one process got stuck and the queue is getting long; the other processes are running fine.

Please help me resolve this.

Wed Jan 27, 02:54:00 AM EST  

Anonymous Prasant Bijuli said....

Hi Tom,

My requirement is to update the target table column value based on the source table column value.

Source table - OM_FISCAL_MONTH
Target table - SRS_OTS_ADJUSTMENTS_F

UPDATE SRS_OTS_ADJUSTMENTS_F a
SET a.FISCAL_MONTH_KEY =
(SELECT b.FISCAL_MONTH_KEY FROM OM_FISCAL_MONTH b
WHERE b.EDWO_FISCAL_MONTH_KEY = a.FISCAL_MONTH_KEY
);

but the problem is that the target table has 100 million records and only 400 records are in the source table.

First approach:

CREATE OR REPLACE PROCEDURE SRS_STAGE.sk_update
(
    i_lz_mapping_table   IN VARCHAR2
   ,i_target_table       IN VARCHAR2
   ,i_target_column      IN VARCHAR2
   ,i_lz_oracle_column   IN VARCHAR2
   ,i_lz_teradata_column IN VARCHAR2
)
IS
    lv_upd_stmt VARCHAR2(255);
BEGIN
    lv_upd_stmt := ' UPDATE ' || i_target_table || ' a ' ||
                   ' SET a.' || i_target_column || ' = ' ||
                   ' ( SELECT b.' || i_lz_teradata_column ||
                   ' FROM ' || i_lz_mapping_table || ' b ' ||
                   ' WHERE b.' || i_lz_oracle_column || ' = a.' || i_target_column || ' ) ';
    --dbms_output.put_line ('lv_upd_stmt : ' || lv_upd_stmt);
    EXECUTE IMMEDIATE lv_upd_stmt;
    DBMS_OUTPUT.PUT_LINE('Total Number of Records Updated : ' || SQL%ROWCOUNT);
    COMMIT;
END;
/

Second approach (bulk update):

CREATE OR REPLACE PACKAGE SRS_STAGE.update_sk_pkg_test_bulk IS

    TYPE t_crs IS REF CURSOR;

    PROCEDURE dyn_sel (
        i_lz_mapping_table   IN VARCHAR2
       ,i_target_table       IN VARCHAR2
       ,i_target_column      IN VARCHAR2
       ,i_lz_oracle_column   IN VARCHAR2
       ,i_lz_teradata_column IN VARCHAR2
       ,crs                  IN OUT t_crs
    );

    PROCEDURE update_sk_test_bulk (
        i_lz_mapping_table   IN VARCHAR2
       ,i_target_table       IN VARCHAR2
       ,i_target_column      IN VARCHAR2
       ,i_lz_oracle_column   IN VARCHAR2
       ,i_lz_teradata_column IN VARCHAR2
    );

END update_sk_pkg_test_bulk;
/

CREATE OR REPLACE PACKAGE BODY SRS_STAGE.update_sk_pkg_test_bulk AS

    PROCEDURE dyn_sel (
        i_lz_mapping_table   IN VARCHAR2
       ,i_target_table       IN VARCHAR2
       ,i_target_column      IN VARCHAR2
       ,i_lz_oracle_column   IN VARCHAR2
       ,i_lz_teradata_column IN VARCHAR2
       ,crs                  IN OUT t_crs
    )
    IS
        stmt VARCHAR2(1000);
    BEGIN
        stmt := 'SELECT a.' || i_target_column || ', b.' || i_lz_teradata_column ||
                ' FROM ' || i_lz_mapping_table || ' b ,' || i_target_table || ' a ' ||
                ' WHERE b.' || i_lz_oracle_column || ' = a.' || i_target_column;
        OPEN crs FOR stmt;
    END dyn_sel;

    PROCEDURE update_sk_test_bulk (
        i_lz_mapping_table   IN VARCHAR2
       ,i_target_table       IN VARCHAR2
       ,i_target_column      IN VARCHAR2
       ,i_lz_oracle_column   IN VARCHAR2
       ,i_lz_teradata_column IN VARCHAR2
    )
    IS
        tc t_crs;
        TYPE number_table_type IS TABLE OF NUMBER INDEX BY BINARY_INTEGER;
        tab1 number_table_type;
        tab2 number_table_type;
    BEGIN
        dyn_sel(i_lz_mapping_table, i_target_table, i_target_column,
                i_lz_oracle_column, i_lz_teradata_column, tc);
        LOOP
            FETCH tc BULK COLLECT INTO tab1, tab2 LIMIT 100000;
            IF tab1.COUNT > 0 THEN
                FORALL i IN 1 .. tab1.COUNT
                    EXECUTE IMMEDIATE ' UPDATE ' || i_target_table || ' a ' ||
                        ' SET a.' || i_target_column || ' = :1 where ' ||
                        i_target_column || ' = :2' USING tab2(i), tab1(i);
                COMMIT;
            END IF;
            EXIT WHEN tc%NOTFOUND;
        END LOOP;
    END update_sk_test_bulk;

END update_sk_pkg_test_bulk;
/


But both approaches are taking too much time, more than 4 hours.

Could you please suggest how to solve this problem?

Wed Jan 27, 09:55:00 AM EST  

Anonymous Anonymous said....

Hi, Tom

Can you please explain the technical reasoning behind doing a + 0 or a || ''
to a column in a where clause as the following:

SELECT bc.tid_id,
n.entity_name,
a.street_1_addr,
a.city_addr,
a.state_cd,
a.zip_cd
FROM rti_bus_addr a,
rti_bus_addr_xref x,
rti_bond_companies bc,
rti_ids id,
rti_bus_names n
WHERE bc.bond_co_type_cd = 'LC'
AND id.external_id = bc.tid_id || ''
AND id.loc_nbr = bc.loc_nbr
AND id.id_type_cd = 'TID'
AND id.id_stat_cd = 'V'
AND n.internal_id = id.internal_id + 0
AND n.name_type_cd = 'PRI'
AND x.internal_id = id.internal_id + 0
AND x.addr_type_cd = 'P'
AND x.addr_stat_cd = 'A'
AND x.end_dt > TRUNC(sysdate)
AND a.address_id = x.address_id + 0
AND n.internal_id = x.internal_id + 0
ORDER BY n.entity_name;

The internal_id is a numeric column and the tid_id is a varchar.

Your help in understanding the reasoning is appreciated.

Wed Jan 27, 10:10:00 AM EST  

Blogger Thomas Kyte said....

@prasant

Your update:

UPDATE SRS_OTS_ADJUSTMENTS_F a
SET a.FISCAL_MONTH_KEY =
(SELECT b.FISCAL_MONTH_KEY FROM OM_FISCAL_MONTH b
WHERE b.EDWO_FISCAL_MONTH_KEY = a.FISCAL_MONTH_KEY
);

updates all 100,000,000 records - is that what you MEANT to do? Or did you mean to update only SOME of the records?

Instead of showing us what you did..... explicitly tell us what you are trying to do.


If you only want to update records that are in the OM_FISCAL_MONTH table, probably


UPDATE
(select a.fiscal_month_key akey,
b.fiscal_month_key bkey
from SRS_OTS_ADJUSTMENTS_F a,
OM_FISCAL_MONTH b
where b.EDWO_FISCAL_MONTH_KEY = a.FISCAL_MONTH_KEY
)
set akey = bkey;


is what you meant to do. You need a primary key on b.edwo_fiscal_month_key.
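For context on why that key matters: Oracle only allows an updatable join view to be modified when the joined table is key-preserved; without a unique constraint on the join column the update raises ORA-01779. A minimal sketch (the constraint name here is made up):

```sql
-- make OM_FISCAL_MONTH key-preserved in the join,
-- so the updatable-join-view form above is allowed
ALTER TABLE om_fiscal_month
  ADD CONSTRAINT om_fiscal_month_uk UNIQUE (edwo_fiscal_month_key);
```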

Wed Jan 27, 04:44:00 PM EST  

Blogger ajtih said....

Dear Tom,

We generate files (.dat) from Oracle tables using the UTL_FILE package for our data warehouse.

When we execute the extraction procedures, sometimes they generate the files and sometimes they do not, throwing the error "ORA-29283: invalid file operation".

Please find the procedure below.

CREATE OR REPLACE PROCEDURE SP_ORGMSTEE_EXTRACT
IS
CURSOR STR_CUR IS SELECT
ORG_LVL_CHILD ,
ORG_LVL_PARENT ,
ORG_LVL_ID,
ORG_NAME_FULL,
ORG_NAME_SHORT,
ORG_LVL_NUMBER ,
CURR_CODE,
ORG_IS_STORE,
ORG_E3_XREF
FROM ORGMSTEE;
DIM_STR_REC ORGMSTEE%ROWTYPE;
v_filehandle UTL_FILE.FILE_TYPE;
v_err_code NUMBER;
v_err_msg VARCHAR2(2000);
v_fname VARCHAR2(50);
v_fdate VARCHAR2(30);
v_file BOOLEAN;
v_delimit VARCHAR2(30) :='ý' ; --Declaring variable for specifying Delimiter
BEGIN
SELECT TO_CHAR(SYSDATE,'YYYYMMDD') INTO v_fdate FROM DUAL;
v_fname := 'ORGMSTEE'||'_'||v_fdate;
-- Closing the file , if it is already opened.
v_file := UTL_FILE.IS_OPEN(v_filehandle);
IF v_file=TRUE THEN
UTL_FILE.FCLOSE(v_filehandle);
END IF;
--Opening the file DIM_STORE_ORGMSTEE in write mode .
v_filehandle:=UTL_FILE.FOPEN('RIPLEY_DIR',v_fname||'.dat','W');
UTL_FILE.FFLUSH(v_filehandle);
OPEN STR_CUR ;
LOOP
FETCH STR_CUR INTO DIM_STR_REC ;
EXIT WHEN STR_CUR%NOTFOUND;
UTL_FILE.PUT_LINE(v_filehandle,
DIM_STR_REC.ORG_LVL_CHILD||v_delimit||
DIM_STR_REC.ORG_LVL_PARENT||v_delimit||
DIM_STR_REC.ORG_LVL_ID||v_delimit||
LTRIM(RTRIM(DIM_STR_REC.ORG_NAME_FULL))||v_delimit||
LTRIM(RTRIM(DIM_STR_REC.ORG_NAME_SHORT))||v_delimit||
DIM_STR_REC.ORG_LVL_NUMBER||v_delimit||
LTRIM(RTRIM(DIM_STR_REC.CURR_CODE))||v_delimit||
LTRIM(RTRIM(DIM_STR_REC.ORG_IS_STORE))||v_delimit||
LTRIM(RTRIM(DIM_STR_REC.ORG_E3_XREF))||v_delimit||TO_CHAR(SYSDATE,'DD-MON-YY')
);
UTL_FILE.FFLUSH(v_filehandle);
END LOOP;
UTL_FILE.FCLOSE(v_filehandle);
CLOSE STR_CUR;
END;

Thanks and Regards,
Karthik

Thu Feb 04, 05:43:00 AM EST  

Blogger Thomas Kyte said....

@karthik

please tell me how this could ever happen?


v_file := UTL_FILE.IS_OPEN(v_filehandle);
IF v_file=TRUE THEN
UTL_FILE.FCLOSE(v_filehandle);
END IF;


????? Since v_file is a local variable, it will ALWAYS BE NULL - always, every single time. If you are hoping to close a file opened by some prior call to this routine, you are out of luck; it'll never happen (you might need some error handling in the code to deal with that!)


As for the invalid file operation, it sounds like you are trying to write to a directory you don't have write permission on.


See, like this:

ops$tkyte%ORA10GR2> !ls -ldg /tmp/dir?
drwxrwxrwx 2 tkyte 4096 Feb 4 07:00 /tmp/dir1
drwxr-xr-x 2 tkyte 4096 Feb 4 06:56 /tmp/dir2

ops$tkyte%ORA10GR2> create or replace directory dir1 as '/tmp/dir1';

Directory created.

ops$tkyte%ORA10GR2> create or replace directory dir2 as '/tmp/dir2';

Directory created.

ops$tkyte%ORA10GR2> exec dump_table_to_csv( 'SCOTT.EMP', 'DIR1', 'tkyte.emp' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> exec dump_table_to_csv( 'SCOTT.EMP', 'DIR2', 'tkyte.emp' );
BEGIN dump_table_to_csv( 'SCOTT.EMP', 'DIR2', 'tkyte.emp' ); END;

*
ERROR at line 1:
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 488
ORA-29283: invalid file operation
ORA-06512: at "OPS$TKYTE.DUMP_TABLE_TO_CSV", line 49
ORA-06512: at line 1


ops$tkyte%ORA10GR2> host cat /tmp/tkyte.emp
cat: /tmp/tkyte.emp: No such file or directory



dump_csv is a routine that tries to open a file, if you don't have write on the directory (the oracle software owner doesn't have write) guess what happens!

Thu Feb 04, 07:01:00 AM EST  

Blogger ajtih said....

Dear Tom,

We are able to generate the files in the same directory using the extraction procedures. If I execute the same procedure again from some other session, it throws an error.


If I execute from one session it generates the files; if I execute the same from some other session, it does not.

We use the same directory.

Why does one session generate files while other sessions do not?

We have enough space for that directory.

Thanks and Regards,
Karthik

Thu Feb 04, 08:06:00 AM EST  

Blogger Thomas Kyte said....

@karthik.

please utilize a little bit of your time to INSTRUMENT your code. You are doing something wrong.

The file might already exist (you can actually test for that).

Do you have a list of all the times this procedure is called, its inputs, and the outcome of calling it? Rhetorical question, of course - you don't; you don't have any sort of instrumentation in there (if your code isn't 75% debug code, you are doing it wrong!). Add lots and lots of DEBUG code, find out what you are really doing, and you'll figure this out right away.


I'll get back to IT IS FILE PERMISSIONS. Here is yet another example:

ops$tkyte%ORA10GR2> !ls -ldg /tmp/dir1 /tmp/dir1/*
ls: /tmp/dir1/*: No such file or directory
drwxrwxrwx 2 tkyte 4096 Feb 4 08:17 /tmp/dir1

ops$tkyte%ORA10GR2> create or replace directory dir1 as '/tmp/dir1';

Directory created.

ops$tkyte%ORA10GR2> exec dump_table_to_csv( 'SCOTT.EMP', 'DIR1', 'tkyte.emp' );

PL/SQL procedure successfully completed.

ops$tkyte%ORA10GR2> echo chmod u=rx,g=rx /tmp/dir1/tkyte.emp
SP2-0734: unknown command beginning "echo chmod..." - rest of line ignored.
ops$tkyte%ORA10GR2> !su - ora10gr2
Password: ********
[ora10gr2@dellpe ~]$ chmod a=rx /tmp/dir1/tkyte.emp
[ora10gr2@dellpe ~]$ exit
logout


ops$tkyte%ORA10GR2> !ls -ldg /tmp/dir1 /tmp/dir1/*
drwxrwxrwx 2 tkyte 4096 Feb 4 08:17 /tmp/dir1
-r-xr-xr-x 1 ora10gr2 821 Feb 4 08:17 /tmp/dir1/tkyte.emp

ops$tkyte%ORA10GR2> exec dump_table_to_csv( 'SCOTT.EMP', 'DIR1', 'tkyte.emp' );
BEGIN dump_table_to_csv( 'SCOTT.EMP', 'DIR1', 'tkyte.emp' ); END;

*
ERROR at line 1:
ORA-29283: invalid file operation
ORA-06512: at "SYS.UTL_FILE", line 488
ORA-29283: invalid file operation
ORA-06512: at "OPS$TKYTE.DUMP_TABLE_TO_CSV", line 49
ORA-06512: at line 1


Perhaps your UMASK (I've no clue what OS you might be on :( ) is set such that you can create the file but cannot overwrite it after creating it. I don't know, but I'm sure that if you add some tracing so you can see the flow of your calls and what inputs you received... you'll figure it out in a couple of seconds (after the debug is added - which should have been added the day you wrote it!)

Thu Feb 04, 08:22:00 AM EST  

Anonymous Anonymous said....

Hi Tom, is it possible to share the code for how you do the APEX calendar on your website?

Thu Feb 04, 08:36:00 PM EST  

Blogger Thomas Kyte said....

@anonymous

you can download the entire application from apex.oracle.com - one of the supplied applications.

Thu Feb 04, 09:41:00 PM EST  

Anonymous Anonymous said....

Hi Tom,
Could you please tell me: when an insert/update/delete is executed on a table, does the implicit select do a full table scan on the table? I thought full table scans applied only to SELECT statements, not to insert/update/delete statements. Can you please correct me if I am wrong?

Fri Feb 05, 10:08:00 AM EST  

Blogger Thomas Kyte said....

@Anonymous

insert/update/delete have two parts

a query part, to find the rows to insert/update/delete, and a modification/write part, to do the actual insert/update/delete.


insert into t select * from t;

will almost surely full scan t to find the rows to insert into t.

update t set x = x+5;

will almost certainly full scan....


delete from t;

will almost certainly full scan...
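The flip side, a sketch assuming T has a selective, indexed column X: when the query part has a selective predicate, the optimizer can use index access instead of a full scan:

```sql
create index t_x_idx on t(x);

-- only the matching rows are located (via the index) and modified;
-- no full scan of T is needed when the predicate is selective
update t set x = x + 5 where x = :val;

delete from t where x = :val;
```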

Fri Feb 05, 10:27:00 AM EST  

Anonymous Anonymous said....

Hi Tom,

I came across a weird bug in Oracle 10.2.0.4.0. It seems the execution plan is a mess for the statement in question. Below you will find a commented test case. The test works fine on Oracle 10.2.0.2.0.

Is there a way to force Oracle into using an execution plan that gives the correct result?

Regards,
Bernhard

create table x(
id number,
x number);

create table y(
id number,
y number);

-- case 1: y is less than all x values
insert into x values(1,100);
insert into x values(1,101);
insert into y values(1,99);

-- case 2: y is less than some x value
insert into x values(2,200);
insert into x values(2,201);
insert into y values(2,200);

-- case 3: y is not less than any x value
insert into x values(3,300);
insert into x values(3,301);
insert into y values(3,301);

/* just check: select minimum x for each case and the corresponding y value -> 3 rows */
select (select min(x) from x where x.id = y.id) minx,y.y
from y;

/* good case: select only the one case, where y is less than each x value -> 1 row */
select minx, y from (
select (select min(x) from x where x.id = y.id) minx,y.y
from y
)
where minx > y;

/* check: optimized query (if case-3-like entries make up 99.9% of db content),
only select min(x) for those cases where there is at least one x-value which is bigger than the corresponding y-value -> 2 rows
*/
select (select min(x) from x where x.id = y.id) minx, y.y
from y
where
y.id in (
select distinct x.id from x
join y on x.id = y.id
and x.x > y.y
);

/* BAD CASE: from the above query, try to select the one case where y is less than each x value
result, which is obviously wrong:
MINX Y
100 99
200 200
This result was obtained from oracle 10.2.0.4.0, with 10.2.0.2.0 we get a different explain plan and the correct result
*/
select minx, y from (
select (select min(x) from x where x.id = y.id) minx, y.y
from y
where
y.id in (
select distinct x.id from x
join y on x.id = y.id
and x.x > y.y
)
)
where minx > y;

/* try again, but get distinct result -> we get the correct result. The explain plan has completely changed.
*/
select distinct minx, y from (
select (select min(x) from x where x.id = y.id) minx, y.y
from y
where
y.id in (
select distinct x.id from x
join y on x.id = y.id
and x.x > y.y
)
)
where minx > y;

drop table x;
drop table y;

Thu Feb 18, 11:55:00 AM EST  

Blogger Thomas Kyte said....

use this:
create table x(
id number,
x number);

create table y(
id number,
y number);

-- case 2: y is less than some x value
insert into x values(2,200);
insert into x values(2,201);
insert into y values(2,200);

select minx, y from (
select (select min(x) from x where x.id = y.id) minx, y.y
from y
where
y.id in (
select distinct x.id from x
join y on x.id = y.id
and x.x > y.y
)
)
where minx > y;


To file your bug with support, you have everything you need - I'm not sure why you didn't do that first and foremost.

Thu Feb 18, 05:48:00 PM EST  

Anonymous Md. Nazmul Hossain Mridha said....

hi Tom,
I can't install Oracle9i (9.2) on Windows Vista or Windows 7.

please give me a solution.
Thanking to You,
Nazmul Hossain Mridha.

Thu Feb 25, 04:43:00 AM EST  

Blogger Thomas Kyte said....

@Md. Nazmul Hossain Mridha

the solution would be to use an operating system that is at least as old as the database software you are attempting to use.

Thu Feb 25, 10:36:00 PM EST  

Anonymous Anonymous said....

I am a newbie to database administration and I am looking for a few books to start with. I see your most recent books relate only to 10g. Do you think those would be good enough for 11g as well, or do you suggest any new books?

Mon Mar 01, 12:29:00 PM EST  

Blogger Thomas Kyte said....

Have you read the actual Oracle documentation yet? The 2-day guides to start and the more in-depth guides to follow?

Mon Mar 01, 12:50:00 PM EST  

Anonymous Peter said....

Hi Tom,

Could you please repost your article on Roles and Procedures?

http://asktom.oracle.com/~tkyte/Misc/RolesAndProcedures.html

Thanks!

Fri Mar 05, 02:26:00 PM EST  

Blogger Thomas Kyte said....

@Peter

see the link for Why do I get an ORA-1031 or PL-201 with stored procedures above, it was there already.

Fri Mar 05, 02:46:00 PM EST  

Blogger Ruud said....

Hi Tom,

I've got the following query that runs well on a 10g (10.2.0.4.0) database but throws an ORA-00979 (no GROUP BY-expression) on a 11g (11.2.0.1.0) database.

select vendor_id
, vendor_site_id
, x
, count(*)
from
(
select vendor_id
, vendor_site_id
, to_char(vendor_id) x
from po_headers_all
)
group by vendor_id
, vendor_site_id

Is it intended not to work anymore in 11g or is there anything else going on here?

Thanks & take care,
Ruud.

Wed Mar 10, 11:01:00 AM EST  

Blogger Thomas Kyte said....

@Ruud,

Look at it - it should never have worked. If there is a bug anywhere, it is in the older release.

select a, b, c, count(*)
from t
group by a,b;

should fail - you must group by c. Even when T is an inline view.

Wed Mar 10, 11:11:00 AM EST  

Blogger Gary Myers said....

Ruud's example wasn't quite like that.
You can do a

select object_type, created, to_char(created,'DD/Mon/YYYY') cc, count(*)
from user_objects
group by object_type, created

And Oracle will realise that the TO_CHAR(created) doesn't need to be in the group by as the 'created' is already there.

But when you put it in the in-line view, it works in 10gR2 but fails in 11gR2.

select object_type, created, cc, count(*)
from
(select object_type, to_char(created,'DD/Mon/YYYY') cc, created
from user_objects)
group by object_type, created
/

It is quite legitimate to want to group by the underlying columns, but format those columns differently in the SELECT list.

select count(*)||' objects created on '||to_char(trunc(created),'DD/Mon/YYYY')
from user_objects
group by trunc(created)
/

Splitting that through an inline view shouldn't break it.

Wed Mar 10, 05:01:00 PM EST  

Blogger Thomas Kyte said....

@Gary


Ruud's example wasn't quite like that.
You can do a

select object_type, created, to_char(created,'DD/Mon/YYYY') cc, count(*)
from user_objects
group by object_type, created


Yes, yes it was exactly like that and so is yours.

since when is created = to_char(created,'dd/mon/yyyy')

It isn't, not at all. It was a bug in earlier releases, it should never have worked.


column IS NOT the same as to_char(column)

All non-aggregate, non-constants need to be grouped by.

Wed Mar 10, 06:02:00 PM EST  

Blogger Gary Myers said....

"All non-aggregate, non-constants need to be grouped by."
According to the documentation, you are allowed expressions based on the group by columns.

http://download.oracle.com/docs/cd/B19306_01/server.102/b14200/statements_10002.htm#i2065646
and
http://download.oracle.com/docs/cd/E11882_01/server.112/e10592/statements_10002.htm#i2182483

"#Expressions identical to those in the group_by_clause....
#Expressions involving the preceding expressions that evaluate to the same value for all rows in a group"

The TO_CHAR(date_col) example fits exactly that last item, as it will evaluate to the same value for any given date_col value, which is in the group by.

Wed Mar 10, 07:03:00 PM EST  

Blogger Thomas Kyte said....

first of all, to_char(created,'dd/mon/yyyy') is not the same as created - not nearly. You are forgetting that created has a TIME component, they are not the same - they cannot be treated the same.

secondly, you are not applying a to_char to a group by item. You are selecting three columns from an inline view - there is a difference.

They have corrected this universally in 11g, in 10g - this would have worked by mistake:

drop table t;
create table t
as
select * from all_users;
create or replace view v
as
select user_id a, created b, to_char(created,'dd-mon-yyyy') c
from t;
select a, b, c, count(*) from v group by a, b;


In 11g it (correctly in my opinion, in my reading of everything) fails with "you need to group by that column"
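The fix in 11g is simply to add the third column to the GROUP BY as well - continuing the sketch above:

```sql
-- every non-aggregate item in the select list now appears
-- in the GROUP BY, so this works in both releases
select a, b, c, count(*) from v group by a, b, c;
```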

Wed Mar 10, 09:05:00 PM EST  

Anonymous Anonymous said....

Hello Tom,
I would like to see your mutating triggers example posted.

Thanks!

Tue Mar 16, 11:19:00 AM EDT  

Blogger t said....

The table contains duplicate data. I have to move data to another table. Criteria: check for duplicate values; if duplicates exist, move all duplicates except one to the history table. While moving to the other table, check whether the record being moved already exists.


source table
SOURCE TABLE : ODS_OWNER
grp_id grp_name face_id address1 address2 city zipcode

3456789 NIKE AERO 457899 707 CROFT GRAND RA 12345
1256789 NIKE AERO CORP 678899 707 CROFT SE GRAND RA 12345
5465455 BB SHIPPING 809708 201 SOUTH CT DESPLAINE 45434
3454534 BB SHIPPING INC 980900 201 SOUTH CT DESPLAINE 45434
2343444 CAT BRAKES 565665 P.O BOX 100 JERSEY 12323
3423423 CAT BRAKES 554645 200 CENTER DR CHICAGO 45645

FIRST 4 RECORDS ARE DUPLICATES FROM WHICH 1 RECORD GOES TO w_grp AND ONE GOES TO HISTORY TABLE. THE RECORD WHICH GOES INTO w_grp OUT OF THE DUPLICATES WILL DEPEND ON THE LAST MODIFIED DATE FOR EACH

DISTINCT VALUES GO IN w_grp TABLE

DUPLICATE GO INTO match_his TABLE

Tue Mar 16, 08:56:00 PM EDT  

Blogger Thomas Kyte said....

@t

WHY DO YOU SHOUT AT ME???????


It is funny that you do not describe your concept of "duplicate"

You say that the first four are duplicates, but I see no possible logic to deduce that.

and...

Please use asktom.oracle.com to ask questions - not this blog. You'll find asktom has a search function which will be very useful to you as well.

Wed Mar 17, 09:47:00 AM EDT  

Anonymous Anonymous said....

On Oracle 11.1.0.6 on 64-bit, the memory_target was set to 5G with sga_target and all pool parameters set to 0. pga_aggregate_target was explicitly set to 750MB. However, Oracle would not allocate more than 64MB of RAM to the buffer cache. We looked through multiple snapshots and never found a case where db_cache_size was above 64MB, despite a staggering amount of disk reads. So we tried to set db_cache_size manually to establish a minimum.

SQL> alter system set db_cache_size=1024M;

alter system set db_cache_size=1024M

*

Error at line 1:
ORA-32017: failure in updating SPFILE
ORA-00384: Insufficient memory to grow cache

Fri Mar 19, 10:07:00 AM EDT  

Blogger Thomas Kyte said....

@anonymous

you must have had some other parameter, other than pga_aggregate_target, set greater than zero to observe what you say... I believe you might have - and not realized it.

Fri Mar 19, 10:43:00 AM EDT  

Blogger SAURABH NOLAKHA said....

I am working on Oracle 9i...
as per the requirement I have build the code with the following logic:

1)process the first record..
if id doesnt exist then insert row.....
else call procdure chk_dups(to check duplicates)...
2)chk_dups()...
select records for the particular id into cursor....
process 1st record
loop
chk for all the columns......if matching move record into history table as duplicate.....
else check if its updated record
end loop
if not dup and not updated record...then insert as new record


for processing 2200 records the code is taking 170 secs and for 5500 its taking 700 secs....thats really too much annoying...
am a beginner 1ly..plz suggest sum tips to optimize the code..

Mon Mar 22, 02:36:00 AM EDT  

Blogger Thomas Kyte said....

@SAURABH NOLAKHA

two tips

#1) use full words when attempting to communicate professionally. Things like "1ly..plz " might be fine for use on a phone, with text messaging, between kids - but in real life, it makes you look like a kid on a phone texting a friend.

#2) if you can do it in SQL entirely - do it that way. The best way to speed up your code will be to NOT perform the operation slow by slow, row by row, but rather as a set.

This bit of pseudo code is too vague to comment on:

2)chk_dups()...
select records for the particular id into cursor....
process 1st record
loop
chk for all the columns......if matching move record into history table as duplicate.....
else check if its updated record
end loop


"else check if its updated record"??? what does that mean

and "move record into history table", does that really mean "move the record from the current table into the other table" or "insert the record into history, leaving the current record in the current table" or "something else entirely"

Mon Mar 22, 02:43:00 AM EDT  

Blogger SAURABH NOLAKHA said....

Further to my previous post: the tables don't have any kind of primary key.
This is the sample data for one row:

'ID','DT_REC','H_TECH','PT_ID','ONSET','END_DATE','DURATION','EXTREME_RATE','MEAN_RATE','TOTAL_BEATS','CREATE_DATE','LOAD_DATE','RELOAD_DATE','REC_TYPE'
'H^STAT_MLN-PK-10^73_B032 ^RLZ_20051004073539_20100122101908.xml','20051004073539','D-1','B032 ','04-OCT-200507:48:59','04-OCT-200507:50:59','0:02:00','45','53','105',,'3/22/2010 8:44:03 AM','3/22/2010 8:44:03 AM','N'

Mon Mar 22, 02:48:00 AM EDT  

Blogger Thomas Kyte said....

@@SAURABH NOLAKHA

of course it has a primary key - it had better be ID, or everything you said above is entirely meaningless. How can you match on ID if ID is not the primary key?

and a sample row does not help clarify anything - more "logic" would, but not a sample row - that is meaningless.

Mon Mar 22, 02:58:00 AM EDT  

Blogger SAURABH NOLAKHA said....

Hi Tom,
lets go into more details:-
1)this is the sample data:-one new record followed by a duplicate one:

('H^STAT_MLN-PK-10^143_B039 ^NJT_20051004073741_20100121170333.xml','20051004073741','D-1','B039 ','04-OCT-200507:55:06','04-OCT-200507:56:16','0:01:10','51','55','64',NULL,'MONDAY, MARCH 22, 2010','MONDAY, MARCH 22, 2010','N')
/
INSERT INTO &&table_name
VALUES
('H^STAT_MLN-PK-10^143_B039 ^NJT_20051004073741_20100125144017.xml','20051004073741','D-1','B039 ','04-OCT-200507:55:06','04-OCT-200507:56:16','0:01:10','51','55','64',NULL,'MONDAY, MARCH 22, 2010','MONDAY, MARCH 22, 2010','D')

Now, while comparing the records, I compare 'H^STAT_MLN-PK-10^143_B039 ^NJT_20051004073741' - which is actually SUBSTR(holter_id,1,LENGTH(holter_id)-19), this part of the ID - to see if any record with such an ID (practically the xml name) exists.
So I select all the records with that part of the ID into a cursor and then compare them.
If all the other fields excluding the last three (create date, modified date, record type) are the same, then I insert that entire record into the history table with record type 'D'.
So the problem here is that for every record, I select the set of records for that particular part of the ID into a cursor and loop through it.
Unfortunately, I couldn't find a better approach.


And yes, ID here isn't a primary key. Actually there is no primary key in the table. Here is more sample data for your understanding: both are new records and different, but the ID is still the same.

INSERT INTO &&table_name
VALUES
('H^STAT_MLN-PK-10^74_B033 ^TET_20051004073623_20100121164355.xml','20051004073623','D-1','B033 ','04-OCT-200508:02:53','04-OCT-200508:04:38','0:01:45','45','50','86',NULL,'MONDAY, MARCH 22, 2010','MONDAY, MARCH 22, 2010','N')
/
INSERT INTO &&table_name
VALUES
('H^STAT_MLN-PK-10^74_B033 ^TET_20051004073623_20100121164355.xml','20051004073623','D-1','B033 ','04-OCT-200508:04:53','04-OCT-200508:06:48','0:01:55','49','53','101',NULL,'MONDAY, MARCH 22, 2010','MONDAY, MARCH 22, 2010','N')
/

Looking forward to some mind blowing suggestions from you.
Thanks !!

Tue Mar 23, 02:40:00 AM EDT  

Blogger Thomas Kyte said....

@SAURABH NOLAKHA

my mind blowing idea:

tell us your logic.


sample records: not useful

precise definition of what to do: priceless.


tell us:

if id matches id and records are considered the same - we do X

if id matches id and record are considered different - we do Y

if id does not match any existing id we do Z

Now, fill in the blanks, what are X, Y and Z.

Be precise, be concise... Specify (write the specification) for the actions needed.

Tue Mar 23, 02:55:00 AM EDT  

Blogger SAURABH NOLAKHA said....

if ID matches id:
a):-if all column matches==>insert into history(DUPLICATE)
b):-else if col1,col2,col3 matches==>move the existing record into history(UPDATED) and update that record
c):-else insert into main table(NEW)

else if ID doesnt matches ID
insert into main table(NEW)



so what I am doing exactly is:

1)check if ID exists:-
a)if not exists then insert into main table(NEW)
b)if exists then call chkdups(record)



procedure chkdups(record)

1)cursor:-select all records for that ID
2)SET FLAG=0
3)LOOP:
A)if found duplicate,insert into history(DUPLICATE) and set flag=1
B)if found updated,move the existing record into history(UPDATED) and update that record and set flag=1
END LOOP
4) if FLAG=0,insert into main table(NEW)


hope it will be helpful !

Thanks!!!!

Tue Mar 23, 03:17:00 AM EDT  

Blogger Thomas Kyte said....

if you have a trigger on the big table that looks like this:


create or replace trigger big_table_aud
after update on big_table
for each row
declare
my_flag varchar2(1);
begin
my_flag := 'U'; -- for update
if ( :new.col1 = :old.col1 and :new.col2 = :old.col2 and .... )
then
my_flag := 'D'; -- for duplicate
end if;
insert into history (.... ) values ( :old.id, :old.col1, ...., MY_FLAG );
end;


and you use this single sql statement:

merge into big_table b
using small_table s
on (b.id = s.id)
when matched
then update set col1 = s.col1, col2 = s.col2, .....
when not matched
then insert (id, col1, col2, ....) values (s.id, s.col1, s.col2, ....)


it'll do it all for you.

if id = id and all columns match, then merge will update, and the update (via the trigger) will set the flag to 'D' for duplicate and record that record in history.

if id = id and some columns do not match, then merge will update, and the update will set the flag to 'U' for update and move the old record into history.

if id is not in the existing ids, then merge will insert.



so, all of your logic will devolve into a single merge statement.

Tue Mar 23, 07:14:00 AM EDT  

Blogger SAURABH NOLAKHA said....

Thanks a million ton Tom......!!!!!!!!!!!

Fri Mar 26, 05:10:00 AM EDT  

Anonymous Anonymous said....

Hello Tom!

I need some general advice on how to insert values from an XML file into an Oracle table.

I am very new at this and will need some very simple advice :)

I will be very thankful for any advice!

Regards
Alex

XML-file: http://asktom.oracle.com/pls/asktom/f?p=100:11:163639818783980::NO::P11_QUESTION_ID:1405403846139

Thu Apr 01, 10:38:00 AM EDT  

Anonymous Anonymous said....

Hi Tom,

I am a beginner Oracle user. Could you recommend any books for PL/SQL and Oracle?

Regards

Mon Apr 05, 06:26:00 PM EDT  

Blogger SPARSH said....

hi tom,

Can I install two Oracle DB instances on the same machine (not a server but a simple home PC, for learning purposes)?
If yes, then how?
And how can I manage those two DB instances?

Please also suggest some links where I can learn more about this.

Thanks
Sparsh

Sat Apr 17, 07:54:00 AM EDT  

Blogger Thomas Kyte said....

@sparsh

yes you can

there is nothing special

just use dbca more than once and create two instances for testing.

Sat Apr 17, 09:46:00 AM EDT  

Blogger SAURABH NOLAKHA said....

Hi Tom,
Can you please tell me the most efficient way to look up values from a table?
My lookup table has 30 million records; looking up 35 ids takes 2 minutes. Please suggest the best possible way.

Mon Apr 19, 07:40:00 AM EDT  

Blogger Manohar Reddy said....

Hi Tom,

I want to fetch a limited set of results for each request (pagination), sorted by date. For each subsequent request we get the last retrieved primary key value from the front end, and using that we need to fetch the next batch of results.
I am able to do pagination when sorting by the primary key value, but here the problem is that we need to sort by date.
Could you please help me - how can we do this?

Fri Apr 23, 02:31:00 PM EDT  

Blogger Thomas Kyte said....

If the front end only gives you a primary key and asks you to give data back after sorting by date - think about it.

select *
from (
select *
from t
where date_col >
(select date_col from table
where primary_key = :x)
order by date_col
)
where rownum <= 25;

that would get the next 25 rows after the current row - assuming that date_col is unique, else you'd have to use >= and end up re-retrieving the primary key row...
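One common way around the non-unique date problem (a sketch, assuming the front end can pass back the last primary key seen, and using the key itself as a tie-breaker in both the seek predicate and the ORDER BY):

```sql
select *
  from (select t.*
          from t, (select date_col, primary_key
                     from t
                    where primary_key = :x) prev
         where t.date_col > prev.date_col
            or (t.date_col = prev.date_col and t.primary_key > prev.primary_key)
         order by t.date_col, t.primary_key
       )
 where rownum <= 25;
```

Because ties on date_col are broken deterministically by the key, no row is skipped or re-retrieved across pages.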

Fri Apr 23, 02:53:00 PM EDT  

Blogger Ravi @ Pune said....

Hi Tom,

Is there any way to find out which tables are being accessed right now by the current queries running in the system (or by a specific query)?

The problem is that I tried checking v$session_longops for long operation but there is nothing in the table.

Can you please help.

Thanks
Ravindra

Mon Apr 26, 10:01:00 PM EDT  

Blogger Thomas Kyte said....

@Ravi,

since immediately after getting this information - it would be obsolete/stale/not useful (things change rapidly) - what would you need this for?

Tell us what you are trying to do - not how you are trying to do it. You must want this data for "something", what is that "something"(your goal)

probably v$access is what you are looking for

Tue Apr 27, 06:51:00 AM EDT  

Blogger SAURABH NOLAKHA said....

Hi Tom,

I have a requirement to have audit report on view which is composed by join of 3 tables.But the actual insert and update is going on base tables in the database( on which we have read only access).
Is there any method we can have trigger on the view to have audit of all the new values inserted or updated in tables ??

Fri Apr 30, 12:15:00 PM EDT  

Blogger Thomas Kyte said....

@Saurabh

triggers are on tables only - not sets of tables. You would need a trigger per table.


you do have INSTEAD OF triggers on the view... BUT


I would suggest that if you want to do that, you would not use triggers - rather, you would use a stored procedure to modify the base tables and audit at the same time. Keep it easy, maintainable, understandable.

avoid triggers!
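A minimal sketch of that procedure approach (all of these table, column, and procedure names are made up for illustration): one call does the modification and writes the audit row in the same transaction, so nothing fires behind your back:

```sql
create or replace procedure update_base_row
  ( p_id      in base_t.id%type,
    p_new_val in base_t.col1%type )
is
begin
   -- capture the old value first...
   insert into audit_t (id, old_val, new_val, changed_on)
     select id, col1, p_new_val, sysdate
       from base_t
      where id = p_id;

   -- ...then do the actual modification; both commit or roll back together
   update base_t
      set col1 = p_new_val
    where id = p_id;
end;
```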

Fri Apr 30, 12:24:00 PM EDT  

Blogger SAURABH NOLAKHA said....

Hi Tom,
We have only read access to the base tables. Whatever data we have is from the view only. So, we just want to create a report on the view showing the rows in which any column got a new value.
Will it be fine performance-wise to use a stored procedure?

Sat May 01, 10:21:00 AM EDT  

Blogger Thomas Kyte said....

stored procedures will allow you to have the ULTIMATE in performance...

Sun May 02, 10:09:00 AM EDT  

Blogger mickita said....

Hi Tom,
Can you please tell me why procedures cannot be called from a SQL query while functions can?

Mon May 03, 09:03:00 AM EDT  

Blogger Thomas Kyte said....

@Mickita,

think about it...

select procedure from dual;

that doesn't make sense, procedure doesn't return anything, so that would be just like:

select from dual;

it would not make sense to SELECT a procedure therefore.

And further:

select * from dual where procedure;

or

select * from dual where procedure = 5;


neither of those make sense syntactically either - so it is just a matter of basic SQL syntax that dictates "a procedure makes no sense..."
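To see the contrast, a function returns a value, so it can appear anywhere an expression can - this sketch assumes nothing beyond DUAL:

```sql
create or replace function double_it( p_n in number ) return number
as
begin
    return p_n * 2;   -- a value comes back, so SQL can use it
end;
/

select double_it(21) from dual;   -- legal: the function call is an expression
```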

Mon May 03, 02:41:00 PM EDT  

Blogger SAURABH NOLAKHA said....

Hi Tom,

Is there any alternative for a distinct keyword?
select distinct col1 from view1;

It's taking 12 minutes, and that's far too long for my requirement. I just want to optimize that.

Thanks!

Tue May 04, 03:05:00 PM EDT  

Blogger Thomas Kyte said....

please utilize asktom.oracle.com to ask technical questions, thanks!

Tue May 04, 03:12:00 PM EDT  

Blogger SAURABH NOLAKHA said....

Hi Tom,
Is there any functionality available in Oracle 9i for comparing collections which is similar to "MULTISET EXCEPT" provided in Oracle 10g?

Thanks and regards:
Saurabh

Thu May 06, 03:42:00 PM EDT  

Blogger Thomas Kyte said....

please utilize asktom.oracle.com to ask technical questions, thanks!

I will no longer publish any comments that are just questions here - keep them in one place, on asktom.oracle.com

thanks!

Thu May 06, 04:25:00 PM EDT  

Blogger Mark said....

Running Oracle 10g; we are patching our OS from AIX 5.3 TL 6 to TL 11. It has been suggested that there is an Oracle patch required to prevent data block corruption issues, but I can't find any published references.
Do you know of an issue?

Thu May 13, 11:58:00 PM EDT  

Blogger Thomas Kyte said....

@Mark

that would be an excellent thing for support to research. I don't personally use aix, I'm not aware of any widespread issue.

Fri May 14, 06:52:00 AM EDT  

Blogger ananya said....

The link below, which is part of 'How to build a simple test harness (RUNSTATS) (HOWTO) to test two different approaches from a performance perspective', is not working. Can you please restore it?

http://asktom.oracle.com/pls/ask/f?p=4950:8:::::F4950_P8_DISPLAYID:9496983726463

Wed May 19, 12:03:00 AM EDT  

Blogger Thomas Kyte said....

@ananya

fixed that, thanks!

Wed May 19, 11:08:00 AM EDT  

Blogger Backpackers said....

I use Oracle 8i and Forms and Reports 6. I would like to know if I can save the RDF to Excel; right now I save it in PDF format.

Mon May 24, 11:19:00 PM EDT  

Blogger Backpackers said....

How do I use a lexical parameter? I did use it this way:

select mjh_cd, fin_yr from abc where
&lexi_param = :p_mjh_cd and fin_yr = :p_fin_yr

It throws an error. If I remove the fin_yr parameter it works, but gives me the values for all the years. How do I use a lexical parameter and get values for a particular year?

Mon May 24, 11:22:00 PM EDT  

Blogger Joel Garry said....

The "take a look at my blog" link on the asktom homepage is broken: http://asktom.oracle.com/pls/apex/z?p_url=http://tkyte.blogspot.com&p_cat=AskTom_Blog&p_company=822925097021874

word: beedrapr

Mon Jun 07, 02:19:00 PM EDT  

Blogger Thomas Kyte said....

@Joel

thanks - fixed!

Mon Jun 07, 02:22:00 PM EDT  

Anonymous Anonymous said....

Hi,
I want to join two tables Table 1, Table 2 Based on
Table1.Field 1 = Table 2.Field 1
AND Table2.Field 2 like Table 1.Field2%.

How can I achieve this?

Please help

Wed Jun 09, 06:27:00 PM EDT  

Blogger Thomas Kyte said....

@Anonymous

you answered your own question.

Table1.Field 1 = Table 2.Field 1
AND Table2.Field 2 like Table 1.Field2 || '%'


there you go, don't expect it to be "fast" if the join on field1 results in hundreds/thousands of pairs to apply the like to though.
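Put together as one statement (table and column names here are just placeholders taken from the question):

```sql
select t1.*, t2.*
  from table1 t1, table2 t2
 where t1.field1 = t2.field1                -- the equality join first
   and t2.field2 like t1.field2 || '%';     -- then the prefix match
```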

Wed Jun 09, 06:48:00 PM EDT  

Anonymous Jake said....

Hi Tom,

Are you able to use the SQL*Loader utility stand alone or does it require the server or client software install?

Thanks,
Jake

Thu Jun 10, 12:14:00 PM EDT  

Blogger Thomas Kyte said....

@Jake

you need client software installed - networking support would be necessary

Thu Jun 10, 12:22:00 PM EDT  

Blogger baladba said....

I ended up at a new customer who is still running one of his databases on Oracle 7.3.2.

One of the end users approached me saying she cannot connect to the database and asked me to verify her account status. I was surprised to find there is no account_status column in the dba_users view.

How do I check the user's account status in Oracle 7 ?

Thu Jun 17, 07:12:00 AM EDT  

Anonymous Tom S. said....

Hi Tom,

I was planning to use the dbms_xmlstore package from a stored procedure to insert rows. It is really cool to be able to simply pass an XML document to the stored procedure and have the inserts done for me. I use the table name to be inserted as the root element in the XML and so can easily insert into different tables. However, my DBA opines that this is very inefficient. Would you concur with that opinion?

Thanks,
Tom

Wed Jun 30, 04:17:00 PM EDT  

Blogger Thomas Kyte said....

@Tom S

tell your DBA opinions are a penny a dozen - facts are priceless.

If they have an alternate IDEA, a better APPROACH, you would be more than happy to entertain it.

If they are just speaking FUD (fear, uncertainty, doubt) - then they need to sit down and be still unless and until they have some facts and (stress AND) a better idea, superior approach.

That's what I opine.

dbms_xmlstore may be many thousands of percent better than you need, or it might not be. It is not something that can be answered right off of the cuff. But, do as I would: benchmark it.

Fri Jul 02, 09:41:00 AM EDT  

Anonymous Anonymous said....

Please educate me on ANSI join syntax.

Here is a mockup query

select ename, dname, emp.deptno, dept.deptno
from SCOTT.EMP left outer join SCOTT.DEPT
on emp.deptno = dept.deptno

I understand you can do compound join conditions with the ON syntax. When does it make sense to put filtering with the join condition versus the where clause?

For example, I only want departments with names like '%Risk%'. Is it better to add it to the join condition or drop it to the where clause? Imagine if I have tables with data in the millions.

select ename, dname, emp.deptno, dept.deptno
from SCOTT.EMP left outer join SCOTT.DEPT
on emp.deptno = dept.deptno and dept_nm like '%risk%'

OR

select ename, dname, emp.deptno, dept.deptno
from SCOTT.EMP left outer join SCOTT.DEPT
on emp.deptno = dept.deptno
where dept_nm like '%risk%'.

Peter

Wed Jul 28, 07:06:00 PM EDT  

Blogger syed said....

Hi Tom,

Could you please recommend a good performance tuning book.

I am a bit confused in chosing since there are lot of books in the market with different authors like Christian Antognini,Jonathan Lewis etc.

Thanks,
Syed

Mon Aug 02, 04:10:00 AM EDT  

Anonymous Derek said....

Hi Tom,
My scheduler jobs were in a 'RUNNING' state when they were actually doing nothing.
This happened when the DBs were recovered and started up after an instance crash.
This was resolved by disable force, stop force, and then enabling the scheduler jobs, which worked fine thereafter.
Stopping them without disabling force first produced error ORA-27366: job "[JOB_NAME]" is not running.
I tried to recreate the scenario by offlining the data volumes [TEST servers], then recovering and starting up
the DB, and the schedulers functioned as normal. I'm not sure why they didn't start normally in the prod environment.
The PSU addressing this bug was applied before this happened. Your thoughts are appreciated.

Regards
Derek

Fri Aug 06, 10:00:00 AM EDT  

Blogger Madhu said....

Tom,
Could u plz guide me on the following issue. I tired in many ways, i am not getting exart solution without hard coded.

Pbm: Table data


TYPE CODE Acc no
a1 AAA 11
a2 AAA 16
a1 AAA 14
a2 BBB 15
a2 AAA 17
a3 CCC 19
a1 BBB 18

Required format:

Type AAA BBB CCC ...TOTAL
a1 2 1 0 3
a2 2 1 0 3
a3 0 0 1 1
TOTAL 4 2 1 7

I tired and acheived this by using PIVOT, but this code has Hard coded.plz kindly do the needful...

Mon Aug 23, 08:17:00 AM EDT  

Blogger Thomas Kyte said....

@Madhu

"U" isn't available, "U" is dead as far as I know. Look it up, it is true.

http://en.wikipedia.org/wiki/U_of_Goryeo

And what do German Postal codes (PLZ) have to do with anything?


But - you win the prize for the "least defined problem statement of the month"


I have to assume (totally assume) you really want to take the output of:

select type, code, count(*) cnt
from t
group by type, code;

and pivot that. so it would be more like

a1 aaa 2
a2 aaa 2
a2 bbb 1
....

and so on (giving us the acc no and saying "from this I want that" is sort of - well, not sort of - very misleading at best).


You will HAVE to know the columns you want to pivot - SQL needs to know, at hard parse time, the number, names, and types of all columns. You have to:

a) query select distinct CODE from t

b) so you can build a query


select type,
max(decode(code,'AAA',cnt)) aaa,
max(decode(code,'BBB',cnt)) bbb,
...
max(decode(code,'NNN',cnt)) NNN...
from (that group by query above)
group by type
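An alternative sketch against the sample data, using CASE and ROLLUP so the TOTAL row comes along too - the CODE values (AAA, BBB, CCC) still have to be known at parse time:

```sql
select nvl(type, 'TOTAL') type,
       count(case when code = 'AAA' then 1 end) aaa,
       count(case when code = 'BBB' then 1 end) bbb,
       count(case when code = 'CCC' then 1 end) ccc,
       count(*) total
  from t
 group by rollup(type);   -- rollup adds the grand-total row
```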

Mon Aug 23, 06:20:00 PM EDT  

Anonymous Anonymous said....

Hi Tom,

Oracle 10.2.0.4 on Solaris 10

We have 3 session cache tables that get truncated every day. Each grows from 0 rows to about 2 million rows (15GB) by the end of the day, before it gets truncated. SELECTs and UPDATEs on the tables are as frequent as INSERTs.

Earlier they (the previous DBAs) had "high consistent gets" on the queries to those tables and they believed collecting stats solved the problem. Now I see they have scheduled stats collection every 2 hours on those tables. Also following this idea, the schema stats collection job has been scheduled every 4 hours. I find this really unusual.
1. Does this approach really help?
2. How should such tables (high activity, daily reset) be managed?
3. Is there a rule of thumb for stats collection, or can it vary widely with the application and DB design?

Thanks,
SB

Wed Aug 25, 04:00:00 PM EDT  

Blogger Ani said....

Hi Tom,

The following link for the hash join presentation is not working:

http://asktom.oracle.com/tkyte/hj/hj.html

Can you provide an alternate link or a download link for the presentation?

Thanks
Anish

Wed Sep 01, 01:43:00 PM EDT  

Blogger Purvesh said....

Hi Tom,

I am facing a problem that I would frame as weird behavior from Oracle.

We have data containing a single quote ('). When code executes with this data fetched into the where clause, the select query does not throw an exception. However, when a similar statement is dynamically constructed and executed, it throws an error.

My assumption is that when the select statement is executed, Oracle performs an implicit conversion to escape the single quote, but when executing dynamic SQL it does not.

Can you please explain this behavior? Is this expected?

Expecting an early reply. I have tried to post a question on AskTom but it was not accepting questions. Can you please help me with how to post questions on that portal?

Have a nice time!!!

Thu Sep 02, 02:15:00 AM EDT  

Blogger Joe said....

Tom - I am trying to pass the variable (i_rollup) into a grouping set to alter the results, Is this possible?

CREATE OR REPLACE PACKAGE BODY the_program
IS

FUNCTION fn_extended (i_data_set IN SYS.dbms_debug_vc2coll,
i_rollup IN VARCHAR2)
RETURN the_program.rc_extended
IS
v_refcursor the_program.rc_extended;
BEGIN
OPEN v_refcursor FOR
SELECT CASE WHEN i_rollup IS NULL
THEN GROUPING_ID( data_set_id )
ELSE GROUPING_ID( i_rollup , data_set_id )END gid,
COUNT ( * ) numrec,
name,
address,
city,
state,
SUM (amount) totalamount
FROM the_table
WHERE data_set_id IN ( SELECT column_value
FROM TABLE(CAST (i_data_set as SYS.DBMS_DEBUG_VC2COLL)))
GROUP BY GROUPING SETS (( name, address, city, state ), ( i_rollup ));

RETURN v_refcursor;
END;

Thu Sep 02, 02:35:00 PM EDT  

Blogger Thomas Kyte said....

@purvesh

you should use bind variables. a single quote must be escaped in a literal (and I'm not going to show you how - because you don't need to - you NEED TO use bind variables)

just put a placeholder in the SQL statement and BIND the string (containing ANY characters) normally.

If you do not - you will

a) kill performance
b) kill scalability
c) consume huge amounts of memory
d) be subject to SQL injection - a huge, major security bug.

Use bind variables and no worries...


@Joe,

you cannot 'bind' an identifier, you would have to use dynamic SQL


open v_refcursor for '
select .....
where data_set_id in (select...
from table( cast( :x as ...) )
...
group by grouping sets((...),('||i_rollup||'))' using i_data_set;

Thu Sep 02, 03:44:00 PM EDT  

Blogger Purvesh said....

Hi Tom,

Your reply to my earlier question was as below ::
"
@purvesh

you should use bind variables. a single quote must be escaped in a literal (and I'm not going to show you how - because you don't need to - you NEED TO use bind variables)

just put a placeholder in the SQL statement and BIND the string (containing ANY characters) normally.

If you do not - you will

a) kill performance
b) kill scalability
c) consume huge amounts of memory
d) be subject to SQL injection - a huge, major security bug.

Use bind variables and no worries...
"

With regards to the above post, I would like to thank you for the suggestion. We have incorporated it to a certain extent.

However, my question is between the execution of a usual query and the one of dynamic query.

Say, for example, we have data stored in a table that contains a single quote ('), e.g. the country "Lao People's Democratic Republic".
We fetch the data into an anchored variable and then use the variable to select data from another table.

When the above is executed without dynamic SQL, it executes fine. But when the same is executed using dynamic SQL, it throws an exception: "SQL command not properly ended".

Also, I do agree that the single quote should be escaped, and I shall work towards it. But please explain this behavior and what causes this sort of execution.

Mon Sep 06, 12:25:00 AM EDT  

Blogger Thomas Kyte said....

@purvesh

if you have a string stored as

Lao People's Democratic Republic


AND YOU BIND IT (using dynamic sql, using static sql, using whatever you want to call it since all sql in Oracle is dynamic sql under the covers...) - you will not hit this issue.


If you are hitting this issue - it screams out "that you are not binding", that you are simply taking the string and concatenating it into the larger sql statement, ending up with a sql statement that looks like:


select ... from ...
where column = 'Lao People's Democratic Republic' ...


Now you can see there is obviously a problem there - that single quote in the middle of your literal becomes the string terminator.

I could tell you how to put a string with a quote in literal SQL - but I won't.

I won't because you'll just do it and won't bind. That would be wrong.


You can look it up yourself - it is documented how to do this, but I'm not going to tell you.


JUST BIND IT!!!!

JUST BIND IT!!!!
JUST BIND IT!!!!
JUST BIND IT!!!!
JUST BIND IT!!!!
JUST BIND IT!!!!
JUST BIND IT!!!!
JUST BIND IT!!!!
JUST BIND IT!!!!
JUST BIND IT!!!!
JUST BIND IT!!!!
JUST BIND IT!!!!
JUST BIND IT!!!!
JUST BIND IT!!!!
JUST BIND IT!!!!

JUST BIND IT!!!!

Mon Sep 06, 08:12:00 AM EDT  

Anonymous Sokrates said....

since you are not going to tell him, I fear he will find the "workaround"

select ... from ...
where column =
'Lao People' || chr(39) || 's Democratic Republic'

Mon Sep 06, 09:34:00 AM EDT  

Blogger Thomas Kyte said....

@sokrates

devilishly nice answer - it technically answers the question, but is one of the harder ways to accomplish it :)

Mon Sep 06, 06:36:00 PM EDT  

Blogger Purvesh said....

Hi Tom,

I am aware of the method to escape the Single Quote and I have already handled it. The way below

select REPLACE(v_str, '''', '''''') into v_new_str from dual;

Do suggest if any better method of escaping it.

My interest lies in understanding the way Oracle is executing the Select Query.

Would be very happy if you can explain what difference occurs when I use Bind variable and a normal variable.

Both the variables containing a string that has (').

Is it that Oracle is escaping it while parsing the select statement?

My question is not to complete an assigned task but to understand the way Oracle works. :)
I hope you understand my reason for troubling you with the same question.

Tue Sep 07, 01:44:00 AM EDT  

Blogger Thomas Kyte said....

@purvesh

Do suggest if any better method of escaping it.

yes, I do - I have, I will continue to do so...


JUST BIND IT (am I sounding repetitive? I should be, I've been screaming this for 20 years now)


JUST BIND IT (am I sounding repetitive? I should be, I've been screaming this for 20 years now)


JUST BIND IT (am I sounding repetitive? I should be, I've been screaming this for 20 years now)


do you get it - just bind it.

build your dynamic query as:

where column = :x


and then BIND THE VALUE without escaping it.
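For instance (table and column names here are hypothetical) - note the quote inside the value never touches the SQL text:

```sql
declare
    l_name varchar2(100) := 'Lao People''s Democratic Republic';
    l_cnt  number;
begin
    execute immediate
        'select count(*) from countries where name = :x'
        into l_cnt
        using l_name;   -- the value is bound as data; no escaping needed
end;
/
```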

You don't need to escape it (you have to think less - think about that - it is actually EASIER to bind than to not bind)

You'll get better scalability (the server will only have to optimize your query once and EVERYONE can share in that optimization. If you use literals, the server will have to optimize EVERY SINGLE QUERY)

You'll be more secure, you won't have to worry about sql injection (google that term if you are not familiar with it)


if you have access to effective oracle by design - read it, I talk about binding in effective sql in that book in great detail.



If you listened to the first bit of advice - you'd be done by now, you would have BOUND IT and been done with it.

Tue Sep 07, 07:30:00 AM EDT  

Anonymous Anonymous said....

Hi

I have a problem with some sequences - we use a third party programme that uses sequences for lots of things. I understand the limitations with sequences, holes, ordering, etc., but we have discovered that all our sequences in one database have taken a jump which seems to correspond roughly with the cache value.

Our DBA tells us that the database was not stopped/reinitialised, etc.

Could you tell me any ways in which the cache could be emptied without anyone supposedly being aware?

Thanks in advance,

K

Mon Sep 13, 11:09:00 AM EDT  

Blogger Thomas Kyte said....

@Anonymous K

it doesn't matter really - the only thing that matters is...


IT CAN and WILL happen.

period. It has happened, it'll happen again.

if the shared pool filled up triggering an ora-4031, this could happen (massive flush in an attempt to free space), database abort shutdown, flushing of shared pool....

But in short, it does not really matter what caused it - it is just a fact that it can and will happen so you always need to be prepared to live with it. It should be something that you do not even NOTICE, if you did notice it - that would say to me you are doing something wrong (it should be such a common thing that no one would even take note, that you did implies you are relying on "gap free" to some degree - it doesn't exist, it won't exist)

Mon Sep 13, 11:12:00 AM EDT  

Anonymous Anonymous said....

Thanks Tom,

I know - but it's a third party software that the powers that be say that we can't touch because it impacts our support.

They use a sequence number for invoice numbers - so of course, we see it - we just have to be able to justify it, (hence we looked at all the sequence numbers to see if they were all impacted or if someone had been playing around with the invoice tables) so I was wondering if there was any way to trace it.

I do appreciate that this is not the best way to get an invoice number and we are working with the third party software provider to change this, we just need to explain the jump in invoice numbers to the users (in our test system...)

Thanks for your time. Excuse the anonymous...

K

Mon Sep 13, 11:19:00 AM EDT  

Blogger Thomas Kyte said....

@K

my answer still stands, the sequence was NEVER gap free, will NEVER be gap free...

It has had holes in the past.

If your vendor claims to provide gap free invoice numbers - they have lied.

The gaps could be of any size - from 1 to hundreds/thousands/whatever. The justification is simply the same for everyone:

Sequences as implemented by Oracle are not gap free. The software we purchased does not provide gap free invoice numbers. Therefore you have seen, will see and will continue to see 'gaps'. Just like you did in the days of paper invoices when someone spilled coffee or lost a tablet.

Mon Sep 13, 12:06:00 PM EDT  

Blogger Thomas Kyte said....

@K

oh, and by the way, it is the best way to get invoice numbers - invoice numbers do not have to be gap free, just UNIQUE.

Mon Sep 13, 12:07:00 PM EDT  

Anonymous Anonymous said....

Thanks again Tom, just for info, (no need to put this in the blog) we're not based in the US. Our invoice numbers do have to be sequential

http://www.bytestart.co.uk/content/taxlegal/9_17/invoice-sequential-numbers.shtml

gives the following information

From 1st October 2007, new VAT invoicing regulations came into play which some small companies may not be aware of. From this date, all invoices issued should use an identifying number that is unique and sequential.

But your last post to me does explain why our third party suppliers (who are based in the US) used the sequence.

Thanks again

K

Tue Sep 14, 04:28:00 AM EDT  

Blogger Thomas Kyte said....

@K

then the right way to do this is to assign invoice numbers when you close your books - when you can actually use a computer to assign perfectly sequential, gap free numbers in isolation (only thing happening), since a single transaction and complete isolation (serialization) is sort of necessary to do anything remotely like that.

But my answer remains:

Sequences as implemented by Oracle are not gap free. The software we purchased does not provide gap free invoice numbers. Therefore you have seen, will see and will continue to see 'gaps'.

Tue Sep 14, 06:28:00 AM EDT  

Anonymous Anonymous said....

Can I get the slides form the presentation you gave in denver last week on 11G. Thanks!

Thu Sep 16, 05:58:00 PM EDT  

Blogger Thomas Kyte said....

@Anonymous

http://asktom.oracle.com -> files tab -> sort by created, it is there at the top right now as it was the last uploaded set

Thu Sep 16, 06:14:00 PM EDT  

Anonymous Anonymous said....

Hi Tom,
Could you please answer, do we have possibility to know a table size that not exist yet.
create table as select ..(huge query) ..
Maybe we can look at some temporary segments (system view) or we can't know this?
Or maybe you know how to estimate the time a create table as select will take, if that is possible.
Thanks in advance.

Thu Sep 23, 10:29:00 AM EDT  

Blogger Thomas Kyte said....

@Anonymous

... do we have possibility to know a table size that not exist yet. ...

that sort of hurt my head :)

think about it...


first - in most cases - if the table is going to be the result of a large, complex, long running select statement - it is likely that the select itself would use temporary space (for many things) and the fact is - this temp space has no relationship to the final size of the resulting table - eg: consider:

select count(*)
from (select distinct a,b,c from t);

that'll typically have put select distinct a,b,c from t into TEMP and then count it - the temp space might be huge but the resulting table is 1 row.


Second - the query isn't finished yet. You could use real time sql monitoring in 11g to see "where" the query is as far as progress - but again that won't tell you a thing about the end size of the result set (the intermediate temporary space stuff has no relation to the end size - it could be much larger than the end size or much smaller - in some strange coincidence - it could even be close to the same size as the end result).

So no, there isn't any way to see the size of something that doesn't exist yet.


If the create table as select is in the final phase of the query where it is starting to direct path write the table itself, you can look in the target tablespace for temporary extents - these will be the table as it is written - but again, it won't be the size of the end result, it will just reflect what has been written so far.
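As a sketch, the in-flight temporary space can be watched from another session (remembering it says nothing about the final size):

```sql
-- temporary extents currently allocated, per session
select username, tablespace, segtype, blocks
  from v$tempseg_usage;
```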

Thu Sep 23, 10:35:00 AM EDT  

Blogger Marcelo Souza said....

Hello Tom,
A credit card processor in Brazil has received a request from Mastercard to add to its database a search engine to analyze its transactions and make sure that no person who is on the global list of terrorists is using card operations to fund their activities. The client asked us whether it is possible, through Oracle Text, to perform a semantic search to filter these possible violations. Is there someone with expertise who can assist us with this request?

Best regards,
Marcelo

Mon Sep 27, 10:49:00 AM EDT  

Blogger Thomas Kyte said....

@Marcelo

insufficient data to say. What is your definition of a 'semantic search' (I fail to see how semantic search has anything to do with "name searching" - this just sounds like a watch list)

You don't say HOW they want to search this list, what is their algorithm, set of requirements, etc.

Mon Sep 27, 10:52:00 AM EDT  

Anonymous Abusi said....

Hello Tom,
can Oracle APEX applications run on top of tables or objects from SQL Server, MS Access, and Excel spreadsheets?

Mon Oct 04, 10:05:00 PM EDT  

Blogger Thomas Kyte said....

@Abusi

inasmuch as Oracle itself can - yes. You can use the generic connectivity gateway over ODBC to connect to these data sources, then APEX would use a database link to access them as if they were tables.

Tue Oct 05, 12:16:00 AM EDT  

Blogger Lara said....

Hi Tom,

I am trying to execute OS commands from Oracle.
I created a Java class to do that, but the command "cat file1.txt > file2.txt" is always giving me the error:
cat: No such file or directory

code:
osCommands[0]="/bin/bash";
osCommands[1]="-c";
osCommands[2]="cat file1.txt > file2.txt";
p = rt.exec(osCommands);

I also tried to execute it this way, but I also had an error:
1. rt.exec("cat file1.txt > file2.txt");
2. osCommands[0]="/bin/bash";
osCommands[1]="cat file1.txt > file2.txt";
p = rt.exec(osCommands);

Thank you for your consideration,
Lara

Sat Oct 09, 12:49:00 PM EDT  

Blogger Thomas Kyte said....

@Lara

read http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:952229840241

read the entire page - not just the first bit (there is a fix for that procedure later on in the page)

read about how shell scripts work on that page (you'd need to have a single command like "/bin/bash -c /bin/cat file1.txt > file2.txt"). HOWEVER - that command would be a horrible command to use. You'd have to have permission on /bin/bash and that would mean you can do *ANYTHING*.


If you really just want to cat file1 > file2, use utl_file.fcopy and be done with it.

If your command is not really cat file1 > file2, but something more complex - then put it all in a script (and remember, YOUR environment is not the environment your code will run in, use full path names for EVERYTHING, be explicit - don't assume ANYTHING is set) and grant yourself permission to run that script (NOT bin/bash)
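The fcopy route really is a one-liner - this sketch assumes a directory object (called MY_DIR here) already exists and points at the files:

```sql
begin
    -- copy file1.txt to file2.txt within the same directory object
    utl_file.fcopy( src_location  => 'MY_DIR', src_filename  => 'file1.txt',
                    dest_location => 'MY_DIR', dest_filename => 'file2.txt' );
end;
/
```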

Sat Oct 09, 12:56:00 PM EDT  

Blogger Lara said....

Thanks a lot for your help :) it worked fine,
but today I have another question for you :$
I need to spool a file from a stored procedure or a Java file. Is that feasible in Oracle?
I've read that I need to have a cursor to get the query result and then use UTL_FILE, but in case of huge data I guess this is not the best solution.


Thank you for your time and consideration,
Lara

Mon Oct 11, 04:45:00 PM EDT  

Blogger Thomas Kyte said....

@Lara

see http://tkyte.blogspot.com/2009/10/httpasktomoraclecomtkyteflat.html for approaches.

Mon Oct 11, 04:48:00 PM EDT  

Blogger Lara said....

the link you gave me is not working :(


Page not found
Sorry, the page you were looking for in the blog The Tom Kyte Blog does not exist.
Go to blog homepage

Mon Oct 11, 05:00:00 PM EDT  

Blogger Thomas Kyte said....

@Lara

it works, you might have cut and pasted it wrong.

Here is a clickable link:

http://tkyte.blogspot.com/2009/10/httpasktomoraclecomtkyteflat.html

Mon Oct 11, 05:16:00 PM EDT  

Blogger Lara said....

OK, thanks a lot - now it works; I guess it was my fault.
But I'm afraid I didn't really clarify my question. I need to do the following:
spool on
spool file1.txt
select id, name from employee;
spool off

but i need to do it in a stored procedure.

Mon Oct 11, 05:26:00 PM EDT  

Blogger Thomas Kyte said....

@Lara

no can do.

you give a sqlplus script above; if you want to use spool (something sqlplus and only sqlplus does) you will use.....

sqlplus

If you want to have a stored procedure written in plsql create a file, you will use utl_file, plain and simple.


I gave you a link that shows how to use

a) sqlplus to create a delimited spool file

b) plsql to create a delimited flat file from a query

c) pro*c to unload really fast if you want


You can use any of those ideas and make them yours.
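A minimal utl_file version of that spool script might look like this - the directory object MY_DIR is an assumption, and for huge volumes the pro*c approach on that page would be worth a look:

```sql
declare
    l_out utl_file.file_type;
begin
    l_out := utl_file.fopen( 'MY_DIR', 'file1.txt', 'w' );
    for rec in ( select id, name from employee )
    loop
        -- one delimited line per row, just like the spool would produce
        utl_file.put_line( l_out, rec.id || ',' || rec.name );
    end loop;
    utl_file.fclose( l_out );
end;
/
```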

Mon Oct 11, 05:29:00 PM EDT  

Blogger Lara said....

ok Tom thx a lot for the information .
Best Regards,
Lara

Tue Oct 12, 06:41:00 AM EDT  

Anonymous Anonymous said....

http://www.asktom.cz/

Same Tom?

Wed Oct 20, 09:46:00 AM EDT  

Blogger Thomas Kyte said....

different tom, not me.

Wed Oct 20, 10:43:00 AM EDT  

Anonymous Anonymous said....

Hi,

I am trying to set up Oracle Streams between two different databases. Once it was configured, I tested the streams by updating (meaning both insert and delete) in the source db and verified in the destination db. It was working well. But when I tried to delete some 40,000 rows, it stopped propagating the data. Can anyone tell me the cause?

Thu Oct 21, 07:49:00 AM EDT  

Blogger Vidya said....

Hi Tom,
In the explain plan part of TKPROF output sorted with "sort = '(prsela,exeela,fchela)'",
what is the significance of the statement
(cr=386 pr=32 pw=0 time=358108 us)

Please let me know.

Thank you.

Tue Oct 26, 09:43:00 AM EDT  

Blogger Thomas Kyte said....

@Vidya

cr = cumulative consistent reads (logical IO)

pr = cumulative physical reads from disk

pw = cumulative physical writes to disk

time = well, cumulative time - in microseconds (that is what the trailing "us" denotes)

Tue Oct 26, 05:39:00 PM EDT  

Blogger vikash said....

Hi TOM

I need your help in writing a procedure to get the output of data from a table into Excel.
Actually I am using a package, ExcelDoctypeUtils, which creates an Excel file and which I found via Google.
In that package you pass a select query, which prints all the values into the Excel file.

Now my requirement is a bit different: the row values have to come in the columns of the Excel file.
I went through your doc on pivot tables where you have done something near to my requirement.
DEPTNO JOB       ENAME_1 SAL_1 ENAME_2 SAL_2 ENAME_3 SAL_3 ENAME_4 SAL_4
------ --------- ------- ----- ------- ----- ------- ----- ------- -----
10     CLERK     MILLER  1300
10     MANAGER   CLARK   2450
10     PRESIDENT KING    5000
20     ANALYST   FORD    3000  SCOTT   3000
20     CLERK     SMITH   800   ADAMS   1100
20     MANAGER   JONES   2975
30     CLERK     JAMES   99
30     MANAGER   BLAKE   99
30     SALESMAN  ALLEN   99    MARTIN  99    TURNER  99    WARD    99

Now my requirement is a bit different. I tried to implement your package but didn't get success, so I need your help.
Suppose we consider emp: my requirement is to get all the dept 10 values in one set of columns (ename and sal against deptno), deptno 20 in the next set, and so on:

emp_id emp_name deptno salary deptno salary deptno salary deptno salary
     1 tom          10   1000
     2 vikash       10   2000
     3 kumar        10  10000
     4 prakash                     20   3000
     5 ajit                        20   2000
     6 david                                     30   1000
     7 lora                                      30   4000
     8 deeni                                                   40   1000
So I need your help. Is it possible or not?
I tried to implement your pivot table logic but it fails.

Thu Oct 28, 01:27:00 AM EDT  

Blogger Thomas Kyte said....

@vikash

"but fails" is

"very vague". You don't say what you tried or how it failed.

And I cannot determine from what you typed in above what you mean exactly. Not sure what you want.

did you mean this?


scott%ORA11GR2> select rn,
2 max(decode( deptno, 10, ename )) d10,
3 max(decode( deptno, 20, ename )) d20,
4 max(decode( deptno, 30, ename )) d30
5 from (
6 select deptno, ename, row_number() over (partition by deptno order by ename) rn
7 from emp
8 )
9 group by rn
10 order by rn
11 /

RN D10 D20 D30
---------- ---------- ---------- ----------
1 CLARK ADAMS ALLEN
2 KING FORD BLAKE
3 MILLER JONES JAMES
4 SCOTT MARTIN
5 SMITH TURNER
6 WARD

6 rows selected.

Thu Oct 28, 06:22:00 AM EDT  

Blogger quangnd said....

Hi,
We are developing a 3-tier app based on Oracle 10g, using RLS. We have to set the logged-on username (by calling a function) into the session before querying anything in the DB. RLS uses this information to generate the where clause. We have a problem with Crystal Reports: CR creates its own session to the DB, so we cannot set any information before CR queries the DB. Please help me.

Mon Nov 01, 03:55:00 AM EDT  

Blogger Thomas Kyte said....

@quangnd

that is something YOU have to figure out. What is available to you from crystal reports that can help you facilitate your need to have a function called with a certain set of inputs.

In all likelihood, you'll need to change your approach somewhat. right now you have a "trusted" stored procedure that your middle tier application - of which YOU wrote all of the code - calls to set the identity. You cannot release execute on this trusted stored procedure to the public - for then anyone would be able to claim to be anyone they wanted. So, you need to determine how YOU using the tool of choice (crystal reports) can in a secure fashion pass the identity of the end user into the database.

Perhaps, if you are using crystal reports in "client server" mode - each user is logging in as themselves - you could adjust your RLS routines to do something like:

if (user is the application_we_wrote) then identity = your_function
elsif (user in (set of valid crystal reports users) ) then
identity = user
end if
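
As a rough illustration only, that pseudocode could take a shape like the following in an RLS identity function. All of the account names and the context name here are made up; substitute whatever your trusted procedure and middle tier actually use:

```sql
create or replace function get_app_identity return varchar2
as
begin
   if user = 'APP_ACCOUNT' then                   -- account the middle tier connects as (assumed name)
      return sys_context( 'app_ctx', 'userid' );  -- set earlier by your trusted procedure (assumed context)
   elsif user in ( 'CR_USER1', 'CR_USER2' ) then  -- known Crystal Reports client/server accounts (assumed)
      return user;                                -- each CR user logged in as themselves
   end if;
   return null;                                   -- unknown users get no identity
end;
/
```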

Mon Nov 01, 04:38:00 AM EDT  

Blogger Kayode said....

Hi Tom,
Please, can you shed some light on how to calculate Average Database Transaction Volume and Peak Database Transaction Volume in Database Metrics for Logical SAR for Oracle 805 and above.

Mon Nov 01, 07:39:00 AM EDT  

Blogger Thomas Kyte said....

@Kayode

whatever a "logical SAR" is or does - I have no clue.


In order to get averages - you would need to snapshot the v$ tables with the relevant information for you - v$sysstat, v$sesstat contain lots of metrics. You would take a 'picture' of those tables at point A in time, then later at point B in time and then you can subtract A from B and that would tell you how many commits, rollbacks, etc you did during that period.

That would be your "average" over that time.

Finding the "peak" is not really feasible unless you took a picture using very very short intervals (and you DO NOT want to do that, you will kill your performance trying to gauge your performance).


Look into a utility called statspack - it does what you want in 8 and up. Look into a feature called AWR (automatic workload repository) - it does what you want in 10g and up in a much more complete fashion than statspack
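
A minimal sketch of that snapshot-and-subtract approach (the snapshot table names are invented for illustration; statspack and AWR do this properly, with history and reporting):

```sql
-- 'picture' of the metrics at point A in time
create table stat_snap_a as select name, value from v$sysstat;

-- ... let the system run for the measurement interval ...

-- 'picture' at point B, then subtract A from B
create table stat_snap_b as select name, value from v$sysstat;

select a.name, b.value - a.value as delta_over_interval
  from stat_snap_a a, stat_snap_b b
 where a.name = b.name
   and a.name in ( 'user commits', 'user rollbacks' );
```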

Tue Nov 02, 02:25:00 AM EDT  

Blogger Raghab said....

Hi Tom,

I have a partitioned table where I load data every month. The process that loads data analyzes the current partition once the loading is done, as the analyze is meant to be done only for the changed data.

For the last couple of months I have been observing that the analyze is being done for most of the older partitions, even though there is no data change. I came to know this by checking the view ALL_TAB_MODIFICATIONS, which I think has wrong information. Can you please let me know how ALL_TAB_MODIFICATIONS could have the wrong information?

Thu Nov 11, 08:13:00 PM EST  

Blogger Thomas Kyte said....

@Raghab

you give no information as to the command you are using to gather stats.


why are you blaming the tab modifications information - you haven't shown anything to us that would indicate it is the issue.

show your work - why do you think it is caused by tab modifications?

If you are wanting to just gather on the latest partition - you would be using dbms_stats to gather on that partition only - tab modifications would not even come into play.

Fri Nov 12, 06:52:00 AM EST  

Anonymous Anonymous said....

HI TOM,

I am using the SQL*Plus Instant Client to connect to the Oracle DB server, using this statement in a script:

sqlplus -l UNAME/PASSWORD@DBSERVER/orcl @/dev/ora.pipe

I am facing a problem when the network goes down and comes back up. The sqlplus process is still there, but it is no longer writing to the DB server.

How can I avoid such situations?

Fri Nov 12, 11:37:00 AM EST  

Blogger Thomas Kyte said....

@anonymous

don't have the network go up and down??

You are using a connected client/server application, if the network fails, that connected connection is terminated.

your choices would be

a) don't have the network go up and down as frequently as it apparently does.

b) don't use a connected client (use APEX for example - it is only connected when you hit "enter"). This will limit the times you are exposed to this issue.

Fri Nov 12, 11:55:00 AM EST  

Blogger Ashish said....

Hi Tom, I just want to know how to run an execute-procedure call with execute immediate, e.g.:

V_STRNG :=
P_SCHEMA
|| '.MPLS_PROC ('
|| P_CLIENT_CODE
|| ','
|| P_VARIANT_CODE
|| ','
|| P_START_DATE
|| ','
|| P_END_DATE
|| ','
|| P_MEDIA_CODE
|| ');';

EXECUTE IMMEDIATE V_STRNG;

Waiting for your reply

Fri Nov 12, 12:02:00 PM EST  

Blogger Thomas Kyte said....

@all - ok, this post is now closed. This is not for questions (that is what I do on asktom.oracle.com).

No more comments will be accepted.


@ashish

documentation says....

http://download.oracle.com/docs/cd/E11882_01/appdev.112/e17126/dynamic.htm#CHDIEFCJ
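
Following the pattern in that documentation, a dynamic procedure call is normally wrapped in a begin/end block and passes its arguments as bind variables rather than concatenating the values in (and there is no trailing semicolon inside the string after the call itself). A sketch, assuming the parameter names from the question above; dbms_assert.schema_name (available in 10g and up) validates the schema name against injection and expects an existing, correctly-cased schema:

```sql
execute immediate
   'begin ' || dbms_assert.schema_name( p_schema ) ||
   '.MPLS_PROC( :1, :2, :3, :4, :5 ); end;'
   using p_client_code, p_variant_code, p_start_date, p_end_date, p_media_code;
```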

Fri Nov 12, 12:04:00 PM EST  

Blogger Ashish said....

Thanks Tom, thank you very much..

Sun Nov 14, 08:38:00 AM EST  

Anonymous Charlotte Rapp said....

Hi Tom,

I've heard that you are coming to Norway in April :-). If possible we, one of Norway's strongest Oracle environments, want to book you for a day.

Best Regards
Charlotte

Wed Dec 08, 08:27:00 AM EST  

Blogger Thomas Kyte said....

@Charlotte

I'm coming in from Italy and then I'll be doing a "geek cruise" in Norway, so I'll be on a boat the entire time (Thursday-Saturday) and then I fly back to the US for IOUG in Florida on Monday.

Sorry - but I don't have any free time while I'm there.

Wed Dec 08, 08:36:00 AM EST  

Blogger Sohail said....

Hi Tom,
I have a calculation problem:
1.50 + 1.50 makes 3.
Is there any function that calculates
1.12 + 1.12 = 3?
Thanks

Wed Dec 15, 05:24:00 AM EST  

Blogger Thomas Kyte said....

@Sohail

and what should 1.13 + 1.11 equal? give us the LOGIC here - what kind of math is this? do you always want to round UP to the nearest 1/2 (so that 1.51 + 1.51 = 4) or what?
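
Guessing at one possible intent, if the rule is "round each value UP to the nearest half before adding" (which would make 1.12 + 1.12 = 3, 1.50 + 1.50 = 3, and 1.51 + 1.51 = 4), one way to express that is:

```sql
-- ceil(x*2)/2 rounds x UP to the nearest 0.5
select ceil( 1.12 * 2 ) / 2 + ceil( 1.12 * 2 ) / 2 as total
  from dual;
-- returns 3
```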

Wed Dec 15, 09:12:00 AM EST  

Blogger ruzo said....

I understand that this is not the right place for posting this question. But unfortunately, due to the volume of questions you get at your asktom.oracle.com site, I am never able to post my question there :-(
If you have the time, can you please respond: which one of the two would be faster for pagination queries - row_number() or rownum? I understand that both will return the same result set and that syntactically row_number() is one in-line query less than rownum, but performance-wise which one is better? I see that
row_number() seems to be doing a WINDOW SORT PUSHED RANK, whereas rownum does a "STOPKEY".
In a query of the form

select * from ( select row_number() over (order by col1 desc) rn from t ) where rn < 5;

where t is a table of, say, a million rows: will the above query have a sort array of 1 million rows or 5 rows?
Is the STOPKEY approach (rownum) or the WINDOW SORT PUSHED RANK approach (row_number()) better?

Wed Dec 15, 06:16:00 PM EST  

Blogger Thomas Kyte said....

@ruzo

the pushed rank and the stopkey are both "top n query optimizations", they should be fairly equivalent in general. So for pagination - in most all general cases - row_number and rownum should perform equivalently.
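
For reference, these are the two equivalent pagination forms being compared (t and col1 as in the question above). With either form, the optimizer keeps only the top rows in the sort workspace instead of sorting the full million-row set:

```sql
-- row_number() form: WINDOW SORT PUSHED RANK
select *
  from ( select t.*, row_number() over (order by col1 desc) rn
           from t )
 where rn < 5;

-- rownum form: SORT ORDER BY STOPKEY plus COUNT STOPKEY
select *
  from ( select *
           from t
          order by col1 desc )
 where rownum < 5;
```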

Wed Dec 15, 06:24:00 PM EST  

Anonymous Anonymous said....

Hi Tom,

I have a .trc file generated by Oracle 10.2.0.4.0. When I try to convert it to a readable type (txt) with tkprof (from either a command line or a PL/SQL process file):
tkprof /u00/app/oracle/admin/BD/udump/OriginalFile.trc NewFile.txt

it returns with:
Could not open trace file /u00/app/oracle/admin/BD/udump/OriginalFile.trc

and if I try with tkprof01:
tkprof01 /u00/app/oracle/admin/BD/udump/OriginalFile.trc NewFile.txt

From a command line it converts it ok, but when I try to do it from a PL/SQL process file it asks for a password like this:

We trust you have received the usual lecture from the local System
Administrator. It usually boils down to these two things:

#1) Respect the privacy of others.
#2) Think before you type.

Password:

What could be the problem with either one, tkprof or tkprof01?

I will greatly appreciate your help!!!

Brenda T

Wed Jan 05, 02:03:00 PM EST  

Blogger Thomas Kyte said....

@Brenda

what is a "plsql process file"?

Wed Jan 05, 02:05:00 PM EST  

POST A COMMENT

<< Home