
Six different Ways to Concatenate Strings in PL/SQL

There's a great article on CodePlex with source code showing a variety of ways to concatenate strings out of rows into CSV style format.

Check it out here:

http://oralibs.codeplex.com/wikipage?title=Aggregate/concatenate%20strings
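For reference, on 11g Release 2 and later the built-in LISTAGG aggregate handles the common case directly; a minimal sketch against the classic SCOTT.EMP table:

```sql
-- Oracle 11gR2+: aggregate employee names per department into one CSV string
SELECT deptno,
       LISTAGG(ename, ',') WITHIN GROUP (ORDER BY ename) AS ename_csv
  FROM emp
 GROUP BY deptno;
```

On earlier releases you'll need one of the techniques from the article above, such as a user-defined aggregate.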

Oracle APEX 4 - Wrap Text In Report Column Using CSS

Today I needed to modify a report so that the "comments" column would be wrapped text rather than truncated. Our original design for this report was to output just the first 50 characters of the comment field data. By truncating the comments the report stays a fixed width, leading to better overall look and feel for the report.

However, the information that is in the comments field is important, so the decision was made to try and keep the column to a fixed width but allow the row size to grow (and wrap the text).

On the Oracle forums there's quite a bit of talk about this issue, with many users claiming that the default behavior of Oracle APEX is to wrap columns in reports. I'm using the standard report template and am not having that experience. So I dug a little deeper and found this little gem.

If you place this

width:500px; display:block; white-space:normal;

In the report column CSS Style field, the report will wrap that column. Adjust the pixel width for your own needs. Here's an image that shows how to do this.

Apex Picture

Oracle APEX - How to tell if your instance is running EPG, Apex Listener or HTTP

I recently stood up an Amazon AWS instance with Oracle 11g and Apex installed to test it out (more on this later). I found that the version of Apex installed on this instance is Apex 3.2, and I'm already working in Apex 4.1. In the process of upgrading the Apex version I need to determine which web server type the existing installation is using. It could be the Embedded PL/SQL Gateway (EPG), Apex Listener or HTTP Server.

Here's how to check which web server you're using. Note that this works from the Oracle APEX SQL Workshop but errors out on my installation from SQL*Plus or OEM with a numeric PL/SQL error.

Just go into SQL Workshop -> SQL Commands and enter and execute this:

select owa_util.get_cgi_env('SERVER_SOFTWARE') from dual

Apex Picture

Oracle APEX 4.1 Force a Region to a Specific Width

It's very simple to force a region to a specific width. Simply edit the region and add a width to the "Region Attributes" field.

style="width:650px"

Will set the region to exactly 650 pixels.

Hope this helps.

-- George Milliken

Oracle Pre Built VMs for Database Development 32-bit or 64-bit

The Oracle pre-built VMs for database development are a great productivity booster. You can download them and immediately start development being assured that the entire stack is installed correctly and working.

Here's how you can check to see whether or not you have a 32-bit or 64-bit installation.

cat /proc/version

file /sbin/init

You should see output similar to this for a 32-bit installation:

[oracle@localhost ~]$ file /sbin/init
/sbin/init: ELF 32-bit LSB executable, Intel 80386, version 1 (SYSV), for GNU/Linux 2.6.9, dynamically linked (uses shared libs), for GNU/Linux 2.6.9, stripped
[oracle@localhost ~]$ 
[oracle@localhost ~]$ cat /proc/version
Linux version 2.6.18-194.17.1.0.1.el5 (mockbuild@ca-build9.us.oracle.com) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-48)) #1 SMP Wed Sep 29 15:40:03 EDT 2010
[oracle@localhost ~]$ 

Hope this helps, George

Calculate a Percent of Total in Oracle SQL Query Using RATIO_TO_REPORT and OVER ()

Here's a quick way to calculate a percentage of total count of events in a query.

You want output like this

PROGRAM_NAME_ID  PERCENTOUTAGE
              1  22.98%
              2  16.81%
              3  17.87%
              4  21.28%
             60  7.87%
             80  13.19%

From a table like this

CREATE TABLE XFD_SHIFT_SUMMARY.XFD_PROGRAMS
(
  ID                   NUMBER                   NOT NULL,
  PROGRAM_NAME_ID      NUMBER                   NOT NULL,
  SCHEDULED_BEAM_TIME  NUMBER(38,15),
  CREATION_DATE        DATE,
  REPORT_ID            NUMBER
)

Execute this statement after inserting the data below into the above XFD_PROGRAMS table structure

 SELECT B.PROGRAM_NAME_ID,
         ROUND (RATIO_TO_REPORT (COUNT (*)) OVER () * 100, 2) || '%'
            PERCENTOUTAGE
    FROM XFD_PROGRAMS B
GROUP BY PROGRAM_NAME_ID;


SET DEFINE OFF;
Insert into XFD_PROGRAMS
   (ID, PROGRAM_NAME_ID, SCHEDULED_BEAM_TIME, CREATION_DATE, REPORT_ID)
 Values
   (141, 60, 8, TO_DATE('06/09/2011 23:59:00', 'MM/DD/YYYY HH24:MI:SS'), 
    55);
Insert into XFD_PROGRAMS
   (ID, PROGRAM_NAME_ID, SCHEDULED_BEAM_TIME, CREATION_DATE, REPORT_ID)
 Values
   (144, 4, 4, TO_DATE('06/11/2011 23:59:00', 'MM/DD/YYYY HH24:MI:SS'), 
    61);

Create a floating Region that is pinned in place while you scroll screen

OK, I came across an easy trick to make a region "float" in the same place on the screen while the rest of the page scrolls. Very useful for keeping some basic reference information (an invoice header, say) always available as the user scrolls through a long transaction log.

Here are the steps:
  1. Make the region (perhaps a sidebar region)
  2. Display the page in APEX and View Source in the browser
  3. Find the first item in that region using Ctrl+F
  4. Identify the parent class of the Region Header (mine was t15RegionHeader and the parent is t15RightBar)
  5. Paste this code into the Region Header in the APEX Region definition form
<style>
.t15RightBar{
margin-top:0;
position:fixed;
right:0;
}
</style>


This makes the region float on the right since it's a sidebar region in my case.

Easy way to Audit your DML transactions in Apex

Often (if not ALWAYS) you'll want to know who inserted or updated a record in your table from your Apex application. There may be many ways to do this, but the easiest I've found is to use a simple PL/SQL anonymous block process to capture the logged-in username from the Apex session cookie and pass it to a form field. Of course, you'll want to add the necessary columns to your table to store these values and include them in your DML transaction.

On your form, add a PL/SQL anonymous block process that runs at any point prior to the DML transaction.

Here is the code:

declare
    xcookie owa_cookie.cookie;
begin
   xcookie := owa_cookie.get('LOGIN_USERNAME_COOKIE');
   :AUDIT_USER := xcookie.vals(1);
end;

You can even choose to display the field on the page, as a subtle warning to the user that you are auditing this transaction. To do this, make sure the Process runs prior to printing your Regions.
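For completeness, the supporting columns might look like this; the table name and column names are illustrative, and :AUDIT_USER in the process above would be a page item whose Source is mapped to the AUDIT_USER column:

```sql
-- illustrative audit columns added to the transaction table
ALTER TABLE my_orders ADD (
   audit_user  VARCHAR2(255),
   audit_date  DATE
);
```

With the item mapped to the column, the standard Automatic Row Processing DML carries the value into the table; AUDIT_DATE can be defaulted to SYSDATE or populated the same way.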

APEX Automated Row Fetch Failing Due to Primary Key of Type CHAR(20)

Ran into an interesting problem with an APEX form on a report: the Automated Row Fetch always failed with No Data Found, even though the data was plainly there in the table and the right ID item was getting set when branching to the detail page.

It turned out the issue was that the primary key column was defined as CHAR(20) (even though it held what looked like a numeric value).

Changing the column to NUMBER immediately corrected the problem.

No mystery here; examine the data types.

CHAR is a fixed-length data type; using VARCHAR2, INTEGER or NUMBER avoids this issue.

The use of a CHAR primary key was apparently defeating the implicit type conversion that APEX relies on.
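The underlying mechanism is Oracle's comparison semantics: CHAR-to-CHAR and CHAR-to-literal comparisons are blank-padded, but a CHAR compared against a VARCHAR2 expression (such as a bind variable) uses nonpadded semantics, so the blank-padded key never matches. A quick sketch (names are illustrative):

```sql
-- '123' is stored blank-padded to 20 characters
CREATE TABLE char_pk_demo (id CHAR(20) PRIMARY KEY, val VARCHAR2(10));
INSERT INTO char_pk_demo VALUES ('123', 'abc');

-- literal comparison: blank-padded semantics, finds the row
SELECT val FROM char_pk_demo WHERE id = '123';

-- VARCHAR2 comparison (what a bind variable gives you):
-- nonpadded semantics, finds nothing
SELECT val FROM char_pk_demo WHERE id = CAST('123' AS VARCHAR2(20));
```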

Here's a quick recap of the Oracle Data Types Oracle Data Types

Cheers.

Oracle APEX - Interactive Report ORA-01445: cannot select ROWID from, or sample, a join view without a key-preserved table

I recently was working on an interactive report in Oracle Apex 4 and ran across this error

The report query needs a unique key to identify each row. The supplied key cannot be used for this query. Please edit the report attributes to define a unique key column. 
Report ORA-01445: cannot select ROWID from, or sample, a join view without a key-preserved table

Strange, because the query runs fine in SQL Developer and TOAD. Fortunately the query was similar to another IR report and I had only made one small change, which was to add an outer join for a status code that sometimes may be NULL.

Removing that and replacing it with an inline lookup immediately fixed the issue.

For example

SELECT COL1, 
       COL2, 
       STATUS,
       COL4
FROM TABLE1 A,
     STATUS_TABLE B
WHERE A.STATUS_ID=B.ID (+);

flipped around like this fixes the issue and allows APEX to use the query in an Interactive Report.

SELECT COL1, 
       COL2, 
       NVL((SELECT B.STATUS_NAME STATUS FROM STATUS_TABLE B WHERE A.STATUS_ID=B.ID), 'NO STATUS') STATUS,
       COL4
FROM TABLE1 A;

How to Clear Stuck Data Pump Jobs Quickly

SELECT owner_name, job_name, operation, job_mode,
state, attached_sessions
FROM dba_datapump_jobs
WHERE job_name NOT LIKE 'BIN$%'
ORDER BY 1,2;

declare
h1 number;
BEGIN
-- Format: DBMS_DATAPUMP.ATTACH('[job_name]','[owner_name]');
h1 := DBMS_DATAPUMP.ATTACH('SYS_EXPORT_SCHEMA_01','SYS');
DBMS_DATAPUMP.STOP_JOB (h1,1,0);
END;
/

-- drop the table listed in the query above, example below. Will be named differently for imports
DROP TABLE SYS.SYS_EXPORT_SCHEMA_01;

Oracle APEX 4 - How to Hide a Button Using JavaScript

So I wanted to create a button but hide it completely (not just disable it). I need the button there but invisible. Here's how to do that easily.

1) create the button

2) set the ID to some name ( I used SUBMIT_API )

Apex Picture

3) Add this piece of jquery to the page

Apex Picture
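The snippet in the screenshot isn't reproduced here, but the jQuery involved is a one-liner along these lines (assuming the button's Static ID was set to SUBMIT_API in step 2; add it to the page's "Execute when Page Loads" attribute):

```javascript
// hide the button completely; it stays in the DOM,
// so it can still be triggered programmatically
$('#SUBMIT_API').hide();
```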


That's it!


George Milliken

PL/SQL Data Pump Import Using DB Links - Modified Oracle Example

This is a neat Oracle example that I tweaked slightly. It performs a data pump import across the wire over a database link. No dump files or OS access required.

/*
Oracle Data Pump Import Using database DB link
From Oracle Corporation Example
Minor modifications by George Milliken 02-10-2011

Assumes you are on the destination when you execute the script
Assumes the destination schema exists but is empty (you could change this by changing the parms)


*/
CREATE OR REPLACE PROCEDURE SYS.copy_schema3 (
source_schema in varchar2, 
destination_schema in varchar2, 
new_password in varchar2,
network_link in varchar2 default 'loopback'
) as
  ind NUMBER;              -- Loop index
  spos NUMBER;             -- String starting position
  slen NUMBER;             -- String length for output
  h1 NUMBER;               -- Data Pump job handle
  percent_done NUMBER;     -- Percentage of job complete
  job_state VARCHAR2(30);  -- To keep track of job state
  le ku$_LogEntry;         -- For WIP and error messages
  js ku$_JobStatus;        -- The job status from get_status
  jd ku$_JobDesc;          -- The job description from get_status
  sts ku$_Status;          -- The status object returned by get_status
  q varchar2(1) := chr(39); -- single quote
  
BEGIN

dbms_output.put_line('Source Schema [' || source_schema ||']');
dbms_output.put_line('Destination Schema [' || destination_schema ||']');

-- Create a (user-named) Data Pump job to do a schema import
  h1 := dbms_datapump.open ('IMPORT','SCHEMA',network_link);

-- A metadata filter is used to specify the schema that will be exported.
dbms_datapump.metadata_filter (h1,'SCHEMA_LIST',q||source_schema||q);

  DBMS_DATAPUMP.METADATA_REMAP(h1,'REMAP_SCHEMA',source_schema,destination_schema);

    begin
    dbms_datapump.start_job(h1);
    dbms_output.put_line('Data Pump job started successfully');
    exception
      when others then
        if sqlcode = dbms_datapump.success_with_info_num then
          -- the job started, but informational messages are available via get_status
          dbms_output.put_line('Data Pump job started with info available');
        else
          raise;
        end if;
        -- the full Oracle example continues with a get_status monitoring
        -- loop that uses the variables declared above
    end;
END;
/

Modified Nixcraft Shell Script - Check For Free Disk Space and Alert Admin if over 90%

#!/bin/sh
###############################################################################
# Shell script to monitor or watch the disk space
# It will send an email to $ADMIN if the used-space percentage
# of a filesystem is >= 90%
# -------------------------------------------------------------------------
# Copyright (c) 2005 nixCraft project 
# This script is licensed under GNU GPL version 2.0 or above
# -------------------------------------------------------------------------
# This script is part of nixCraft shell script collection (NSSC)
# Visit http://bash.cyberciti.biz/ for more information.
# ----------------------------------------------------------------------
# Linux shell script to watch disk space (should work on other UNIX oses )
# SEE URL: http://www.cyberciti.biz/tips/shell-script-to-watch-the-disk-space.html
# set admin email so that you can get email
# ---------
# George Milliken 03-26-2011 Modified for database platform (Linux)
###############################################################################
ADMIN="youremail@example.com"
# uncomment the TEST variable for testing
#TEST="This is a TEST"

# set alert level 90% is default
ALERT=90
#df -H | grep '/fs0'  | awk '{ print $5 " " $1 }' | while read output;
#df -H | grep '/sda'  | awk '{ print $5 " " $1 }' | while read output;
df -H |  grep -vE 'Filesystem|172|mnt|//|tmpfs|usr|boot'  | awk '{ print $5 " " $1 " " $6}' | while read output;
do
  usep=$(echo $output | awk '{ print $1}' | cut -d'%' -f1)
  partition=$(echo $output | awk '{ print $2 }')
  mount=$(echo $output | awk '{ print $3 }')
  if [ $usep -ge $ALERT ]; then
    echo "Running out of space \"$partition ($usep%)\" mounted on $mount, host $(hostname), $(date)" |
      mail -s "Alert: Almost out of disk space $usep%" "$ADMIN"
  fi
done

Oracle Apex 4.02 RHEL 5 - Error Installing APEX 4.0 : APEX is Shown as INVALID in DBA_REGISTRY

I'm doing an Apex install on 11g 11.2.0.2 RHEL 5 Apex 4.02 and the database did not have Oracle Text and XML DB set up properly before I attempted the full development environment install. Yes, this is my fault, and it resulted in an error.

I detected the problem, ran the Apex remove script and re-installed. Still no joy.

Oracle Support pointed out several excellent resources that I should have read before I did this, and a great script for pre-checking the environment.

Here's the information:

Check these documents:

APEX Installation Verification Script (Doc ID 1254932.1)


Here are the steps I ran to install (note this is using EPG) and the resulting errors.

sqlplus / as sysdba
@?/rdbms/admin/owminst.plb
alter system set shared_servers = 5 scope=spfile;
alter system set shared_pool_size=200m;

shutdown immediate
startup

CREATE BIGFILE TABLESPACE APEX DATAFILE '/fs0/oracle/oradata/niku/apex001.dbf' SIZE 1G AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED LOGGING ONLINE PERMANENT 
EXTENT MANAGEMENT LOCAL AUTOALLOCATE BLOCKSIZE 8K SEGMENT SPACE MANAGEMENT AUTO;

CREATE BIGFILE TABLESPACE xdb DATAFILE '/fs0/oracle/oradata/niku/xdb001.dbf' SIZE 1G AUTOEXTEND ON NEXT 100M MAXSIZE UNLIMITED LOGGING ONLINE PERMANENT 
EXTENT MANAGEMENT LOCAL AUTOALLOCATE BLOCKSIZE 8K SEGMENT SPACE MANAGEMENT AUTO;

-- install xdb as documented here http://www.adp-gmbh.ch/ora/xml_db/install.html
@?/rdbms/admin/catqm.sql xdb xdb temp 

sqlplus / as sysdba @?/rdbms/admin/catblock.sql

-- install Oracle Text as documented here http://www.oraclekonsulent.dk/en/index.php?blog_id=185
@$ORACLE_HOME/ctx/admin/catctx.sql change_on_install SYSAUX TEMP NOLOCK;

connect ctxsys/ctxsys

 @?/ctx/admin/defaults/drdefus.sql

Follow the directions here http://download.oracle.com/docs/cd/E17556_01/doc/install.40/e15513/otn_i...
sqlplus / as sysdba @apexins APEX APEX TEMP /i/

Oracle Apex 4 - Force Report Width to Specific Number of Pixels

In Oracle Apex tabular reports the default width of a report is the greater of either the column headings or the data.

This can have undesirable effects, particularly if the data for a given column in the report is allowed to be very wide. Some of the undesirable effects include creating a row so wide you cannot see the right most portion of the row or having two report regions (or more) on the same page with different total widths.

Sometimes it's important to force the report region to a specific number of pixels. This can be accomplished a number of ways.

One of the easiest is to add a width value to the report attributes substitution string (see image below).



In this case we are setting the absolute width of the report region to 500 pixels.

Apex Picture

Using Autonomous Transactions in PL/SQL to Update a Debug Log

Here's a simple PL/SQL procedure and accompanying table definition that can be used for writing to a debug table. This is very useful when you're debugging packages (perhaps called from APEX?) and you cannot write to the console easily.

PROCEDURE DEBUG_WRITE(P_MSG IN VARCHAR2, P_PROC IN VARCHAR2)
IS
   PRAGMA AUTONOMOUS_TRANSACTION;
BEGIN
   INSERT INTO ES2_DEBUG_LOG (MSG_TIME, PROC_NAME, MSG)
   VALUES (SYSTIMESTAMP, P_PROC, P_MSG);
   COMMIT;
EXCEPTION
   -- never let a failed debug write break the calling code
   WHEN OTHERS THEN
      NULL;
END;

The table to hold the data looks like this

CREATE TABLE SAMPLE.ES2_DEBUG_LOG
(
  MSG_TIME   TIMESTAMP(6),
  MSG        VARCHAR2(1000 BYTE),
  PROC_NAME  VARCHAR2(40 BYTE)
)
TABLESPACE SAMPLE_DATA
PCTUSED    0
PCTFREE    10
INITRANS   1
MAXTRANS   255
STORAGE    (
            INITIAL          64K
            NEXT             1M
            MINEXTENTS       1
            MAXEXTENTS       UNLIMITED
            PCTINCREASE      0
            BUFFER_POOL      DEFAULT
           )
LOGGING 
NOCOMPRESS 
NOCACHE
NOPARALLEL
MONITORING;

An example of writing to the log looks like this:

  DEBUG_WRITE('1','create es2');
    SELECT es2_experiment_id.NEXTVAL
    INTO l_experiment_id
    FROM dual;
    
    DEBUG_WRITE('2','create es2');
    
    p_experiment_id := l_experiment_id;
    
    SELECT status_id INTO l_status_id
    FROM es2_status_table
    WHERE status_desc = 'New';
    
    DEBUG_WRITE('3','create es2');

Here's the output from the log after one run

SQL Developer 3 - Connecting to SQL Server Using Third Party Drivers

SQL Developer 3 is here. Today I downloaded the early adopter version and am intent on connecting to a local SQL Server database to extract some support information.

Documentation on this process is contained in the online manuals located here

http://download.oracle.com/docs/cd/E18464_01/doc.30/e17472/dialogs.htm#s...

Here's a quick how-to for connecting SQL Developer 3 to SQL Server or MySQL.

Note that you can use "Check For Updates" to locate the available drivers.

Here's the steps I followed.

Click Help->Check For Updates

Then check the boxes to enable Oracle Extensions and Third Party SQL Developer Extensions

In the list scroll down to

jTDS JDBC Driver 11.1.58.17

check the box

If you need MySQL support that driver is right below and is named

MySQL JDBC Driver 11.1.58.17

Click Next

Accept the GNU Lesser License Agreement

Click Next

and the drivers will download

Restart SQL Developer

Now Click Tools->Preferences->Database->Third Party Drivers

In the Browser box you will see a couple of folders, one should be 'tmp'

Browse into tmp and open the update folder

The drivers you downloaded are there

Now if you get a message that says

java.sql.SQLException: I/O Error: SSO Failed: Native SSPI library not loaded.

You are missing a DLL.

Goto this address at SourceForge and download the missing DLL

http://sourceforge.net/projects/jtds/files/jtds/1.2.5/jtds-1.2.5-dist.zi...

Browse into the zip file and find a file named

ntlmauth.dll

Put that anywhere in your path and restart SQL Developer.

You will be all set and should be able to login using NT Authentication

Easy way to Query Oracle Database for DB IP Address, Host Name and Instance Name

1/24/2011

Here's a handy SQL script to pull the host name, instance name and IP address (of the primary interface).

SELECT A.HOST_NAME, A.INSTANCE_NAME, UTL_INADDR.GET_HOST_ADDRESS FROM V$INSTANCE A;

Returns this

HOST_NAME            INSTANCE_NAME     GET_HOST_ADDRESS
-------------------------------------------------------
myhost.mydomain.com  foobar            10.10.10.88

Oracle Log Buffer -- Considerations Around Buffer Size, Redo Log Groups, and Redo Log Member Sizing

I was recently asked by a colleague "How big should I set my log buffer?"

The answer is, you shouldn't set it at all in Oracle 10g and above, it will be set automatically. See this link to Oracle documentation for more information on log buffers. Oracle Log Buffer Docs

Here are some additional considerations:

Redo log switches are a function of the size of the redo logs, the number of groups, AND the log_buffer. The redo log buffer gets flushed when you commit, when it's 1/3 full, or every 3 seconds. Any space you allocate to the redo log buffer above and beyond what gets used (based on those constraints) is wasted log buffer. Therefore, set the redo log buffer to an appropriate (but not excessive) size and then adjust the size of the redo log member files and the number of log groups to attain a healthy cycle on the redo logs.

To determine what healthy is, size the redo logs so that in addition to the current log, only one other group is ACTIVE. If your groups are too few, or the members too small, you will end up with a situation where all your log files are current or active and you may get a

'private strand flush incomplete'

or

'Checkpoint not complete'

message in the alert log. To check the state of your log groups, run

SELECT * FROM V$LOG;

and you will see output similar to this:


niku> set lin 2000
niku> /

    GROUP#    THREAD#  SEQUENCE#      BYTES    MEMBERS ARC STATUS           FIRST_CHANGE# FIRST_TIME

         1          1        431 1073741824          1 NO  INACTIVE            6040091582 03-DEC-10
         2          1        432 1073741824          1 NO  INACTIVE            6041112547 04-DEC-10
         3          1        433 1073741824          1 NO  INACTIVE            6041954837 05-DEC-10
         4          1        434 1073741824          1 NO  CURRENT             6042824300 05-DEC-10
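A related sanity check is the log switch rate; a common rule of thumb is a switch no more often than every 15-20 minutes at peak load. Something like this summarizes switches per hour from v$log_history:

```sql
-- log switches per hour, most recent first
SELECT TO_CHAR(first_time, 'YYYY-MM-DD HH24') AS hour,
       COUNT(*)                               AS switches
  FROM v$log_history
 GROUP BY TO_CHAR(first_time, 'YYYY-MM-DD HH24')
 ORDER BY 1 DESC;
```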

expect script to automate shell logins and sudo to oracle

This posting describes a way to solve a problem of how to open an ssh session and login automatically (without using a private key). This is not a good idea, but can be useful and demonstrates the usefulness of expect scripting.

Note that I would recommend against doing this in most cases because the password is clearly documented in the script. Having said that, this is the situation at many Fortune 500 companies and Global 1000 companies.

My direct experience is that these companies have thousands of scripts in production that have passwords directly embedded into them or passwords set to the account ID.

Given that we cannot fix everything all the time, and need to work fast when we are tired, this script can be useful. Also, there are other applications of expect scripting that are benign. Hopefully this example will lead the reader to see other ways to use expect scripting.

Script to perform an automated login and sudo to oracle account

#!/usr/bin/expect 
# this script opens an ssh session and then logs in using a password containing a $
# then sudo over to the oracle account 

spawn ssh $argv 
expect "password:" 
send "Jan\$6rath\r" 

#sudo to oracle
send "sudo su - oracle\r" 
expect "password:" 
send "Jan\$6rath\r"
expect "$" 

#cd to your home location -- different depending on your OFA
send "cd /ora/admin\r"
expect "$" 
send "ls -l\r"

# return console to user
interact

Oracle Linux - Proper Method to Move a Data File

Moving a datafile from mount point /disk2 to /disk1 on an Oracle 10g database that is up and running


/* take the tablespace offline */
ALTER TABLESPACE TP6_TS_TABLES OFFLINE NORMAL;

# make sure you have the right data files, prevent unhappy accidents
ls -lh /disk1/oradata/datafiles/emspri.dbf 
ls -lh /disk2/oradata/datafiles/emspri.dbf 

# copy the file in the OS
cp /disk2/oradata/datafiles/emspri.dbf /disk1/oradata/datafiles/emspri.dbf 

-- let Oracle know the file is moved
ALTER DATABASE RENAME FILE '/disk2/oradata/datafiles/emspri.dbf' to '/disk1/oradata/datafiles/emspri.dbf';

-- put the tablespace back online
ALTER TABLESPACE TP6_TS_TABLES ONLINE;


After you move the file, finish with these clean-up tasks:

  1. Delete the old data file to free up disk space
  2. Back up the control files

Querying the Oracle SYSMAN MGMT$ Views SYSMAN.MGMT$

DRAFT - contains typos

A project I'm working on required me and another DBA to assemble an accurate list of all the Oracle databases in the company. The company has many Oracle databases in a variety of locations. The versions vary from 8i to 11g on a variety of platforms. Some are RAC clusters, some are standalone. The inventory needs to include RAC and standalone databases but exclude WebLogic and OC4J.

The company uses Oracle Enterprise Manager (OEM) and Oracle Management Server (OMS). As a consequence, all the information we need is visible in OEM. So we started down the path of querying the data out of the SYSMAN schema, in particular the MGMT$ views.

At first I thought the MGMT$ views would be widely known and fairly well documented, much like the V$ views. Well, so far, I haven't found much of anything that is helpful about SYSMAN and MGMT$. Perhaps (most likely) it's buried somewhere, but being short on time and patience we decided to dig in and figure it out for ourselves.

What follows are some comments, SQL queries and guesses about what's in these views. In the end, we found what we wanted. Along the way, we opened a MetaLink TAR, hoping for a nice tech note and documentation from Oracle Support. That we did not get. We got some advice, but not all of it was on target.

Here's the goal. We want to fill in a spreadsheet with these columns (for all production databases):

OS
OS Version
# Active CPU's
Server IP Address
Server Name
Database Listener TCP Ports
Instance Name
Database Version
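As a starting point, the MGMT$TARGET repository view covers the server name, instance name and target type columns; a sketch (the target_type values can vary between OEM versions, so verify them against your repository):

```sql
-- database targets known to the OEM repository
SELECT host_name, target_name, target_type
  FROM sysman.mgmt$target
 WHERE target_type IN ('oracle_database', 'rac_database')
 ORDER BY host_name, target_name;
```

Details such as version, port and OS are exposed through related views like SYSMAN.MGMT$TARGET_PROPERTIES (joined on TARGET_GUID); the property names there also vary by OEM version.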

DDL DROP Monitoring via Oracle Database Trigger

This trigger will monitor DDL, specifically DROP statements and log the schema (user) performing the drop into a table called DDL_LOG. A test script is presented below. Before you use this in production make sure you think through the location of the log table, and size it appropriately. If the trigger fails to fire there can be complications, so make sure you test this and understand it before you deploy it outside of a test instance.

conn / as sysdba

create user abc identified by abc;

CREATE TABLE system.ddl_log (
operation VARCHAR2(30),
obj_owner VARCHAR2(30),
object_name VARCHAR2(30),
sql_text VARCHAR2(64),
attempt_by VARCHAR2(30),
attempt_dt DATE);

CREATE OR REPLACE TRIGGER bds_trigger BEFORE DROP ON DATABASE
DECLARE 
oper system.ddl_log.operation%TYPE;
BEGIN 
INSERT INTO system.ddl_log 
SELECT ora_sysevent, 
ora_dict_obj_owner, 
ora_dict_obj_name, 
NULL, 
USER, 
SYSDATE
FROM dual;
END bds_trigger;

/

-- now let's test the logging mechanism 
select * from system.ddl_log;

drop user abc;

select * from system.ddl_log;

select * from DBA_USERS where lower(username) like lower('te%');

Oracle 11g Directory Object Case Sensitive on Linux

I ran into an interesting feature of Oracle 11g on Linux in debugging some PL/SQL code which uses BFILENAME and DBMS_LOB calls to load and process XML files.

There is some case sensitivity to the name of the DIRECTORY object, but probably not what you think. The DIRECTORY object must be referenced in the PL/SQL code in upper case, regardless of the case you used when creating it (unquoted identifiers are stored in upper case in the data dictionary).
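A short illustration (the directory path and file name are examples):

```sql
-- created in lower case, but the unquoted name is stored as XML_DIR
CREATE OR REPLACE DIRECTORY xml_dir AS '/home/oracle/xml';

DECLARE
   l_bfile BFILE;
BEGIN
   -- works: directory name referenced in upper case
   l_bfile := BFILENAME('XML_DIR', 'test.xml');
   DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.FILE_READONLY);
   DBMS_LOB.FILECLOSE(l_bfile);

   -- fails at FILEOPEN with ORA-22285: 'xml_dir' does not match XML_DIR
   l_bfile := BFILENAME('xml_dir', 'test.xml');
   DBMS_LOB.FILEOPEN(l_bfile, DBMS_LOB.FILE_READONLY);
END;
/
```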

Return a value from SQLPLUS to a korn shell environment variable

#!/usr/bin/ksh
#does user exist
TARGET_USER=$1
VALUE=`sqlplus -silent system/password@xxpr04 << END
set pagesize 0 feedback off verify off heading off echo off
select username from dba_users where username = UPPER('$TARGET_USER');
exit;
END`

if [ "$VALUE" = "$TARGET_USER" ]; then
echo "Rocky.....that user does exist"
else
echo "Rocky.....that user does not exist"
fi

echo "The value is [$VALUE]"

echo "The target user is [$TARGET_USER]"

#end of script

CURSOR_SHARING=FORCE using BIND variables to improve performance, Source code and timing provided



Background on the Benefits of Using CURSOR_SHARING=FORCE


Setting the init.ora parameter CURSOR_SHARING=FORCE will work around source code that uses literal values instead of bind variables (see below for proof of the performance improvement). However, there are some potential side effects of using CURSOR_SHARING=FORCE, as follows:


1. Applications (report writers) that DESCRIBE cursors may get erroneous column widths


2. Soft parsing will continue to be higher than needed


3. Latch consumption will be higher than needed


4. Oracle will work harder than needed to "convert" the SQL (i.e. to perform the "FORCE")


In the example below three scenarios are tested to see the performance improvements as follows:


1. Bind Variables Used
2. No Bind Variables Used
3. No Bind Variables Used With 
     CURSOR_SHARING=FORCE


From the results below we see that using bind variables yields the best performance, CURSOR_SHARING=FORCE is second best, and no bind variables is the worst. For these reasons, CURSOR_SHARING=FORCE should be considered a temporary workaround until the application source code can be modified to use bind variables. Additional testing is recommended to determine the impact of potential CURSOR_SHARING=FORCE side effects.
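The timing tests were along these lines; a sketch, which assumes a table t with a numeric, indexed id column, with each block timed under SET TIMING ON:

```sql
-- scenario 1: bind variables -- one shared cursor, soft parses only
DECLARE
   l_cnt NUMBER;
BEGIN
   FOR i IN 1 .. 10000 LOOP
      EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM t WHERE id = :b1'
         INTO l_cnt
         USING i;
   END LOOP;
END;
/

-- scenario 2: literals -- a distinct statement (and hard parse) per iteration
DECLARE
   l_cnt NUMBER;
BEGIN
   FOR i IN 1 .. 10000 LOOP
      EXECUTE IMMEDIATE 'SELECT COUNT(*) FROM t WHERE id = ' || i
         INTO l_cnt;
   END LOOP;
END;
/

-- scenario 3: rerun scenario 2 after
-- ALTER SESSION SET CURSOR_SHARING=FORCE;
```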


Background on the Benefits of Using Bind Variables

Finding the Hidden INIT.ORA Parameters in Oracle 10.2.0.5

Oracle contains INIT.ORA parameters that control how the database is configured and operates. These are the 'boot up' parameters for the database, although some of them can be modified while the database is running.

A standard caution applies -- DO NOT USE ANY OF THESE parameters without permission from Oracle Support.

This query will display all 1360 hidden INIT.ORA parameters in Oracle 10.2.0.5. (you must be logged in as SYS to perform this query):

SELECT X.KSPPINM NAME,
DECODE (BITAND (KSPPIFLG / 256, 1), 1, 'TRUE', 'FALSE') SESMOD,
DECODE (BITAND (KSPPIFLG / 65536, 3),
1, 'IMMEDIATE',
2, 'DEFERRED',
3, 'IMMEDIATE',
'FALSE')
SYSMOD,
KSPPDESC DESCRIPTION
FROM SYS.X_$KSPPI X
WHERE X.INST_ID = USERENV ('INSTANCE')
AND TRANSLATE (KSPPINM, '_', '#') LIKE '#%'
ORDER BY 1;
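To see the current values as well, the query can be joined to X$KSPPCV on the row index (again as SYS only; the X$ structures are undocumented and version-specific, so treat this as a sketch):

```sql
SELECT i.ksppinm  name,
       v.ksppstvl value,
       i.ksppdesc description
  FROM sys.x$ksppi  i,
       sys.x$ksppcv v
 WHERE i.indx = v.indx
   AND TRANSLATE(i.ksppinm, '_', '#') LIKE '#%'
 ORDER BY 1;
```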

( some information presented is from DBA Pool by Vigyan Kaushik )

Using Oracle Trace Event 10046 to Troubleshoot Clarity Performance Issues

This entry is a brief description of using Oracle trace event 10046 to identify exactly what a query is doing. SQL trace at the database level is the most in-depth analysis that can be performed on a query to identify exactly what is happening at runtime.

Oracle documentation on SQL Trace can be found here ORACLE TRACE DOCUMENTATION

To perform this tracing you need access to the Oracle database in order to recover the trace file and run TKPROF on it. It's possible to move the trace file to another machine and then TKPROF it, but you will not be able to use certain options, like EXPLAIN PLAN. Here's a quick outline of the issue and then the steps to enable tracing.

Login to the machine; for this example I assume you are on the actual database server, it's a Linux box, and you are using SQL*Plus.

sqlplus / as sysdba 
ALTER SESSION SET EVENTS '10046 trace name context forever, level 4';
ALTER SESSION SET CURRENT_SCHEMA=YOUR_CLARITY_SCHEMA;

-- Example trace
select * from
  (select row_number() over (order by key asc) row_num,
          count(*) over () num_rows,
          q.*
     from (SELECT bms.bms_date_key key,
                  bms.bms_date bms_date,
                  sum(bms.scheduled) scheduled,
                  sum(bms.actual) actual,
                  bms.obs_id obs
             FROM (select to_char(bms_sched_ms_date, 'YYYY/MM') as BMS_date_key,
                          to_char(bms_sched_ms_date, 'Mon YYYY') as BMS_date,
                          sum(bms_plan_billed) as scheduled,
                          0 as actual,
                          'scheduled' as amount_type,
                          flat.branch_unit_id obs_id
                     from odf_ca_eri_prj_bill_mlstone bms,
                          odf_ca_project odf,
                          inv_investments inv,
                          odf_ca_project parent_odf,
                          prj_obs_associations assoc,
                          prj_obs_units_flat flat,
                          prj_obs_units unt
                    where bms_sched_ms_date is not null
                      and bms.bms_actual_date is null

CA Clarity - Find and Kill All Sessions Running a Portlet That Consumes Too Many Resources

CA Clarity is very powerful and flexible. Custom portlets can be created that can perform very complex dynamic queries. These queries can sometimes consume too many resources. This can happen for a variety of reasons including:

  • Bad SQL (cartesian join)
  • Bad Execution Plan (stale statistics or other factors)
  • Lack of Indexes (on custom ODF attributes)
  • Too many users hitting the portlet at the same time (i.e. start of the day, Friday etc)

We'll see the issue manifest itself in a couple of ways, such as:

  1. iowait > 10%
  2. CPU on one or more CPUs sustained > 50%

When this situation occurs and the culprit is a portlet, killing it is safe; Clarity will recover. Note that killing BG processes and other queries produced by the Clarity application may not be safe, so check with CA Support.

Killing the sessions, in the case of our runaway portlet, is the fastest thing to do to return the system to normal operation.

Note also that killing the query doesn't fix the problem, it buys you time to get to the root cause.

Here are the steps you need to follow to bring the situation under control:

Find the hash value of the SQL statement (Using AWR or Statspack or OEM or Quest SpotLight).

In our example let's assume this hash is

3584585137

Use this SQL statement to find the sessions from the app server that are running the query:

SELECT DISTINCT 'ALTER SYSTEM KILL  SESSION ' 
 || CHR(39) 
 || SID 
 || ', ' 
 || SERIAL# 
 || CHR(39) 
 || ' IMMEDIATE;' STMT  
 FROM V$SESSION A,V$SQL B 
WHERE A.SQL_HASH_VALUE=B.HASH_VALUE 
  AND B.OLD_HASH_VALUE = '3584585137';

Save these statements and run them in SQL Developer, Spotlight, Toad or SQL*Plus.

Note that any inactive or killed sessions will error out, ignore those errors. Killing a session that is already dead won't hurt.

You can also wrap the kill statements in an anonymous PL/SQL block.
