If you get the following ORA-00020 errors in the database alert log:
--
ORA-00020: maximum number of processes (xxx) exceeded
ORA-20 errors will not be written to the alert log for
the next minute. Please look at trace files to see all
the ORA-20 errors.
Process m000 submission failed with error = 20
--
Then you probably won't be able to connect to the database via sqlplus or any other tool, because new connections hit the same error. The error means that you need to increase the processes parameter value, or find out what is using too many processes and fix that problem. The best way to handle this kind of situation is to stop the application(s) that are using this database; then you can connect to the database again, increase the needed parameter value (alter system set processes=500 scope=spfile;) and restart the database.
If you cannot stop the applications to free processes, you can try to stop the database in the following way.
Run the following as the oracle user on the database server:
export ORACLE_SID=<database_name>
sqlplus -prelim / as sysdba
and then run 'shutdown immediate' or 'shutdown abort'. Then run 'startup' or 'startup mount' and make the parameter change (after the parameter change you need to restart the database). But remember that this can damage your applications' data (especially 'shutdown abort', which just stops the database right away), so it is better to stop the application(s) even if it means a little downtime.
NOTE! When you connect to the database via 'sqlplus -prelim' you can also try to check what is causing these errors with oradebug (hanganalyze). And if you find some problematic sessions/SQLs, you can kill just those (via the OS) without restarting the whole database. But this can take some time, so for a quicker fix use the instructions above. More info about oradebug and hanganalyze can be found in My Oracle Support (MOS) documents:
215858.1
and
310830.1
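As a rough sketch, the hang analysis over a preliminary connection could look like this (the output goes to the session trace file; available levels and details vary by version, see the MOS notes above):
--
sqlplus -prelim / as sysdba
SQL> oradebug setmypid
SQL> oradebug hanganalyze 3
--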
Thursday, November 28, 2013
Oracle 12c Invisible Columns
With Oracle 12c you can use invisible columns to hide table columns for testing or for other purposes.
The column is invisible to the following operations (but you can still use normal DML operations on invisible columns):
1. SELECT * FROM <table_name>;
2. DESCRIBE <table_name> (via sqlplus and OCI)
3. %ROWTYPE attribute declarations in PL/SQL
Here is how you use invisible columns.
For example, first create a test table:
CREATE TABLE test_table(
test_id NUMBER,
test_name VARCHAR2(32),
starting_time TIMESTAMP,
ending_time TIMESTAMP );
You can change columns to invisible and back to visible:
ALTER TABLE test_table MODIFY (test_name INVISIBLE);
ALTER TABLE test_table MODIFY (test_name VISIBLE);
NOTE! When you set a column invisible, the column order changes (the invisible column is removed from the column order). And if you set the same column back to visible, it is placed last in the table's column order.
You can also add new columns as invisible:
ALTER TABLE test_table ADD ( tester_id NUMBER INVISIBLE );
You can also create a table with invisible columns. Just add INVISIBLE after the column datatype:
CREATE TABLE test_table(
test_id NUMBER INVISIBLE,
test_name VARCHAR2(32),
starting_time TIMESTAMP,
ending_time TIMESTAMP );
You can run normal SQL DML operations on an invisible column like this (for the previous test table, naming the invisible test_id column explicitly): INSERT INTO test_table (test_id, test_name, starting_time, ending_time) VALUES (1110, 'Test_1', '12-Oct-13', '15-Oct-13');
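Since SELECT * does not return invisible columns, you also have to name them explicitly in queries; a quick sketch (assuming the tester_id column added as invisible earlier):
SELECT test_id, test_name, tester_id FROM test_table;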
NOTE! The following types of tables cannot have invisible columns: external tables, cluster tables, and temporary tables. Attributes of user-defined types cannot be invisible either.
You can find more info about Invisible Columns here:
Understand Invisible Columns
Monday, November 18, 2013
Oracle 12c Temporal Validity time periods in tables.
In Oracle 12c there is a new feature called "Temporal Validity". With it you can define time periods between two columns and use these periods in queries.
Example:
-Create a table with a Temporal Validity period
(you can also add a PERIOD FOR to an existing table with an "ALTER TABLE" clause; see the sketch after the CREATE TABLE below):
CREATE TABLE test_table(
test_id NUMBER,
test_name VARCHAR2(32),
starting_time TIMESTAMP,
ending_time TIMESTAMP,
PERIOD FOR testing_time (starting_time, ending_time));
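As a sketch, adding the same period to an existing table could look like this:
ALTER TABLE test_table ADD PERIOD FOR testing_time (starting_time, ending_time);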
-Inserts work just like before (PERIOD FOR is not a column)
(NULL values are also allowed if the table constraints accept them):
INSERT INTO test_table VALUES (1110, 'Test_1', '12-Oct-13', '15-Oct-13');
INSERT INTO test_table VALUES (1110, 'Test_1', '14-Oct-13', null);
- PERIOD FOR gives more variety to your queries (but you can still query the table without it):
-This will return all rows whose time period contains the given date (the first example row):
SELECT * from test_table AS OF PERIOD FOR testing_time TO_TIMESTAMP('13-Oct-13');
-You can also use PERIOD FOR with a "BETWEEN" clause. This will return both example rows:
SELECT * from test_table VERSIONS PERIOD FOR testing_time BETWEEN
TO_TIMESTAMP('13-Oct-13') AND TO_TIMESTAMP('16-Oct-13');
NOTE: Flashback Query has been extended to support queries on Temporal Validity dimensions.
You can find more info about Temporal Validity here:
Oracle 12c New Features
and here:
Oracle 12c Design Basics
Thursday, October 31, 2013
Oracle ORA-7445 [kkzufst] errors in the alert log.
ORA-7445 [kkzufst] errors in the alert log result from bugs in materialized view refresh jobs.
If you hit these bugs you'll see this kind of error in your database alert log
(there is always ORA-07445 and kkzufst in the error, but the other clauses can vary):
--
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x1000] [PC:0x44BBFCE, kkzufst()+111] [flags: 0x0, count: 1]
Errors in file...
ORA-07445 : exception encountered: core dump [kkzufst()+111] ...
Incident details in:...
--
In the incident log you'll see something like this (here you can also see the problematic SQL or PL/SQL clause in the "Current SQL Statement ..." section):
--
Exception [type: SIGSEGV, Address not mapped to object] [ADDR:0x1000] [PC:0x44BBFCE, kkzufst()+111] [flags: 0x0, count: 1]
.
.
.
dbkedDefDump(): Starting a non-incident diagnostic dump (flags=0x3, level=3, mask=0x0)
----- Current SQL Statement for this session (sql_id=...
--
Fixes:
More precisely, the bugs are 9554075 and 9656434.
Both bugs are fixed in the Oracle 11.2.0.2 Server Patch Set.
In Oracle version 12 the fixes are in the following releases: 9554075 -> Oracle 12.1.0.1 (base release); 9656434 -> Oracle 12.2 (a future release, not yet released at the time of writing).
More info about these bugs can be found in MOS (My Oracle Support) document:
1288415.1
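If you want to see which materialized view refresh jobs might be triggering the dumps, here is a rough sketch (assuming classic dbms_job-based refresh jobs; the LIKE filter is only an illustration):
SQL> select job, what from dba_jobs where lower(what) like '%refresh%';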
Thursday, October 17, 2013
Oracle 12c enhancements for SCAN
There are new features for SCAN in Oracle 12c:
1. SCAN and Oracle Clusterware managed VIPs now support IPv6 based IP addresses.
So now we can also use IPv6 addresses with SCAN. If you check the SCAN config with:
srvctl config scan
it will list more info: SCAN name, IPv4 address, IPv6 address, SCAN number+VIP
2. SCAN is by default restricted to only accept service registration from nodes in the cluster.
If you want to add nodes (for example, for a database node that is not using the cluster's private interconnect), you can do it with the command:
srvctl modify scan_listener -invitednodes <new_node_name> -update
You can also run this modify command before the new node is actually present in the cluster.
If you check SCAN-listener configs with:
srvctl config scan_listener
it will list more info: SCAN listener name+port, registration invited nodes, registration invited subnets
3. SCAN supports multiple subnets in the cluster (one SCAN per subnet). Before you can use this, you must enable multiple subnets for the cluster (this is a post-installation task and cannot be done during Grid Infrastructure installation).
NOTE! Here you can find more info about SCAN and the Oracle 12c enhancements:
http://www.oracle.com/technetwork/products/clustering/overview/scan-129069.pdf
Thursday, October 10, 2013
Oracle 12c New Features for table Defaults
With Oracle 12c there are improvements in table default values.
Starting with Oracle 12c you can use a sequence's nextval directly in a column default, or you can use "generated as identity" for a column (this generates a new sequence for the table and uses it for the column's default values). This new sequence is linked to the table, so if you drop the table the sequence is removed as well. You can also use "generated by default as identity" when you create the table sequence. This allows you to override the sequence value in inserts: if you use the DEFAULT keyword as the column value, or leave the column and its value out of the insert, the sequence is used, but with "by default" you can also give your own value for the column.
With Oracle 12c you can also give default values for NULL inserts. So if you insert a NULL value into a column that has, for example, the default setting "DEFAULT ON NULL 11", then the value 11 is inserted into the table instead of NULL.
Examples:
Create table clause using an existing sequence as the default:
SQL> CREATE TABLE TEST_TABLE ( id int default seq_test_id.nextval primary key, name varchar2(32) );
Create table clause generating a new identity sequence:
SQL> CREATE TABLE TEST_TABLE ( id int generated as identity primary key, name varchar2(32) );
Create table clause generating a new identity sequence with sequence init values:
SQL> CREATE TABLE TEST_TABLE( id int generated as identity (start with 10000) primary key, name varchar2(32) );
Override the sequence value when the table sequence was created with "generated by default as identity":
SQL> INSERT INTO TEST_TABLE (id, name) VALUES ( 10, 'test_name');
Use the sequence value when the table sequence was created with "generated by default as identity":
SQL> INSERT INTO TEST_TABLE (id, name) VALUES ( default, 'test_name');
or
SQL> INSERT INTO TEST_TABLE (name) VALUES ( 'test_name');
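A quick sketch of the ON NULL default described above (TEST_DEFAULTS is a hypothetical table; inserting NULL into val stores 11 instead):
SQL> CREATE TABLE TEST_DEFAULTS ( id int, val number default on null 11 );
SQL> INSERT INTO TEST_DEFAULTS VALUES (1, null);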
Wednesday, October 9, 2013
Oracle 11.2.0.3 ORA-04030: out of process memory when trying to allocate xxxx bytes (kxs-heap-w,KGL Iterator information)
If you get the ORA-04030: ... (kxs-heap-w,KGL Iterator information) error in the alert log,
and you see obsoleted parent cursors lying around,
then you are hitting Bug 12791981 - ORA-4030 due to "KGL Iterator information".
See MOS (My Oracle Support) document 12791981.8 for more.
If you try to check those obsolete cursors you will probably hit the same ORA-04030 error, which also tells you that the obsoleted cursors are present in the database.
With this SQL you can check those obsoleted cursors:
select sql_id, address, IS_OBSOLETE, count(*) from v$sql where IS_OBSOLETE='Y' group by sql_id, address, IS_OBSOLETE having count(*)>=10;
Or just:
select count(*) from v$sql where IS_OBSOLETE='Y';
You can fix this bug with patch 12791981.
NOTE! This bug can also be fixed with patch 14799269 (a more complete fix according to MOS), but that patch is only available for version 11.2.0.3.1 and newer.
Thursday, September 19, 2013
Oracle RAC cluster NIC bonding.
You need to do the following steps to start using NIC bonding with Oracle RAC (this adds both public and private interconnect bond interfaces):
NOTE! This operation needs full downtime for the cluster databases. And depending on your environment you might need server reboots during these changes.
1. Add new bond0 and bond1 interfaces for the RAC cluster globally via oifcfg. The public interface is bond0 and the cluster_interconnect interface is bond1:
Get the current network interface configuration used by the cluster, as the oracle user:
oifcfg getif
Set the new bond interfaces (this updates the OCR; -global makes the changes on all nodes in the cluster). Run these on one node as the oracle user (the IPs can stay the same as before; only the interface name changes):
oifcfg setif -global bond0/10.77.5.0:public
oifcfg setif -global bond1/10.37.24.0:cluster_interconnect
oifcfg getif
2. Stop the databases and disable + stop CRS on all nodes:
Do this on one node as the oracle user (for all databases in the cluster):
srvctl status database -d <database_name>
srvctl stop database -d <database_name>
srvctl status database -d <database_name>
Do these on both nodes as root user:
crsctl disable crs
crsctl check crs
crsctl stop crs
crsctl check crs
3. Change the OS network interfaces to use NIC bonding like this (the linked guide only adds bond0, but it is better to make bond0 (eth0 and eth1) for public and bond1 (eth2 and eth3) for the interconnect; you don't need to create the alias interfaces):
http://www.oracle-base.com/articles/linux/nic-channel-bonding.php
If you need to reboot the nodes because of your environment, do it now.
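As a minimal sketch, a RHEL-style bonding config for the public interface could look like this (file /etc/sysconfig/network-scripts/ifcfg-bond0; the IP, netmask and bonding options are placeholders, and the slave interface files are covered in the guide above):
DEVICE=bond0
IPADDR=10.77.5.11
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=1 miimon=100"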
4. Enable and start CRS on all nodes:
Do these on both nodes as the root user:
crsctl enable crs
crsctl check crs
crsctl start crs
crsctl check crs
Check that CRS started on both nodes; run as the root user on one node:
crsctl stat res -t
5. Remove the old interfaces from the RAC cluster via oifcfg:
Get the current private interconnect info as the oracle user on one node:
oifcfg getif
Delete the old interfaces eth0 and eth1 as the oracle user on one node:
oifcfg delif -global eth0/10.77.5.0
oifcfg delif -global eth1/10.37.24.0
Check that only the new bond interfaces are visible, as the oracle user on one node:
oifcfg getif
6. Repair the SCAN and VIP addresses to use bond0 instead of eth0:
Check the cluster's current network status as the oracle user:
srvctl status nodeapps
Check the current VIP settings. Run this on all nodes as the oracle user:
srvctl config vip -n <node_name>
Check the current SCAN settings. Run this on one node as the oracle user:
srvctl config scan
Stop the cluster nodeapps as the oracle user:
srvctl stop nodeapps
Change the SCAN/VIP settings. Run this on all nodes as the root user (set the correct IP for each node):
srvctl modify nodeapps -n <node_name> -A 10.77.5.129/255.255.255.0/bond0
Check the current VIP settings. Run this on all nodes as the oracle user:
srvctl config vip -n <node_name>
Start the cluster nodeapps as the oracle user:
srvctl start nodeapps
Check the cluster's current network status as the oracle user:
srvctl status nodeapps
Check the current SCAN settings. Run this on one node as the oracle user:
srvctl config scan
7. Restart CRS to check that it starts correctly:
Do these on both nodes as the root user:
crsctl check crs
crsctl stop crs
crsctl check crs
crsctl start crs
crsctl check crs
8. Restart the databases:
Do this on one node as the oracle user (for all databases in the cluster):
srvctl status database -d <database_name>
srvctl start database -d <database_name>
srvctl status database -d <database_name>
9. Test that you can connect to the databases via all SCAN/VIP IPs. You can do this, for example, via sqlplus.
Tuesday, September 10, 2013
Oracle 11.2.0.3.0 DATABASE CRASHED DUE TO ORA-240 AND ORA-15064
There is a bug in Oracle 11.2.0.3.0 which can make your database instance restart itself.
If you get the following errors in the database alert.log, you know this bug is affecting your database:
ORA-00240: control file enqueue held for more than 120 seconds
ORA-29770: global enqueue process LCK0 (OSID 12329) is hung for more than 70 seconds
ORA-15064: communication failure with ASM instance
There is a bugfix for this problem; you can download it from My Oracle Support (MOS) as patch:
13914613
Another way to fix this is to upgrade your database to the newest version, where this is also fixed.
More info about this can be found in MOS documents:
Database Instance Crashes with ORA-15064 ORA-03135 ORA-00240 on 11.2 (Doc ID 1487108.1)
Bug 13914613 - Excessive time holding shared pool latch in kghfrunp with auto memory management (Doc ID 13914613.8)
Monday, August 26, 2013
Oracle ORA-21561 : OID generation failed
If an Oracle client connection gives the "ORA-21561 : OID generation failed" error,
Like this:
sqlplus <test_user>/<test_user_password>@<database_name>
.
.
.
ERROR:
ORA-21561 : OID generation failed
Enter user-name: <username>/<password> @ <tns connect string>
Then the problem is most likely in the client machine's hosts file.
Check that the client machine's fully qualified name and short name are in the client machine's hosts file. If these are missing, you'll get ORA-21561 errors when trying to connect to the server.
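As a rough sketch, the hosts file entries could look like this (the IP and hostnames are placeholders for your client machine):
10.0.0.15   clienthost.example.com   clienthost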
Tuesday, August 20, 2013
Oracle 11.2.0.3 GCR0 trace files continuously created in trace directory.
In some environments with Oracle 11.2.0.3 you can see a lot of GCR0 trace files in the trace directory.
Inside these trace files there are "kjgcr_DeleteSO" messages.
For Example:
---
.
.
.
kjgcr_DeleteSO: Processing SO at 0x13068a58
----------------------------------------
SO: 0xa13068a58, type: 36, owner: 0xa30d19230, flag: -/-/-/0x00 if: 0x1 c: 0x1
proc=0xa30d19230, name=GCR, file=kjgcr.h LINE:652, pg=0
(gcr) state=0x0 hold=0x0
.
.
.
---
These trace files appear because of Bug 13385346.
You can find more about this bug in MOS (My Oracle Support) document 13385346.8.
But the main thing is that, as a workaround, you can simply ignore these traces (they are small files and do not affect the database). This bug is fixed in the 11.2.0.3.4 Database Patch Set Update and, on Windows platforms, in 11.2.0.3 Patch 6.
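If you want to find or clean up the accumulated traces, a sketch like this lists week-old ones (assuming the usual <SID>_gcr0_<pid>.trc naming under the diag destination; adjust the path to your environment):
find $ORACLE_BASE/diag -name '*_gcr0_*.trc' -mtime +7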
Tuesday, August 13, 2013
Oracle SQLArea Most Expensive SQLs.
From the v$sqlarea view you can check the most expensive SQL clauses in your Oracle database. You can, for example, run checks on CPU time, elapsed time, executions, buffer gets, disk reads, etc.
Here is an example SQL fetching CPU time (seconds); run this as a user who has permission to select from v$sqlarea (it fetches other values too, but the ordering is by CPU time):
----
SQL> set pages 120
SQL> set lines 120
SQL> spool CPU_Time_seconds_per_exec.txt
SQL> select rownum as sorting, a.* from (
SELECT sqlst.parsing_schema_name, sqlst.sql_id, sqlst.last_active_time,
(sqlst.buffer_gets/sqlst.executions) "buffer_gets per exec",
(sqlst.disk_reads/sqlst.executions) "disk_reads per exec",
(sqlst.cpu_time/1000000) "cpu Time (s)",
(sqlst.elapsed_time/1000000) "Elapsed Time (s)",
(sqlst.user_io_wait_time/sqlst.executions) "iowait per exec",
sqlst.executions "ex",
sqlst.module,
substr(sqlst.sql_text,1,250) || '|' FROM
v$sqlarea sqlst
WHERE
sqlst.executions > 0 AND sqlst.parsing_schema_name <> 'ANONYMOUS' AND sqlst.parsing_schema_name <> 'APEX_030200'
AND sqlst.parsing_schema_name <> 'APEX_PUBLIC_USER' AND sqlst.parsing_schema_name <> 'APPQOSSYS'
AND sqlst.parsing_schema_name <> 'DBSNMP' AND sqlst.parsing_schema_name <> 'DIP' AND sqlst.parsing_schema_name <> 'EXFSYS'
AND sqlst.parsing_schema_name <> 'FLOWS_FILES' AND sqlst.parsing_schema_name <> 'MDSYS' AND sqlst.parsing_schema_name <> 'ORACCT'
AND sqlst.parsing_schema_name <> 'ORACLE_OCM' AND sqlst.parsing_schema_name <> 'ORDDATA' AND sqlst.parsing_schema_name <> 'ORDPLUGINS'
AND sqlst.parsing_schema_name <> 'ORDSYS' AND sqlst.parsing_schema_name <> 'OUTLN' AND sqlst.parsing_schema_name <> 'PUBLIC'
AND sqlst.parsing_schema_name <> 'SI_INFORMTN_SCHEMA' AND sqlst.parsing_schema_name <> 'SYS' AND sqlst.parsing_schema_name <> 'SYSTEM'
AND sqlst.parsing_schema_name <> 'WMSYS' AND sqlst.parsing_schema_name <> 'XDB' AND sqlst.parsing_schema_name <> 'XS$NULL' AND
TO_CHAR(sqlst.last_active_time, 'YYYYMMDD')=TO_CHAR(SYSDATE, 'YYYYMMDD')
ORDER BY "cpu Time (s)" desc ) a
where rownum < 11;
SQL> spool off;
----
This example SQL fetches only today's rows. You can make a more precise fetch by changing the "last_active_time" section of the where clause, and with rownum you can limit the number of rows the query lists.
By changing the sort (ORDER BY) you'll get different most-expensive lists, for example the SQLs that used the most time (elapsed_time). The parsing_schema_name clauses in the where clause exclude Oracle's own schemas from the result set so that you see only user-created schemas.
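The long chain of parsing_schema_name conditions can also be written more compactly with NOT IN; a sketch of the same filter:
AND sqlst.parsing_schema_name NOT IN ('ANONYMOUS','APEX_030200','APEX_PUBLIC_USER','APPQOSSYS','DBSNMP','DIP','EXFSYS','FLOWS_FILES','MDSYS','ORACCT','ORACLE_OCM','ORDDATA','ORDPLUGINS','ORDSYS','OUTLN','PUBLIC','SI_INFORMTN_SCHEMA','SYS','SYSTEM','WMSYS','XDB','XS$NULL')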
Oracle Connectivity test with shell script.
Sometimes you might need a script that makes a connectivity check against a database.
Here is an example shell script that makes connectivity checks against a selected Oracle database.
Put the following text in a file with a .sh extension (connectivity_test_shell.sh), and change the username, password and database name. You can also change the numbers in the loop (how many connection tests you want to run). The script creates a conn_test.txt file where the connection information is printed (so the user must have write permission in the directory where the script is).
----
#!/bin/bash
for x in {0..10};
do
echo quit | sqlplus testuser/<testuser_password>@<database_name> >> conn_test.txt;
done
----
Give execute permissions to the script:
chmod 755 connectivity_test_shell.sh
Run script:
./connectivity_test_shell.sh
Oracle RAC connectivity test with shell script and sql script.
If you need to make a connectivity check for an Oracle RAC cluster database, here are sample scripts for that.
With these you can check connections to all of the cluster database's instances.
First create the sql script connectivity_test_shell.sql (a text file with a .sql extension) and add the following into it:
----
select instance_name from v$instance;
----
Here is the example shell script that makes the connectivity checks against the selected Oracle database.
Put the following text in a file with a .sh extension (connectivity_test_shell.sh), and change the username, password and database name. You can also change the numbers in the loop (how many connection tests you want to run). The database user you are using should have select permission on the v$instance view used in the connectivity_test_shell.sql script.
The script creates a conn_test_testdb.txt file where the connection information is printed (so the user must have write permission in the directory where the script is). In this log you also find the name of the instance you connected to; this way you can check that you can connect to all instances of the cluster database.
----
#!/bin/bash
for x in {0..100};
do
echo quit | sqlplus testuser/<testuser_password>@<cluster_database_name> @connectivity_test_shell.sql >> conn_test_testdb.txt;
done
----
Give execute permissions to the scripts:
chmod 755 connectivity_test_shell*
Run script:
./connectivity_test_shell.sh
Wednesday, July 10, 2013
Oracle Date and Time comparison
There are several ways to do date and time comparison in Oracle.
Here are a couple of examples of how to do it (the examples use the v$sqlarea view's last_active_time column, whose data type is DATE):
1. Comparison as a date using the TO_DATE function:
SQL> select count(*) from v$sqlarea sqlst where sqlst.last_active_time > to_date('07/10/2013 13:44:10', 'mm/dd/yyyy hh24:mi:ss' );
COUNT(*)
----------
7
2. Comparison as a string using the TO_CHAR function (note the different comparison operator, >=):
SQL> select count(*) from v$sqlarea sqlst where TO_CHAR(sqlst.last_active_time, 'YYYYMMDD-HH24MISS') >= '20130710-134410';
COUNT(*)
----------
8
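A third common variant is plain date arithmetic with intervals; a quick sketch that counts statements active during the last hour:
SQL> select count(*) from v$sqlarea sqlst where sqlst.last_active_time > sysdate - interval '1' hour;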
Tuesday, July 9, 2013
Oracle Get database DDLs with dbms_metadata.get_ddl function
Since Oracle 9i you can get database DDL clauses from sqlplus with the dbms_metadata.get_ddl utility. The data is fetched from the data dictionary. Here are the lists of possible object_types to fetch with dbms_metadata.get_ddl:
For Oracle 9i:
http://docs.oracle.com/cd/B10501_01/appdev.920/a96612/d_metad2.htm#1031458
For Oracle 11:
http://docs.oracle.com/cd/B28359_01/appdev.111/b28419/d_metada.htm#BGBIEDIA
To get all of one schema's table and index creation clauses out of the database, you can use the following SQL:
SQL> set pages 120
SQL> set lines 120
SQL> set long 5000
SQL> spool test_schema_name_ddl.txt
SQL> select dbms_metadata.get_ddl('TABLE',dt.table_name,dt.owner)||';' from dba_tables dt where dt.owner='<TEST_SCHEMA_NAME>';
SQL> select dbms_metadata.get_ddl('INDEX',di.index_name,di.table_owner)||';' from dba_indexes di where di.table_owner='<TEST_SCHEMA_NAME>';
SQL> spool off;
In the same way you can add more clauses inside the spool, for example for sequences, synonyms, db_links, etc.
If you want cleaner output, you can get it with these sqlplus settings (though the select clauses themselves are still added to the spool file):
set heading off
set feedback off
set verify off
Monday, July 8, 2013
Oracle CRS is not starting "has a disk HB, but no network HB, DHB has rcfg..." in ocssd log
Network problems in the interconnect network, or problems with the interconnect interface, can prevent CRS from starting.
If you run a CRS check you'll see the following:
[root@<node2> <node2>]# /u01/app/11.2.0/grid/bin/crsctl check crs
CRS-4638: Oracle High Availability Services is online
CRS-4535: Cannot communicate with Cluster Ready Services
CRS-4530: Communications failure contacting Cluster Synchronization Services daemon
CRS-4534: Cannot communicate with Event Manager
The CRS alert log shows the following
(/u01/app/11.2.0/grid/log/<node2>/alert<node2>.log):
.
.
.
2013-06-03 06:56:50.778
[/u01/app/11.2.0/grid/bin/cssdagent(13124)]CRS-5818:Aborted command 'start' for resource 'ora.cssd'.
Details at (:CRSAGF00113:) {0:28:4} in /u01/app/11.2.0/grid/log/<node2>/agent/ohasd/oracssdagent_root/oracssdagent_root.log.
2013-06-03 06:56:50.779
[cssd(13138)]CRS-1656:The CSS daemon is terminating due to a fatal error; Details at (:CSSSC00012:) in /u01/app/11.2.0/grid/log/<node2>/cssd/ocssd.log
.
.
.
The ocssd log shows the following
(/u01/app/11.2.0/grid/log/<node2>/cssd/ocssd.log) (it is complaining about the node1 interconnect):
2013-06-03 06:56:50.814: [ CSSD][3190012224]clssnmvDHBValidateNCopy: node 1, <node1>, has a disk HB, but no network HB, DHB has rcfg 216823918, wrtcnt, 48655493,
LATS 5387994, lastSeqNo 48655492, uniqueness 1365009957, timestamp 1370231810/927159338
2013-06-03 06:56:51.822: [ CSSD][3190012224]clssnmvDHBValidateNCopy: node 1, <node1>, has a disk HB, but no network HB, DHB has rcfg 216823918, wrtcnt, 48655494,
LATS 5389004, lastSeqNo 48655493, uniqueness 1365009957, timestamp 1370231811/927160338
2013-06-03 06:56:52.862: [ CSSD][3190012224]clssnmvDHBValidateNCopy: node 1, <node1>, has a disk HB, but no network HB, DHB has rcfg 216823918, wrtcnt, 48655495,
LATS 5390044, lastSeqNo 48655494, uniqueness 1365009957, timestamp 1370231812/927161338
.
Check ping and SSH between the nodes via the interconnect interface.
If they are not working, there is a problem in the network connection between the cluster nodes. Fix the problem and CRS will start correctly.
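A quick sketch of those checks (assuming eth1 is the interconnect interface and <node1_priv_ip> is node1's private address; both are placeholders):
ping -c 3 <node1_priv_ip>
ssh <node1_priv_ip> hostname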
But if ping and SSH do work between the nodes via the interconnect interface and the ocssd log still complains about the interconnect heartbeat (no network HB), then the interconnect interface is jammed. You can try to restart it to get it fixed (NOTE! It is usually the working node's interconnect interface that needs the restart, as the error message in ocssd.log suggests (it is complaining about node1). For example, if node2's CRS is not starting, restart the node1 interconnect interface):
[root@<node1> <node1>]# ifdown eth1
[root@<node1> <node1>]# ifup eth1
And check that eth1 is looking ok:
[root@<node1> <node1>]# ifconfig
After interface restart or network problem fix check that <node2> clusterware is starting again:
[root@<node2> <node2>]# /u01/app/11.2.0/grid/bin/crsctl check crs
NOTE: If the clusterware tries to connect via the interconnect long enough without success, it will give this message in its alert log:
[ohasd(7773)]CRS-2771:Maximum restart attempts reached for resource 'ora.cssd'; will not restart.
If this error occurs, you need to kill the CRS processes manually or reboot <node2> to make it try the clusterware start (cssd start) again.
Oracle cannot allocate new log "Private strand flush not complete" or "Checkpoint not complete" in alert log.
It's not unusual to get "Private strand flush not complete" or "Checkpoint not complete" messages with "cannot allocate new log" in the alert log. Both relate to redo log writing and they are not errors. But if you get a lot of these, it is better to try to fix them.
1. Usually the easiest way to fix these is to add more redo logs and redo log groups (see the sketch after this list).
2. Another way is to increase the log file size.
3. Increasing the value of db_writer_processes (and using ASYNC I/O with the redo logs) can also help; both increase redo writing speed.
4. You can also try to tune checkpoints to fix "Checkpoint not complete". Here you can find the parameters to tune: Checkpoint Tuning and Troubleshooting Guide [ID 147468.1]
5. Moving the log files to a faster disk is also a possible fix.
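Here is a sketch of adding a redo log group (item 1); the group number, file path and size are placeholders to adjust for your environment:
SQL> ALTER DATABASE ADD LOGFILE GROUP 4 ('/u01/oradata/<db_name>/redo04a.log') SIZE 512M;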
If you want more detailed info about these alert log messages, read the following My Oracle Support (MOS) documents:
Can Not Allocate Log [ID 1265962.1]
Checkpoint Tuning and Troubleshooting Guide [ID 147468.1]
Friday, June 28, 2013
Oracle sequence last number check and creation clauses.
Sometimes, for example when you are updating a database, you might need to check that sequences' last used values do not change. This can be done by running the following SQL before and after the update and then comparing the result sets (this shows only the given schema's sequences):
SQL> spool test_schema_name_sequences_last_num_before.txt
SQL> set pages 120
SQL> set lines 120
SQL> select SEQUENCE_OWNER,SEQUENCE_NAME,LAST_NUMBER from dba_sequences where SEQUENCE_OWNER = '<TEST_SCHEMA_NAME>' order by 2;
SQL> spool off;
# Again after changes:
SQL> spool test_schema_name_sequences_last_num_after.txt
SQL> set pages 120
SQL> set lines 120
SQL> select SEQUENCE_OWNER,SEQUENCE_NAME,LAST_NUMBER from dba_sequences where SEQUENCE_OWNER = '<TEST_SCHEMA_NAME>' order by 2;
SQL> spool off;
NOTE! Remember that the last_number column value is affected by caching. If you have cache 20, last_number is increased by 20 every time the next cache set is taken. Because of this you can't rely on this column alone when caching is used; you might also need to check the current value from the sequence. If no cache is used, last_number is increased only when the next value is taken from the sequence, and last_number is the same as the current value.
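A quick sketch of checking the current value in a session (test_seq is a hypothetical sequence name; currval is defined only after nextval has been called in the same session):
SQL> select test_seq.nextval from dual;
SQL> select test_seq.currval from dual;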
If you want to get the schema's sequence creation clauses out of the database, you can do it with the following SQL:
SQL> spool test_schema_name_sequences_ddl.txt
SQL> set pages 120
SQL> set lines 120
SQL> select dbms_metadata.get_ddl('SEQUENCE',ds.sequence_name,ds.sequence_owner) from dba_sequences ds where ds.sequence_owner='<TEST_SCHEMA_NAME>';
SQL> spool off;
Thursday, June 27, 2013
Oracle Statspack usage
Statspack is a tool for performance monitoring and reporting.
Newer versions of Oracle (10 and 11) provide AWR (Automatic Workload Repository) reports with more detailed statistics than Statspack.
But sometimes you might still need Statspack, for example with Oracle SE version databases.
Here is how you can get a Statspack report out of your database (first, create a tablespace for Statspack before starting the install):
1. Install Statspack (this creates the PERFSTAT schema and asks you for the Statspack tablespace name). Run as the oracle user (and SYS):
cd $ORACLE_HOME/rdbms/admin
sqlplus "/ as sysdba" @spcreate.sql
2. Take snapshots for the Statspack report (you need at least 2 snapshots to generate a report):
sqlplus perfstat/<password>
exec statspack.snap;
You can also give detail levels for the snapshots (the default level is 5). The levels vary between Oracle versions.
Oracle 11.2 gives following levels:
SQL> select * from stats$level_description;
SNAP_LEVEL  DESCRIPTION
0   This level captures general statistics, including rollback segment, row cache, SGA, system events, background events, session events, system statistics, wait statistics, lock statistics, and latch information
5   This level includes capturing high resource usage SQL statements, along with all data captured by lower levels
6   This level includes capturing SQL plan and SQL plan usage information for high resource usage SQL statements, along with all data captured by lower levels
7   This level captures segment level statistics, including logical and physical reads, row lock, ITL and buffer busy waits, along with all data captured by lower levels
10  This level includes capturing child latch statistics, along with all data captured by lower levels
----------------------------------
Take a snapshot with a certain level:
sqlplus perfstat/<password>
exec statspack.snap(i_snap_level=>10);
It is also possible to collect snapshots automatically via dbms_job (the spauto.sql script) or with your own cron script.
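A rough sketch of a cron entry taking a snapshot every hour (simplified; in practice you would source the Oracle environment in a wrapper script rather than put the password on the command line):
0 * * * * echo "exec statspack.snap;" | $ORACLE_HOME/bin/sqlplus -s perfstat/<password>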
3. Generate the Statspack report (this lists the available snapshots and asks you for the begin and end snapshots for the report):
sqlplus perfstat/<password>
@?/rdbms/admin/spreport
You can also check the available snapshots from here:
select SNAP_ID, SNAP_TIME from STATS$SNAPSHOT;
If you need help analyzing a Statspack report, check these:
statspackanalyzer.com
http://filebank.orapub.com/cgi-bin/quickUOWTBA.cgi
4. Purge Statspack snapshots (you can purge old or all Statspack snapshots from the database):
Purge snapshots older than 10 days:
sqlplus perfstat/<password>
exec statspack.purge(sysdate-10);
5. If you want to remove the Statspack schema and data from your database, you can do it this way (run as the oracle user (and SYS)):
cd $ORACLE_HOME/rdbms/admin
sqlplus "/ as sysdba" @spdrop.sql
Oracle Windows OPatch Prerequisite check "CheckActiveFilesAndExecutables" (Following files are active) failed.
When you are applying Oracle patches with OPatch in a Windows environment you might get the following error:
Running prerequisite checks...
Prerequisite check "CheckActiveFilesAndExecutables" failed.
The details are:
Following files are active :
C:\oracle\product\10.2.0\db_1\bin\oci.dll
.
.
.
The patching guide says that before patching you should stop all Oracle services along with the "Distributed Transaction Coordinator" service. But sometimes this is not enough and you still get the "ActiveFiles" error when trying to run OPatch.
Usually the problem is the Windows Management Instrumentation (WMI) service. Restart it from the Services tool, or end the "WMIPrvSE.exe" process in Task Manager (it will restart automatically). Then try to run OPatch again.
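A sketch of the WMI restart from an elevated command prompt (winmgmt is the standard service name for WMI; Windows may also prompt you to stop dependent services):
net stop winmgmt
net start winmgmt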
More help for this error can be found in the following MOS (My Oracle Support) document:
Files in Use errors, when applying patches on Windows [ID 418479.1]
There is also the Process Explorer tool, which you can use to check which process is holding certain DLLs:
http://technet.microsoft.com/en-us/sysinternals/bb896653.aspx
Wednesday, June 5, 2013
MySQL flush logs; gives: ERROR 1105 (HY000): Unknown error
If running "flush logs;" from mysql or with mysqladmin gives you the following error message:
ERROR 1105 (HY000): Unknown error
There are bugs that can cause this, but usually the first thing to check is the log path permissions.
1. First check what log directories your MySQL is using:
mysql> show global variables like '%log%';
You can also use more specific checks like:
This shows the error log:
mysql> show global variables like '%err%';
or for general log:
mysql> show global variables like '%general%';
or for slow log:
mysql> show global variables like '%slow%';
For example:
mysql> show global variables like '%err%';
+--------------------+-----------------------------------+
| Variable_name      | Value                             |
+--------------------+-----------------------------------+
| error_count        | 0                                 |
| log_error          | /data/mysql/logs/mysqld_error.log |
| max_connect_errors | 10                                |
| max_error_count    | 64                                |
| slave_skip_errors  | OFF                               |
+--------------------+-----------------------------------+
5 rows in set (0.00 sec)
2. Once you know the log paths, check that the mysql user (or whatever user runs mysqld, if it is other than mysql) has permission to write to those directories (remember to check all log paths you are using).
ls -latr /data00/mysql/
drwxr-xr-x 14 mysql mysql 4096 Mar 26 06:46 logs
If the permissions are insufficient, add them (a minimal sketch follows) and the error is gone. If the permissions are correct, you are probably hitting one of the bugs.
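A minimal sketch of granting the permissions, assuming the mysql user and group and the log directory from the example above:
# give the mysql user ownership of the log directory and write access
chown -R mysql:mysql /data/mysql/logs
chmod -R u+rwX /data/mysql/logs
# then retry
mysql -e "flush logs;"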
Monday, June 3, 2013
Oracle Cluster Verification Utility (cluvfy) tracing.
If you get errors from the Cluster Verification Utility (cluvfy), you might need to get a trace log out of it.
For example, if you are creating a Service Request (SR) about cluvfy in My Oracle Support, you might attach the trace log to the SR.
How to get a trace log from cluvfy (run these as the oracle user):
1. Create directory for log:
mkdir /tmp/cvutrace
2. Set ENV variables:
export CV_TRACELOC=/tmp/cvutrace
export SRVM_TRACE=true
export SRVM_TRACE_LEVEL=1
3. Run the desired cluvfy check:
runcluvfy.sh .....
For example:
/data/install/grid/runcluvfy.sh stage -pre crsinst -n node1,node2 -r 11gR2 -verbose
After the cluvfy run completes, you'll have its trace log in the directory you created.
Copy it and attach it to the SR, for example after bundling it as shown below.
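For example, to bundle the trace directory for the SR upload:
tar -czf /tmp/cvutrace.tar.gz -C /tmp cvutrace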
More info from MOS (My Oracle Support):
MOS ID 986822.1
Wednesday, May 29, 2013
Oracle shared server processes generates huge trace files in diag > rdbms > trace directory
There is an Oracle bug that can cause shared server processes to generate huge trace files in the diag > rdbms > trace directory.
If you are using RAC, this problem usually appears on only one node at a time.
Inside the trace files you can find the following errors ("reparse" is the main thing to watch for):
-----------
kksfbc: entering reparse diagnosis mode for xsc:0x7381dcbd0
...
and
Check cursor failed ctx=(nil) xscflg=110632 ctxflg=0 flg=102
kksSearchChildList: no suitable child found (hash_match=0)
...
and
Current Cursor Sharing Diagnostics Nodes:
Child Node: 2 ID=40 reason=Bind mismatch(8) size=4x4
...
-----------
The bug itself is not fatal if you have enough free space on the disk holding the diag directory.
But if that disk runs out of free space, the alert log will start logging errors about trace file writes.
And if the diag directory is in the default place, there are probably other logs and files on the same disk (for example the listener log), which will cause you more problems when the disk is full.
There is a fix for this bug in MOS (My Oracle Support) and also more information (MOS ID):
10373013.8 (you can find the patches under Patches & Updates by searching for 10373013)
NOTE! The bug is already fixed in 11.2.0.3 and 12.1, and in some patch set updates of 11.2.0.2. A quick workaround for this problem is to empty the trace files, for example this way in Linux:
cat /dev/null > trace_file_name.trc (if you just delete the file, the used space is not freed until you restart the database, but the emptying above frees the space immediately). This is just a workaround and the error can hit again later, so if you get these often it is better to apply the bug fix or upgrade the database to a newer version. A sketch for emptying several files at once follows.
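A minimal sketch for emptying several oversized trace files at once (the diag path and the size threshold are just examples; adjust them to your environment):
find /u01/app/oracle/diag/rdbms -name "*.trc" -size +500M -exec sh -c 'cat /dev/null > "$1"' _ {} \;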
Monday, May 13, 2013
IOPS check from Oracle 11 server
If you want to check the Oracle database server's IOPS (Input/Output Operations Per Second) usage, you can do it with the following SQL:
SQL> break on report
SQL> compute sum label TOTAL_IOPS: of IOPS on report
SQL> select METRIC_NAME,avg(AVERAGE) as "IOPS"
from dba_hist_sysmetric_summary
where METRIC_NAME in ('Physical Read Total IO Requests Per Sec','Physical Write Total IO Requests Per Sec')
group by METRIC_NAME;
For example:
SQL> break on report
SQL> compute sum label TOTAL_IOPS: of IOPS on report
SQL> select METRIC_NAME,avg(AVERAGE) as "IOPS"
2 from dba_hist_sysmetric_summary
3 where METRIC_NAME in ('Physical Read Total IO Requests Per Sec','Physical Write Total IO Requests Per Sec')
4 group by METRIC_NAME;
METRIC_NAME IOPS
---------------------------------------------------------------- ----------
Physical Write Total IO Requests Per Sec 32.8448519
Physical Read Total IO Requests Per Sec 13.3675773
----------
TOTAL_IOPS: 46.2124292
NOTE! DBA_HIST_SYSMETRIC_SUMMARY contains snapshots of the V$SYSMETRIC_SUMMARY view. V$SYSMETRIC_SUMMARY displays a summary of all system metric values for the long-duration system metrics; the average, maximum value, minimum value, and one standard deviation for the last hour are displayed for each metric item. Because of this, if you use V$SYSMETRIC_SUMMARY instead of DBA_HIST_SYSMETRIC_SUMMARY you will get different results, as the sketch below illustrates.
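For comparison, here is a sketch of the same check against V$SYSMETRIC_SUMMARY (it covers roughly the last hour only, so the numbers will differ from the AWR-history average above):
SQL> select METRIC_NAME, AVERAGE, MAXVAL
from v$sysmetric_summary
where METRIC_NAME in ('Physical Read Total IO Requests Per Sec','Physical Write Total IO Requests Per Sec');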
Oracle export table data to CSV format.
There are many ways to export Oracle table data to CSV format.
The easiest way is a simple SQL spool:
With the following spool you get a text file that contains the data in CSV format (with the string columns (columns 2, 3 and 5) in double quotation marks):
SQL> spool testdata.csv
SQL> SELECT column1||',"'||column2||'","'||column3||'",'||column4||',"'||column5||'"' FROM testtable1;
SQL> spool off;
Example:
SQL> desc test_schema.test_table_1
Name Null? Type
----------------------------------------- -------- ----------------------------
COL1 NUMBER
COL2 VARCHAR2(20)
SQL> spool test.csv
SQL> SELECT col1||','''||col2||'''' FROM test_schema.test_table_1;
SQL> spool off;
You can also use the Oracle UTL_FILE package to get data in CSV format (a sketch follows); more info about that at asktom.oracle.com:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::p11_question_id:88212348059
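A minimal UTL_FILE sketch (the EXP_DIR directory object and its path are assumptions; create them to match your environment, and the columns follow the example table above):
SQL> CREATE OR REPLACE DIRECTORY exp_dir AS '/tmp';
SQL> DECLARE
  f UTL_FILE.FILE_TYPE;
BEGIN
  -- open the target file for writing via the directory object
  f := UTL_FILE.FOPEN('EXP_DIR', 'testdata.csv', 'w');
  FOR r IN (SELECT col1, col2 FROM test_schema.test_table_1) LOOP
    -- quote the string column, leave the number column as-is
    UTL_FILE.PUT_LINE(f, r.col1 || ',"' || r.col2 || '"');
  END LOOP;
  UTL_FILE.FCLOSE(f);
END;
/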
Tuesday, May 7, 2013
Oracle Grid Control Agent configuration/communication problem.
If you suddenly get the following errors in the Grid Control console:
When trying to configure a database, this error appears:
"Error occurred while getting information for the specified target. Possible reasons include: the target doesn't exist; connection to the target failed."
When trying to configure an agent, this error appears:
"Communication between the Oracle Management Service host to the Agent host is unavailable. Any functions or displayed information requiring this communication link will be unavailable. For example: deleting/configuring/adding targets, uploading metric data, or displaying Agent home page information such as Agent to Management Service Response Time (ms)."
And when going to a database page in Grid Control, the following error appears:
"java.lang.Exception: java.io.IOException: javax.net.ssl.SSLPeerUnverifiedException: peer not authenticated"
These errors come from some kind of network connectivity issue with secured agents.
The workaround for these errors is:
Check the detailed OMS status (this asks for the SYSMAN password):
$OMS_HOME/bin/emctl status oms -details
If Agent upload and the OMS console are locked, run the following to remove the restriction for HTTPS uploads from the Management Agents (this asks for the SYSMAN and agent securing passwords):
$OMS_HOME/bin/emctl secure unlock
$OMS_HOME/bin/emctl stop oms -all
$OMS_HOME/bin/emctl start oms
After the unlock, run the following for each problematic Grid Control Agent:
$AGENT_HOME/bin/emctl unsecure agent
$AGENT_HOME/bin/emctl upload agent
$AGENT_HOME/bin/emctl status agent
After this the errors are gone.
If you want to use a secured OMS and Agents, find and fix the HTTPS connection problem between your OMS server and the Agent servers, and after that run:
$OMS_HOME/bin/emctl secure lock
$OMS_HOME/bin/emctl stop oms -all
$OMS_HOME/bin/emctl start oms
And then secure agents with:
$AGENT_HOME/bin/emctl secure agent
$AGENT_HOME/bin/emctl upload agent
$AGENT_HOME/bin/emctl status agent
Sometimes a simple unsecure and re-secure cycle can fix the problem.
Oracle 11.2.0.3 ORACLE_OCM.MGMT_CONFIG_JOB_2_1 errors in database alert log.
You might get the following errors in the Oracle database alert log:
Errors in file /u01/app/oracle/diag/rdbms/<database_name>/<instance_name>/trace/<instance_name>_j000_7665.trc:
ORA-12012: error on auto execute of job "ORACLE_OCM"."MGMT_CONFIG_JOB_2_1"
ORA-29280: invalid directory path
ORA-06512: at "ORACLE_OCM.MGMT_DB_LL_METRICS", line 2436
ORA-06512: at line 1
The reason for these errors is Oracle Configuration Manager (OCM).
It gives these errors once a day because it is trying to write to a directory that does not exist.
You can fix this error in the following ways:
If you don't use OCM:
Drop whole user:
SQL> drop user ORACLE_OCM cascade;
OCM is a standalone, separate tool, and removing it won't affect database functionality.
Or just disable the OCM jobs (this prevents its write attempts; you can verify with the query below):
exec dbms_scheduler.disable('ORACLE_OCM.MGMT_CONFIG_JOB')
exec dbms_scheduler.disable('ORACLE_OCM.MGMT_STATS_CONFIG_JOB')
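You can verify afterwards that the jobs are disabled, for example:
SQL> select owner, job_name, enabled from dba_scheduler_jobs where owner = 'ORACLE_OCM';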
If you want to use OCM, you need to reconfigure it.
More info about reconfiguration and this error in the following MOS (My Oracle Support) document:
1453959.1
Oracle Grid Control database configuration save problem.
If you get the following errors when trying to save configuration changes to a database via Grid Control:
"Saving instance <INSTANCE_NAME> to repository and agent
https://<AGENT_NAME>:3872/emd/main/...-- exception caught when saving the <DB_NAME> to
agent https://<AGENT_NAME>:3872/emd/main/:
oracle.sysman.emSDK.emd.comm.OperationException: Skipping target {<DB_NAME>, rac_database}:
Missing properties - ClusterName ... finished."
This error occurs because the Grid Control Agent's targets.xml is missing the ClusterName property row.
To fix this problem:
Go to the Agent servers and take a backup of the file: $AGENT_HOME/<nodename>/sysman/emd/targets.xml
Then add the following row to the file (for example after the DBVersion row):
<Property NAME="ClusterName" VALUE="<CLUSTER_NAME>"/>
Finally, reload the agent on the Agent servers:
$AGENT_HOME/bin/emctl reload
After these steps configuration should work normally.
"Saving instance <INSTANCE_NAME> to repository and agent
https://<AGENT_NAME>:3872/emd/main/...-- exception caught when saving the <DB_NAME> to
agent https://<AGENT_NAME>:3872/emd/main/:
oracle.sysman.emSDK.emd.comm.OperationException: Skipping target {<DB_NAME>, rac_database}:
Missing properties - ClusterName ... finished."
This error is coming because Grid Control Agent targets.xml is missing ClusterName properties row.
To fix this problem:
Go into Agent servers and take backup of file: $AGENT_HOME/<nodename>/sysman/emd/targets.xml
And then add following row into file (for example after DBVersion row):
<Property NAME="ClusterName" VALUE="<CLUSTER_NAME>"/>
And in the end reload agent in Agent servers:
$AGENT_HOME/bin/emctl reload
After these steps configuration should work normally.
Thursday, March 14, 2013
Oracle tablespace space usage stats.
In Oracle you can check tablespace space usage stats in a couple of places.
You can use dba_data_files and dba_segments to check free space, used space, and total space, plus the percentage of free space. Here is an example:
SQL> select dfiles.tablespace_name "Tablespace", totalusedspace "Used MB",
(dfiles.totalspace - tusage.totalusedspace) "Free MB",
dfiles.totalspace "Total MB",
round(100 * ( (dfiles.totalspace - tusage.totalusedspace)/ dfiles.totalspace)) "Free %"
from
(select tablespace_name,
round(sum(bytes) / 1048576) TotalSpace
from dba_data_files
group by tablespace_name) dfiles,
(select round(sum(bytes)/(1024*1024)) totalusedspace, tablespace_name
from dba_segments
group by tablespace_name) tusage
where dfiles.tablespace_name = tusage.tablespace_name ;
Tablespace Used MB Free MB Total MB Free %
------------------------------ ---------- ---------- ---------- ----------
TEST2_DATA 3 253 256 99
SYSAUX 1094 56 1150 5
UNDOTBS1 145 2070 2215 93
TEST_DATA 1369 443 1812 24
Or you can use dba_free_space to check the free (usable) space in a tablespace. Here is an example:
SQL> select tablespace_name , sum(bytes)/1024/1024 "free space Mb" from dba_free_space group by tablespace_name;
TABLESPACE_NAME free space Mb
------------------------------ -------------
TEST2_DATA 251.625
SYSAUX 55.25
UNDOTBS1 1928.0625
TEST_DATA 573.25
NOTE!! (in Oracle 10/11) If you see differences between these two selects (the free space figures may differ), that is because dba_free_space does not count data sitting in the recycle bin, while dba_segments does. When you drop a table or index, it goes into the recycle bin (it is still in the same tablespace), but the space used by the recycle bin can be reused for new data when needed, and this is why dba_free_space does not show it as used space. You can clean the recycle bin with purge.
For certain user recyclebin:
SQL> purge recyclebin;
All recyclebins (with user that have privileges):
SQL> purge dba_recyclebin;
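To see what is currently sitting in the recycle bin before purging, you can query dba_recyclebin, for example:
SQL> select owner, object_name, original_name, droptime from dba_recyclebin;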
Tuesday, March 12, 2013
Oracle create temporary tablespace group
Since Oracle 10 you can create temporary tablespace groups. A group can hold multiple temporary tablespaces, each containing multiple tempfiles. You can also set a temporary tablespace group as your default temp tablespace.
When you use a temporary tablespace group, Oracle distributes temp operations across all tablespaces in the group. If you use a single temp tablespace, Oracle won't distribute operations over several tempfiles even if the tablespace has more tempfiles in it. So with a temporary tablespace group you can spread the I/O from big sort operations over several tempfiles (and over different disks, if the tempfiles are on different disks).
# Create 2 new temp tablespaces in the temp tablespace group temp_group (this also creates the temporary tablespace group if it does not exist yet):
SQL> create temporary tablespace temp_g01_tbs
tempfile '/u01/app/oradata/orcl/TEMPg01.DBF' size 2G
extent management local uniform size 1M
tablespace group temp_group;
SQL> create temporary tablespace temp_g02_tbs
tempfile '/u01/app/oradata/orcl/TEMPg02.DBF' size 2G
extent management local uniform size 1M
tablespace group temp_group;
From the following views you can see the tempfiles and temporary tablespace groups:
SQL> select * from dba_temp_files;
SQL> select * from dba_tablespace_groups;
Here you can see the users' default tablespaces:
SQL> select username, default_tablespace, temporary_tablespace from dba_users order by username;
And here you can change the database default temporary tablespace to the temporary tablespace group you just created:
SQL> alter database default temporary tablespace temp_group;
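You can later remove a tablespace from the group by assigning it an empty group name (temp_g01_tbs is the tablespace from the example above):
SQL> alter tablespace temp_g01_tbs tablespace group '';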
Oracle which tables given table is using as foreign key tables.
With the following SQL you can check which tables the table you put into the SQL (as <TABLE_NAME>) references as foreign key tables:
SQL> select dbac.table_name,
dbac.constraint_name "Primary Constraint Name",
r.constraint_name "Referenced Constraint Name"
from dba_constraints dbac,
(select cons.constraint_name,
cons.r_constraint_name,
cons.table_name
from dba_constraints cons
where table_name='<TABLE_NAME>'
and constraint_type='R') r
where dbac.constraint_name=r.r_constraint_name;
Example:
SQL> select dbac.table_name,
dbac.constraint_name "Primary Constraint Name",
r.constraint_name "Referenced Constraint Name"
from dba_constraints dbac,
(select cons.constraint_name,
cons.r_constraint_name,
cons.table_name
from dba_constraints cons
where table_name='TEST_TABLE2'
and constraint_type='R') r
where dbac.constraint_name=r.r_constraint_name;
TABLE_NAME Primary Constraint Name
------------------------------ ------------------------------
Referenced Constraint Name
------------------------------
TEST_TABLE1 PKEY_TEST_TABLE1_TEST_TYPE
FKEY_TEST_TABLE2_TEST_TYPE
Oracle which tables are using given table as foreign key table.
With the following SQL you can check which tables are using the table you put into the SQL (as <TABLE_NAME>) as a foreign key table:
SQL> select table_name, constraint_name, status, owner
from all_constraints
where r_owner = '<SCHEMA_NAME>'
and constraint_type = 'R'
and r_constraint_name in
(
select constraint_name from all_constraints
where constraint_type in ('P', 'U')
and table_name = '<TABLE_NAME>'
and owner = '<SCHEMA_NAME>'
)
order by table_name, constraint_name;
Example:
SQL> select table_name, constraint_name, status, owner
from all_constraints
where r_owner = 'TEST_USER'
and constraint_type = 'R'
and r_constraint_name in
(
select constraint_name from all_constraints
where constraint_type in ('P', 'U')
and table_name = 'TEST_TABLE1'
and owner = 'TEST_USER'
)
order by table_name, constraint_name;
TABLE_NAME CONSTRAINT_NAME STATUS
------------------------------ ------------------------------ --------
OWNER
------------------------------
TEST_TABLE2 FKEY_TEST_TABLE2_TEST_TYPE ENABLED
TEST_USER
Thursday, March 7, 2013
Oracle kill all sessions from one schema.
Sometimes you need to kill all sessions from one schema. You can do that with the following SQL.
1.) Kill all sessions from one schema in a single instance:
SQL> set head off
SQL> spool kill_sessions.tmp
SQL> select 'alter system kill session '''||sid||','||serial#|| ''' immediate ;' from v$session where username='TEST_USER' ;
SQL> spool off
SQL> @kill_sessions.tmp
When you run the kill_sessions.tmp script it gives two "SP2-0734: unknown command beginning ..." errors, because the original select clause and the spool off command are in the file. You can ignore those, or remove those two lines from the tmp file before running it.
2.) Kill all sessions from one schema across the whole RAC cluster (all instances):
SQL> set head off
SQL> spool kill_sessions_rac.tmp
SQL> select 'alter system kill session '''||sid||','||serial#|| ',@' || inst_id || ''' immediate;' from gv$session where username='TEST_USER' ;
SQL> spool off
SQL> @kill_sessions_rac.tmp
When you run the kill_sessions_rac.tmp script it gives the same errors as the single-instance version.
And of course you can also run only the select clause and execute the kill statements it prints manually.
Thursday, February 28, 2013
Oracle 11.2.0.3 upgrade precheck error: "PRVF-4190 : Verification of the hosts config file failed"
When I was upgrading an Oracle 11.2.0.1 RAC database to 11.2.0.3 in a Linux RHEL5 environment, the prerequisite checks gave the following error:
"PRVF-4190 : Verification of the hosts config file failed"
This means that there is something wrong in the /etc/hosts file. Following MOS (My Oracle Support) ID gives more info about this:
[ID 1056025.1]
But in this case I was using the same /etc/hosts file in several other RAC installations, both 11.2.0.1 and 11.2.0.3. All of those were working nicely and none of them gave this error during installation. So it seems that the upgrade does this check differently than a clean install of 11.2.0.3. (In my case the failure is probably because there are a lot of IPv6 addresses in the hosts file, though the hosts file is still valid and working fine.)
I made an SR about this and the response was that I could ignore the error because the same file was working fine with other databases. So I ignored it and ran the upgrade. The upgrade went OK, but after the root scripts, cluvfy (Oracle Cluster Verification Utility) failed with the same hosts file error. I checked the installActions log file and there were no errors other than the hosts file error, so I hit the skip button, finished the installation, and everything is working just fine.
So the point here is that if the upgrade precheck fails because of the hosts file (PRVF-4190), it can be ignored, provided the check does not fail on clean installations and the hosts file is valid and working fine elsewhere.
"PRVF-4190 : Verification of the hosts config file failed"
This means that there is something wrong in the /etc/hosts file. Following MOS (My Oracle Support) ID gives more info about this:
[ID 1056025.1]
But in this time I was using same /etc/hosts file in several other RAC installations both 11.2.0.1 and 11.2.0.3 . And all of those were working nicely and none of those gaved this error when installing them. So it seems that upgrade is doing this check some other way than pure install of 11.2.0.3. (In my case this fail is probably because there is a lot of IPv6 addresses in hosts file. Thought that hosts file is still valid and working fine.)
I made SR about this and response was that I can ignore error because same file is working fine with other databases. So I ignored it and run upgrade. Upgrade went ok but after root scripts cluvfy (Oracle Cluster Verification Utility) was failing because the same error about hosts file. I checked the installAction log file and there was no other errors than hosts file error. So I hit the skip button and then finished installation and everything is working just fine.
So the point here is that if upgrade precheck is failing because of the hosts file (PRVF-4190) it can be ignored if it is not failing with clean installations and the hosts file is valid+working fine elsewhere.
Copy Archivelogs from ASM to disk via asmcmd
Copy current archivelogs via asmcmd (as the oracle user):
export ORACLE_HOME=/u01/app/11.2.0/grid
export ORACLE_SID=+ASM1
asmcmd
ASMCMD> ls -la
ASMCMD> cd TEST_TESTDB_ARCH0/
ASMCMD> cd TESTDB/
ASMCMD> cd ARCHIVELOG/
ASMCMD> cd 2012_02_28/
# This gives you latest logs in the end of the listing:
ASMCMD> ls -lat --reverse
ASMCMD> cp '+TEST_TESTDB_ARCH0/TESTDB/ARCHIVELOG/2012_02_28/thread_2_seq_144810.3108.808567851' /backup/export/thread_2_seq_144810.3108.808567851
NOTE! You can use the cp command to (see the example after the list):
----
-Copy files from a disk group to the operating system
-Copy files from a disk group to a disk group
-Copy files from the operating system to a disk group
----
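For example, copying a file from the operating system back into a disk group looks like this (the target alias path is just an illustration; adjust the disk group and directory to your environment):
ASMCMD> cp /backup/export/thread_2_seq_144810.3108.808567851 +TEST_TESTDB_ARCH0/TESTDB/ARCHIVELOG/restored_thread_2_seq_144810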
Copy Archivelogs from ASM to disk via RMAN
Copy archivelogs from ASM to the desired disk via RMAN (as the oracle user):
# This makes a backup of the file, so it is not a plain copy.
export ORACLE_HOME=/u01/app/oracle/product/11.2.0/dbhome_1
export ORACLE_SID=testdb1
rman
RMAN> connect target /
# This gives you latest logs in the end of the listing:
RMAN> LIST ARCHIVELOG ALL;
# And then copy desired:
RMAN> copy archivelog '+TEST_TESTDB_ARCH0/TESTDB/ARCHIVELOG/thread_2_seq_144809.675.808567493' to '/backup/export/thread_2_seq_144809.675.808567493';
Starting backup at 20120228-101014
using channel ORA_DISK_1
channel ORA_DISK_1: starting archived log copy
input archived log thread=2 sequence=144809 RECID=380790 STAMP=808567494
output file name=/backup/export/thread_2_seq_144809.675.808567493 RECID=380792 STAMP=808567817
channel ORA_DISK_1: archived log copy complete, elapsed time: 00:00:03
Finished backup at 20120228-101018
Sunday, February 24, 2013
Oracle 11 and interval partitioning
If you used Oracle range partitioning before Oracle 11, for example month-based partitions, you had to create each new month's partition yourself. Oracle 11 introduced interval partitioning, which can automatically create new partitions for the desired interval, for example each month.
Here is a simple example of a range partition with an interval (there are a lot of other parameters to use with partitions, but this example just shows the interval):
SQL> CREATE TABLE test_data ( test_id number not null, test_name varchar2(32) not null, test_day date not null ) partition by range (test_day) interval (numtoyminterval(1, 'MONTH')) ( partition before_int_1_jan_2012 values less than (to_Date('01-01-2012', 'dd-mm-yyyy')));
The last partition defined before the interval takes over is called the transition point. Beyond it, interval partitions are created automatically. For example, if you insert data into the test_data table and the test_day value points to the next month, that month's partition is created and the data is inserted into it.
It is also possible to alter older range-partitioned tables to use an interval with an alter table clause:
SQL> ALTER TABLE test_data2 SET interval (numtoyminterval(1, 'MONTH'));
Remember that interval partitioning creates partitions with system-generated names (you can rename these manually if you want; see the example after the query). You can check the intervals from the user_tab_partitions view, for example this way:
SQL> SELECT partition_name, high_value FROM USER_TAB_PARTITIONS WHERE table_name='TEST_DATA';
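For example, renaming a system-generated partition (SYS_P41 is a hypothetical name; check the real one with the query above):
SQL> ALTER TABLE test_data RENAME PARTITION SYS_P41 TO data_feb_2012;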
You can also query the partition for a certain date with the following clause:
SQL> SELECT * FROM test_data partition for (to_date('01-02-2012', 'dd-mm-yyyy'));
NOTE! Interval partitioning can be used only with date or number columns, it can have only a single partitioning key column, and it does not support subpartitions.
Saturday, February 23, 2013
Oracle 11 Encrypted tablespaces
Starting with Oracle 11 you can create encrypted tablespaces.
But you can't alter an existing tablespace to encrypted, so if you need to
encrypt a tablespace you always have to create a new one with encryption on.
Encrypted tablespaces are created like this:
SQL> CREATE TABLESPACE test_enc_ts DATAFILE '/data/oradata/testdb/test_enc_ts_01.dbf'
size 512M encryption using '3DES168' default storage (encrypt);
If you don't use the "using '<encryption_algorithm>'" clause, then the
default algorithm (AES128) is used.
You can check if tablespace is encrypted from DBA_TABLESPACES view ENCRYPTED column.
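For example (note that creating an encrypted tablespace requires a configured and open Oracle wallet):
SQL> SELECT tablespace_name, encrypted FROM dba_tablespaces WHERE tablespace_name = 'TEST_ENC_TS';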
Tablespace encryption will slow down DML and SQL operations somewhat, but it
allows you to use indexes on all columns in it. If you encrypt only a single column
in a table, Oracle won't use an index on that column. So in those cases
tablespace encryption is better than single-column encryption.
Oracle 11 Password_life_time
It is good to remember that in Oracle 11 the PASSWORD_LIFE_TIME resource in the default profile is 180 days. So if you create a new user in Oracle 11 and do not assign another profile to that user, the new user's password will expire in 180 days. You can change this default profile setting like this:
SQL> ALTER PROFILE default LIMIT PASSWORD_LIFE_TIME UNLIMITED;
You can check all profile limits from here:
SQL> SELECT * FROM dba_profiles;
Remember also that when you create a new database in Oracle 11, all its users get the default profile by default, so they have PASSWORD_LIFE_TIME 180 by default.
Oracle 11 passwords and upgrade
In Oracle 11 passwords are case sensitive by default. When you upgrade an older database to Oracle 11, passwords remain case-insensitive, but once you change them they become case sensitive.
You can check the PASSWORD_VERSIONS column of the DBA_USERS view. If a user's password version there is lower than 11, the password is not case sensitive; if it is 11 or above, it is case sensitive by default. And if the column shows both 10G and 11G, the password was assigned in 10g and then upgraded to 11g (case sensitive):
SQL> SELECT username, password_versions FROM DBA_USERS;
If you want to turn case sensitivity off, set the "sec_case_sensitive_logon" parameter to FALSE:
SQL> ALTER SYSTEM SET sec_case_sensitive_logon=false ;
Friday, February 15, 2013
ORA-12502 TNS:listener received no CONNECT_DATA from client on Oracle 11g RAC environment.
If you get "TNS-12502: TNS:listener received no CONNECT_DATA from client" errors in your listener.log or in your SCAN listener logs. Or some client connection gives "ORA - 12502 TNS:listener received no CONNECT_DATA from client" errors (check client sqlnet.log for these). Then you know there is some kind of connection problem going on.
There is troubleshooting guide for this kind of problems in MOS (My Oracle Support) (this advice to set sqlnet listener tracing on if errors are coming rarely.This way you can get more info about problematic connection (where it is coming etc.) But if these errors are coming frequently then it is not wise to set listener tracing on because you will get huge log file. If you get this error from multiple clients then your application probably provide invalid connection string.):
MOS document: [ID 453505.1]
If you have a RAC environment, you can also read the following MOS (My Oracle Support) document (it helps you check that your remote and local listener settings are correct):
MOS document:[ID 453293.1]
If everything else looks OK, the problem is probably in the firewall/network settings. Check that you have opened both the SCAN IPs and the VIPs from client to server. If you open only the SCAN IPs you can get these errors, because RAC internally also uses the VIPs.
Oracle Critical Patch Updates (CPUs) are nowadays Security Patch Updates (SPUs).
Starting from October 2012, Oracle's Critical Patch Updates (CPU) are called Security Patch Updates (SPU). SPU patches are cumulative, so a new SPU patch contains all previous patches and you don't need to install previous CPU/SPU patches before installing a new SPU. The patches are still released as part of the overall Oracle Critical Patch Update (CPU) program:
CPU/SPU patches in Oracle technet
NOTE! Windows Oracle installations have their own patch bundles; SPUs are not used in that environment.
More info about Oracle patch types:
My Oracle Support Document 1430923.1
Saturday, January 19, 2013
Oracle 11.1 and above read-only tables
In Oracle 11.1 and above you can use the alter table clause to change a table to read-only or read-write mode.
Change table to read-only:
SQL> alter table test_table read only;
After this, users can't run deletes, inserts, or updates against the table.
Change table to read-write:
SQL> alter table test_table read write;
After this, users can again run deletes, inserts, or updates against the table.
You can check the table mode from the user_tables, all_tables, or dba_tables views (check the READ_ONLY column).
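A minimal check for the example table above:
SQL> select table_name, read_only from user_tables where table_name = 'TEST_TABLE';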
Invisible Index Oracle 11.1 and above.
Starting in Oracle 11.1 it is possible to test an index before all sessions start to use it. It is also possible to test the effect of dropping an index before actually dropping it. Both can be done with the invisible index feature, which lets you make the optimizer see an index only in the session you are using.
Here are a couple of examples of how to do this:
Create invisible index:
SQL> CREATE INDEX test_invis_idx ON test_table (column_name) INVISIBLE;
There is a new init parameter, 'optimizer_use_invisible_indexes', which controls whether the optimizer sees invisible indexes. Its default value is false (so the optimizer does not see invisible indexes by default, and because of this queries do not use such indexes). You can set this parameter to true for the session you are using and test the new index with only that session:
SQL> ALTER SESSION SET optimizer_use_invisible_indexes=true;
Then run queries that use the new index, and if everything works as you want, change the index to visible; all sessions then start to use the new index because the optimizer sees it (all new indexes are visible by default):
SQL> ALTER INDEX test_invis_idx VISIBLE;
In the same way you can test the effect of dropping an index. First change the index you'd like to drop to invisible and see if queries start to perform poorly. If they do, change the index back to visible; if they work fine without the index, you can drop it safely.
SQL> ALTER INDEX test_invis_idx INVISIBLE;
SQL> DROP INDEX test_invis_idx;
You can check the index state from the user_indexes, all_indexes, or dba_indexes views like this (note that the index name is stored in uppercase):
SQL> select index_name, visibility from user_indexes where index_name='TEST_INVIS_IDX';
INDEX_NAME VISIBILIT
------------------------------ ---------
TEST_INVIS_IDX VISIBLE
Sunday, January 6, 2013
Lock table clause changes in Oracle 11.1 and above.
In Oracle 11.1 and above you can set a wait time for the LOCK TABLE clause.
The default mode for the lock table clause is WAIT, which makes it wait as long as it takes to get the lock on the table. But you can also give a wait time to the lock table clause with a command like this:
LOCK TABLE <table_name> IN <lock_mode> MODE WAIT <seconds to wait>;
For example:
SQL> LOCK TABLE test1 IN exclusive MODE WAIT 60;
And if you use NOWAIT instead of WAIT, the lock table clause returns an error straight away if it cannot get the table lock.
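For example, a NOWAIT attempt against a table locked by another session fails immediately (with ORA-00054 "resource busy and acquire with NOWAIT specified"):
SQL> LOCK TABLE test1 IN exclusive MODE NOWAIT;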