Autonomous Health Framework – Missing node on TFA

Hi all,

Welcome to the second post of this series!

Yesterday we had one about Autonomous Health Framework basics, and tomorrow we'll have another one!

For today: when installing AHF manually (that is, when the root user cannot ssh to the other cluster nodes) the installations cannot talk to each other, so not all cluster nodes end up being seen by all TFA installations.

In my example, I have a 3-node cluster on which I was doing the manual install. The first two nodes could see each other, but TFA on the 3rd node was not able to see the other two.

Here are the steps of the manual installation:

  • Node 1
[root@servertst01 oraadm]# /oraadm/home/oracrs/ahf_setup -ahf_loc /oraadm/dba/ahf -data_dir /oraadm/dba/ahf/data

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_178647_2020_05_08-16_24_07.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 20.1.3 Build Date: 202004290950

TFA is already installed at : /oraadm/oracrs/product/19.0.0/tfa/servertst01/tfa_home

Installed TFA Version : 192000 Build ID : 20190426041420

AHF Location : /oraadm/dba/ahf/oracle.ahf

AHF Data Directory : /oraadm/dba/ahf/oracle.ahf/data

Shutting down TFA : /oraadm/oracrs/product/19.0.0/tfa/servertst01/tfa_home

Copying TFA Data Files from /oraadm/oracrs/product/19.0.0/tfa/servertst01/tfa_home

Uninstalling TFA : /oraadm/oracrs/product/19.0.0/tfa/servertst01/tfa_home

Do you want to add AHF Notification Email IDs ? [Y]|N : n

Login using root is disabled in sshd config. Installing AHF only on Local Node

Extracting AHF to /oraadm/dba/ahf/oracle.ahf

Configuring TFA Services

Copying TFA Data Files to AHF

Discovering Nodes and Oracle Resources

Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.

.---------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID |
+-------------+---------------+--------+------+------------+----------------------+
| servertst01 | RUNNING | 189966 | 5000 | 20.1.3.0.0 | 20130020200429095054 |
'-------------+---------------+--------+------+------------+----------------------'

Running TFA Inventory...

Adding default users to TFA Access list...

.--------------------------------------------------------------------.
| Summary of AHF Configuration |
+-----------------+--------------------------------------------------+
| Parameter | Value |
+-----------------+--------------------------------------------------+
| AHF Location | /oraadm/dba/ahf/oracle.ahf |
| TFA Location | /oraadm/dba/ahf/oracle.ahf/tfa |
| Orachk Location | /oraadm/dba/ahf/oracle.ahf/orachk |
| Data Directory | /oraadm/dba/ahf/oracle.ahf/data |
| Repository | /oraadm/dba/ahf/oracle.ahf/data/repository |
| Diag Directory | /oraadm/dba/ahf/oracle.ahf/data/servertst01/diag |
'-----------------+--------------------------------------------------'


Starting orachk daemon from AHF ...

AHF binaries are available in /oraadm/dba/ahf/oracle.ahf/bin

AHF is successfully installed

Moving /tmp/ahf_install_178647_2020_05_08-16_24_07.log to /oraadm/dba/ahf/oracle.ahf/data/servertst01/diag/ahf/

  • Node 2

[root@servertst02 oraadm]# /oraadm/home/oracrs/ahf_setup -ahf_loc /oraadm/dba/ahf -data_dir /oraadm/dba/ahf/data

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_236841_2020_05_08-16_27_47.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 20.1.3 Build Date: 202004290950

TFA is already installed at : /oraadm/oracrs/product/19.0.0/tfa/servertst02/tfa_home

Installed TFA Version : 192000 Build ID : 20190426041420

[ERROR] : AHF-00009: AHF Location directory [/oraadm/dba/ahf] not found

[root@servertst02 oraadm]# mkdir -p /oraadm/dba/ahf/data
[root@servertst02 oraadm]# /oraadm/home/oracrs/ahf_setup -ahf_loc /oraadm/dba/ahf -data_dir /oraadm/dba/ahf/data

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_237428_2020_05_08-16_27_59.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 20.1.3 Build Date: 202004290950

TFA is already installed at : /oraadm/oracrs/product/19.0.0/tfa/servertst02/tfa_home

Installed TFA Version : 192000 Build ID : 20190426041420

AHF Location : /oraadm/dba/ahf/oracle.ahf

AHF Data Directory : /oraadm/dba/ahf/oracle.ahf/data

Shutting down TFA : /oraadm/oracrs/product/19.0.0/tfa/servertst02/tfa_home

Copying TFA Data Files from /oraadm/oracrs/product/19.0.0/tfa/servertst02/tfa_home

Uninstalling TFA : /oraadm/oracrs/product/19.0.0/tfa/servertst02/tfa_home

Do you want to add AHF Notification Email IDs ? [Y]|N : n

Login using root is disabled in sshd config. Installing AHF only on Local Node

Extracting AHF to /oraadm/dba/ahf/oracle.ahf

Configuring TFA Services

Copying TFA Data Files to AHF

Discovering Nodes and Oracle Resources

Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.

.---------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID |
+-------------+---------------+--------+------+------------+----------------------+
| servertst02 | RUNNING | 249057 | 5000 | 20.1.3.0.0 | 20130020200429095054 |
| servertst01 | RUNNING | 189966 | 5000 | 20.1.3.0.0 | 20130020200429095054 |
'-------------+---------------+--------+------+------------+----------------------'

Running TFA Inventory...

Adding default users to TFA Access list...

.--------------------------------------------------------------------.
| Summary of AHF Configuration |
+-----------------+--------------------------------------------------+
| Parameter | Value |
+-----------------+--------------------------------------------------+
| AHF Location | /oraadm/dba/ahf/oracle.ahf |
| TFA Location | /oraadm/dba/ahf/oracle.ahf/tfa |
| Orachk Location | /oraadm/dba/ahf/oracle.ahf/orachk |
| Data Directory | /oraadm/dba/ahf/oracle.ahf/data |
| Repository | /oraadm/dba/ahf/oracle.ahf/data/repository |
| Diag Directory | /oraadm/dba/ahf/oracle.ahf/data/servertst02/diag |
'-----------------+--------------------------------------------------'


Starting orachk daemon from AHF ...

AHF binaries are available in /oraadm/dba/ahf/oracle.ahf/bin

AHF is successfully installed

Moving /tmp/ahf_install_237428_2020_05_08-16_27_59.log to /oraadm/dba/ahf/oracle.ahf/data/servertst02/diag/ahf/

[root@servertst02 oraadm]# ls /oraadm/dba/ahf/oracle.ahf/data/servertst02/

  • Node 3

[root@servertst03 oraadm]# /oraadm/home/oracrs/ahf_setup -ahf_loc /oraadm/dba/ahf -data_dir /oraadm/dba/ahf/data

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_129104_2020_05_08-16_31_08.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 20.1.3 Build Date: 202004290950

TFA is already installed at : /oraadm/oracrs/product/19.0.0/tfa/servertst03/tfa_home

Installed TFA Version : 192000 Build ID : 20190426041420

AHF Location : /oraadm/dba/ahf/oracle.ahf

AHF Data Directory : /oraadm/dba/ahf/oracle.ahf/data

Shutting down TFA : /oraadm/oracrs/product/19.0.0/tfa/servertst03/tfa_home

Copying TFA Data Files from /oraadm/oracrs/product/19.0.0/tfa/servertst03/tfa_home

Uninstalling TFA : /oraadm/oracrs/product/19.0.0/tfa/servertst03/tfa_home

Do you want to add AHF Notification Email IDs ? [Y]|N : n

Login using root is disabled in sshd config. Installing AHF only on Local Node

Extracting AHF to /oraadm/dba/ahf/oracle.ahf

Configuring TFA Services

Copying TFA Data Files to AHF

Discovering Nodes and Oracle Resources

Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.

.---------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID |
+-------------+---------------+--------+------+------------+----------------------+
| servertst03 | RUNNING | 135279 | 5000 | 20.1.3.0.0 | 20130020200429095054 |
'-------------+---------------+--------+------+------------+----------------------'

Running TFA Inventory...

Adding default users to TFA Access list...

.--------------------------------------------------------------------.
| Summary of AHF Configuration |
+-----------------+--------------------------------------------------+
| Parameter | Value |
+-----------------+--------------------------------------------------+
| AHF Location | /oraadm/dba/ahf/oracle.ahf |
| TFA Location | /oraadm/dba/ahf/oracle.ahf/tfa |
| Orachk Location | /oraadm/dba/ahf/oracle.ahf/orachk |
| Data Directory | /oraadm/dba/ahf/oracle.ahf/data |
| Repository | /oraadm/dba/ahf/oracle.ahf/data/repository |
| Diag Directory | /oraadm/dba/ahf/oracle.ahf/data/servertst03/diag |
'-----------------+--------------------------------------------------'


Starting orachk daemon from AHF ...

AHF binaries are available in /oraadm/dba/ahf/oracle.ahf/bin

AHF is successfully installed

Moving /tmp/ahf_install_129104_2020_05_08-16_31_08.log to /oraadm/dba/ahf/oracle.ahf/data/servertst03/diag/ahf/

As you can see, node 2 was able to see the existing installation on node 1, but node 3 was not.

To resolve the issue, we need to use the syncnodes command:

[root@servertst01 bin]# ./tfactl status

.----------------------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status |
+-------------+---------------+--------+------+------------+----------------------+------------------+
| servertst01 | RUNNING | 189966 | 5000 | 20.1.3.0.0 | 20130020200429095054 | COMPLETE |
| servertst02 | RUNNING | 249057 | 5000 | 20.1.3.0.0 | 20130020200429095054 | COMPLETE |
'-------------+---------------+--------+------+------------+----------------------+------------------'
[root@servertst01 bin]# ./tfactl syncnodes

Login using root is disabled in sshd config. Please enable it or

Please copy these files manually to remote node and restart TFA
1. /oraadm/dba/ahf/oracle.ahf/data/servertst01/tfa/server.jks
2. /oraadm/dba/ahf/oracle.ahf/data/servertst01/tfa/client.jks
3. /oraadm/dba/ahf/oracle.ahf/data/servertst01/tfa/internal/ssl.properties

These files must be owned by root and should have 600 permissions.

I copied the files to the specified locations on node 3, stopped and started TFA, and then it was able to see all nodes in the cluster.
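
Since root cannot ssh between the nodes, the copy has to go through whatever account can. A rough sketch of what that looks like (the staging directory and the oracrs account are assumptions of this example; the destination paths assume the same layout under the node 3 data directory):

# Node 1 (as root): stage the files where a non-root user can read them
mkdir /tmp/tfa_sync && chmod 700 /tmp/tfa_sync
cp /oraadm/dba/ahf/oracle.ahf/data/servertst01/tfa/server.jks \
   /oraadm/dba/ahf/oracle.ahf/data/servertst01/tfa/client.jks \
   /oraadm/dba/ahf/oracle.ahf/data/servertst01/tfa/internal/ssl.properties /tmp/tfa_sync/
chown -R oracrs /tmp/tfa_sync
su - oracrs -c "scp -r /tmp/tfa_sync servertst03:/tmp/"

# Node 3 (as root): put them in place, then set ownership and permissions as instructed
cp /tmp/tfa_sync/server.jks /tmp/tfa_sync/client.jks /oraadm/dba/ahf/oracle.ahf/data/servertst03/tfa/
cp /tmp/tfa_sync/ssl.properties /oraadm/dba/ahf/oracle.ahf/data/servertst03/tfa/internal/
chown root:root /oraadm/dba/ahf/oracle.ahf/data/servertst03/tfa/server.jks \
                /oraadm/dba/ahf/oracle.ahf/data/servertst03/tfa/client.jks \
                /oraadm/dba/ahf/oracle.ahf/data/servertst03/tfa/internal/ssl.properties
chmod 600 /oraadm/dba/ahf/oracle.ahf/data/servertst03/tfa/server.jks \
          /oraadm/dba/ahf/oracle.ahf/data/servertst03/tfa/client.jks \
          /oraadm/dba/ahf/oracle.ahf/data/servertst03/tfa/internal/ssl.properties
rm -rf /tmp/tfa_sync   # clean up the staging copies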

[root@servertst03 bin]# ./tfactl statusahf

.----------------------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status |
+-------------+---------------+--------+------+------------+----------------------+------------------+
| servertst03 | RUNNING | 135279 | 5000 | 20.1.3.0.0 | 20130020200429095054 | COMPLETE |
'-------------+---------------+--------+------+------------+----------------------+------------------'

orachk scheduler is running [PID: 135279] [Version: 20.1.3]
[root@servertst03 bin]# ./tfactl stop
Stopping TFA from the Command Line
Stopped OSWatcher
Nothing to do !
Killing TFA running with pid 135279
. . .
Successfully stopped TFA..
[root@servertst03 bin]# ./tfactl start
Starting TFA..
Waiting up to 100 seconds for TFA to be started..
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
. . . . .
Successfully started TFA Process..
. . . . .
TFA Started and listening for commands
[root@servertst03 bin]# ./tfactl status

.----------------------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status |
+-------------+---------------+--------+------+------------+----------------------+------------------+
| servertst03 | RUNNING | 191886 | 5000 | 20.1.3.0.0 | 20130020200429095054 | COMPLETE |
| servertst01 | RUNNING | 189966 | 5000 | 20.1.3.0.0 | 20130020200429095054 | COMPLETE |
| servertst02 | RUNNING | 249057 | 5000 | 20.1.3.0.0 | 20130020200429095054 | COMPLETE |
'-------------+---------------+--------+------+------------+----------------------+------------------'

And that solved the issue!

I hope it helps you!

Elisson Almeida

Autonomous Health Framework – Installation and Usage Basics

Hi all,

So, this week we’ll have an Autonomous Health Framework series of posts! One post per day, stay tuned!

Today I’ll review some points of the Autonomous Health Framework and how it can help us with day-to-day tasks as well as with specific problem resolutions. This framework is a bundle of tools that were already available, like orachk/exachk and TFA, plus other features that I will try to cover, focusing on practical cases.

So, first of all, you need to install it. If you have a RAC cluster you most likely already have it, but it is always good to update it, as the Oracle security recommendations are also part of the orachk analysis. For reference: in an ideal world you have root access or sudo, since running its daemons as root provides wider log collection capabilities.

  1. Download the latest version from Autonomous Health Framework (AHF) – Including TFA and ORAchk/EXAChk (Doc ID 2550798.1)
  2. Transfer the file to your server, unzip it, and as root follow the installation as below:
[root@servertst01 oracrs]# ./ahf_setup -ahf_loc /oraadm/oracrs/product/19.0.0/ahf -data_dir /oraadm/oracrs/product/19.0.0/ahf/data

AHF Installer for Platform Linux Architecture x86_64

AHF Installation Log : /tmp/ahf_install_120962_2020_05_07-22_41_11.log

Starting Autonomous Health Framework (AHF) Installation

AHF Version: 20.1.3 Build Date: 202004290950

TFA is already installed at : /oraadm/oracrs/product/19.0.0/tfa/servertst01/tfa_home

Installed TFA Version : 192000 Build ID : 20190426041420

AHF Location : /oraadm/oracrs/product/19.0.0/ahf/oracle.ahf

AHF Data Directory : /oraadm/oracrs/product/19.0.0/ahf/oracle.ahf/data

Shutting down TFA : /oraadm/oracrs/product/19.0.0/tfa/servertst01/tfa_home

Copying TFA Data Files from /oraadm/oracrs/product/19.0.0/tfa/servertst01/tfa_home

Uninstalling TFA : /oraadm/oracrs/product/19.0.0/tfa/servertst01/tfa_home

Do you want to add AHF Notification Email IDs ? [Y]|N : n

Login using root is disabled in sshd config. Installing AHF only on Local Node

Extracting AHF to /oraadm/oracrs/product/19.0.0/ahf/oracle.ahf

Configuring TFA Services

Copying TFA Data Files to AHF

Discovering Nodes and Oracle Resources

Starting TFA Services
Created symlink from /etc/systemd/system/multi-user.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.
Created symlink from /etc/systemd/system/graphical.target.wants/oracle-tfa.service to /etc/systemd/system/oracle-tfa.service.

.---------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID |
+-------------+---------------+--------+------+------------+----------------------+
| servertst01 | RUNNING | 135195 | 5000 | 20.1.3.0.0 | 20130020200429095054 |
'-------------+---------------+--------+------+------------+----------------------'

Running TFA Inventory...

Adding default users to TFA Access list...

.--------------------------------------------------------------------------------------.
| Summary of AHF Configuration |
+-----------------+--------------------------------------------------------------------+
| Parameter | Value |
+-----------------+--------------------------------------------------------------------+
| AHF Location | /oraadm/oracrs/product/19.0.0/ahf/oracle.ahf |
| TFA Location | /oraadm/oracrs/product/19.0.0/ahf/oracle.ahf/tfa |
| Orachk Location | /oraadm/oracrs/product/19.0.0/ahf/oracle.ahf/orachk |
| Data Directory | /oraadm/oracrs/product/19.0.0/ahf/oracle.ahf/data |
| Repository | /oraadm/oracrs/product/19.0.0/ahf/oracle.ahf/data/repository |
| Diag Directory | /oraadm/oracrs/product/19.0.0/ahf/oracle.ahf/data/servertst01/diag |
'-----------------+--------------------------------------------------------------------'


Starting orachk daemon from AHF ...

AHF binaries are available in /oraadm/oracrs/product/19.0.0/ahf/oracle.ahf/bin

AHF is successfully installed

Moving /tmp/ahf_install_120962_2020_05_07-22_41_11.log to /oraadm/oracrs/product/19.0.0/ahf/oracle.ahf/data/servertst01/diag/ahf/

As you can see, since the root user is not allowed to ssh, the installation was done only on the local node. Repeat the process on the remaining nodes.

Once you have it completed, you will be able to run a tfactl status and a toolstatus and see the processes and tools available.
[root@servertst01 bin]# ./tfactl status

.----------------------------------------------------------------------------------------------------.
| Host | Status of TFA | PID | Port | Version | Build ID | Inventory Status |
+-------------+---------------+--------+------+------------+----------------------+------------------+
| servertst01 | RUNNING | 135195 | 5000 | 20.1.3.0.0 | 20130020200429095054 | COMPLETE |
| servertst02 | RUNNING | 241723 | 5000 | 20.1.3.0.0 | 20130020200429095054 | COMPLETE |
'-------------+---------------+--------+------+------------+----------------------+------------------'
[root@servertst01 bin]#
[root@servertst01 bin]# ./tfactl toolstatus

.------------------------------------------------------------------.
| TOOLS STATUS - HOST : servertst01 |
+----------------------+--------------+--------------+-------------+
| Tool Type | Tool | Version | Status |
+----------------------+--------------+--------------+-------------+
| Development Tools | orachk | 19.3.0.0.0 | DEPLOYED |
| | oratop | 14.1.2 | DEPLOYED |
+----------------------+--------------+--------------+-------------+
| Support Tools Bundle | darda | 2.10.0.R6036 | DEPLOYED |
| | oswbb | 8.3.2 | RUNNING |
| | prw | 12.1.13.11.4 | NOT RUNNING |
+----------------------+--------------+--------------+-------------+
| TFA Utilities | alertsummary | 19.3.0.0.0 | DEPLOYED |
| | calog | 19.3.0.0.0 | DEPLOYED |
| | dbcheck | 18.3.0.0.0 | DEPLOYED |
| | dbglevel | 19.3.0.0.0 | DEPLOYED |
| | grep | 19.3.0.0.0 | DEPLOYED |
| | history | 19.3.0.0.0 | DEPLOYED |
| | ls | 19.3.0.0.0 | DEPLOYED |
| | managelogs | 19.3.0.0.0 | DEPLOYED |
| | menu | 19.3.0.0.0 | DEPLOYED |
| | param | 19.3.0.0.0 | DEPLOYED |
| | ps | 19.3.0.0.0 | DEPLOYED |
| | pstack | 19.3.0.0.0 | DEPLOYED |
| | summary | 19.3.0.0.0 | DEPLOYED |
| | tail | 19.3.0.0.0 | DEPLOYED |
| | triage | 19.3.0.0.0 | DEPLOYED |
| | vi | 19.3.0.0.0 | DEPLOYED |
'----------------------+--------------+--------------+-------------'

Note :-
DEPLOYED : Installed and Available - To be configured or run interactively.
NOT RUNNING : Configured and Available - Currently turned off interactively.
RUNNING : Configured and Available.

[root@servertst01 bin]#

3. Using it you can ‘query’ all cluster nodes for information from a single point which can really speed up your information collection. Here is an example of the summary command:

[root@servertst01 bin]# ./tfactl summary -h
---------------------------------------------------------------------------------
Usage : TFACTL [run] summary -help
---------------------------------------------------------------------------------
Command : /oraptp/oracrs/product/19.0.0/ahf/oracle.ahf/tfa/bin/tfactl [run] summary [OPTIONS]
Following Options are supported:
        [no_components] : [Default] Complete Summary Collection
        -overview       : [Optional/Default] Complete Summary Collection - Overview
        -crs            : [Optional/Default] CRS Status Summary
        -asm            : [Optional/Default] ASM Status Summary
        -acfs           : [Optional/Default] ACFS Status Summary
        -database       : [Optional/Default] DATABASE Status Summary
        -exadata        : [Optional/Default] EXADATA Status Summary
                          Not enabled/ignored in Windows and Non-Exadata machine
        -patch          : [Optional/Default] Patch Details
        -listener       : [Optional/Default] LISTENER Status Summary
        -network        : [Optional/Default] NETWORK Status Summary
        -os             : [Optional/Default] OS Status Summary
        -tfa            : [Optional/Default] TFA Status Summary
        -summary        : [Optional/Default] Summary Tool Metadata

        -json           : [Optional] - Prepare json report
        -html           : [Optional] - Prepare html report
        -print          : [Optional] - Display [html or json] Report at Console
        -silent         : [Optional] - Interactive console by defauly
        -history <num>  : [Optional] - View Previous <numberof> Summary Collection History in Interpreter
        -node <node(s)> : [Optional] - local or Comma Separated Node Name(s)
        -help           : Usage/Help.
---------------------------------------------------------------------------------
  - Data Collection From Node - servertst02 .. Done.
[root@servertst01 bin]#

[root@servertst01 bin]# ./tfactl summary

  Executing Summary in Parallel on Following Nodes:
    Node : servertst01
    Node : servertst02

LOGFILE LOCATION : /oraadm/oracrs/product/19.0.0/ahf/oracle.ahf/data/repository/suptools/servertst01/summary/root/20200508124309/log/summary_command_20200508124309_servertst01_35756.log

  Component Specific Summary collection :
    - Collecting CRS details ... Done.
    - Collecting ASM details ... Done.
    - Collecting ACFS details ... Done.
    - Collecting DATABASE details ... Done.
    - Collecting PATCH details ... Done.
    - Collecting LISTENER details ... Done.
    - Collecting NETWORK details ... Done.
    - Collecting OS details ... Done.
    - Collecting TFA details ... Done.
    - Collecting SUMMARY details ... Done.

  Remote Summary Data Collection : In-Progress - Please wait ...
  - Data Collection From Node - servertst02 .. Done.

  Prepare Clusterwide Summary Overview ... Done
      cluster_status_summary

  COMPONENT   STATUS    DETAILS
+-----------+---------+---------------------------------------------------------------------------------------------------+
  CRS         PROBLEM   .-----------------------------------------------.
                        | CRS_SERVER_STATUS   : ONLINE                  |
                        | CRS_STATE           : ONLINE                  |
                        | CRS_INTEGRITY_CHECK : FAIL                    |
                        | CRS_RESOURCE_STATUS : OFFLINE Resources Found |
                        '-----------------------------------------------'
  ASM         PROBLEM   .-------------------------------------------------------.
                        | ASM_DISK_SIZE_STATUS : WARNING - Available Size < 20% |
                        | ASM_BLOCK_STATUS     : PASS                           |
                        | ASM_CHAIN_STATUS     : PASS                           |
                        | ASM_INCIDENTS        : PASS                           |
                        | ASM_PROBLEMS         : PASS                           |
                        '-------------------------------------------------------'
  ACFS        OFFLINE   .-----------------------.
                        | ACFS_STATUS : OFFLINE |
                        '-----------------------'
  DATABASE    PROBLEM   .-----------------------------------------------------------------------------------------------.
                        | ORACLE_HOME_DETAILS                                                        | ORACLE_HOME_NAME |
                        +----------------------------------------------------------------------------+------------------+
                        | .------------------------------------------------------------------------. | OraDb11g_home1   |
                        | | PROBLEMS | INCIDENTS | DB_BLOCKS | DATABASE_NAME | STATUS  | DB_CHAINS | |                  |
                        | +----------+-----------+-----------+---------------+---------+-----------+ |                  |
                        | | PROBLEM  | PROBLEM   | PASS      | FSIGNTST      | PROBLEM | PROBLEM   | |                  |
                        | '----------+-----------+-----------+---------------+---------+-----------' |                  |
                        '----------------------------------------------------------------------------+------------------'
  PATCH       OK        .----------------------------------------------.
                        | CRS_PATCH_CONSISTENCY_ACROSS_NODES      : OK |
                        | DATABASE_PATCH_CONSISTENCY_ACROSS_NODES : OK |
                        '----------------------------------------------'
  LISTENER    OK        .-----------------------.
                        | LISTNER_STATUS   : OK |
                        '-----------------------'
  NETWORK     OK        .---------------------------.
                        | CLUSTER_NETWORK_STATUS :  |
                        '---------------------------'
  OS          OK        .-----------------------.
                        | MEM_USAGE_STATUS : OK |
                        '-----------------------'
  TFA         OK        .----------------------.
                        | TFA_STATUS : RUNNING |
                        '----------------------'
  SUMMARY     OK        .-----------------------------------.
                        | SUMMARY_EXECUTION_TIME : 0H:2M:9S |
                        '-----------------------------------'
+-----------+---------+---------------------------------------------------------------------------------------------------+


        ### Entering in to SUMMARY Command-Line Interface ###

tfactl_summary>list

  Components : Select Component - select [component_number|component_name]
        1 => overview
        2 => crs_overview
        3 => asm_overview
        4 => acfs_overview
        5 => database_overview
        6 => patch_overview
        7 => listener_overview
        8 => network_overview
        9 => os_overview
        10 => tfa_overview
        11 => summary_overview

 

See you tomorrow at the next post of this series!

Elisson Almeida

 

19c: Could not execute DBMS_LOGMNR.START_LOGMNR: ORA-44609: CONTINOUS_MINE is desupported for use with DBMS_LOGMNR.START_LOGMNR.

Hi all,
This is to show you that we should never trust documentation 100%, and how the yearly release cadence can put additional pressure on the documentation and cause errors…

So, I started supporting a new tool for data mining. There were no version restrictions as per their documentation, so I was more than happy about creating a PDB on my brand new 19c CDB, proudly using 19c for this new tool!

What happened?

Could not execute DBMS_LOGMNR.START_LOGMNR: ORA-44609: CONTINOUS_MINE is desupported for use with DBMS_LOGMNR.START_LOGMNR.

Doing my validation against the database:

SQL> execute dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog + SYS.DBMS_LOGMNR.CONTINUOUS_MINE);
BEGIN dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog + SYS.DBMS_LOGMNR.CONTINUOUS_MINE); END;

*
ERROR at line 1:
ORA-44609: CONTINOUS_MINE is desupported for use with DBMS_LOGMNR.START_LOGMNR.

Why is this happening?

This happens because the CONTINUOUS_MINE feature has been deprecated since 12.2 and is desupported/unavailable from 19c onwards.

It seems this LogMiner tool's devs don't check what changes in new database versions before confirming compatibility, right?

And they haven't done it for a while, as this was announced as deprecated in 12.2…

Which makes me wonder how confident they are about the tool running at the other clients they have… LOL

Anyway, are you facing the same? Here is some reference documentation to answer your boss about:

From: 19.1 Announcement – ORA-44609: CONTINOUS_MINE is Desupported For Use With DBMS_LOGMNR.START_LOGMNR (Doc ID 2456684.1):

  • CONTINUOUS_MINE was deprecated in Oracle Database 12c release 2 (12.2) and starting with 19.1 is desupported. There is no replacement functionality.
  • The continuous_mine option for the dbms_logmnr.start_logmnr package is desupported in Oracle Database 19c (19.1), and is no longer available.

The real reasoning behind it: nothing populates V$LOGMNR_LOGS, so the ORA-1291 occurs.

Here is a quick test case for you, from the 19c version of the Oracle Database Utilities guide, “Mining without specifying the list of redo log files”. (19c – 22. Using LogMiner to Analyze Redo Log Files)

I just put it all together to show you how it does [not] work:

  • Setting up it all:
ALTER DATABASE add SUPPLEMENTAL LOG DATA;
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
EXECUTE DBMS_LOGMNR_D.BUILD (OPTIONS=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
@?/rdbms/admin/dbmslm.sql
alter system switch logfile;
alter system switch logfile;
alter system switch logfile;
create user BLABLA identified by BLABLA default tablespace users quota unlimited on users profile default;
grant connect, resource to BLABLA;
  • Doing some stuff to generate logs:
connect BLABLA/BLABLA
alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
set time on
CREATE TABLE TEST_NULLS (COLUMNA1 NUMBER(3,0));
ALTER TABLE TEST_NULLS ADD (COLUMNA2 NUMBER(3) DEFAULT 0 NOT NULL);
insert into TEST_NULLS(columna1) values (4);
commit;
select * from TEST_NULLS;
update TEST_NULLS set columna2=221 where columna1=4;
commit;
select * from TEST_NULLS;
  • Gathering info for mining:
connect / as sysdba;
set echo on
set serveroutput on
alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
set linesize 254
set pagesize 3000
column name format a40;
SELECT FILENAME name FROM V$LOGMNR_LOGS;
SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG;
SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES');
  • Introduce the first_time to recover for the previous query:
EXEC DBMS_LOGMNR.START_LOGMNR(STARTTIME =>'&1',ENDTIME => SYSDATE,OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTTIME =>'&1',ENDTIME => SYSDATE,OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
select FIRST_CHANGE#,NEXT_CHANGE# from V$archived_log;
SELECT CHECKPOINT_CHANGE#, CURRENT_SCN FROM V$DATABASE;
  • Use the checkpoint at startscn and the current_scn at endscn:
EXEC DBMS_LOGMNR.START_LOGMNR(STARTSCN =>&1,ENDSCN => &2,OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTSCN =>&1,ENDSCN => &2,OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTTIME =>SYSDATE, ENDTIME => SYSDATE +5/24,OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTTIME =>SYSDATE ,ENDTIME => SYSDATE +5/24 ,OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);

Ref: 19c – 22. Using LogMiner to Analyze Redo Log Files

  • You can expect this:
ERROR at line 1:
ORA-01291: missing log file
ORA-06512: at "SYS.DBMS_LOGMNR", line 72
ORA-06512: at line 1

Hmmmm… So, the 19c Documentation is not working? Precisely.

As per (Doc ID 2613490.1), this will be fixed in 20.1 documentation.

  • Section 22.13.2, “Examples of Mining Without Specifying the List of Redo Log Files Explicitly”, and its child example topics will be removed.
  • Section 22.4.2. Automatic Redo log Files options will be changed to Specifying Redo Log Files for Data Mining.
  • Section 22.7.4 The next sentence will be removed too.

In summary, every reference to automatic mining in the documentation will be removed, as this feature is not supported in version 19.1 and higher.

Ok, but this doesn’t solve my problem. What should I do with the client tool?

So, CONTINUOUS_MINE was the only way of starting LogMiner without first adding logfiles using DBMS_LOGMNR.ADD_LOGFILE.

What can you do to work around it?

  1. Add a logfile manually with dbms_logmnr.add_logfile for each logfile.
  2. Remove SYS.DBMS_LOGMNR.CONTINUOUS_MINE from the code.
    1. For this, you’ll need to specify the logfiles explicitly, so I guess some additional coding will be needed on your side; see the sketch below.
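
A minimal sketch of option 1, picking the file names from V$ARCHIVED_LOG (the file names below are hypothetical):

-- Register the redo/archived logs explicitly: the first one with NEW, the rest with ADDFILE
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/db1_1_1058.arc', OPTIONS => DBMS_LOGMNR.NEW);
EXECUTE DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/arch/db1_1_1059.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);

-- Start LogMiner without CONTINUOUS_MINE
EXECUTE DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);

-- Mine, then end the session when done
SELECT sql_redo FROM v$logmnr_contents WHERE seg_name = 'TEST_NULLS';
EXECUTE DBMS_LOGMNR.END_LOGMNR;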

I hope it helps!
Matheus.

Monitoring database storage over-committing with an OEM metric extension when using autoextend datafiles

Hi all,

Working on a project, while setting up a new environment for a client, I was asked to stop using autoextend on the database files, to avoid ASM diskgroup space issues.

The concern was that the free space might not be enough to handle the growth of all the datafiles, since people often add datafiles with autoextend on without checking the database and the storage as a whole, which could cause a bigger problem.

I really like autoextend, as it can prevent processes from failing, and more: it avoids wasting space, since in a multi-tablespace environment each tablespace grows differently and, as I mentioned, not all validations are done when adding space; we just want to resolve the issue and move on to the next one.

So how can we save space by using autoextend where it is needed, without being at risk of over-committing the storage?

The idea is to monitor the database's possible growth and compare it with the current free space in the ASM diskgroup used by the database.

So, to accomplish that, I used the MAXBYTES column in dba_data_files and dba_temp_files. In the example below we can see the current size in the BYTES column and MAXBYTES as the limit to which the file can grow. If MAXBYTES is 0, the file does not have autoextend on.

SYS@db011>select TABLESPACE_NAME,FILE_NAME,BYTES,MAXBYTES from dba_data_files where TABLESPACE_NAME like 'SYS%' or TABLESPACE_NAME='UNDOTBS1';

TABLESPACE_NAME FILE_NAME BYTES MAXBYTES
------------------------------ -------------------------------------------------------------------------- ---------- ------------
SYSTEM +ORA_DATA/db01/datafile/system.264.1032015007 2147483648 32212254720
SYSAUX +ORA_DATA/db01/datafile/sysaux.319.1032015009 2751463424 32212254720
UNDOTBS1 +ORA_DATA/db01/datafile/undotbs1.320.103201500 4294967296 0

So, to gather the maximum database size, I summed the greatest of BYTES and MAXBYTES for each file, used v$asm_diskgroup to grab the currently available space of the diskgroup set in the db_create_file_dest parameter, and finally subtracted the maximum database size from the available space:

select dg.free_mb - round((dbf.max + tempf.max), 0) as total_free
  from (select trunc(sum(greatest(maxbytes, bytes)/1024/1024)) as max
          from dba_data_files) dbf,
       (select trunc(sum(greatest(maxbytes, bytes)/1024/1024)) as max
          from dba_temp_files) tempf,
       (select free_mb
          from v$asm_diskgroup
         where name in (select ltrim(value, '+') from v$parameter
                         where name = 'db_create_file_dest')) dg;

Based on this we can create several ways to avoid the storage over-committing.

The one I used was to create a metric extension:

If the value of the above query is negative, it means the maximum database size is greater than the available space, and therefore the capacity plan for that database must be reviewed.

  1. To create the metric extension, go to the OEM menu: Enterprise -> Monitoring -> Metric Extensions -> Actions -> Create -> Metric Extension.
  2. In the General Properties page, select the target type: Cluster Database if it's RAC, or Database Instance otherwise.
  3. In the Adapter field, select “SQL”.
  4. In Collection Schedule, enable the metric, upload the data, and set the collection frequency according to your environment's needs.
  5. In the Adapter page, add the above query.
  6. In the Columns page, add a column for the result of the query.
    1. To fit my environment I only set the critical alert, raised when the value of the metric is below 0.
  7. Continue filling in the necessary information on the next pages, and test the new metric on a selected target. Afterwards, save the new metric.
  8. Prior to deploying it to any target, you need to save the metric as a deployable draft: select the created metric, go to the Actions menu, and select Save as Deployable Draft.

Now you can deploy the new metric to the targets.

Hope this can help!

Elisson Almeida

ALTERNATE Archive log destination configuration that can help you – at least it helped me =)

Hi all,

I was working on a database that was waiting on a backup policy to be configured on the NetBackup side, so basically it had no backups. No news here, right? =)

The dev team did not take long to hammer the database, which caused the archive log generation to spike.

They were planning to work all weekend long and I needed to make sure that the database was available.

I had a spare diskgroup that I could use when the FRA got full, but I did not want to monitor the space usage over the weekend just to change the archive log destination, and I also did not want to set it up ahead of time, which would double the archive logs by sending them to both locations and waste space.

So: one active location at a time, and if/when the 1st location filled up, it would switch to the secondary location.

The configuration that helped me here was the ALTERNATE attribute of log_archive_dest_n.

The parameter “Specifies an alternate archiving destination to be used when the original destination fails.”

If you want, you can read all the details of the parameter here.

Most configurations using this parameter are related to Data Guard standby setups, but it works in this scenario as well.

This is how I used it:
*I used the NOREOPEN attribute as I knew the space issue would not be resolved during the weekend.

alter system set log_archive_dest_1='location=use_db_recovery_file_dest noreopen alternate=log_archive_dest_2' scope=both sid='*';
alter system set log_archive_dest_2='location=+NEW_FLASH' scope=both sid='*';
alter system set log_archive_dest_state_2='ALTERNATE' scope=both sid='*';

On the database, you can see the status of the archive destinations:

select dest_id, dest_name, status from v$archive_dest_status where status <> 'INACTIVE';

DEST_ID DEST_NAME STATUS
---------- -------------------- ---------
1 LOG_ARCHIVE_DEST_1 VALID
2 LOG_ARCHIVE_DEST_2 UNKNOWN

I saw this error in the alert log when the space on the 1st destination was exhausted (along with some hiccups on the database's services):

ORA-17502: ksfdcre:4 Failed to create file +FLASH
ORA-15041: diskgroup "+FLASH" space exhausted
ORA-16038: log 8 sequence# 1059 cannot be archived
ORA-19504: failed to create file ""

Checking the log_archive_dest status, the 1st destination was disabled and only the 2nd one was valid; the hiccups were barely noticed and the database kept working the whole time.

select dest_id, dest_name, status from v$archive_dest_status where status <> 'INACTIVE';

DEST_ID DEST_NAME STATUS
---------- -------------------- ---------
1 LOG_ARCHIVE_DEST_1 DISABLED
2 LOG_ARCHIVE_DEST_2 VALID

When the space issue was resolved, all I needed to do was to enable the 1st location and set the 2nd one again to alternate.
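
A minimal sketch of that switch back (same destinations as before; assuming the original FRA location for destination 1):

alter system set log_archive_dest_state_1='ENABLE' scope=both sid='*';
alter system set log_archive_dest_state_2='ALTERNATE' scope=both sid='*';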

select dest_id, dest_name, status from v$archive_dest_status where status <> 'INACTIVE';

DEST_ID DEST_NAME STATUS
---------- -------------------- ---------
1 LOG_ARCHIVE_DEST_1 VALID
2 LOG_ARCHIVE_DEST_2 UNKNOWN

Thanks and hope it helps

Elisson Almeida

ASM: Process PPAx keeps disks from being dropped

So I was trying to remove some no-longer-needed disks from a cluster: first I dismounted the diskgroups on the other cluster nodes, and from the last one I executed DROP DISKGROUP DG_NAME INCLUDING CONTENTS. Nothing new, right? But the SA told me that the disks were still being accessed.

I checked with kfod and kfed, and also using asmcmd lsdsk -g --candidate: the disks were there, available to be used in another diskgroup, and they were not part of any diskgroup.

I tried to check using asmcmd lsod but it was not returning anything; using lsof and fuser I was able to see the process holding the disks, which was the ASM process PPA7:
*the disks that I was looking for were from db5*

ASMCMD> lsod --process ppa
Instance Process OSPID Path
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db1_data1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db1_data3p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db1_flash1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db1_redos1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db1_redos6p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db1_redos7p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db2_data1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db2_data3p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db2_flash1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db2_redos1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db2_redos2p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db3_data1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db3_data6p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db3_flash1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db3_redos1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db3_redos2p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ocrvoting2p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ocrvoting5p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_data1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_data2p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_flash1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_redo1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_redo2p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_redo3p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_ora_redo4p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db4_data1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db4_data6p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db4_flash1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db4_flash2p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db4_redos1p1
1 oracle@server01 (PPA7) 38908 /dev/mapper/mp_db4_redos2p1

[oracrs@server01 ~]$ fuser /dev/mapper/mp_db5_redos2p1
/dev/dm-99: 38908
[oracrs@server01 ~]$ lsof /dev/mapper/mp_db5_redos2p1
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
asm_ppa7_ 38908 oracrs 270u BLK 253,99 0t0 70663 /dev/mapper/../dm-99

Checking MOS, I found some notes like Database Process Hold Locks Dropped ASM Disk (Doc ID 2159941.1), but they still did not help me, as there was no workaround for the issue.

So basically I needed to kill the process, but could I? It was a production system, and killing ASM processes is not something one should take lightly.

In the documentation, the PPA process is listed under the parallel query slave processes, which made my life easier, right? Not really.

How can I make sure that killing a process would not make the instance go down?

There is a script and blog post from Tanel Poder which also helped me, and it is worth reading:

Oracle Database 19c Enterprise Edition Release 19.0.0.0.0 - Production
Version 19.6.0.0.0

SYS@+ASM1>SELECT indx,ksuprpnm,TO_CHAR(ksuprflg,'XXXXXXXXXXXXXXXX')
FROM x$ksupr
WHERE BITAND(ksuprflg,4) = 4 ORDER BY indx
/ 2 3 4

INDX KSUPRPNM TO_CHAR(KSUPRFLG,
---------- ------------------------------------------------ -----------------
2 oracle@server01 (PMON) E
3 oracle@server01 (CLMN) E
4 oracle@server01 (PSP0) 6
5 oracle@server01 (VKTM) 6
6 oracle@server01 (GEN0) 6
7 oracle@server01 (MMAN) 6
13 oracle@server01 (PMAN) 6
15 oracle@server01 (LMON) 6
16 oracle@server01 (LMD0) 6
17 oracle@server01 (LMS0) 6
20 oracle@server01 (LCK1) 6
21 oracle@server01 (DBW0) 6
22 oracle@server01 (LGWR) 6
23 oracle@server01 (CKPT) 6
24 oracle@server01 (SMON) 6
25 oracle@server01 (LREG) 6
27 oracle@server01 (RBAL) 6
28 oracle@server01 (GMON) 6
31 oracle@server01 (IMR0) 6
32 oracle@server01 (LCK0) 6
38 oracle@server01 (ASMB) 6

Cutting the story short: as my process was not in the query output, I did some other testing and I was comfortable enough to kill it and release the disks to the SA team.
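
A sketch of how such a kill can be done, using the OS PID from the lsod/lsof output above (double-check the PID right before acting, and only escalate to SIGKILL if it does not exit):

# Confirm the PID still belongs to the PPA7 slave before doing anything
ps -fp 38908

# Ask it to exit politely first; escalate only if it survives
kill 38908
sleep 10
ps -fp 38908 && kill -9 38908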

Hope it helps

Elisson Almeida

Exadata: Cell Server Crashing on ORA-00600: [LinuxBlockIO::reap]

Hi all,

So I started facing this in a client environment. Here is the alert message:

Target name=db01cel08.xxx.com
Message=ORA-00600: internal error code, arguments: [LinuxBlockIO::reap], [0x60000D502388], [], [], [], [], [], [], [], [], [], []
Event reported time=Dec 19, 2019 2:14:16 AM EDT

When checking on the cellserver I see this message:

[root@db01 ~]# ssh db01cel08
Last login: Thu Dec 19 04:45:13 2019 from db01.xxx.com
[root@db01cel08 ~]# cellcli
CellCLI: Release 12.1.2.3.5 - Production on Fri Dec 19 17:13:31 EDT 2019

Copyright (c) 2007, 2016, Oracle. All rights reserved.

CellCLI> LIST ALERTHISTORY detail

[...]

name: 10
alertDescription: "ORA-07445: exception encountered: core dump [__intel_new_memset()+62] [11] [0x000000000] [] [] []"
alertMessage: "ORA-07445: exception encountered: core dump [__intel_new_memset()+62] [11] [0x000000000] [] [] []"
alertSequenceID: 10
alertShortName: ADR
alertType: Stateless
beginTime: 2019-12-19T02:00:04-04:00
endTime:
examinedBy:
notificationState: 1
sequenceBeginTime: 2019-12-19T02:00:04-04:00
severity: critical
alertAction: "Errors in file /opt/oracle/cell/log/diag/asm/cell/SYS_112331_170406/trace/cellofltrc_19796_53.trc (incident=25). Diagnostic package is attached. It is also accessible at https://db01cel08.xxx.com/diagpack/download?name=db01cel08_2019_12_19T02_00_04_10.tar.bz2 It will be retained on the storage server for 7 days. If the diagnostic package has expired, then it can be re-created at https://db01cel08.xxx.com/diagpack"

name: 11
alertDescription: "ORA-00600: internal error code, arguments: [LinuxBlockIO::reap], [0x60000D502388], [], [], [], [], [], [], [], [], [], []"
alertMessage: "ORA-00600: internal error code, arguments: [LinuxBlockIO::reap], [0x60000D502388], [], [], [], [], [], [], [], [], [], []"
alertSequenceID: 11
alertShortName: ADR
alertType: Stateless
beginTime: 2019-12-19T02:00:04-04:00
endTime:
examinedBy:
notificationState: 1
sequenceBeginTime: 2019-12-19T02:00:04-04:00
severity: critical
alertAction: "Errors in file /opt/oracle/cell/log/diag/asm/cell/db01cel08/trace/svtrc_9174_12.trc (incident=25). Diagnostic package is attached. It is also accessible at https://db01cel08.xxxx.com/diagpack/download?name=jdb01cel08_2019_12_19T02_00_04_11.tar.bz2 It will be retained on the storage server for 7 days. If the diagnostic package has expired, then it can be re-created at https://db01cel08.xxx.com/diagpack"

name: 12_1
alertDescription: "A SQL PLAN quarantine has been added"
alertMessage: "A SQL PLAN quarantine has been added. As a result, Smart Scan is disabled for SQL statements with the quarantined SQL plan. Quarantine id : 21 Quarantine type : SQL PLAN Quarantine reason : Crash Quarantine Plan : SYSTEM Quarantine Mode : FULL_Quarantine DB Unique Name : XPTODB Incident id : 25 SQLID : 8j0az9sgxs5yh SQL Plan details : {SQL_PLAN_HASH_VALUE=281152830, PLAN_LINE_ID=9} In addition, the following disk region has been quarantined, and Smart Scan will be disabled for this region: Disk Region : {Grid Disk Name=Unknown, offset=186750337024, size=1M} "
alertSequenceID: 12
alertShortName: Software
alertType: Stateful
beginTime: 2019-12-19T02:00:12-04:00
examinedBy:
metricObjectName: QUARANTINE/21
notificationState: 1
sequenceBeginTime: 2019-12-19T02:00:12-04:00
severity: critical
alertAction: "A SQL statement caused the Cell Server (CELLSRV) service on the cell to crash. A SQL PLAN quarantine has been created to prevent the same SQL statement from causing the same cell to crash. When possible, disable offload for the SQL statement or apply the RDBMS patch that fixes the crash, then remove the quarantine with the following CellCLI command: CellCLI> drop quarantine 21 All quarantines are automatically removed when a cell is patched or upgraded. For information about how to disable offload for the SQL statement, refer to the section about 'SQL Processing Offload' in Oracle Exadata Storage Server User's Guide. Diagnostic package is attached. It is also accessible at https://db01cel08.xxx.com/diagpack/download?name=db01cel08_2019_12_19T02_00_12_12_1.tar.bz2 It will be retained on the storage server for 7 days. If the diagnostic package has expired, then it can be re-created at https://db01cel08.xxx.com/diagpack"

CellCLI>

After some research, we could match the situation to Bug 13245134 – Query may fail with errors ORA-27618, ORA-27603, ORA-27626 or ORA-00600[linuxblockio::reap_1] or ora-600 [cacheput::process_1]

It’s also described as per: Exadata/SuperCluster: 11.2 databases missing fix for the bug 13245134 may lead to cell service crash with ora-600 [linuxblockio::reap_1]/ora-600 [cacheput::process_1] or ORA-27626: Exadata error: 242/Smart scan issues on the RDBMS side

In order to resolve the crashes quickly, I applied the patch online.
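
A sketch of the online apply (the staging path and connect string below are placeholders; check the patch README for the exact online-apply syntax on your system):

[oracle@db01 ~]$ cd /u01/stage/13245134
[oracle@db01 13245134]$ $ORACLE_HOME/OPatch/opatch apply online -connectString XPTODB1:SYS:<password>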

After applying, all got solved:

[oracle@db01 ~]$ /oracle/xptodb/product/11.2.0.4/OPatch/opatch lsinventory -OH $ORACLE_HOME | grep 13245134
Patch (online) 13245134: applied on Thu Dec 19 23:34:50 EST 2019
13245134
[oracle@db01 ~]$

Hope it helps!

Automatic Killing Inactive Sessions with Resource Manager

Hello All!

So, you are manually (or via script) killing idle sessions on your database?

As a consequence, your users are getting the error “ORA-00028: your session has been killed” and getting angry at you?

What about doing it automatically, in a much more graceful way, and being seen as a nicer DBA? You can do it using Resource Manager.

Are you already using Resource Manager on your database? Yes – Great!

No – Shame on you. Read this and put RM in place ASAP. It is a great tool for managing database resources, plus no additional licensing is needed. So go for it!

Ok, but what is the trick?
It's the max_idle_time limit. You can use it either on the existing groups or subgroups of your plan (or subplan), or switch sessions to dedicated “kill” groups; with this you can even reuse the same criteria you would use in any kill script. For that you might use the new_switch_group parameter and create a separate group only for those kills (a sketch of this approach follows the quick example below).

Having this option is nicer, by the way, if you want to avoid automatically killing sessions on the database, as you can always switch a session to the killing group manually, without requiring it to meet the criteria for automatic switching.

How? Quick example:

Creating new plan directives with MAX_IDLE_TIME:

begin
  dbms_resource_manager.create_plan_directive(
    plan             => '&RM_PLAN',
    group_or_subplan => 'LONG_RUN',
    comment          => 'Limit idle time to 5 minutes',
    max_idle_time    => 300);
  dbms_resource_manager.create_plan_directive(
    plan             => '&RM_PLAN',
    group_or_subplan => 'SHORT_RUN',
    comment          => 'Limit idle time to 1 minute',
    max_idle_time    => 60);
end;
/
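
And if you prefer the dedicated “kill group” approach mentioned above, here is a minimal sketch (the group name, SID and serial# are hypothetical; the pending area calls are required around these DBMS_RESOURCE_MANAGER changes):

begin
  dbms_resource_manager.create_pending_area();
  -- Consumer group used only for sessions we want idle-killed quickly (hypothetical name)
  dbms_resource_manager.create_consumer_group(
    consumer_group => 'KILL_IDLE',
    comment        => 'Sessions switched here are killed after 60 seconds idle');
  dbms_resource_manager.create_plan_directive(
    plan             => '&RM_PLAN',
    group_or_subplan => 'KILL_IDLE',
    comment          => 'Kill after 1 minute idle',
    max_idle_time    => 60);
  dbms_resource_manager.submit_pending_area();
end;
/

-- Manually move a specific session into the kill group (SID/serial# are examples).
-- Note: for automatic switching, the session's user also needs switch privilege on the
-- group (dbms_resource_manager_privs.grant_switch_consumer_group).
exec dbms_resource_manager.switch_consumer_group_for_sess(session_id => 1234, session_serial => 56789, consumer_group => 'KILL_IDLE')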

Cool!

And what would be the error for the user that gets their session killed?

ORA-3113 End of file on communication channel

Much nicer: Now you are a nice DBA and don’t kill sessions anymore.
You automated it!

Some additional recommendations:

  • Use this solution on databases at 11.2.0.4 / 12.1.0.2 or higher, due to some known bugs:
    • Bug 9523768 – IDLE SESSIONS AREN’T ACTUALLY KILLED IMMEDIATELY (affecting 11.2.0.1 to 11.2.0.3)
    • Bug 13837378 – ALTER SYSTEM KILL SESSION IMMEDIATE DOES NOT KILL QC SESSIONS (11.2.0.1 to 12.1.0.1)
    • Bug 8891495 – NON-IDLE SESSIONS ARE KILLED (11.2.0.1 only)
  • Some MOS references on it:
    • How To Automatic Kill Inactive Sessions using Resource Manager (Doc ID 1935739.1)
    • Using Resource Manager to Detect and Kill Idle Sessions (Doc ID 1557657.1)

Hope it helps!

Advice: Setting underScore/Hidden Parameters

Please be aware that hidden parameters (parameters starting with “_”) should not be set unless explicitly recommended by Oracle or in case of a perfect match to your case.

This is explained on MOS Best Practices and Guidelines for Setting underScore/Hidden Parameter to Workaround and/or Resolve Customers Problems (Doc ID 2524119.1)

So, in case any diagnostics lead you to setting one, it is suggested to:

  • Get the explicit approval/agreement from Oracle.
  • Include evidence on the root cause for this setting. A comment should work, something like:
alter system set "_replace_virtual_columns"=false Comment= 'Workaround Doc ID 1512347.1' SID= '*' scope= both;

Cheers!

ORA-20001: Statistics Advisor: Invalid task name for the current user

Hi all,
So, I start seeing on my alert log:

2019-12-26T00:42:03.543268-05:00
Errors in file /u01/app/oracle/diag/rdbms/cdb1/CDB1/trace/CDB1_j000_89793.trc:
ORA-12012: error on auto execute of job "SYS"."ORA$AT_OS_OPT_SY_36807"
ORA-20001: Statistics Advisor: Invalid task name for the current user
ORA-06512: at "SYS.DBMS_STATS", line 47207
ORA-06512: at "SYS.DBMS_STATS_ADVISOR", line 882
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 20059
ORA-06512: at "SYS.DBMS_STATS_INTERNAL", line 22201
ORA-06512: at "SYS.DBMS_STATS", line 47197

After some research I found Bug 22879263 – BETA 12.2 – ORA-12012, ORA-20001 AND ORA-06512 FROM ORA$AT_OS_OPT_SY_88 JOB.

This was supposed to be fixed in 12.2.0.1, however I'm still hitting it. Anyway, I executed the below and it got fixed:

EXEC dbms_stats.init_package();
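
To confirm the advisor tasks are back after running it, something like this can be used (a sketch; it queries an internal SYS table, so run it as SYS):

SELECT name, ctime, how_created
  FROM sys.wri$_adv_tasks
 WHERE name IN ('AUTO_STATS_ADVISOR_TASK', 'INDIVIDUAL_STATS_ADVISOR_TASK');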

Ref: Recurring ORA-12012, ORA-20001, ORA-06512 In Container Database (Doc ID 2420581.1)

Hope it helps!