Oracle Groundbreakers Yatra 2020 (AIOUG) – Online – CFP is Open!

Hi all,

Just spreading the word about this tour I've always wanted to attend; maybe now that it's online I can be there!

It is going to happen from 1st to 15th July 2020.

From the organization committee:

The health of All India Oracle Users Community (AIOUG) is our primary concern. Considering global precautions for the COVID-19 Coronavirus, and building upon recommendations from the World Health Organization, AIOUG is taking a new approach to its Oracle Groundbreaker Yatra event. The event is a highly concentrated 15-day collaboration and transformation while providing the deep technical education needed for our Indian Oracle Community.

We will accept papers on any topic related to Oracle Cloud, Oracle Database, Oracle Applications, Middleware, Engineered Systems, Big Data, Virtualization and Servers, and Storage. Please follow the guidelines below to submit your papers.

Please note that presentations should not have sales or marketing content. This is a user’s conference and should have technical value to the attendees. Presentations containing any kind of sales or marketing information will be disqualified.

Important Dates for Speakers:

• Deadline to submit abstracts: 7th June 2020

• Approximate date you will be notified: 14th June 2020

How to submit a paper:

It's a simple process: just fill in the following Google Form: https://forms.gle/PJSzBCFXa7dZ1xEA6

Selection Process:

Once the CFP closes, the presentations will be selected by a panel of judges who have attended previous AIOUG events.

Contact:

For questions regarding the Call for Proposals, send an e-mail to ogyatra@aioug.org

I did my submissions already!


Live GUOB: What Is the Future of the DBA?

What are you doing tomorrow night? How about joining a conversation about the future of your profession?

Tomorrow (May 26th), starting at 20:00, we'll have a chat about expectations for the future of the DBA, organized by GUOB (Grupo de Usuários Oracle do Brasil, the Brazilian Oracle Users Group).

If getting this awesome crowd together is already hard, getting to join them from the comfort of your couch is even rarer. Don't miss this one!

How to join? Just stay tuned to the GUORS channel on YouTube!


See you there!

Want to be a speaker at GUOB? This is your chance!

Hello everyone!

I suppose you are all familiar with the Oracle events tour across Latin America, organized annually by the LAOUC (Latin American Oracle Users Group Community) in partnership with the local groups; in Brazil it is organized by GUOB.

For well-known reasons, this year the tour will be 100% online! And FREE!

The Call for Papers is open for anyone who wants to submit a session.


With no specific days per country, here is the expected agenda for this year:

Weekday Date Track
Monday 17/08/2020 Database Track
Tuesday 18/08/2020 APEX Track
Wednesday 19/08/2020 Big Data, Analytics, and Machine Learning Track
Thursday 20/08/2020 Java Development Track
Friday 21/08/2020 Cloud-Native Track
Monday 24/08/2020 IoT, Chatbots, Mobile Development Track
Tuesday 25/08/2020 Oracle Cloud Infrastructure Track
Wednesday 26/08/2020 Java Development Track
Thursday 27/08/2020 Database Track

I know you have something to say!

Join the community and share your knowledge!

Want to be a Speaker? This is your chance! Speak to entire Latin America in August!

Hi!

Are you wondering how to become a Speaker? This is your golden ticket!

We are announcing the Call for Papers for LAOUC!

What is LAOUC? Well, this is an Oracle Users Group Tour around Latin America, where local communities engage their Oracle professionals in meetings to discuss experiences and technology. It is all organized by the LAOUC – Latin American Oracle Users Group Community – together with the local users groups (GUOB, the Brazilian Oracle Users Group, in the case of Brazil, for instance).

In general, we have people from other countries and continents doing a crossover, which increases the value of the discussions. And if this is your case, even better this year!

With all this COVID thing, this year’s tour will be 100% online! AND FREE!

Click HERE to submit your papers!


Here is the expected track agenda for the Tour:

Weekday Month Day Track
Monday August 17th Database Track
Tuesday August 18th APEX Track
Wednesday August 19th Big Data, Analytics, and Machine Learning Track
Thursday August 20th Java Development Track
Friday August 21st Cloud-Native Track
Monday August 24th IoT, Chatbots, Mobile Development Track
Tuesday August 25th Oracle Cloud Infrastructure Track
Wednesday August 26th Java Development Track
Thursday August 27th Database Track

Things I used to hear:

“I’m shy” / “I don’t have a topic” / “I never did it before”:

  • You’ll never know if you never try.
  • If you know anything that is interesting or others may appreciate knowing, you HAVE a topic.
  • If you are not experienced, you have time to become so. Present your session a few times to friends, colleagues at work, and so on.

“How to prepare?”

  • Have a topic and know what you want to share about it.
  • Prepare your material in a way you’d like to see it: Be dynamic, be quick, be objective.
  • Try, try, and try: measure the time, present it a couple of times, and learn what people like more or less about what you present.

Still have questions? Be my guest: reach out to me and I'll see how I can help you with this.

Take this chance!

19c: Could not execute DBMS_LOGMNR.START_LOGMNR: ORA-44609: CONTINOUS_MINE is desupported for use with DBMS_LOGMNR.START_LOGMNR.

Hi all,
This is to show you that we should never trust documentation 100%, and how releasing new versions yearly puts additional pressure on the documentation and causes errors…

So, I started supporting a new log-mining tool. There were no version restrictions in their documentation, so I was more than happy to create a PDB on my brand-new 19c CDB, proudly using 19c for this new tool!

What happened?

Could not execute DBMS_LOGMNR.START_LOGMNR: ORA-44609: CONTINOUS_MINE is desupported for use with DBMS_LOGMNR.START_LOGMNR.

Doing my validation against the database:

SQL> execute dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog + SYS.DBMS_LOGMNR.CONTINUOUS_MINE);
BEGIN dbms_logmnr.start_logmnr( options => dbms_logmnr.dict_from_online_catalog + SYS.DBMS_LOGMNR.CONTINUOUS_MINE); END;

*
ERROR at line 1:
ORA-44609: CONTINOUS_MINE is desupported for use with DBMS_LOGMNR.START_LOGMNR.

Why is this happening?

This happens because the CONTINUOUS_MINE feature has been deprecated since 12.2 and is desupported/unavailable from 19c onwards.

It seems this log-mining tool's developers don't check what changes in new database versions before confirming compatibility, right?

And they haven't done it for a while, as this was announced as deprecated in 12.2…

Which makes me wonder how confident they are about the tool running at their other clients… LOL

Anyway, are you facing the same? Here is some reference documentation to answer your boss with:

From: 19.1 Announcement – ORA-44609: CONTINOUS_MINE is Desupported For Use With DBMS_LOGMNR.START_LOGMNR (Doc ID 2456684.1):

  • CONTINUOUS_MINE was deprecated in Oracle Database 12c release 2 (12.2) and, starting with 19.1, is desupported. There is no replacement functionality.
  • The continuous_mine option for the dbms_logmnr.start_logmnr package is desupported in Oracle Database 19c (19.1), and is no longer available.

The real reason behind it: nothing seems to be populating V$LOGMNR_LOGS, so the ORA-1291 occurs.

Here is a quick test case for you, from the Oracle Database Utilities guide for 19c, “Mining without specifying the list of redo log files” (19c – 22. Using LogMiner to Analyze Redo Log Files).

I just put it all together to show you how it does [not] work:

  • Setting it all up:
ALTER DATABASE add SUPPLEMENTAL LOG DATA;
SELECT SUPPLEMENTAL_LOG_DATA_MIN FROM V$DATABASE;
EXECUTE DBMS_LOGMNR_D.BUILD (OPTIONS=>DBMS_LOGMNR_D.STORE_IN_REDO_LOGS);
@?/rdbms/admin/dbmslm.sql
alter system switch logfile;
alter system switch logfile;
alter system switch logfile;
create user BLABLA identified by BLABLA default tablespace users quota unlimited on users profile default;
grant connect, resource to BLABLA;
  • Doing some stuff to generate logs:
connect BLABLA/BLABLA
alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
set time on
CREATE TABLE TEST_NULLS (COLUMNA1 NUMBER(3,0));
ALTER TABLE TEST_NULLS ADD (COLUMNA2 NUMBER(3) DEFAULT 0 NOT NULL);
insert into TEST_NULLS(columna1) values (4);
commit;
select * from TEST_NULLS;
update TEST_NULLS set columna2=221 where columna1=4;
commit;
select * from TEST_NULLS;
  • Gathering info for mining:
connect / as sysdba;
set echo on
set serveroutput on
alter session set nls_date_format='dd-mon-yyyy hh24:mi:ss';
set linesize 254
set pagesize 3000
column name format a40;
SELECT FILENAME name FROM V$LOGMNR_LOGS;
SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG;
SELECT NAME, FIRST_TIME FROM V$ARCHIVED_LOG WHERE SEQUENCE# = (SELECT MAX(SEQUENCE#) FROM V$ARCHIVED_LOG WHERE DICTIONARY_BEGIN = 'YES');
  • Provide the FIRST_TIME from the previous query as the start time:
EXEC DBMS_LOGMNR.START_LOGMNR(STARTTIME =>'&1',ENDTIME => SYSDATE,OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTTIME =>'&1',ENDTIME => SYSDATE,OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
select FIRST_CHANGE#,NEXT_CHANGE# from V$archived_log;
SELECT CHECKPOINT_CHANGE#, CURRENT_SCN FROM V$DATABASE;
  • Use CHECKPOINT_CHANGE# as STARTSCN and CURRENT_SCN as ENDSCN:
EXEC DBMS_LOGMNR.START_LOGMNR(STARTSCN =>&1,ENDSCN => &2,OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTSCN =>&1,ENDSCN => &2,OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTTIME =>SYSDATE, ENDTIME => SYSDATE +5/24,OPTIONS => DBMS_LOGMNR.DICT_FROM_REDO_LOGS + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
EXEC DBMS_LOGMNR.START_LOGMNR(STARTTIME =>SYSDATE ,ENDTIME => SYSDATE +5/24 ,OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG + DBMS_LOGMNR.COMMITTED_DATA_ONLY + DBMS_LOGMNR.PRINT_PRETTY_SQL);
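Once one of the START_LOGMNR calls succeeds, the mined redo is read from V$LOGMNR_CONTENTS. Here is a minimal sketch of that final step; the SEG_OWNER/TABLE_NAME filter simply matches the test user and table created above:

```sql
-- Read the mined redo for the test table created earlier.
SELECT OPERATION, SQL_REDO, SQL_UNDO
  FROM V$LOGMNR_CONTENTS
 WHERE SEG_OWNER = 'BLABLA'
   AND TABLE_NAME = 'TEST_NULLS';

-- Always close the LogMiner session when you are done:
EXECUTE DBMS_LOGMNR.END_LOGMNR;
```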

Ref: 19c – 22. Using LogMiner to Analyze Redo Log Files

  • You can expect this:
ERROR at line 1:
ORA-01291: missing log file
ORA-06512: at "SYS.DBMS_LOGMNR", line 72
ORA-06512: at line 1

Hmmmm… So, the 19c Documentation is not working? Precisely.

As per (Doc ID 2613490.1), this will be fixed in the 20.1 documentation.

  • Section 22.13.2, “Examples of Mining without specifying the list of redo log files explicitly”, and its child example topics will be removed.
  • Section 22.4.2, “Automatic Redo Log Files options”, will be changed to “Specifying Redo Log Files for Data Mining”.
  • In Section 22.7.4, the corresponding sentence will be removed too.

In summary, any reference to automatic mining will be removed from the documentation, as this feature is not supported in 19.1 and higher.

Ok, but this doesn’t solve my problem. What should I do with the client tool?

So, CONTINUOUS_MINE was the only method of starting LogMiner without first adding logfiles using DBMS_LOGMNR.ADD_LOGFILE.

What can I do to work around it?

  1. Add each logfile manually with DBMS_LOGMNR.ADD_LOGFILE.
  2. Remove SYS.DBMS_LOGMNR.CONTINUOUS_MINE from the code.
    1. For this, you'll need to specify the logfiles explicitly, so I guess some additional coding will be needed on your side…
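For reference, here is a minimal sketch of the first workaround, assuming the online catalog as the dictionary source. The file names below are hypothetical; list your real ones from V$ARCHIVED_LOG or V$LOG first:

```sql
-- Hypothetical archived log names: query V$ARCHIVED_LOG for the real ones.
BEGIN
  -- The first call starts a new file list (NEW); subsequent ones append (ADDFILE).
  DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/arch_1_100.arc', OPTIONS => DBMS_LOGMNR.NEW);
  DBMS_LOGMNR.ADD_LOGFILE(LOGFILENAME => '/u01/arch/arch_1_101.arc', OPTIONS => DBMS_LOGMNR.ADDFILE);
  -- No CONTINUOUS_MINE in the options anymore:
  DBMS_LOGMNR.START_LOGMNR(OPTIONS => DBMS_LOGMNR.DICT_FROM_ONLINE_CATALOG
                                    + DBMS_LOGMNR.COMMITTED_DATA_ONLY);
END;
/
```

Query V$LOGMNR_CONTENTS between START_LOGMNR and END_LOGMNR as usual; only the file-list bookkeeping changes.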

I hope it helps!
Matheus.

GUORS: 1st Oracle Users Meeting of RS 2020 (Online)

Hello everyone,

We will have a GUORS event in the first half of the year, as usual!

But this time it will be online!

It will be on May 21st (Thursday) at 20:00.
Where? From the comfort of your home!

We will count on the illustrious presence of Oracle ACE Marcus Vinicius, speaking about OCI for Oracle DBA’s: Introduction and Key Concepts.

In this session, our guest will talk about OCI (Oracle Cloud Infrastructure), its key concepts, and the challenges an Oracle DBA faces when converting their career to cloud architect, from the perspective of someone who lives this experience every day.

Marcus Vinicius is a Senior Oracle DBA, an Oracle ACE, and holds several Oracle Database certifications, with over 15 years of experience in the field. He was a postgraduate professor for the “MBA em Administração de Banco de Dados Oracle” course at Veris IBTA, at the São Paulo and Campinas campuses. He was also an Oracle DBA and Operations Manager at Discover Technology and a Senior Oracle Consultant at Oracle (ACS). He is currently a Senior Oracle Consultant at Accenture Enkitec Group and a technical advisor to GUOB. He has kept a blog about Oracle for more than 10 years – https://www.viniciusdba.com.br/blog (make sure to check out Vini’s blog too!).

You won’t miss this one, right?

Register HERE!


Virtual Conference: Systematic Oracle SQL Optimization in 2020 (Review)

Hi all,

I'm here for a review and a tip: please don't miss it in case those guys decide to do it again!

It was a really great conference, with a lot of really valuable sessions with real content.

I’ll just share the agenda so you can imagine by yourself. There is more in https://tanelpoder.com/conference/.

Day 1
Time Speaker Topic
11:00-11:15 Tanel Welcome & Introduction
11:15-12:30 Cary A Richer Understanding of Software Performance
12:30-13:00 Slack Break/Q&A/Chat
13:00-14:30 Jonathan Trouble-shooting with Execution Plans
14:30-15:00 All Speakers Panel: What Has Changed in 10 Years and What Has Not


Day 2
Time Speaker Topic
11:00-11:15 Kerry How to Stay Relevant
11:15-12:30 Tanel Scripts and Tools for Optimizing SQL Execution and Indexing
12:30-13:00 Slack Break/Q&A/Chat
13:00-14:30 Mauro Chase the Optimizer Every Step of the Way
14:30-15:00 All Speakers Panel: What Will Change in the Next 10 Years


Thanks, guys, for organizing such a great end to the week. You are the best!

Exadata: Cell Server Crashing on ORA-00600: [LinuxBlockIO::reap]

Hi all,

So I started facing this in a client environment. Here is the alert message:

Target name=db01cel08.xxx.com
Message=ORA-00600: internal error code, arguments: [LinuxBlockIO::reap], [0x60000D502388], [], [], [], [], [], [], [], [], [], []
Event reported time=Dec 19, 2019 2:14:16 AM EDT

When checking on the cellserver I see this message:

[root@db01 ~]# ssh db01cel08
Last login: Thu Dec 19 04:45:13 2019 from db01.xxx.com
[root@db01cel08 ~]# cellcli
CellCLI: Release 12.1.2.3.5 - Production on Fri Dec 19 17:13:31 EDT 2019

Copyright (c) 2007, 2016, Oracle. All rights reserved.

CellCLI> LIST ALERTHISTORY detail

[...]

name: 10
alertDescription: "ORA-07445: exception encountered: core dump [__intel_new_memset()+62] [11] [0x000000000] [] [] []"
alertMessage: "ORA-07445: exception encountered: core dump [__intel_new_memset()+62] [11] [0x000000000] [] [] []"
alertSequenceID: 10
alertShortName: ADR
alertType: Stateless
beginTime: 2019-12-19T02:00:04-04:00
endTime:
examinedBy:
notificationState: 1
sequenceBeginTime: 2019-12-19T02:00:04-04:00
severity: critical
alertAction: "Errors in file /opt/oracle/cell/log/diag/asm/cell/SYS_112331_170406/trace/cellofltrc_19796_53.trc (incident=25). Diagnostic package is attached. It is also accessible at https://db01cel08.xxx.com/diagpack/download?name=db01cel08_2019_12_19T02_00_04_10.tar.bz2 It will be retained on the storage server for 7 days. If the diagnostic package has expired, then it can be re-created at https://db01cel08.xxx.com/diagpack"

name: 11
alertDescription: "ORA-00600: internal error code, arguments: [LinuxBlockIO::reap], [0x60000D502388], [], [], [], [], [], [], [], [], [], []"
alertMessage: "ORA-00600: internal error code, arguments: [LinuxBlockIO::reap], [0x60000D502388], [], [], [], [], [], [], [], [], [], []"
alertSequenceID: 11
alertShortName: ADR
alertType: Stateless
beginTime: 2019-12-19T02:00:04-04:00
endTime:
examinedBy:
notificationState: 1
sequenceBeginTime: 2019-12-19T02:00:04-04:00
severity: critical
alertAction: "Errors in file /opt/oracle/cell/log/diag/asm/cell/db01cel08/trace/svtrc_9174_12.trc (incident=25). Diagnostic package is attached. It is also accessible at https://db01cel08.xxxx.com/diagpack/download?name=jdb01cel08_2019_12_19T02_00_04_11.tar.bz2 It will be retained on the storage server for 7 days. If the diagnostic package has expired, then it can be re-created at https://db01cel08.xxx.com/diagpack"

name: 12_1
alertDescription: "A SQL PLAN quarantine has been added"
alertMessage: "A SQL PLAN quarantine has been added. As a result, Smart Scan is disabled for SQL statements with the quarantined SQL plan. Quarantine id : 21 Quarantine type : SQL PLAN Quarantine reason : Crash Quarantine Plan : SYSTEM Quarantine Mode : FULL_Quarantine DB Unique Name : XPTODB Incident id : 25 SQLID : 8j0az9sgxs5yh SQL Plan details : {SQL_PLAN_HASH_VALUE=281152830, PLAN_LINE_ID=9} In addition, the following disk region has been quarantined, and Smart Scan will be disabled for this region: Disk Region : {Grid Disk Name=Unknown, offset=186750337024, size=1M} "
alertSequenceID: 12
alertShortName: Software
alertType: Stateful
beginTime: 2019-12-19T02:00:12-04:00
examinedBy:
metricObjectName: QUARANTINE/21
notificationState: 1
sequenceBeginTime: 2019-12-19T02:00:12-04:00
severity: critical
alertAction: "A SQL statement caused the Cell Server (CELLSRV) service on the cell to crash. A SQL PLAN quarantine has been created to prevent the same SQL statement from causing the same cell to crash. When possible, disable offload for the SQL statement or apply the RDBMS patch that fixes the crash, then remove the quarantine with the following CellCLI command: CellCLI> drop quarantine 21 All quarantines are automatically removed when a cell is patched or upgraded. For information about how to disable offload for the SQL statement, refer to the section about 'SQL Processing Offload' in Oracle Exadata Storage Server User's Guide. Diagnostic package is attached. It is also accessible at https://db01cel08.xxx.com/diagpack/download?name=db01cel08_2019_12_19T02_00_12_12_1.tar.bz2 It will be retained on the storage server for 7 days. If the diagnostic package has expired, then it can be re-created at https://db01cel08.xxx.com/diagpack"

CellCLI>

After some research, we could match the situation to Bug 13245134 – Query may fail with errors ORA-27618, ORA-27603, ORA-27626 or ORA-00600[linuxblockio::reap_1] or ora-600 [cacheput::process_1]

It's also described in: Exadata/SuperCluster: 11.2 databases missing the fix for bug 13245134 may lead to a cell service crash with ORA-600 [linuxblockio::reap_1]/ORA-600 [cacheput::process_1] or ORA-27626: Exadata error: 242/Smart Scan issues on the RDBMS side.

In order to resolve the crashes quickly, I applied the patch online.
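The exact command isn't reproduced here; as a sketch only, an online OPatch apply generally follows this shape (the patch directory, instance name, and SYS credentials below are placeholders, not the real ones from this environment):

```shell
# Placeholders: adjust the patch path, SID, and SYS credentials to your environment.
cd /u01/patches/13245134
$ORACLE_HOME/OPatch/opatch apply online -connectString XPTODB1:SYS:syspassword
```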

After applying, all got solved:

[oracle@db01 ~]$ /oracle/xptodb/product/11.2.0.4/OPatch/opatch lsinventory -OH $ORACLE_HOME | grep 13245134
Patch (online) 13245134: applied on Thu Dec 19 23:34:50 EST 2019
13245134
[oracle@db01 ~]$

Hope it helps!