What can you do with APEX_COLLECTION?

The APEX_COLLECTION package can be a good choice when you need to work with data in a
temporary, session-scoped, table-like structure. Collections are stored in the APEX session
state and live only as long as the session does.

When should I use APEX_COLLECTION?

  1. Temporary data storage: If you need to temporarily store data that is
    generated by a report or a form, APEX_COLLECTION can be a good option. Since the
    data is held in APEX session state rather than in a table you create yourself, you
    don’t need to worry about the overhead of creating and managing a database table.
  2. Working with ad-hoc datasets: If you need to assemble and reshape a dataset
    on the fly, APEX_COLLECTION can be more convenient than other options, such as
    creating a temporary table in the database, since APEX takes care of both the
    storage and the cleanup for you.
  3. Non-persistent data: If you don’t need to persist the data between
    sessions or across different users, APEX_COLLECTION can be a good choice. Since the
    data is tied to the APEX session, it is automatically cleared when the session ends
    or when the user logs out.
  4. Flexible data structure: APEX_COLLECTION provides a flexible data
    structure that can be easily modified at runtime. You can add, update, and delete
    records as needed, and you can populate the collection dynamically based on your
    requirements (see the sketch after this list).
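
For instance, point 4 can be as simple as building a collection straight from a query.
A minimal sketch (EMP is an assumed demo table; like every APEX_COLLECTION call, this must
run inside an APEX session, for example in a page process):

BEGIN
  -- Populate a collection dynamically: C001 receives ENAME, C002 receives JOB
  APEX_COLLECTION.CREATE_COLLECTION_FROM_QUERY(
    p_collection_name => 'EMP_COLLECTION',
    p_query           => 'SELECT ename, job FROM emp');
END;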

When should I avoid it?

While APEX_COLLECTION can be a useful tool for working with temporary data in Oracle APEX,
there are some scenarios where it may not be the best choice. Here are a few situations
where you may want to consider alternative approaches:

  1. Very large or complex datasets: While APEX_COLLECTION is convenient for
    moderately sized datasets, it may not be the best choice for very large or complex
    ones. In such cases, it may be better to use a physical table in the database or a
    dedicated data store, such as a NoSQL database or a document store.
  2. High transaction volume: If you have a high volume of transactions or
    concurrent users, APEX_COLLECTION may not be able to keep up with the demand. In such
    cases, it may be better to use a dedicated database table or another data storage
    mechanism that is optimized for high performance.
  3. Data persistence: If you need to persist the data across sessions or
    across different users, APEX_COLLECTION is not the right choice. The data is scoped
    to the APEX session: it is not durable and will be lost when the session ends or
    when the user logs out. In such cases, it is better to use a dedicated database
    table or another data storage mechanism that is designed for durability and
    persistence.
  4. Complex data structures: While APEX_COLLECTION is flexible and dynamic,
    it may not be the best choice for very complex data structures or data types. If you
    need to work with data that has a complex or hierarchical structure, it may be
    better to use a dedicated data store or a specialized data modeling tool designed to
    handle such data.

That being said, there are also some hard limitations to using APEX_COLLECTION: you cannot
create indexes or enforce constraints on a collection, and each member has a fixed set of
attributes (up to 50 VARCHAR2, five NUMBER, and five DATE columns, plus one BLOB, CLOB, and
XMLTYPE each). In some cases it may be more appropriate to use a physical table or another
data storage mechanism, depending on your specific requirements and use case.

Alright, now that you know the pros and cons, here is a basic tutorial on how to work
with it.

Using APEX_COLLECTION

Create an APEX collection:

You can create an APEX collection using the APEX_COLLECTION.CREATE_COLLECTION procedure.
This procedure creates a new, empty collection with the specified name; the columns are
predefined, so there is no collection type to declare.

BEGIN
  APEX_COLLECTION.CREATE_COLLECTION(
    p_collection_name => 'MY_COLLECTION');
END;
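
Note that CREATE_COLLECTION raises an error when the collection already exists in the
session, so it is common to guard the call (or to use
APEX_COLLECTION.CREATE_OR_TRUNCATE_COLLECTION instead):

BEGIN
  -- Only create the collection if this session does not already have it
  IF NOT APEX_COLLECTION.COLLECTION_EXISTS(p_collection_name => 'MY_COLLECTION') THEN
    APEX_COLLECTION.CREATE_COLLECTION(p_collection_name => 'MY_COLLECTION');
  END IF;
END;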

Add records to the collection:

You can add records to an APEX collection using the APEX_COLLECTION.ADD_MEMBER procedure.
This procedure adds a new member to the specified collection, with character values passed
through the p_c001 to p_c050 parameters.

BEGIN
  APEX_COLLECTION.ADD_MEMBER(
    p_collection_name => 'MY_COLLECTION',
    p_c001            => 'John',
    p_c002            => 'Doe',
    p_c003            => 'johndoe@example.com');
END;
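
ADD_MEMBER also exists as a function that returns the sequence ID assigned to the new
member, which is handy if you need to update or delete that member later:

DECLARE
  l_seq NUMBER;
BEGIN
  -- Function form of ADD_MEMBER: returns the new member's sequence ID
  l_seq := APEX_COLLECTION.ADD_MEMBER(
    p_collection_name => 'MY_COLLECTION',
    p_c001            => 'Mary',
    p_c002            => 'Major',
    p_c003            => 'marymajor@example.com');
  DBMS_OUTPUT.PUT_LINE('New member seq_id: ' || l_seq);
END;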

Access records in the collection:

You can access the records in an APEX collection by querying the APEX_COLLECTIONS view,
which exposes one row per member, with the attributes as columns (C001 through C050, N001
through N005, and so on) alongside COLLECTION_NAME and SEQ_ID.

BEGIN
  FOR rec IN (SELECT c001, c002, c003
                FROM apex_collections
               WHERE collection_name = 'MY_COLLECTION'
               ORDER BY seq_id)
  LOOP
    DBMS_OUTPUT.PUT_LINE(rec.c001 || ' ' || rec.c002 || ': ' || rec.c003);
  END LOOP;
END;

Merge records in the collection:

You can merge values into an APEX collection using the APEX_COLLECTION.MERGE_MEMBERS
procedure (the package has no MERGE_COLLECTION procedure). MERGE_MEMBERS takes arrays of
values and inserts or updates collection members accordingly; members not represented in
the arrays may be deleted, so check the API documentation for the exact merge semantics.

DECLARE
  l_first  apex_application_global.vc_arr2;
  l_last   apex_application_global.vc_arr2;
  l_emails apex_application_global.vc_arr2;
BEGIN
  -- A rough sketch: merge one member's values into the collection
  l_first(1)  := 'John';
  l_last(1)   := 'Doe';
  l_emails(1) := 'newvalue@example.com';
  APEX_COLLECTION.MERGE_MEMBERS(
    p_collection_name => 'MY_COLLECTION',
    p_c001            => l_first,
    p_c002            => l_last,
    p_c003            => l_emails);
END;

Change records in the collection:

You can change records in an APEX collection using the APEX_COLLECTION.UPDATE_MEMBER
procedure. This procedure replaces the member identified by p_seq with the specified
values; to change a single attribute and leave the rest untouched, use
UPDATE_MEMBER_ATTRIBUTE instead.

BEGIN
  APEX_COLLECTION.UPDATE_MEMBER(
    p_collection_name => 'MY_COLLECTION',
    p_seq             => 1,
    p_c001            => 'Jane',
    p_c002            => 'Doe',
    p_c003            => 'janedoe@example.com');
END;

Delete records from the collection:

You can delete records from an APEX collection using the APEX_COLLECTION.DELETE_MEMBER procedure. This
procedure
deletes the specified record from the collection.

BEGIN
  APEX_COLLECTION.DELETE_MEMBER(
    p_collection_name => 'MY_COLLECTION',
    p_seq             => 1);
END;
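
Finally, when you are done with a collection you can drop it explicitly instead of waiting
for the session to end:

BEGIN
  -- Remove the collection and all of its members from the session
  APEX_COLLECTION.DELETE_COLLECTION(p_collection_name => 'MY_COLLECTION');
END;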

Conclusion

That’s it! Using the APEX_COLLECTION package, you can create, access, merge, change, and delete records in a
collection in Oracle APEX.

Getting started with APEX_STRING.SPLIT

Dealing with string manipulation in PL/SQL isn’t hard, but it can be more work than it’s worth. Converting lists of values coming from HTML inside a single text item, splitting lists that use multiple types of separators, or even just taking the first N elements can require you to write an algorithm for something that is fairly trivial.

For that reason I would like to show you the APEX_STRING.SPLIT function, which should take care of most of your issues. Let’s take a look at the following example:

DECLARE
  l_text      VARCHAR2(100) := 'apple,banana,melon,orange';
  l_delimiter VARCHAR2(1)   := ',';
  l_array     apex_t_varchar2;
BEGIN
  l_array := apex_string.split(l_text, l_delimiter);
  FOR i IN 1 .. l_array.count LOOP
    DBMS_OUTPUT.put_line(l_array(i));
  END LOOP;
END;

This is a basic example that passes two parameters: the VARCHAR2 we want to split and the separator. Easy.

Now let’s take a look at an example where several different delimiters appear in the same string.

DECLARE
  l_text      VARCHAR2(100) := 'apple;banana,melon:grapes,lemon';
  l_delimiter VARCHAR2(10)  := '[;,:]';
  l_array     apex_t_varchar2;
BEGIN
  l_array := apex_string.split(l_text, l_delimiter);
  FOR i IN 1 .. l_array.count LOOP
    DBMS_OUTPUT.put_line(l_array(i));
  END LOOP;
END;

As you may have noticed, the separator can be provided as a regular expression, which gives us a lot of flexibility when working this way.

For this last example, we’ll be applying a limit.

DECLARE
  l_text      VARCHAR2(100) := 'apple,banana,cherry,date';
  l_delimiter VARCHAR2(1)   := ',';
  l_array     apex_t_varchar2;
BEGIN
  l_array := apex_string.split(l_text, l_delimiter, 2);
  FOR i IN 1 .. l_array.count LOOP
    DBMS_OUTPUT.put_line(l_array(i));
  END LOOP;
END;

This one is quite easy as well: it simply stops splitting after N items, and the Nth item in the array receives all the remaining values. In the example above, the output is “apple” followed by “banana,cherry,date”.

 

Do you have CLOBs or Numbers?

There are two other functions I think are worth mentioning. They have nearly the same signature and can help you with numbers and with values larger than a VARCHAR2.

  • SPLIT_CLOBS: takes a CLOB as the first parameter, with the same separator and limit rules, and returns an apex_t_clob.
  • SPLIT_NUMBERS: doesn’t have the limit parameter and returns an apex_t_number (see the sketch below).
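
A minimal sketch of SPLIT_NUMBERS:

DECLARE
  l_numbers apex_t_number;
BEGIN
  -- Splits the string and converts each piece to a NUMBER
  l_numbers := apex_string.split_numbers('10,20,30', ',');
  FOR i IN 1 .. l_numbers.count LOOP
    DBMS_OUTPUT.put_line(l_numbers(i));
  END LOOP;
END;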

 

Conclusion

That’s about it. You can do a lot more with this package, and if you are working with APEX I highly recommend taking a look at its documentation: https://docs.oracle.com/en/database/oracle/apex/22.2/aeapi/APEX_STRING.html#GUID-CAFD987C-7382-4F0F-8CB9-1D3BD05F054A

Failed to start Keycloak server – IP_MULTICAST_IF: address not bound

Failed to start Keycloak – Version 11.0.2 on CentOS 7

bad argument for IP_MULTICAST_IF: address not bound to any interface

When searching for this issue, the usual suggestion is to add the parameter “-Djava.net.preferIPv4Stack=true”. For me, that didn’t solve it.

The only thing that solved my problem was to edit the /etc/sysctl.conf file, disabling IPv6.

Add the following at the bottom of the file:

net.ipv6.conf.all.disable_ipv6 = 1

net.ipv6.conf.default.disable_ipv6 = 1

net.ipv6.conf.lo.disable_ipv6 = 1

Save and close the file.

Reboot the machine.
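
Alternatively, on most systems the new settings can be loaded without a reboot:

sysctl -p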

Error when adding new managed server.

Error adding new managed server (on a new virtual server):

<Aug 27, 2020 8:38:46 AM BRT> <Info> <Management> <BEA-141223> <The server name WLS_MS7 specified with -Dweblogic.Name does not exist. The configuration includes the following servers {AdminServer,WLS_MS1,WLS_MS2,WS_MS1,WS_MS2,WS_MS3,WS_MS3}.>
<Aug 27, 2020 8:38:46 AM BRT> <Critical> <WebLogicServer> <BEA-000362> <Server failed. Reason:

There are 1 nested errors:

weblogic.management.ManagementException: [Management:141223]The server name WLS_MS7 specified with -Dweblogic.Name does not exist. The configuration includes the following servers {AdminServer,WLS_MS1,WLS_MS2,WS_MS1,WS_MS2,WS_MS3,WS_MS3}.
at weblogic.management.provider.internal.RuntimeAccessImpl.<init>(RuntimeAccessImpl.java:149)
at weblogic.management.provider.internal.RuntimeAccessService.start(RuntimeAccessService.java:41)
at weblogic.t3.srvr.ServerServicesManager.startService(ServerServicesManager.java:461)
at weblogic.t3.srvr.ServerServicesManager.startInStandbyState(ServerServicesManager.java:166)
at weblogic.t3.srvr.T3Srvr.initializeStandby(T3Srvr.java:882)
at weblogic.t3.srvr.T3Srvr.startup(T3Srvr.java:572)
at weblogic.t3.srvr.T3Srvr.run(T3Srvr.java:469)
at weblogic.Server.main(Server.java:71)

<Aug 27, 2020 8:38:46 AM BRT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FAILED>
<Aug 27, 2020 8:38:46 AM BRT> <Error> <WebLogicServer> <BEA-000383> <A critical service failed. The server will shut itself down>
<Aug 27, 2020 8:38:46 AM BRT> <Notice> <WebLogicServer> <BEA-000365> <Server state changed to FORCE_SHUTTING_DOWN>

The support notes and other search results point to configuration problems, but in this case the new managed server had a firewall block preventing it from connecting to the AdminServer.

After opening the firewall, the new managed server started normally.
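
In cases like this, a quick sanity check is to test the AdminServer port from the new machine before starting the managed server. A sketch, assuming a hypothetical host name and the default listen port 7001:

# adminserver-host and 7001 are assumptions; use your AdminServer host and port
curl -v http://adminserver-host:7001/console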

Extracting DML Stats With GGate

Hi all,

After having GoldenGate set up in your environment, one of the most frequent questions you’ll face is how much work is being done by your GGate processes. You may even want to set up a dummy extract just to measure the workload before starting your GGate implementation project.

GoldenGate provides the STATS command to report the work being done. For better outputs, it’s recommended to reset the counters/stats before any test or before you start gathering.

Stats can be retrieved as the total (since the last reset), daily, or even at table level, as per below:

1. Reset counters/stats:

stats extract ext_test, reset

2. Retrieve stats for a Table since Reset:

stats extract ext_test, table owner1.test, latest

3. Retrieve total stats since the last start of the extract:

stats extract ext_test, totalsonly owner1.test

4. Retrieve today’s stats for a table:

stats extract ext_test, daily, table owner1.test

5. Retrieve today’s stats for all schema tables configured on the extract:

stats extract ext_test, daily, table owner1.*

 

Example of Output:

GGSCI (myserver.local) 1> stats ext_test total daily

Sending STATS request to EXTRACT EXT_TEST ...

Start of Statistics at 2019-12-16 15:21:56.

Output by User Exit:

Extracting from OWNER1.TABLE1 to OWNER1.TABLE1:

*** Total statistics since 2019-12-16 07:18:14 ***
        Total inserts                                   2744.00
        Total updates                                      0.00
        Total deletes                                    300.00
        Total discards                                     0.00
        Total operations                                3044.00

*** Daily statistics since 2019-12-16 07:18:14 ***
        Total inserts                                   2744.00
        Total updates                                      0.00
        Total deletes                                    300.00
        Total discards                                     0.00
        Total operations                                3044.00

More about it?
Here: https://docs.oracle.com/goldengate/1212/gg-winux/GWURF/ggsci_commands030.htm#GWURF216

Hope it Helps!

ERROR: “Failed to push ldap config data to libOvd for service instance ‘idstore.ldap’ in JPS context ‘default’” in OSB 12c

The error below occurs on the OSB servers and the AdminServer of the domain.

ERROR:

#### <> <> <> <1539087463845> <[severity-value: 4] [partition-id: 0] [partition-name: DOMAIN] > ailed. Reason:

There are 1 nested errors:

oracle.security.jps.JpsException: JPS-02592: Failed to push ldap config data to libOvd for service instance “idstore.ldap” in JPS context “default”, cause: org.xml.sax.SAXException: Error Parsing at line #2: 1.
org.xml.sax.SAXParseException; lineNumber: 2; columnNumber: 1; : XML-20108: (Fatal Error) Start of root element expected.
at oracle.security.jps.internal.config.OpssCommonStartup.start(OpssCommonStartup.java:209)
at oracle.security.jps.wls.JpsWlsStartup.start(JpsWlsStartup.java:96)
at oracle.security.jps.JpsStartup.start(JpsStartup.java:215)
at oracle.security.jps.wls.JpsDefaultService.start(JpsDefaultService.java:74)
at weblogic.server.AbstractServerService.postConstruct(AbstractServerService.java:76)
at sun.reflect.GeneratedMethodAccessor2.invoke(Unknown Source)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.glassfish.hk2.utilities.reflection.ReflectionHelper.invoke(ReflectionHelper.java:1262)
at org.jvnet.hk2.internal.ClazzCreator.postConstructMe(ClazzCreator.java:332)
at org.jvnet.hk2.internal.ClazzCreator.create(ClazzCreator.java:374)
at org.jvnet.hk2.internal.SystemDescriptor.create(SystemDescriptor.java:471)
at org.glassfish.hk2.runlevel.internal.AsyncRunLevelContext.findOrCreate(AsyncRunLevelContext.java:228)
at org.glassfish.hk2.runlevel.RunLevelContext.findOrCreate(RunLevelContext.java:85)
at org.jvnet.hk2.internal.Utilities.createService(Utilities.java:2072)
at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:114)
at org.jvnet.hk2.internal.ServiceHandleImpl.getService(ServiceHandleImpl.java:88)
at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.oneJob(CurrentTaskFuture.java:1213)
at org.glassfish.hk2.runlevel.internal.CurrentTaskFuture$QueueRunner.run(CurrentTaskFuture.java:1144)
at weblogic.work.SelfTuningWorkManagerImpl$WorkAdapterImpl.run(SelfTuningWorkManagerImpl.java:643)
at weblogic.invocation.ComponentInvocationContextManager._runAs(ComponentInvocationContextManager.java:348)
at weblogic.invocation.ComponentInvocationContextManager.runAs(ComponentInvocationContextManager.java:333)
at weblogic.work.LivePartitionUtility.doRunWorkUnderContext(LivePartitionUtility.java:54)
at weblogic.work.PartitionUtility.runWorkUnderContext(PartitionUtility.java:41)
at weblogic.work.SelfTuningWorkManagerImpl.runWorkUnderContext(SelfTuningWorkManagerImpl.java:617)
at weblogic.work.ExecuteThread.execute(ExecuteThread.java:397)
at weblogic.work.ExecuteThread.run(ExecuteThread.java:346)
Caused by: oracle.security.jps.service.idstore.IdentityStoreException: JPS-02592: Failed to push ldap config data to libOvd for service instance “idstore.ldap” in JPS context “default”, cause: org.xml.sax.SAXException: Error Parsing at line #2: 1.
org.xml.sax.SAXParseException; lineNumber: 2; columnNumber: 1; : XML-20108: (Fatal Error) Start of root element expected.
at oracle.security.jps.internal.idstore.util.LibOvdUtil$LibOvdDataPusherImpl.pushLdapDataToLibOvdIntenal(LibOvdUtil.java:641)
at oracle.security.jps.internal.idstore.util.LibOvdUtil$LibOvdDataPusherImpl.pushLdapDataToLibOvd(LibOvdUtil.java:593)
at oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider$NoLibOvd.getInstance(LdapIdentityStoreProvider.java:274)
at oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider.getInstance(LdapIdentityStoreProvider.java:118)
at oracle.security.jps.internal.idstore.ldap.LdapIdentityStoreProvider.getInstance(LdapIdentityStoreProvider.java:125)
at oracle.security.opss.internal.runtime.ServiceContextManagerImpl.createContextInternal(ServiceContextManagerImpl.java:636)
at oracle.security.opss.internal.runtime.ServiceContextManagerImpl.createDefaultContext(ServiceContextManagerImpl.java:319)
at oracle.security.opss.internal.runtime.ServiceContextManagerImpl.initialize(ServiceContextManagerImpl.java:241)
at oracle.security.jps.internal.config.OpssCommonStartup$1.run(OpssCommonStartup.java:150)
at java.security.AccessController.doPrivileged(Native Method)
at oracle.security.jps.internal.config.OpssCommonStartup.start(OpssCommonStartup.java:134)
… 26 more
Caused by: org.xml.sax.SAXException: Error Parsing at line #2: 1.
org.xml.sax.SAXParseException; lineNumber: 2; columnNumber: 1; : XML-20108: (Fatal Error) Start of root element expected.
at oracle.ods.virtualization.engine.config.parsers.ConfigErrorHandler.fatalError(ConfigErrorHandler.java:103)
at oracle.xml.parser.v2.XMLError.flushErrorHandler(XMLError.java:435)
at oracle.xml.parser.v2.XMLError.flushErrors1(XMLError.java:290)
at oracle.xml.parser.v2.NonValidatingParser.parseRootElement(NonValidatingParser.java:476)
at oracle.xml.parser.v2.NonValidatingParser.parseDocument(NonValidatingParser.java:417)
at oracle.xml.parser.v2.XMLParser.parse(XMLParser.java:246)
at oracle.xml.jaxp.JXDocumentBuilder.parse(JXDocumentBuilder.java:175)
at oracle.xml.jaxp.JXDocumentBuilder.parse(JXDocumentBuilder.java:131)
at oracle.ods.virtualization.engine.config.parsers.ConfigParser.parseConfiguration(ConfigParser.java:335)
at oracle.ods.virtualization.engine.config.parsers.ConfigParser.parseConfiguration(ConfigParser.java:307)
at oracle.ods.virtualization.engine.config.parsers.ConfigParser.parseConfiguration(ConfigParser.java:292)
at oracle.ods.virtualization.config.ConfigureAdapters.doConfigure(ConfigureAdapters.java:950)
at oracle.ods.virtualization.config.ConfigureAdapters.configure(ConfigureAdapters.java:911)
at oracle.security.jps.internal.idstore.util.LibOvdUtil$LibOvdDataPusherImpl.pushLdapDataToLibOvdIntenal(LibOvdUtil.java:632)
… 36 more
Caused by: org.xml.sax.SAXParseException; lineNumber: 2; columnNumber: 1; : XML-20108: (Fatal Error) Start of root element expected.
at oracle.xml.parser.v2.XMLError.flushErrorHandler(XMLError.java:425)
… 48 more

The adapters.os_xml file was probably corrupted; in this case, it was empty.

SOLUTION:

1) Take a backup of $DOMAIN_HOME/config/fmwconfig/ovd/default/adapters.os_xml
2) Delete the corrupted adapters.os_xml
3) Copy adapters.os_xml from $MW_HOME/oracle_common/modules/oracle.ovd/templates/ to $DOMAIN_HOME/config/fmwconfig/ovd/default/
4) Restart the server.
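
In shell terms, the fix looks roughly like this (same paths as the steps above; adjust DOMAIN_HOME and MW_HOME to your installation):

# 1-2) Back up, then remove, the corrupted file
cd $DOMAIN_HOME/config/fmwconfig/ovd/default
mv adapters.os_xml adapters.os_xml.bkp
# 3) Restore the template version
cp $MW_HOME/oracle_common/modules/oracle.ovd/templates/adapters.os_xml .
# 4) Restart the server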

That’s all for today.

Oracle Streams: Adding a Table to Streams Replication

Feel old yet?
Don’t worry, we all know there are still companies using Streams. So here is a quick manual for a regular activity: adding a new table to replication using SYS.DBMS_STREAMS_ADM.

For this, assume the following:

STREAMS SETUP OWNER => streams_adm
CAPTURE PROCESS => str_capture
APPLY_PROCESS => str_apply
SOURCE DATABASE TABLE OWNER => TBLOWNER
TARGET DATABASE TABLE OWNER => TBLOWNER_DEST

The high level steps to add table to streams are as follows:

1. Stop streams CAPTURE
2. Create the table on the target database
3. Configure the CAPTURE process
4. Add a rule to the APPLY process
5. Instantiate the new object
6. Create constraints and indexes
7. Restart streams CAPTURE and APPLY

Let’s detail them:

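As a hedged sketch of step 3 (configure the CAPTURE process), the rule is typically added with DBMS_STREAMS_ADM.ADD_TABLE_RULES. The table name NEW_TABLE and the queue name streams_adm.streams_queue are assumptions for illustration:

BEGIN
  -- Add DML capture rules for the new table on the source database.
  -- 'streams_adm.streams_queue' is an assumed queue name; adjust to your setup.
  DBMS_STREAMS_ADM.ADD_TABLE_RULES(
    table_name   => 'TBLOWNER.NEW_TABLE',
    streams_type => 'capture',
    streams_name => 'str_capture',
    queue_name   => 'streams_adm.streams_queue',
    include_dml  => TRUE,
    include_ddl  => FALSE);
END;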

IPv6 Formatting for JDBC and SQLPlus

Hey all!
Seems new, right? But it has been available since 11gR2.
No need to explain what IPv6 is, right?

In summary, the only thing you need to do is enclose the IPv6 address in square brackets, like this:

For Easy Connect on IPV4:

SQL> connect user/pass@172.23.10.40:1521/GREPORADB
Connected.

 

For Easy Connect on IPV6:

SQL> connect user/pass@[1:95e05a:g0d:da7a:2007]:1521/GREPORADB
Connected.

For JDBC (thin) IPV4:

url="jdbc:oracle:thin:@(DESCRIPTION=
(LOAD_BALANCE=on) (ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=172.23.10.40) (PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=172.23.10.41)(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=GREPORADB)))"

For JDBC (OCI) IPV4:

url="jdbc:oracle:oci:@(DESCRIPTION=(ADDRESS=
(PROTOCOL=TCP)(HOST=172.23.10.40)(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=GREPORADB)))"

For JDBC (thin) IPV6:

url="jdbc:oracle:thin:@(DESCRIPTION=
(LOAD_BALANCE=on) (ADDRESS_LIST=
(ADDRESS=(PROTOCOL=TCP)(HOST=[1:95e05a:g0d:da7a:2007]) (PORT=1521))
(ADDRESS=(PROTOCOL=TCP)(HOST=[1:95e05a:g0d:da7a:2006])(PORT=1521)))
(CONNECT_DATA=(SERVICE_NAME=GREPORADB)))"

For JDBC (OCI) IPV6:

url="jdbc:oracle:oci:@(DESCRIPTION=(ADDRESS=
(PROTOCOL=TCP)(HOST=[1:95e05a:g0d:da7a:2007])(PORT=1521))
(CONNECT_DATA=(SERVICE_NAME=GREPORADB)))"

As you can imagine, the same applies to your TNSNAMES entries.

Also, the same format can be used even for your LISTENER:

LISTENER =
 (DESCRIPTION_LIST =
  (DESCRIPTION =
   (ADDRESS =
    (PROTOCOL = TCP)
    (HOST = [1:95e05a:g0d:da7a:2007])(PORT =1521))
  )
 )

Cheers!

Create datasource 11g in silent mode

Today I will provide scripts to help create a datasource (DS) in WebLogic 11g in silent mode.

Necessary files:
createDS.py
DS.properties
DSnoXA.properties

After downloading the files, edit DS.properties (for an XA datasource) or DSnoXA.properties (for a non-XA datasource) with the environment data where the datasource will be created.

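Once the properties are filled in, the script is typically executed through WLST. A hedged sketch (the exact invocation depends on the script; WL_HOME is your WebLogic installation path):

$WL_HOME/common/bin/wlst.sh createDS.py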

GGate: STOP request pending. There are open, long-running transactions.

Hey all!
So, these days I was about to stop the GGate extract to add a new table to replication and saw this:

GGSCI (greporasrvr) 2> stop extract GREPORA_EXT

Sending STOP request to EXTRACT GREPORA_EXT ...

STOP request pending. There are open, long-running transactions.
Before you stop Extract, make the archives containing data for those transactions available for when Extract restarts.
To force Extract to stop, use the SEND EXTRACT GREPORA_EXT, FORCESTOP command.
Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 616, RBA 135673360.

2018-04-12 15:55:41 WARNING OGG-01742 Command sent to EXTRACT GREPORA_EXT returned with an invalid response.

Interesting, right?
Let’s discover what is causing the situation:

GGSCI (greporasrvr) 2> send extract GREPORA_EXT, showtrans duration 20 MIN

Sending showtrans request to EXTRACT GREPORA_EXT ...

Oldest redo log file necessary to restart Extract is:

Redo Log Sequence Number 616, RBA 135673360

------------------------------------------------------------
XID: 92.1.1460254
Items: 1
Extract: GREPORA_EXT
Redo Thread: 1
Start Time: 2018-04-12:08:27:03
SCN: 22.120909138 (94610189650)
Redo Seq: 616
Redo RBA: 135673360
Status: Running

------------------------------------------------------------
XID: 69.7.1771671
Items: 1
Extract: GREPORA_EXT
Redo Thread: 1
Start Time: 2018-04-12:13:27:29
SCN: 22.121387178 (94610667690)
Redo Seq: 629
Redo RBA: 1457168
Status: Running

15:59:12 SYS@GREPDB>select * from gv$transaction where xidusn=92;
1 000000018FEA2170 92 1 1460254 121 548870 24255 48 ACTIVE 04/12/18 08:27:02 120909138 22 168 2
131578 24097 43 0000000198A49D68 7683 NO NO NO NO

0 0 0 0 0 0 0 0 81875 4581610 23108750 1415217 -1.675E+09 197 12-APR-2018 08:27:02 0 0
9.4610E+10 0 5C0001001E481600 0000000000000000 0000000000000000

1 row selected.

16:00:08 SYS@GREPDB>select sid,serial#,event,machine,sql_id,seconds_in_wait,prev_sql_id,module,program,action from gv$session where taddr='000000018FEA2170';
885 55577 db file sequential read feedprocs12 39za7ttrfjbr1 55 6x9pg77q982g3
JDBC Thin Client JDBC Thin Client

1 row selected.

16:00:37 SYS@GREPDB>set verify off
set pagesize 999
col username format a13
col prog format a22
col sql_text format a140
col sid format 999
col child_number format 99999 heading CHILD
col ocategory format a10
col avg_etime format 9,999,999.99
col etime format 9,999,999.99
col execs for 99999,999
select sql_id, child_number, plan_hash_value plan_hash, executions execs, elapsed_time/1000000 etime,
(elapsed_time/1000000)/decode(nvl(executions,0),0,1,executions) avg_etime, u.username,
sql_text
from v$sql s, dba_users u
where
sql_id like nvl('&sql_id',sql_id)
and u.user_id = s.parsing_user_id
/
Enter value for sql_id: 39za7ttrfjbr1

SQL_ID CHILD PLAN_HASH EXECS ETIME AVG_ETIME USERNAME
------------- ------ ---------- ---------- ------------- ------------- -------------
SQL_TEXT
--------------------------------------------------------------------------------------------------------------------------------------------
39za7ttrfjbr1 0 72089250 0 25,434.97 25,434.97 GREPORA
--SQL Name:summary/envision/insertMyTable INSERT INTO mytable ( .......) 

1 row selected.

Nice!
Now we know it’s being caused by this long insert in mytable, but how can I proceed?
There are basically two options:
1. Kill the session on the DB: a long rollback is predicted, so be aware of the impact of the rollback activity, plus application effects, if any.
2. Force stop the Extract with SEND EXTRACT GREPORA_EXT, FORCESTOP: in this case there is no effect on the database and no rollback. HOWEVER, when the extract starts again it will need to start from this cancelled sequence. First question: is that sequence still available? As you could see above, sequence 616 onwards would be needed, so let’s check our availability:

16:01:41 SYS@GREPDB>archive log list
Database log mode Archive Mode
Automatic archival Enabled
Archive destination +FRA_GREPDB
Oldest online log sequence 619
Next log sequence to archive 634
Current log sequence 634

Ouch!
So, in case that route is needed, be aware you’ll need your archivelogs back to that sequence and, if they were already removed, a restore may be needed.

There is also a third option: 3. Wait until the transaction completes. 🙂

If 1 or 2 needs to be chosen, choose wisely. 🙂

Hope this helps.
Cheers!