12c New Feature: Unused Privileges

Did you know 12c now has a mechanism to examine which privileges are actually being used by a user, a module, or the database as a whole, as opposed to merely the privileges granted?
This is a great security improvement: it lets you give users precisely the privileges they need, without granting too much.

Here’s how it works:

1. First you have to set up a capture process using the new package DBMS_PRIVILEGE_CAPTURE.

# Example:

BEGIN 
DBMS_PRIVILEGE_CAPTURE.CREATE_CAPTURE( 
        name         => 'all_priv_analysis_pol', 
        description  => 'database-wide policy to analyze all privileges', 
        type         => DBMS_PRIVILEGE_CAPTURE.G_DATABASE); 
END;
/

This creates the capture policy database-wide.
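
Note that, as far as I recall, creating the policy does not start collecting data by itself; you also have to enable it. A minimal sketch, using the same policy name:

BEGIN
DBMS_PRIVILEGE_CAPTURE.ENABLE_CAPTURE(
        name         => 'all_priv_analysis_pol');
END;
/

Let it run for a bit while the applications do their normal work.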

2. When you have captured enough activity, disable the policy (as far as I remember, GENERATE_RESULT expects the capture to be disabled first) and then either generate a report:
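
A sketch of the disable call, assuming the same policy name:

BEGIN
DBMS_PRIVILEGE_CAPTURE.DISABLE_CAPTURE(
        name         => 'all_priv_analysis_pol');
END;
/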

# Example

BEGIN 
DBMS_PRIVILEGE_CAPTURE.GENERATE_RESULT( 
        name         => 'all_priv_analysis_pol'); 
END;
/

Or examine the new views:

DBA_USED_PRIVS 
DBA_UNUSED_PRIVS 
DBA_USED_OBJPRIVS 
DBA_UNUSED_OBJPRIVS
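
For instance, a quick way to see what was granted but never exercised under this policy is something like the query below (assuming, from memory, that the CAPTURE column in these views holds the policy name):

select * from dba_unused_privs where capture = 'all_priv_analysis_pol';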

3. To turn it off completely, drop the policy:

BEGIN 
DBMS_PRIVILEGE_CAPTURE.DROP_CAPTURE( 
        name         => 'all_priv_analysis_pol'); 
END;
/

So now you can make sure you follow the best practice of least privilege for your users.

Cheers!

RabbitMQ on a Micro-service Architecture

Microservices aren’t the buzzword they used to be, at least not as much as before… but there is still a lot to talk about, and that’s what I’ll do today.

When we talk about microservices, how they work, and how they scale, we think about how the services will exchange data with each other. We have a few options for that: we can send the data to a reverse proxy that handles the load balancing for us, for example. There is also the option of sending it directly to a Docker service name and letting Swarm handle it. We can use Kubernetes and the service discovery it provides (or use Consul for that).

All these options are fine, but what if we need a huge task to be done, and the service that was handling that job crashes? Did we lose the job? Do we need to manage that crash in the communication protocol ourselves? How’s the load on the service? Is it handling the requests, or is it slowing down? Too many questions, right?

The point here is that even a simple and fast service can fail or crash and throw an error back to our user or to the consuming service, and that’s bad. Here’s where RabbitMQ comes in! It’ll handle the requests between our services and ensure every single request gets an answer.

Rabbit handles the queues and delivers each message to the right consumer. Once a job is “delivered” to a consumer, there are two main acknowledgement options:
1) Automatic acknowledgement
2) The consumer sends the acknowledgement itself.

The second one is the better way to confirm that a job was actually completed. After the job, the consumer may also send an answer back to the producer with some data.

Wanna know the load average of the services?
Every service has its own queue, and Rabbit can show you how every single queue is doing. With that, you can see whether you need more consumers for a given service, or whether you can scale any of them down.
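
If you have access to the broker host, something like the rabbitmqctl call below lists each queue with its backlog and consumer count:

rabbitmqctl list_queues name messages consumers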

Do you have another approach to solve those questions? Let me know 😉

Database Resident Connection Pool (DRCP)

Hi All!
I was reviewing some features in Oracle and, basically, every single time I review them I find something new. It seems Oracle Database features are nearly infinite, and we frequently find some that can really add value to our solutions.

So I decided to make a series of posts with really quick notes about each one of them.
You can see all the posts in this series, and some others, on my posts page.

Ready? Here it goes:

Database Resident Connection Pool

The database resident connection pool (DRCP) reduces the resource requirements of applications that currently don’t support connection pooling, either because it is not supported by the application infrastructure, or it has not been implemented. DRCP is only supported for database connections using the OCI driver.

The pool is managed using the DBMS_CONNECTION_POOL package. The DRCP is started and stopped using the START_POOL and STOP_POOL procedures, respectively.

SQL> EXEC DBMS_CONNECTION_POOL.start_pool;

PL/SQL procedure successfully completed.

SQL> EXEC DBMS_CONNECTION_POOL.stop_pool;

PL/SQL procedure successfully completed.

The ALTER_PARAM procedure allows you to configure an individual pool parameter, while the CONFIGURE_POOL procedure allows you to configure all pool parameters in one call. The default settings are restored using the RESTORE_DEFAULTS procedure. The pool parameters currently supported are listed below:

PROCEDURE CONFIGURE_POOL
 Argument Name                  Type                    In/Out Default?
 ------------------------------ ----------------------- ------ --------
 POOL_NAME                      VARCHAR2                IN     DEFAULT
 MINSIZE                        BINARY_INTEGER          IN     DEFAULT
 MAXSIZE                        BINARY_INTEGER          IN     DEFAULT
 INCRSIZE                       BINARY_INTEGER          IN     DEFAULT
 SESSION_CACHED_CURSORS         BINARY_INTEGER          IN     DEFAULT
 INACTIVITY_TIMEOUT             BINARY_INTEGER          IN     DEFAULT
 MAX_THINK_TIME                 BINARY_INTEGER          IN     DEFAULT
 MAX_USE_SESSION                BINARY_INTEGER          IN     DEFAULT
 MAX_LIFETIME_SESSION           BINARY_INTEGER          IN     DEFAULT
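
For example, tweaking a single parameter with ALTER_PARAM would look roughly like this (SYS_DEFAULT_CONNECTION_POOL is the default pool; the MAXSIZE value is just illustrative):

SQL> EXEC DBMS_CONNECTION_POOL.alter_param(pool_name => 'SYS_DEFAULT_CONNECTION_POOL', param_name => 'MAXSIZE', param_value => '40');

Clients then ask for a pooled server by adding (SERVER=POOLED) to the connect descriptor, or the :POOLED suffix in EZConnect, e.g. //host:1521/service:POOLED.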


Change display settings on Linux with Disper

From time to time, I change the Linux distro on my laptop or just do a fresh install of it. And once in a while, I have random problems with external displays. It can be something really “simple” like not detecting the external monitor, or something crazy like the image below.

screenshot of the bug

As you can see in the image, Mint detected the display but mirrored it in a crazy way that works and doesn’t work at the same time. If nothing you try gets it working, or you just want to skip the whole job of configuring complex text files, give Disper a try. Download the latest version, extract it to any folder, and run make install from the extracted folder.

make install

After that, you can start using it… There are a few options that will serve you well.

disper -e #extend your display
disper -c #clone your display
disper -s #only your external display
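
If I remember correctly, there is also a -l switch, handy to confirm the external monitor is being detected at all:

disper -l #list the detected displays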


Database in Cloud: Quickest Procedure to Clone a Schema

Hey all!
So, we all know that operating with files/dump files can be tricky when using DBaaS in a public cloud. In some situations, like the Amazon RDS service, we simply don’t have access to the OS.

In this scenario, how can we quickly clone a schema in the database? Using IMPDP with Database Link.

Also note that when working in AWS environments, avoiding dump files with expdp/impdp is also encouraged, to save IOPS on the local disks (they are capped based on machine type).

This is, of course, also valid for on-premises environments with limited space for dump files.

For this to work, we only need to create a database link pointing to the same database.
Also, of course, the remap_schema clause is mandatory, and remap_tablespace is optional.

In the following example the link is named loop, and impdp runs from a jumpbox with an Oracle client and TNS configuration pointing to the RDS database.

1. Creating Database Link:

SQL> create database link loop connect to my_sysdba identified by "***" using '(DESCRIPTION=(ADDRESS=(PROTOCOL=TCP)(HOST=localhost)(PORT=1521))(CONNECT_DATA=(SERVICE_NAME=service-name)))';

Database link created.

SQL> select * from dual@loop;

D
-
X

2. Running IMPDP:

[oracle@jumpbox ~]$ impdp schemas=ORIGINAL_SCHEMA network_link=loop remap_schema=ORIGINAL_SCHEMA:NEW_SCHEMA remap_tablespace=ORIGINAL_TBS:NEW_TBS TRANSFORM=oid:n directory=ANY_DIR logfile=duplicate_original.log

Import: Release 11.2.0.4.0 - Production on Wed Fev 7 21:03:54 2018

Copyright (c) 1982, 2011, Oracle and/or its affiliates. All rights reserved.

Username: / as sysdba

Connected to: Oracle Database 11g Release 11.2.0.4.0 - 64bit Production
Starting "SYS"."SYS_IMPORT_SCHEMA_02": /******** AS SYSDBA schemas=ORIGINAL_SCHEMA network_link=loop remap_schema=ORIGINAL_SCHEMA:NEW_SCHEMA remap_tablespace=ORIGINAL_TBS:NEW_TBS TRANSFORM=oid:n directory=ANY_DIR logfile=duplicate_original.log
Estimate in progress using BLOCKS method...
Processing object type SCHEMA_EXPORT/TABLE/TABLE_DATA
Total estimation using BLOCKS method: 1.481 GB
Processing object type SCHEMA_EXPORT/USER
ORA-31684: Object type USER:"NEW_SCHEMA" already exists
Processing object type SCHEMA_EXPORT/SYSTEM_GRANT
Processing object type SCHEMA_EXPORT/ROLE_GRANT
Processing object type SCHEMA_EXPORT/DEFAULT_ROLE
Processing object type SCHEMA_EXPORT/TABLESPACE_QUOTA
Processing object type SCHEMA_EXPORT/PRE_SCHEMA/PROCACT_SCHEMA
Processing object type SCHEMA_EXPORT/TYPE/TYPE_SPEC
Processing object type SCHEMA_EXPORT/SEQUENCE/SEQUENCE
Processing object type SCHEMA_EXPORT/TABLE/TABLE
...
. . imported "NEW_SCHEMA"."TABLE1" 0 rows
. . imported "NEW_SCHEMA"."TABLE2" 0 rows
. . imported "NEW_SCHEMA"."TABLE3" 0 rows
Processing object type SCHEMA_EXPORT/FUNCTION/FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/PROCEDURE
Processing object type SCHEMA_EXPORT/FUNCTION/ALTER_FUNCTION
Processing object type SCHEMA_EXPORT/PROCEDURE/ALTER_PROCEDURE
Processing object type SCHEMA_EXPORT/TABLE/INDEX/INDEX
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type SCHEMA_EXPORT/TABLE/CONSTRAINT/REF_CONSTRAINT
Processing object type SCHEMA_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
Job "SYS"."SYS_IMPORT_SCHEMA_02" completed with 1 error(s) at Wed Fev 7 21:08:52 2018 elapsed 0 00:04:54

Hope it helps,
Cheers!

Embedded server for Java Web App and a new author

Hi, my name is Maurício and this is the first time I’m posting here. I’m a Senior Software Engineer and I live in Brazil. I have six years of experience with web development and I love Java and JavaScript. I’ll be posting development stuff here from time to time. Hope you enjoy it!

Our subject for today: once in a while I do a small Java web project, and every time I wonder which application server will host the app: Tomcat, GlassFish, WebLogic, WildFly, Payara, and there are plenty more. Nowadays almost everybody uses a Docker container to host apps, and there’s nothing wrong with that, but why load a container if I can just embed the server in a jar file?

If you have a Maven project, there’s almost no work to be done: just add a plugin that targets the package goal and there you go. In the end, you will have a jar file named app-name-swarm.jar (I know, it’s WildFly Swarm, not a full WildFly, but it weighs only about 64 MB). Run it like a normal jar and you will have a running web app.

<plugin>
  <groupId>org.wildfly.swarm</groupId>
  <artifactId>wildfly-swarm-plugin</artifactId>
  <version>2018.2.0</version>
  <executions>
    <execution>
      <goals>
        <goal>package</goal>
      </goals>
    </execution>
  </executions>
</plugin>
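
Assuming a standard Maven layout, building and running it then looks something like this (the exact jar name depends on your artifactId):

mvn package
java -jar target/app-name-swarm.jar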

After all, if you want, that jar can be loaded into a Docker container and launched on a production server. There are other options, like embedded Tomcat, Payara Micro, and a few others. Go ahead and give any one of them a try and share your experience; I’m sure you’ll have something to say about them.

But remember, these options aren’t full application servers; they’re engineered to be small and focused on specific features, so read their docs to see what they can and can’t do before using them. 😉


Exacheck: The bundle patch version installed does not match the bundle patch version registered in the database

Hi all!
So, running Exacheck on a recently created database, I found this error:

 FAIL => The bundle patch version installed does not match the bundle patch version registered in the database: [host]:[sid],...

This means that a bundle patch was applied to the Oracle Home (OH), but its sqlpatch part was not applied to this database. It happens because Exacheck tries to match the patch info stored in the oraInventory with the patch info stored in dba_registry_sqlpatch.

Also note that in some situations running datapatch may require the database to be in upgrade mode, and if you are patching Exadata, which is generally a RAC-based environment, you need to set cluster_database=false and have at least one job_queue_process before starting the database with the STARTUP UPGRADE command. This should be described in the README of the related patch.
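
Just as a rough sketch (the database name and values below are illustrative, and the patch README is the authority), that upgrade-mode flow on a RAC database goes along these lines:

-- sketch only: adjust names, values and order to the patch README
SQL> alter system set cluster_database=false scope=spfile sid='*';
SQL> alter system set job_queue_processes=10 scope=spfile sid='*';

$ srvctl stop database -d mydb           # stop all instances
$ sqlplus / as sysdba
SQL> startup upgrade                     -- one instance only, in upgrade mode
$ cd $ORACLE_HOME/OPatch && ./datapatch -verbose

SQL> alter system set cluster_database=true scope=spfile sid='*';
SQL> shutdown immediate
$ srvctl start database -d mydb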

When checking for this, I found a really interesting validation script. As per:

# patch ID of the latest database bundle patch installed in the Oracle Home
opatch_bp=$($ORACLE_HOME/OPatch/opatch lspatches 2>/dev/null | grep -iwv javavm | grep -wi database | head -1 | awk -F';' '{print $1}')
# status of that same patch as registered in the database
database_bp_status=$(echo -e "set heading off feedback off timing off \n select STATUS from dba_registry_sqlpatch where PATCH_ID = $opatch_bp;" | $ORACLE_HOME/bin/sqlplus -s "/ as sysdba" | sed -e '/^ *$/d')
if [ "$database_bp_status" == "SUCCESS" ]
then
      echo "SUCCESS: Bundle patch installed in the database matches the software home and is installed successfully."
else
      echo "FAILURE: Bundle patch installed in the database does not match the software home, or is installed with errors."
fi

To fix it, just set the environment variables for the correct database, go to $ORACLE_HOME/OPatch, and run datapatch against it.
