Hi all,
While setting up a new environment for a client, I was asked to stop using autoextend on the database files to avoid ASM diskgroup space issues.
The concern was that the diskgroup's free space might not be enough to handle the growth of all datafiles, since people often add datafiles with autoextend enabled without checking the database and storage as a whole, which can turn a small issue into a bigger one.
I really like autoextend, as it can prevent processes from failing, and it also avoids wasting space: in an environment with multiple tablespaces, each tablespace grows at a different rate, and as I mentioned, not all validations are done when adding space. We just want to resolve the issue and move on to the next one.
So how can we save space by using autoextend where it is needed, without the risk of over-committing the storage?
The idea is to monitor the database's possible growth and compare it with the current free space in the ASM diskgroup used by the database.
To accomplish that, I used the MAXBYTES column in dba_data_files and dba_temp_files. In the example below, the BYTES column shows the current size and MAXBYTES shows the limit to which the file can grow. If MAXBYTES is 0, the file does not have autoextend enabled.
SYS@db011> select TABLESPACE_NAME, FILE_NAME, BYTES, MAXBYTES
           from dba_data_files
           where TABLESPACE_NAME like 'SYS%' or TABLESPACE_NAME = 'UNDOTBS1';

TABLESPACE_NAME  FILE_NAME                                         BYTES        MAXBYTES
---------------- ------------------------------------------------- -----------  ------------
SYSTEM           +ORA_DATA/db01/datafile/system.264.1032015007     2147483648   32212254720
SYSAUX           +ORA_DATA/db01/datafile/sysaux.319.1032015009     2751463424   32212254720
UNDOTBS1         +ORA_DATA/db01/datafile/undotbs1.320.103201500    4294967296             0
So, to gather the maximum database size, I summed the greatest value between BYTES and MAXBYTES for each file, used v$asm_diskgroup to grab the current available space of the diskgroup set in the db_create_file_dest parameter, and finally subtracted the maximum database size from that available space:
select dg.FREE_MB - round((dbf.max + tempf.max), 0) as total_free
from (select trunc(sum(greatest(maxbytes, bytes) / 1024 / 1024)) as max
      from dba_data_files) dbf,
     (select trunc(sum(greatest(maxbytes, bytes) / 1024 / 1024)) as max
      from dba_temp_files) tempf,
     (select FREE_MB
      from v$asm_diskgroup
      where name in (select LTRIM(value, '+')
                     from v$parameter
                     where name = 'db_create_file_dest')) dg;
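As a side note, when the result comes back negative (or uncomfortably small), it helps to see where the potential growth is concentrated. The query below is just a sketch of my own using the same GREATEST(BYTES, MAXBYTES) idea, not part of the metric itself:

-- Illustrative helper, not part of the metric extension:
-- remaining autoextend headroom per tablespace, in MB.
-- Files without autoextend (MAXBYTES = 0) contribute 0.
select tablespace_name,
       trunc(sum(greatest(maxbytes, bytes) - bytes) / 1024 / 1024) as growth_left_mb
from   dba_data_files
group  by tablespace_name
order  by growth_left_mb desc;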
Based on this, we can build several safeguards against over-committing the storage.
The one I used was to create a metric extension:
If the value returned by the query above is negative, it means the maximum database size is greater than the available space, and therefore the capacity plan for that database must be reviewed.
- To create the metric extension, in the OEM menu go to Enterprise -> Monitoring -> Metric Extensions, then Actions -> Create -> Metric Extension.
- In the General Properties page, select the target type: Cluster Database if it's RAC, or Database Instance otherwise.
- In the Adapter, select “SQL”
- In the Collection Schedule, enable the metric, enable data upload, and set the collection frequency according to your environment's needs.
- In the Adapter page, add the above query.
- In the Columns page, add a column for the result of the query.
- To fit my environment, I only set a critical alert, raised when the value of the metric is below 0.
- Continue filling in the necessary information on the next pages, and test the new metric on a selected target (if the test fails with a privileges error, see the grants sketch after this list). Afterwards, save the new metric.
- Prior to deploying it to any target, you need to save the metric as a deployable draft: select the created metric, go to the Actions menu and select Save As Deployable Draft.
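One thing to check before testing: the metric runs with the target's monitoring credentials (commonly DBSNMP), and that user needs to be able to read the views used in the query. The grants below are only a sketch assuming DBSNMP is your monitoring user and that it does not already have access through a role such as SELECT_CATALOG_ROLE; adjust the grantee to your environment:

-- Assumption: DBSNMP is the monitoring user; adjust the grantee as needed.
-- Grants are on the underlying SYS objects (v_$... for the v$ synonyms).
grant select on sys.dba_data_files   to dbsnmp;
grant select on sys.dba_temp_files   to dbsnmp;
grant select on sys.v_$asm_diskgroup to dbsnmp;
grant select on sys.v_$parameter     to dbsnmp;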
Now you can deploy the new metric to the targets.
Hope this can help!
Elisson Almeida