JS.OracleDatabase: I got it, now how do I use it?

Welcome my much appreciated readers!

I want to tell you that if you have made it this far, you are on my team: the unsettled ones who are always looking for improvement.

I’ll be posting some scripts and samples in my PowerShell GitHub repo. I’m quite sure you’ll find some interesting ideas there. Feel free to contact me about modifications you’d like, or about better ways to implement and/or extend the functions already in the module.

Here’s a simple script to get the size of an Oracle cluster database.

# Get Formatted Sizing with % Used
param (
    [Parameter(Mandatory = $true)]
    [String[]]$TargetDB,
    [Parameter(Mandatory = $true)]
    [String]$SizeType,
    [Parameter(Mandatory = $true)]
    [String]$Unit
)
foreach ($DBName in $TargetDB) {
    Get-OracleSize -TargetDB $DBName -SizeType $SizeType -Unit $Unit | Format-Table -AutoSize `
        @{Name = "Database"; Expression = { $_.DBName }; Alignment = "Left" },
        @{Name = "DiskGroup"; Expression = { $_.DiskGroup }; Alignment = "Left" },
        @{Name = "UsedSpace"; Expression = { $_.UsedGB }; Alignment = "Right" },
        @{Name = "AllocatedSpace"; Expression = { $_.AllocatedGB }; Alignment = "Right" },
        @{Name = "MaxSpace"; Expression = { "{0:N2}" -f $_.MaxGB }; Alignment = "Right" },
        @{Name = "PercentUsed"; Expression = { "{0:P2}" -f [float]( $_.UsedGB / $_.MaxGB ) }; Alignment = "Right" }
}

Saving this and running it (after installing the JS.OracleDatabase module) will give you output like this:

[CUSTOMPROMPT] Dev > .\FormatDBSizing.ps1 -TargetDB myorcldb1 -SizeType Full -Unit GB

Database  DiskGroup UsedSpace AllocatedSpace MaxSpace PercentUsed
--------  --------- --------- -------------- -------- -----------
myorcldb1 DATA_DG      222.47         222.99    13439      1.66 %
myorcldb1 FRA_DG         6.55           6.68  4199.69      0.16 %

Feel free to give it a go and leave your opinions.



PowerShell + Oracle Client = Awesome Module

Hello everyone!

At first I was reluctant to try PowerShell. You know, being an Oracle-on-Linux DBA gets you fully used to the Linux ecosystem, and going for PowerShell meant relying on Windows and .NET to do what I was comfortable doing by leveraging Linux, Korn shell, and the Oracle Client.

But then again, I don’t like to be too comfortable, so I decided to dive in and see what I could achieve with PowerShell. As usual with Microsoft, expectations were high given what the makers of PowerShell claimed it could do. And this time, they were just right. If you want to go a little deeper into PowerShell, visit my previous post about it.

So, today I’m here to make both an announcement and a request.

The announcement is that my first PowerShell module is now published on PowerShellGallery.com under the name JS.OracleDatabase (JS as in Jesus Sanchez).

If you want to give it a test drive, just run the following commands in PowerShell; feel free to install it and use it however suits you best.

Find-Module JS.OracleDatabase
Install-Module JS.OracleDatabase

No admin rights? No problem: just add -Scope CurrentUser to the Install-Module command and you’ll be set.

The module is currently under development and I’m adding functions as they come in handy for me. Your suggestions can help me add more functionality and more options to existing functions.

Remember that this module is built on the Oracle Client programs, so you need the Oracle Client installed with at least SQL*Plus and tnsping.

Make sure to set your environment variables and add the bin directory to the PATH in PowerShell as well.

$env:ORACLE_HOME = "/path/to/your/oracle/installation"
$env:PATH = "$env:PATH;$env:ORACLE_HOME/bin"

This module can be used on Windows or, if you’re using PowerShell Core, on Linux.

Happy scripting!

How to avoid plain-text passwords in shell scripts

Hello fellow DBAs and readers,

today I want to share with you a good option to avoid having plain-text passwords stored on disk as part of your scripts or automation efforts.

A few months ago, I discovered an issue with a plain-text password sitting there in a script. Most of the people I talked to about it disregarded the issue as “no big deal”. But my concern only grew when I found out that scripting like that was common practice, so I decided to do something about it.

There are a lot of options out there, if you ask me. You could even write your own encryption routine in compiled code. But the truth is that, given enough time, a hacker who gets that binary can reverse engineer it to find out what you’re using. Python or any other scripting language is even easier to read if someone finds the code, so keeping our “encryption method” in the same place as the passwords is not a good option.

At some point we stumbled upon a good solution provided by Oracle itself: The Oracle Wallet.

In case you didn’t know, you can use it to store external passwords and access them transparently to connect to any Oracle database.

You can find the details in Oracle’s whitepaper on the Secure External Password Store.

According to the whitepaper, you can use the tnsnames.ora file located in your $ORACLE_HOME/network/admin directory. But for my use case, I needed to keep that one as clean as possible and use a different tnsnames.ora file.

So I have a dedicated directory that holds the tnsnames.ora and sqlnet.ora files plus the wallet directory. Let’s call it /path/to/tns/files. I’ll make my Oracle binaries aware of this location by exporting the TNS_ADMIN environment variable.
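In a POSIX shell, that export looks like this (using the placeholder path from above):

```shell
# Make the Oracle client tools (sqlplus, tnsping, mkstore) look for
# tnsnames.ora and sqlnet.ora in the custom directory instead of
# $ORACLE_HOME/network/admin.
export TNS_ADMIN=/path/to/tns/files
echo "TNS_ADMIN is set to $TNS_ADMIN"
```

Put this in your profile (or in the script itself) so every session picks it up.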

Let’s make a long story short and get this done:

  1. Create your directory
    mkdir -p /path/to/tns/files/wallet
  2. Create tnsnames.ora and sqlnet.ora files
    cd /path/to/tns/files
    vi tnsnames.ora
    vi sqlnet.ora
    • For the sqlnet.ora file use the following content (SQLNET.WALLET_OVERRIDE is what tells the client to use the wallet credentials for authentication):
        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA =
              (DIRECTORY = /path/to/tns/files/wallet)))
        SQLNET.WALLET_OVERRIDE = TRUE
  3. Create the wallet files
    mkstore -wrl /path/to/tns/files/wallet -create
    • Now you’ll see the cwallet.sso and the ewallet.p12 files sitting there.
  4. It’s time to start saving your credentials
    mkstore -wrl /path/to/tns/files/wallet -createCredential <DB NAME> <USERNAME> [<PASSWORD>]
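Steps 1 and 2 can be sketched as a small script. I’m using /tmp/tnsdemo here as a stand-in for /path/to/tns/files so the sketch can run anywhere; the mkstore steps still have to be run by hand against the real wallet directory:

```shell
#!/bin/sh
# Steps 1-2 as a script: create the TNS directory with its wallet
# subdirectory, and write a sqlnet.ora that points the wallet lookup
# at that directory. TNS_DIR stands in for /path/to/tns/files.
TNS_DIR=/tmp/tnsdemo
mkdir -p "$TNS_DIR/wallet"

cat > "$TNS_DIR/sqlnet.ora" <<EOF
WALLET_LOCATION =
  (SOURCE =
    (METHOD = FILE)
    (METHOD_DATA =
      (DIRECTORY = $TNS_DIR/wallet)))
SQLNET.WALLET_OVERRIDE = TRUE
EOF

cat "$TNS_DIR/sqlnet.ora"
```

From there, run the mkstore commands from steps 3 and 4 against that wallet directory and add your aliases to tnsnames.ora as usual.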

You can see I made the password optional. As you may have guessed, I don’t want to leave the password sitting in my shell’s command history.

So, finally, you can test your connection with: sqlplus /@<DB NAME>

AWR: Same SNAP ID, Different Dates

Hello everyone,

Recently I ran into an interesting issue. I created a script to automatically generate AWR + ADDM reports by asking only for the start and end timestamps.

Basically the script looks for the minimum and maximum SNAP_IDs between the start and end times in order to get comprehensive information out of the AWR.

Everything worked well until we got a start time pointing to last year. But we only keep two weeks of AWR data, so this seemed like a ghost snapshot.

The issue

Two years working perfectly, until we got something like this:

SQL> select distinct snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
where snap_id=99999
order by 2;

   SNAP_ID BEGIN_INTERVAL_TIME       END_INTERVAL_TIME
---------- ------------------------- -------------------------
     99999 03-MAY-16                 03-MAY-16
     99999 03-MAY-16                 03-MAY-16
     99999 03-MAY-16                 03-MAY-16
     99999 03-MAY-16                 03-MAY-16
     99999 31-JAN-17                 31-JAN-17
     99999 31-JAN-17                 31-JAN-17
     99999 31-JAN-17                 31-JAN-17
     99999 31-JAN-17                 31-JAN-17

8 rows selected.

So, how can this happen?

Basically, we refreshed this database from another one using an RMAN clone, which means that after the cloning the databases were identical, including AWR snapshots.

This introduces SNAP_IDs from the source database into the target database, and thus when the target database reaches that same SNAP_ID number, we face this situation.

By issuing a couple more queries, you can find something like this:

SQL> select dbid from v$database;

SQL> select dbid, retention from dba_hist_wr_control;

       DBID RETENTION
----------- -----------
 1357924680 40150 0:0:0.0
 1234567890 60 0:0:0.0
 2468013579 60 0:0:0.0

The Solution

The solution for this issue is simple enough.

We can use the following stored procedure to clean up the mess:

dbms_workload_repository.drop_snapshot_range(low_snap_id, high_snap_id, dbid)

By following the next few steps, you can find the snapshot range you want to drop and clear it out.

SQL> select min(snap_id), max(snap_id) from dba_hist_snapshot where dbid = 1357924680;

MIN(SNAP_ID) MAX(SNAP_ID)
------------ ------------
       55555        99999

SQL> exec dbms_workload_repository.drop_snapshot_range(55555,99999, 1357924680);

PL/SQL procedure successfully completed.

SQL> select * from dba_hist_snapshot where dbid = 1357924680;

no rows selected

So, this is it for today! I hope this helps some of you fellow DBAs out there!

NOTE: DB_IDs, SNAP_IDs and timestamps have been modified to protect the identity of the protagonists of this post 😉

The proactive Database Administrator – Shell Script as a tool

Hello my dear readers, I’ve been very quiet during the last few weeks. A lot going on.

So, today I want to talk to you about being a PROACTIVE DBA.

This is not only about having the latest version of Oracle Enterprise Manager Cloud Control and a bunch of fancy tools giving you graphical reports of database health. It’s about going down to lower levels so you can see root causes coming before they actually become an issue.

In my daily job, I try to be a lot more than a simple operational database support guy. I’m not here only to keep the lights on. That’s not my highest goal. My highest goal is to reach a level where the lights stay on by themselves, letting me focus on more important stuff: performance, solutions, enhancements, testing new approaches to old issues, automating more and more manual, repetitive tasks.

My mindset forbids me to stay in my comfort zone doing the same tasks one day after another. I can’t just sit as a robot and do the same exact thing time after time. I need to evolve, to grow, to know more.

This is the Proactive DBA. The DBA who looks for symptoms before they appear. The one who automates all the tasks that can be automated. The one who looks for a better way to do things. The one who makes the magic happen.

Being a proactive DBA is being a DBA with a twist. A deep desire to be better, faster, fail-proof, more efficient.

In my case, shell scripting has been a marvelous tool over the past few years. I can monitor filesystem usage, create reports of how much space each database is using in a shared filesystem, monitor archivelog generation, and clean up space by issuing RMAN commands or automatically dropping older GRPs (guaranteed restore points) that no longer comply with our retention policy. That keeps the lights on even when I’m not watching, which is how I like it. If I receive an email from one of these monitoring scripts, I can react before the alert becomes an issue.
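To illustrate the kind of monitoring mentioned above, here is a minimal sketch of a filesystem-usage check. The 90% threshold and the simulated df output are my own assumptions for the demo; a real cron job would pipe live `df -P` output in and hand the alert lines to mailx instead of printing them.

```shell
#!/bin/sh
# Minimal filesystem-usage monitor sketch.
# check_usage reads "df -P"-style output on stdin and prints an ALERT
# line for every mount at or above the given threshold percentage.
check_usage() {
  awk -v t="$1" 'NR > 1 {
    use = $5
    sub(/%/, "", use)                 # strip the % sign from the Use% field
    if (use + 0 >= t) printf "ALERT: %s is at %s%% used\n", $6, use
  }'
}

# Simulated df output for the demo; in cron you would run: df -P | check_usage 90
printf 'Filesystem 1K-blocks Used Available Use%% Mounted\n/dev/sda1 100 95 5 95%% /u01\n/dev/sdb1 100 40 60 40%% /u02\n' | check_usage 90
```

Running it prints an alert for /u01 only, since /u02 is below the threshold; pipe the result to your mailer of choice and you have the basic lights-stay-on loop.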

You can find some of my shared work at https://github.com/Silvarion/shellscript

There you’ll find a couple of useful scripts; grab what you like and dismiss what you don’t.