How to avoid plain-text passwords in shell scripts

Hello fellow DBAs and readers,

Today I want to share with you a good option for avoiding plain-text passwords stored as part of scripts or automation efforts.

A few months ago, I discovered an issue with a plain-text password sitting in a script. Most of the people I talked to about it dismissed it as “no big deal”. But my concern only grew when I found out that scripting like that was common practice, so I decided to do something about it.

There are a lot of options out there, if you ask me. You could even hide the logic in compiled code, but the truth is that, given enough time, anyone who gets hold of that binary can reverse engineer it to find out what you’re doing. Python or any other scripting language is even easier to read, so keeping our “encryption method” in the same place as the passwords is not a good option.

At some point we stumbled upon a good solution provided by Oracle itself: The Oracle Wallet.

In case you didn’t know, you can use it to store external passwords and have them applied transparently when connecting to any Oracle Database.

Oracle publishes a whitepaper on this feature (the Secure External Password Store) with full details.

According to the whitepaper, you can use the tnsnames.ora file located in your $ORACLE_HOME/network/admin directory. But for my use case, I needed to keep that one as clean as possible and use a different tnsnames.ora file.

So I have a dedicated directory holding the tnsnames.ora and sqlnet.ora files plus the wallet directory. Let’s call it /path/to/tns/files. I’ll make my Oracle binaries aware of this location by exporting the TNS_ADMIN environment variable.

Let’s make a long story short and get this done:

  1. Create your directory
    mkdir -p /path/to/tns/files/wallet
  2. Create tnsnames.ora and sqlnet.ora files
    cd /path/to/tns/files
    vi tnsnames.ora
    vi sqlnet.ora
    • For the sqlnet.ora file use the following content:
      WALLET_LOCATION = 
        (SOURCE = 
          (METHOD = FILE) 
          (METHOD_DATA = 
            (DIRECTORY = /path/to/tns/files/wallet))) 
      SQLNET.WALLET_OVERRIDE = TRUE
      SSL_CLIENT_AUTHENTICATION = FALSE
  3. Create the wallet files
    mkstore -wrl /path/to/tns/files/wallet -create
    • Now you’ll see the cwallet.sso and the ewallet.p12 files sitting there.
  4. It’s time to start saving your credentials
    mkstore -wrl /path/to/tns/files/wallet -createCredential <DB NAME> <USERNAME> [<PASSWORD>]

You can see I marked the password as optional: as you may have guessed already, I don’t want to leave the password sitting in my shell’s command history. When you omit it, mkstore prompts for it interactively.

Finally, you can test your connection using: sqlplus /@<DB NAME>
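The end-to-end flow inside a script then looks roughly like this. This is a minimal sketch, not a production script: the /path/to/tns/files directory and the MYDB alias are placeholders for your own environment, and the actual sqlplus call is left commented out.

```shell
#!/bin/sh
# Point the Oracle client at our private TNS_ADMIN directory
# (hypothetical path; replace with your own).
export TNS_ADMIN=/path/to/tns/files

# Build the passwordless sqlplus invocation for a wallet alias.
# Note there is no password anywhere: sqlplus reads it from the wallet.
wallet_sqlplus() {
  printf 'sqlplus -S /@%s' "$1"
}

wallet_sqlplus MYDB   # prints: sqlplus -S /@MYDB

# Usage inside a real script would look like:
# $(wallet_sqlplus MYDB) <<'EOF'
# select sysdate from dual;
# exit
# EOF
```

The nice part is that rotating the password now only means updating the wallet; every script that connects through the alias keeps working untouched.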


AWR: Same SNAP ID, Different Dates

Hello everyone,

Recently I ran into an interesting issue. I created a script to automatically generate AWR + ADDM reports by asking only for the start and end timestamps.

Basically the script looks for the minimum and maximum SNAP_IDs between the start and end times in order to get comprehensive information out of the AWR.

Everything worked well until someone supplied a start time pointing to last year. But we only store two weeks of AWR data, so this looked like a ghost snapshot.

The issue

Two years working perfectly, until we got something like this:

SQL> select distinct snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
where snap_id=99999
order by 2;

  SNAP_ID BEGIN_INTERVAL_TIME END_INTERVAL_TIME 
---------- ------------------------- -------------------------
 99999     03-MAY-16 03.30.24.785000 03-MAY-16 03.45.30.203000
 99999     03-MAY-16 03.30.24.810000 03-MAY-16 03.45.30.226000
 99999     03-MAY-16 03.30.24.824000 03-MAY-16 03.45.30.252000
 99999     03-MAY-16 03.30.24.828000 03-MAY-16 03.45.30.227000
 99999     31-JAN-17 12.00.31.899000 31-JAN-17 12.15.37.672000
 99999     31-JAN-17 12.00.31.920000 31-JAN-17 12.15.37.693000
 99999     31-JAN-17 12.00.31.921000 31-JAN-17 12.15.37.697000
 99999     31-JAN-17 12.00.31.921000 31-JAN-17 12.15.37.694000

8 rows selected

So, how can this happen?

Basically, we refreshed this database from another one using an RMAN clone, which means that after the cloning the databases were identical, including AWR snapshots.

This introduces SNAP_IDs from the source database into the target database, so when the target database reaches that same SNAP_ID number, we face this situation.

By issuing a couple more queries, you can find something like this:

SQL> select dbid from v$database;

      DBID
----------
1234567890

SQL> select dbid, retention from dba_hist_wr_control;

      DBID RETENTION
---------- -------------------
1357924680 +40150 00:00:00.0
1234567890 +00060 00:00:00.0
2468013579 +00060 00:00:00.0

The Solution

The solution for this issue is simple enough.

We can use the following procedure to clean up the mess:

dbms_workload_repository.drop_snapshot_range(low_snap_id, high_snap_id, dbid)

With the next few queries you can find the snapshot range you want to drop and clear it out.

SQL> select min(snap_id), max(snap_id) from dba_hist_snapshot where dbid = 1357924680;

MIN(SNAP_ID)  MAX(SNAP_ID)
------------ -------------
       55555         99999

SQL> exec dbms_workload_repository.drop_snapshot_range(55555,99999, 1357924680);

PL/SQL procedure successfully completed.

SQL> select * from dba_hist_snapshot where dbid = 1357924680;

no rows selected
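This cleanup lends itself to scripting. Below is a minimal sketch that builds the drop call for a foreign DBID; the DBID and snapshot range are the anonymized values from above, and the generated statement is printed rather than piped into sqlplus so nothing gets dropped by accident.

```shell
#!/bin/sh
# Build the cleanup call for every snapshot belonging to a foreign DBID.
# The min/max snap_ids would come from dba_hist_snapshot, as shown above.
build_drop_call() {
  dbid=$1; min_snap=$2; max_snap=$3
  printf 'exec dbms_workload_repository.drop_snapshot_range(%s, %s, %s);' \
    "$min_snap" "$max_snap" "$dbid"
}

# Example: foreign DBID 1357924680 holding snapshots 55555..99999.
# In a real script you would pipe this into sqlplus instead of printing it.
build_drop_call 1357924680 55555 99999
```

Printing first and executing second is deliberate: you get a chance to eyeball the statement before AWR history is gone for good.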

So, this is it for today! I hope this helps some of you fellow DBAs out there!

NOTE: DB_IDs, SNAP_IDs and timestamps have been modified to protect the identity of the protagonists of this post 😉

The proactive Database Administrator – Shell Script as a tool

Hello my dear readers, I’ve been very quiet during the last few weeks. A lot going on.

So, today I want to talk to you about being a PROACTIVE DBA.

This is not only about having the latest version of Oracle Enterprise Manager Cloud Control and a bunch of fancy tools giving you graphical reports of database health. It’s about going to lower levels, so you know root causes are coming before they actually become an issue.

In my daily job, I try to be a lot more than a simple Operational Database Support guy. I’m not here only to keep the lights on. That’s not my end goal. My goal is to reach a level where the lights stay on by themselves, letting me focus on more important stuff: performance, solutions, enhancements, testing new approaches to old issues, automating more and more manual, repetitive tasks.

My mindset forbids me to stay in my comfort zone doing the same tasks one day after another. I can’t just sit as a robot and do the same exact thing time after time. I need to evolve, to grow, to know more.

This is the Proactive DBA. The DBA that looks for symptoms before they appear. The one that automates all those tasks that can be automated. The one that looks for a better way to do things. The one that makes the magic happen.

Being a proactive DBA is being a DBA with a twist. A deep desire to be better, faster, fail-proof, more efficient.

In my case, shell scripting has been a marvelous tool over the past few years. I can monitor filesystem usage, report how much space each database is using in a shared filesystem, monitor archivelog generation, and reclaim space by issuing RMAN commands or automatically dropping older guaranteed restore points (GRPs) that no longer comply with our retention policy. That keeps the lights on even when I’m not watching, which is how I like it. If I receive an email from one of these monitoring scripts, I can react before the alert becomes an issue.
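To make that concrete, a filesystem-usage monitor of the kind I mean can be as small as this sketch. The 90% threshold is an assumption, the mail step is only hinted at in a comment, and the decision logic is split into its own function so it can be tested without touching a real filesystem.

```shell
#!/bin/sh
# Decision logic: does this usage percentage cross the threshold?
fs_status() {
  used_pct=$1; threshold=$2
  if [ "$used_pct" -ge "$threshold" ]; then echo "ALERT"; else echo "OK"; fi
}

# Walk mounted filesystems and report the ones at or above 90% used.
# In my real scripts the echo below would feed mailx instead.
df -P | awk 'NR>1 {gsub("%","",$5); print $6, $5}' |
while read -r mount pct; do
  # Skip any line whose usage column is not a plain number.
  case $pct in ''|*[!0-9]*) continue ;; esac
  if [ "$(fs_status "$pct" 90)" = "ALERT" ]; then
    echo "$mount is at ${pct}% usage"
  fi
done
```

Drop a script like this into cron and the “email before it becomes an issue” workflow described above falls out naturally.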

You can find some of my shared work at https://github.com/Silvarion/shellscript

There you’ll find a couple of useful scripts, so you can grab what you like and dismiss what you don’t.

Applying a command or a SQL script to all databases in the same host

Hello everyone,

This is my first post in the Oracle blogs section!

I really hope the posts I publish here help some of you make your work more dynamic and comfortable.

This time I’m publishing a simple KSH script that lets you apply a single command or a SQL script to all running databases on a single host.

Now let’s explain each part:

  • Functions
    • Utility Functions: refer to this post to check the utility functions I normally use.
      • You’ll see debugPrint and msgPrint in most of my shell scripts. Those are used to give a nice format to the message I’m sending to the console/log.
      • debugPrint is about to be deprecated as I added that option to the more generic msgPrint function.
    • Crawl
      • This is the main function that receives the command/sql file to be run in every instance.
  • Main Algorithm
    • for DBNAME in $(ps -ef | grep pmon | cut -d"_" -s -f3 | grep -v ASM)
      • Here we cycle through all the running instances on the host, looking for pmon processes and extracting only the instance name from each.
    • crawl “@${2}”
      • In the SQL script mode see how the argument passed to the crawl function is appended to the @ sign, so that we don’t have to check for it when running inside the function.
    • crawl “${2}”
      • In the command mode we pass the argument inside double quotes so that it is considered a single argument by the function while running.
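Putting those pieces together, the core of the loop looks roughly like this. It is a simplified sketch of the idea, not the full script from the repository: the crawl body and main loop are shown as comments, and only the instance-name extraction (the fiddly part) runs as-is.

```shell
#!/bin/sh
# Extract the instance name from a pmon process name such as "ora_pmon_ORCL",
# discarding ASM instances just like the main loop does.
instance_from_pmon() {
  echo "$1" | cut -d"_" -s -f3 | grep -v ASM
}

# Run a command or @script against one instance via sqlplus (sketch only):
# crawl() {
#   export ORACLE_SID=$DBNAME
#   echo "$1" | sqlplus -S "/ as sysdba"
# }

# Main loop: one pass per running (non-ASM) instance.
# for PROC in $(ps -ef | grep pmon | awk '{print $NF}'); do
#   DBNAME=$(instance_from_pmon "$PROC")
#   [ -n "$DBNAME" ] && crawl "select name from v\$database;"
# done

instance_from_pmon "ora_pmon_ORCL"   # prints: ORCL
```

Note how an ASM pmon process such as asm_pmon_+ASM1 yields no output at all, so the loop simply skips it.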

With some light modifications, we can get this same scheme to work with any of Oracle’s command-line utilities. But that’s a future project of mine; I will try to add support for RMAN in the next few months, as work allows.

Hope that some of you find this post useful.

Quickly check Oracle listeners

Hello everyone,

Today I would like to share something with you. In my day-to-day work, I have encountered more than one place where DEV environments are spread across only a few hosts. This normally leads to creating several databases on the same host, and to creating several listeners to split communication across several ports.

Given this, after OS patches or any OS-related activity that requires shutting down the database software, it’s quite common for people to forget to start the listeners. For this I have this useful script:
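The idea is simple: walk the listeners expected on the host, check each one, and start any that are down. Here is a minimal sketch of that logic; the listener names are hypothetical, the lsnrctl calls are left commented, and the decision step is a separate function so it can be tested without a real listener around.

```shell
#!/bin/sh
# Given the current status of a listener ("up" or "down"),
# decide whether we need to start it.
listener_action() {
  if [ "$1" = "down" ]; then echo "start"; else echo "skip"; fi
}

listener_action down   # prints: start

# Real check: a listener counts as "up" when lsnrctl status succeeds.
# check_listener() {
#   if lsnrctl status "$1" >/dev/null 2>&1; then echo up; else echo down; fi
# }

# For each listener we expect on this host (hypothetical names):
# for LSNR in LISTENER LISTENER_DEV1 LISTENER_DEV2; do
#   if [ "$(listener_action "$(check_listener "$LSNR")")" = "start" ]; then
#     lsnrctl start "$LSNR"
#   fi
# done
```

Run it right after any OS maintenance window and the forgotten-listener problem takes care of itself.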

Enjoy.

Hope this helps in making your job easier.