New GitHub Repo: powershell

Hello my fellow DBAs and dear readers!

This personal project started as a need at my job. Tightened security measures forced us to stop all scripting directly on the Linux servers where our databases run (100+ databases, over 200 instances). On top of those limitations, neither Bash for Windows nor Cygwin was available for installation on our workstations. This pushed me to find something else. Python, maybe? Couldn't install extensions. Perl? It adds complexity I couldn't afford given the time frames. I ended up saying: let's give PowerShell a shot.

I started looking for resources and found some nice videos about PowerShell on the Microsoft Virtual Academy a couple of weeks ago. As I progress through the MVA lessons and apply what I learn to my own PS scripts, I'm getting a nice taste of what this can become.

We all know for a fact that the Windows console sucks big time. It pales in comparison to the Bourne Again Shell (a.k.a. Bash) and other popular Linux shells. So Microsoft started this PowerShell thing: something that looked similar to the console, but different.

In its early stages, it was awful. But now this kid known as PS has grown into a solid v5.1 with some very nice functionality. Let's face it: we rely on Linux for servers, but Windows, Active Directory, and the general Microsoft ecosystem are pretty robust when it comes to enterprise software infrastructure.

Now, I'm an Oracle DBA looking for a way to do what I did on Linux, but from a Windows workstation. What I found was amazing, and as I build up my PS skills, I will be posting PowerShell scripts to my GitHub powershell repository to share them with you all.

I highly recommend that automation and scripting enthusiasts take a look at what PowerShell has to offer, especially now that PowerShell 6 (Beta) is also available for Linux.

The ability to use other tools and integrate them into a PS script makes it a particularly good option for interacting with files, databases, and all kinds of objects and data sources.

The future is uncertain, but everything seems to lead to a wonderful mix of tools and architectures that can make our lives as DBAs a lot easier.

Write-Output "See you soon"

How to ‘find’ with argument list too long

Hello dear readers,

Today I’m going to share a very short, but (I hope) useful piece of information for some Linux admins and DBAs as well.

The find tool is very good at isolating files that match a certain pattern.

I use it, in particular, for cleaning up old logs and files that are no longer required in the file system. If you generate 5 to 10 logs a day, you could even do this manually; but when you generate 5 to 10 logs per minute, you need to automate as much as you can.

So piping the following commands together does the job nicely:

find . -name *xmin*.log -mtime +30 | xargs -r rm -f

Or so I thought.

Lately I have been receiving the following error:

/usr/bin/find: Argument list too long

A shame… Cleanup is not working anymore and I have to go over it manually. But manual cleanup doesn't work either, as the number of files is huge.

It turns out the error pops up at shell expansion (globbing) time because of the unquoted pattern *xmin*.log. The shell doesn't pass it to find as a pattern at all; it expands it into the list of all matching files in the current directory, and that list blows past the kernel's argument-size limit.
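A quick way to see the difference is to compare what a command actually receives in each case. The demo directory and file names below are made up just for illustration:

```shell
#!/bin/sh
# Create a throwaway directory with a couple of matching files
# (names are hypothetical).
demo=$(mktemp -d)
cd "$demo"
touch app_xmin_1.log app_xmin_2.log

# Unquoted: the shell expands the glob *before* the command runs,
# so the command sees a (potentially huge) list of file names.
echo *xmin*.log       # → app_xmin_1.log app_xmin_2.log

# Quoted: the literal pattern string reaches the command untouched,
# which is exactly what find's -name test expects.
echo '*xmin*.log'     # → *xmin*.log
```

With thousands of matching files, that first expansion is what triggers "Argument list too long" before find even starts.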

So, if you want it to work as expected, just enclose it in single quotes as follows:

find . -name '*xmin*.log' -mtime +30 | xargs -r rm -f

And that’s it… You’re back in the automated game!
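As a side note, if your log names can ever contain spaces, the plain pipe above can still mis-split them at xargs time. With GNU find and xargs you can switch to null-separated output; this is an optional hardening step on top of the quoting fix, not part of the original problem:

```shell
# Null-separated variant (GNU find/xargs): robust even when matching
# file names contain spaces or newlines.
find . -name '*xmin*.log' -mtime +30 -print0 | xargs -0 -r rm -f
```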

How to rise and shine at work

I will start this article with a small personal story.

During 2016, the company's growth forced a new performance recognition program to be instated. The monthly newsletter mentions were falling short, and new motivation and career development efforts were the natural answer.

So, in June 2016, the Monthly Hero Program was inaugurated: a sort of MVP award for all of our projects. Later came the top performers lineup, and then the Annual Award Program.

Then these trophies started showing up. I was amazed, motivated, and highly committed to keeping up the pace.

And I still am. More because I love what I do than because of the actual prizes.

People turned to me asking how I stay motivated all the time. How I stay focused. How I work with such passion.

My answer is always the same: I love my job. I take risks. I don't care if I get fired because I did "too good" or "too much". For me, it's all about making things better each day, even if that blurs the lines of the job roles. After all, I'm working with several teams to reach a common goal.

How to do it!?

So, you want to be on the far right side of the Gaussian bell curve, where the top performers are. I want you to be there too.

Here’s how to get there.

First, ask yourself the following questions:

  1. Would you do what you do now at work just for the sake of knowledge, practice, or fun?
  2. Let’s say you don’t need money at all. Would you still keep the same career path you have today? (Maybe even as a cofounder of a new company)
  3. Do you wake up with the mood to get to work to see what challenges await?

If you answered yes to all of the previous questions, you're already on the path to that "Top Performer" side… if you're not there already.

If you answered no to any of the previous questions, then you have to find what is missing or even think about changing your career path or your job.

Risk it

If you don't feel motivated, energized, and appreciated in your current job position, give it a try and look for something else. Look here, there, and everywhere until you find the job offer that feels like it has your name written on it.

That’s the best first step.

Of course I know you have bills to pay. I’m not telling you that you should quit right away. I’m telling you: open your eyes.

The comfort zone is always blurring the outside. It doesn't want you to explore, to step outside and find new horizons, challenges, and motivators. You feel relaxed, but eventually you start to get bored.

Have fun

If you're not having fun at least 50% of the time you're working, you're in the wrong place and you need to find another job. Time to shake off the dust and put yourself on the market. It's time to find your place.

Give your best


No matter what you do, it matters that you do it the best you can. Read, study, research. Always look for better ways to do whatever you need to do. Look for best practices. Look for processes that have room for improvement, and go for it.

Tell your peers and your lead/supervisor about your findings (in writing, as explicit as possible) and let inertia do its thing. Always show off your ideas. Speak up during meetings. Demonstrate that you're an expert.

Do not limit your work to "what you're asked to do". Try to find ways to improve how things are done; those actions will be louder than any words.

Go above and beyond, just for the sake of it. Because you know you can do it. Because you know it can be improved. Because yes!

Know yourself

Nobody knows you like you do. You can try to lie to yourself, but it won't last forever. So start being honest with yourself and look for the place where you need to be. That will get you halfway toward being a top performer wherever you are.

Spread the word

If you do something good, let people know. Share it so others can learn from it and use it as well. That will earn you the credit automatically.

Think. Research. Plan. Execute. Improve. Share.

Those are your steps to be recognized as a Rockstar wherever you are.

Final thoughts

Improving stuff, automating, and sharing your knowledge won't get you fired. Don't be greedy, closed off, and unreachable. If you are, how do you expect others to recognize how good you are?

Be open, friendly and engage in sharing your experience and knowledge. That will get you far!

How to avoid plain-text passwords in shell scripts

Hello fellow DBAs and readers,

Today I want to share with you a good option for avoiding plain-text passwords sitting in storage as part of scripts or automation efforts.

A few months ago, I discovered an issue with a plain-text password sitting there in a script. Most of the people I talked to about it disregarded the issue as "no big deal". But my concern only grew when I found out that scripting like that was common practice, and I decided to do something about it.

There are a lot of options out there, if you ask me. You could even hide the logic in compiled code, but the truth is that, given enough time, an attacker who gets that code can reverse engineer it to find out what you're doing. Python or any other scripting language is even easier to read if someone finds the code, so keeping our "encryption method" in the same place as the passwords is not a good option.

At some point we stumbled upon a good solution provided by Oracle itself: The Oracle Wallet.

If you didn't know: you can use it to store external passwords and use them transparently to connect to any Oracle database.

You can find the whitepaper by clicking >> here <<

According to the whitepaper, you can use the tnsnames.ora file located in your $ORACLE_HOME/network/admin directory. But for my use case, I needed to keep that one as clean as possible and use a separate tnsnames.ora file.

So I have a dedicated directory that will hold the tnsnames.ora and sqlnet.ora files and the wallet directory; let's call it /path/to/tns/files. I'll make my Oracle binaries aware of this location by exporting the TNS_ADMIN environment variable.
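For reference, that export looks like this (the path is the placeholder used throughout this post):

```shell
# Make the Oracle client tools (sqlplus, tnsping, mkstore) read their
# Net configuration from the custom directory instead of the default
# $ORACLE_HOME/network/admin.
export TNS_ADMIN=/path/to/tns/files
```

You'd typically put this in the shell profile of the account running the scripts so every session picks it up.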

Let’s make a long story short and get this done:

  1. Create your directory
    mkdir -p /path/to/tns/files/wallet
  2. Create tnsnames.ora and sqlnet.ora files
    cd /path/to/tns/files
    vi tnsnames.ora
    vi sqlnet.ora
    • For the sqlnet.ora file, use the following content (WALLET_LOCATION tells the client where the wallet lives, and SQLNET.WALLET_OVERRIDE enables the external password store):
        WALLET_LOCATION =
          (SOURCE =
            (METHOD = FILE)
            (METHOD_DATA =
              (DIRECTORY = /path/to/tns/files/wallet)))
        SQLNET.WALLET_OVERRIDE = TRUE
  3. Create the wallet files
    mkstore -wrl /path/to/tns/files/wallet -create
    • Now you’ll see the cwallet.sso and the ewallet.p12 files sitting there.
  4. It’s time to start saving your credentials
    mkstore -wrl /path/to/tns/files/wallet -createCredential <DB NAME> <USERNAME> [<PASSWORD>]

You can see I marked the password as optional. As you may have guessed, that's because I don't want to leave the password in my shell's command history; if you omit it, mkstore prompts for it interactively.

So, finally, you can test your connection using sqlplus /@<DB NAME>
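For completeness, a matching tnsnames.ora entry might look like the following. The alias MYDB, the host, and the service name are hypothetical placeholders; substitute your own values:

```
MYDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = dbhost.example.com)(PORT = 1521))
    (CONNECT_DATA =
      (SERVICE_NAME = mydb.example.com)
    )
  )
```

The alias you define here (MYDB in this sketch) is the <DB NAME> you pass to mkstore -createCredential, and later the target of sqlplus /@MYDB.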

AWR: Same SNAP ID, Different Dates

Hello everyone,

Recently I ran into an interesting issue. I created a script to automatically generate AWR + ADDM reports by asking only for the start and end timestamps.

Basically the script looks for the minimum and maximum SNAP_IDs between the start and end times in order to get comprehensive information out of the AWR.

Everything worked well until we got a starting time pointing to last year, even though we only store 2 weeks of AWR data. This seemed like a ghost snapshot.

The issue

Two years working perfectly until we got something like this:

SQL> select distinct snap_id, begin_interval_time, end_interval_time
from dba_hist_snapshot
where snap_id=99999
order by 2;

   SNAP_ID BEGIN_INTERVAL_TIME       END_INTERVAL_TIME
---------- ------------------------- -------------------------
     99999 03-MAY-16                 03-MAY-16
     99999 03-MAY-16                 03-MAY-16
     99999 03-MAY-16                 03-MAY-16
     99999 03-MAY-16                 03-MAY-16
     99999 31-JAN-17                 31-JAN-17
     99999 31-JAN-17                 31-JAN-17
     99999 31-JAN-17                 31-JAN-17
     99999 31-JAN-17                 31-JAN-17

8 rows selected.

So, how can this happen?

Basically, we refreshed this database from another one using an RMAN clone, meaning that after the cloning the databases were identical, including AWR snapshots.

This introduces SNAP_IDs from the source database (stored under the source database's DBID) into the target database, so when the target database's own snapshot sequence reaches that same SNAP_ID number, we face this situation.

By issuing a couple more queries, you can find something like this:

SQL> select dbid from v$database;

SQL> select dbid, retention from dba_hist_wr_control;

       DBID RETENTION
----------- -----------
 1357924680 40150 0:0:0
 1234567890 60 0:0:0.0
 2468013579 60 0:0:0.0

The Solution

The solution for this issue is simple enough.

We can use the following stored procedure to clean up the mess:

dbms_workload_repository.drop_snapshot_range(low_snap_id, high_snap_id, dbid)

By following the next few steps you can find the snap range you want to drop and clear them out.

SQL> select min(snap_id), max(snap_id) from dba_hist_snapshot where dbid = 1357924680;

MIN(SNAP_ID) MAX(SNAP_ID)
------------ -------------
       55555         99999

SQL> exec dbms_workload_repository.drop_snapshot_range(55555, 99999, 1357924680);

PL/SQL procedure successfully completed.

SQL> select * from dba_hist_snapshot where dbid = 1357924680;

no rows selected

So, this is it for today! I hope this helps some of you fellow DBAs out there!

NOTE: DB_IDs, SNAP_IDs and timestamps have been modified to protect the identity of the protagonists of this post 😉