How to ‘find’ with argument list too long

Hello dear readers,

Today I’m going to share a very short, but (I hope) useful, piece of information for Linux admins and DBAs alike.

The find tool is very good at isolating files that match a certain pattern.

I use it, in particular, for cleaning up old logs and files that are no longer required on the file system. If you generate 5 to 10 logs a day, you could even do this manually, but when you generate 5 to 10 logs per minute, you need to automate as much as you can.

So the following pipeline does the job nicely:

find . -name *xmin*.log -mtime +30 | xargs -r rm -f

Or so I thought.

Lately I have been receiving the following error:

/usr/bin/find: Argument list too long

A shame… The cleanup is not working anymore and I have to go over it manually. But doing it manually doesn’t work either, as the number of files is huge.

Turns out that the error pops up before find even runs. Because the pattern *xmin*.log is unquoted, the shell expands it at substitution time into the list of every matching file name, so find never receives a pattern at all. Once that list grows past the kernel’s argument-length limit, the command cannot even be executed.

So, if you want it to work as expected, just enclose the pattern in single quotes so the shell passes it to find untouched:

find . -name '*xmin*.log' -mtime +30 | xargs -r rm -f

And that’s it… You’re back in the automated game!
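As a side note, the same cleanup can also be hardened against odd file names (GNU findutils assumed; the file names below are purely for the demo, and -mtime +30 is dropped so the freshly created file matches). A quick run in a throwaway directory:

```shell
# Throwaway demo: quoting keeps the pattern intact all the way to find.
tmp=$(mktemp -d)
touch "$tmp/db_xmin_1.log" "$tmp/keep.txt"

# -print0 / -0 keep file names with spaces or newlines in one piece;
# -r tells xargs to do nothing when the list is empty.
find "$tmp" -name '*xmin*.log' -print0 | xargs -r0 rm -f

ls "$tmp"        # only keep.txt remains
rm -rf "$tmp"
```

On a recent GNU find you can even drop the pipe entirely with find . -name '*xmin*.log' -mtime +30 -delete.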


The proactive Database Administrator – Shell Script as a tool

Hello my dear readers, I’ve been very quiet during the last few weeks. A lot going on.

So, today I want to talk to you about being a PROACTIVE DBA.

This is not only about having the latest version of Oracle Enterprise Manager Cloud Control and a bunch of fancy tools giving you graphical reports of database health. It’s about going down to lower levels, so you can see root causes building up before they actually become an issue.

In my daily job, I try to become a lot more than a simple Operational Database Support guy. I’m not here only to keep lights on. That’s not my higher goal. My higher goal is to achieve a level where the lights keep on by themselves. Letting me focus on more important stuff. Performance. Solutions. Enhancements. Testing new approaches to old issues. Automating more and more manual, repetitive tasks.

My mindset forbids me to stay in my comfort zone doing the same tasks one day after another. I can’t just sit as a robot and do the same exact thing time after time. I need to evolve, to grow, to know more.

This is the Proactive DBA. The DBA that looks for symptoms before they appear. The one that automates all those tasks that can be automated. The one that looks for a better way to do things. The one that makes the magic happen.

Being a proactive DBA is being a DBA with a twist. A deep desire to be better, faster, fail-proof, more efficient.

In my case, shell scripting has been a marvelous tool during the past few years. I can monitor filesystem usage, create reports of how much space each database is using in a shared filesystem, monitor archivelog generation, and clean up space by issuing RMAN commands or automatically dropping older GRPs that no longer comply with our retention policy. That keeps the lights on even when I’m not watching, and that’s how I like it. If I receive an email from one of these monitoring scripts, I can react before the alert becomes an issue.
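As a taste of that approach, here is a minimal sketch of a filesystem-usage monitor of the kind described above. The threshold and the alert text are my own illustrative choices, and a real cron job would swap echo for mailx or sendmail:

```shell
# Illustrative threshold; tune to your environment.
THRESHOLD=90

# df -P gives stable one-line-per-filesystem output that is safe to parse.
# Column 5 is Use% and column 6 is the mount point.
df -P | awk 'NR > 1 { sub(/%/, "", $5); print $5, $6 }' |
while read -r pct mount; do
    # Skip rows whose usage column is not a plain number (e.g. "-").
    case $pct in *[!0-9]*|'') continue ;; esac
    if [ "$pct" -gt "$THRESHOLD" ]; then
        # In a real monitor this line would be mailed to the DBA team.
        echo "WARNING: $mount is at ${pct}% (limit ${THRESHOLD}%)"
    fi
done
```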

You can find some of my shared work at

There you’ll find a couple of useful scripts; grab what you like and dismiss what you don’t.

Shell script to download files or directories from SVN (or web)

Hello everybody!

Today I bring you a new public Gist holding a shell script that you can use to download a single file, a list of files, or an entire directory from SVN (or any web page, for that matter) in a single run.

The only important prerequisite for it to run successfully is that wget is installed on the Linux box where you’re going to run it.
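Since the script itself lives in the Gist, here is only a hedged sketch of that prerequisite check plus the typical wget calls involved. The function names, URL, and flag choices are my own illustration, not the Gist’s actual code:

```shell
# Fail fast if a needed tool is absent (the wget prerequisite above).
require_tool() {
    command -v "$1" >/dev/null 2>&1 || {
        echo "ERROR: $1 is required but not installed" >&2
        return 1
    }
}

# Download one file; -q is quiet, -O picks the local file name.
download_file() {
    require_tool wget || return 1
    wget -q -O "$2" "$1"
}

# Download a whole directory: -r recurses, -np never ascends to the
# parent directory, -nH drops the hostname from the local path.
download_dir() {
    require_tool wget || return 1
    wget -q -r -np -nH "$1"
}
```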


Hope you find this useful. If so, please share it!

Utility Script

When working with shell scripts, it’s always useful to have common functions grouped in one or more scripts that can be sourced by whichever running script needs them.

I have placed some of my own scripts in a public GitHub so that we can all share them and use them. Here you have my personal utility function script.
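To illustrate the sourcing pattern, here is a minimal sketch. The real msgPrint lives in the GitHub script, so the stub below only assumes a plausible interface (severity tag first, then the message), and in practice you would source the shared file instead of defining the stub:

```shell
# Real usage would be:  . /path/to/utility_functions.sh
# Stub standing in for the shared function (assumed interface):
msgPrint() {
    # $1 = severity tag (INFO, WARNING, ERROR...), rest = message text.
    level=$1
    shift
    printf '[%s] %s\n' "$level" "$*"
}

msgPrint INFO "Backup started"
msgPrint ERROR "Backup failed"
```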

Enjoy and, if you like it, share!

Hope this brings some automation and code organization to your job.


Applying a command or a SQL script to all databases in the same host

Hello everyone,

This is my first post on the oracle blogs section!

I really hope the posts I publish here help some of you work in a more dynamic and comfortable way.

This time I’m publishing a simple KSH script that lets you apply a single command or a SQL script to all running databases on a single host.

Now let’s explain each part:

  • Functions
    • Utility Functions: refer to this post to check the utility functions I normally use.
      • You’ll see debugPrint and msgPrint in most of my shell scripts. Those are used to give a nice format to the messages I’m sending to the console/log.
      • debugPrint is about to be deprecated as I added that option to the more generic msgPrint function.
    • Crawl
      • This is the main function that receives the command/sql file to be run in every instance.
  • Main Algorithm
    • for DBNAME in $(ps -ef | grep pmon | cut -d"_" -s -f3 | grep -v ASM)
      • Here we cycle through all the running instances on the host, looking for pmon processes and extracting only the instance name from each.
    • crawl "@${2}"
      • In SQL-script mode, the argument passed to the crawl function is prefixed with the @ sign, so we don’t have to add it when running inside the function.
    • crawl "${2}"
      • In command mode, we pass the argument inside double quotes so that the function receives it as a single argument.
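Putting the pieces above together, here is a simplified, runnable skeleton of that loop. crawl here is a stand-in for the real function (which would set ORACLE_SID and invoke sqlplus), so everything it prints is illustrative:

```shell
# Stand-in for the real crawl: just show which call reaches which instance.
crawl() {
    printf "Running '%s' against %s\n" "$1" "$DBNAME"
}

mode=${1:-cmd}   # "sql" for a SQL-script file, anything else for a command

# '[p]mon' keeps grep from matching its own process; cut -s drops lines
# without an underscore, leaving only the instance name after "ora_pmon_".
for DBNAME in $(ps -ef | grep '[p]mon' | cut -d"_" -s -f3 | grep -v ASM); do
    case $mode in
        sql) crawl "@${2}" ;;   # SQL-script mode: prepend the @ sign
        *)   crawl "${2}"  ;;   # command mode: one quoted argument
    esac
done
```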

With some light modifications, we can get this same scheme to work with any of the Oracle command-line utilities. But that’s a future project of mine. I will try to add support for RMAN in the next few months, as my work allows.

Hope that some of you find this post useful.