Last Updated: February 25, 2016

Truncating a list of files in CentOS

At work our big enterprise system generates a metric tonne of log files from various parts of the system. These can easily run to hundreds of megabytes each, and while they are immensely useful for debugging and monitoring our development and staging environments, they are a nuisance for disk space and for ploughing through to find a given issue.

Every so often we need to tidy them up, and the quick way is to just delete them all and let the system recreate them. However, doing that breaks any tail -f you have running, because tail keeps reading from the file it originally opened and never sees the recreated one. Sometimes it is also just a pain in the neck to remove and recreate files, and there is always the chance of human error in getting names and/or permissions right.
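If you want to see that for yourself, a quick two-terminal experiment makes it obvious (app.log here is just a throwaway example file, not one of ours):

$ tail -f app.log              # terminal 1: leave this running
$ rm app.log && touch app.log  # terminal 2: delete and recreate the file
$ echo "hello" >> app.log      # terminal 2: terminal 1 never shows this line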

So here is a quick alternative in bash; obviously you could do this in your scripting language of choice instead, as they all have pretty good file handling.

It leverages two things you can do straight from the command line.

The first is truncating a file via redirection

$ ls -l a_file.txt
-rw-r--r-- 1 auser auser 33 Nov 11 17:07 a_file.txt

$ > a_file.txt

$ ls -l a_file.txt
-rw-r--r-- 1 auser auser 0 Nov 11 17:07 a_file.txt

By redirecting nothing to the file, we overwrite any content in the file and truncate it to 0 bytes. Note: this will also create the file if it doesn't already exist.
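A couple of equivalent spellings are worth knowing, in case a bare redirection is disallowed or behaves differently in your shell of choice. A minimal sketch, assuming GNU coreutils is available (it is on any recent CentOS):

$ : > a_file.txt             # the ":" builtin does nothing, so only the redirection takes effect
$ truncate -s 0 a_file.txt   # explicitly set the file size to 0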

Then we use a simple for loop to iterate over a list of files and apply the command

for f in *.log; do > "$f"; done

In this case we are globbing for all files with the .log extension in the current directory and applying our truncation, leaving us with a list of log files at 0 bytes, all with the correct filenames and permissions.
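If the logs are scattered across subdirectories rather than all sitting in one place, the same idea translates to find; a rough sketch, where /var/log/myapp stands in for wherever your application actually writes:

$ find /var/log/myapp -type f -name '*.log' -exec truncate -s 0 {} +

The + at the end batches the filenames together, so truncate runs as few times as possible instead of once per file.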

Taken from my blog at http://www.fullybaked.co.uk/