William Orr · LinkedIn · Mountain View, CA · Joined March 2012
Calling find with -delete is probably faster, since you don't have to execute rm on each file; find just passes the filename directly to unlink(2).

However, it's definitely not safer.

My advice here is to switch to using the xargs command. One of its nicer features is that it builds the longest possible rm command line and executes that, rather than running rm once per file (as you would if you looped over find's output in a for loop), which reduces the number of calls to fork(2) and execve(2).
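To see that batching in action, here's a quick illustration (seq is used only to generate sample input):

```shell
# xargs packs as many arguments as fit onto one command line, so echo
# runs once here rather than five times (compare `xargs -n1 echo`).
seq 5 | xargs echo
# → 1 2 3 4 5
```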

Additionally, it's worth mentioning that @Kwpolska's solution is dangerous as well. If you have any files with spaces in their names, you'll get unintended results: xargs splits its input on whitespace, so a single filename containing a space arrives at rm as two separate arguments.
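A quick way to see the splitting problem, with echo standing in for rm so nothing is actually deleted:

```shell
# A filename containing a space is split into two separate arguments
# by default, so rm would be asked to delete "a" and "file.pyc".
printf 'a file.pyc\n' | xargs -n1 echo
# → a
# → file.pyc
```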

The way to get around that (provided you're using GNU find/xargs, which you probably are) is to add the -0 flag to xargs and -print0 to find. find will then delimit its output with NUL bytes, and xargs will use the NUL byte as the delimiter instead of whitespace.
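With NUL delimiters, the same filename survives intact:

```shell
# With -0, xargs splits only on NUL bytes, so the space is preserved
# and the filename arrives at the command as a single argument.
printf 'a file.pyc\0' | xargs -0 -n1 echo
# → a file.pyc
```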

Your final command will look like:

find . -name \*.pyc -print0 | xargs -0 rm
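As an aside not covered above: on systems without GNU find/xargs, POSIX find's own -exec ... {} + batches arguments the same way, and since no pipe is involved there's no delimiter problem at all. A sketch in a throwaway directory (the path is illustrative):

```shell
# Set up a scratch directory with a spaced filename to prove the point.
mkdir -p /tmp/pyc-demo && cd /tmp/pyc-demo
touch 'a.pyc' 'b c.pyc' 'keep.py'

# -exec ... {} + hands find's results to rm in large batches, directly
# as argv entries, so spaces in names are never a problem.
find . -name '*.pyc' -exec rm -f {} +

ls   # only keep.py remains
```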

Cheers

Posted to Correctly opening a file in Perl over 1 year ago

Thanks for the feedback.

I made sure to only call open if $filename isn't a falsey value, and to explain my reasoning behind die in the protip.

Posted to Constants in Perl over 1 year ago

Thanks for the response. TIL you can take a reference to a literal or a return value.

Const::Fast still provides faster access, since it avoids a function call every time you read the constant.

I tend to use ack. It's a great tool - definitely worth a look.

https://metacpan.org/module/PETDANCE/ack-1.96/ack-base
