Linux | File Cleaner | bash script

Recently we set up a Linux server for keeping backups and decided not to use certain switches while the backups were synced from Windows machines. This left us with an additional task: maintaining the storage space according to different business requirements using our own solutions, so the following script was developed. Please note that this script has been tested on CentOS/RHEL/OEL 7 and is executed with root privileges.

#!/bin/bash

# Cleanup tool for Linux Samba Server
# Rajesh Thampi
# Date: Sep 2021
# Instructions
# Copy the script to a file with .sh extension
# Make it executable (eg: chmod +x filecleaner.sh)
# Execute! (eg: ./filecleaner.sh 1>filecleaner.log 2>filecleaner.err)
# And be careful :)


function purgeit(){
local DIRNAME="$1"
local FILETYPE="$2"
local KEEPCOUNT="$3"
local MAXAGE="$4"

cd "$DIRNAME" || { echo "Could not enter ${DIRNAME}, skipping"; return 1; }
echo "Entered Directory: ${PWD}"
#Logic
#Only when at least $3 files matching the pattern $2 exist that were created within
#the last $4 days, delete all matching files older than $4 days.

if [ "$(find . -maxdepth 1 -name "$FILETYPE" -type f -mtime -"$MAXAGE" | wc -l)" -ge "$KEEPCOUNT" ]; then
local OBSFILECNT=$(find . -maxdepth 1 -name "$FILETYPE" -type f -mtime +"$MAXAGE" | wc -l)
echo "There are ${OBSFILECNT} obsolete files and they will be purged"

#The listing below is ONLY for logging purposes
#The actual deletion happens in a single "find ... -exec" command

        if [ "$OBSFILECNT" -gt 0 ]; then
        echo "Below files will be deleted"
        find . -maxdepth 1 -name "$FILETYPE" -type f -mtime +"$MAXAGE" -print
                #find with -exec eliminates the need to treat files with spaces and other escape characters in the filenames.
                find . -maxdepth 1 -name "$FILETYPE" -type f -mtime +"$MAXAGE" -exec rm -f {} \;
        fi
fi
}

#Call the function with four arguments: path, pattern of the files to purge, minimum number of recent files
#to keep, and age (in days) of the files that should be deleted

#syntax: purgeit "/backup/server_sql" ("*.txt" OR "my*.php" OR "*" OR "*.*") 4 5
#example: purgeit "/backup/server_sql" "*.zip" 4 5
#You can call this function any number of times, passing different paths and other values
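
For instance, the tail end of the script might call the function for a couple of locations. The paths, patterns and retention values below are hypothetical placeholders, not part of our actual setup:

[code language="bash" gutter="false"]
# Hypothetical calls -- substitute your own backup paths, file patterns and retention values
purgeit "/backup/server_sql" "*.zip" 4 5
purgeit "/backup/server_files" "*.tar.gz" 7 10
[/code]

Once the calls are in place, the script can be scheduled through cron (for example, a nightly root cron job) so the cleanup runs unattended.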

Now let us see the logic in detail.

Consider you have a path “/backup/server_sql” where your Microsoft SQL Server uploads a full backup daily. As we are syncing the backup files from the Windows server using ROBOCOPY without mirroring, the daily full backup files keep accumulating on the Linux file server. So we came up with a business plan to:

  • Keep a minimum of 4 of the most recent full backups for the SQL server in the Linux path, provided they were created within the last N days. If no files from the last N days are found, the existing files are not deleted (which gives us an opportunity to investigate why no new files were uploaded to the Linux file server).
  • Delete files that are 5 days or older from the Linux path, after ensuring the minimum number of files is present in the repository (a dry run of this logic is sketched right after this list).
  • Combined with a function that sends alert emails, this small snippet can serve as both a storage maintenance and a monitoring tool.
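
Before trusting the purge with real backups, the two find expressions behind this logic can be dry-run by hand. The sketch below uses the example values from above (pattern *.zip, keep at least 4 recent files, purge anything older than 5 days):

[code language="bash" gutter="false"]
# The "safety" check: how many backups were created within the last 5 days?
find /backup/server_sql -maxdepth 1 -name "*.zip" -type f -mtime -5 | wc -l

# List -- without deleting -- the files purgeit would remove, i.e. anything older than 5 days
find /backup/server_sql -maxdepth 1 -name "*.zip" -type f -mtime +5 -print
[/code]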

Interested in including email alerts? Let us know and we will share the additional code with you exclusively.

Linux | Send mail using internal mail command

Hi guys

We have been on a Veeam + VMware infrastructure for a while, yet I am paranoid about maintaining copies of the backups on different media after going through a couple of nightmares. We take a weekly cold backup of our ERP production server, move the tar files to a standby Linux server, and then move those backups once again to an external HDD.

So basically I have a full VM backed up, the same VM holds a weekly cold backup, the standby Linux server holds a copy of the cold backup files, and to finish it off, the files are copied again to an external HDD. The funniest part is, we are moving the entire VMs to a TANDBERG Quick Station as well!

Though everything has worked fine to date, the last part of the deal needs to notify me about the successful completion of copying the tar files to the external media, i.e. an HDD formatted with NTFS so that I can use it in both Linux and Windows environments.

Be warned: the bash script below only works in an environment that has an internal SMTP server (or rather, I don't know how to relay the messages through an external SMTP relay and, to disappoint you further, I don't care about relaying through an external SMTP server). In addition, you must be on CentOS/RHEL 6 or above to use the internal mail command as demonstrated below; the mail command shipped with release 5 doesn't support many of the switches used in the example.

The example below also demonstrates basic error capturing in bash scripts.

[code language="bash" gutter="false"]
#!/bin/bash
# Copy the weekly cold backup tar files to the external HDD; stderr is discarded
# because the exit status of cp is checked immediately below.
/bin/cp -rf /u02/backup/PROD_DAILY_BACKUP*.* /media/Elements/ 2> /dev/null

# $? holds the exit status of the last command; 0 means the copy succeeded
if [ $? -eq 0 ]
then
echo "The files were successfully copied to external hard disk" | mailx -v -s "ERP Tar Files Moved to External HDD | Success" -S smtp=smtp://server.domain.com -S from="ERP Alerts <someone@example.com>" someoneelse@example.com,someone2@example.com
else
echo "Files were not copied to external HDD" | mailx -v -s "ERP Tar Files to External HDD | Failed" -S smtp=smtp://server.domain.com -S from="ERP Alerts <someone@example.com>" someoneelse@example.com,someone2@example.com
fi
[/code]
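
One caveat the script above does not handle: if the external HDD is not actually mounted, cp will quietly write the tar files into the local /media/Elements directory instead of the disk. A minimal pre-check, assuming the drive is expected at /media/Elements, could look like this (mountpoint is part of util-linux on CentOS/RHEL 6 and later):

[code language="bash" gutter="false"]
# Abort (and alert) when the external HDD is not mounted at /media/Elements
if ! mountpoint -q /media/Elements; then
echo "External HDD is not mounted, tar files were not copied" | mailx -v -s "ERP Tar Files to External HDD | Failed" -S smtp=smtp://server.domain.com -S from="ERP Alerts <someone@example.com>" someoneelse@example.com
exit 1
fi
[/code]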

Try it and let me know whether it worked for you :)

regards,

rajesh