Automating routine tasks is crucial for efficient and reliable database management. This blog post introduces a Bash script, get_s3sync.sh, that uploads Oracle database backups to AWS S3 and then cleans up the local backup directory to keep storage lean. The script is meant to run from cron; the entry below runs it every day against the RMAN backup directory:
10 14 * * * /root/get_s3sync.sh /dbadeeds/rman/backups/
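The five crontab fields are minute, hour, day of month, month, and day of week, so 10 14 * * * fires once a day at 14:10. Before scheduling it, make the script executable and give it one manual test run (the paths here are taken from the crontab entry above), then confirm the objects landed in the bucket:

chmod +x /root/get_s3sync.sh
/root/get_s3sync.sh /dbadeeds/rman/backups/
/usr/local/bin/aws s3 ls s3://bucketname-db-backups/oracle/

Here is the full script: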
#!/bin/bash
# August 2023 Sandeep Reddy Narani
# Purpose: sync the files in directory_path to AWS S3 when more than 30
# files are pending upload, then delete local files older than 1 day
# Validate the argument before creating the lock file, so a usage error
# does not leave a stale lock behind
if [ $# -ne 1 ]; then
    echo "Usage: $0 <directory_path>"
    exit 1
fi
directory_path="$1"

# Simple lock file so overlapping cron runs exit immediately
if [ -f /tmp/get_s3sync_expdps.lock ]; then
    echo "Script is already running. Exiting..."
    exit 1
fi
touch /tmp/get_s3sync_expdps.lock
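# Count pending uploads: a --dryrun sync prints one line per file it
# would transfer, so wc -l gives the number of files left to upload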
file_count=$(/usr/local/bin/aws s3 sync --dryrun "$directory_path" s3://bucketname-db-backups/oracle/ | wc -l)
if [ "$file_count" -gt 30 ]; then
echo "Starting sync process for $directory_path..."
# Start the sync process
/usr/local/bin/aws s3 sync "$directory_path" s3://bucketname-db-backups/oracle/
echo "Sync process completed for $directory_path."
# Delete files older than 1 day
find "$directory_path" -type f -mtime +1 -exec rm {} \;
# Delete the lock file
echo "Deleting the lockfile"
rm -f /tmp/get_s3sync_expdps.lock
# Send email notification
echo "AWS S3 Sync and cleanup completed for $directory_path and Files older than 1 day deleted from FRK EFS" | mail -s "AWS FRK S3 Backup Process Report" youremailaddress@.com
else
echo "File count is not greater than 30 for $directory_path. No action needed."
# Delete the lock file
echo "Deleting the lockfile"
rm -f /tmp/get_s3sync_expdps.lock
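One weakness worth calling out: if the script dies between creating the lock file and removing it (a failed sync, a killed process), the stale lock blocks every later run until someone deletes it by hand. Below is a minimal sketch, not part of the original script, of a more defensive opening that keeps the same lock path but adds a trap so the lock is removed whenever the script exits, including on error paths:

#!/bin/bash
lockfile=/tmp/get_s3sync_expdps.lock
if [ -f "$lockfile" ]; then
    echo "Script is already running. Exiting..."
    exit 1
fi
touch "$lockfile"
# Remove the lock whenever the script exits, including on errors
trap 'rm -f "$lockfile"' EXIT

With the trap in place, the explicit rm -f calls at the end of each branch become unnecessary. flock(1) is a common alternative that also closes the small race between the -f test and the touch.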