In the realm of Oracle database management, efficiency and automation are paramount. Database administrators often find themselves grappling with the intricacies of exporting and importing data, a crucial aspect of database maintenance, especially in scenarios involving data migration, backup, or replication.
Understanding the Script
At its core, this script automates a schema-level Data Pump transfer between two databases. It takes the database SID together with the source and target service and schema names as command-line arguments, resolves ORACLE_HOME for that SID from the oratab file, sets the required environment variables, and then runs the expdp and impdp operations with the supplied parameters. A sample invocation is shown below.
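To make the argument order concrete, here is a hypothetical invocation; the SID, service names, and schema names are placeholders I have chosen for illustration, not values from an actual environment.

# Hypothetical example: ORCLCDB is the SID, src_pdb_svc/tgt_pdb_svc are service names,
# APP_OWNER/APP_OWNER_TEST are the source and target schemas
./get_expdp_impdp.sh ORCLCDB src_pdb_svc APP_OWNER tgt_pdb_svc APP_OWNER_TEST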
Key Features
- Dynamic Database Selection: The script resolves the requested SID against the entries in the oratab file and derives the matching ORACLE_HOME, so the same script can be run against any database registered on the host.
- Environment Variable Setup: It intelligently sets essential environment variables such as ORACLE_SID, ORACLE_HOME, and PATH, ensuring seamless execution of expdp and impdp commands.
- Data Pump Directory Management: Prior to initiating export operations, the script checks for the existence of Data Pump directories and creates them if necessary, ensuring a structured and organized approach to data storage (a minimal SQL sketch of this directory-object pattern follows this list).
- Tablespace Remapping: The script fetches tablespaces from both the source and target databases, enabling automatic remapping during import operations, thereby facilitating a smooth transition of data between environments.
- Error Handling and Notification: Robust error handling mechanisms are in place to detect and notify administrators of any failures or errors encountered during expdp and impdp operations, ensuring timely intervention and resolution.
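As referenced in the directory-management point above, the script depends on Oracle directory objects that map a database-level name to an OS path. The following is a minimal sketch of that pattern, run as a DBA; the service-named subdirectory and the grantee are illustrative assumptions (the script itself uses EBKP_DUMP on the source, datapump on the target, and connects as dbadeeds).

-- Minimal sketch, assuming /mnt/db-dump/expdps/MYSVC exists on the database host
-- and dbadeeds is the account running expdp/impdp (both are placeholders here)
create or replace directory EBKP_DUMP as '/mnt/db-dump/expdps/MYSVC';
grant read, write on directory EBKP_DUMP to dbadeeds;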
#!/usr/bin/ksh
# April 2022 Sandeep Reddy Narani
# Data Pump export/import helper
# Looks up the database passed as the first argument in oratab and sets the environment variables from the matching entry
# Usage: ./get_expdp_impdp.sh <oracle_sid> <source_service> <source_schema> <target_service> <target_schema> [Y for full=y export]
# SETTING ORATAB:
if [ -f /etc/oratab ]; then
ORATAB=/etc/oratab
export ORATAB
## If OS is Solaris:
elif [ -f /var/opt/oracle/oratab ]; then
ORATAB=/var/opt/oracle/oratab
export ORATAB
fi
export ORACLE_SID=$1
export ORACLE_HOME=$(grep "^${ORACLE_SID}:" ${ORATAB} | cut -d: -f2 -s)
export PATH=/usr/local/bin:/bin:/usr/bin:/usr/local/sbin:/usr/sbin:/home/oracle/.local/bin:/home/oracle/bin:$ORACLE_HOME/bin
export LD_LIBRARY_PATH=$ORACLE_HOME/lib
export LD=/home/oracle/scripts
#export ML=""          # e-mail address for failure notifications; set this before relying on the mailx alerts below
export CDATE=$(date '+%Y%m%d_%H%M')
export DIR=DATAPUMP
export SOURCE_SERVICE_NAME=$2
export SOURCE_SCHEMA_NAME=$3
export TARGET_SERVICE_NAME=$4
export TARGET_SCHEMA_NAME=$5
export EXL=statistics              # object types excluded from both expdp and impdp
export TY=${6:-N}                  # optional 6th argument: Y = full database export, anything else = schema-level
# Create Data Pump directory if it does not exist
DP_DIRECTORY=/mnt/db-dump/expdps/${SOURCE_SERVICE_NAME}
# Check if the directory exists, create it if necessary
if [ ! -d "${DP_DIRECTORY}" ]; then
echo "Creating Data Pump directory: ${DP_DIRECTORY}"
mkdir -p "${DP_DIRECTORY}"
fi
if ! $ORACLE_HOME/bin/sqlplus -S dbadeeds/password@${SOURCE_SERVICE_NAME} <<EOF | grep -q 'EBKP_DUMP'
set heading off
set feedback off
select DIRECTORY_NAME from dba_directories where DIRECTORY_NAME = 'EBKP_DUMP';
exit;
EOF
then
echo "Creating Data Pump directory: ${DP_DIRECTORY}"
$ORACLE_HOME/bin/sqlplus -S dbadeeds/password@${SOURCE_SERVICE_NAME} <<EOF
create or replace directory EBKP_DUMP as '${DP_DIRECTORY}';
-- GRANT READ, WRITE ON DIRECTORY EBKP_DUMP TO system;
exit;
EOF
echo "Re-Mapping dump directory ${DP_DIRECTORY} to ${TARGET_SERVICE_NAME}."
$ORACLE_HOME/bin/sqlplus -S dbadeeds/password@${TARGET_SERVICE_NAME} <<EOF
create or replace directory datapump as '${DP_DIRECTORY}';
-- GRANT READ, WRITE ON DIRECTORY datapump TO dbadeeds;
exit;
EOF
fi
echo "Get tablespaces names for remapping from source and target databases..."
# Function to fetch tablespaces from source and target databases
fetch_tablespaces() {
$ORACLE_HOME/bin/sqlplus -S dbadeeds/password@${1} <<EOF
set heading off
set feedback off
-- assumes a single matching %ADMIN account; its tablespaces drive the remapping below
select default_tablespace, temporary_tablespace from dba_users where username like '%ADMIN';
exit;
EOF
}
# Fetch source and target tablespaces
SOURCE_TABLESPACES=$(fetch_tablespaces ${SOURCE_SERVICE_NAME})
TARGET_TABLESPACES=$(fetch_tablespaces ${TARGET_SERVICE_NAME})
# Parse source and target tablespaces
SOURCE_DEFAULT_TABLESPACE=$(echo $SOURCE_TABLESPACES | awk '{print $1}')
SOURCE_TEMP_TABLESPACE=$(echo $SOURCE_TABLESPACES | awk '{print $2}')
TARGET_DEFAULT_TABLESPACE=$(echo $TARGET_TABLESPACES | awk '{print $1}')
TARGET_TEMP_TABLESPACE=$(echo $TARGET_TABLESPACES | awk '{print $2}')
ELOG_FILE=expdp_daily_${SOURCE_SERVICE_NAME}_${CDATE}.log
ILOG_FILE=impdp_daily_${TARGET_SERVICE_NAME}_${CDATE}.log
echo "Starting Exporting ${SOURCE_SCHEMA_NAME} Schema in ${SOURCE_SERVICE_NAME} Database"
echo
echo "Displaying input parameter and starting database schema backup for both source and target..."
echo "-------------------------------------------------------------"
echo "Source Service Name: ${SOURCE_SERVICE_NAME}"
echo "Source Schema Name: ${SOURCE_SCHEMA_NAME}"
echo "Target Service Name: ${TARGET_SERVICE_NAME}"
echo "Target Schema Name: ${TARGET_SCHEMA_NAME}"
echo "-------------------------------------------------------------"
echo
echo "Displaying Tablespaces for both source and target..."
echo "-------------------------------------------------------------"
echo "Source Default Tablespace: ${SOURCE_DEFAULT_TABLESPACE}"
echo "Source Temp Tablespace: ${SOURCE_TEMP_TABLESPACE}"
echo "Target Default Tablespace: ${TARGET_DEFAULT_TABLESPACE}"
echo "Target Temp Tablespace: ${TARGET_TEMP_TABLESPACE}"
echo "-------------------------------------------------------------"
echo
if [ "${TY}" == "Y" ]; then
echo "Taking EXPDP Database full=y backup"
$ORACLE_HOME/bin/expdp dbadeeds/password@${SOURCE_SERVICE_NAME} directory=EBKP_DUMP dumpfile=expdp_daily_${SOURCE_SERVICE_NAME}_${CDATE}_%U.dmp logfile=${ELOG_FILE} full=y PARALLEL=16 exclude=${EXL} COMPRESSION=ALL metrics=y
else
echo "Taking EXPDP Database Schema ${SCH} backup"
$ORACLE_HOME/bin/expdp dbadeeds/password@${SOURCE_SERVICE_NAME} directory=EBKP_DUMP dumpfile=expdp_daily_${SOURCE_SERVICE_NAME}_${CDATE}_%U.dmp logfile=${ELOG_FILE} schemas=${SOURCE_SCHEMA_NAME} PARALLEL=2 exclude=${EXL} COMPRESSION=ALL metrics=y
fi
# Check if the expdp job failed or encountered an error
if grep -q 'ORA-' ${DP_DIRECTORY}/${ELOG_FILE}; then
echo "EXPDP job failed or encountered an error. Sending email notification."
mailx -s "EXPDP Job Failed or Errored" ${ML} < ${DP_DIRECTORY}/${ELOG_FILE}
else
echo "EXPDP job completed successfully."
# Check if the export dump file exists
EXPDP_DUMP_FILE="expdp_daily_${SOURCE_SERVICE_NAME}_${CDATE}_*.dmp"
COUNT=$(ls ${DP_DIRECTORY}/expdp_daily_${SOURCE_SERVICE_NAME}_${CDATE}_*.dmp 2>/dev/null | wc -l)
if [ $COUNT -gt 0 ]; then
echo "Export dump files found."
# Proceed with the import process
echo "Starting the import process in a different database..."
$ORACLE_HOME/bin/impdp dbadeeds/password@${TARGET_SERVICE_NAME} directory=datapump logfile=${ILOG_FILE} dumpfile=expdp_daily_${SOURCE_SERVICE_NAME}_${CDATE}_%U.dmp remap_schema=${SOURCE_SCHEMA_NAME}:${TARGET_SCHEMA_NAME} remap_tablespace=${SOURCE_DEFAULT_TABLESPACE}:${TARGET_DEFAULT_TABLESPACE},${SOURCE_TEMP_TABLESPACE}:${TARGET_TEMP_TABLESPACE} table_exists_action=replace PARALLEL=2 exclude=${EXL} transform=disable_archive_logging:y cluster=n
# Check if the impdp job failed or encountered an error
if grep -q 'ORA-' ${DP_DIRECTORY}/${ILOG_FILE}; then
echo "IMPDP job failed or encountered an error. Sending email notification."
mailx -s "IMPDP Job Failed or Errored" ${ML} < ${DP_DIRECTORY}/${ILOG_FILE}
else
echo "IMPDP job completed successfully."
fi
else
echo "Export dump file ${EXPDP_DUMP_FILE} not found. Aborting the import process."
fi
fi
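Once the impdp step reports success, a quick object-count comparison between the source and target schemas is a simple sanity check. The query below is an illustrative follow-up rather than part of the script; TGT_SCHEMA is a placeholder for whichever schema you are checking.

-- Run against the source and target databases and compare the per-type counts
select object_type, count(*)
from   dba_objects
where  owner = upper('TGT_SCHEMA')   -- placeholder: substitute the schema name
group  by object_type
order  by object_type;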
By leveraging this script, organizations can streamline their database maintenance workflows, mitigate operational complexities, and focus on leveraging their data assets to drive business innovation and growth.