Tuesday, September 9, 2025

in case of failure.

If a previous DBMS_SCHEDULER job from the Option 1 script (or a similar script) is still running and performing a complete backup (exporting both metadata and data) instead of the intended metadata-only backup, you’ll need to stop the running job(s), clean up any partial files in DATA_PUMP_DIR, and optionally remove the scheduler jobs to prevent further runs. Since you’re using the workaround script (schedule_export_multiple_schemas_no_s3.sql), which exports to DATA_PUMP_DIR while you upload to S3 manually because of the PLS-00904 error, I’ll provide scripts to:

  1. Stop running DBMS_SCHEDULER jobs (and associated Data Pump jobs).
  2. Clean up files in DATA_PUMP_DIR (dump and log files).
  3. Remove scheduler jobs to prevent future runs.
  4. Verify no residual processes (e.g., Data Pump jobs or S3 tasks).
  5. Run the updated metadata-only script to replace the complete backup job.

These scripts are tailored for Toad for Oracle, assuming you’re logged in as the master user (e.g., admin) or a privileged user with access to DBMS_SCHEDULER, DBMS_DATAPUMP, rdsadmin.rds_file_util, and UTL_FILE. I’ll also include the updated metadata-only script from your latest request to ensure you can restart with the correct configuration.

Step 1: Stop Running DBMS_SCHEDULER Jobs

Running DBMS_SCHEDULER jobs (e.g., EXPORT_HR_JOB_20250909131700) may be executing Data Pump exports in the background. Stopping them should also terminate the associated Data Pump jobs.

Script to Stop Running Scheduler Jobs:

sql
SET LINESIZE 180;
SET SERVEROUTPUT ON;

-- List running scheduler jobs
BEGIN
    DBMS_OUTPUT.PUT_LINE('--- Running Scheduler Jobs ---');
    FOR rec IN (SELECT job_name, state, last_start_date
                FROM dba_scheduler_jobs
                WHERE job_name LIKE 'EXPORT_%_JOB_%'
                AND state = 'RUNNING'
                ORDER BY last_start_date DESC) LOOP
        DBMS_OUTPUT.PUT_LINE('Job: ' || rec.job_name || ' | State: ' || rec.state || ' | Last Start: ' || rec.last_start_date);
    END LOOP;
END;
/

-- Stop running scheduler jobs
DECLARE
    CURSOR c_jobs IS
        SELECT job_name
        FROM dba_scheduler_jobs
        WHERE job_name LIKE 'EXPORT_%_JOB_%'
        AND state = 'RUNNING';
BEGIN
    FOR rec IN c_jobs LOOP
        BEGIN
            DBMS_SCHEDULER.STOP_JOB(job_name => rec.job_name, force => TRUE);
            DBMS_OUTPUT.PUT_LINE('Stopped job: ' || rec.job_name);
        EXCEPTION
            WHEN OTHERS THEN
                DBMS_OUTPUT.PUT_LINE('Error stopping job ' || rec.job_name || ': ' || SQLERRM);
        END;
    END LOOP;
END;
/
  • Usage:
    • Run in Toad’s SQL Editor (F5 or Ctrl+Enter).
    • The first block lists running jobs (e.g., EXPORT_HR_JOB_20250909131700).
    • The second block stops them using DBMS_SCHEDULER.STOP_JOB with force => TRUE to ensure immediate termination.
    • Check DBMS Output for confirmation.

Note:

  • Stopping a scheduler job should also stop the associated Data Pump job, as it’s invoked within the job’s PL/SQL block.
  • If the job doesn’t stop, the force => TRUE option terminates it, but partial dump files may remain in DATA_PUMP_DIR.

Step 2: Stop Running Data Pump Jobs (if Necessary)

If the scheduler job stop doesn’t fully terminate the Data Pump job (e.g., due to a hung process), you can explicitly stop it.

Script to Stop Running Data Pump Jobs:

sql
SET LINESIZE 180;
SET SERVEROUTPUT ON;

-- List running Data Pump jobs
BEGIN
    DBMS_OUTPUT.PUT_LINE('--- Running Data Pump Jobs ---');
    FOR rec IN (SELECT owner_name, job_name, state, operation, job_mode
                FROM dba_datapump_jobs
                WHERE state = 'EXECUTING'
                ORDER BY job_name) LOOP
        DBMS_OUTPUT.PUT_LINE('Job: ' || rec.job_name || ' | Owner: ' || rec.owner_name || ' | State: ' || rec.state);
    END LOOP;
END;
/

-- Stop running Data Pump jobs
DECLARE
    CURSOR c_jobs IS
        SELECT owner_name, job_name
        FROM dba_datapump_jobs
        WHERE state = 'EXECUTING';
BEGIN
    FOR rec IN c_jobs LOOP
        BEGIN
            DBMS_DATAPUMP.STOP_JOB(
                handle => DBMS_DATAPUMP.ATTACH(job_name => rec.job_name, job_owner => rec.owner_name),
                immediate => 1
            );
            DBMS_OUTPUT.PUT_LINE('Stopped Data Pump job: ' || rec.job_name);
        EXCEPTION
            WHEN OTHERS THEN
                DBMS_OUTPUT.PUT_LINE('Error stopping Data Pump job ' || rec.job_name || ': ' || SQLERRM);
        END;
    END LOOP;
END;
/
  • Usage:
    • Run in SQL Editor (F5).
    • Lists and stops any EXECUTING Data Pump jobs.
    • The immediate => 1 parameter ensures the job stops quickly, but partial files may remain.

Step 3: Clean Up Files in DATA_PUMP_DIR

Partial or completed dump and log files from the running job (e.g., hr_export.dmp, hr_export_u01.dmp, hr_export.log) should be removed from DATA_PUMP_DIR to free space and avoid conflicts.

Script to List and Clean Up Files in DATA_PUMP_DIR:

sql
SET LINESIZE 180;
SET SERVEROUTPUT ON;

-- List files in DATA_PUMP_DIR with sizes in GBs
BEGIN
    DBMS_OUTPUT.PUT_LINE('--- Files in DATA_PUMP_DIR ---');
    FOR file_rec IN (SELECT filename, type, ROUND(filesize / 1024 / 1024 / 1024, 4) AS size_gb, mtime
                     FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))
                     WHERE filename LIKE '%_export%'
                     ORDER BY mtime DESC) LOOP
        DBMS_OUTPUT.PUT_LINE('File: ' || RPAD(file_rec.filename, 30) || ' | Type: ' || RPAD(file_rec.type, 10) || 
                             ' | Size (GB): ' || TO_CHAR(file_rec.size_gb, 'FM9990.0000') || ' | Modified: ' || file_rec.mtime);
    END LOOP;
    -- Total size
    DECLARE
        v_total_size_gb NUMBER;
    BEGIN
        SELECT ROUND(SUM(filesize) / 1024 / 1024 / 1024, 4) INTO v_total_size_gb
        FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'));
        DBMS_OUTPUT.PUT_LINE('Total Size (GB): ' || TO_CHAR(v_total_size_gb, 'FM9990.0000'));
    END;
END;
/

-- Clean up files in DATA_PUMP_DIR
DECLARE
    CURSOR c_files IS
        SELECT filename
        FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))
        WHERE filename LIKE '%_export%';
BEGIN
    FOR rec IN c_files LOOP
        BEGIN
            UTL_FILE.FREMOVE('DATA_PUMP_DIR', rec.filename);
            DBMS_OUTPUT.PUT_LINE('Removed: ' || rec.filename);
        EXCEPTION
            WHEN OTHERS THEN
                DBMS_OUTPUT.PUT_LINE('Error removing ' || rec.filename || ': ' || SQLERRM);
        END;
    END LOOP;
END;
/
  • Usage:
    • Run the first block to list files (e.g., hr_export.dmp, hr_export_u01.dmp, hr_export.log) and their sizes in GBs.
    • Run the second block to delete files matching the export pattern.
    • If you want to keep specific files, modify the WHERE filename LIKE '%_export%' clause or remove files individually:
      sql
      EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR', 'hr_export.dmp');
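One subtlety with these patterns: in Oracle LIKE, the underscore is a single-character wildcard, so '%_export%' also matches filenames with any character before 'export'. If you want the pattern to match a literal underscore only (e.g., to target just the multi-file pieces like hr_export_u01.dmp), declare an escape character — a sketch, using the same listdir view as above:

```sql
-- Escape the underscore so it matches literally, not as a wildcard
SELECT filename
FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))
WHERE filename LIKE '%\_export\_u%' ESCAPE '\';
```

The same ESCAPE clause works in the cleanup cursor above if you need more precise targeting.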

Step 4: Remove Scheduler Jobs

To prevent the previous job (which performs a complete backup) from running again, drop the scheduler jobs. This is optional if you want to keep the jobs but disable them.

Script to List and Drop Scheduler Jobs:

sql
SET LINESIZE 180;
SET SERVEROUTPUT ON;

-- List scheduler jobs
BEGIN
    DBMS_OUTPUT.PUT_LINE('--- Scheduler Jobs ---');
    FOR rec IN (SELECT job_name, state, last_start_date, enabled
                FROM dba_scheduler_jobs
                WHERE job_name LIKE 'EXPORT_%_JOB_%'
                ORDER BY last_start_date DESC) LOOP
        DBMS_OUTPUT.PUT_LINE('Job: ' || RPAD(rec.job_name, 30) || ' | State: ' || RPAD(rec.state, 10) || 
                             ' | Enabled: ' || rec.enabled || ' | Last Start: ' || rec.last_start_date);
    END LOOP;
END;
/

-- Disable and drop scheduler jobs
DECLARE
    CURSOR c_jobs IS
        SELECT job_name
        FROM dba_scheduler_jobs
        WHERE job_name LIKE 'EXPORT_%_JOB_%';
BEGIN
    FOR rec IN c_jobs LOOP
        BEGIN
            -- Disable job
            DBMS_SCHEDULER.DISABLE(name => rec.job_name, force => TRUE);
            DBMS_OUTPUT.PUT_LINE('Disabled job: ' || rec.job_name);
        EXCEPTION
            WHEN OTHERS THEN
                DBMS_OUTPUT.PUT_LINE('Error disabling job ' || rec.job_name || ': ' || SQLERRM);
        END;
        BEGIN
            -- Drop job
            DBMS_SCHEDULER.DROP_JOB(job_name => rec.job_name, force => TRUE);
            DBMS_OUTPUT.PUT_LINE('Dropped job: ' || rec.job_name);
        EXCEPTION
            WHEN OTHERS THEN
                DBMS_OUTPUT.PUT_LINE('Error dropping job ' || rec.job_name || ': ' || SQLERRM);
        END;
    END LOOP;
END;
/
  • Usage:
    • Run the first block to list jobs (e.g., EXPORT_HR_JOB_20250909131700).
    • Run the second block to disable and drop them.
    • If you only want to disable jobs (to keep for later), skip the DROP_JOB call.

Step 5: Verify No Residual Processes

Ensure no lingering Data Pump or S3 tasks are running.

Script to Check Residual Processes:

sql
SET LINESIZE 180;
SET SERVEROUTPUT ON;

-- Check Data Pump jobs
BEGIN
    DBMS_OUTPUT.PUT_LINE('--- Data Pump Jobs ---');
    FOR rec IN (SELECT owner_name, job_name, state
                FROM dba_datapump_jobs
                ORDER BY job_name DESC) LOOP
        DBMS_OUTPUT.PUT_LINE('Job: ' || RPAD(rec.job_name, 30) || ' | Owner: ' || RPAD(rec.owner_name, 15) || ' | State: ' || rec.state);
    END LOOP;
END;
/

-- Check recent S3 task logs in BDUMP
BEGIN
    DBMS_OUTPUT.PUT_LINE('--- Recent S3 Task Logs in BDUMP ---');
    FOR rec IN (SELECT filename, mtime
                FROM TABLE(rdsadmin.rds_file_util.listdir('BDUMP'))
                WHERE filename LIKE 'dbtask-%.log'
                ORDER BY mtime DESC FETCH FIRST 10 ROWS ONLY) LOOP
        DBMS_OUTPUT.PUT_LINE('Task Log: ' || rec.filename || ' | Modified: ' || rec.mtime);
    END LOOP;
END;
/
  • Usage:
    • Run to confirm no EXECUTING Data Pump jobs or unexpected S3 tasks.
    • If S3 tasks are running (unlikely since you’re manually uploading), you can’t stop them directly but can wait for completion or contact AWS Support.

Step 6: Run the Metadata-Only Script

After stopping and cleaning up, run the updated metadata-only script from your latest request to export only schema metadata.

Script: schedule_export_multiple_schemas_no_s3_metadata.sql (the metadata-only version from your latest request; the body is not repeated here).
Execution:

  • Hardcode parameters or use substitution variables.
  • Run with F5 in Toad’s SQL Editor.
  • Monitor using the scripts below.
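If you don’t have that file handy, the core of a metadata-only export is a single data filter on the Data Pump job. A minimal single-schema sketch follows — the job, file, and schema names here are illustrative, not the exact ones the full script generates:

```sql
DECLARE
    v_handle NUMBER;
BEGIN
    -- Open a schema-mode export job
    v_handle := DBMS_DATAPUMP.OPEN(operation => 'EXPORT', job_mode => 'SCHEMA',
                                   job_name  => 'HR_META_EXPORT');
    DBMS_DATAPUMP.ADD_FILE(handle => v_handle, filename => 'hr_metadata.dmp',
                           directory => 'DATA_PUMP_DIR',
                           filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE);
    DBMS_DATAPUMP.ADD_FILE(handle => v_handle, filename => 'hr_metadata.log',
                           directory => 'DATA_PUMP_DIR',
                           filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE);
    DBMS_DATAPUMP.METADATA_FILTER(handle => v_handle, name => 'SCHEMA_EXPR',
                                  value  => 'IN (''HR'')');
    -- The line that makes it metadata-only: export zero rows
    DBMS_DATAPUMP.DATA_FILTER(handle => v_handle, name => 'INCLUDE_ROWS', value => 0);
    DBMS_DATAPUMP.START_JOB(v_handle);
    DBMS_DATAPUMP.DETACH(handle => v_handle);
END;
/
```

This is the PL/SQL equivalent of expdp CONTENT=METADATA_ONLY; the full script wraps the same calls in the multi-schema loop and the scheduler job.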

Step 7: Manual S3 Upload (Parallel)

After the metadata-only export completes, upload files to S3 in parallel (as discussed previously).

Script for Parallel S3 Upload:

sql
SET LINESIZE 180;
SET SERVEROUTPUT ON;

DECLARE
    v_schema_names VARCHAR2(4000) := 'HR,FINANCE';
    v_bucket_name VARCHAR2(100) := 'mys3bucket';
    v_use_multiple_files VARCHAR2(1) := 'Y';
    v_schema_list DBMS_UTILITY.UNCL_ARRAY;
    v_count NUMBER;
    v_schema_name VARCHAR2(30);
    v_prefix VARCHAR2(100);
    v_task_id VARCHAR2(100);
BEGIN
    DBMS_UTILITY.COMMA_TO_TABLE(v_schema_names, v_count, v_schema_list);
    FOR i IN 1..v_count LOOP
        v_schema_name := TRIM(v_schema_list(i));
        IF v_use_multiple_files = 'Y' THEN
            v_prefix := LOWER(v_schema_name) || '_export_u';
        ELSE
            v_prefix := LOWER(v_schema_name) || '_export';
        END IF;
        v_task_id := rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
            p_bucket_name => v_bucket_name,
            p_prefix => v_prefix,
            p_s3_prefix => '',
            p_directory_name => 'DATA_PUMP_DIR'
        );
        DBMS_OUTPUT.PUT_LINE('S3 Upload Task ID for ' || v_schema_name || ': ' || v_task_id);
    END LOOP;
END;
/
  • Usage:
    • Run after export jobs complete.
    • Submits uploads in parallel (tasks run concurrently in RDS).

Monitoring Scripts (Updated)

These include your requests for reading log file contents and fetching sizes in GBs.

1. S3 Upload Task ID Monitoring:

sql
SET LINESIZE 180;
SET SERVEROUTPUT ON;

-- List recent S3 task logs
BEGIN
    FOR rec IN (SELECT filename, mtime
                FROM TABLE(rdsadmin.rds_file_util.listdir('BDUMP'))
                WHERE filename LIKE 'dbtask-%.log'
                ORDER BY mtime DESC FETCH FIRST 10 ROWS ONLY) LOOP
        DBMS_OUTPUT.PUT_LINE('Task Log: ' || rec.filename || ' | Modified: ' || rec.mtime);
    END LOOP;
END;
/

-- Read specific task log
DECLARE
    v_task_id VARCHAR2(100) := '1663788182757-49'; -- Replace with task ID
BEGIN
    FOR log_line IN (SELECT text FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP', 'dbtask-' || v_task_id || '.log'))) LOOP
        DBMS_OUTPUT.PUT_LINE(log_line.text);
    END LOOP;
END;
/

2. Data Pump Job Monitoring:

sql
SET LINESIZE 180;

SELECT owner_name, job_name, state, operation, job_mode, degree
FROM dba_datapump_jobs
WHERE state = 'EXECUTING'
ORDER BY job_name;

SELECT owner_name, job_name, state, operation, job_mode, degree
FROM dba_datapump_jobs
ORDER BY job_name DESC;

3. File Size and Log Content Monitoring:

sql
SET LINESIZE 180;
SET SERVEROUTPUT ON;

-- List files with sizes in GBs
BEGIN
    FOR file_rec IN (SELECT filename, type, ROUND(filesize / 1024 / 1024 / 1024, 4) AS size_gb, mtime
                     FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))
                     WHERE filename LIKE '%_export%'
                     ORDER BY mtime DESC) LOOP
        DBMS_OUTPUT.PUT_LINE('File: ' || RPAD(file_rec.filename, 30) || ' | Type: ' || RPAD(file_rec.type, 10) || 
                             ' | Size (GB): ' || TO_CHAR(file_rec.size_gb, 'FM9990.0000') || ' | Modified: ' || file_rec.mtime);
    END LOOP;
    DECLARE
        v_total_size_gb NUMBER;
    BEGIN
        SELECT ROUND(SUM(filesize) / 1024 / 1024 / 1024, 4) INTO v_total_size_gb
        FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'));
        DBMS_OUTPUT.PUT_LINE('Total Size (GB): ' || TO_CHAR(v_total_size_gb, 'FM9990.0000'));
    END;
END;
/

-- Read log file contents
BEGIN
    FOR file_rec IN (SELECT filename
                     FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))
                     WHERE filename LIKE '%_export%.log'
                     ORDER BY mtime DESC) LOOP
        DBMS_OUTPUT.PUT_LINE('Reading: ' || file_rec.filename);
        FOR log_line IN (SELECT text FROM TABLE(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR', file_rec.filename))) LOOP
            DBMS_OUTPUT.PUT_LINE(log_line.text);
        END LOOP;
        DBMS_OUTPUT.PUT_LINE('--- End of ' || file_rec.filename || ' ---');
    END LOOP;
EXCEPTION
    WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Error: ' || SQLERRM);
END;
/

4. Scheduler Job Monitoring:

sql
SET LINESIZE 180;

SELECT job_name, state, enabled, run_count, last_start_date, last_run_duration
FROM dba_scheduler_jobs
WHERE job_name LIKE 'EXPORT_%_JOB_%'
ORDER BY last_start_date DESC;

SELECT job_name, status, error#, SUBSTR(additional_info, 1, 100) AS info
FROM dba_scheduler_job_run_details
WHERE job_name LIKE 'EXPORT_%_JOB_%'
ORDER BY log_date DESC;

Notes

  • Complete vs. Metadata-Only: The old job was performing a complete backup (metadata and data). The new script ensures metadata-only export.
  • S3 Upload: Parallel uploads are handled as before.
  • Troubleshooting: If the new script fails to compile, check user_errors:
    sql
    SELECT line, position, text FROM user_errors WHERE name = 'EXPORT_SCHEMAS_TO_DIR';
  • Residual Files: Ensure cleanup is complete before running the new script to avoid conflicts.

Let me know if you need help with the S3 uploads, further monitoring, or a complete (data and metadata) backup script down the line!
