Step-by-Step Solution for Oracle RDS
Understand RDS Constraints:
- No OS-level access means no direct impdp command-line execution.
- You must use DBMS_DATAPUMP within a SQL*Plus or SQL Developer session (or similar tool) connected to your RDS instance.
- The dump file must be in an Oracle directory accessible to RDS (e.g., DATA_PUMP_DIR or a custom directory created via CREATE DIRECTORY).
- RDS requires specific privileges: ensure your user has DATAPUMP_IMP_FULL_DATABASE or explicit grants for DBMS_DATAPUMP, CREATE DIRECTORY, and read/write on the directory.
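If the importing user is missing these, a privileged (master) user can grant them; a minimal sketch, assuming a user named app_user (the user name and directory are illustrative):

```sql
-- Run as the RDS master user; app_user is a placeholder
GRANT DATAPUMP_IMP_FULL_DATABASE TO app_user;
GRANT EXECUTE ON DBMS_DATAPUMP TO app_user;
GRANT READ, WRITE ON DIRECTORY DATA_PUMP_DIR TO app_user;
```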
Generate SQLFILE Using DBMS_DATAPUMP:
- Instead of impdp ... SQLFILE, run a PL/SQL block to create a SQL file containing the DDL from the dump. This file will be written to the Oracle directory (e.g., DATA_PUMP_DIR).
- Below is a PL/SQL script to generate the SQLFILE equivalent:
```sql
SET SERVEROUTPUT ON SIZE UNLIMITED;

DECLARE
  h1          NUMBER;
  v_directory VARCHAR2(128) := 'DATA_PUMP_DIR';     -- Replace with your directory
  v_dumpfile  VARCHAR2(256) := 'YOUR_DUMPFILE.dmp'; -- Your dump file name
  v_sqlfile   VARCHAR2(256) := 'ddl_preview.sql';   -- Output SQL file
  v_logfile   VARCHAR2(256) := 'ddl_preview.log';   -- Log file for errors
  job_state   VARCHAR2(30);
  sts         ku$_Status;
  le          ku$_LogEntry;
  ind         NUMBER;
BEGIN
  -- Open a Data Pump job in SQL_FILE mode (the DBMS_DATAPUMP equivalent of impdp SQLFILE=)
  h1 := DBMS_DATAPUMP.OPEN(
    operation => 'SQL_FILE',
    job_mode  => 'SCHEMA',
    job_name  => NULL
  );
  -- Add the dump file
  DBMS_DATAPUMP.ADD_FILE(
    handle    => h1,
    filename  => v_dumpfile,
    directory => v_directory,
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE
  );
  -- Add the SQL file (this is where the DDL will be written)
  DBMS_DATAPUMP.ADD_FILE(
    handle    => h1,
    filename  => v_sqlfile,
    directory => v_directory,
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_SQL_FILE
  );
  -- Add a log file for diagnostics
  DBMS_DATAPUMP.ADD_FILE(
    handle    => h1,
    filename  => v_logfile,
    directory => v_directory,
    filetype  => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
  );
  -- Restrict to a specific schema
  DBMS_DATAPUMP.METADATA_FILTER(
    handle => h1,
    name   => 'SCHEMA_LIST',
    value  => '''YOUR_SCHEMA_NAME'''
  );
  -- Start the job (this writes DDL to the SQL file; no actual import happens)
  DBMS_DATAPUMP.START_JOB(h1);
  -- Monitor job completion
  job_state := 'UNDEFINED';
  WHILE job_state NOT IN ('COMPLETED', 'STOPPED') LOOP
    DBMS_DATAPUMP.GET_STATUS(
      h1,
      DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR + DBMS_DATAPUMP.KU$_STATUS_JOB_STATUS,
      -1,
      job_state,
      sts
    );
    -- Log any errors
    IF BITAND(sts.mask, DBMS_DATAPUMP.KU$_STATUS_JOB_ERROR) != 0 THEN
      le := sts.error;
      IF le IS NOT NULL THEN
        ind := le.FIRST;
        WHILE ind IS NOT NULL LOOP
          DBMS_OUTPUT.PUT_LINE('Error: ' || le(ind).LogText);
          ind := le.NEXT(ind);
        END LOOP;
      END IF;
    END IF;
    DBMS_LOCK.SLEEP(1); -- Avoid a tight loop (use DBMS_SESSION.SLEEP on 18c+ if DBMS_LOCK is not granted)
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('SQLFILE generation completed. State: ' || job_state);
  DBMS_DATAPUMP.DETACH(h1);
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Error in SQLFILE job: ' || SQLERRM);
    IF h1 IS NOT NULL THEN
      DBMS_DATAPUMP.DETACH(h1);
    END IF;
    RAISE;
END;
/
```
- Replace YOUR_SCHEMA_NAME, YOUR_DUMPFILE.dmp, and DATA_PUMP_DIR (or your custom directory) with your values.
- Run this in SQL*Plus or SQL Developer connected to your RDS instance as the schema user or a privileged user.
- This creates ddl_preview.sql in the specified directory, containing all CREATE TABLE/INDEX statements with their tablespace clauses (including REVANTH14).
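Before pulling the whole file off the instance, you can peek at just the tablespace lines using the documented rdsadmin.rds_file_util.read_text_file function:

```sql
-- Quick preview of the tablespace clauses in the generated SQL file
SELECT text
FROM TABLE(rdsadmin.rds_file_util.read_text_file('DATA_PUMP_DIR', 'ddl_preview.sql'))
WHERE UPPER(text) LIKE '%TABLESPACE%';
```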
Retrieve the SQL File from RDS:
- Since RDS doesn't provide direct filesystem access, use the S3 integration (rdsadmin_s3_tasks) or UTL_FILE to retrieve ddl_preview.sql:
- Option 1: AWS S3 (if S3 integration is enabled):
- List the directory to confirm the file was created:

```sql
SELECT * FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR'));
```

- Upload the file to S3 (the upload API lives in rdsadmin.rdsadmin_s3_tasks, not RDS_FILE_UTIL):

```sql
SELECT rdsadmin.rdsadmin_s3_tasks.upload_to_s3(
         p_bucket_name    => 'YOUR_S3_BUCKET',
         p_prefix         => 'ddl_preview.sql',
         p_s3_prefix      => 'exports/',
         p_directory_name => 'DATA_PUMP_DIR'
       ) AS task_id
FROM dual;
```

- Download from S3 using the AWS CLI or console:

```
aws s3 cp s3://YOUR_S3_BUCKET/exports/ddl_preview.sql .
```

- Option 2: UTL_FILE (if no S3):
- Read the file contents directly in PL/SQL (see the parsing script below).
- Requires UTL_FILE read permission on the directory.
Parse the SQL File for Tablespaces:
If you can download ddl_preview.sql, parse it locally using grep to extract the unique tablespace names:

```
grep -i "TABLESPACE" ddl_preview.sql | sed -n "s/.*TABLESPACE[[:space:]]*\(['\"]\)\([^'\"]*\)\1.*/\2/p" | sort -u > tablespaces.txt
```

Or, in PowerShell:

```
Select-String -Path "ddl_preview.sql" -Pattern "TABLESPACE\s+['`"]([^'`"]+)['`"]" | ForEach-Object { $_.Matches.Groups[1].Value } | Sort-Object -Unique | Out-File -Encoding utf8 tablespaces.txt
```

If you can't download the file easily, parse it within RDS using UTL_FILE (requires read privileges on the directory):
```sql
SET SERVEROUTPUT ON SIZE UNLIMITED;

DECLARE
  v_file       UTL_FILE.FILE_TYPE;
  v_line       VARCHAR2(32767);
  v_tablespace VARCHAR2(128);
  -- Index by the tablespace name itself so duplicates are removed automatically
  TYPE t_ts_tab IS TABLE OF VARCHAR2(128) INDEX BY VARCHAR2(128);
  v_ts_tab     t_ts_tab;
  v_idx        VARCHAR2(128);
BEGIN
  v_file := UTL_FILE.FOPEN('DATA_PUMP_DIR', 'ddl_preview.sql', 'R');
  LOOP
    BEGIN
      UTL_FILE.GET_LINE(v_file, v_line);
      IF REGEXP_LIKE(UPPER(v_line), 'TABLESPACE\s+[''"]?([^''"[:space:]]+)[''"]?') THEN
        v_tablespace := REGEXP_SUBSTR(UPPER(v_line), 'TABLESPACE\s+[''"]?([^''"[:space:]]+)[''"]?', 1, 1, NULL, 1);
        IF v_tablespace IS NOT NULL AND NOT v_ts_tab.EXISTS(v_tablespace) THEN
          v_ts_tab(v_tablespace) := v_tablespace; -- Dedup via the associative-array key
          DBMS_OUTPUT.PUT_LINE('Found tablespace: ' || v_tablespace);
        END IF;
      END IF;
    EXCEPTION
      WHEN NO_DATA_FOUND THEN EXIT; -- End of file
    END;
  END LOOP;
  UTL_FILE.FCLOSE(v_file);
  -- Output remap statements
  v_idx := v_ts_tab.FIRST;
  WHILE v_idx IS NOT NULL LOOP
    DBMS_OUTPUT.PUT_LINE(
      'DBMS_DATAPUMP.METADATA_REMAP(handle => h1, name => ''REMAP_TABLESPACE'', old_value => '''
      || v_ts_tab(v_idx) || ''', value => ''NEW_TABLESPACE_01'');'
    );
    v_idx := v_ts_tab.NEXT(v_idx);
  END LOOP;
EXCEPTION
  WHEN OTHERS THEN
    IF UTL_FILE.IS_OPEN(v_file) THEN
      UTL_FILE.FCLOSE(v_file);
    END IF;
    DBMS_OUTPUT.PUT_LINE('Error parsing SQL file: ' || SQLERRM);
    RAISE;
END;
/
```
- This prints all unique tablespaces (e.g., REVANTH14) and generates the exact DBMS_DATAPUMP.METADATA_REMAP statements (note: the remap name is REMAP_TABLESPACE, singular).
- Copy the outputted remap statements for use in your import script.
Update Your Import Script:
Take the generated remap statements and insert them into your original script (or the enhanced one I provided earlier) between SET_PARAMETER and START_JOB.
Example integration into your script:
```sql
-- ... existing code ...
DBMS_DATAPUMP.SET_PARAMETER(h1, 'TABLE_EXISTS_ACTION', 'REPLACE');
-- Pasted remap statements from SQLFILE parsing
DBMS_DATAPUMP.METADATA_REMAP(handle => h1, name => 'REMAP_TABLESPACE', old_value => 'REVANTH14', value => 'NEW_TABLESPACE_01');
DBMS_DATAPUMP.METADATA_REMAP(handle => h1, name => 'REMAP_TABLESPACE', old_value => 'OTHER_TS1', value => 'NEW_TABLESPACE_01');
-- ... other remaps ...
DBMS_DATAPUMP.START_JOB(h1);
-- ... rest of script ...
```
Run the Import:
- Execute the updated script in SQL*Plus or SQL Developer.
- Since all tablespaces (including REVANTH14) are now remapped, the ORA-00959: tablespace 'REVANTH14' does not exist error should be resolved.
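To confirm the remap took effect, check where the imported segments actually landed (the schema name is a placeholder):

```sql
-- All segments should now report the target tablespace(s)
SELECT tablespace_name, COUNT(*) AS segments
FROM dba_segments
WHERE owner = 'YOUR_SCHEMA_NAME'
GROUP BY tablespace_name
ORDER BY tablespace_name;
```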
2. List Specific Files (e.g., Matching a Pattern)
Filter for files (exclude directories) matching a pattern like %.dmp (all dump files) or old_export% (prefix match).
```sql
SELECT filename, filesize, TO_CHAR(mtime, 'YYYY-MM-DD HH24:MI:SS') AS modified
FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR'))
WHERE type = 'file' AND UPPER(filename) LIKE '%.DMP'
ORDER BY mtime DESC;
```

A variant with human-readable sizes, filtered to metadata dumps (SIZE is a reserved word, so the alias is file_size):

```sql
SELECT filename,
       CASE
         WHEN filesize < POWER(1024, 3) THEN ROUND(filesize / POWER(1024, 2), 2) || ' MB'
         ELSE ROUND(filesize / POWER(1024, 3), 2) || ' GB'
       END AS file_size,
       TO_CHAR(mtime, 'YYYY-MM-DD HH24:MI:SS') AS modified
FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR'))
WHERE type = 'file'
  AND LOWER(filename) LIKE '%meta%'
ORDER BY mtime DESC;
```
The following block deletes those matching files (here, anything containing "meta") and reports each deletion:

```sql
SET SERVEROUTPUT ON SIZE UNLIMITED;

BEGIN
  FOR rec IN (
    SELECT filename, filesize
    FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR'))
    WHERE type = 'file'
      AND LOWER(filename) LIKE '%meta%'
  ) LOOP
    BEGIN
      UTL_FILE.FREMOVE('DATA_PUMP_DIR', rec.filename);
      DBMS_OUTPUT.PUT_LINE('Deleted: ' || rec.filename || ' (' ||
        CASE
          WHEN rec.filesize < POWER(1024, 3) THEN ROUND(rec.filesize / POWER(1024, 2), 2) || ' MB'
          ELSE ROUND(rec.filesize / POWER(1024, 3), 2) || ' GB'
        END || ')');
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Error deleting ' || rec.filename || ': ' || SQLERRM);
    END;
  END LOOP;
  DBMS_OUTPUT.PUT_LINE('Deletion process complete.');
END;
/
```
3. Delete a Specific File
Use this EXEC statement to remove an exact file (replace 'export_2025.dmp' with a filename from the list). UTL_FILE.FREMOVE requires write privilege on the directory object.

```sql
EXEC UTL_FILE.FREMOVE('DATA_PUMP_DIR', 'export_2025.dmp');
```

4. Delete Multiple Files Matching a Pattern
Oracle doesn't support DELETE against table functions, so use an anonymous PL/SQL block (run as one statement). It loops over matching files (e.g., all .dmp files) and deletes them; adjust the WHERE clause for your pattern.
```sql
SET SERVEROUTPUT ON SIZE UNLIMITED;

BEGIN
  FOR rec IN (
    SELECT filename
    FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR('DATA_PUMP_DIR'))
    WHERE type = 'file' AND UPPER(filename) LIKE '%.DMP' -- Change pattern here, e.g., 'OLD_EXPORT%'
  ) LOOP
    BEGIN
      UTL_FILE.FREMOVE('DATA_PUMP_DIR', rec.filename);
      DBMS_OUTPUT.PUT_LINE('Deleted: ' || rec.filename);
    EXCEPTION
      WHEN OTHERS THEN
        DBMS_OUTPUT.PUT_LINE('Error deleting ' || rec.filename || ': ' || SQLERRM);
    END;
  END LOOP;
END;
/
```

An interactive variant that lists everything first, then either dry-runs or deletes based on the supplied pattern:

```sql
SET SERVEROUTPUT ON SIZE UNLIMITED
SET VERIFY OFF
ACCEPT dir_name CHAR PROMPT 'Enter directory name (default: DATA_PUMP_DIR): ' DEFAULT 'DATA_PUMP_DIR'
ACCEPT file_pattern CHAR PROMPT 'Enter file pattern for deletion (e.g., "old_export%.dmp"; press Enter to skip deletion): '

DECLARE
  -- Cursor for listing files using RDSADMIN.RDS_FILE_UTIL.LISTDIR
  CURSOR file_cursor IS
    SELECT filename, type, filesize, mtime
    FROM TABLE(RDSADMIN.RDS_FILE_UTIL.LISTDIR('&dir_name'))
    ORDER BY mtime DESC;
  lv_pattern   VARCHAR2(1024) := '&file_pattern'; -- User input for deletion pattern
  lv_del_count NUMBER := 0;
  lv_confirm   CHAR(1);
BEGIN
  DBMS_OUTPUT.PUT_LINE('Directory: ' || UPPER('&dir_name'));
  DBMS_OUTPUT.PUT_LINE('=== Listing all files ===');
  DBMS_OUTPUT.PUT_LINE(RPAD('File Name', 40) || ' | Type | Size (bytes) | Modified');
  DBMS_OUTPUT.PUT_LINE(RPAD('=', 40, '-') || ' | ---- | ------------ | --------');
  FOR rec IN file_cursor LOOP
    DBMS_OUTPUT.PUT_LINE(RPAD(NVL(rec.filename, ''), 40) || ' | ' || rec.type || ' | '
      || TO_CHAR(rec.filesize) || ' | ' || TO_CHAR(rec.mtime, 'YYYY-MM-DD HH24:MI:SS'));
  END LOOP;
  -- If no pattern provided, skip deletion
  IF lv_pattern IS NULL OR TRIM(lv_pattern) = '' THEN
    DBMS_OUTPUT.PUT_LINE('No deletion pattern provided. Exiting.');
    RETURN;
  END IF;
  -- Confirmation is simulated here; for full interactivity add another prompt, e.g.:
  -- ACCEPT confirm CHAR PROMPT 'Delete matching files? (Y/N): ' DEFAULT 'N'
  lv_confirm := 'N'; -- Default to no (dry run); change to 'Y' to actually delete
  IF UPPER(lv_confirm) = 'Y' THEN
    DBMS_OUTPUT.PUT_LINE('=== Deleting files matching: ' || lv_pattern || ' ===');
    FOR rec IN file_cursor LOOP
      IF rec.type = 'file' AND UPPER(rec.filename) LIKE UPPER(lv_pattern) THEN -- Files only, matching pattern
        lv_del_count := lv_del_count + 1;
        BEGIN
          UTL_FILE.FREMOVE(UPPER('&dir_name'), rec.filename);
          DBMS_OUTPUT.PUT_LINE('Deleted: ' || rec.filename || ' (' || rec.filesize || ' bytes)');
        EXCEPTION
          WHEN OTHERS THEN
            DBMS_OUTPUT.PUT_LINE('Error deleting ' || rec.filename || ': ' || SQLERRM);
        END;
      END IF;
    END LOOP;
    DBMS_OUTPUT.PUT_LINE('Deletion complete. ' || lv_del_count || ' files processed.');
  ELSE
    DBMS_OUTPUT.PUT_LINE('=== Dry run: Files that would be deleted ===');
    FOR rec IN file_cursor LOOP
      IF rec.type = 'file' AND UPPER(rec.filename) LIKE UPPER(lv_pattern) THEN
        DBMS_OUTPUT.PUT_LINE('Would delete: ' || rec.filename || ' (' || rec.filesize || ' bytes)');
        lv_del_count := lv_del_count + 1;
      END IF;
    END LOOP;
    IF lv_del_count = 0 THEN
      DBMS_OUTPUT.PUT_LINE('No matching files found.');
    END IF;
  END IF;
EXCEPTION
  WHEN OTHERS THEN
    DBMS_OUTPUT.PUT_LINE('Error: ' || SQLERRM);
END;
/
```
For Non-Partitioned Indexes
This script now includes the table_name in the generated comment, making it clear which table the index belongs to.
SELECT
CASE
WHEN degree IS NULL OR degree = '1' THEN -- Degree '1' is the effective NOPARALLEL
'ALTER INDEX "' || owner || '"."' || index_name || '" NOPARALLEL; -- On Table: ' || table_name
WHEN degree = 'DEFAULT' THEN
'ALTER INDEX "' || owner || '"."' || index_name || '" PARALLEL (DEGREE DEFAULT); -- On Table: ' || table_name
ELSE
'ALTER INDEX "' || owner || '"."' || index_name || '" PARALLEL ' || degree || '; -- On Table: ' || table_name
END AS revert_statement
FROM
dba_indexes
WHERE
-- Filter for non-partitioned indexes
partitioned = 'NO'
-- You can add the same filters you used for the rebuild script
-- AND status IN ('VALID', 'UNUSABLE', 'FAILED')
-- AND table_name = 'YOUR_TABLE_NAME_HERE'
ORDER BY
owner, index_name;
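To capture the generated statements as a runnable script, spool the query output with the usual SQL*Plus settings (the output file name is illustrative):

```sql
SET PAGESIZE 0 LINESIZE 400 HEADING OFF FEEDBACK OFF TRIMSPOOL ON
SPOOL revert_parallel.sql
-- run the SELECT above here
SPOOL OFF
```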
-- Set server output on to view the reports and messages
SET SERVEROUTPUT ON;
-- Prompt the user for an option to either report or execute
ACCEPT execute_option CHAR PROMPT 'Do you want to rebuild and revert unusable indexes? (Y/N): '
DECLARE
-- Variable to hold the user's input
v_execute_option VARCHAR2(1) := UPPER('&execute_option');
-- Cursor to find all unusable indexes (both partitioned and non-partitioned)
CURSOR c_unusable_indexes IS
SELECT owner, index_name, table_name, partitioned, degree
FROM dba_indexes
WHERE status IN ('UNUSABLE', 'FAILED')
-- Exclude system-owned schemas
AND owner NOT IN ('SYS', 'SYSTEM', 'SYSMAN', 'MDSYS', 'CTXSYS', 'WMSYS', 'XDB', 'EXFSYS', 'DBSNMP')
ORDER BY owner, table_name, index_name;
-- Variables to hold the dynamically generated SQL statements
v_rebuild_sql VARCHAR2(1000);
v_revert_sql VARCHAR2(1000);
BEGIN
DBMS_OUTPUT.PUT_LINE('--- Step 1: Unusable Index Report ---');
DBMS_OUTPUT.PUT_LINE('The following indexes are currently unusable:');
DBMS_OUTPUT.PUT_LINE('--------------------------------------------------');
-- Report which indexes need to be rebuilt and what their revert commands will be
FOR r_index IN c_unusable_indexes LOOP
-- Build the REBUILD statement
IF r_index.partitioned = 'YES' THEN
-- For partitioned indexes, we need to find and list partitions
FOR p_part IN (SELECT partition_name FROM dba_ind_partitions WHERE index_owner = r_index.owner AND index_name = r_index.index_name) LOOP
v_rebuild_sql := 'ALTER INDEX "' || r_index.owner || '"."' || r_index.index_name || '" REBUILD PARTITION ' || p_part.partition_name || ' ONLINE NOLOGGING PARALLEL 8';
DBMS_OUTPUT.PUT_LINE('Partitioned Index: ' || r_index.owner || '.' || r_index.index_name || ' (Partition: ' || p_part.partition_name || ')');
DBMS_OUTPUT.PUT_LINE(' Rebuild Command: ' || v_rebuild_sql || ';');
END LOOP;
ELSE
v_rebuild_sql := 'ALTER INDEX "' || r_index.owner || '"."' || r_index.index_name || '" REBUILD ONLINE NOLOGGING PARALLEL 8';
DBMS_OUTPUT.PUT_LINE('Non-Partitioned Index: ' || r_index.owner || '.' || r_index.index_name);
DBMS_OUTPUT.PUT_LINE(' Rebuild Command: ' || v_rebuild_sql || ';');
END IF;
-- Build the REVERT statement based on the original degree
CASE
WHEN r_index.degree IS NULL OR r_index.degree = '1' THEN
v_revert_sql := 'ALTER INDEX "' || r_index.owner || '"."' || r_index.index_name || '" NOPARALLEL';
WHEN r_index.degree = 'DEFAULT' THEN
v_revert_sql := 'ALTER INDEX "' || r_index.owner || '"."' || r_index.index_name || '" PARALLEL (DEGREE DEFAULT)';
ELSE
v_revert_sql := 'ALTER INDEX "' || r_index.owner || '"."' || r_index.index_name || '" PARALLEL ' || r_index.degree;
END CASE;
DBMS_OUTPUT.PUT_LINE(' Revert Command: ' || v_revert_sql || ';');
DBMS_OUTPUT.PUT_LINE('--------------------------------------------------');
END LOOP;
DBMS_OUTPUT.PUT_LINE(CHR(10));
-- Step 2: Conditional Execution
IF v_execute_option = 'Y' THEN
DBMS_OUTPUT.PUT_LINE('--- Rebuild and Revert Execution ---');
DBMS_OUTPUT.PUT_LINE('Executing rebuilds. See log for progress...');
FOR r_index IN c_unusable_indexes LOOP
BEGIN
-- Build the REBUILD statement and execute
IF r_index.partitioned = 'YES' THEN
FOR p_part IN (SELECT partition_name FROM dba_ind_partitions WHERE index_owner = r_index.owner AND index_name = r_index.index_name) LOOP
v_rebuild_sql := 'ALTER INDEX "' || r_index.owner || '"."' || r_index.index_name || '" REBUILD PARTITION ' || p_part.partition_name || ' ONLINE NOLOGGING PARALLEL 8';
DBMS_OUTPUT.PUT_LINE(' -> Rebuilding Partition: ' || r_index.owner || '.' || r_index.index_name || '(' || p_part.partition_name || ')');
EXECUTE IMMEDIATE v_rebuild_sql;
END LOOP;
ELSE
v_rebuild_sql := 'ALTER INDEX "' || r_index.owner || '"."' || r_index.index_name || '" REBUILD ONLINE NOLOGGING PARALLEL 8';
DBMS_OUTPUT.PUT_LINE(' -> Rebuilding Index: ' || r_index.owner || '.' || r_index.index_name);
EXECUTE IMMEDIATE v_rebuild_sql;
END IF;
-- Build the REVERT statement and execute
CASE
WHEN r_index.degree IS NULL OR r_index.degree = '1' THEN
v_revert_sql := 'ALTER INDEX "' || r_index.owner || '"."' || r_index.index_name || '" NOPARALLEL';
WHEN r_index.degree = 'DEFAULT' THEN
v_revert_sql := 'ALTER INDEX "' || r_index.owner || '"."' || r_index.index_name || '" PARALLEL (DEGREE DEFAULT)';
ELSE
v_revert_sql := 'ALTER INDEX "' || r_index.owner || '"."' || r_index.index_name || '" PARALLEL ' || r_index.degree;
END CASE;
DBMS_OUTPUT.PUT_LINE(' -> Reverting Parallelism: ' || r_index.owner || '.' || r_index.index_name);
EXECUTE IMMEDIATE v_revert_sql;
DBMS_OUTPUT.PUT_LINE(' -> SUCCESS: ' || r_index.owner || '.' || r_index.index_name || ' rebuilt and reverted.');
EXCEPTION
WHEN OTHERS THEN
DBMS_OUTPUT.PUT_LINE(' -> FAILED: ' || r_index.owner || '.' || r_index.index_name || ' -> ' || SQLERRM);
END;
END LOOP;
ELSE
DBMS_OUTPUT.PUT_LINE('Script finished in report-only mode. No changes were made.');
END IF;
DBMS_OUTPUT.PUT_LINE('--- Script Complete ---');
END;
/

For Partitioned Indexes
The dba_ind_partitions view does not contain the table_name column directly. You must join it with the dba_indexes view to retrieve this information.
SELECT
'ALTER INDEX ' || dba_ind_partitions.index_owner || '.' || dba_ind_partitions.index_name || ' REBUILD PARTITION ' || dba_ind_partitions.partition_name || ' ONLINE NOLOGGING; -- On Table: ' || dba_indexes.table_name AS rebuild_statement
FROM
dba_ind_partitions
JOIN
dba_indexes ON dba_ind_partitions.index_owner = dba_indexes.owner
AND dba_ind_partitions.index_name = dba_indexes.index_name
WHERE
dba_ind_partitions.status IN ('UNUSABLE', 'FAILED')
AND dba_ind_partitions.index_owner NOT IN ('SYS', 'SYSTEM', 'SYSMAN', 'MDSYS', 'CTXSYS', 'WMSYS', 'XDB', 'EXFSYS', 'DBSNMP')
ORDER BY
dba_ind_partitions.index_owner, dba_ind_partitions.index_name;

Usage for One-Stop Maintenance/Ad-Hoc Runs
Ad-Hoc (On-Demand): Connect via SQL*Plus/SQL Developer as master user, run the script, enter 'Y'/'N' and schemas. It's self-contained—no setup required. Example for 100+ across schemas:
```
Do you want to rebuild unusable indexes? (Y/N): Y
Enter schema name(s) to process (comma-separated, or ALL for all non-system schemas): HR,SCOTT
```

- Output includes progress (e.g., "Executing: ALTER INDEX...") and a final summary (e.g., "Successfully rebuilt: 127, Failed: 3").
Scheduled Maintenance (One-Stop Automation): Wrap it in Oracle Scheduler for weekly/monthly runs during off-peak (e.g., 2 AM Sundays). This makes it truly "one-stop"—create once, forget it runs.
Step 1: Create a Stored Procedure (save the script's PL/SQL as a procedure, hardcoding or parameterizing schemas/rebuild='Y'):
```sql
CREATE OR REPLACE PROCEDURE rebuild_unusable_indexes_proc(
  p_schema_list    IN VARCHAR2 DEFAULT 'ALL',
  p_rebuild_option IN VARCHAR2 DEFAULT 'Y'
) AS
  -- Paste the entire DECLARE/BEGIN...END; block from the script here,
  -- but replace v_schema_list and v_rebuild_option with the parameters
  -- (e.g., v_schema_list := UPPER(p_schema_list); v_rebuild_option := UPPER(p_rebuild_option);)
BEGIN
  NULL; -- Your script body
END;
/
```

Step 2: Schedule the Job (run daily/weekly for all schemas):
```sql
BEGIN
  DBMS_SCHEDULER.CREATE_JOB(
    job_name        => 'REBUILD_UNUSABLE_INDEXES_JOB',
    job_type        => 'PLSQL_BLOCK',
    job_action      => 'BEGIN rebuild_unusable_indexes_proc(p_schema_list => ''ALL'', p_rebuild_option => ''Y''); END;',
    start_date      => SYSTIMESTAMP + INTERVAL '1' HOUR, -- Start soon, or set a future date
    repeat_interval => 'FREQ=WEEKLY; BYDAY=SUN; BYHOUR=2; BYMINUTE=0;', -- Sundays at 2 AM
    enabled         => TRUE,
    comments        => 'Weekly unusable index maintenance'
  );
END;
/
```

- Monitor: SELECT * FROM dba_scheduler_jobs WHERE job_name = 'REBUILD_UNUSABLE_INDEXES_JOB';
- Logs: Check DBA_SCHEDULER_JOB_RUN_DETAILS for output/errors, or CloudWatch for RDS.
- For ad-hoc trigger: EXEC DBMS_SCHEDULER.RUN_JOB('REBUILD_UNUSABLE_INDEXES_JOB');
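A quick way to review past runs of the scheduled job from the job-run history view:

```sql
-- Most recent runs first; ERROR# is non-zero on failure
SELECT log_date, status, error#, additional_info
FROM dba_scheduler_job_run_details
WHERE job_name = 'REBUILD_UNUSABLE_INDEXES_JOB'
ORDER BY log_date DESC;
```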
Potential Enhancements for Extreme Scale
If you're dealing with 500+ indexes regularly:
- Add Undo/Temp Space Check: Before starting, query V$UNDOSTAT and V$TEMP_SPACE_HEADER—abort if >80% full.
- Progress Monitoring: Integrate V$SESSION_LONGOPS polling in the loop for ETA on long rebuilds.
- Email Alerts: Use UTL_MAIL to send summary (if licensed) or log to a table for RDS integration.
- Filter by Size/Age: Add to cursors: AND last_analyzed > SYSDATE - 30 (recently analyzed) or join DBA_SEGMENTS for blocks > 1000 (large only).
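For the undo/temp pre-check, one simple alternative to polling V$ views is DBA_TABLESPACE_USAGE_METRICS (the 80% threshold is illustrative):

```sql
-- Warn or abort if any undo/temp tablespace is more than ~80% used
SELECT tablespace_name, ROUND(used_percent, 1) AS pct_used
FROM dba_tablespace_usage_metrics
WHERE used_percent > 80;
```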
### 1. Download Script

This anonymous block pulls the dump file from S3 into `DATA_PUMP_DIR` and verifies the download.

```sql
DECLARE
v_bucket_name VARCHAR2(128) := 'my-rds-dumps-bucket'; -- Replace with your S3 bucket name
v_s3_prefix VARCHAR2(200) := 'exports/'; -- S3 folder path where dump files are stored
v_file_prefix VARCHAR2(200) := 'DEMO1_meta_2509092151234.dmp'; -- Specific file or pattern (e.g., 'DEMO1_meta_%.dmp')
v_task_id VARCHAR2(100); -- For S3 download task ID
v_error_msg VARCHAR2(4000);
v_file_count NUMBER;
TYPE file_list_t IS TABLE OF VARCHAR2(200);
v_files file_list_t;
BEGIN
-- Validate file prefix
IF v_file_prefix IS NULL OR LENGTH(v_file_prefix) = 0 THEN
DBMS_OUTPUT.PUT_LINE('Validation failed: File prefix cannot be null or empty.');
RAISE_APPLICATION_ERROR(-20001, 'File prefix cannot be null or empty.');
END IF;
-- Debug: Output download details
DBMS_OUTPUT.PUT_LINE('Starting S3 download for files matching: ' || v_file_prefix || ' from S3 path: ' || v_s3_prefix);
-- Download from S3
v_task_id := rdsadmin.rdsadmin_s3_tasks.download_from_s3(
p_bucket_name => v_bucket_name,
p_s3_prefix => v_s3_prefix || v_file_prefix, -- download_from_s3 matches on the S3 key prefix; there is no separate p_prefix parameter
p_directory_name => 'DATA_PUMP_DIR'
);
-- Output task ID
DBMS_OUTPUT.PUT_LINE('S3 download task started for ' || v_file_prefix || ', Task ID: ' || v_task_id);
-- Wait for download to complete
DBMS_LOCK.SLEEP(30); -- Wait 30 seconds to ensure download finishes
-- Verify downloaded files
SELECT filename
BULK COLLECT INTO v_files
FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))
WHERE UPPER(filename) LIKE UPPER(v_file_prefix);
v_file_count := v_files.COUNT;
IF v_file_count = 0 THEN
DBMS_OUTPUT.PUT_LINE('No files found in DATA_PUMP_DIR after download matching: ' || v_file_prefix);
DBMS_OUTPUT.PUT_LINE('Available files in DATA_PUMP_DIR:');
FOR rec IN (SELECT filename FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR'))) LOOP
DBMS_OUTPUT.PUT_LINE(' - ' || rec.filename);
END LOOP;
RAISE_APPLICATION_ERROR(-20003, 'No files downloaded matching prefix: ' || v_file_prefix);
END IF;
-- Debug: Output downloaded files
DBMS_OUTPUT.PUT_LINE('Found ' || v_file_count || ' file(s) matching prefix: ' || v_file_prefix);
FOR i IN 1..v_files.COUNT LOOP
DBMS_OUTPUT.PUT_LINE(' - File ' || i || ': ' || v_files(i));
END LOOP;
EXCEPTION
WHEN OTHERS THEN
v_error_msg := 'Error in download_from_s3: ' || SQLERRM;
DBMS_OUTPUT.PUT_LINE(v_error_msg);
RAISE_APPLICATION_ERROR(-20001, v_error_msg);
END;
/
```
### 2. Import Script with Tablespace Remapping
This anonymous block imports the metadata-only `.dmp` files into the target database, remapping 24 source tablespaces to 12 target tablespaces using `REMAP_TABLESPACE`. It aligns with the baseline’s naming convention (`<schema_name>_meta_<timestamp>.dmp`) and parameters.
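If you still need to assemble the 24-to-12 mapping list, the candidate source tablespaces can be pulled from the source database (the exclusion list is illustrative):

```sql
-- On the source database: permanent tablespaces that may need remapping
SELECT tablespace_name
FROM dba_tablespaces
WHERE contents = 'PERMANENT'
  AND tablespace_name NOT IN ('SYSTEM', 'SYSAUX')
ORDER BY tablespace_name;
```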
```sql
DECLARE
v_handle NUMBER;
v_job_name VARCHAR2(128); -- Increased to avoid ORA-06502
v_dumpfile VARCHAR2(200); -- Increased for safety
v_logfile VARCHAR2(200); -- Increased for safety
v_timestamp VARCHAR2(19) := '2509092151234'; -- Replace with the timestamp from your dump file
v_error_msg VARCHAR2(4000);
p_schema VARCHAR2(128) := 'DEMO1'; -- Example schema name
p_parallel NUMBER := 2; -- Example parallelism (e.g., 1, 2, 3)
p_multiple_files VARCHAR2(1) := 'N'; -- 'Y' for multiple files, 'N' for single file
TYPE tablespace_mapping_t IS TABLE OF VARCHAR2(256);
v_tablespace_mappings tablespace_mapping_t := tablespace_mapping_t(
'TBS1:NEW_TBS1', 'TBS2:NEW_TBS1', -- Customize with your mappings
'TBS3:NEW_TBS2', 'TBS4:NEW_TBS2',
'TBS5:NEW_TBS3', 'TBS6:NEW_TBS3',
'TBS7:NEW_TBS4', 'TBS8:NEW_TBS4',
'TBS9:NEW_TBS5', 'TBS10:NEW_TBS5',
'TBS11:NEW_TBS6', 'TBS12:NEW_TBS6',
'TBS13:NEW_TBS7', 'TBS14:NEW_TBS7',
'TBS15:NEW_TBS8', 'TBS16:NEW_TBS8',
'TBS17:NEW_TBS9', 'TBS18:NEW_TBS9',
'TBS19:NEW_TBS10', 'TBS20:NEW_TBS10',
'TBS21:NEW_TBS11', 'TBS22:NEW_TBS11',
'TBS23:NEW_TBS12', 'TBS24:NEW_TBS12'
);
BEGIN
-- Validate schema name length
IF LENGTH(p_schema) > 100 THEN
DBMS_OUTPUT.PUT_LINE('Validation failed: Schema name too long.');
RAISE_APPLICATION_ERROR(-20002, 'Schema name too long. Maximum length is 100 characters.');
END IF;
-- Validate p_multiple_files
IF UPPER(p_multiple_files) NOT IN ('Y', 'N') THEN
DBMS_OUTPUT.PUT_LINE('Validation failed: p_multiple_files must be Y or N.');
RAISE_APPLICATION_ERROR(-20003, 'p_multiple_files must be ''Y'' or ''N''.');
END IF;
-- Create a unique job name
v_job_name := SUBSTR('SCHEMA_IMP_' || UPPER(p_schema) || '_' || TO_CHAR(SYSTIMESTAMP, 'YYMMDDHH24MISSFF3'), 1, 128);
-- Create unique log and dump file names
v_dumpfile := UPPER(p_schema) || '_meta_' || v_timestamp || CASE WHEN UPPER(p_multiple_files) = 'Y' THEN '%U.dmp' ELSE '.dmp' END;
v_logfile := UPPER(p_schema) || '_meta_import_' || TO_CHAR(SYSTIMESTAMP, 'YYMMDDHH24MISSFF3') || '.log';
-- Validate file name lengths
IF LENGTH(v_dumpfile) > 200 OR LENGTH(v_logfile) > 200 THEN
DBMS_OUTPUT.PUT_LINE('Validation failed: File name too long.');
RAISE_APPLICATION_ERROR(-20004, 'Generated file name too long. Maximum length is 200 characters.');
END IF;
-- Debug: Output job details
DBMS_OUTPUT.PUT_LINE('Starting import job: ' || v_job_name);
-- Open the import job
v_handle := DBMS_DATAPUMP.OPEN(
operation => 'IMPORT',
job_mode => 'SCHEMA',
job_name => v_job_name,
version => 'LATEST'
);
DBMS_OUTPUT.PUT_LINE('Job opened with handle: ' || v_handle);
-- Add log file
DBMS_DATAPUMP.ADD_FILE(
handle => v_handle,
filename => v_logfile,
directory => 'DATA_PUMP_DIR',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_LOG_FILE
);
DBMS_OUTPUT.PUT_LINE('Log file added: ' || v_logfile);
-- Add dump file
DBMS_DATAPUMP.ADD_FILE(
handle => v_handle,
filename => v_dumpfile,
directory => 'DATA_PUMP_DIR',
filetype => DBMS_DATAPUMP.KU$_FILE_TYPE_DUMP_FILE
);
DBMS_OUTPUT.PUT_LINE('Dump file added: ' || v_dumpfile);
-- Filter to specified schemas
DBMS_DATAPUMP.METADATA_FILTER(
handle => v_handle,
name => 'SCHEMA_EXPR',
value => '= ''' || UPPER(p_schema) || ''''
);
DBMS_OUTPUT.PUT_LINE('Schema filter applied for: ' || p_schema);
-- Remap tablespaces (24 to 12); METADATA_REMAP takes separate old_value and value arguments
FOR i IN 1..v_tablespace_mappings.COUNT LOOP
DBMS_DATAPUMP.METADATA_REMAP(
handle => v_handle,
name => 'REMAP_TABLESPACE',
old_value => SUBSTR(v_tablespace_mappings(i), 1, INSTR(v_tablespace_mappings(i), ':') - 1),
value => SUBSTR(v_tablespace_mappings(i), INSTR(v_tablespace_mappings(i), ':') + 1)
);
DBMS_OUTPUT.PUT_LINE('Remapped tablespace: ' || v_tablespace_mappings(i));
END LOOP;
-- Set parallelism
DBMS_DATAPUMP.SET_PARALLEL(
handle => v_handle,
degree => GREATEST(1, LEAST(p_parallel, 100))
);
DBMS_OUTPUT.PUT_LINE('Parallelism set to: ' || GREATEST(1, LEAST(p_parallel, 100)));
-- Start the job
DBMS_DATAPUMP.START_JOB(v_handle);
DBMS_OUTPUT.PUT_LINE('Job started');
-- Detach to run in background
DBMS_DATAPUMP.DETACH(v_handle);
DBMS_OUTPUT.PUT_LINE('Job detached');
-- Output confirmation
DBMS_OUTPUT.PUT_LINE('Job ' || v_job_name || ' started for schema ' || p_schema || '. Log: ' || v_logfile || ' in DATA_PUMP_DIR.');
EXCEPTION
WHEN OTHERS THEN
v_error_msg := 'Error in import_metadata_schemas: ' || SQLERRM;
DBMS_OUTPUT.PUT_LINE(v_error_msg);
RAISE_APPLICATION_ERROR(-20001, v_error_msg);
END;
/
```
### How to Run
1. **Download Script**:
- **Customize Parameters**:
```sql
v_bucket_name VARCHAR2(128) := 'my-rds-dumps-bucket'; -- Your S3 bucket name
v_s3_prefix VARCHAR2(200) := 'exports/'; -- S3 folder path
v_file_prefix VARCHAR2(200) := 'DEMO1_meta_2509092151234.dmp'; -- Match your dump file
```
- For multiple files, use a pattern (e.g., `'DEMO1_meta_%.dmp'`).
- **Enable Server Output**:
```sql
SET SERVEROUTPUT ON;
```
- **Run**:
```sql
DECLARE
...
END;
/
```
- **Verify**:
```sql
SELECT filename FROM TABLE(rdsadmin.rds_file_util.listdir('DATA_PUMP_DIR')) WHERE filename LIKE 'DEMO1_meta_%';
SELECT text FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP', 'dbtask-task_12345.log')); -- Replace task_12345 with the task_id
```
2. **Import Script**:
- **Customize Parameters**:
```sql
p_schema VARCHAR2(128) := 'DEMO1'; -- Match source schema
p_parallel NUMBER := 2;
p_multiple_files VARCHAR2(1) := 'N';
v_timestamp VARCHAR2(19) := '2509092151234'; -- Match dump file timestamp
v_tablespace_mappings tablespace_mapping_t := tablespace_mapping_t(
'TBS1:NEW_TBS1', 'TBS2:NEW_TBS1', -- Customize with your mappings
'TBS3:NEW_TBS2', 'TBS4:NEW_TBS2',
...
'TBS23:NEW_TBS12', 'TBS24:NEW_TBS12'
);
```
- **Enable Server Output**:
```sql
SET SERVEROUTPUT ON;
```
- **Run**:
```sql
DECLARE
...
END;
/
```
- **Verify**:
```sql
SELECT job_name, state, error_msg FROM dba_datapump_jobs WHERE job_name LIKE 'SCHEMA_IMP_%';
```
### Troubleshooting
1. **Download Issues**:
- Check task status:
```sql
SELECT text FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP', 'dbtask-task_12345.log')); -- Replace task_12345 with your task ID
```
- Ensure `v_file_prefix` matches the S3 file (case-sensitive).
- Verify IAM role permissions and bucket region.
2. **Import Issues**:
- Check log file in `DATA_PUMP_DIR` (e.g., `DEMO1_meta_import_2509092151234.log`).
- Ensure target tablespaces exist and the schema is valid.
- If `ORA-31626` occurs, add `PRAGMA AUTONOMOUS_TRANSACTION` to the import script:
```sql
DECLARE
PRAGMA AUTONOMOUS_TRANSACTION;
...
BEGIN
...
COMMIT;
EXCEPTION
...
COMMIT;
END;
/
```
3. **Provide Details**:
- Share `DBMS_OUTPUT`, task status, log file contents, and Oracle RDS version.
### Notes
- **Tablespace Mapping**: Customize `v_tablespace_mappings` with your actual 24-to-12 mappings.
- **File Naming**: Ensure `v_timestamp` matches the source dump file’s timestamp.
- **Future Enhancements**: If needed, add `DBMS_SCHEDULER` for multiple schemas or `.sql` file generation for review.
Please run these scripts and share any errors or output for further assistance!