You correctly identified the circular dependency in the previous formula: tuned_undoretention is an effect (what the database managed to retain given current space constraints), not a cause (what the business actually requires). Using it to calculate sizing often leads to results that just mirror the current size rather than the true requirement.
You also correctly noted that MAX(undoblks) in dba_hist_undostat refers to the consumption rate (blocks written during the interval), not the total active footprint of the tablespace. Therefore, the most mathematically accurate way to size UNDO is the Rate-Based Method (Your Method #2): finding the "Peak Generation Rate" and multiplying it by your "Target Retention."
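To make the arithmetic concrete (the numbers below are purely illustrative, not from any real workload): at a peak generation rate of 500 undo blocks per second with an 8 KB block size, a 4-hour retention target needs roughly 500 * 14400 * 8192 bytes (about 55 GB), or about 66 GB once a 20% buffer is added:

-- Illustrative only: 500 blocks/sec peak, 8 KB blocks, 4-hour (14400 s) target, 20% buffer
SELECT ROUND((500 * 14400 * 8192 * 1.2) / 1073741824, 2) AS required_undo_gb FROM dual;
-- => 65.92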
Here is the Revised, Industry-Standard Script. It uses your Rate-Based Logic to give you sizing recommendations for specific retention goals (4h, 12h, 24h), which is far more actionable for an RDS environment.
Script: Oracle RDS Historical Undo & Temp Sizing (Corrected)
-- -----------------------------------------------------------------------------
-- Script: rds_undo_temp_complete.sql
-- Purpose: Complete history (Max Used) AND Sizing Recommendations (Projections).
-- -----------------------------------------------------------------------------
SET PAGESIZE 5000
SET LINESIZE 300
SET FEEDBACK OFF
SET VERIFY OFF
SET HEADING ON
SET COLSEP ' '
SET NUMWIDTH 15
PROMPT
PROMPT ================================================================================
PROMPT ORACLE RDS: COMPLETE UNDO & TEMP USAGE REPORT (Actuals + Sizing)
PROMPT ================================================================================
PROMPT
-- -----------------------------------------------------------------------------
-- SECTION 1: UNDO HISTORY & SIZING
-- Displays:
-- 1. Analysis Range (Start/End Date)
-- 2. Peak Interval Undo (the most undo generated in any single AWR interval last month)
-- 3. Sizing Recommendations (Based on peak generation rate)
-- -----------------------------------------------------------------------------
PROMPT
PROMPT [ 1. UNDO: Historical Max Usage & Sizing Recommendations ]
PROMPT
COLUMN "Start Date" FORMAT A12
COLUMN "End Date" FORMAT A12
COLUMN "Days" FORMAT 999.9
COLUMN "Max Used (GB)" FORMAT 99,990.99 HEADING 'Actual Peak|Used (GB)'
COLUMN "Snapshot Too Old" FORMAT 999,990 HEADING 'ORA-01555|Errors'
COLUMN "Rec. Size 4Hr (GB)" FORMAT 99,990.99 HEADING 'Size Needed|For 4Hrs (GB)'
COLUMN "Rec. Size 12Hr (GB)" FORMAT 99,990.99 HEADING 'Size Needed|For 12Hrs (GB)'
COLUMN "Rec. Size 24Hr (GB)" FORMAT 99,990.99 HEADING 'Size Needed|For 24Hrs (GB)'
WITH
-- 1. Get Block Size Safely
params AS (
-- v$parameter.value is VARCHAR2; convert explicitly for the block math below
SELECT TO_NUMBER(value) AS blk_size
FROM v$parameter
WHERE name = 'db_block_size'
),
-- 2. Get Raw Stats (Aggregated first)
raw_stats AS (
SELECT
MIN(begin_time) AS min_time,
MAX(end_time) AS max_time,
-- How many days of data did we actually find?
-- (CAST to DATE so the subtraction returns a plain number of days;
--  BEGIN_TIME/END_TIME are TIMESTAMP columns in DBA_HIST_UNDOSTAT)
CAST(MAX(end_time) AS DATE) - CAST(MIN(begin_time) AS DATE) AS days_covered,
-- Peak undo generated within a single snapshot interval
-- (a consumption figure, NOT the tablespace's active footprint)
MAX(undoblks) AS max_interval_blocks,
-- Peak Generation Rate (For "Sizing Projections")
MAX(undoblks / GREATEST((CAST(end_time AS DATE) - CAST(begin_time AS DATE)) * 86400, 1)) AS peak_blocks_per_sec,
SUM(ssolderrcnt) AS total_ora_1555
FROM dba_hist_undostat
WHERE begin_time >= TRUNC(ADD_MONTHS(SYSDATE, -1), 'MM')
)
-- 3. Final Calculation
SELECT
TO_CHAR(r.min_time, 'YYYY-MM-DD') AS "Start Date",
TO_CHAR(r.max_time, 'YYYY-MM-DD') AS "End Date",
ROUND(r.days_covered, 1) AS "Days",
-- Peak Interval Undo: (Max blocks generated in one interval * BlockSize) / 1GB
ROUND((r.max_interval_blocks * p.blk_size) / 1073741824, 2) AS "Peak Interval Undo (GB)",
r.total_ora_1555 AS "Snapshot Too Old",
-- Projection: (Peak Rate * Seconds * BlockSize * 1.2 Buffer) / 1GB
ROUND((r.peak_blocks_per_sec * 14400 * p.blk_size * 1.2) / 1073741824, 2) AS "Rec. Size 4Hr (GB)",
ROUND((r.peak_blocks_per_sec * 43200 * p.blk_size * 1.2) / 1073741824, 2) AS "Rec. Size 12Hr (GB)",
ROUND((r.peak_blocks_per_sec * 86400 * p.blk_size * 1.2) / 1073741824, 2) AS "Rec. Size 24Hr (GB)"
FROM raw_stats r
CROSS JOIN params p;
PROMPT
PROMPT * 'Peak Interval Undo': The most undo generated in a single AWR snapshot interval (a rate indicator, not the tablespace footprint).
PROMPT * 'Size Needed': Recommended size to guarantee retention (with 20% buffer).
PROMPT
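-- Optional drill-down (an addition to the original report): the five busiest
-- AWR intervals behind the peak rate above. MAXQUERYLEN (seconds) is the
-- longest query seen in each interval, which is the other input worth
-- weighing when you pick a retention target.
SELECT * FROM (
SELECT TO_CHAR(begin_time, 'YYYY-MM-DD HH24:MI') AS interval_start,
undoblks AS undo_blocks,
maxquerylen AS longest_query_sec
FROM dba_hist_undostat
WHERE begin_time >= TRUNC(ADD_MONTHS(SYSDATE, -1), 'MM')
ORDER BY undoblks DESC
)
WHERE ROWNUM <= 5;
PROMPT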
-- -----------------------------------------------------------------------------
-- SECTION 2: TEMPORARY TABLESPACE PEAK USAGE
-- -----------------------------------------------------------------------------
PROMPT --------------------------------------------------------------------------------
PROMPT [ 2. TEMP: Historical Max Usage & Recommendations ]
PROMPT
COLUMN "Start Date" FORMAT A12
COLUMN "End Date" FORMAT A12
COLUMN "Days" FORMAT 999.9
COLUMN "Max Temp Used (GB)" FORMAT 99,990.99 HEADING 'Actual Peak|Used (GB)'
COLUMN "Avg Temp Used (GB)" FORMAT 99,990.99 HEADING 'Avg Temp|Used (GB)'
COLUMN "Rec. Optimal (GB)" FORMAT 99,990.99 HEADING 'Rec. Optimal|Size (GB)'
SELECT
TO_CHAR(MIN(begin_time), 'YYYY-MM-DD') AS "Start Date",
TO_CHAR(MAX(end_time), 'YYYY-MM-DD') AS "End Date",
ROUND(MAX(end_time) - MIN(begin_time), 1) AS "Days",
ROUND(MAX(maxval) / 1073741824, 2) AS "Max Temp Used (GB)",
ROUND(AVG(average) / 1073741824, 2) AS "Avg Temp Used (GB)",
-- Optimal = Peak + 20% Headroom
ROUND((MAX(maxval) / 1073741824) * 1.20, 2) AS "Rec. Optimal (GB)"
FROM dba_hist_sysmetric_summary
WHERE metric_name = 'Temp Space Used'
AND begin_time >= TRUNC(ADD_MONTHS(SYSDATE, -1), 'MM');
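-- Optional current-state check (an addition to the original report): compare
-- the historical peak above with what the TEMP tablespace currently has
-- allocated and free, using the standard DBA_TEMP_FREE_SPACE view.
SELECT tablespace_name,
ROUND(tablespace_size / 1073741824, 2) AS allocated_gb,
ROUND(free_space / 1073741824, 2) AS free_gb
FROM dba_temp_free_space;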
PROMPT
PROMPT ================================================================================
PROMPT Script Completed.
PROMPT ================================================================================
SET FEEDBACK ON
Changes from previous version:
* Eliminated TUNED_UNDORETENTION: The formula no longer relies on this variable. It now strictly calculates demand (bytes generated per second).
* Scenarios: Instead of giving you one "Optimal Size" (which is subjective), it gives you the required size for 4 hours, 12 hours, and 24 hours. This lets you choose based on your business SLA (e.g., if you need Flashback Query to work for 24h, use the last column).
* Safety Buffer: I added a 1.2 multiplier (20% buffer) to the final recommendation. In RDS, extending storage is easy, but hitting ORA-30036 (unable to extend the undo tablespace) stalls active transactions, so it is better to be slightly over-provisioned. A quick check of your current UNDO footprint and autoextend headroom follows below.
* ORA-01555 Check: Added a column for "Snapshot Too Old." If this number is greater than 0, it confirms your current sizing was too small for the workload in the last month.
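Before resizing, it helps to see what the UNDO tablespace currently occupies and how far autoextend can still grow it, so you can compare that against the recommendation. A minimal sketch using standard dictionary views (nothing RDS-specific):

-- Current UNDO datafile size vs. autoextend ceiling
-- (MAXBYTES is 0 when autoextend is disabled, hence the GREATEST)
SELECT df.tablespace_name,
ROUND(SUM(df.bytes) / 1073741824, 2) AS current_gb,
ROUND(SUM(GREATEST(df.maxbytes, df.bytes)) / 1073741824, 2) AS autoextend_cap_gb
FROM dba_data_files df
JOIN dba_tablespaces ts ON ts.tablespace_name = df.tablespace_name
WHERE ts.contents = 'UNDO'
GROUP BY df.tablespace_name;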
Here is a second, complementary script for the same window (from the start of the previous month). It combines the current configuration with the historical peak generation rate and presents the size-vs-retention trade-off as a decision matrix. Run it as a user with access to the DBA_HIST_* views.
Script: Oracle RDS Undo Optimization Matrix
-- -----------------------------------------------------------------------------
-- Script: rds_undo_optimization_matrix.sql
-- Purpose: The "Best of Both Worlds" report.
-- - Uses DBA_HIST for long-term accuracy (Last 30 Days).
-- - Presents the "Size vs. Retention" trade-off logic you requested.
-- - Converts all metrics to GB for readability.
-- -----------------------------------------------------------------------------
SET PAGESIZE 100
SET LINESIZE 250
SET FEEDBACK OFF
SET VERIFY OFF
SET HEADING ON
SET COLSEP ' | '
SET NUMWIDTH 15
PROMPT
PROMPT ================================================================================
PROMPT ORACLE RDS: UNDO OPTIMIZATION MATRIX (30-Day History)
PROMPT ================================================================================
PROMPT
-- -----------------------------------------------------------------------------
-- SECTION 1: CURRENT STATE & WORKLOAD PEAKS
-- -----------------------------------------------------------------------------
COLUMN "Analysis Range" FORMAT A25
COLUMN "Current Size (GB)" FORMAT 999,990.99
COLUMN "Curr Retention (Sec)" FORMAT 999,990
COLUMN "Peak Gen Rate (MB/s)" FORMAT 990.999
COLUMN "ORA-01555 Errors" FORMAT 999,990
WITH
-- 1. Get Physical Configuration
config AS (
SELECT
(SELECT SUM(bytes) FROM dba_data_files df JOIN dba_tablespaces ts ON df.tablespace_name = ts.tablespace_name WHERE ts.contents = 'UNDO') AS total_bytes,
(SELECT TO_NUMBER(value) FROM v$parameter WHERE name = 'undo_retention') AS curr_retention,
(SELECT TO_NUMBER(value) FROM v$parameter WHERE name = 'db_block_size') AS blk_size
FROM DUAL
),
-- 2. Get Historical Workload (Peak Rate)
workload AS (
SELECT
MIN(begin_time) AS start_date,
MAX(end_time) AS end_date,
-- Peak Generation Rate (Bytes per second); CAST to DATE so the time math returns a number
MAX((undoblks * (SELECT blk_size FROM config)) / GREATEST((CAST(end_time AS DATE) - CAST(begin_time AS DATE)) * 86400, 1)) AS peak_bytes_per_sec,
SUM(ssolderrcnt) AS total_errors
FROM dba_hist_undostat
WHERE begin_time >= TRUNC(ADD_MONTHS(SYSDATE, -1), 'MM')
)
SELECT
TO_CHAR(w.start_date, 'YYYY-MM-DD') || ' to ' || TO_CHAR(w.end_date, 'MM-DD') AS "Analysis Range",
ROUND(c.total_bytes / 1073741824, 2) AS "Current Size (GB)",
c.curr_retention AS "Curr Retention (Sec)",
ROUND(w.peak_bytes_per_sec / 1024 / 1024, 3) AS "Peak Gen Rate (MB/s)",
w.total_errors AS "ORA-01555 Errors"
FROM config c, workload w;
PROMPT
PROMPT ================================================================================
PROMPT DECISION MATRIX: CHOOSE YOUR STRATEGY
PROMPT ================================================================================
-- -----------------------------------------------------------------------------
-- SECTION 2: OPTIMIZATION CHOICES
-- Logic:
-- Choice A: Keep Retention fixed, Change Size.
-- Choice B: Keep Size fixed, Change Retention.
-- Choice C: Target specific SLAs (4h, 12h, 24h).
-- -----------------------------------------------------------------------------
COLUMN "Strategy" FORMAT A35
COLUMN "Target" FORMAT A25
COLUMN "Required Action" FORMAT A40
WITH
config AS (
SELECT
(SELECT SUM(bytes) FROM dba_data_files df JOIN dba_tablespaces ts ON df.tablespace_name = ts.tablespace_name WHERE ts.contents = 'UNDO') AS total_bytes,
(SELECT TO_NUMBER(value) FROM v$parameter WHERE name = 'undo_retention') AS curr_retention,
(SELECT TO_NUMBER(value) FROM v$parameter WHERE name = 'db_block_size') AS blk_size
FROM DUAL
),
workload AS (
SELECT
-- Peak Generation Rate (Bytes/Sec)
MAX((undoblks * (SELECT blk_size FROM config)) / GREATEST((CAST(end_time AS DATE) - CAST(begin_time AS DATE)) * 86400, 1)) AS peak_bps
FROM dba_hist_undostat
WHERE begin_time >= TRUNC(ADD_MONTHS(SYSDATE, -1), 'MM')
)
-- Option A: Adjust Size for Current Retention
SELECT
'OPTION A: Prioritize Retention' AS "Strategy",
'Keep ' || c.curr_retention || ' Seconds' AS "Target",
'Resize UNDO to: ' || ROUND((w.peak_bps * c.curr_retention * 1.2) / 1073741824, 2) || ' GB' AS "Required Action"
FROM config c, workload w
UNION ALL
-- Option B: Adjust Retention for Current Size
SELECT
'OPTION B: Prioritize Storage',
'Keep ' || ROUND(c.total_bytes / 1073741824, 2) || ' GB Size',
'Set UNDO_RETENTION to: ' || ROUND(c.total_bytes / w.peak_bps, 0) || ' Secs'
FROM config c, workload w
UNION ALL
-- Option C: Standard SLAs
SELECT
'OPTION C: 4-Hour Standard',
'Target 4 Hours',
'Resize UNDO to: ' || ROUND((w.peak_bps * 14400 * 1.2) / 1073741824, 2) || ' GB'
FROM config c, workload w
UNION ALL
SELECT
'OPTION C: 24-Hour (Flashback)',
'Target 24 Hours',
'Resize UNDO to: ' || ROUND((w.peak_bps * 86400 * 1.2) / 1073741824, 2) || ' GB'
FROM config c, workload w;
PROMPT
PROMPT * Note: Option A and C include a 20% Safety Buffer (x 1.2) to prevent autoextend lag.
PROMPT * Option B shows the theoretical max duration your current disk can support at peak load.
PROMPT ================================================================================
SET FEEDBACK ON
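To make Option B concrete (numbers invented for illustration): a 100 GB UNDO tablespace facing a sustained peak of 2 MB/s of undo generation can only guarantee about 51,200 seconds (roughly 14 hours) of retention:

-- Illustrative only: 100 GB of UNDO vs. a 2 MB/s peak generation rate
SELECT ROUND((100 * 1073741824) / (2 * 1048576), 0) AS supportable_retention_sec FROM dual;
-- => 51200 (about 14.2 hours)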