Thursday, July 10, 2025

ASH/AWR Reports

You're looking for how to download the HTML-formatted AWR or ASH reports that you've generated in your Oracle RDS instance. Since you don't have direct operating system access on RDS, you can't just scp the files. AWS provides specific mechanisms for this.

There are two primary ways to download these reports from Oracle RDS:

  1. Through the AWS RDS Console (Recommended and Easiest)

  2. Using rdsadmin.rds_file_util.read_text_file from a SQL client

Let's detail each method.


Method 1: Downloading via the AWS RDS Console (Easiest)

This is the most common and user-friendly way. When you generate an AWR or ASH report using rdsadmin.rds_run_awr_report or rdsadmin.rds_run_ash_report (especially if you specify report_type => 'HTML'), RDS automatically makes these reports available for download in the console.

Steps:

  1. Generate the Report in HTML Format:

    Make sure you specified 'HTML' as the report_type when calling the rdsadmin procedures.

    • For AWR:

      SQL
      BEGIN
          rdsadmin.rds_run_awr_report(
              l_begin_snap => YOUR_BEGIN_SNAP_ID,
              l_end_snap   => YOUR_END_SNAP_ID,
              l_report_type => 'HTML' -- <<< IMPORTANT: Specify HTML
          );
      END;
      /
      
    • For ASH:

      SQL
      BEGIN
          rdsadmin.rds_run_ash_report(
              begin_time => TO_TIMESTAMP('2025-07-09 00:00:00', 'YYYY-MM-DD HH24:MI:SS'), -- Your start time
              end_time   => TO_TIMESTAMP('2025-07-09 01:00:00', 'YYYY-MM-DD HH24:MI:SS'), -- Your end time
              report_type => 'HTML' -- <<< IMPORTANT: Specify HTML
          );
      END;
      /
      

    After executing the above, the report will be generated and placed in a default directory managed by RDS (usually BDUMP or a specific diagnostic directory).

  2. Navigate to the RDS Console:

    • Go to the AWS Management Console and navigate to RDS.

    • In the navigation pane, choose Databases.

    • Select your Oracle DB instance.

  3. Go to the Logs & events Tab:

    • Click on the "Logs & events" tab.

  4. Scroll Down to the "Logs" Section:

    • In the "Logs" section, you'll see a list of various log files.

  5. Search for your Report File:

    • AWR reports generated in HTML will typically have names like awrrpt_BEGINSNAP_ENDSNAP.html (e.g., awrrpt_123_124.html).

    • ASH reports generated in HTML will typically have names like ashrpt_YYYYMMDDHH24MISS_YYYYMMDDHH24MISS.html (e.g., ashrpt_20250709000000_20250709010000.html).

    • You might need to use the search/filter box if you have many log files.

  6. Download the Report:

    • Select the .html report file you want to download.

    • Click the "Download" button.

The file will download to your local machine, and you can then open it in any web browser.
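If you script the retrieval later, the file-naming patterns above are easy to reproduce programmatically. A small helper sketch (Python, purely illustrative; the exact names RDS produces can vary, so verify against the console listing):

```python
from datetime import datetime

def awr_report_name(begin_snap: int, end_snap: int) -> str:
    # Matches the awrrpt_BEGINSNAP_ENDSNAP.html pattern described above.
    return f"awrrpt_{begin_snap}_{end_snap}.html"

def ash_report_name(begin: datetime, end: datetime) -> str:
    # Matches the ashrpt_YYYYMMDDHH24MISS_YYYYMMDDHH24MISS.html pattern.
    fmt = "%Y%m%d%H%M%S"
    return f"ashrpt_{begin.strftime(fmt)}_{end.strftime(fmt)}.html"
```

For example, awr_report_name(123, 124) yields awrrpt_123_124.html, matching the example above.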


Method 2: Downloading via SQL Client using rdsadmin.rds_file_util.read_text_file

This method is useful if you want to automate the retrieval, or if you prefer to stay within your SQL client, but it requires more steps and manual file creation on your local machine.

Steps:

  1. Generate the Report (if not already done):

    Use the rdsadmin.rds_run_awr_report or rdsadmin.rds_run_ash_report as shown in Method 1, ensuring report_type => 'HTML'.

  2. Identify the Report Filename:

    You'll need the exact filename (e.g., awrrpt_123_124.html) and the directory it's in (usually BDUMP by default for these reports). You can list files in the BDUMP directory:

    SQL
    SELECT filename, filesize, mtime
    FROM TABLE(rdsadmin.rds_file_util.listdir('BDUMP'))
    WHERE filename LIKE 'awrrpt_%.html' OR filename LIKE 'ashrpt_%.html'
    ORDER BY mtime DESC;
    
  3. Read the HTML Content using SQL:

    The rdsadmin.rds_file_util.read_text_file function reads the content of the file. You'll need to spool this output to a local file.

    • For SQL*Plus (recommended for this method):

      SQL
      -- Set output formatting to prevent line breaks and headers in the HTML content
      SET HEADING OFF
      SET FEEDBACK OFF
      SET PAGESIZE 0
      SET LINESIZE 32767 -- Max line size to avoid wrapping HTML
      SET LONG 32767      -- Max long for CLOB output
      SET TRIMSPOOL ON    -- Trim trailing spaces
      
      -- Spool the output to a local HTML file
      SPOOL C:\Path\To\Your\Report\awrrpt_123_124.html -- <<< CHANGE THIS PATH AND FILENAME
      
      SELECT text
      FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP', 'awrrpt_123_124.html')); -- <<< CHANGE FILENAME
      
      SPOOL OFF
      SET HEADING ON
      SET FEEDBACK ON
      SET PAGESIZE 14
      SET LINESIZE 80 -- Reset your SQL*Plus settings
      
    • For SQL Developer/Toad:

      You might need to copy the CLOB output directly from the query result grid and paste it into a text editor, then save it as an .html file. This is more manual than SPOOL but works if SPOOL isn't an option or you prefer the GUI.

  4. Open the HTML File:

    Once saved to your local machine, open the .html file with your preferred web browser.
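The SPOOL workflow can also be automated from a script. A minimal sketch using the python-oracledb driver (connection details are placeholders; this simply issues the same rdsadmin.rds_file_util.read_text_file query shown above and writes the rows to a local file):

```python
def save_report(conn, directory: str, filename: str, local_path: str) -> None:
    """Stream an HTML report out of the given Oracle directory via
    rdsadmin.rds_file_util.read_text_file and write it to a local file."""
    with conn.cursor() as cur:
        cur.execute(
            "SELECT text FROM TABLE("
            "rdsadmin.rds_file_util.read_text_file(:dir_name, :file_name))",
            dir_name=directory, file_name=filename,
        )
        with open(local_path, "w") as out:
            for (line,) in cur:
                out.write((line or "") + "\n")

# Typical use (placeholder credentials/DSN):
# import oracledb
# conn = oracledb.connect(user="admin", password="...",
#                         dsn="mydb.xxxx.us-east-1.rds.amazonaws.com/ORCL")
# save_report(conn, "BDUMP", "awrrpt_123_124.html", "awrrpt_123_124.html")
```

This avoids the SQL*Plus formatting settings entirely, since the rows are written verbatim.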

You've provided a very good set of SQL queries targeting dba_hist_active_sess_history and dba_hist_snapshot, which are the core views for AWR and ASH data. These queries are excellent starting points for a detailed performance investigation.

However, since you're operating in an AWS RDS for Oracle environment, there are crucial considerations and necessary adjustments. Amazon RDS for Oracle restricts direct access to some DBMS_WORKLOAD_REPOSITORY functions and requires rdsadmin-specific packages for generating AWR/ASH reports and interacting with diagnostic files.

Here's my assessment and how to "tune" (adapt) them for your RDS environment and general best practices:

General Assessment:

  • Good Starting Point: The logic in each query directly targets the relevant performance metrics.

  • ASH-Focused: Most of the queries hit dba_hist_active_sess_history, which is excellent for detailed, granular analysis during spikes.

  • Missing Bind Variables for FETCH FIRST: While FETCH FIRST N ROWS ONLY is good, a bind variable could make it more flexible.

  • No Schema-Specific Filtering: For multi-tenant or multi-application databases, filtering by schema/user could be beneficial (though user_id is selected in one query).

  • DBID and Instance Number for AWR: DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML typically requires a DBID and instance number. For RDS, the instance number is usually 1, and the DBID can be retrieved from v$database.


Detailed Review and Tuning for Oracle RDS:

Let's go through each query, assess it, and provide the RDS-adapted version or tuning tips.

Crucial RDS Note: In RDS, you cannot call DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML in a SELECT statement from SQL*Plus/SQL Developer to get HTML output. You must use rdsadmin.rds_run_awr_report to generate the report, and then retrieve it from the RDS Console or with rdsadmin.rds_file_util.read_text_file.


1. 01_awr_generate_html.sql - Generate AWR HTML Report

  • Original Query:

    SQL
    SELECT output
    FROM TABLE(DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML(
        (SELECT dbid FROM v$database),
        1, -- Instance Number
        :begin_snap_id,
        :end_snap_id
    ));
    
  • Assessment:

    • Problematic for RDS: Direct invocation of DBMS_WORKLOAD_REPOSITORY.AWR_REPORT_HTML in this TABLE() format is generally not supported for direct output in RDS.

    • Correct Parameters: The use of DBID, instance number, begin_snap_id, and end_snap_id is correct for AWR.

  • Tuning/RDS Adaptation:

    • You must use the rdsadmin.rds_run_awr_report procedure. This procedure places the generated report file in a diagnostic directory on the RDS instance, which you then download via the AWS Console or read via rdsadmin.rds_file_util.read_text_file.

    • Recommendation:

      SQL
      -- RDS-Adapted: Generate AWR HTML Report
      -- This procedure will place the awrrpt_...html file in a diagnostic directory on RDS.
      -- You will then download it from the AWS RDS Console (Logs & events -> Logs) or read it using rdsadmin.rds_file_util.read_text_file.
      BEGIN
          rdsadmin.rds_run_awr_report(
              l_begin_snap => :begin_snap_id, -- Bind variable for AWR begin snapshot ID
              l_end_snap   => :end_snap_id,   -- Bind variable for AWR end snapshot ID
              l_report_type => 'HTML'         -- Specify HTML output
          );
      END;
      /
      
    • To Retrieve: Use AWS RDS Console (Logs & events tab) or SELECT text FROM TABLE(rdsadmin.rds_file_util.read_text_file('BDUMP', 'awrrpt_YOUR_BEGIN_SNAP_ID_YOUR_END_SNAP_ID.html')); (using SPOOL for SQL*Plus).
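The console download step also has an API equivalent: RDS exposes files in the Logs listing through the DownloadDBLogFilePortion API. A hedged sketch using boto3 (instance identifier and file name are placeholders; use the exact file name shown in the console listing, and note that large files come back in marker-paginated chunks):

```python
def download_rds_file(instance_id: str, log_file_name: str,
                      local_path: str, client=None) -> None:
    """Fetch a log/report file from RDS via DownloadDBLogFilePortion,
    following the pagination Marker until no more data is pending."""
    if client is None:
        import boto3  # real call path; requires AWS credentials
        client = boto3.client("rds")
    marker, chunks = "0", []
    while True:
        resp = client.download_db_log_file_portion(
            DBInstanceIdentifier=instance_id,
            LogFileName=log_file_name,
            Marker=marker,
        )
        chunks.append(resp.get("LogFileData") or "")
        if not resp.get("AdditionalDataPending"):
            break
        marker = resp["Marker"]
    with open(local_path, "w") as out:
        out.write("".join(chunks))

# e.g. download_rds_file("mydb-instance", "awrrpt_123_124.html", "awrrpt_123_124.html")
# (hypothetical names; copy the LogFileName exactly as the console lists it)
```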


2. 02_top_10_sessions_cpu.sql - Top 10 sessions consuming CPU

  • Original Query:

    SQL
    SELECT session_id, session_serial#, COUNT(*) AS samples
    FROM dba_hist_active_sess_history
    WHERE sample_time BETWEEN :begin_time AND :end_time
      AND session_state = 'ON CPU'
    GROUP BY session_id, session_serial#
    ORDER BY samples DESC
    FETCH FIRST 10 ROWS ONLY;
    
  • Assessment:

    • Excellent: Directly targets CPU-consuming sessions using session_state = 'ON CPU'. FETCH FIRST 10 ROWS ONLY is good.

  • Tuning/RDS Adaptation:

    • Include more identifying columns: PROGRAM, MODULE, USERNAME, and SQL_ID. This makes the output much more useful for debugging.

    • Join DBA_USERS to resolve user_id to a username (V$SESSION only helps for sessions that are still connected).

    • Recommendation:

      SQL
      -- Top N sessions consuming CPU during a specific period
      SELECT
          h.session_id,
          h.session_serial#,
          u.username,
          h.program,
          h.module,
          h.sql_id,
          COUNT(*) AS cpu_samples -- Renamed alias for clarity
      FROM dba_hist_active_sess_history h
      JOIN dba_users u ON h.user_id = u.user_id
      WHERE h.sample_time BETWEEN :begin_time AND :end_time
        AND h.session_state = 'ON CPU'
      GROUP BY
          h.session_id,
          h.session_serial#,
          u.username,
          h.program,
          h.module,
          h.sql_id
      ORDER BY cpu_samples DESC
      FETCH FIRST 10 ROWS ONLY; -- Use a bind variable here if you want: FETCH FIRST :num_rows ROWS ONLY
      

3. 03_top_10_sqls_cpu.sql - Top 10 SQLs by CPU

  • Original Query:

    SQL
    SELECT sql_id, COUNT(*) AS samples
    FROM dba_hist_active_sess_history
    WHERE sample_time BETWEEN :begin_time AND :end_time
      AND session_state = 'ON CPU'
    GROUP BY sql_id
    ORDER BY samples DESC
    FETCH FIRST 10 ROWS ONLY;
    
  • Assessment:

    • Good: Correctly identifies CPU-bound SQL.

  • Tuning/RDS Adaptation:

    • Get the SQL text: This is essential for understanding the query. You can join DBA_HIST_SQLTEXT.

    • Consider total elapsed time: While CPU is a focus, sometimes a query is high CPU and high elapsed time due to other waits.

    • Recommendation:

      SQL
      -- Top N SQLs by CPU consumption during a specific period
      SELECT
          h.sql_id,
          DBMS_LOB.SUBSTR(s.sql_text, 200, 1) AS sql_text_snippet, -- CLOB cannot appear in GROUP BY; take a VARCHAR2 snippet
          COUNT(*) AS cpu_samples
      FROM dba_hist_active_sess_history h
      JOIN dba_hist_sqltext s ON h.sql_id = s.sql_id AND h.dbid = s.dbid -- Join to get SQL text
      WHERE h.sample_time BETWEEN :begin_time AND :end_time
        AND h.session_state = 'ON CPU'
        AND h.sql_id IS NOT NULL -- Exclude background processes/non-SQL activity
      GROUP BY h.sql_id, DBMS_LOB.SUBSTR(s.sql_text, 200, 1)
      ORDER BY cpu_samples DESC
      FETCH FIRST 10 ROWS ONLY; -- Use a bind variable here if you want: FETCH FIRST :num_rows ROWS ONLY
      
    • Note on SQL_TEXT: DBA_HIST_SQLTEXT.SQL_TEXT is a CLOB, so it cannot appear directly in a GROUP BY clause; group by a VARCHAR2 snippet such as DBMS_LOB.SUBSTR(sql_text, 200, 1) instead. When selecting the full CLOB in SQL*Plus, you may also need SET LONG 20000 (or larger) to display the complete text.


4. 04_top_wait_events.sql - Top wait events

  • Original Query:

    SQL
    SELECT event, COUNT(*) AS waits
    FROM dba_hist_active_sess_history
    WHERE sample_time BETWEEN :begin_time AND :end_time
      AND session_state = 'WAITING'
    GROUP BY event
    ORDER BY waits DESC
    FETCH FIRST 10 ROWS ONLY;
    
  • Assessment:

    • Good: Directly identifies top wait events.

  • Tuning/RDS Adaptation:

    • Consider wait class: Grouping by wait_class can provide a higher-level view of the bottleneck (e.g., 'User I/O', 'Concurrency', 'Commit').

    • Recommendation:

      SQL
      -- Top N wait events during a specific period
      SELECT
          h.event,
          h.wait_class, -- Include wait class
          COUNT(*) AS wait_samples
      FROM dba_hist_active_sess_history h
      WHERE h.sample_time BETWEEN :begin_time AND :end_time
        AND h.session_state = 'WAITING'
        AND h.wait_class != 'Idle' -- Exclude idle waits
      GROUP BY h.event, h.wait_class
      ORDER BY wait_samples DESC
      FETCH FIRST 10 ROWS ONLY; -- Use a bind variable if desired
      

5. 05_top_users.sql - Top users by session count

  • Original Query:

    SQL
    SELECT user_id, COUNT(*) AS active_sessions
    FROM dba_hist_active_sess_history
    WHERE sample_time BETWEEN :begin_time AND :end_time
    GROUP BY user_id
    ORDER BY active_sessions DESC;
    
  • Assessment:

    • Good: Identifies active users.

  • Tuning/RDS Adaptation:

    • Get username: The user_id is less readable than the username.

    • Recommendation:

      SQL
      -- Top N users by active session samples during a specific period
      SELECT
          u.username,
          COUNT(*) AS active_session_samples -- Renamed for clarity (it's active session samples, not raw count)
      FROM dba_hist_active_sess_history h
      JOIN dba_users u ON h.user_id = u.user_id
      WHERE h.sample_time BETWEEN :begin_time AND :end_time
      GROUP BY u.username
      ORDER BY active_session_samples DESC
      FETCH FIRST 10 ROWS ONLY; -- Use a bind variable if desired
      

6. 06_top_modules_programs.sql - Top programs and modules

  • Original Query:

    SQL
    SELECT program, COUNT(*) AS samples
    FROM dba_hist_active_sess_history
    WHERE sample_time BETWEEN :begin_time AND :end_time
    GROUP BY program
    ORDER BY samples DESC;
    
  • Assessment:

    • Good: Identifies programs.

  • Tuning/RDS Adaptation:

    • Include module: Often, MODULE provides more granular detail than PROGRAM.

    • Recommendation:

      SQL
      -- Top N programs and modules by active session samples during a specific period
      SELECT
          h.program,
          h.module, -- Include module for more detail
          COUNT(*) AS active_session_samples
      FROM dba_hist_active_sess_history h
      WHERE h.sample_time BETWEEN :begin_time AND :end_time
      GROUP BY h.program, h.module
      ORDER BY active_session_samples DESC
      FETCH FIRST 10 ROWS ONLY; -- Use a bind variable if desired
      

7. 07_db_load.sql - DB Load profile

  • Original Query:

    SQL
    SELECT TO_CHAR(sample_time, 'YYYY-MM-DD HH24:MI') AS minute,
           COUNT(*) AS active_sessions
    FROM dba_hist_active_sess_history
    WHERE sample_time BETWEEN :begin_time AND :end_time
    GROUP BY TO_CHAR(sample_time, 'YYYY-MM-DD HH24:MI')
    ORDER BY minute;
    
  • Assessment:

    • Excellent: Provides a minute-by-minute (or second-by-second if you adapt TO_CHAR) view of active sessions, directly showing the load profile over time.

  • Tuning/RDS Adaptation:

    • Recommendation: Keep as is. This query is fundamental for visualizing load spikes from ASH. You could get more granular with seconds if needed: TO_CHAR(sample_time, 'YYYY-MM-DD HH24:MI:SS').
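Because DBA_HIST_ACTIVE_SESS_HISTORY retains roughly one sample every 10 seconds, the per-minute counts from this query convert directly into average active sessions (AAS). A small post-processing helper (Python, illustrative; adjust the interval if your AWR retention settings differ):

```python
def aas_per_minute(samples_by_minute: dict[str, int],
                   sample_interval_s: int = 10) -> dict[str, float]:
    """Convert per-minute DBA_HIST ASH sample counts into average active
    sessions: each retained sample represents ~sample_interval_s seconds
    of session activity, spread over a 60-second bucket."""
    return {minute: count * sample_interval_s / 60.0
            for minute, count in samples_by_minute.items()}

# 12 samples in one minute at a 10 s interval corresponds to about 2 AAS.
```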


8. 08_blocking_sessions.sql - Blocking sessions

  • Original Query:

    SQL
    SELECT blocking_session, session_id, session_serial#, COUNT(*) AS samples
    FROM dba_hist_active_sess_history
    WHERE sample_time BETWEEN :begin_time AND :end_time
      AND blocking_session IS NOT NULL
    GROUP BY blocking_session, session_id, session_serial#
    ORDER BY samples DESC;
    
  • Assessment:

    • Good: Identifies blocking chains from ASH.

  • Tuning/RDS Adaptation:

    • Get more info on blocking/blocked sessions: Username, program, SQL ID.

    • Recommendation:

      SQL
      -- Top N blocking/blocked session pairs by active samples
      SELECT
          h.blocking_session,
          h.blocking_session_serial#,
          h.session_id,
          h.session_serial#,
          u.username AS blocked_username,
          h.sql_id AS blocked_sql_id,
          h.event AS blocked_wait_event,
          COUNT(*) AS samples
      FROM dba_hist_active_sess_history h
      LEFT JOIN dba_users u ON h.user_id = u.user_id
      WHERE h.sample_time BETWEEN :begin_time AND :end_time
        AND h.blocking_session IS NOT NULL -- This row is a session being blocked
      GROUP BY
          h.blocking_session,
          h.blocking_session_serial#,
          h.session_id,
          h.session_serial#,
          u.username,
          h.sql_id,
          h.event
      ORDER BY samples DESC
      FETCH FIRST 10 ROWS ONLY; -- Use a bind variable if desired
      
    • Note: blocking_session in dba_hist_active_sess_history holds the blocker's session ID (SID), not a user ID, so it cannot be joined to dba_users directly. To see what the blocker itself was running, self-join dba_hist_active_sess_history on blocking_session = session_id at the same sample_time.


9. 09_top_objects.sql - Top accessed objects

  • Original Query:

    SQL
    SELECT o.object_name, o.object_type, COUNT(*) AS access_count
    FROM dba_hist_active_sess_history h
    JOIN dba_objects o ON h.current_obj# = o.object_id
    WHERE h.sample_time BETWEEN :begin_time AND :end_time
    GROUP BY o.object_name, o.object_type
    ORDER BY access_count DESC;
    
  • Assessment:

    • Good: Identifies hot objects.

  • Tuning/RDS Adaptation:

    • Include owner: Crucial for unique identification of objects across schemas.

    • Filter for relevant object types: You might only be interested in TABLE or INDEX.

    • Recommendation:

      SQL
      -- Top N accessed objects by active session samples
      SELECT
          o.owner AS object_owner, -- Include owner
          o.object_name,
          o.object_type,
          COUNT(*) AS access_samples
      FROM dba_hist_active_sess_history h
      JOIN dba_objects o ON h.current_obj# = o.object_id
      WHERE h.sample_time BETWEEN :begin_time AND :end_time
        AND h.current_obj# IS NOT NULL
        AND o.owner NOT IN ('SYS', 'SYSTEM', 'DBSNMP', 'OUTLN', 'AUDSYS', 'RDSADMIN') -- Exclude internal Oracle objects
        AND o.object_type IN ('TABLE', 'INDEX', 'PARTITION', 'SUBPARTITION') -- Focus on common types
      GROUP BY o.owner, o.object_name, o.object_type
      ORDER BY access_samples DESC
      FETCH FIRST 10 ROWS ONLY; -- Use a bind variable if desired
      

By using these adapted queries, you'll be able to extract much richer and more contextual information from your Oracle RDS AWR/ASH data, enabling a more precise performance diagnosis. Remember to always provide the correct bind variables (:begin_time, :end_time, :begin_snap_id, :end_snap_id) when executing these queries.
