In your Oracle RDS database with the PLAN1 Resource Manager plan using the RATIO management method, you have:
- LOW_GROUP: Assigned a 30% UTILIZATION_LIMIT, shared by users like USER_ABC and USER_123 (and possibly others).
- CPU_LIMITED_USER_GROUP: Assigned a 70% UTILIZATION_LIMIT, with USER_A assigned to it.
- Other Groups: HIGH_GROUP and OTHER_GROUPS also exist in PLAN1, with their own MGMT_P1 weights and possibly UTILIZATION_LIMIT settings.
You’ve asked what happens when users assigned to LOW_GROUP (e.g., USER_ABC, USER_123) and CPU_LIMITED_USER_GROUP (e.g., USER_A) are active in the database at the same time. Below is a layman explanation, a technical breakdown, and how the behavior applies to your setup. It takes into account the RATIO method, UTILIZATION_LIMIT, the default parallel limit (since PARALLEL_DEGREE_LIMIT_P1 was not specified), your PLAN1 configuration, the directive settings (SWITCH_ESTIMATE => FALSE, SWITCH_FOR_CALL => FALSE, etc.), and RDS constraints.
Layman Explanation
Think of the database as a kitchen with a limited amount of cooking capacity (CPU, like a big pizza). The PLAN1 plan is like a set of rules for dividing the pizza among different tables (consumer groups):
- LOW_GROUP (e.g., USER_ABC, USER_123) gets a small slice, capped at 30% of the pizza, no matter how hungry they are.
- CPU_LIMITED_USER_GROUP (e.g., USER_A) gets a bigger slice, capped at 70% of the pizza.
- HIGH_GROUP and OTHER_GROUPS get their own slices, based on their own rules.
When users from both LOW_GROUP and CPU_LIMITED_USER_GROUP are active (i.e., running queries), they compete for the pizza (CPU). Here’s what happens:
- If the kitchen is busy (CPU usage hits 100%), the RATIO method decides how much each table gets based on their priority (MGMT_P1 weights, e.g., 15 for LOW_GROUP, 70 for CPU_LIMITED_USER_GROUP). USER_A gets a much larger share (up to 70%) compared to LOW_GROUP users (up to 30% total).
- If multiple users are at the same table (e.g., USER_ABC and USER_123 in LOW_GROUP), they split their table’s slice (30% max).
- The UTILIZATION_LIMIT ensures no table takes more than its cap, even if extra pizza is available.
So, USER_A will likely dominate CPU usage (up to 70%), while LOW_GROUP users share a smaller portion (up to 30% total), and HIGH_GROUP or OTHER_GROUPS get whatever’s left based on their weights.
Technical Breakdown
Let’s analyze the behavior when users from LOW_GROUP and CPU_LIMITED_USER_GROUP are active simultaneously, based on your PLAN1 configuration.
Your Setup
From your previous inputs:
- LOW_GROUP Directive:

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    PLAN              => 'PLAN1',
    GROUP_OR_SUBPLAN  => 'LOW_GROUP',  -- corrected from the LOQ_GROUP typo
    SWITCH_ESTIMATE   => FALSE,
    SWITCH_FOR_CALL   => FALSE,
    UTILIZATION_LIMIT => 30,
    COMMENT           => NULL);
END;
/
```

- MGMT_P1: Not specified in your snippet, but adjusted to 15 in the script for RATIO balance.
- UTILIZATION_LIMIT => 30: Caps LOW_GROUP at 30% CPU usage.
- SWITCH_ESTIMATE => FALSE, SWITCH_FOR_CALL => FALSE: Keeps sessions in LOW_GROUP, with no automatic switching.
- No PARALLEL_DEGREE_LIMIT_P1, so the default parallel limit applies (governed by PARALLEL_MAX_SERVERS).
- CPU_LIMITED_USER_GROUP Directive (from the script):

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PLAN_DIRECTIVE(
    PLAN              => 'PLAN1',
    GROUP_OR_SUBPLAN  => 'CPU_LIMITED_USER_GROUP',
    COMMENT           => 'Limit CPU to 70% for CPU_LIMITED_USER_GROUP',
    MGMT_P1           => 70,
    UTILIZATION_LIMIT => 70,
    SWITCH_ESTIMATE   => FALSE,
    SWITCH_FOR_CALL   => FALSE);
END;
/
```

- Note: MGMT_MTH => 'RATIO', ACTIVE_SESS_POOL_MTH, PARALLEL_DEGREE_LIMIT_MTH, and QUEUEING_MTH are parameters of DBMS_RESOURCE_MANAGER.CREATE_PLAN, not CREATE_PLAN_DIRECTIVE, so they belong on the PLAN1 definition itself (the original snippet mixed them into the directive call).
- MGMT_P1 => 70: Gives high CPU priority (70/100 relative to the other groups).
- UTILIZATION_LIMIT => 70: Caps CPU_LIMITED_USER_GROUP at 70% CPU.
- No PARALLEL_DEGREE_LIMIT_P1: The default parallel limit applies.
- Other Groups:
- HIGH_GROUP: Adjusted to MGMT_P1 => 10 in the script.
- OTHER_GROUPS: Adjusted to MGMT_P1 => 5.
- Total MGMT_P1 weights: 70 + 15 + 10 + 5 = 100. The RATIO method uses relative weights, so the total need not be exactly 100, but summing to 100 lets each weight read directly as a percentage.
- Users:
- LOW_GROUP: USER_ABC, USER_123 (and possibly others).
- CPU_LIMITED_USER_GROUP: USER_A.
- HIGH_GROUP, OTHER_GROUPS: Other users or default sessions.
What Happens When Users Are Active Simultaneously?
When users from LOW_GROUP (e.g., USER_ABC, USER_123) and CPU_LIMITED_USER_GROUP (e.g., USER_A) are active, Oracle Resource Manager allocates CPU based on the RATIO method and enforces UTILIZATION_LIMIT caps. Here’s the detailed behavior:
- CPU Allocation with RATIO Method:
- The RATIO method allocates CPU based on MGMT_P1 weights when the database is CPU-bound (total CPU usage reaches 100%).
- Weights: CPU_LIMITED_USER_GROUP (70), LOW_GROUP (15), HIGH_GROUP (10), OTHER_GROUPS (5).
- Proportional Share (when CPU-bound):
- CPU_LIMITED_USER_GROUP: 70 / (70 + 15 + 10 + 5) = 70% of available CPU (before UTILIZATION_LIMIT).
- LOW_GROUP: 15 / 100 = 15% of available CPU.
- HIGH_GROUP: 10 / 100 = 10% of available CPU.
- OTHER_GROUPS: 5 / 100 = 5% of available CPU.
- If CPU is not fully utilized (e.g., only 50% total usage), groups can use more than their proportional share, up to their UTILIZATION_LIMIT.
- UTILIZATION_LIMIT Enforcement:
- LOW_GROUP: Capped at 30% CPU. If USER_ABC and USER_123 are active, they share this 30% (e.g., ~15% each if equally demanding). Even if CPU is available, LOW_GROUP cannot exceed 30%.
- CPU_LIMITED_USER_GROUP: Capped at 70% CPU. If USER_A is the only user, they can use up to 70%. If others join this group later, they share the 70% cap.
- Interaction: When both groups are active:
- USER_A can dominate CPU usage (up to 70%), leaving at least 30% for others.
- LOW_GROUP is limited to 30% total, shared among its users.
- If HIGH_GROUP or OTHER_GROUPS users are active, they compete for the remaining CPU (after USER_A’s 70% and LOW_GROUP’s 30%), based on their MGMT_P1 (10 and 5).
- Parallel Execution:
- Since no PARALLEL_DEGREE_LIMIT_P1 is set for either group, the default parallel limit applies (governed by PARALLEL_MAX_SERVERS, e.g., 40). Queries from USER_A or LOW_GROUP users can request high parallelism (e.g., /*+ PARALLEL(8) */), but:
- USER_A’s queries are capped by the 70% CPU limit, so excessive parallelism may be throttled.
- LOW_GROUP queries are capped at 30% CPU, limiting their parallel execution impact.
- Example: If USER_A runs a query with PARALLEL(8), it uses up to 8 parallel servers but cannot exceed 70% CPU. LOW_GROUP users share 30% CPU, so their parallel queries are more constrained.
- Behavior Scenarios:
- Only USER_A Active: USER_A can use up to 70% CPU, with parallelism up to PARALLEL_MAX_SERVERS (e.g., 40), depending on query hints or table settings.
- Only LOW_GROUP Users Active: USER_ABC and USER_123 share 30% CPU (e.g., ~15% each), with parallelism up to PARALLEL_MAX_SERVERS.
- Both Active, CPU-Bound:
- USER_A: Gets ~70% CPU (per MGMT_P1 and capped by UTILIZATION_LIMIT).
- LOW_GROUP: Gets ~15% CPU (per MGMT_P1), capped at 30%, shared by USER_ABC and USER_123 (e.g., ~7.5% each).
- HIGH_GROUP and OTHER_GROUPS: Split the remaining ~15% (10% and 5% per MGMT_P1).
- Both Active, Not CPU-Bound: If total CPU usage is below 100% (e.g., 50%), USER_A can use up to 70%, and LOW_GROUP up to 30%, without strict RATIO enforcement, as there’s enough CPU to go around.
- Switching Settings:
- SWITCH_ESTIMATE => FALSE and SWITCH_FOR_CALL => FALSE ensure USER_A stays in CPU_LIMITED_USER_GROUP and LOW_GROUP users stay in LOW_GROUP for their entire sessions, preventing unexpected switches.
- Impact on Other Groups:
- HIGH_GROUP and OTHER_GROUPS may get reduced CPU priority (MGMT_P1 => 10 and 5) when USER_A is active, especially if CPU-bound, due to CPU_LIMITED_USER_GROUP’s high MGMT_P1 => 70.
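The allocation rules above can be sanity-checked with a toy model. This is a simplified illustration, not Oracle's actual scheduler: under full CPU contention, normalize the MGMT_P1 weights into proportional shares, pin any group that would exceed its UTILIZATION_LIMIT at that cap, and redistribute the leftover CPU to the remaining groups in weight proportion.

```python
# Simplified, illustrative model of RATIO weights + UTILIZATION_LIMIT caps
# under full CPU contention. Not Oracle's actual scheduler.

def cpu_split(weights, caps):
    """weights: {group: MGMT_P1 weight}; caps: {group: UTILIZATION_LIMIT %}."""
    share = {g: 0.0 for g in weights}
    remaining = 100.0            # percent of CPU still to hand out
    active = dict(weights)       # groups still competing for CPU
    while active and remaining > 1e-9:
        total_w = sum(active.values())
        # Tentative proportional allocation of the remaining CPU
        grant = {g: remaining * w / total_w for g, w in active.items()}
        capped = {g for g in active
                  if caps.get(g) is not None and share[g] + grant[g] >= caps[g]}
        if not capped:
            for g, x in grant.items():
                share[g] += x
            remaining = 0.0
            break
        # Pin capped groups at their limit; redistribute the rest next pass
        for g in capped:
            remaining -= caps[g] - share[g]
            share[g] = caps[g]
            del active[g]
    return share

weights = {'CPU_LIMITED_USER_GROUP': 70, 'LOW_GROUP': 15,
           'HIGH_GROUP': 10, 'OTHER_GROUPS': 5}
caps = {'CPU_LIMITED_USER_GROUP': 70, 'LOW_GROUP': 30}
print(cpu_split(weights, caps))  # CPU-bound, all groups active: 70 / 15 / 10 / 5
```

With only CPU_LIMITED_USER_GROUP and LOW_GROUP active, the same model gives 70% and 30%: USER_A's proportional share exceeds the 70% cap, and LOW_GROUP absorbs the headroom up to its own 30% cap, matching the behavior described above.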
Example Scenario
- Users Active: USER_A (in CPU_LIMITED_USER_GROUP), USER_ABC, and USER_123 (in LOW_GROUP) run CPU-intensive queries simultaneously.
- CPU-Bound (100% Usage):
- USER_A: Gets up to 70% CPU (e.g., for a parallel query with DOP 8, capped at 70% CPU).
- LOW_GROUP: Gets up to 15% CPU (per MGMT_P1), but capped at 30%. USER_ABC and USER_123 share this (e.g., ~7.5% each).
- HIGH_GROUP: Gets ~10% CPU (if active).
- OTHER_GROUPS: Gets ~5% CPU (if active).
- Not CPU-Bound (e.g., 50% Usage):
- USER_A can use up to 50% (or 70% if demand increases), and LOW_GROUP can use up to 30% (shared), as there’s no contention.
- Parallelism: Both groups can use high DOP (e.g., 8 or 16), but USER_A’s queries are less likely to be throttled due to the higher 70% CPU cap.
Verify Behavior
To observe the behavior when users from both groups are active:
- Check Mappings:

```sql
SELECT attribute, value, consumer_group
FROM   dba_rsrc_group_mappings
WHERE  attribute = 'ORACLE_USER'
AND    value IN ('USER_A', 'USER_ABC', 'USER_123');
```
- Run Test Queries:
- As USER_A:

```sql
SELECT /*+ PARALLEL(8) */ COUNT(*) FROM dba_objects, dba_objects;
```

- As USER_ABC or USER_123:

```sql
SELECT /*+ PARALLEL(8) */ COUNT(*) FROM dba_objects, dba_objects;
```
- Monitor Sessions:

```sql
SELECT sid, username, resource_consumer_group
FROM   v$session
WHERE  username IN ('USER_A', 'USER_ABC', 'USER_123');
```
- Check CPU Usage:

```sql
SELECT name, active_sessions, cpu_wait_time, consumed_cpu_time
FROM   v$rsrc_consumer_group
WHERE  name IN ('LOW_GROUP', 'CPU_LIMITED_USER_GROUP', 'HIGH_GROUP', 'OTHER_GROUPS');
```
- Monitor in CloudWatch: Check RDS > Monitoring > CPUUtilization for overall instance CPU. CloudWatch reports only the instance total, so use V$RSRC_CONSUMER_GROUP to confirm that CPU_LIMITED_USER_GROUP stays near 70% and LOW_GROUP near 30%.
Impact on Your Setup
- LOW_GROUP (USER_ABC, USER_123): Limited to 30% CPU, shared among users. When USER_A is active, their lower MGMT_P1 (15) means they get less priority, but their 30% cap ensures some CPU availability.
- CPU_LIMITED_USER_GROUP (USER_A): Prioritized with up to 70% CPU, likely dominating resource usage when CPU-bound.
- HIGH_GROUP and OTHER_GROUPS: May get reduced CPU (10% and 5% per MGMT_P1) when USER_A is active, especially in high-load scenarios.
- Parallelism: Default limit (PARALLEL_MAX_SERVERS) allows flexible parallelism, but USER_A’s queries benefit more due to the higher CPU cap.
Recommendations
- Test in Non-Production: Simulate concurrent activity to confirm CPU allocation:
- Run parallel queries as USER_A, USER_ABC, and USER_123.
- Monitor V$RSRC_CONSUMER_GROUP and CloudWatch to verify limits.
- Adjust MGMT_P1 if Needed: If LOW_GROUP or HIGH_GROUP users experience too much throttling, consider retuning the MGMT_P1 values (e.g., LOW_GROUP to 20 and HIGH_GROUP to 15, lowering CPU_LIMITED_USER_GROUP to 60) while keeping the total at 100.
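When retuning, a quick check that the adjusted weights still total 100 (and what percentage each implies) is cheap insurance. The weights below are hypothetical retuned values for illustration, not your current plan:

```python
# Hypothetical retuned MGMT_P1 weights -- adjust to taste, keep the total at 100
weights = {'CPU_LIMITED_USER_GROUP': 60, 'LOW_GROUP': 20,
           'HIGH_GROUP': 15, 'OTHER_GROUPS': 5}
total = sum(weights.values())
assert total == 100, f"MGMT_P1 weights sum to {total}, expected 100"
# With a total of 100, each weight reads directly as a CPU percentage
shares = {group: 100 * w / total for group, w in weights.items()}
print(shares)
```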
- Check PARALLEL_MAX_SERVERS: If parallelism is too high, set PARALLEL_DEGREE_LIMIT_P1 (e.g., 8) in the directive to limit impact:

```sql
SHOW PARAMETER PARALLEL_MAX_SERVERS;
```

```sql
BEGIN
  DBMS_RESOURCE_MANAGER.CREATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.UPDATE_PLAN_DIRECTIVE(
    PLAN                         => 'PLAN1',
    GROUP_OR_SUBPLAN             => 'CPU_LIMITED_USER_GROUP',
    NEW_PARALLEL_DEGREE_LIMIT_P1 => 8);
  DBMS_RESOURCE_MANAGER.VALIDATE_PENDING_AREA;
  DBMS_RESOURCE_MANAGER.SUBMIT_PENDING_AREA;
END;
/
```

(UPDATE_PLAN_DIRECTIVE must run inside a pending area, created and submitted as shown.)
Notes
- Typo: Assumes LOQ_GROUP was a typo for LOW_GROUP.
- Default Parallel Limit: With no PARALLEL_DEGREE_LIMIT_P1, parallel server usage is bounded only by the instance-wide PARALLEL_MAX_SERVERS, while CPU consumption remains capped by each group's UTILIZATION_LIMIT.
- Error Handling: The script avoids ORA-01031 and DELETE_CONSUMER_GROUP_MAPPING issues.
Run the backup script first, then the provided script (create_and_assign_cpu_limited_user_group.sql) to assign USER_A to CPU_LIMITED_USER_GROUP. Test concurrent activity to observe CPU allocation.