pga_aggregate_target is a target, as opposed to a hard limit, so it isn't unusual to go above it. Going 13G above it, however, is unusual. There is an enhancement request to add a hard-limit setting, but no such setting currently exists. There is a known bug in 10.2.0.3 with certain statements burning up memory (bug 5947623); however, the 10.2.0.3/AIX version of that patch is 64-bit and the SR header says you are on 32-bit, so it isn't an option, and 10.2.0.3 is old enough that a new version of the patch can no longer be produced. As I was unable to see any errors (e.g., ORA-4030), there does not seem to be any problem with the operation of the database.

PGA_AGGREGATE_TARGET does not set a hard limit on PGA usage. It is only a target value used to dynamically size the process work areas, and it does not constrain other areas of the PGA, which are allowed to grow beyond this limit. Certain areas of the PGA cannot be controlled by initialization parameters at all, such as PL/SQL memory collections (PL/SQL tables and varrays). Depending on the programming code and the amount of data being handled, these areas can grow very large (up to a 20G internal limit in 10g) and can consume large amounts of memory. This memory growth can be controlled by good programming practices, for example by using the LIMIT clause with BULK COLLECT (a sketch is included at the end of this note). Conversely, programming mistakes can also lead to excessive memory usage.

You can take steps to control the size of a process. However, from within the database framework you cannot place a hard limit on the size of a process by setting any initialization parameter or database configuration. You can limit the size of a process from the OS side by setting kernel limits or user shell limits, but this leads to ORA-4030 errors and will cause transaction rollback. As noted in bug 7279150, "... this is not a hard limit and that we will exceed it when it is undersized and the workload increases rapidly, such as when they start the workload for their testing or when they spawn a new set of sessions from their application servers."

As the DBA, you need to get confirmation from your operating system administrator as to whether the amount of memory reported as being in use by a process includes shared memory. If shared memory is included in the value displayed by the operating system utility, then the shared pool size must be deducted from that value to know how much private memory the process is actually using. See note 174555.1 "UNIX Determining the Size of an Oracle Process".

If an RDBMS user process is using more private memory than expected, then the DBA has three options:
- Do nothing.
- Monitor the RDBMS user session to find out what SQL statements are being performed, or were being performed, by that session. Using the SQL*Trace functionality of the database would normally be done if the end user cannot tell you directly what they were doing when memory usage was higher than expected, or what they are doing right now (see the tracing sketch below).
- Kill that RDBMS user session (see the sketch below).
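As a rough illustration of the LIMIT clause with BULK COLLECT mentioned above, here is a minimal PL/SQL sketch. The table and column names (big_table, payload) and the batch size of 1000 are hypothetical placeholders, not taken from the SR; the point is only that the collection never holds more than one batch of rows at a time, so PL/SQL collection memory stays bounded instead of growing with the size of the result set.

DECLARE
  CURSOR c IS SELECT payload FROM big_table;          -- hypothetical table/column
  TYPE t_payload_tab IS TABLE OF big_table.payload%TYPE;
  l_rows t_payload_tab;
BEGIN
  OPEN c;
  LOOP
    -- Fetch at most 1000 rows per round trip; this caps the collection size
    FETCH c BULK COLLECT INTO l_rows LIMIT 1000;
    EXIT WHEN l_rows.COUNT = 0;
    FOR i IN 1 .. l_rows.COUNT LOOP
      NULL;  -- process one row of the current batch here
    END LOOP;
  END LOOP;
  CLOSE c;
END;
/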
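Before (or in addition to) asking the OS administrator about shared memory, you can look at the PGA figures the database itself tracks per server process, which already exclude the SGA. A minimal sketch against V$SESSION and V$PROCESS, ordered so the biggest private-memory consumers come first:

SELECT s.sid, s.serial#, s.username, p.spid,
       ROUND(p.pga_used_mem  / 1024 / 1024) AS pga_used_mb,
       ROUND(p.pga_alloc_mem / 1024 / 1024) AS pga_alloc_mb,
       ROUND(p.pga_max_mem   / 1024 / 1024) AS pga_max_mb
  FROM v$session s
  JOIN v$process p ON p.addr = s.paddr
 ORDER BY p.pga_alloc_mem DESC;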
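For the second and third options in the list above, the usual tools are DBMS_MONITOR for SQL*Trace and ALTER SYSTEM KILL SESSION. A minimal sketch, assuming the SID and SERIAL# of the suspect session have already been identified from V$SESSION (1234 and 56789 below are placeholders):

-- Enable SQL*Trace for the suspect session
EXEC DBMS_MONITOR.SESSION_TRACE_ENABLE(session_id => 1234, serial_num => 56789, waits => TRUE, binds => FALSE);

-- ... review the trace file written to the user dump destination, then switch tracing off
EXEC DBMS_MONITOR.SESSION_TRACE_DISABLE(session_id => 1234, serial_num => 56789);

-- Last resort: terminate the session (its open transaction is rolled back)
ALTER SYSTEM KILL SESSION '1234,56789';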
This article is reposted from maclean_007's 51CTO blog; original link: http://blog.51cto.com/maclean/1276748