Memory
The Greenplum Command Center Query Monitor reports the current total memory consumed by all processes executing a query. When there is insufficient memory available for a query to complete, the query is shown with an error status in the query monitor and an out-of-memory error is logged.
If you have enabled resource groups in Greenplum Database, you can manage the amount of memory available to queries by tuning resource group parameters, and by setting Greenplum Database configuration parameters that affect resource group memory.
For a detailed description of resource group memory management, see Using Resource Groups in the Greenplum Database Administrator Guide.
If you are using resource queues, see Memory and Resource Management with Resource Queues and Using Resource Queues for ways to troubleshoot memory problems with resource queues.
See Tuning SQL Queries for help with query optimization.
The following summary describes the resource group parameters and the related Greenplum Database server configuration parameters that determine how much memory is available to database queries, and explains how configuration choices affect concurrency, spill file usage, and query performance.
Resource Group Memory Configuration Parameters
A resource group has parameters CONCURRENCY, MEMORY_LIMIT, MEMORY_SHARED_QUOTA, and MEMORY_SPILL_RATIO, which determine how much memory is allocated to execute a query. The CPU_LIMIT parameter has no effect on memory allocation. See the CREATE RESOURCE GROUP SQL reference for command syntax and information about these parameters.
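As an illustration, a resource group that sets all four memory parameters might be created as follows. The group name and values are hypothetical, and the sketch assumes Greenplum 6 syntax, in which a CPU parameter such as CPU_RATE_LIMIT is also required:

```sql
-- Hypothetical example: reserve 20% of resource group memory for this group,
-- share half of it among the group's transactions, allow 10 concurrent
-- transactions, and spill when an operator reaches 20% of its slot memory.
CREATE RESOURCE GROUP rg_reporting WITH (
    CONCURRENCY=10,          -- number of transaction slots
    CPU_RATE_LIMIT=20,       -- required, but has no effect on memory allocation
    MEMORY_LIMIT=20,         -- percent of resource group memory for this group
    MEMORY_SHARED_QUOTA=50,  -- percent of the group's memory that is shared
    MEMORY_SPILL_RATIO=20    -- percent of fixed slot memory before spilling
);
```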
MEMORY_LIMIT
The percentage of the memory managed by resource groups that is reserved for this resource group. The sum of all resource groups' MEMORY_LIMITs must not exceed 100. If the sum of all resource groups' MEMORY_LIMITs is less than 100, the remaining, unallocated memory is global resource group shared memory, available to queries from all resource groups on a first-come, first-served basis.

MEMORY_SHARED_QUOTA
The percentage of the resource group's memory that is shared among all transactions in the group. The remainder of the group's memory is its fixed memory.

CONCURRENCY
The maximum number of concurrent transactions permitted in the resource group. The group's fixed memory is divided equally into CONCURRENCY transaction slots. Every transaction starts with this fixed portion of memory and, if needed, Greenplum Database uses additional memory from the resource group shared memory and global resource group shared memory.

MEMORY_SPILL_RATIO
The percentage of a transaction's fixed memory at which memory-intensive operators begin to spill to disk. You can override this value for a session by setting the memory_spill_ratio configuration parameter in the session.

When a query executes, Greenplum Database allocates memory to it from the fixed portion of the resource group's memory. If the query needs more memory and the resource group has available shared memory, Greenplum Database allocates additional memory. If insufficient shared memory is available, Greenplum Database allocates additional memory from global shared memory, if available. If the required memory is not available, the transaction fails.
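The division of memory these parameters describe can be worked through with simple arithmetic. Assuming, hypothetically, that a segment has 8 GB of resource group memory and a group is defined with MEMORY_LIMIT=20, MEMORY_SHARED_QUOTA=50, CONCURRENCY=10, and MEMORY_SPILL_RATIO=20:

```sql
-- Illustrative arithmetic only; the 8 GB of per-segment resource group
-- memory and the parameter values are assumptions, not recommendations.
SELECT
    8.0 * 0.20                          AS group_memory_gb, -- MEMORY_LIMIT=20: 1.6 GB for the group
    8.0 * 0.20 * 0.50                   AS shared_gb,       -- MEMORY_SHARED_QUOTA=50: 0.8 GB shared
    8.0 * 0.20 * (1 - 0.50) / 10        AS slot_gb,         -- fixed 0.8 GB over CONCURRENCY=10 slots: 0.08 GB each
    8.0 * 0.20 * (1 - 0.50) / 10 * 0.20 AS spill_gb;        -- MEMORY_SPILL_RATIO=20: spill at 0.016 GB
```

Each transaction starts with one 0.08 GB slot; requests beyond that draw first on the group's 0.8 GB shared memory, then on global resource group shared memory.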
Greenplum Database Memory Configuration Parameters
The following Greenplum Database configuration parameters affect resource group memory allocation and concurrency.
gp_resource_group_memory_limit
The percentage of each segment host's system memory that Greenplum Database manages with resource groups. The default is 0.7 (70%). This memory is divided equally among the primary segments on each host, and further divided among resource groups with the MEMORY_LIMIT resource group parameter. Any memory not allocated to resource groups becomes global shared memory available to queries from all resource groups. See gp_resource_group_memory_limit for a complete reference for this parameter.

gp_resgroup_memory_policy
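To see how these settings are currently applied, you can inspect the server configuration parameter and the per-group parameters. This sketch assumes the gp_toolkit.gp_resgroup_config view that ships with Greenplum 6:

```sql
-- Fraction of host system memory managed by resource groups
SHOW gp_resource_group_memory_limit;

-- Memory parameters for each resource group (gp_toolkit view, Greenplum 6)
SELECT groupname, concurrency, memory_limit,
       memory_shared_quota, memory_spill_ratio
FROM   gp_toolkit.gp_resgroup_config;
```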
The default value, eager_free, re-allocates memory from completed operators to operators later in the query plan. The alternative value for this parameter, auto, allocates a fixed amount of memory to operators that are not memory-intensive and the rest to the memory-intensive operators. The default value is usually the optimal setting. See gp_resgroup_memory_policy for a complete reference for this parameter.

memory_spill_ratio
You can override a resource group's MEMORY_SPILL_RATIO value by setting the memory_spill_ratio configuration parameter in the session. The value is a percentage of the fixed memory allocation for transactions in the resource group, expressed as an integer from 0 to 100. The performance of queries with low memory requirements can be improved by setting the memory_spill_ratio parameter in the session to a low setting, for example 0 to 2. See memory_spill_ratio for more information about this parameter.
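For a query known to need little memory, the session override described above might look like this (the table and query are hypothetical):

```sql
-- Let this session's operators spill to disk early,
-- leaving more memory for other transactions in the group
SET memory_spill_ratio = 2;

SELECT count(*) FROM sales;  -- hypothetical low-memory query

-- Restore the resource group's MEMORY_SPILL_RATIO for the rest of the session
RESET memory_spill_ratio;
```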