CPU skew occurs when the work to execute a query is not distributed evenly among the segments.
The CPU metric is the average of the CPU percentages used by the processes executing the query. The CPU skew metric is a variance statistic based on the difference between this average and each segment's current CPU metric. CPU skew varies from 0.0, meaning no skew and evenly distributed work, to 1.0, meaning severe skew. The lower the skew metric, the more fully the database cluster's resources are utilized.
CPU skew is usually related to the volume of data processed by the segments while executing the query plan. There are two types of skew you should investigate: data skew and computational skew.
A high CPU skew may be an indication of data skew, where tables used by the query are distributed unevenly, so that some segments have more data to process than their peers. You can check for data skew in a table by running a query like this one:
=# SELECT gp_segment_id, COUNT(*) FROM <table-name> GROUP BY gp_segment_id;
The row count should be approximately equal for each segment. If the rows are distributed unevenly, check the distribution key for the table. A good distribution key is a column, or list of columns, with unique or nearly unique values, such as the table's primary key. Setting the distribution policy to DISTRIBUTED RANDOMLY also ensures a well-distributed table, but it precludes performance-enhancing strategies such as co-located joins between tables distributed on equivalent keys.
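If an uneven distribution key turns out to be the cause, the table can be redistributed. A minimal sketch, assuming a hypothetical table `sales` with a nearly unique `id` column:

```sql
-- Redistribute on a nearly unique column (table and column names are
-- hypothetical). Greenplum rebalances the table's existing rows across
-- the segments when the distribution policy changes.
ALTER TABLE sales SET DISTRIBUTED BY (id);

-- Alternatively, fall back to random distribution, which evens out the
-- rows but gives up co-located joins on a distribution key.
ALTER TABLE sales SET DISTRIBUTED RANDOMLY;
```

After redistributing, rerun the gp_segment_id count query above to confirm the rows are now balanced across segments.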
High CPU skew can also result from computational skew, which arises during query execution. Some operations in the query plan cause certain segments to do more work than others. For example, joins, sorts, or aggregations on columns with low cardinality or unevenly distributed values can contribute to CPU skew by forcing some segments to process many more tuples than their peers.
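As an illustration (the table and column names here are hypothetical), consider a join on a low-cardinality column such as a status flag. When rows are redistributed for the join, every row with the same status value hashes to the same segment, so a few popular values can pile most of the tuples onto a handful of segments while the rest sit idle:

```sql
-- Hypothetical example of a join prone to computational skew: if a few
-- status values account for most of the rows in orders, the segments
-- that receive those values do most of the work.
SELECT o.status, COUNT(*)
FROM orders o
JOIN status_lookup s ON s.status = o.status
GROUP BY o.status;
```

Rewriting such queries to join or aggregate on higher-cardinality columns, or filtering out the heavily repeated values before the join, can reduce this kind of imbalance.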