Pivotal Greenplum® Command Center 6.3.0 Release Notes
Updated 2020-09-11
About This Release
This document contains release information about Pivotal Greenplum Command Center 6.3. Greenplum Command Center 6.3 provides management and monitoring functionality for Pivotal Greenplum Database 6.
See Enhancements and Changes in Greenplum Command Center 6.3 for information about new features and changes in this Command Center release.
Supported Platforms
Greenplum Command Center 6.3.0 is compatible with the following platforms.
- Pivotal Greenplum Database 6.x
- Red Hat Enterprise Linux 6.x and 7.x
- CentOS 6.x and 7.x
- SUSE Enterprise Linux 12
- Ubuntu 18.04
See Pivotal Greenplum Command Center Supported Platforms for the most current compatibility information.
Enhancements and Changes in Greenplum Command Center 6.3
Workload Management
The Command Center interface for managing workload assignment rules has moved to a new page, Workload > Workload Mgmt. Resource group configuration remains on the Workload > Resource Groups page. See Workload Management.
Workload management rules can now be assigned using any combination of resource group name (or all resource groups, by default), database role, and query tag identifiers.
Workload management rules can include one or more conditions that must be met before the rule action is performed. Conditions are triggered when a query exceeds the configured maximum value for CPU Time, Planner Cost, Running Time, Slices, or Total Disk I/O.
For Greenplum 6.8 or later, you can configure workload management rules to automatically move queries to a specified resource group. These rules can be created on earlier Greenplum versions, but are immediately placed in the Inactive state.
Command Center automatically retries a failed rule action 2 times, waiting a minimum of 15 seconds between retries. You can configure this time interval using the new configuration parameter wlm_query_cooldown_time. See Greenplum Command Center Parameters.
A new configuration parameter, wlm_short_query_threshold, can be used to ensure that Command Center applies workload management rules only after a query has run for at least the specified number of seconds. See Greenplum Command Center Parameters.
Programmatically managing workload rules using the JSON object is no longer supported.
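For example, to allow 30 seconds between retries and to exempt queries that finish within 10 seconds, you might set both parameters as follows. This is a minimal sketch: the parameter names come from this release note, but the assumption that they are set in the app.conf file like other Command Center parameters, and the values shown, are illustrative. See Greenplum Command Center Parameters for the authoritative syntax.
# assumed location: $GPCC_HOME/app.conf (example values)
wlm_query_cooldown_time = 30
wlm_short_query_threshold = 10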
The new gpmetrics tables, gpcc_wlm_rule and gpcc_wlm_log_history, were introduced to store workload rule definitions and rule log history. See the gpmetrics Schema Reference.
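Because gpmetrics tables live in the gpperfmon database, you can inspect the rule definitions and log history with psql. A hedged sketch (only the table names are taken from this release note; see the gpmetrics Schema Reference for the column definitions):
psql gpperfmon -c "SELECT * FROM gpmetrics.gpcc_wlm_rule;"
psql gpperfmon -c "SELECT * FROM gpmetrics.gpcc_wlm_log_history LIMIT 20;"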
Query Monitor
The Query Monitor page includes a new column, CPU Time, to show the amount of system CPU consumed by each query.
Command Center now saves information about DDL statements (for example, CREATE and DROP statements) to history, in addition to DML statements and queries. In earlier versions of Command Center, DDL information was displayed but not saved to history.
With Greenplum 6.8 or later, the Workload column on the Query Monitor page provides a drop-down menu that you can use to reassign a query to a different resource group.
The Blocked By column is no longer displayed for active queries. To view information about blocking transactions, use the tooltip that is displayed when the query status is Blocked. See Query Monitor.
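Saved statement history lands in the gpmetrics schema of the gpperfmon database. As a hedged sketch, assuming the gpcc_queries_history table described in the gpmetrics Schema Reference, you could review recently recorded statements, including DDL, with:
psql gpperfmon -c "SELECT * FROM gpmetrics.gpcc_queries_history LIMIT 10;"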
Permissions Changes
- Only Operator and Admin users can move queries from the Query Monitor page.
- Only Admin users can make changes to the Recommendations and Workload pages.
- Basic users can now view the Table Browser page.
Fixed Issues
[30545] The metrics collection code was updated to resolve a buffer overflow condition that could cause Greenplum Database to crash when gp_enable_query_metrics was set to on.
[30812] Resolved a problem where the rows_out value displayed an incorrect number for certain queries.
[173978192] Resolved a problem where the web socket connection was not rebuilt after a user attempted to log in to Command Center after a previous session timed out.
[174275398] Command Center will now fail to start if the web server port (28080) is being used by another program.
[174665588] Command Center now displays the correct value for Statement Memory for resource group entries.
Enhancements and Changes in Greenplum Command Center 6.2
Command Center Installation Changes
Command Center directory names have changed to omit -web.
The Command Center installation directory name has changed from greenplum-cc-web-<version> to greenplum-cc-<version>, for example /usr/local/greenplum-cc-6.2.0.
The Command Center installer creates a symbolic link, greenplum-cc, to the Command Center home directory if the gpadmin user has write permission in the installation directory. If the link already exists, it is recreated to point to the new Command Center installation directory. Use the link to access the gpcc_path.sh file in your shell startup script, so that you always use the most recent installation. For example, add this line to your .bashrc or .bash_profile file:
source /usr/local/greenplum-cc/gpcc_path.sh
If the installation directory is not writable by the gpadmin user, you can create the directory and the symbolic link, and set their owner to gpadmin, before you run gpccinstall.
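For example, for a default installation under /usr/local (a sketch only; the version-specific directory name must match the release you are installing):
# create the version-specific home directory and the greenplum-cc link
sudo mkdir /usr/local/greenplum-cc-6.2.0
sudo ln -s /usr/local/greenplum-cc-6.2.0 /usr/local/greenplum-cc
# make gpadmin the owner of the directory and of the link itself
sudo chown gpadmin:gpadmin /usr/local/greenplum-cc-6.2.0
sudo chown -h gpadmin:gpadmin /usr/local/greenplum-cc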
You can run more than one Command Center instance on the same host. For example, if you run a Greenplum Database 5.x system and a Greenplum Database 6.x system on the same cluster, you can install a Command Center instance for each system on the same master host. This feature is supported on Greenplum Database 6.8.0 or higher.
Before you run the Command Center installer, choose the Greenplum Database instance by sourcing its environment file (greenplum_path.sh). For each additional Command Center instance you want to run, you must manually edit the $GPCC_HOME/app.conf file to choose different port numbers for the httpport, httpsport, rpcport, ws_perf_port, and agent_perf_port parameters. See Installing Multiple Command Center Instances for details.
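A hedged example of the edit for a second instance follows. The port values are arbitrary; only the default httpport (28080) is stated elsewhere in these notes, so choose numbers that are free on your host:
# $GPCC_HOME/app.conf for a second Command Center instance (example values)
httpport = 28081
httpsport = 28444
rpcport = 8899
ws_perf_port = 6163
agent_perf_port = 6164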
Recommendations Feature
The new Recommendations feature helps to identify Greenplum Database tables that require maintenance to reduce bloat, improve accuracy of query plans, and reduce skew that affects query performance.
- Schedule a table scan to run at a designated time and for a specified period of time. The scan can be scheduled to repeat daily or weekly on selected days of the week.
- Command Center runs queries to collect information about tables, processing as many tables as it can during the scheduled period. The next scan resumes where the previous scan left off. Collected data is saved in the new gpmetrics.gpcc_table_info and gpcc_table_info_history tables in the gpperfmon database.
- The Recommendations page lists scanned tables in ranked order, with recommendations to VACUUM, VACUUM FULL, or ANALYZE tables, or to redistribute them.
See Recommendations for more information about the Recommendations feature.
I/O Wait Metrics
Command Center now collects CPU IOWait metrics. You can view IOWait metrics on the Host Metrics, Cluster Metrics, and History pages.
On the Host Metrics page, the CPU column value is the total of System, User, and IOWait percentages. The chart and pop-up box break out the System, User, IOWait, and Idle percentages separately.
On the Cluster Metrics and History pages, the CPU chart shows System and User percentages. The IOWait percentage is included in the System percentage, as in previous Command Center versions. The new IOWait chart shows just the percentage of CPU time spent in IOWait.
Workload Management Improvements
You can now set Memory Spill Ratio % for resource groups on the Workload Management page. Transactions spill to disk when they reach this threshold.
While editing resource group attributes, Command Center recalculates and displays Statement Memory, the amount of memory allocated to a query for each resource group.
See Workload Management for more information about these features.
gpcc stop Utility
- If the Greenplum system is down, running the gpcc stop command stops the Command Center agent and web server processes.
Fixed Issues
When a query called a UDF that runs an inner query, the top-level query could be missing from the Query Monitor view and from query history. This caused some columns in the Query Monitor view to display - instead of the correct value. This issue is fixed.
The disk IO read/write calculations in gpcc_system_history and on the Cluster Metrics page included read/write rates for multiple devices on the same physical disk. This is fixed. The disk IO read/write rate calculations now exclude:
- disk partitions
- dm (device-mapper)
- loop device
- md (multi-device such as raid)
The reported disk IO rate is now the same as the actual IO rate of the physical disk.
On the System > Storage page, values displayed as terabytes (TB) were incorrectly converted from gigabytes (GB) by dividing by 1000. This is fixed. Values greater than 1024 GB are now correctly converted to TB by dividing by 1024.
Security vulnerability fixes have been added to prevent ClickJacking Attacks and to deny use of risky HTTP request methods.
The Running, Queued, and Blocked query counts on the Dashboard were misleading because they were not the current status at the time you were viewing the Dashboard, but the status about 15 seconds earlier. The numbers have been removed to avoid confusion. To view current numbers, go to the real-time Query Monitor by clicking the query graph from the Dashboard.
A Disk Full alert was raised when the total disk space for all hosts in the cluster exceeded the threshold. Now an alert is raised if disk usage for any host exceeds the threshold. The alert email includes the name of the host that triggered the alert.
The backend scan for the Table Browser periodically connected to the template1 database. This could prevent a user from creating a new database, because CREATE DATABASE is not allowed when there are any connections to the template1 database. This issue has been fixed. The template1 database is now omitted from the backend scan.
Enhancements and Changes in Greenplum Command Center 6.1
Command Center Installation Changes
The Command Center release download file names have changed to include the Greenplum Database major version, for example greenplum-cc-web-6.1.0-gp6-rhel7-x86_64.zip.
The Command Center installer checks that the metrics collector extension running in the Greenplum Database system is the correct version for the Command Center version you are installing. If a new metrics collector extension is needed, the installer instructs you to install the correct version using the gppkg utility.
The Command Center installer creates four entries in the pg_hba.conf authentication file for the gpmon role if there are no existing gpmon entries:
local   gpperfmon   gpmon                 md5
host    all         gpmon   127.0.0.1/28  md5
host    all         gpmon   ::1/128       md5
host    all         gpmon   samenet       md5
The samenet entry is new in this release, and the installer adds it to the pg_hba.conf file even when there are existing entries for the gpmon role.
Note that the Table Browser feature requires all in the database field of the host entries so that gpmon can retrieve table information from each database.
If you use an authentication method other than md5 for the gpmon user, such as LDAP or Kerberos, edit the pg_hba.conf file to enable connections from all hosts and access to all databases.
New and Changed Features
A new Table Browser is added to allow administrators to see information about tables in Greenplum Databases. Tables displayed can be filtered by database, schema, owner, and size. The Command Center interface provides details about tables, including table storage types, distribution policies, partitions, sizes, and last access times. It lists recent queries that have accessed the table, with links into the Command Center query history. Notably, the Table Browser displays only metadata for tables; it does not provide access to data stored in tables.
Note: When you first start Command Center after installing this version, Command Center loads data into the new gpmetrics.gpcc_table_info table in the gpperfmon database. For databases with large numbers of tables, the initial load could take five minutes or longer. Table data is not available in the Table Browser until it has been loaded.
The Table Browser uses two new tables in the gpmetrics schema: gpcc_table_info and gpcc_table_info_history. See the gpmetrics Schema Reference for information about the contents of these tables.
The Command Center web server (gpccws) consumes much less memory than in earlier Command Center versions.
On the History page, the count of running queries at any point on the Queries graph now includes queries that started and finished in the interval since the previous metrics sample. A new queries_finished column is added to the gpmetrics.gpcc_database_history table to record this count.
The metrics collector extension adds a new gpcc.enable_query_profiling server configuration parameter that can be enabled to help with performance troubleshooting. When the parameter is off (the default), the metrics collector does not collect queries executed by the gpmon user in the gpperfmon database, or plan node history for queries that run in less than ten seconds (or min_query_time, if greater than ten seconds). If you enable gpcc.enable_query_profiling in a session, the metrics collector collects those queries. This parameter is available when the MetricsCollector-6.1.0 gppkg or above is installed in Greenplum Database 6.x.
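For example, to profile queries in your current session (a minimal sketch; the release note states only that the parameter can be enabled in a session, so the surrounding workflow here is illustrative):
SET gpcc.enable_query_profiling = on;
-- run the queries you want the metrics collector to capture, then:
RESET gpcc.enable_query_profiling;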
Each night, Command Center archives files in the $GPCC_HOME/ccdata directory that are more than two weeks old. The files in this directory include saved query text files (q*.txt), plan node files (gpccexec*.txt), and table size and statistics information files (gpcc_size_info.dat and gpcc_stat_info.dat). Archive files have names in the format archive_YYYYMMDD_YYYYMMDD.tar.gz, where the dates are the beginning and end dates of the week included in the archive. The archived files are no longer needed by Command Center but may be useful for troubleshooting; you can remove them manually if you do not want to save them.
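For example, to review the archives and delete one you no longer need (the date range in the file name below is hypothetical; real names follow the archive_YYYYMMDD_YYYYMMDD.tar.gz pattern described above):
ls -lh $GPCC_HOME/ccdata/archive_*.tar.gz
rm $GPCC_HOME/ccdata/archive_20191201_20191207.tar.gz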
Resolved Issues
When a Command Center user signed out of Command Center, the Welcome page was displayed, but the message “Checking DB status…” was shown for several seconds before the login form was presented. This is fixed. If there are no errors, the login form is now displayed immediately.
The rightmost edge of system metrics graphs could drop to zero if the metrics data for that period was not yet available to display. This is fixed by cutting the time axis on the right edge of metrics graphs to the last period with available data. As a result of this fix, the time period displayed at the right edge of a graph can be slightly earlier than the last sync time displayed at the top of the page.
Users with self-only permissions could see other users’ query history. In Command Center 6.1 users with self-only permission have no access to the query history page, so they cannot see their own or others’ query history. Self-only users will be allowed access to their own query history in a future Command Center release.
The Command Center Storage Status page did not display tablespaces with locations other than pg_default or pg_global. This is fixed. Expanding a hostname now shows each tablespace location. Hovering over a tablespace location displays a pop-up with a list of the data directories at that location. Data directories may not be displayed until Command Center has refreshed the Storage Status page, which occurs every four hours.
The send_alert.sh script in the $MASTER_DATA_DIRECTORY/gpmetrics directory had a bug that caused some values to be incorrectly expanded when substituted into the template. This is fixed by replacing all occurrences of $ARGUMENT with ${ARGUMENT} in the script. You can implement the same fix if you have created a custom send_alert.sh script.
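In a custom script, the change looks like this (the variable name here is hypothetical; apply the braces to every substituted variable):
# before
echo "Disk space alert on $ALERTHOST"
# after
echo "Disk space alert on ${ALERTHOST}"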
Known Issues
The following are known issues in the current Greenplum Command Center release.
Limitation for Defining Planner Cost in Workload Rules
When you define a workload management rule that uses the Planner Cost condition, the input field converts your entry to a double-precision value, which carries only about 15-16 significant decimal digits. This can limit the ability to accurately define the large planner costs that may be necessary for the GPORCA optimizer. For example, if you enter the GPORCA maximum cost value 1457494496834852608, the actual value is converted to 1457494496834852600 for the rule. The value shown in the completed rule definition is the value that the rule enforces, and it may not be the exact value that you entered. Increase the cost value as necessary to cover the cost you want to set as the maximum.
External Updates to Workload Management Rules are Delayed
If you recreate the gp_wlm extension from outside of Command Center after you have already created workload management rules using the Command Center interface, the rules engine may not run for roughly 1 hour. This behavior occurs because Command Center checks for the availability of the extension once every hour. Any changes you make outside of Command Center in this situation will not be visible until Command Center next checks for the extension, or until you log in to Command Center and access the Workload > Workload Mgmt page.
Customized SSH Path Not Supported with the Upgrade (-u) Option
If you upgrade your Command Center installation using the gpccinstall -u option and you also specify an SSH binary using the -ssh-path <path> option, the customized SSH path is not used during the installation, and the ssh_path parameter is not set in the app.conf file.
Calculated Root Table Size Inaccurate for Some Partitioned Tables
In the Table Browser, the calculated size for a partitioned table is incorrect if some child partitions are in a different schema than the root table.
Sorting Tables by Size on Partition List Page is Slow
If a schema in a database contains a large number of tables, sorting the partition table list by size can take a long time to display.