Greenplum csvlog (log data) retrieval and interpretation (gp_toolkit.gp_log*)


Tags

PostgreSQL , Greenplum , csvlog , gp_toolkit


Background

Because Greenplum is a distributed database, checking its logs by logging on to each server node is tedious and makes troubleshooting difficult.

For example, here is the log of one node:

2018-08-23 15:34:30.004070 CST,"digoal","postgres",p12775,th-1756479360,"127.0.0.1","52802",2018-08-23 15:34:29 CST,0,con7,cmd4,seg-1,,dx4,,sx1,"LOG","00000","statement: COPY part FROM '/tmp/dss-data/part.csv' WITH csv DELIMITER '|';",,,,,,"COPY part FROM '/tmp/dss-data/part.csv' WITH csv DELIMITER '|';",0,,"postgres.c",1590,  
  
2018-08-23 15:34:30.004098 CST,"digoal","postgres",p12775,th-1756479360,"127.0.0.1","52802",2018-08-23 15:34:29 CST,0,con7,cmd4,seg-1,,dx4,,sx1,"ERROR","25P02","current transaction is aborted, commands ignored until end of transaction block",,,,,,"COPY part FROM '/tmp/dss-data/part.csv' WITH csv DELIMITER '|';",0,,"postgres.c",1669,  
2018-08-23 15:34:30.004113 CST,"digoal","postgres",p12775,th-1756479360,"127.0.0.1","52802",2018-08-23 15:34:29 CST,0,con7,cmd4,seg-1,,dx4,,sx1,"LOG","00000","An exception was encountered during the execution of statement: COPY part FROM '/tmp/dss-data/part.csv' WITH csv DELIMITER '|';",,,,,,,0,,,,  

Greenplum provides the gp_toolkit.gp_log* views, which aggregate these logs so they can be queried conveniently.

postgres=# \dv gp_toolkit.gp_log*  
                       List of relations  
   Schema   |          Name          | Type | Owner  | Storage   
------------+------------------------+------+--------+---------  
 gp_toolkit | gp_log_command_timings | view | digoal | none  
 gp_toolkit | gp_log_database        | view | digoal | none  
 gp_toolkit | gp_log_master_concise  | view | digoal | none  
 gp_toolkit | gp_log_system          | view | digoal | none  
(4 rows)  

The gp_toolkit.gp_log_* views

  • gp_log_command_timings (outputs only the session, PID, and timing; for the details of a long-running command, use its session and PID to locate the entries in gp_log_system, as sketched after this list)
This view uses an external table to read the log files on the master and report the execution time of SQL commands executed in a database session.   
The use of this view requires superuser permissions.  
  • gp_log_master_concise (logs from the master node only)
This view uses an external table to read a subset of the log fields from the master log file.   
The use of this view requires superuser permissions.  
  • gp_log_system (aggregates the logs of the master, segment, and mirror nodes, covering all databases)
This view uses an external table to read the server log files of the entire Greenplum system (master, segments, and mirrors) and lists all log entries.   
Associated log entries can be identified by the session id (logsession) and command id (logcmdcount).   
The use of this view requires superuser permissions.  
  • gp_log_database (aggregates the logs of the master, segment, and mirror nodes, restricted to the current database)
This view uses an external table to read the server log files of the entire Greenplum system (master, segments, and mirrors) and lists log entries associated with the current database.   
Associated log entries can be identified by the session id (logsession) and command id (logcmdcount).   
The use of this view requires superuser permissions.  
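
A minimal sketch of that workflow, assuming logduration is an interval (as the sample output below suggests) and using an arbitrary 5-second threshold: first find long-running commands in gp_toolkit.gp_log_command_timings, then drill into their detailed entries in gp_toolkit.gp_log_system by session and command id.

-- Commands that took longer than 5 seconds (superuser required).
SELECT logsession, logcmdcount, logdatabase, loguser, logpid, logduration
FROM gp_toolkit.gp_log_command_timings
WHERE logduration > interval '5 seconds'
ORDER BY logduration DESC;

-- Detailed log entries for one of them, e.g. session con10, command cmd5.
SELECT logtime, logsegment, logseverity, logmessage
FROM gp_toolkit.gp_log_system
WHERE logsession = 'con10'
  AND logcmdcount = 'cmd5'
ORDER BY logtime;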

Usage examples

select * from gp_toolkit.gp_log_database limit 1000;  
  
-[ RECORD 5 ]--+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  
logtime        | 2018-08-24 02:42:24.084998+08  
loguser        | digoal  
logdatabase    | postgres  
logpid         | p1598  
logthread      | th1426366592  
loghost        | 127.0.0.1  
logport        | 47598  
logsessiontime | 2018-08-24 02:41:42+08  
logtransaction | 0  
logsession     | con8  
logcmdcount    | cmd6  
logsegment     | seg-1  
logslice       |   
logdistxact    | dx8  
loglocalxact   |   
logsubxact     | sx1  
logseverity    | LOG  
logstate       | 00000  
logmessage     | statement: SELECT n.nspname as "Schema",  
               |   c.relname as "Name",  
               |   CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",  
               |   pg_catalog.pg_get_userbyid(c.relowner) as "Owner", CASE c.relstorage WHEN 'h' THEN 'heap' WHEN 'x' THEN 'external' WHEN 'a' THEN 'append only' WHEN 'v' THEN 'none' WHEN 'c' THEN 'append only columnar' END as "Storage"  
               |   
               | FROM pg_catalog.pg_class c  
               |      LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace  
               | WHERE c.relkind IN ('r','')  
               | AND c.relstorage IN ('h', 'a', 'c','')  
               |       AND n.nspname <> 'pg_catalog'  
               |       AND n.nspname <> 'information_schema'  
               |       AND n.nspname !~ '^pg_toast'  
               |   AND pg_catalog.pg_table_is_visible(c.oid)  
               | ORDER BY 1,2;  
logdetail      |   
loghint        |   
logquery       |   
logquerypos    |   
logcontext     |   
logdebug       | SELECT n.nspname as "Schema",  
               |   c.relname as "Name",  
               |   CASE c.relkind WHEN 'r' THEN 'table' WHEN 'v' THEN 'view' WHEN 'i' THEN 'index' WHEN 'S' THEN 'sequence' WHEN 's' THEN 'special' END as "Type",  
               |   pg_catalog.pg_get_userbyid(c.relowner) as "Owner", CASE c.relstorage WHEN 'h' THEN 'heap' WHEN 'x' THEN 'external' WHEN 'a' THEN 'append only' WHEN 'v' THEN 'none' WHEN 'c' THEN 'append only columnar' END as "Storage"  
               |   
               | FROM pg_catalog.pg_class c  
               |      LEFT JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace  
               | WHERE c.relkind IN ('r','')  
               | AND c.relstorage IN ('h', 'a', 'c','')  
               |       AND n.nspname <> 'pg_catalog'  
               |       AND n.nspname <> 'information_schema'  
               |       AND n.nspname !~ '^pg_toast'  
               |   AND pg_catalog.pg_table_is_visible(c.oid)  
               | ORDER BY 1,2;  
logcursorpos   | 0  
logfunction    |   
logfile        | postgres.c  
logline        | 1590  
logstack       |   
-[ RECORD 6 ]--+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------  
logtime        | 2018-08-24 02:43:12.031548+08  
loguser        | postgres  
logdatabase    | postgres  
logpid         | p1705  
logthread      | th1426366592  
loghost        | [local]  
logport        |   
logsessiontime | 2018-08-24 02:43:12+08  
logtransaction | 0  
logsession     | con9  
logcmdcount    |   
logsegment     | seg-1  
logslice       |   
logdistxact    |   
loglocalxact   |   
logsubxact     | sx1  
logseverity    | FATAL  
logstate       | 28000  
logmessage     | no pg_hba.conf entry for host "[local]", user "postgres", database "postgres", SSL off  
logdetail      |   
loghint        |   
logquery       |   
logquerypos    |   
logcontext     |   
logdebug       |   
logcursorpos   | 0  
logfunction    |   
logfile        | auth.c  
logline        | 602  
logstack       |   
postgres=# select * from gp_toolkit.gp_log_command_timings order by logsession,logcmdcount limit 100;  
 logsession | logcmdcount | logdatabase | loguser  | logpid |          logtimemin           |          logtimemax           |   logduration     
------------+-------------+-------------+----------+--------+-------------------------------+-------------------------------+-----------------  
 con10      | cmd1        | postgres    | digoal   | p14843 | 2018-08-24 05:43:49.156082+08 | 2018-08-24 05:43:49.156082+08 | 00:00:00  
 con10      | cmd2        | postgres    | digoal   | p14843 | 2018-08-24 05:43:49.158241+08 | 2018-08-24 05:43:49.158241+08 | 00:00:00  
 con10      | cmd3        | postgres    | digoal   | p14843 | 2018-08-24 05:43:51.984189+08 | 2018-08-24 05:43:51.984189+08 | 00:00:00  
 con10      | cmd4        | postgres    | digoal   | p14843 | 2018-08-24 05:43:51.987823+08 | 2018-08-24 05:43:51.987823+08 | 00:00:00  
 con10      | cmd5        | postgres    | digoal   | p14843 | 2018-08-24 05:43:53.144745+08 | 2018-08-24 05:43:53.159263+08 | 00:00:00.014518  
 con10      | cmd6        | postgres    | digoal   | p14843 | 2018-08-24 05:43:56.584268+08 | 2018-08-24 05:43:56.584268+08 | 00:00:00  
 con10      | cmd7        | postgres    | digoal   | p14843 | 2018-08-24 05:43:56.586394+08 | 2018-08-24 05:43:56.586394+08 | 00:00:00  
 con10      | cmd8        | postgres    | digoal   | p14843 | 2018-08-24 05:43:58.292289+08 | 2018-08-24 05:43:58.294961+08 | 00:00:00.002672  
 con100     | cmd1        | postgres    | postgres | p8206  | 2018-08-24 05:18:03.279438+08 | 2018-08-24 05:18:03.280025+08 | 00:00:00.000587  
select * from gp_toolkit.gp_log_master_concise limit 1000;  
logtime     | 2018-08-24 02:40:34.295103+08  
logdatabase | template1  
logsession  |   
logcmdcount | cmd21  
logseverity | LOG  
logmessage  | statement:   
            |                 SELECT oid, fsname  
            |                 FROM pg_filespace  
            |                 ORDER BY fsname;  
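
Another common pattern, as a hedged sketch (the one-hour window and the severity list are arbitrary example values): pull only the recent error-level entries for the current database.

SELECT logtime, loguser, logsegment, logstate, logmessage
FROM gp_toolkit.gp_log_database
WHERE logseverity IN ('ERROR', 'FATAL', 'PANIC')
  AND logtime > now() - interval '1 hour'
ORDER BY logtime DESC;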

Meaning of the csvlog fields

For PostgreSQL, the meaning of the csvlog fields can be found in the source code or in the manual.

1. As described in the manual

https://www.postgresql.org/docs/devel/static/runtime-config-logging.html

CREATE TABLE postgres_log  
(  
  log_time timestamp(3) with time zone,  
  user_name text,  
  database_name text,  
  process_id integer,  
  connection_from text,  
  session_id text,  
  session_line_num bigint,  
  command_tag text,  
  session_start_time timestamp with time zone,  
  virtual_transaction_id text,  
  transaction_id bigint,  
  error_severity text,  
  sql_state_code text,  
  message text,  
  detail text,  
  hint text,  
  internal_query text,  
  internal_query_pos integer,  
  context text,  
  query text,  
  query_pos integer,  
  location text,  
  application_name text,  
  PRIMARY KEY (session_id, session_line_num)  
);  
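
The manual pairs this table definition with a COPY command for importing an existing csvlog file (the path is only a placeholder):

COPY postgres_log FROM '/full/path/to/logfile.csv' WITH csv;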

2. Via a file_fdw foreign table, PostgreSQL can also query its logs with SQL

https://www.postgresql.org/docs/devel/static/file-fdw.html

CREATE EXTENSION file_fdw;  
  
CREATE SERVER pglog FOREIGN DATA WRAPPER file_fdw;  
  
CREATE FOREIGN TABLE pglog (  
  log_time timestamp(3) with time zone,  
  user_name text,  
  database_name text,  
  process_id integer,  
  connection_from text,  
  session_id text,  
  session_line_num bigint,  
  command_tag text,  
  session_start_time timestamp with time zone,  
  virtual_transaction_id text,  
  transaction_id bigint,  
  error_severity text,  
  sql_state_code text,  
  message text,  
  detail text,  
  hint text,  
  internal_query text,  
  internal_query_pos integer,  
  context text,  
  query text,  
  query_pos integer,  
  location text,  
  application_name text  
) SERVER pglog  
OPTIONS ( filename '/home/josh/data/log/pglog.csv', format 'csv' );  
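
Once the foreign table exists, the csvlog can be queried like any other table. A small sketch that counts error entries by SQL state; it assumes the filename above points at a real csvlog file, i.e. logging_collector is on and log_destination includes csvlog:

SELECT sql_state_code, count(*), min(log_time), max(log_time)
FROM pglog
WHERE error_severity IN ('ERROR', 'FATAL', 'PANIC')
GROUP BY sql_state_code
ORDER BY count(*) DESC;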

3. The csvlog format as defined in the source code

src/backend/utils/error/elog.c

/*  
 * Constructs the error message, depending on the Errordata it gets, in a CSV  
 * format which is described in doc/src/sgml/config.sgml.  
 */  
static void  
write_csvlog(ErrorData *edata)  
{  
...  

Greenplum csvlog field definitions

https://greenplum.org/docs/5100/ref_guide/gp_toolkit.html#topic16

Table 11. gp_log_command_timings view

 Column      | Description
-------------+------------------------------------------------------------
 logsession  | The session identifier (prefixed with "con").
 logcmdcount | The command number within a session (prefixed with "cmd").
 logdatabase | The name of the database.
 loguser     | The name of the database user.
 logpid      | The process id (prefixed with "p").
 logtimemin  | The time of the first log message for this command.
 logtimemax  | The time of the last log message for this command.
 logduration | Statement duration from start to end time.

Table 13. gp_log_master_concise view

 Column      | Description
-------------+------------------------------------------------------------
 logtime     | The timestamp of the log message.
 logdatabase | The name of the database.
 logsession  | The session identifier (prefixed with "con").
 logcmdcount | The command number within a session (prefixed with "cmd").
 logmessage  | Log or error message text.

Table 12. gp_log_database view

 Column         | Description
----------------+-------------------------------------------------------------
 logtime        | The timestamp of the log message.
 loguser        | The name of the database user.
 logdatabase    | The name of the database.
 logpid         | The associated process id (prefixed with "p").
 logthread      | The associated thread count (prefixed with "th").
 loghost        | The segment or master host name.
 logport        | The segment or master port.
 logsessiontime | Time session connection was opened.
 logtransaction | Global transaction id.
 logsession     | The session identifier (prefixed with "con").
 logcmdcount    | The command number within a session (prefixed with "cmd").
 logsegment     | The segment content identifier (prefixed with "seg" for primary or "mir" for mirror. The master always has a content id of -1).
 logslice       | The slice id (portion of the query plan being executed).
 logdistxact    | Distributed transaction id.
 loglocalxact   | Local transaction id.
 logsubxact     | Subtransaction id.
 logseverity    | LOG, ERROR, FATAL, PANIC, DEBUG1 or DEBUG2.
 logstate       | SQL state code associated with the log message.
 logmessage     | Log or error message text.
 logdetail      | Detail message text associated with an error message.
 loghint        | Hint message text associated with an error message.
 logquery       | The internally-generated query text.
 logquerypos    | The cursor index into the internally-generated query text.
 logcontext     | The context in which this message gets generated.
 logdebug       | Query string with full detail for debugging.
 logcursorpos   | The cursor index into the query string.
 logfunction    | The function in which this message is generated.
 logfile        | The log file in which this message is generated.
 logline        | The line in the log file in which this message is generated.
 logstack       | Full text of the stack trace associated with this message.

Table 14. gp_log_system view

 Column         | Description
----------------+-------------------------------------------------------------
 logtime        | The timestamp of the log message.
 loguser        | The name of the database user.
 logdatabase    | The name of the database.
 logpid         | The associated process id (prefixed with "p").
 logthread      | The associated thread count (prefixed with "th").
 loghost        | The segment or master host name.
 logport        | The segment or master port.
 logsessiontime | Time session connection was opened.
 logtransaction | Global transaction id.
 logsession     | The session identifier (prefixed with "con").
 logcmdcount    | The command number within a session (prefixed with "cmd").
 logsegment     | The segment content identifier (prefixed with "seg" for primary or "mir" for mirror. The master always has a content id of -1).
 logslice       | The slice id (portion of the query plan being executed).
 logdistxact    | Distributed transaction id.
 loglocalxact   | Local transaction id.
 logsubxact     | Subtransaction id.
 logseverity    | LOG, ERROR, FATAL, PANIC, DEBUG1 or DEBUG2.
 logstate       | SQL state code associated with the log message.
 logmessage     | Log or error message text.
 logdetail      | Detail message text associated with an error message.
 loghint        | Hint message text associated with an error message.
 logquery       | The internally-generated query text.
 logquerypos    | The cursor index into the internally-generated query text.
 logcontext     | The context in which this message gets generated.
 logdebug       | Query string with full detail for debugging.
 logcursorpos   | The cursor index into the query string.
 logfunction    | The function in which this message is generated.
 logfile        | The log file in which this message is generated.
 logline        | The line in the log file in which this message is generated.
 logstack       | Full text of the stack trace associated with this message.
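
Since every entry carries a logsegment field, a quick sketch like the following shows which node is producing errors (the severity list is an example value; superuser required):

SELECT logsegment, logseverity, count(*)
FROM gp_toolkit.gp_log_system
WHERE logseverity IN ('ERROR', 'FATAL', 'PANIC')
GROUP BY logsegment, logseverity
ORDER BY count(*) DESC;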

References

https://greenplum.org/docs/5100/ref_guide/gp_toolkit.html#topic16

https://www.postgresql.org/docs/devel/static/runtime-config-logging.html

https://www.postgresql.org/docs/devel/static/file-fdw.html

src/backend/utils/error/elog.c
