This article looks at how MySQL decides whether a statement is recorded in the slow query log, and explains the meaning of each field in the slow log output, using the source code as the reference.
The time used for the slow query log is wall-clock time, obtained from the operating system. On Linux it is fetched like this:
while (gettimeofday(&t, NULL) != 0)
{}
newtime= (ulonglong)t.tv_sec * 1000000 + t.tv_usec;
return newtime;
In other words, the time comes from the OS API gettimeofday.
long_query_time: record a statement as a slow query if its execution time exceeds this value.
log_queries_not_using_indexes: record a statement as a slow query if it does not use an index.
log_slow_admin_statements: whether to record administrative statements (ALTER TABLE, ANALYZE TABLE, CHECK TABLE, CREATE INDEX, DROP INDEX, OPTIMIZE TABLE, and REPAIR TABLE).
This article focuses on the meaning of the long_query_time parameter.
If we define a statement's time consumption as:
total elapsed time = actual execution time + lock wait time
then long_query_time actually bounds the actual execution time. So in some cases a statement whose total elapsed time is very long will still not be recorded in the slow query log, because most of that time was spent waiting for locks.
Let's look at a code fragment of the log_slow_applicable function:
res= cur_utime - thd->utime_after_lock;
if (res > thd->variables.long_query_time)
  thd->server_status|= SERVER_QUERY_WAS_SLOW;
else
  thd->server_status&= ~SERVER_QUERY_WAS_SLOW;
This clearly confirms the point above: this is the function that decides whether a statement counts as a slow query, so it is very important. From it we can read the following formula:
res (actual execution time) = cur_utime (total elapsed time) - thd->utime_after_lock (lock wait time)
What the slow log actually records is:
Query_time: the total elapsed time of the statement
Lock_time: the lock wait time
but whether a statement counts as a slow query is judged by the actual execution time, i.e. Query_time - Lock_time (a small numeric sketch follows the list of lock-wait sources below).
The lock wait time (Lock_time), as far as I know, includes:
Time spent waiting for MDL locks at the MySQL layer (Waiting for table metadata lock).
Time spent waiting for MyISAM table locks at the MySQL layer (Waiting for table level lock).
Time spent waiting for InnoDB row locks.
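To make the criterion concrete, here is a minimal, self-contained C++ sketch (hypothetical microsecond values, not MySQL code) that mirrors the check in log_slow_applicable: with long_query_time = 1 s, a statement that spends 10 s waiting for locks and only 0.5 s actually executing is not logged as a slow query, even though its Query_time is 10.5 s.

#include <cstdint>
#include <cstdio>

int main() {
    // Hypothetical values, all in microseconds.
    uint64_t start_utime      = 0;           // statement start
    uint64_t utime_after_lock = 10000000;    // locks finally acquired after 10 s
    uint64_t cur_utime        = 10500000;    // 10.5 s total wall-clock time
    uint64_t long_query_time  = 1000000;     // long_query_time = 1 s

    // Same shape as the check in log_slow_applicable():
    uint64_t query_time = cur_utime - start_utime;        // Query_time = 10.5 s
    uint64_t lock_time  = utime_after_lock - start_utime; // Lock_time  = 10.0 s
    uint64_t res        = cur_utime - utime_after_lock;   // 0.5 s of real execution

    std::printf("Query_time=%.6f Lock_time=%.6f slow=%s\n",
                query_time / 1e6, lock_time / 1e6,
                res > long_query_time ? "yes" : "no");     // prints slow=no
    return 0;
}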
As the formula shows, the recording of utime_after_lock (the lock wait time, Lock_time) is the key to the whole calculation, so let's debug how it is set.
1. How the MySQL layer records utime_after_lock
Both the MDL lock wait time and the MyISAM table lock wait time are recorded at the MySQL layer. The recording is simply done by THD::set_time_after_lock, which is called at the end of mysql_lock_tables:
void set_time_after_lock()
{
  utime_after_lock= my_micro_time();
  MYSQL_SET_STATEMENT_LOCK_TIME(m_statement_psi,
                                (utime_after_lock - start_utime));
}
In other words, everything that happens before the code reaches the end of mysql_lock_tables is counted into utime_after_lock, which is not entirely precise, but the acquisition of MDL locks and MyISAM table locks is included in it. This is also why even a SELECT statement shows a non-zero Lock_time. Here is the stack trace:
#0  THD::set_time_after_lock (this=0x7fff28012820) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_class.h:3414
#1  0x0000000001760d6d in mysql_lock_tables (thd=0x7fff28012820, tables=0x7fff28c16b58, count=1, flags=0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/lock.cc:366
#2  0x000000000151dc1a in lock_tables (thd=0x7fff28012820, tables=0x7fff28c165b0, count=1, flags=0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_base.cc:6700
#3  0x00000000017c4234 in Sql_cmd_delete::mysql_delete (this=0x7fff28c16b50, thd=0x7fff28012820, limit=18446744073709551615) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_delete.cc:136
#4  0x00000000017c84ba in Sql_cmd_delete::execute (this=0x7fff28c16b50, thd=0x7fff28012820) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_delete.cc:1389
#5  0x00000000015a7814 in mysql_execute_command (thd=0x7fff28012820, first_level=true) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:3729
#6  0x00000000015adcd6 in mysql_parse (thd=0x7fff28012820, parser_state=0x7ffff035b600) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:5836
#7  0x00000000015a1b95 in dispatch_command (thd=0x7fff28012820, com_data=0x7ffff035bd70, command=COM_QUERY) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1447
#8  0x00000000015a09c6 in do_command (thd=0x7fff28012820) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1010
2. How InnoDB row lock waits are recorded into utime_after_lock
The InnoDB layer does it through thd_set_lock_wait_time, which calls the thd_storage_lock_wait function; the stack trace looks like this:
#0  thd_storage_lock_wait (thd=0x7fff2c000bc0, value=9503561) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_class.cc:798
#1  0x00000000019a4b2a in thd_set_lock_wait_time (thd=0x7fff2c000bc0, value=9503561) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:1784
#2  0x0000000001a4b50f in lock_wait_suspend_thread (thr=0x7fff2c088200) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/lock/lock0wait.cc:363
#3  0x0000000001b0ec9b in row_mysql_handle_errors (new_err=0x7ffff0317d54, trx=0x7ffff2f2e5d0, thr=0x7fff2c088200, savept=0x0) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/row/row0mysql.cc:772
#4  0x0000000001b4fe61 in row_search_mvcc (buf=0x7fff2c087640 "\377", mode=PAGE_CUR_G, prebuilt=0x7fff2c087ac0, match_mode=0, direction=0) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/row/row0sel.cc:5940
#5  0x00000000019b3051 in ha_innobase::index_read (this=0x7fff2c087100, buf=0x7fff2c087640 "\377", key_ptr=0x0, key_len=0, find_flag=HA_READ_AFTER_KEY) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9104
#6  0x00000000019b4374 in ha_innobase::index_first (this=0x7fff2c087100, buf=0x7fff2c087640 "\377") at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9551
#7  0x00000000019b462c in ha_innobase::rnd_next (this=0x7fff2c087100, buf=0x7fff2c087640 "\377") at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9656
#8  0x0000000000f66f1b in handler::ha_rnd_next (this=0x7fff2c087100, buf=0x7fff2c087640 "\377") at /root/mysql5.7.14/percona-server-5.7.14-7/sql/handler.cc:3099
#9  0x00000000014c61b6 in rr_sequential (info=0x7ffff03189e0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/records.cc:520
#10 0x00000000017c56c3 in Sql_cmd_delete::mysql_delete (this=0x7fff2c006ae8, thd=0x7fff2c000bc0, limit=1) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_delete.cc:454
#11 0x00000000017c84ba in Sql_cmd_delete::execute (this=0x7fff2c006ae8, thd=0x7fff2c000bc0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_delete.cc:1389
The function itself is very simple: it just adds the wait time, as you can see:
void thd_storage_lock_wait(THD *thd, long long value)
{
  thd->utime_after_lock+= value;
}
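Although the function is trivial, note that it can be called several times during one statement, once per row-lock wait, so the waits accumulate. The sketch below illustrates this (a simplified stand-in: utime_after_lock is treated purely as an accumulator here, whereas in the server it starts from the absolute timestamp set by set_time_after_lock):

#include <cstdint>
#include <cstdio>

// Simplified stand-in for THD with only the field that matters here.
struct THD {
    uint64_t utime_after_lock = 0;   // microseconds
};

// Same shape as the real function: every wait is simply added.
static void thd_storage_lock_wait(THD *thd, long long value) {
    thd->utime_after_lock += value;
}

int main() {
    THD thd;
    // Hypothetical: one DELETE hits two separate InnoDB row-lock waits.
    thd_storage_lock_wait(&thd, 9503561);   // ~9.5 s on the first conflicting row
    thd_storage_lock_wait(&thd, 120000);    // ~0.12 s on a second row
    std::printf("accumulated lock wait: %.6f s\n", thd.utime_after_lock / 1e6);
    return 0;
}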
This is Percona's explanation of the log_slow_verbosity parameter, which controls the extra slow log output shown below:
Specifies how much information to include in your slow log. The value is a comma-delimited string, and can contain any combination of the following values:
microtime: Log queries with microsecond precision (mandatory).
query_plan: Log information about the query’s execution plan (optional).
innodb: Log InnoDB statistics (optional).
minimal: Equivalent to enabling just microtime.
standard: Equivalent to enabling microtime,innodb.
full: Equivalent to all other values OR’ed together.
In short, in Percona Server you can adjust this parameter to get much more detailed output, roughly in the following format:
# Time: 2018-05-30T09:30:12.039775Z
# User@Host: root[root] @ localhost [] Id: 10
# Schema: test Last_errno: 1317 Killed: 0
# Query_time: 19.254508 Lock_time: 0.001043 Rows_sent: 0 Rows_examined: 0 Rows_affected: 0
# Bytes_sent: 44 Tmp_tables: 0 Tmp_disk_tables: 0 Tmp_table_sizes: 0
# InnoDB_trx_id: 0
# QC_Hit: No Full_scan: No Full_join: No Tmp_table: No Tmp_table_on_disk: No
# Filesort: No Filesort_on_disk: No Merge_passes: 0
# InnoDB_IO_r_ops: 0 InnoDB_IO_r_bytes: 0 InnoDB_IO_r_wait: 0.000000
# InnoDB_rec_lock_wait: 0.000000 InnoDB_queue_wait: 0.000000
# InnoDB_pages_distinct: 0
SET timestamp=1527672612;
select count(*) from z1 limit 1;
The rest of this article explains each part of this output in detail. All of the slow log output is produced by the function File_query_log::write_slow; interested readers can study it on their own. For each field I give where it comes from and what it means; some of the meanings are quoted directly from comments in the source code.
1. Part 1: the time
# Time: 2018-05-30T09:30:12.039775Z
對(duì)應(yīng)的代碼:
my_snprintf(buff, sizeof buff,"# Time: %s\n", my_timestamp);
where my_timestamp comes from
thd->current_utime();
which is essentially:
while (gettimeofday(&t, NULL) != 0)
{}
newtime= (ulonglong)t.tv_sec * 1000000 + t.tv_usec;
return newtime;
So it is simply the current system time obtained via the gettimeofday system call.
Note:
In 5.6 there is one additional check:
if (current_time != last_time)
If two consecutive entries fall within the same second, the time line is not printed, and the time can only be determined from the
SET timestamp=1527753496;
line described later. I did not find such code in 5.7.14.
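For reference, the 5.6 behaviour described above can be sketched like this (a simplified illustration, not the actual 5.6 code): the # Time: header is only written when the wall-clock second changes, and the per-statement SET timestamp line is what readers of the log must rely on otherwise.

#include <cstdio>
#include <ctime>

// Simplified sketch: print "# Time:" only when the second differs from the
// previous slow-log entry; the "SET timestamp=...;" line is always written.
static void write_entry_header(std::time_t current_time, std::time_t *last_time) {
    if (current_time != *last_time) {
        char buf[32];
        std::strftime(buf, sizeof(buf), "%y%m%d %H:%M:%S", std::gmtime(&current_time));
        std::printf("# Time: %s\n", buf);
        *last_time = current_time;
    }
    std::printf("SET timestamp=%lld;\n", (long long)current_time);
}

int main() {
    std::time_t last = 0;
    std::time_t now  = std::time(nullptr);
    write_entry_header(now, &last);   // prints both lines
    write_entry_header(now, &last);   // same second: only SET timestamp is printed
    return 0;
}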
2. Part 2: user information
# User@Host: root[root] @ localhost [] Id: 10
對(duì)應(yīng)的代碼:
buff_len= my_snprintf(buff, 32, "%5u", thd->thread_id());
if (my_b_printf(&log_file, "# User@Host: %s Id: %s\n", user_host, buff)
    == (uint) -1)
  goto err;
user_host is a string; see the following code:
size_t user_host_len= (strxnmov(user_host_buff, MAX_USER_HOST_SIZE,
                                sctx->priv_user().str ? sctx->priv_user().str : "",
                                "[",
                                sctx_user.length ? sctx_user.str :
                                  (thd->slave_thread ? "SQL_SLAVE" : ""),
                                "] @ ",
                                sctx_host.length ? sctx_host.str : "",
                                " [",
                                sctx_ip.length ? sctx_ip.str : "",
                                "]", NullS) - user_host_buff);
It breaks down as follows:
root: m_priv_user - the user privilege we are using; may be "" for an anonymous user.
[root]: m_user - user of the client; set to NULL until the user has been read from the connection.
localhost: m_host - host of the client.
[]: m_ip - client IP.
Id: 10: thd->thread_id(), i.e. the id shown by show processlist.
3. Part 3: schema and related information
# Schema: test Last_errno: 1317 Killed: 0
對(duì)應(yīng)的代碼:
"# Schema: %s Last_errno: %u Killed: %u\n" (thd->db().str ? thd->db().str : ""), thd->last_errno, (uint) thd->killed,
Schema:
m_db - Name of the current (default) database. If there is a current (default) database, "db" contains its name. If there is no current (default) database, "db" is NULL and "db_length" is 0. In other words, "db" and "db_length" must either be NULL, or contain a valid database name.
Last_errno:
Variable last_errno contains the last error/warning acquired during query execution.
Killed: the error code the statement was terminated with. In the source:
enum killed_state
{
NOT_KILLED=0,
KILL_BAD_DATA=1,
KILL_CONNECTION=ER_SERVER_SHUTDOWN,
KILL_QUERY=ER_QUERY_INTERRUPTED,
KILL_TIMEOUT=ER_QUERY_TIMEOUT,
KILLED_NO_VALUE /* means neither of the states */
};
The corresponding entries in the error code list are:
{ "ER_SERVER_SHUTDOWN", 1053, "Server shutdown in progress" },
{ "ER_QUERY_INTERRUPTED", 1317, "Query execution was interrupted" },
{ "ER_QUERY_TIMEOUT", 1886, "Query execution was interrupted, max_statement_time exceeded" },
4. Part 4: execution information
This is probably the part people care about most, and much of this information is present in the default output.
# Query_time: 19.254508 Lock_time: 0.001043 Rows_sent: 0 Rows_examined: 0 Rows_affected: 0
# Bytes_sent: 44 Tmp_tables: 0 Tmp_disk_tables: 0 Tmp_table_sizes: 0
# InnoDB_trx_id: 0
對(duì)應(yīng)代碼:
my_b_printf(&log_file,
            "# Schema: %s Last_errno: %u Killed: %u\n"
            "# Query_time: %s Lock_time: %s Rows_sent: %llu"
            " Rows_examined: %llu Rows_affected: %llu\n"
            "# Bytes_sent: %lu",
            (thd->db().str ? thd->db().str : ""),
            thd->last_errno, (uint) thd->killed,
            query_time_buff, lock_time_buff,
            (ulonglong) thd->get_sent_row_count(),
            (ulonglong) thd->get_examined_row_count(),
            (thd->get_row_count_func() > 0) ?
              (ulonglong) thd->get_row_count_func() : 0,
            (ulong) (thd->status_var.bytes_sent - thd->bytes_sent_old)

my_b_printf(&log_file,
            " Tmp_tables: %lu Tmp_disk_tables: %lu "
            "Tmp_table_sizes: %llu",
            thd->tmp_tables_used, thd->tmp_tables_disk_used,
            thd->tmp_tables_size)

snprintf(buf, 20, "%llX", thd->innodb_trx_id);    /* and thd->innodb_trx_id */
Query_time: the statement's execution time, i.e. the total elapsed time.
Lock_time: the sum of the time spent on MDL locks, InnoDB row locks and MyISAM table locks, i.e. the lock wait time described earlier (strictly speaking it is not purely lock waiting, but the lock waits are included in it).
Let's see where Query_time and Lock_time come from. They are computed in the Query_logger::slow_log_write function:

query_utime= (current_utime > thd->start_utime) ?
             (current_utime - thd->start_utime) : 0;
lock_utime= (thd->utime_after_lock > thd->start_utime) ?
            (thd->utime_after_lock - thd->start_utime) : 0;

current_utime itself comes from

current_utime= thd->current_utime();

which, once again, is just

while (gettimeofday(&t, NULL) != 0)
{}
newtime= (ulonglong)t.tv_sec * 1000000 + t.tv_usec;
return newtime;

i.e. the current time. How thd->utime_after_lock is obtained was described earlier, so I will not repeat it.
Rows_sent: the number of rows sent to the MySQL client. The source comment says:
Number of rows we actually sent to the client
Rows_examined: the number of rows scanned at the InnoDB engine layer. The source comment says (see stack trace 1 in the appendix):
Number of rows read and/or evaluated for a statement. Used for slow log reporting.
An examined row is defined as a row that is read and/or evaluated
according to a statement condition, including in create_sort_index(). Rows may be counted more than once, e.g., a statement including ORDER BY could possibly evaluate the row in filesort() before reading it for e.g. update.
Rows_affected: for statements that modify data (such as DML), the number of affected rows. The source comment says it is set:
for DML statements: to the number of affected rows;
for DDL statements: to 0.
Bytes_sent: the number of bytes of data actually sent to the client; it comes from
(ulong) (thd->status_var.bytes_sent - thd->bytes_sent_old)
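In other words, Bytes_sent is a per-statement delta of a cumulative session counter: a snapshot is taken when the statement starts and subtracted when the slow log entry is written. A minimal sketch of that pattern (a simplified stand-in for the two members named above, not MySQL's actual structures):

#include <cstdint>
#include <cstdio>

struct SessionStats {
    uint64_t bytes_sent     = 0;   // cumulative bytes sent on this connection
    uint64_t bytes_sent_old = 0;   // snapshot taken when the statement starts
};

int main() {
    SessionStats s;
    s.bytes_sent = 1000;                 // traffic from earlier statements

    s.bytes_sent_old = s.bytes_sent;     // statement start: take the snapshot
    s.bytes_sent    += 44;               // the statement sends a 44-byte result

    // What the slow log would print as Bytes_sent for this statement.
    std::printf("Bytes_sent: %llu\n",
                (unsigned long long)(s.bytes_sent - s.bytes_sent_old));
    return 0;
}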
Tmp_tables: the number of temporary tables used.
Tmp_disk_tables: the number of on-disk temporary tables used.
Tmp_table_sizes: the total size of the temporary tables.
These three counters come from:
thd->tmp_tables_used
thd->tmp_tables_disk_used
thd->tmp_tables_size
They are incremented in the free_tmp_table function, as follows:
thd->tmp_tables_used++;
if (entry->file)
{
  thd->tmp_tables_size += entry->file->stats.data_file_length;
  if (entry->file->ht->db_type != DB_TYPE_HEAP)
    thd->tmp_tables_disk_used++;
}
InnoDB_trx_id: the transaction id, i.e. trx->id (/*!< transaction id */).
5. Part 5: optimizer-related information
# QC_Hit: No Full_scan: No Full_join: No Tmp_table: No Tmp_table_on_disk: No
# Filesort: No Filesort_on_disk: No Merge_passes: 0
These lines come from the following code:
my_b_printf(&log_file,
            "# QC_Hit: %s Full_scan: %s Full_join: %s Tmp_table: %s "
            "Tmp_table_on_disk: %s\n"
            "# Filesort: %s Filesort_on_disk: %s Merge_passes: %lu\n",
            ((thd->query_plan_flags & QPLAN_QC) ? "Yes" : "No"),
            ((thd->query_plan_flags & QPLAN_FULL_SCAN) ? "Yes" : "No"),
            ((thd->query_plan_flags & QPLAN_FULL_JOIN) ? "Yes" : "No"),
            ((thd->query_plan_flags & QPLAN_TMP_TABLE) ? "Yes" : "No"),
            ((thd->query_plan_flags & QPLAN_TMP_DISK) ? "Yes" : "No"),
            ((thd->query_plan_flags & QPLAN_FILESORT) ? "Yes" : "No"),
            ((thd->query_plan_flags & QPLAN_FILESORT_DISK) ? "Yes" : "No"),
Note the technique used here: each bit of query_plan_flags carries one piece of information, so a lot of information is packed into very little space; this is a common approach in C/C++.
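As a small aside, the packed-flag technique can be sketched in a few lines of C++ (the flag names mirror the QPLAN_* constants above, but the bit values here are illustrative, not the ones used by the server):

#include <cstdio>

// Illustrative bit values; one bit per property of the query plan.
enum QueryPlanFlag : unsigned long {
    QPLAN_QC        = 1UL << 0,  // query cache hit
    QPLAN_FULL_SCAN = 1UL << 1,  // full scan
    QPLAN_FULL_JOIN = 1UL << 2,  // join without a usable index on the inner table
    QPLAN_TMP_TABLE = 1UL << 3,  // temporary table used
    QPLAN_FILESORT  = 1UL << 4   // filesort used
};

int main() {
    unsigned long query_plan_flags = 0;

    // During execution each code path just ORs in its own bit...
    query_plan_flags |= QPLAN_FULL_SCAN;
    query_plan_flags |= QPLAN_FILESORT;

    // ...and when the slow log line is written each bit is tested independently.
    std::printf("Full_scan: %s Filesort: %s Tmp_table: %s\n",
                (query_plan_flags & QPLAN_FULL_SCAN) ? "Yes" : "No",
                (query_plan_flags & QPLAN_FILESORT)  ? "Yes" : "No",
                (query_plan_flags & QPLAN_TMP_TABLE) ? "Yes" : "No");
    return 0;
}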
QC_Hit: whether the query was served from the query cache.
Full_scan: equivalent to Select_scan, i.e. whether a full scan was performed (including full index scans, Using index).
Full_join: equivalent to Select_full_join, i.e. whether the driven (inner) table used an index; if no index was used this is Yes.
Consider the following execution plan:
mysql> desc select *,sleep(1) from testuin a,testuin1 b where a.id1=b.id1;
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
| id | select_type | table | partitions | type | possible_keys | key  | key_len | ref  | rows | filtered | Extra                                              |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
|  1 | SIMPLE      | a     | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    5 |   100.00 | NULL                                               |
|  1 | SIMPLE      | b     | NULL       | ALL  | NULL          | NULL | NULL    | NULL |    5 |    20.00 | Using where; Using join buffer (Block Nested Loop) |
+----+-------------+-------+------------+------+---------------+------+---------+------+------+----------+----------------------------------------------------+
2 rows in set, 1 warning (0.00 sec)
The corresponding output is:
# QC_Hit: No Full_scan: Yes Full_join: Yes
Tmp_table: whether a temporary table was used; set in the create_tmp_table function.
Tmp_table_on_disk: whether an on-disk temporary table was used; for the InnoDB engine this is set in the create_innodb_tmp_table function.
Filesort: whether a sort was performed; set in the filesort function.
Filesort_on_disk: whether the sort had to go to disk; also set in filesort, after checking whether a disk sort file is needed.
Merge_passes: the number of passes performed by the multi-way merge sort.
Variable query_plan_fsort_passes collects information about file sort passes
acquired during query execution.
6. Part 6: InnoDB-related information
# InnoDB_IO_r_ops: 0 InnoDB_IO_r_bytes: 0 InnoDB_IO_r_wait: 0.000000
# InnoDB_rec_lock_wait: 0.000000 InnoDB_queue_wait: 0.000000
# InnoDB_pages_distinct: 0
These lines come from the following code:
char buf[3][20];
snprintf(buf[0], 20, "%.6f", thd->innodb_io_reads_wait_timer / 1000000.0);
snprintf(buf[1], 20, "%.6f", thd->innodb_lock_que_wait_timer / 1000000.0);
snprintf(buf[2], 20, "%.6f", thd->innodb_innodb_que_wait_timer / 1000000.0);
if (my_b_printf(&log_file,
                "# InnoDB_IO_r_ops: %lu InnoDB_IO_r_bytes: %llu "
                "InnoDB_IO_r_wait: %s\n"
                "# InnoDB_rec_lock_wait: %s InnoDB_queue_wait: %s\n"
                "# InnoDB_pages_distinct: %lu\n",
                thd->innodb_io_reads, thd->innodb_io_read,
                buf[0], buf[1], buf[2],
                thd->innodb_page_access) == (uint) -1)
InnoDB_IO_r_ops: the number of physical read I/O operations.
InnoDB_IO_r_bytes: the total number of bytes read by physical I/O.
InnoDB_IO_r_wait: the time spent waiting for physical reads. InnoDB marks a page as busy with a physical read using the BUF_IO_READ flag; see the buf_wait_for_read function.
InnoDB_rec_lock_wait: the time spent waiting for row locks; set in the que_thr_end_lock_wait function.
InnoDB_queue_wait: the time spent waiting to enter the InnoDB engine (the concurrency queue); set in the srv_conc_enter_innodb_with_atomics function. (See http://blog.itpub.net/7728585/viewspace-2140446/)
InnoDB_pages_distinct: the number of distinct pages InnoDB accessed, covering both physical and logical I/O; set at the end of buf_page_get_gen via the _increment_page_get_statistics function.
7. Part 7: SET timestamp
SET timestamp=1527753496;
This line comes from the code below; as the source comment explains, it is simply the server's current time (current_utime).
/*
  This info used to show up randomly, depending on whether the query
  checked the query start time or not. now we always write current
  timestamp to the slow log
*/
end= my_stpcpy(end, ",timestamp=");
end= int10_to_str((long) (current_utime / 1000000), end, 10);
if (end != buff)
{
  *end++=';';
  *end='\n';
  if (my_b_write(&log_file, (uchar*) "SET ", 4) ||
      my_b_write(&log_file, (uchar*) buff + 1, (uint) (end-buff)))
    goto err;
}
Summary
By walking through the source code, this article explained the criterion MySQL uses to decide whether a statement is recorded in the slow query log, as well as the meaning of each field in the output. These are just my own conclusions from reading the code; if you see things differently, feel free to discuss.
Appendix: stack trace 1
This trace follows how Rows_examined changes, i.e. where join->examined_rows++ is incremented.
(gdb) info b
Num     Type           Disp Enb Address            What
1       breakpoint     keep y   0x0000000000ebd5f3 in main(int, char**) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/main.cc:25
        breakpoint already hit 1 time
4       breakpoint     keep y   0x000000000155b94f in do_select(JOIN*) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:872
        breakpoint already hit 5 times
5       breakpoint     keep y   0x000000000155ca39 in evaluate_join_record(JOIN*, QEP_TAB*) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:1473
        breakpoint already hit 20 times
6       breakpoint     keep y   0x00000000019b4313 in ha_innobase::index_first(uchar*) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9547
        breakpoint already hit 4 times
7       breakpoint     keep y   0x00000000019b45cd in ha_innobase::rnd_next(uchar*) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9651
8       breakpoint     keep y   0x00000000019b2ba6 in ha_innobase::index_read(uchar*, uchar const*, uint, ha_rkey_function) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9004
        breakpoint already hit 3 times
9       breakpoint     keep y   0x00000000019b4233 in ha_innobase::index_next(uchar*) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9501
        breakpoint already hit 5 times

#0  ha_innobase::index_next (this=0x7fff2cbc6b40, buf=0x7fff2cbc7080 "\375\n") at /root/mysql5.7.14/percona-server-5.7.14-7/storage/innobase/handler/ha_innodb.cc:9501
#1  0x0000000000f680d8 in handler::ha_index_next (this=0x7fff2cbc6b40, buf=0x7fff2cbc7080 "\375\n") at /root/mysql5.7.14/percona-server-5.7.14-7/sql/handler.cc:3269
#2  0x000000000155fa02 in join_read_next (info=0x7fff2c007750) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:2660
#3  0x000000000155c397 in sub_select (join=0x7fff2c007020, qep_tab=0x7fff2c007700, end_of_records=false) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:1274
#4  0x000000000155bd06 in do_select (join=0x7fff2c007020) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:944
#5  0x0000000001559bdc in JOIN::exec (this=0x7fff2c007020) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_executor.cc:199
#6  0x00000000015f9ea6 in handle_query (thd=0x7fff2c000b70, lex=0x7fff2c003150, result=0x7fff2c006cd0, added_options=0, removed_options=0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_select.cc:184
#7  0x00000000015acd05 in execute_sqlcom_select (thd=0x7fff2c000b70, all_tables=0x7fff2c006688) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:5391
#8  0x00000000015a5320 in mysql_execute_command (thd=0x7fff2c000b70, first_level=true) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:2889
#9  0x00000000015adcd6 in mysql_parse (thd=0x7fff2c000b70, parser_state=0x7ffff035b600) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:5836
#10 0x00000000015a1b95 in dispatch_command (thd=0x7fff2c000b70, com_data=0x7ffff035bd70, command=COM_QUERY) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1447
#11 0x00000000015a09c6 in do_command (thd=0x7fff2c000b70) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/sql_parse.cc:1010
#12 0x00000000016e29d0 in handle_connection (arg=0x3859ae0) at /root/mysql5.7.14/percona-server-5.7.14-7/sql/conn_handler/connection_handler_per_thread.cc:312
#13 0x0000000001d7bfdc in pfs_spawn_thread (arg=0x38607b0) at /root/mysql5.7.14/percona-server-5.7.14-7/storage/perfschema/pfs.cc:2188
#14 0x0000003f74807aa1 in start_thread () from /lib64/libpthread.so.0
#15 0x0000003f740e8bcd in clone () from /lib64/libc.so.6