
Several Practical Approaches to Quickly Deleting Huge Amounts of Data (Tens of Millions of Rows) in MySQL

Source: http://www.cnblogs.com/NaughtyCat/p/one-fast-way-to-delete-huge-data-in-mysql.html

I recently ran into a performance bottleneck at work: a MySQL table gains roughly 7.76 million new rows per day, with a 7-day retention period, and data older than 7 days has to be aged out before new rows are inserted. After 9 days of continuous operation, deleting one day's worth of data took about 3.5 hours (environment: 128 GB RAM, 32 cores, 4 TB disk), which was unacceptable. Of course, if the entire table needs to be emptied, TRUNCATE TABLE is unquestionably the way to go.

The initial approach (chosen before we realized how slow deletion would be) was the simplest and most naive one:

DELETE FROM table_name WHERE cnt_date <= target_date

After some investigation, we eventually got the deletion of those 7.7+ million rows (out of roughly 46 million rows in the table) down to seconds. The optimizations built on one another, and the details are recorded below:

  • Batch deletion (capping the row count per statement), looping until all target rows are gone; at the same time, raise key_buffer_size from the default 8M to 512M

Result: deletion time dropped from about 3.5 hours to 3 hours

(1) Use LIMIT (tune the batch size as appropriate) to cap the rows deleted per statement, then check whether any expired rows remain. Source code below (Python):

import pandas as pd

def delete_expired_data(mysqlconn, day):
    """Delete rows with cnt_date <= day in batches of 50,000."""
    mysqlcur = mysqlconn.cursor()
    delete_sql = "DELETE FROM table_name WHERE cnt_date <= '%s' LIMIT 50000" % day
    # Probe query: returns a row as long as any expired data remains
    query_sql = "SELECT srcip FROM table_name WHERE cnt_date <= '%s' LIMIT 1" % day
    try:
        df = pd.read_sql(query_sql, mysqlconn)
        while df is not None and not df.empty:
            mysqlcur.execute(delete_sql)
            mysqlconn.commit()
            df = pd.read_sql(query_sql, mysqlconn)
    except Exception:
        mysqlconn.rollback()

(2) Increase key_buffer_size

mysqlcur.execute("SET GLOBAL key_buffer_size = 536870912")

key_buffer_size is a global variable (it sizes the MyISAM key cache); for details see the MySQL reference manual: https://dev.mysql.com/doc/refman/5.7/en/server-configuration.html
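
The same setting can also be inspected and changed from any SQL client; a minimal sketch (536870912 bytes is simply 512M):

SHOW VARIABLES LIKE 'key_buffer_size';   -- current value, in bytes
SET GLOBAL key_buffer_size = 536870912;  -- 512M; note SET GLOBAL is not persisted across restarts in 5.7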

  • DELETE QUICK + OPTIMIZE TABLE

Applicable to: MyISAM tables

Why: in MyISAM, deleted rows are kept in a linked list, and their space and row positions are reused by subsequent INSERTs. A plain DELETE also merges index leaf blocks as it goes, which involves a lot of copying and moving in memory; DELETE QUICK skips that index-leaf merging, and OPTIMIZE TABLE afterwards rebuilds the index outright, effectively clearing the data blocks and recreating them from scratch.

Result: deletion time dropped from about 3.5 hours to 1 hour 40 minutes

The code is as follows:

def delete_expired_data(mysqlconn, day):
    """Batch-delete with QUICK, then rebuild the table with OPTIMIZE."""
    mysqlcur = mysqlconn.cursor()
    # QUICK: MyISAM does not merge index leaves during the delete
    delete_sql = "DELETE QUICK FROM table_name WHERE cnt_date <= '%s' LIMIT 50000" % day
    query_sql = "SELECT srcip FROM table_name WHERE cnt_date <= '%s' LIMIT 1" % day
    optimize_sql = "OPTIMIZE TABLE table_name"
    try:
        df = pd.read_sql(query_sql, mysqlconn)
        while df is not None and not df.empty:
            mysqlcur.execute(delete_sql)
            mysqlconn.commit()
            df = pd.read_sql(query_sql, mysqlconn)
        # Reclaim the space left behind by the QUICK deletes and rebuild the index
        mysqlcur.execute(optimize_sql)
        mysqlconn.commit()
    except Exception:
        mysqlconn.rollback()
  • Partition the table, and age data out by truncating the partition holding the expired date (the final solution: finishes in seconds)

MySQL offers several partitioning types, including RANGE, KEY, LIST, and HASH; see the official documentation for details. Because the dates here keep rolling forward, fixed RANGE partition names are a poor fit, and HASH partitioning suits the case better.

(1) Define the partitions with the following SQL:

ALTER TABLE table_name PARTITION BY HASH(TO_DAYS(cnt_date)) PARTITIONS 7;

TO_DAYS converts the date (which must be a date type, otherwise you get the error: Constant, random or timezone-dependent expressions in (sub)partitioning function are not allowed) to a day number (the number of days since year 0), which is then hashed; 7 partitions are created. In effect, each row lands in partition days MOD 7.
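
A quick sanity check of the mapping, using the TO_DAYS example value from the MySQL manual:

SELECT TO_DAYS('2007-10-07');         -- 733321 (days since year 0)
SELECT MOD(TO_DAYS('2007-10-07'), 7); -- 1, so that day's rows land in partition p1

Consecutive days then cycle through p2, p3, ..., p0, so a 7-day retention window fills all seven partitions with exactly one day each.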

(2) Look up the partition containing the date to be aged out, with the following SQL:

"explain partitions select * from g_visit_relation_asset where cnt_date = '%s'" % expired_day

(3) OPTIMIZE or REBUILD the partition, with the following SQL:

"ALTER TABLE g_visit_relation_asset OPTIMIZE PARTITION '%s'" % partition

The complete code (Python) follows; it loops backwards, clearing the data for each date up to and including the given day:

from datetime import timedelta
import pandas as pd

def clear_partition_data(mysqlconn, day):
    """Truncate the partition for `day`, then keep walking back one day at a
    time until a date with no remaining rows is reached.

    `day` is a datetime.date; str(expired_day) formats as 'YYYY-MM-DD'.
    """
    mysqlcur = mysqlconn.cursor()
    expired_day = day
    try:
        while True:
            # Stop once no rows remain for this date (EXPLAIN alone always
            # returns a plan row, so it cannot serve as the loop guard)
            probe_sql = ("select srcip from table_name "
                         "where cnt_date = '%s' limit 1" % expired_day)
            if pd.read_sql(probe_sql, mysqlconn).empty:
                break
            # The `partitions` column of EXPLAIN names the partition that
            # partition pruning maps this date to
            explain_sql = ("explain partitions select * from table_name "
                           "where cnt_date = '%s'" % expired_day)
            df = pd.read_sql(explain_sql, mysqlconn)
            partition = df.loc[0, 'partitions']
            if partition is not None:
                # TRUNCATE PARTITION drops every row in the partition at once
                mysqlcur.execute(
                    "ALTER TABLE table_name TRUNCATE PARTITION %s" % partition)
                mysqlconn.commit()
                # OPTIMIZE or REBUILD after the truncate
                mysqlcur.execute(
                    "ALTER TABLE table_name OPTIMIZE PARTITION %s" % partition)
                mysqlconn.commit()
            expired_day = expired_day - timedelta(days=1)
    except Exception:
        mysqlconn.rollback()
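
To confirm which partition each date landed in, and that a truncate actually emptied it, information_schema can be queried; a minimal sketch (TABLE_ROWS is an approximation for InnoDB tables):

SELECT PARTITION_NAME, TABLE_ROWS
FROM information_schema.PARTITIONS
WHERE TABLE_SCHEMA = DATABASE()
  AND TABLE_NAME = 'table_name';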
  • Other approaches

If more than 50% of a table's rows need to be deleted, it is better to copy the rows you want to keep into a new table, drop the original, and rename the new table to the original name. In MySQL:

   CREATE TABLE New LIKE Main;
   INSERT INTO New
      SELECT * FROM Main
         WHERE ...;  -- just the rows you want to keep
   RENAME TABLE Main TO Old, New TO Main;
   DROP TABLE Old;   -- space freed up here

References:

1) https://dev.mysql.com/doc/refman/5.7/en/alter-table-partition-operations.html (partition management operations)

2) http://mysql.rjweb.org/doc.php/deletebig#solutions (solutions for deleting big data)

The copyright of this article is shared by the author and cnblogs (博客园). Reposting is welcome, but if the author's consent has not been obtained, this statement must be kept and a link to the original placed prominently on the page; otherwise the author reserves the right to pursue legal liability.


