Category Archives: MySQL

Notes

Just killed an ALTER TABLE that was adding an index partway through; it still completed on the master, but not on the slave.

Dropping the index on the master then broke replication: the slave stopped syncing, since the DROP INDEX referred to an index that never existed there.

The alert fired immediately; skipping one error brought replication back.
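For reference, the fix was the standard one-error skip on the slave (a sketch assuming classic binlog-position replication and credentials in ~/.my.cnf; under GTID you would inject an empty transaction instead):

# on the slave: skip exactly one failed event, then resume
mysql -e "STOP SLAVE;"
mysql -e "SET GLOBAL sql_slave_skip_counter = 1;"
mysql -e "START SLAVE;"
# verify both replication threads came back
mysql -e "SHOW SLAVE STATUS\G" | grep -E 'Slave_(IO|SQL)_Running'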

杨龙

October 20, 2018

Just installed MySQL 8.0 with yum, and it uninstalled my MariaDB outright.
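A habit that would have avoided the surprise (a sketch assuming the official MySQL yum repo, where the server package is mysql-community-server): check what is already installed, then read the transaction summary yum prints before confirming, since it lists every package it plans to remove.

# see whether mariadb packages are already installed
rpm -qa | grep -iE 'mysql|mariadb'
# the transaction summary shows what yum will remove; confirm only after reading it
yum install mysql-community-server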

pt-online-schema-change: recommended parameters

pt-online-schema-change \
--user username \
--password 123456 \
--host x.x.x.x \
D=db,t=table \
--execute --alter "ADD COLUMN vip_version TINYINT NOT NULL DEFAULT 0" \
--charset utf8mb4 \
--no-version-check \
--nocheck-replication-filters \
--max-load Threads_running=10 \
--critical-load Threads_running=200 \
--chunk-time=0.5

--charset utf8mb4 sets the connection character set.

--max-load Threads_running=10 pauses row copying unless load is low; keep this threshold as small as practical.

--critical-load Threads_running=200 is the abort threshold; set it high so the run is not killed too easily.

--no-version-check may be required on Alibaba Cloud RDS.

--chunk-time=0.5 is preferable to setting a fixed chunk-size, since the chunk size then self-adjusts to hit the target time.

--nocheck-replication-filters is needed when running against a master-slave setup; Alibaba Cloud RDS may not need it.
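Before the real run it is worth swapping --execute for --dry-run, which creates and alters the empty copy table but copies no rows, so mistakes in the ALTER surface cheaply:

pt-online-schema-change \
--user username \
--password 123456 \
--host x.x.x.x \
D=db,t=table \
--dry-run --alter "ADD COLUMN vip_version TINYINT NOT NULL DEFAULT 0" \
--charset utf8mb4 \
--no-version-check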

A simple way to let a task keep running only while MySQL load is low, and sleep while load is high:

/**
 * Block until MySQL is idle enough: poll Threads_running and sleep
 * $sec seconds at a time while it stays above 10, then return so the
 * caller can continue its work.
 */
public static function runSleep($sec = 10, $echo = null) {
    while (true) {
        $result = NoThingModel::getInstance()->db->select("show status like 'Threads_running'", [], false);
        $row = $result ? reset($result) : null;
        // stop waiting if the status row is missing or malformed
        if (!$row || $row->Variable_name != 'Threads_running') {
            break;
        }
        // load is low enough, let the caller proceed
        if ($row->Value <= 10) {
            break;
        }
        if ($echo) {
            echo "sleep {$sec}\n";
        }
        sleep($sec);
    }
}
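The same throttle works as a few lines of shell for jobs that are not PHP, for example ahead of the backup script below (a sketch: the threshold 10 and the 10-second sleep mirror the function above, and credentials are assumed to come from ~/.my.cnf):

# wait until MySQL's Threads_running drops to 10 or below
while true; do
    RUNNING=$(mysql -N -e "SHOW GLOBAL STATUS LIKE 'Threads_running'" | awk '{print $2}')
    [ -z "$RUNNING" ] && break            # no status row: stop waiting
    [ "$RUNNING" -le 10 ] && break        # load is low enough: continue
    echo "Threads_running=$RUNNING, sleeping 10s"
    sleep 10
done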

Automatic MySQL backups to Amazon S3


#!/bin/sh

# Updates etc at: https://github.com/woxxy/MySQL-backup-to-Amazon-S3
# Under a MIT license

# change these variables to what you need
MYSQLROOT=XXX
MYSQLPASS=XXXX
S3BUCKET=xx-mysql-backup
# DATABASE='--all-databases'
DATABASE=${1-default}
FILENAME=${DATABASE}
HOST=master
DUMPOPTION='--quick --single-transaction'
# the following line prefixes the backups with the defined directory. it must be blank or end with a /
S3PATH=mysql-backup/${DATABASE}/
# when running via cron, the PATHs MIGHT be different. If you have a custom/manual MYSQL install, you should set this manually like MYSQLDUMPPATH=/usr/local/mysql/bin/
MYSQLDUMPPATH=
#tmp path.
TMP_PATH=/tmp/

DATESTAMP=$(date +".%Y-%m-%d.%H:%M:%S")
# date prefix of the backup taken 15 days ago; the time is omitted so the
# prefix match in the s3cmd mv below catches it regardless of when it ran
DELDATESTAMP=$(date -d "15 days ago" +".%Y-%m-%d")
DAY=$(date +"%d")
DAYOFWEEK=$(date +"%A")

PERIOD=${2-day}
if [ ${PERIOD} = "auto" ]; then
    if [ ${DAY} = "01" ]; then
        PERIOD=month
    elif [ ${DAYOFWEEK} = "Sunday" ]; then
        PERIOD=week
    else
        PERIOD=day
    fi
fi

echo "Selected period: $PERIOD."

echo "Starting backing up the database to a .gz file..."

# dump all databases
${MYSQLDUMPPATH}mysqldump -h${HOST} ${DUMPOPTION} --user=${MYSQLROOT} --password=${MYSQLPASS} ${DATABASE} | gzip > ${TMP_PATH}${FILENAME}.gz

echo "Done backing up the database to a file."
# echo "Starting compression..."

# tar czf ${TMP_PATH}${FILENAME}${DATESTAMP}.tar.gz ${TMP_PATH}${FILENAME}.sql
mv ${TMP_PATH}${FILENAME}.gz ${TMP_PATH}${FILENAME}${DATESTAMP}.gz

# echo "Done compressing the backup file."

# upload all databases
echo "Uploading the new backup..."
s3cmd put -f --check-md5 -s --continue-put ${TMP_PATH}${FILENAME}${DATESTAMP}.gz s3://${S3BUCKET}/${S3PATH}${PERIOD}/
# check $? right away, before any echo, or it reflects the wrong command
if [ $? -ne 0 ]
then
    echo "Upload failed, re-uploading the backup file..."
    s3cmd put -f --check-md5 -s --continue-put ${TMP_PATH}${FILENAME}${DATESTAMP}.gz s3://${S3BUCKET}/${S3PATH}${PERIOD}/
    echo "Re-upload of backup file done."
else
    echo "New backup uploaded."
fi

echo "Moving the backup from past $PERIOD to another folder..."
s3cmd mv --recursive s3://${S3BUCKET}/${S3PATH}${PERIOD}/${FILENAME}${DELDATESTAMP} s3://${S3BUCKET}/${S3PATH}previous_${PERIOD}/
echo "Past backup moved."

# backups older than 15 days were staged into previous_${PERIOD} above; purge them now
echo "Removing old backups (15 days and older)..."
s3cmd del --recursive s3://${S3BUCKET}/${S3PATH}previous_${PERIOD}/
echo "Old backup removed."

# remove databases dump
# rm ${TMP_PATH}${FILENAME}.sql
echo "Removing the gz files..."
rm ${TMP_PATH}${FILENAME}${DATESTAMP}.gz
echo "Files removed."
echo "All done."