Interview Questions And Answers Set - 5
MySQL - SQL Commands Related to Replication
Replication can be controlled through the SQL interface. Below is a
summary of the commands:
SLAVE START Starts the slave thread. (Slave)
SLAVE STOP Stops the slave thread. (Slave)
SET SQL_LOG_BIN=0 Disables update logging if the user has process privilege.
Ignored otherwise. (Master)
SET SQL_LOG_BIN=1 Re-enables update logging if the user has process privilege.
Ignored otherwise. (Master)
SET SQL_SLAVE_SKIP_COUNTER=n Skips the next n events from the master. Only valid
when the slave thread is not running; otherwise it gives an error. Useful for
recovering from replication glitches.
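Recovering from a glitch with this command typically looks like the following
(this assumes the statement that broke replication is a single event):

```sql
-- On the slave: stop the slave thread, skip the offending event, restart.
SLAVE STOP;
SET SQL_SLAVE_SKIP_COUNTER=1;
SLAVE START;
-- Then verify that replication has resumed:
SHOW SLAVE STATUS;
```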
RESET MASTER Deletes all binary logs listed in the index file, resetting the
binlog index file to be empty. In pre-3.23.26 versions the command was called
FLUSH MASTER. (Master)
RESET SLAVE Makes the slave forget its replication position in the master logs.
In pre-3.23.26 versions the command was called FLUSH SLAVE. (Slave)
LOAD TABLE tblname FROM MASTER Downloads a copy of the table from the master to
the slave. (Slave)
CHANGE MASTER TO master_def_list Changes the master parameters to the values
specified in master_def_list and restarts the slave thread. master_def_list is a
comma-separated list of master_def where master_def is one of the following:
MASTER_HOST, MASTER_USER, MASTER_PASSWORD, MASTER_PORT, MASTER_CONNECT_RETRY,
MASTER_LOG_FILE, MASTER_LOG_POS. Example (the host, user, password, and log
coordinates below are placeholders):
CHANGE MASTER TO MASTER_HOST='master2.example.com', MASTER_USER='repl',
MASTER_PASSWORD='secret', MASTER_LOG_FILE='master2-bin.001', MASTER_LOG_POS=4;
You only need to specify the values that need to be changed. The values that you
omit will stay the same with the exception of when you change the host or the
port. In that case, the slave will assume that since you are connecting to a
different host or a different port, the master is different. Therefore, the old
values of log and position are not applicable anymore, and will automatically be
reset to an empty string and 0, respectively (the start values). Note that if
you restart the slave, it will remember its last master. If this is not
desirable, you should delete the `master.info' file before restarting, and the
slave will read its master from my.cnf or the command line. (Slave)
SHOW MASTER STATUS Provides status information on the binlog of the master.
(Master)
SHOW SLAVE STATUS Provides status information on essential parameters of the
slave thread. (Slave)
SHOW MASTER LOGS Only available starting in Version 3.23.28. Lists the binary
logs on the master. You should use this command prior to PURGE MASTER LOGS TO to
find out how far back you can safely purge. (Master)
PURGE MASTER LOGS TO 'logname' Available starting in Version 3.23.28. Deletes
all the replication logs that are listed in the log index as being prior to the
specified log, and removes them from the log index, so that the given log
becomes the first. Example:
PURGE MASTER LOGS TO 'mysql-bin.010'
This command will do nothing and fail with an error if you have an active slave
that is currently reading one of the logs you are trying to delete. However, if
you have a dormant slave, and you happen to purge one of the logs it still needs
to read, the slave will be unable to replicate once it comes up. The command is
safe to run while slaves are replicating - you do not need to stop them. You
must first check all the slaves with SHOW SLAVE STATUS to see which log they are
on, then get a listing of the logs on the master with SHOW MASTER LOGS, find the
earliest log among all the slaves (if all the slaves are up to date, this will
be the last log on the list), optionally back up all the logs you are about to
delete, and purge up to the target log.
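The procedure above, written out as a sequence of statements (the log name in
the last step is a placeholder for the earliest log still needed by any slave):

```sql
-- On each slave: find out which master log it is currently reading.
SHOW SLAVE STATUS;
-- On the master: list the binary logs.
SHOW MASTER LOGS;
-- After optionally backing up the logs, purge everything prior to the
-- earliest log that any slave still needs:
PURGE MASTER LOGS TO 'mysql-bin.007';
```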
MySQL - Replication FAQ
Why do I sometimes see more than one Binlog_Dump thread on the master after I
have restarted the slave?
Binlog_Dump is a continuous process that is handled by the server in the
following way:
- Catch up on the updates.
- Once there are no more updates left, go into pthread_cond_wait(), from which
we can be awakened either by an update or a kill.
- On wake up, check the reason. If we are not supposed to die, continue the
binlog dump.
- If there is some fatal error, such as detecting a dead client, terminate the
thread.
So if the slave thread stops on the slave, the corresponding Binlog_Dump thread
on the master will not notice it until after at least one update to the master
(or a kill), which is needed to wake it up from pthread_cond_wait(). In the
meantime, the slave could have opened another connection, which resulted in
another Binlog_Dump thread.
The above problem should not be present in Version 3.23.26 and later versions.
In Version 3.23.26 we added server-id to each replication server, and now all
the old zombie threads are killed on the master when a new replication thread
connects from the same slave.
How do I rotate replication logs?
In Version 3.23.28 and later you should use the PURGE MASTER LOGS TO command
after determining which logs can be deleted, and optionally backing them up
first. In earlier versions the process is much more painful, and cannot be
safely done without stopping all the slaves if you plan to re-use log names. You
will need to stop the slave threads, edit the binary log index file, delete all
the old logs, restart the master, start the slave threads, and then remove the
old log files.
How do I upgrade on a hot replication setup?
If you are upgrading pre-3.23.26 versions, you should just lock the master
tables, let the slave catch up, then run FLUSH MASTER on the master, and FLUSH
SLAVE on the slave to reset the logs, then restart new versions of the master
and the slave. Note that the slave can stay down for some time - since the
master is logging all the updates, the slave will be able to catch up once it is
up and can connect.
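A sketch of that pre-3.23.26 sequence in SQL (using FLUSH TABLES WITH READ LOCK
as one way to lock the master tables - any method that blocks updates works):

```sql
-- On the master: block all updates and let the slave catch up.
FLUSH TABLES WITH READ LOCK;
-- Once SHOW SLAVE STATUS on the slave shows it has caught up:
FLUSH MASTER;   -- on the master: resets the binary logs
FLUSH SLAVE;    -- on the slave: forgets the old replication position
-- Now shut down both servers, install the new binaries, and restart.
```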
After 3.23.26, we have locked the replication protocol for modifications, so you
can upgrade masters and slaves on the fly to a newer 3.23 version and you can
have different versions of MySQL running on the slave and the master, as long as
they are both newer than 3.23.26.
What issues should I be aware of when setting up two-way replication?
MySQL replication currently does not support any locking protocol between master
and slave to guarantee the atomicity of a distributed (cross-server) update. In
other words, it is possible for client A to make an update on co-master 1, and
in the meantime, before it propagates to co-master 2, client B could make an
update on co-master 2 that makes the update of client A behave differently than
it did on co-master 1. Thus, when the update of client A makes it to co-master
2, it will produce tables that are different from what you have on co-master 1,
even after all the updates from co-master 2 have also propagated. So you should
not co-chain two servers in a two-way replication relationship unless you are
sure that your updates can safely happen in any order, or unless you take care
of mis-ordered updates somehow in the client code.
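A concrete illustration, using a hypothetical table t with a numeric column x:
the two updates below do not commute, so the co-masters diverge depending on
arrival order.

```sql
-- Client A, on co-master 1:
UPDATE t SET x = x + 1;
-- Client B, on co-master 2, before A's update has propagated:
UPDATE t SET x = x * 2;
-- Starting from x = 3, co-master 1 applies A then B: (3+1)*2 = 8,
-- while co-master 2 applies B then A: (3*2)+1 = 7.
```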
You must also realize that two-way replication actually does not improve
performance very much, if at all, as far as updates are concerned. Each server
must process the same number of updates that a single server would. The only
difference is that there will be a little less lock contention, because the
updates originating on the other server will be serialized in one slave thread.
Even this benefit might be offset by network delays.
How can I use replication to improve performance of my system?
You should set up one server as the master, and direct all writes to it, and
configure as many slaves as you have the money and rackspace for, distributing
the reads among the master and the slaves. You can also start the slaves with
--skip-bdb, --low-priority-updates and --delay-key-write-for-all-tables to get
speed improvements for the slave. In this case the slave will use
non-transactional MyISAM tables instead of BDB tables to get more speed.
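For example, a 3.23-era slave could be started with those options as follows
(the safe_mysqld path and the choice of launcher are installation-dependent):

```shell
safe_mysqld --skip-bdb --low-priority-updates \
            --delay-key-write-for-all-tables &
```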
What should I do to prepare my client code to use performance-enhancing
replication?
A: If the part of your code that is responsible for database access has been
properly abstracted/modularized, converting it to run with the replicated setup
should be very smooth and easy - just change the implementation of your database
access to read from some slave or the master, and to always write to the master.
If your code does not have this level of abstraction, setting up a replicated
system will give you an opportunity and motivation to clean it up. You should
start by creating a wrapper library/module with functions for connecting for
reads, connecting for writes, doing a read, and doing a write - for example,
safe_reader_connect(), safe_writer_connect(), safe_reader_query(), and
safe_writer_query(). The safe_ prefix means that the function will take care of
handling all the error conditions.
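A minimal sketch of such a wrapper module in Python. The class and function
names are illustrative, not part of any MySQL API, and the connection objects
here are stand-ins; a real version would open DB-API connections and execute the
queries with retry-on-error handling.

```python
import random

class ReplicatedDB:
    """Routes writes to the master and reads to any available server."""

    def __init__(self, master, slaves):
        self.master = master   # connection handle (or parameters) for the master
        self.slaves = slaves   # list of handles for the slaves

    def safe_writer_connect(self):
        # All writes must go to the master.
        return self.master

    def safe_reader_connect(self):
        # Reads may go to any slave or to the master; pick one at random
        # as a crude form of load distribution.
        return random.choice(self.slaves + [self.master])

    def safe_writer_query(self, sql):
        # A real version would execute sql on the connection and handle errors;
        # here we just return the routing decision for illustration.
        conn = self.safe_writer_connect()
        return conn, sql

    def safe_reader_query(self, sql):
        conn = self.safe_reader_connect()
        return conn, sql
```

With this interface in place, adding instrumentation (query timing, error
logging) later means touching only these four functions.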
You should then convert your client code to use the wrapper library. It may be a
painful and scary process at first, but it will pay off in the long run. All
applications that follow the above pattern will be able to take advantage of a
one-master/many-slaves solution. The code will be a lot easier to maintain, and
adding troubleshooting options will be trivial. You will just need to modify one
or two functions, for example, to log how long each query took, or which query,
among your many thousands, gave you an error. If you have written a lot of code
already, you may want to automate the conversion task by using Monty's replace
utility, which comes with the standard distribution of MySQL, or just write your
own Perl script. Hopefully, your code follows some recognizable pattern. If not,
then you are probably better off re-writing it anyway, or at least going through
and manually beating it into a pattern.
Note that, of course, you can use different names for the functions. What is
important is having a unified interface for connecting for reads, connecting for
writes, doing a read, and doing a write.