Various locks in the MySQL database


Table of contents

How to add table locks
Concurrent inserts
Lock scheduling in MyISAM
Transactions and their ACID properties
Problems with concurrent transactions
What locks does InnoDB have?
InnoDB row lock modes and locking methods
InnoDB row lock implementation
Gap locks
Next-Key locks
InnoDB locking rules
When to use table locks
About deadlock

When introducing the differences between InnoDB and MyISAM, I mentioned that InnoDB supports table-level and row-level locks (row-level by default), while MyISAM supports only table-level locks.

This article focuses on the locks in the MySQL database.


Compared with other databases, MySQL's lock mechanism is relatively simple; its most notable feature is that different storage engines support different locking mechanisms.

MySQL locks can be roughly classified into the following three types:

  • Table-level locks: low overhead, fast locking; no deadlocks; large locking granularity, so the highest probability of lock conflicts and the lowest concurrency.
  • Row-level locks: high overhead, slow locking; deadlocks can occur; the smallest locking granularity, so the lowest probability of lock conflicts and the highest concurrency.
  • Page-level locks: overhead and locking time between those of table locks and row locks; deadlocks can occur; locking granularity between table locks and row locks, with moderate concurrency.


When using MyISAM, we can only use table-level locks, and MySQL table-level locks have two modes: table shared read lock (Table Read Lock) and table exclusive write lock (Table Write Lock). They work as follows:

  • A read operation on a table will not block other users' read requests for the same table, but will block write requests to it;
  • A write operation on a MyISAM table will block other users' read and write operations on the same table;
  • In other words, reads and writes on a MyISAM table, as well as writes and writes, are serial.

When a thread acquires a write lock on a table, only that thread can update the table. Read and write operations by other threads wait until the lock is released.

How to add table locks

Before executing a query statement (SELECT), MyISAM automatically adds read locks to all involved tables; before executing update operations (UPDATE, DELETE, INSERT, etc.), it automatically adds write locks to the involved tables. This process requires no user intervention, so users generally do not need to lock MyISAM tables explicitly with the LOCK TABLES command.

Explicitly locking MyISAM tables is generally done to simulate transactions to some extent and achieve a consistent read of multiple tables at a point in time. For example, suppose there is an order table orders, which records the total amount of each order, and an order detail table order_detail, which records the amount subtotal of each product in an order. Suppose we need to check whether the totals of the two tables match; we might execute the following two SQL statements:

SELECT SUM(total) FROM orders;
SELECT SUM(subtotal) FROM order_detail;

At this time, if the two tables are not locked first, a wrong result may be produced, because the order_detail table may change during the execution of the first statement. The correct way is:

LOCK TABLES orders READ LOCAL, order_detail READ LOCAL;
SELECT SUM(total) FROM orders;
SELECT SUM(subtotal) FROM order_detail;
UNLOCK TABLES;

In particular, the following two points should be noted.

  • The example above adds the LOCAL option to LOCK TABLES. Its role is to allow other users to insert records at the end of the table while the lock is held, provided the table meets MyISAM's concurrent-insert conditions.
  • When explicitly locking tables with LOCK TABLES, locks on all involved tables must be acquired at the same time. That is, after executing LOCK TABLES you can only access the explicitly locked tables, not any unlocked tables; likewise, if you take a read lock, you can only perform queries, not updates. This is essentially the same as with automatic locking: MySQL obtains all the locks required by a SQL statement at once, which is why MyISAM tables are deadlock free (Deadlock Free). In addition, MySQL supports lock escalation: when conditions are met, a table shared lock is allowed to be upgraded to a table exclusive lock.

Suppose a session uses LOCK TABLES to add a read lock to the table film_text. That session can query records in the locked table, but gets an error when it tries to update it or access other tables; meanwhile, another session can still query the table, but its update attempts block in a lock wait.

When using LOCK TABLES, not only must all the tables used be locked at once; also, if the same table appears multiple times in the SQL statement, it must be locked once for each occurrence, under the same alias used in the statement, otherwise an error occurs!
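For example, if a query references the same table twice under two aliases, both aliases must appear in LOCK TABLES (the orders table and its columns here are hypothetical):

```sql
-- Lock the same table once per alias used in the query.
LOCK TABLES orders AS a READ, orders AS b READ;

-- Both aliases are now usable; referencing `orders` without an alias
-- (or under a third alias) would raise "Table 'orders' was not locked".
SELECT a.id, b.id
FROM orders AS a, orders AS b
WHERE a.parent_id = b.id;

UNLOCK TABLES;
```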

Concurrent inserts

Under certain conditions, MyISAM also supports concurrent queries and inserts.

The MyISAM storage engine has a system variable, concurrent_insert, specifically used to control its concurrent-insert behavior; its value can be 0, 1, or 2.

  • When concurrent_insert is 0, concurrent inserts are not allowed.
  • When concurrent_insert is 1, MyISAM allows one process to read a table while another process inserts records at the end of the table, provided there are no holes in the table (i.e., no rows deleted from the middle). This is MySQL's default setting.
  • When concurrent_insert is 2, inserting records at the end of the table is allowed concurrently with reads, regardless of whether the table has holes.

The concurrent-insert feature of the MyISAM storage engine can be used to resolve query-versus-insert lock contention on the same table. For example, setting the concurrent_insert system variable to 2 always allows concurrent inserts; at the same time, periodically executing OPTIMIZE TABLE during idle periods defragments the table and reclaims the holes left by deleted records.
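A minimal sketch of that arrangement (the table name is illustrative; the OPTIMIZE TABLE step should run during idle hours):

```sql
-- Always allow inserts at the end of the table, even when it has holes.
SET GLOBAL concurrent_insert = 2;

-- Run periodically during idle time: defragments the table and reclaims
-- the holes left by deleted records.
OPTIMIZE TABLE orders;
```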

Lock Scheduling in MyISAM

As mentioned earlier, the read and write locks of the MyISAM storage engine are mutually exclusive, and read and write operations are serial. So how does MySQL handle one process requesting a read lock on a MyISAM table while another process requests a write lock on the same table? The answer is that the writer acquires the lock first. Moreover, even if the read request enters the lock waiting queue first and the write request arrives later, the write request is still inserted ahead of the read request! This is because MySQL considers write requests generally more important than read requests. It is also why MyISAM tables are not suitable for applications with a large number of both update and query operations: a flood of updates can make it hard for queries to ever obtain a read lock, potentially blocking forever. Fortunately, we can adjust MyISAM's scheduling behavior with a few settings.

  • By specifying the startup option low-priority-updates, the MyISAM engine gives read requests priority by default.
  • By executing SET LOW_PRIORITY_UPDATES=1, the priority of update requests issued by the current connection is lowered.
  • By specifying the LOW_PRIORITY attribute on an individual INSERT, UPDATE, or DELETE statement, that statement's priority is lowered.

Although all three methods simply choose between update-first and query-first, they can still be used to solve serious read-lock-wait problems in applications where queries are relatively important (such as a user login system).

In addition, MySQL provides a compromise for adjusting read-write conflicts: set an appropriate value for the system parameter max_write_lock_count. After a table's write locks reach this count, MySQL temporarily lowers the priority of write requests, giving waiting read requests a chance to acquire the lock.
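The scheduling adjustments above, sketched in SQL (table and column names, and the value 16, are illustrative):

```sql
-- Connection level: lower the priority of this connection's updates.
SET LOW_PRIORITY_UPDATES = 1;

-- Statement level: lower the priority of a single update.
UPDATE LOW_PRIORITY user_stats SET hits = hits + 1 WHERE user_id = 42;

-- Compromise: after 16 write locks in a row, let pending reads through.
SET GLOBAL max_write_lock_count = 16;
```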

The write-first scheduling mechanism and its workarounds have been discussed above. Here's another point worth emphasizing: long-running queries can also "starve" writers! Applications should therefore avoid long-running queries and not always try to solve a problem with a single SELECT statement, because such seemingly ingenious SQL is often complicated and slow to execute. Where possible, "decompose" the SQL by using intermediate tables and similar measures, so that each step of the query completes in a shorter time, thereby reducing lock conflicts. If complex queries are unavoidable, schedule them during the database's idle time; for example, periodic statistics can run at night.


The two biggest differences between InnoDB and MyISAM are: first, InnoDB supports transactions (TRANSACTION); second, it uses row-level locks.

There are many differences between row-level locks and table-level locks. In addition, the introduction of transactions also brings some new problems.

Transactions and their ACID properties

A transaction is a logical processing unit consisting of a set of SQL statements, and a transaction has 4 properties, commonly referred to as the ACID properties of a transaction.

  • Atomicity: a transaction is an atomic unit of work; either all of its changes to data are performed, or none are.
  • Consistency: data must be in a consistent state at both the start and the end of a transaction. This means all relevant data rules must be applied to the transaction's modifications to maintain integrity; at the end of the transaction, all internal data structures (such as B-tree indexes or doubly linked lists) must also be correct.
  • Isolation: the database provides an isolation mechanism ensuring that a transaction executes in an "independent" environment unaffected by concurrent operations. Intermediate states inside a transaction are invisible to the outside, and vice versa.
  • Durability: after a transaction completes, its modifications to the data are permanent, even in the event of a system failure.

Problems with concurrent transactions

Compared with serial processing, concurrent transaction processing can greatly increase the utilization of database resources and improve the transaction throughput of the database system, so that it can support more users. However, concurrent transaction processing will also bring some problems, mainly including the following situations.

  • Lost Update: when two or more transactions select the same row and then update it based on the originally selected value, the lost-update problem occurs: because each transaction is unaware of the others, the last update overwrites updates made by the other transactions. For example, two editors make electronic copies of the same document; each independently changes their copy and then saves it, overwriting the original. The editor who saves last overwrites the changes made by the other editor. This problem can be avoided if the second editor cannot access the file until the first editor completes and commits the transaction.
  • Dirty Reads: a transaction is modifying a record, and before it commits the record is in an inconsistent state; at this point another transaction reads the same record. Without any control, the second transaction reads this "dirty", uncommitted data and processes it further, creating a dependency on uncommitted data. This phenomenon is vividly called a "dirty read".
  • Non-Repeatable Reads: a transaction re-reads data it has previously read, only to find the data has changed, or that some records have been deleted! This phenomenon is called a "non-repeatable read".
  • Phantom Reads: a transaction re-runs a query with the same conditions as before, but finds that another transaction has inserted new rows satisfying those conditions. This phenomenon is called a "phantom read".
    • The root cause of phantom reads is that row locks can only lock existing rows; inserting a new record happens in the "gap" between records. To solve the phantom-read problem, InnoDB had to introduce a new kind of lock, the gap lock (Gap Lock), introduced below.

What locks does InnoDB have?

  1. Row locks
    1. Shared lock (SELECT ... LOCK IN SHARE MODE)
    2. Exclusive lock (SELECT ... FOR UPDATE)
  2. Intention locks (table level)
    1. Intention shared lock (IS)
    2. Intention exclusive lock (IX)
  3. Gap locks
  4. Next-key locks: row lock (exclusive lock) + gap lock

InnoDB row lock mode and locking method

InnoDB implements the following two types of row locks.

  • Shared lock (S): allows the holding transaction to read a row and prevents other transactions from acquiring an exclusive lock on the same data set. Taken with SELECT ... LOCK IN SHARE MODE.
  • Exclusive lock (X): allows the holding transaction to update data and prevents other transactions from acquiring shared or exclusive locks on the same data set. Taken with SELECT ... FOR UPDATE.

In addition, in order to allow row locks and table locks to coexist and implement a multi-granularity lock mechanism, InnoDB also has two internally used Intention Locks, both of which are table-level locks.

Intention shared lock (IS): the transaction intends to add shared locks to rows of a table; it must acquire the table's IS lock before adding a shared lock to a row.

Intention exclusive lock (IX): the transaction intends to add exclusive locks to rows of a table; it must acquire the table's IX lock before adding an exclusive lock to a row.

InnoDB lock mode compatibility (rows: current lock mode; columns: requested lock mode):

              X           IX          S           IS
  X       conflict    conflict    conflict    conflict
  IX      conflict    compatible  conflict    compatible
  S       conflict    conflict    compatible  compatible
  IS      conflict    compatible  compatible  compatible

If the lock mode requested by a transaction is compatible with the current lock, InnoDB grants the requested lock to the transaction; otherwise, if the two are not compatible, the transaction waits for the lock to be released.

Intention locks are added automatically by InnoDB without user intervention. For UPDATE, DELETE, and INSERT statements, InnoDB automatically adds exclusive (X) locks to the rows involved. Ordinary SELECT statements, by contrast, are non-locking consistent reads and add no locks (except at the SERIALIZABLE isolation level). A transaction can explicitly add a shared or exclusive lock to a record set with the following statements:

Shared lock (S): SELECT * FROM table_name WHERE … LOCK IN SHARE MODE
Exclusive lock (X): SELECT * FROM table_name WHERE … FOR UPDATE

Use SELECT ... LOCK IN SHARE MODE to obtain a shared lock, mainly to confirm whether a row exists when one piece of data depends on another, and to ensure no one performs an UPDATE or DELETE on that row in the meantime. However, if the current transaction also needs to update the record, this can easily cause deadlock; applications that need to update a row after locking it should use SELECT ... FOR UPDATE to obtain an exclusive lock instead.
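The deadlock risk described above can be sketched with two sessions (the accounts table is hypothetical):

```sql
-- Session 1: read the row and block concurrent UPDATE/DELETE.
BEGIN;
SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;

-- Session 2: a shared lock on the same row succeeds...
BEGIN;
SELECT balance FROM accounts WHERE id = 1 LOCK IN SHARE MODE;

-- ...but if both sessions now try to update, each waits for the
-- other's S lock to be released: a classic deadlock.
UPDATE accounts SET balance = balance - 10 WHERE id = 1;  -- both sessions

-- For read-then-update, take the exclusive lock up front instead:
SELECT balance FROM accounts WHERE id = 1 FOR UPDATE;
```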

InnoDB row lock implementation

InnoDB row locks are implemented by locking index entries on the index. In this respect MySQL differs from Oracle, which locks the corresponding rows in the data blocks. A consequence of InnoDB's implementation is that InnoDB uses row-level locks only when data is retrieved through an index; otherwise the effect is a table-wide lock (at the REPEATABLE READ / SERIALIZABLE levels, this takes the form of next-key locks (row lock + gap lock) on the primary key covering every row, effectively locking the table).

In practical applications, special attention should be paid to this feature of InnoDB row locks; otherwise a large number of lock conflicts may result, hurting concurrent performance.
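A sketch of this pitfall, using a hypothetical emp_demo table whose id is the primary key and whose name column has no index:

```sql
-- Retrieval through the index: only the matching row is locked.
SELECT * FROM emp_demo WHERE id = 3 FOR UPDATE;

-- No usable index: InnoDB scans all rows and, under REPEATABLE READ,
-- takes next-key locks on every row and gap -- effectively a table lock.
SELECT * FROM emp_demo WHERE name = 'alice' FOR UPDATE;
```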

In addition, in an InnoDB transaction, row locks are added when needed but are not released as soon as they are no longer needed; they are held until the transaction ends. This is the two-phase locking protocol.

Therefore, if a transaction needs to lock multiple rows, acquire the locks most likely to cause conflicts and hurt concurrency as late as possible.

Gap locks

When we retrieve data using a range condition rather than an equality condition and request shared or exclusive locks, InnoDB (at the REPEATABLE READ and SERIALIZABLE isolation levels only) locks the index entries of existing records that match the condition; for key values that fall within the condition's range but do not exist, the space between records is called a "gap" (GAP), and InnoDB locks these gaps as well. This locking mechanism is the gap lock, which is normally an open interval (x, y).

For example, if the emp table contains only 101 records, with empid values 1, 2, ..., 100, 101, then the following SQL:

SELECT * FROM emp WHERE empid > 100 FOR UPDATE;

is a range-condition retrieval. InnoDB locks not only the matching record with empid 101, but also the "gap" for empid values greater than 101 (records that do not exist).

InnoDB uses gap locks partly to prevent phantom reads and thus satisfy the relevant isolation levels: in the example above, without gap locks, another transaction could insert a record with empid greater than 100, and re-running the statement in this transaction would then produce a phantom read. The other reason is to meet the needs of recovery and replication.

Obviously, when range conditions are used to retrieve and lock records, InnoDB's locking mechanism blocks concurrent insertion of any key value within the range, which often causes severe lock waits. Therefore, in actual development, especially in applications with many concurrent inserts, we should optimize business logic to access and update data with equality conditions wherever possible and avoid range conditions.

Second, the existence of gap locks may lead to deadlocks, as follows:

Suppose id is the primary key; the table has records with id=5 and id=10, but no record with id=9.

  1. Session A executes a select ... for update statement. Since the row id=9 does not exist, a gap lock on (5, 10) is added. (The basic unit of locking is the next-key lock; because the record id=9 does not exist, the next-key lock degenerates into a gap lock.)

  2. Session B executes the same select ... for update statement and also acquires the gap lock (5, 10); gap locks do not conflict with each other, so this statement succeeds.

  3. Session B tries to insert the row (9, 9, 9) but is blocked by session A's gap lock, so it waits.

  4. Session A tries to insert the row (9, 9, 9) and is blocked by session B's gap lock: deadlock.

Note: gap locks in different sessions do not conflict with each other; what conflicts with a gap lock is the operation of inserting a record into that gap.
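The four steps above as a two-session sketch (table t with primary key id, and only rows id=5 and id=10 present):

```sql
-- Session A
BEGIN;
SELECT * FROM t WHERE id = 9 FOR UPDATE;   -- gap lock (5,10)

-- Session B
BEGIN;
SELECT * FROM t WHERE id = 9 FOR UPDATE;   -- gap lock (5,10), no conflict

-- Session B
INSERT INTO t VALUES (9, 9, 9);            -- blocked by A's gap lock

-- Session A
INSERT INTO t VALUES (9, 9, 9);            -- blocked by B's gap lock: deadlock
-- InnoDB detects the cycle and rolls one session back with ER_LOCK_DEADLOCK.
```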

Next-Key Lock

The next-key lock is InnoDB's basic unit of locking. It is a left-open, right-closed interval: a gap lock plus a row lock.

InnoDB locking rules

Two “principles”, two “optimizations” and a “bug”:

  • Principle 1: the basic unit of locking is the next-key lock, a left-open, right-closed interval.
  • Principle 2: only objects actually accessed during the search are locked.
  • Optimization 1: for an equality query on a unique index, when the lock hits an existing row, the next-key lock degenerates into a row lock.
  • Optimization 2: for an equality query on an index, when the scan traverses rightward and the last value does not satisfy the equality condition, the next-key lock degenerates into a gap lock.
  • One bug: a range query on a unique index keeps accessing (and locking) up to the first value that does not satisfy the condition.
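A sketch of the two optimizations, assuming a table t whose primary key id currently holds rows id=5 and id=10:

```sql
-- Optimization 1: equality on a unique index hitting an existing row --
-- the next-key lock (0,5] degenerates to a row lock on id=5 only.
SELECT * FROM t WHERE id = 5 FOR UPDATE;

-- Optimization 2: equality finding no row -- the scan stops at id=10,
-- which fails the condition, so the lock degenerates to gap lock (5,10).
SELECT * FROM t WHERE id = 7 FOR UPDATE;
```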

When to use table locks

For InnoDB tables, row-level locks should be used in most cases, because transactions and row locks are often the reasons why we choose InnoDB tables. However, in individual special transactions, table-level locks can also be considered.

  • First: the transaction needs to update most or all of the data in a relatively large table. With the default row locks, not only is the transaction slow to execute, it may also cause long lock waits and lock conflicts for other transactions; in this situation, consider table locks to speed the transaction up.
  • Second: the transaction involves multiple tables, is complicated, and is likely to trigger deadlocks, causing many transaction rollbacks. In this case, consider locking all tables involved in the transaction at once, thereby avoiding deadlock and reducing the overhead of transaction rollbacks.

Of course, such transactions should be rare in the application; otherwise, consider using MyISAM tables instead.

Under InnoDB, pay attention to the following two points when using table locks.

(1) Although LOCK TABLES can add table-level locks to InnoDB tables, these table locks are managed not by the InnoDB storage engine layer but by the MySQL Server layer above it. Only when autocommit=0 and innodb_table_locks=1 (the default) does the InnoDB layer know about table locks added by MySQL Server, and MySQL Server know about row locks added by InnoDB; only then can InnoDB automatically detect deadlocks involving table-level locks. Otherwise, InnoDB cannot detect or handle such deadlocks automatically.

(2) When using LOCK TABLES on InnoDB tables, remember to set AUTOCOMMIT to 0; otherwise MySQL will not actually lock the tables. Do not release the table locks with UNLOCK TABLES before the transaction ends, because UNLOCK TABLES implicitly commits the transaction; conversely, COMMIT or ROLLBACK does not release table-level locks added with LOCK TABLES. Therefore the correct order is to commit the transaction first and then release the locks with UNLOCK TABLES, as in the following statements.

SET AUTOCOMMIT=0;
LOCK TABLES t1 WRITE, t2 READ, ...;
[do something with tables t1 and t2 here];
COMMIT;
UNLOCK TABLES;

About deadlock

MyISAM table locks are deadlock free because MyISAM always acquires all the locks it needs at once; it is either fully satisfied or waits, so deadlock cannot occur. In InnoDB, by contrast, except for transactions consisting of a single SQL statement, locks are acquired gradually, which makes deadlock possible.

When a deadlock occurs, InnoDB can generally detect it automatically and make one transaction release its locks and roll back, so that the other can acquire the locks and complete. There are two handling strategies:

  1. Simply wait until the lock wait times out. The timeout can be set with the parameter innodb_lock_wait_timeout (default 50s).
    1. For online services, this wait is often unacceptable.
    2. If it is set to 1s, a deadlock is released very quickly, but an ordinary lock wait that is not a deadlock will cause many false positives.
  2. (Recommended) Active deadlock detection: when a deadlock is found, one transaction in the deadlock chain is actively rolled back so the others can continue. Setting the parameter innodb_deadlock_detect to on enables this logic.
    1. If many transactions all need to update the same row (a hot row), each newly blocked thread must check whether its joining creates a deadlock, an O(n) operation. With 1000 concurrent threads updating the same row, the deadlock-detection work is on the order of one million comparisons. Even though the final verdict is "no deadlock", the checks burn a great deal of CPU: you see high CPU utilization but few transactions per second.
      1. In this situation, if you can guarantee the workload never deadlocks, you can temporarily turn deadlock detection off (a stopgap that treats the symptom rather than the cause).
      2. Control the degree of concurrency. If only 10 threads update the same row at a time, deadlock detection is cheap and CPU usage stays low. This concurrency control is best done on the database server side or in middleware, not on the client, because there are usually many clients, connections, and threads. The general idea is to queue updates to the same row before they enter the engine, so that little deadlock-detection work happens inside InnoDB.
      3. Split one hot row into logically multiple rows to reduce lock conflicts.
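The two knobs discussed above (the values shown are the defaults):

```sql
-- Strategy 1: lock wait timeout, default 50 seconds.
SET GLOBAL innodb_lock_wait_timeout = 50;

-- Strategy 2: active deadlock detection, on by default.
SET GLOBAL innodb_deadlock_detect = ON;
```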

However, when external locks or table locks are involved, InnoDB cannot always detect deadlocks automatically; this must be handled by setting the lock wait timeout parameter innodb_lock_wait_timeout. Note that this parameter is not only for resolving deadlocks: under high concurrency, if a large number of transactions hang because a needed lock cannot be obtained immediately, they consume substantial resources and cause severe performance problems, possibly even bringing the database down. Setting an appropriate lock wait timeout threshold helps avoid this.

Generally speaking, deadlock is an application design problem, and most deadlocks can be avoided by adjusting the business process, database object design, transaction size, and the SQL statements that access the database. Below are several common ways to avoid deadlock.

(1) If different programs in the application access multiple tables concurrently, agreeing to access the tables in the same order greatly reduces the chance of deadlock. If two sessions access two tables in different orders, the chance of deadlock is high; if they access them in the same order, deadlock is avoided.

(2) When the program processes data in batches, if the data is sorted in advance to ensure that each thread processes records in a fixed order, the possibility of deadlock can also be greatly reduced.

(3) In a transaction, if you intend to update a record, request a lock of sufficient strength up front, i.e., an exclusive lock, instead of requesting a shared lock first and upgrading to an exclusive lock at update time, which can cause deadlock.

(4) Under the REPEATABLE-READ isolation level, if two threads simultaneously execute SELECT ... FOR UPDATE on the same condition and no record matches, both threads acquire their (gap) locks successfully. When each program then finds the record does not exist and tries to insert it, a deadlock occurs. In this case, changing the isolation level to READ COMMITTED avoids the problem.

(5) Under the READ COMMITTED isolation level, if both threads first execute SELECT ... FOR UPDATE, check whether a matching record exists, and insert it if not, only one thread's insert succeeds while the other waits for the lock. When the first thread commits, the second thread fails with a duplicate-key error; despite the error, it still holds an exclusive lock! If a third thread now requests an exclusive lock, a deadlock occurs. In this case, either perform the insert directly and catch the duplicate-key exception, or always execute ROLLBACK on a duplicate-key error to release the acquired exclusive lock.
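For case (5), a common alternative is to skip the locking read and resolve the race with the unique key itself (the counters table is hypothetical):

```sql
-- Insert directly; if the key already exists, update instead of erroring.
INSERT INTO counters (name, value)
VALUES ('page_hits', 1)
ON DUPLICATE KEY UPDATE value = value + 1;

-- Or: plain INSERT, catch the duplicate-key error (ER_DUP_ENTRY, 1062)
-- in the application, and ROLLBACK to release the exclusive lock it took.
```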

Although deadlocks can be greatly reduced through the above design and optimization measures, it is difficult to completely avoid deadlocks. Therefore, it is a good programming practice to always catch and handle deadlock exceptions in program design.

If a deadlock occurs, you can use the SHOW ENGINE INNODB STATUS command to determine the cause of the last deadlock and corrective measures.


CREATE TABLE `t` (
  `id` int(11) NOT NULL,
  `c` int(11) DEFAULT NULL,
  `d` int(11) DEFAULT NULL,
  PRIMARY KEY (`id`),
  KEY `c` (`c`)
) ENGINE=InnoDB;

insert into t values(0,0,0),(5,5,5),(10,10,10),(15,15,15),(20,20,20),(25,25,25);

Now execute the following statement: select * from t where d=5 for update. What locks does it trigger?

Under the REPEATABLE READ isolation level: since d has no index, the statement effectively locks the whole table. InnoDB's "table lock" here takes the form of next-key locks (primary-key row lock + gap lock): row locks are added to every record in the table, plus gap locks on the gaps between them, ensuring no new record can be inserted. During the row-by-row scan, not only is each row locked, but the gaps on both sides of each row are locked as well. Gap locks and row locks together are called next-key locks, and each next-key lock is a left-open, right-closed interval. After table t is initialized as above, select * from t for update locks every record in the table with 7 next-key locks: (-∞,0], (0,5], (5,10], (10,15], (15,20], (20,25], and (25, +supremum].

Under the READ COMMITTED isolation level: after the statement executes, only the rows satisfying d=5 retain row locks.

For more examples, please refer to: [Original] Surprise! The most complete select lock analysis in history (Mysql) – Lonely Smoke – Blog Park


For MyISAM table locks, there are mainly the following points

(1) Shared read locks (S) are compatible with each other, but shared read locks (S) and exclusive write locks (X) are mutually exclusive, as are exclusive write locks with each other; that is to say, reads and writes are serial.

(2) Under certain conditions, MyISAM allows concurrent execution of query and insert, we can use this to solve the lock contention problem for the same table and insert in the application.

(3) The default lock scheduling mechanism of MyISAM is write priority, which is not necessarily suitable for all applications. Users can adjust the contention of read-write locks by setting the LOW_PRIORITY_UPDATES parameter, or specifying the LOW_PRIORITY option in INSERT, UPDATE, and DELETE statements.

(4) Due to the large locking granularity of the table lock and the serial read and write operations, if there are many update operations, the MyISAM table may have serious lock waiting, and InnoDB tables can be considered to reduce lock conflicts.

For InnoDB tables, there are mainly the following points

(1) InnoDB's row locking is based on indexes. If data is accessed without going through an index, InnoDB will lock the whole table.

(2) InnoDB has a gap lock mechanism, and uses gap locks for the reasons described above.

(3) Under different isolation levels, InnoDB’s lock mechanism and consistent read strategy are different.

(4) The recovery and replication of MySQL also have a great impact on the InnoDB lock mechanism and consistent read strategy.

(5) It is difficult to completely avoid lock conflicts and even deadlocks.

After understanding the locking features of InnoDB, users can reduce lock conflicts and deadlocks through design and SQL tuning, including:

  • Try to use a lower isolation level
  • Carefully design the index, and try to use the index to access data to make locking more precise, thereby reducing the chance of lock conflicts.
  • Choose a reasonable transaction size, and small transactions have less chance of lock conflicts.
  • When explicitly locking a record set, it is best to request a sufficient level of lock at once: if you intend to modify data, request an exclusive lock directly rather than taking a shared lock first and upgrading when modifying, which is prone to deadlock.
  • When different programs access a set of tables, they should try to agree to access the tables in the same order. For a table, access the rows in the table in a fixed order as much as possible. This greatly reduces the chance of deadlocks.
  • Try to access data with equality conditions to avoid the impact of gap locks on concurrent insertions.
  • Do not request stronger lock levels than actually required; do not lock explicitly during queries unless necessary.
  • For some specific transactions, table locks can be used to improve processing speed or reduce the possibility of deadlocks.

Reprinted from: Locks in MySQL (table locks, row locks) – Snow Mountain Flying Pig – Blog Park

The main structure of this article follows the one above, but more than 40% of the content is my own additional summary. You are welcome to read it carefully.

