
MySQL Query Optimization

Best Practices to Improve Performance
MAY 15, 2025 • WRITTEN BY ROMAN AGABEKOV
Nothing kills a great app faster than slow database queries. Teams often optimize everything else – frontend code, server architecture, caching layers – only to discover their database queries are the actual bottleneck. Your users won't notice all the clever code you've written if they're staring at loading spinners while queries crawl. Today, we’ll cover practical methods to turn slow SQL statements into snappy responses.

1. Finding Your Bottlenecks

Before you try to fix anything, you need to know what's actually limiting your database. MySQL includes several effective diagnostic tools for this purpose.

Understand How Queries Execute

The EXPLAIN command shows you exactly how MySQL handles your queries. Add it before any SELECT statement to see which indexes are in use and how many rows are being examined:
EXPLAIN
SELECT p.post_id, p.title, u.username
FROM posts p
INNER JOIN users u ON p.author_id = u.user_id
WHERE p.created_at >= CURDATE() - INTERVAL 7 DAY
  AND u.account_status = 'active'
ORDER BY p.created_at DESC
LIMIT 15;
When looking at EXPLAIN results, pay particular attention to these fields:

  • type: This shows the join method being used. Aim for ref, range, or ideally const.
  • key: Shows which index MySQL chooses to use. If you see NULL, then no index is being used.
  • rows: Estimated number of rows to be examined. Lower is better.
  • Extra: Watch out for "Using temporary" or "Using filesort", which indicate additional processing steps.
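
If you're on MySQL 8.0.18 or newer, EXPLAIN ANALYZE goes a step further: it actually executes the query and reports measured row counts and timings alongside the estimated plan:
EXPLAIN ANALYZE
SELECT p.post_id, p.title
FROM posts p
WHERE p.created_at >= CURDATE() - INTERVAL 7 DAY;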

Checking Server Status

The SHOW STATUS command provides valuable server metrics, useful for tracking down performance issues:
SHOW STATUS LIKE 'Handler%'; -- row access counters; a high Handler_read_rnd_next points to full table scans
SHOW STATUS LIKE 'Created%'; -- temp table counters; Created_tmp_disk_tables shows temp tables spilling to disk

The Slow Query Log

The slow query log captures queries that run longer than your defined threshold:
SET GLOBAL slow_query_log = 'ON';
SET GLOBAL long_query_time = 1; -- log queries taking more than 1 second
-- Note: SET GLOBAL changes revert at server restart; add them to your
-- config file (or use SET PERSIST on MySQL 8.0+) to make them permanent
Find your log file location with:
SHOW VARIABLES LIKE 'slow_query_log_file';
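
A complementary approach that stays inside the server: the performance_schema digest table aggregates every statement by its normalized text. A sketch for surfacing the most expensive patterns (timer columns are in picoseconds, hence the division):
SELECT DIGEST_TEXT,
       COUNT_STAR AS executions,
       ROUND(SUM_TIMER_WAIT / 1e12, 2) AS total_seconds
FROM performance_schema.events_statements_summary_by_digest
ORDER BY SUM_TIMER_WAIT DESC
LIMIT 10;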

2. Making SELECT Queries Faster

After you’ve found some queries that need attention, here’s how to speed them up:

Stop Using SELECT *

Using SELECT * is convenient but wasteful. It pulls back all columns when you might only need a few. It’s an incredibly common mistake when writing queries. This practice:

  • Increases network traffic
  • Consumes more memory
  • May prevent the use of covering indexes, because every column must be fetched from the table instead of the index alone
This is the suboptimal method:
SELECT * FROM user_activity WHERE event_type = 'login';
And this is the better way to use SELECT, where the columns are specified:
SELECT user_id, event_time, ip_address
FROM user_activity
WHERE event_type = 'login';

Filter efficiently with indexed columns

When you filter data with a WHERE clause, the database needs to locate the matching records. How it finds these records depends greatly on proper indexing. Consider these two queries:

If there’s no index on action_type, the database has to check every row. For tables with millions of log entries, this can take seconds:
SELECT log_id, user_id
FROM audit_logs
WHERE action_type = 'delete_instance';
By contrast, log_id is the primary key, so it's automatically indexed. The database can quickly locate just the records in the specified range, and this query might execute hundreds or thousands of times faster:
SELECT log_id, user_id
FROM audit_logs
WHERE log_id BETWEEN 500000 AND 510000;
For better filtering, focus on these principles:

  • Lead composite indexes with your most selective columns.
  • Don’t wrap indexed columns in functions (see the example after this list).
  • Watch out for OR conditions, which may prevent index usage.
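
To illustrate the function-wrapping pitfall, suppose audit_logs also has an indexed event_time column (hypothetical here). Wrapping it in DATE() hides the index, while an equivalent range predicate keeps it usable:
-- Defeats the index: every row must be read so DATE() can be applied
SELECT log_id, user_id
FROM audit_logs
WHERE DATE(event_time) = '2025-05-01';

-- Index-friendly rewrite: the same rows, expressed as a range
SELECT log_id, user_id
FROM audit_logs
WHERE event_time >= '2025-05-01'
  AND event_time < '2025-05-02';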

Make ORDER BY, GROUP BY, and LIMIT Work Together

Sorting and grouping can be deceptively expensive. Behind the scenes, your database may spin up temp tables, especially if the query lacks helpful indexes. This can trigger a full table scan and turn these operations into a crawl.

The example below chains together multiple operations, which can kill performance:
SELECT user_id, COUNT(*) AS login_count
FROM activity_logs
WHERE event_type = 'login'
  AND event_time >= CURDATE() - INTERVAL 30 DAY
GROUP BY user_id
ORDER BY login_count DESC
LIMIT 20;
To make this faster:

  • Reduce reliance on temporary tables by creating indexes on the columns used in your WHERE and GROUP BY clauses (a sketch follows this list).
  • Pair LIMIT with ORDER BY so MySQL can stop scanning early once it finds enough rows. Remember that LIMIT without ORDER BY returns results in an unpredictable order, which makes pagination unreliable.
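
For the login-count query above, one plausible composite index (a sketch: the order puts the equality filter first, the range filter second, and the grouped column last so the query can be served from the index alone):
CREATE INDEX idx_logins_user ON activity_logs (event_type, event_time, user_id);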

3. Smart Index Strategies

Indexes can make queries fly, but they must be designed and implemented correctly because they add overhead to write operations. Use indexes where they’ll deliver the most value:

  • Columns commonly filtered with WHERE conditions
  • Fields used to connect tables via joins
  • Columns used in ORDER BY or GROUP BY
  • High-volume tables, where full scans would be expensive
To view your existing indexes:
SHOW INDEX FROM table_name;

Tips for choosing the right index type

  • B-Tree - The standard index type. Good for equality, range queries, and sorting.

  • Hash - Best for exact matches (using =), but can't handle ranges or sorting. In MySQL, explicit hash indexes are mainly a feature of the MEMORY engine; InnoDB and MyISAM use B-Tree.

  • Fulltext - Specialized for searching large text fields. Supports word-based matching, boolean operators, and relevance ranking.

More Complex Indexes

Multi-column indexes can give more complex queries a big boost, but column order matters tremendously. This type of index only kicks in when the query filters on the first column in the index. You can use additional columns, but only in the same left-to-right order they were defined in.

For example, you cannot filter efficiently by state alone with this index:
CREATE INDEX idx_location ON customers (city, state);
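To make the leftmost-prefix rule concrete, here is which queries that index can and cannot serve:
-- Can use idx_location (the leftmost column is present):
SELECT * FROM customers WHERE city = 'Portland';
SELECT * FROM customers WHERE city = 'Portland' AND state = 'OR';

-- Cannot use idx_location (the leftmost column is skipped):
SELECT * FROM customers WHERE state = 'OR';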
Covering indexes never need to touch the underlying table: since all the required columns are already present in the index, MySQL skips the expensive step of fetching rows from the table. The query below could be served entirely by such an index:
SELECT session_id, browser, last_activity
FROM user_sessions
WHERE user_id = 8675309
ORDER BY last_activity DESC
LIMIT 5;
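A covering index for that query might look like this (a sketch: user_id first for the equality filter, last_activity second for the sort, and the remaining selected columns appended only to make the index covering):
CREATE INDEX idx_sessions_cover
ON user_sessions (user_id, last_activity, session_id, browser);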

4. Making Text Searches Quicker

How snappy a LIKE text search is hinges on where you place the wildcard (%). When a pattern starts with a literal string and puts the wildcard at the end (prefix matching), standard indexes can be used:
SELECT * FROM books WHERE title LIKE 'Harry Potter%';
MySQL can quickly locate entries starting with "Harry Potter" and then scan sequentially from that point in the index.

However, indexes become ineffective when the wildcard appears at the beginning or in the middle of the pattern:

SELECT * FROM books WHERE title LIKE '%Chamber of Secrets';
This pattern disables the use of indexes and forces a full table scan.

Tips for using indexed searches with LIKE

  1. Keep index size under control with prefix indexes for just the first few characters. For instance, CREATE INDEX idx_product_name_prefix ON products (product_name(10));.
  2. Use case-insensitive collations instead of LOWER() or UPPER() functions, which prevent index usage.

Accelerating search performance through Full-Text Indexes

For complex text search needs beyond what LIKE can efficiently handle, MySQL offers FULLTEXT indexes that enable powerful word-based searches rather than simple pattern matching. With a FULLTEXT index, you can perform natural language searches that understand relevance, or boolean searches with precise control using operators like "+" (must contain) and "-" (must not contain).
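
As a minimal sketch using the books table from the previous section (a book_id column is assumed), a FULLTEXT index and a boolean-mode search look like this:
CREATE FULLTEXT INDEX idx_books_title ON books (title);

SELECT book_id, title
FROM books
WHERE MATCH(title) AGAINST('+Chamber +Secrets' IN BOOLEAN MODE);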

For very large datasets or complex search requirements, consider dedicated search solutions like Elasticsearch, Sphinx, or Apache Solr. These tools integrate well with MySQL and offer advanced features like faceting, relevance scoring, and fuzzy matching.

5. Tuning Joins and Subqueries

JOIN operations and subqueries are two different approaches to relating data across tables. As a general guideline, use JOINs when pulling data from more than one table and subqueries for filtering or when the subquery returns a small result set. You may need to test both approaches to see which works better.

JOIN:
SELECT e.event_id, v.venue_name
FROM events e
JOIN venues v ON e.venue_id = v.venue_id
WHERE e.event_date >= CURDATE();
SUBQUERY:
SELECT article_id
FROM articles
WHERE author_id IN (
  SELECT id FROM authors WHERE verified = 1
);

Making JOIN Operations Faster

JOIN performance can fall off a cliff when indexes are missing on the columns being matched. Since primary keys are automatically indexed, your attention should be on foreign keys and any other columns that don’t already have an index.

Once you have created the necessary indexes, a JOIN across multiple tables can run efficiently:

SELECT r.report_id, s.system_name, a.alert_type
FROM incident_reports r
JOIN monitored_systems s ON r.system_id = s.system_id
JOIN alert_history a ON a.report_id = r.report_id
WHERE s.location = 'datacenter-west' AND a.severity = 'high';
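
The supporting indexes might look like the following (a sketch based only on the columns in this query; the primary keys report_id and system_id on their own tables are already indexed):
CREATE INDEX idx_reports_system ON incident_reports (system_id);
CREATE INDEX idx_alerts_report ON alert_history (report_id, severity);
CREATE INDEX idx_systems_location ON monitored_systems (location);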

Choosing Between IN and EXISTS conditions

When filtering with subqueries, you can choose between the SQL operations IN and EXISTS:

EXISTS is all about efficiency. It checks if a match exists, without pulling a full set of results. As soon as it finds a qualifying row, it stops looking. That makes it especially useful when you’re filtering rows from a large table based on whether related data exists elsewhere.
SELECT file_id, file_name
FROM uploads u
WHERE EXISTS (
  SELECT 1
  FROM antivirus_scans a
  WHERE a.upload_id = u.file_id
    AND a.status = 'clean'
);
IN works by building a list of values from the subquery, then checking each row of the outer query against that list. It’s a better option when the subquery returns a small set of unique values that can be cached and reused for each outer-row evaluation.
SELECT student_id, full_name
FROM students
WHERE student_id IN (
  SELECT student_id
  FROM scholarships
  WHERE year_awarded = 2024
);

6. Caching Strategies to Cut Response Times

For MySQL 5.7 and earlier, the query cache acted like a shortcut for repeated queries. Instead of running the same SQL again and again, the results could be grabbed straight from memory. This worked great for scenarios where the same queries run repeatedly with no data changes in between them. It’s less useful in write-focused environments because even a small data update could invalidate large chunks of the cache.

Note: The query cache was deprecated in MySQL 5.7 and removed in MySQL 8.0. If you're using an older version, the settings below may still apply.

To configure the query cache:
SET GLOBAL query_cache_type = 1;
SET GLOBAL query_cache_size = 67108864; -- 64MB
You can check how well your query cache is performing with:
SHOW STATUS LIKE 'Qcache%';
This command returns several important metrics. Look for a high ratio of Qcache_hits to total queries, which indicates the cache is working well. If you see frequent Qcache_lowmem_prunes, then you should increase your query_cache_size.
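
To put those numbers side by side, you can pull the relevant counters in one statement; the hit ratio is Qcache_hits / (Qcache_hits + Com_select), since Com_select only counts queries that were not served from the cache:
SHOW GLOBAL STATUS WHERE Variable_name IN
  ('Qcache_hits', 'Com_select', 'Qcache_lowmem_prunes');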

If you're using MySQL 8.0+, you'll need to rely on application-level caching solutions like Redis or Memcached instead:

Redis is an in-memory data structure store that can hold various data types and offers persistence and replication features. It's ideal for complex caching needs and can store strings, lists, sets, and more. Explore Redis’s documentation for more information on how to get started with caching.

Memcached is a distributed cache focused on simplicity and speed. It's excellent for storing simpler key-value pairs across more than one server. Visit Memcached's official documentation to learn more about configuration and implementation.

Reducing redundant queries with prepared statements

With a prepared statement, the SQL is parsed, validated, and optimized just once. You can then run the same query many times, with different values, and the repeated setup steps are skipped.

Check out MySQL’s documentation on prepared statements to get started.
To implement prepared statements in your application, you'll need to:

  1. Create a statement template with placeholders (usually ? or :name) instead of literal values
  2. Prepare this template on the server, which parses and optimizes it once
  3. Execute the prepared statement multiple times with different parameter values
  4. Close the prepared statement when finished to free resources

Most modern database libraries and frameworks provide built-in support for prepared statements with simple APIs that handle the details for you.
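
At the SQL level, the same lifecycle looks like this (MySQL's PREPARE/EXECUTE syntax; the table and values reuse the earlier user_activity example):
PREPARE login_events FROM
  'SELECT user_id, event_time, ip_address
   FROM user_activity
   WHERE event_type = ?';

SET @type = 'login';
EXECUTE login_events USING @type;  -- runs with event_type = 'login'

SET @type = 'logout';
EXECUTE login_events USING @type;  -- same parsed statement, different value

DEALLOCATE PREPARE login_events;   -- free server-side resources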

7. Tuning MySQL Configuration for Performance

Even the most finely tuned queries can hit a wall if the underlying server isn’t configured properly. Key configuration parameters to consider include:

  • key_buffer_size: Controls how much memory is used to cache indexes (MyISAM). Start with 256MB and check how often keys are being read from disk versus memory.

  • innodb_buffer_pool_size: The main memory pool for InnoDB data and index caching, and the single most important tuning knob if you use InnoDB. Allocate somewhere between half and two-thirds of your server’s total RAM.

  • tmp_table_size and max_heap_table_size: These set the upper limit for temporary tables created in memory during query processing. Set both to the same value (64MB is a good starting point).

  • innodb_log_file_size: Controls the size of InnoDB transaction logs. Larger values help with performance for write-heavy workloads but increase recovery time after crashes.

  • max_connections: Limits concurrent connections. Set based on your traffic and resource limits.

  • thread_cache_size: How many threads the server caches for reuse. Setting this appropriately reduces connection overhead.
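
Pulled together as a my.cnf sketch (values are illustrative starting points, not recommendations; size them to your hardware and workload):
[mysqld]
innodb_buffer_pool_size = 8G    # roughly half to two-thirds of RAM on a dedicated server
tmp_table_size          = 64M
max_heap_table_size     = 64M   # keep in sync with tmp_table_size
innodb_log_file_size    = 512M  # larger helps write-heavy loads, lengthens crash recovery
max_connections         = 200
thread_cache_size       = 16
key_buffer_size         = 256M  # only relevant for MyISAM tables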

8. Automating Query Optimization and Performance Tuning

While manual optimization is valuable, it's not sustainable as tables grow and workloads evolve. Smart teams implement automation to stay ahead of issues. Some tooling options include:

  • MySQLTuner - a script that reviews your configuration and usage statistics and suggests tuning adjustments
  • pt-query-digest - part of Percona Toolkit; aggregates and ranks queries from the slow query log
  • MySQL Workbench - offers visual EXPLAIN plans and performance dashboards

How Releem Helps Optimize MySQL Queries Automatically

Releem transforms MySQL query optimization into an automated workflow. Each week, Releem intelligently analyzes your top 100 queries and top 100 slowest queries, scanning for patterns that are dragging down performance.

The Releem workflow is built for busy teams. You can connect your MySQL database through a secure, lightweight agent, then review automatically generated optimization suggestions in the Releem dashboard:

  • Smart index recommendations based on actual execution patterns. You’ll see exactly which indexes to target and why. Copy the generated CREATE INDEX statements directly to your database.

  • Performance impact reports show exactly how much each recommended change has improved performance.

  • Email notifications alert you when new optimization opportunities are identified.

Performance can be your competitive advantage

Database optimization has a reputation for complexity that can be intimidating. Many developers view it as a specialized discipline requiring deep expertise, when in reality, much of it comes down to applying the basics consistently and systematically.

The methods we've detailed in this article are practical ways to get real results. But queries aren’t something you fix once and forget. As your app scales and SQL statement patterns shift, optimizations that once worked perfectly may no longer be sufficient. This is where automated monitoring and optimization tools become invaluable, helping you stay ahead of issues rather than constantly reacting to them.

Want to stop spending your nights and weekends tuning queries? We built Releem to handle the heavy lifting of MySQL optimization so you don't have to. Give it a try for free!