Techniques to Optimize MySQL: Indexes, Slow Queries, Configuration


Many web developers regard MySQL as the world’s most popular relational database. Yet many installations are left running with parts that were never optimized — rather than investigating further, most people simply keep the default values. The solution is to combine long-established tips with newer methods that have come out since, as presented below:

Configuration Optimization

One of the most important things every MySQL user should do is upgrade to version 5.7, which has better defaults than its predecessors. If you use a Linux-based host, your configuration file will typically be at /etc/mysql/my.cnf. Your installation might load a secondary configuration file into it, so if the my.cnf file doesn’t contain much content, check whether /etc/mysql/mysql.conf.d/mysqld.cnf does.

Editing Configuration

Before learning how to edit the configuration, it’s important to be comfortable using the command line. If you’re editing locally on a Vagrant box, you can copy the file out into the main filesystem by copying it into the shared folder with cp /etc/mysql/my.cnf /home/vagrant/Code, edit it with a regular text editor, and copy it back into place when done. Otherwise, use a simple terminal editor, for instance vim, by executing sudo vim /etc/mysql/my.cnf.

Manual Tweaks

Out of the box, you should make the following manual tweaks. Add them to the config file under the [mysqld] section:

innodb_buffer_pool_size = 1G # (adjust value here, 50%-70% of total RAM)

innodb_log_file_size = 256M

innodb_flush_log_at_trx_commit = 1 # may change to 2 or 0

innodb_flush_method = O_DIRECT

  • innodb_buffer_pool_size

The buffer pool is where data and indexes are cached in memory. It keeps frequently accessed data readily available, so when you’re running a dedicated or virtual server where the database is often the bottleneck, it makes sense to give this part of your app(s) the most RAM — up to 70% of the total.

  • innodb_log_file_size: plenty of detailed information about the log file size is available elsewhere, but the important point is how much data to store in a log before wiping it. A log in this case holds the data between checkpoints, because with MySQL, writes happen in the background but still affect foreground performance. Big log files mean better performance, because fewer new and smaller checkpoints are created, but longer recovery time when there is a crash.
  • innodb_flush_log_at_trx_commit determines what happens with the log file. 1 is the safest setting, because the log is flushed to disk after every transaction. With 0 or 2 it’s less ACID, but more performant. In this case, the difference isn’t big enough to outweigh the stability benefits of the setting of 1.
  • innodb_flush_method is set to O_DIRECT to avoid double buffering. You should always do this, unless the I/O system has very low performance.
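After restarting the server, you can confirm the live values from the MySQL client; and as of MySQL 5.7.5, the buffer pool size can even be resized at runtime without a restart. A minimal sketch:

SHOW VARIABLES LIKE 'innodb_flush_method';
SELECT @@innodb_buffer_pool_size / 1024 / 1024 AS buffer_pool_mb;
SET GLOBAL innodb_buffer_pool_size = 1073741824; -- 1G, in bytes; dynamic as of MySQL 5.7.5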

Variable Inspector

Here are the steps to install the variable inspector on Ubuntu:

wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo apt-get update
sudo apt-get install percona-toolkit

For other systems, follow Percona’s installation instructions.

Then, run the toolkit with:

pt-variable-advisor h=localhost,u=homestead,p=secret

In our case, the output showed these notes and warnings:

# WARN delay_key_write: MyISAM index blocks are never flushed until necessary.
# NOTE max_binlog_size: The max_binlog_size is smaller than the default of 1GB.
# NOTE sort_buffer_size-1: The sort_buffer_size variable should generally be left at its default unless an expert determines it is necessary to change it.
# NOTE innodb_data_file_path: Auto-extending InnoDB files can consume a lot of disk space that is very difficult to reclaim later.
# WARN log_bin: Binary logging is disabled, so point-in-time recovery and replication are not possible.

None of these are critical, so you don’t have to fix them. The only one we might add is binary logging for replication and snapshot purposes:

max_binlog_size = 1G
log_bin = /var/log/mysql/mysql-bin.log
server-id = master-01
binlog-format = 'ROW'

  • The max_binlog_size setting determines how large binary logs will be. These logs record your transactions and queries and make checkpoints. If a transaction is bigger than the maximum, a log may end up bigger than max; otherwise, MySQL keeps them at that limit.
  • The log_bin option turns on binary logging altogether. Without it, there’s no snapshotting or replication. Note that this can be very strenuous on disk space. Activating binary logging also requires a server ID, which informs the logs which server they came from. Once enabled, you can confirm it’s working, as shown below.
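A quick sanity check after restarting — both statements are standard MySQL:

SHOW VARIABLES LIKE 'log_bin';
SHOW BINARY LOGS;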

With its sane defaults, the new MySQL is nearly production ready out of the box. Of course, every app is different and will have additional custom tweaks that apply.

MySQL Tuner

The main purpose of MySQL Tuner is to monitor a database at longer intervals and suggest changes based on what it has seen in the logs.

To install it, simply download the script and make it executable:

wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
chmod +x mysqltuner.pl

Running it with ./mysqltuner.pl will ask you for the admin username and password for the database, and then output information from a quick scan. Here’s the InnoDB section as an example:

[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 1.0G/11.2M
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (50 %): 256.0M * 2/1.0G should be equal 25%
[!!] InnoDB buffer pool <= 1G and Innodb_buffer_pool_instances(!=1).
[--] Number of InnoDB Buffer Pool Chunk : 8 for 8 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 96.65% (19146 hits/ 19809 total)
[!!] InnoDB Write Log efficiency: 83.88% (640 hits/ 763 total)
[OK] InnoDB log waits: 0.00% (0 waits / 123 writes)

Keep in mind that this tool should be run about once per week, once the server has been running for a while. You can also set up a cronjob to send you the results periodically — a sketch follows below. Make sure you restart the mysql server after every configuration change:

sudo service mysql restart
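For the periodic cronjob mentioned above, a hypothetical crontab entry could look like the following — the schedule, script path, credentials, and email address are all placeholder assumptions (--user, --pass, and --nocolor are MySQLTuner options):

0 6 * * 1 /home/vagrant/mysqltuner.pl --user homestead --pass secret --nocolor 2>&1 | mail -s "Weekly MySQLTuner report" admin@example.com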

Indexes

The easiest way to understand MySQL indexes is to look at the index of a book: when a book has an index, you don’t have to go through the whole book to find a subject — the index helps you find it much faster. MySQL indexes do the same, speeding up your SELECT queries. However, indexes also have to be created and stored, which makes UPDATE and INSERT queries slower and costs a bit more disk space. In general, you won’t notice the difference with updating and inserting if you’ve indexed your table correctly, so it’s advisable to add indexes at the right locations.

Tables that only contain a few rows don’t really benefit from indexing. So how do we discover which indexes to add, and which types of indexes exist?
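Before adding any, it helps to see what a table already has. For the users table used in the examples below, that’s:

SHOW INDEX FROM `users`;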

Unique/Primary Indexes

Primary indexes are the main indexes of data — the primary way of identifying a record, such as a user account’s ID, username, or even main email. Primary indexes are unique: the indexed value cannot be repeated in a set of data.

For example, once a user has selected a specific username, nobody else should be able to take it. Adding a "unique" index to the username column solves this, and MySQL will complain if someone else tries to insert a row with a username that already exists:

ALTER TABLE `users` ADD UNIQUE INDEX `username` (`username`);
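With that index in place, a repeated insert fails with a duplicate-key error — the table and values here are illustrative, and the key name in the message follows the index name above:

INSERT INTO `users` (`username`) VALUES ('bruno'); -- succeeds
INSERT INTO `users` (`username`) VALUES ('bruno');
-- ERROR 1062 (23000): Duplicate entry 'bruno' for key 'username'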


You can make unique indexes on a single column or across multiple columns. For example, to make sure a given username can only be taken once per country, you can put a unique index on both of those columns:

ALTER TABLE `users`
ADD UNIQUE INDEX `usercountry` (`username`, `country`);

Regular Indexes

Regular indexes ease lookup. They are very useful when you need to find data by a specific column or combination of columns fast, but that data doesn’t need to be unique:

ALTER TABLE `users`
ADD INDEX `usercountry` (`username`, `country`);
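You can verify that a query actually uses the new index with EXPLAIN; the key column of its output should show usercountry (users table as above):

EXPLAIN SELECT * FROM `users` WHERE `username` = 'bruno' AND `country` = 'HR';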


Fulltext Indexes

FULLTEXT indexes are used for full-text searches. Only the InnoDB and MyISAM storage engines support FULLTEXT indexes, and only for CHAR, VARCHAR, and TEXT columns.

These indexes are very useful for all the text searching you might need to do. Finding words inside bodies of text is FULLTEXT’s specialty, so use it on posts, comments, descriptions, reviews, etc.
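A minimal sketch, assuming a hypothetical posts table with a TEXT column named body:

ALTER TABLE `posts` ADD FULLTEXT INDEX `body_ft` (`body`);

SELECT * FROM `posts` WHERE MATCH(`body`) AGAINST('mysql indexes' IN NATURAL LANGUAGE MODE);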

Descending Indexes

Descending indexes are an addition in version 8+. When you have enormous tables that frequently need the most recently added data first, this index type comes in handy. Sorting in descending order was always possible, but came at a small performance penalty; an index that is pre-sorted descending speeds things up.

CREATE TABLE t (
  c1 INT, c2 INT,
  INDEX idx1 (c1 ASC, c2 ASC),
  INDEX idx2 (c1 ASC, c2 DESC),
  INDEX idx3 (c1 DESC, c2 ASC),
  INDEX idx4 (c1 DESC, c2 DESC)
);

Consider applying DESC to an index when dealing with logs written in a database, posts and comments which are loaded last-to-first, and similar.
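For instance, with the table above, a newest-first query can be served straight from the matching pre-sorted index (idx4) without an extra sort pass:

SELECT * FROM t ORDER BY c1 DESC, c2 DESC LIMIT 10;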

Bottlenecks

This part explains how to detect and monitor bottlenecks in a database. To start, add the following to the configuration:

slow_query_log = /var/log/mysql/mysql-slow.log
long_query_time = 1
log-queries-not-using-indexes = 1

With the above in place, the server will log queries that take longer than one second, as well as those not using indexes.
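Once this log has some data, you can analyze it for index usage with the pt-index-usage tool. As a sketch, it can be pointed straight at the slow log; the connection options mirror the placeholder credentials used with pt-variable-advisor earlier:

pt-index-usage /var/log/mysql/mysql-slow.log --host localhost --user homestead --password secret

You can also apply the pt-query-digest tool, whose results look like this: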

pt-query-digest /var/log/mysql/mysql-slow.log

# 360ms user time, 20ms system time, 24.66M rss, 92.02M vsz
# Current date: Thu Feb 13 22:39:29 2014
# Hostname: *
# Files: mysql-slow.log
# Overall: 8 total, 6 unique, 1.14 QPS, 0.00x concurrency ________________
# Time range: 2014-02-13 22:23:52 to 22:23:59
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time            3ms   267us   406us   343us   403us    39us   348us
# Lock time          827us    88us   125us   103us   119us    12us    98us
# Rows sent             36       1      15    4.50   14.52    4.18    3.89
# Rows examine          87       4      30   10.88   28.75    7.37    7.70
# Query size         2.15k     153     296  245.11  284.79   48.90  258.32
# ==== ================== ============= ===== ====== ===== ===============
# Profile
# Rank Query ID           Response time Calls R/Call V/M   Item
# ==== ================== ============= ===== ====== ===== ===============
#    1 0x728E539F7617C14D  0.0011 41.0%     3 0.0004  0.00 SELECT blog_article
#    2 0x1290EEE0B201F3FF  0.0003 12.8%     1 0.0003  0.00 SELECT portfolio_item
#    3 0x31DE4535BDBFA465  0.0003 12.6%     1 0.0003  0.00 SELECT portfolio_item
#    4 0xF14E15D0F47A5742  0.0003 12.1%     1 0.0003  0.00 SELECT portfolio_category
#    5 0x8F848005A09C9588  0.0003 11.8%     1 0.0003  0.00 SELECT blog_category
#    6 0x55F49C753CA2ED64  0.0003  9.7%     1 0.0003  0.00 SELECT blog_article
# ==== ================== ============= ===== ====== ===== ===============
# Query 1: 0 QPS, 0x concurrency, ID 0x728E539F7617C14D at byte 736 ______
# Scores: V/M = 0.00
# Time range: all events occurred at 2014-02-13 22:23:52
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         37       3
# Exec time     40     1ms   352us   406us   375us   403us    22us   366us
# Lock time     42   351us   103us   125us   117us   119us     9us   119us
# Rows sent     25       9       1       4       3    3.89    1.37    3.89
# Rows examine  24      21       5       8       7    7.70    1.29    7.70
# Query size    47   1.02k     261     262  261.25  258.32       0  258.32
# String:
# Hosts        localhost
# Users        *
# Query_time distribution
#   1us
#  10us
# 100us  ################################################################
#   1ms
#  10ms
# 100ms
#    1s
#  10s+
# Tables
#    SHOW TABLE STATUS LIKE 'blog_article'\G
#    SHOW CREATE TABLE `blog_article`\G
# EXPLAIN /*!50100 PARTITIONS*/
SELECT b0_.id AS id0, b0_.slug AS slug1, b0_.title AS title2, b0_.excerpt AS excerpt3, b0_.external_link AS external_link4, b0_.description AS description5, b0_.created AS created6, b0_.updated AS updated7 FROM blog_article b0_ ORDER BY b0_.created DESC LIMIT 10

You can also analyze these logs by hand, but first you have to export the log into a more "analyzable" format, which can be done like this:

mysqldumpslow /var/log/mysql/mysql-slow.log

You can pass additional parameters to filter the data and make sure only important things are exported. For example: the top 10 queries sorted by average execution time.

mysqldumpslow -t 10 -s at /var/log/mysql/localhost-slow.log

Summary

The techniques above can make MySQL fly. So when you have to deal with configuration optimization, indexes, and bottlenecks, don’t hesitate to apply them.

Smart Ways to Measure the Value of Your Content


Many sources have discussed the importance of high-quality content to the success of an SEO campaign, but how do you know whether your content carries enough value to support yours? Below are several ways to determine whether your content already delivers great value for your search engine optimisation. Check them out!

  1. Run an NPS Survey

The Net Promoter Score (NPS) is a metric commonly used to gauge the loyalty between a brand and its customers. An NPS survey is also a great method to measure the value your blog delivers to readers. The score is derived from a single question: how likely is it that you would recommend our blog to a friend or colleague?

Respondents to the question are then grouped as follows:

  • Promoters (score 9-10) are loyal enthusiasts who will keep buying and refer others.
  • Passives (score 7-8) are satisfied but unenthusiastic customers who are vulnerable to competitive offerings.
  • Detractors (score 0-6) are unhappy customers who can damage your brand and impede your growth by spreading negative comments.
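The score itself is the percentage of promoters minus the percentage of detractors. As a minimal sketch — assuming a hypothetical nps_responses table holding one 0-10 score per respondent — it could be computed in SQL:

SELECT
  100 * SUM(score >= 9) / COUNT(*)   -- % promoters
  - 100 * SUM(score <= 6) / COUNT(*) -- % detractors
  AS nps
FROM nps_responses;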

The main reason to run an NPS survey on your blog is that it lets you understand how many of your readers truly appreciate your content and whether they are willing to share it.

You can find many great tools to help you run an NPS survey on your blog, such as Promoter.io, SurveyMonkey, Delighted, and Qualaroo. To get a broad range of responses, be mindful of reaching all kinds of readers — make sure the survey goes out to every possible segment of your audience.

  2. Pay Attention to the Comments

Thanks to the rise of social media, there are now a lot of easy ways to engage with your readers through social networks like Facebook, LinkedIn, and Twitter:

  • They can share a link to your post on Twitter, Facebook (or any network of their choice)
  • They can interact with a post where you’ve shared a link back to the blog (favoriting a tweet, sending a reply, liking on Facebook)
  • They can retweet your tweet sharing the post or share your Facebook post
  • And so on.

With all these options and ways to interact with content, the highest form of engagement is a comment on the post itself: readers who take the time to respond directly within the post are showing real investment, since nobody bothers to reply to a post that doesn’t inspire them.

  3. Monitor Mentions and Shares

The clearest indicator of how your content is performing is the number of mentions and shares it receives. You can see whether your content delivers good value by watching the shares it collects and the mentions you receive via social media tools. Many social media tools and sharing plugins — such as SumoMe, Social Warfare, PostReach, and Buzzsumo — can analyze how many mentions and shares you get, but nothing comes for free.

3 Ways to Optimize Images for Better Performance


Speed plays an increasingly important role on the web, so making sure your site performs quickly is key to keeping your traffic thriving. One way to increase your site’s speed is to optimize images for better performance. There are many ways to do this, but here are three common ones:

1. Correct Dimensions

One common mistake web developers make is using images with the wrong dimensions, which usually happens when displaying a thumbnail for a larger image that lives elsewhere:

<img src="threepwood-retina-2x-large.jpg" width="140" height="84">

Avoid loading a larger image and setting its dimensions in code, since it makes your page size explode — especially when you consider how many thumbnails many pages have on them. Instead, ensure that the dimensions of the actual image file and the area in which it will be displayed are the same:

<img src="threepwood-140x84.jpg" width="140" height="84">

Responsive images are a different story, of course, but the same principle applies: don’t, for example, serve desktop retina images scaled down on mobile. There are services that make that process much simpler.
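If you need to produce the correctly sized file in the first place, ImageMagick’s convert (used again later in this article) can generate it — a quick sketch using the filenames from the example above:

convert threepwood-retina-2x-large.jpg -resize 140x84 threepwood-140x84.jpg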

2. GIFs and PNGs

PNG was developed as an improved version of GIF; the two serve the same practical purpose — including transparency — except that PNG lacks GIF’s animation, and it comes with the benefit of smaller file sizes. Depending on your preference and how many GIF images you have, you can either re-export them from your graphics app as PNGs, or use convert from ImageMagick to batch-process them on the command line, as sketched below. Either way, remember to run the new images through a lossless optimizer.
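A sketch of the batch route — mogrify is ImageMagick’s batch companion to convert, and optipng is one common lossless optimizer (the tool choice here is an assumption, not a recommendation from the original text):

mogrify -format png *.gif
optipng *.png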

3. Progressive JPEGs

JPEG has two main loading strategies: ‘baseline’ loads the image from the top row of pixels down to the bottom, while ‘progressive’ first loads a complete but extremely pixelated image and then gradually sharpens it. The trade-off here is not so much file size as perceived speed.

To turn input.jpg into a progressive JPEG called output.jpg, you can use convert from ImageMagick:

convert -strip -interlace Plane -quality 80 input.jpg output.jpg

If you want to overwrite the input file, you can simply pass it as the output argument as well.
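To check whether the conversion worked, ImageMagick’s identify reports the interlace mode — "None" means baseline, while any other value means the file loads progressively:

identify -verbose output.jpg | grep Interlace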