Google Lighthouse for Measuring Mobile Site Speed

Lighthouse is a tool created by Google, originally meant to audit Progressive Web Apps (PWA). In general, the tool performs four audits: accessibility, performance, progressive web apps, and an extended list of best practices. Google Lighthouse simulates your mobile site visitors by loading your pages over a flaky 3G connection on a slightly underpowered device. Its main purpose is to help you increase site speed by measuring page speed from a different, more realistic angle. This makes Lighthouse highly recommended for any web developer or SEO service aiming for a faster-loading mobile site. However, some people are more familiar with PageSpeed than with Google Lighthouse, so what's the difference between PageSpeed Insights and Google Lighthouse?

PageSpeed Insights vs. Google Lighthouse

Right now, PageSpeed Insights is still the most popular analysis tool: it provides you with a nice score and a list of possible improvements. However, it hardly gives you an idea of the perceived loading speed of your site; it only states that your site doesn't follow the rules and assumes it is therefore slow for everyone. Here are the two most important things PageSpeed Insights looks at:

  • Time to above-the-fold load: This is the time that it takes to fully render the above-the-fold content of a page from the moment a user requests your page.
  • Time to full page load: This is the time that it takes to fully render the complete content of a page from the moment a user requests your page.

Lighthouse, on the other hand, takes a practical approach and puts user experience front and center. Because it visits your site over a throttled 3G connection, it can predict what a real visitor would experience. Moreover, where PageSpeed Insights just loads your site, Lighthouse also checks how and when the page responds to input, so it can find the exact moment when your content is ready to use.
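Lighthouse runs from Chrome DevTools, but it is also available as a command-line tool via npm if you want to script the audit yourself. A minimal sketch (exact flags can differ slightly between Lighthouse versions):

npm install -g lighthouse

lighthouse https://example.com --view

The --view flag opens the generated HTML report in your browser once the audit finishes; you can also save the raw results for later analysis with --output=json --output-path=./report.json.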

What to Look for in Lighthouse Results?

The important point is that your site must be fast and must feel fast to users. In other words, users should be able to interact with your content as soon as possible. This is also important for your SEO, so you need to fix the issues Lighthouse finds. Lighthouse uses several metrics, as follows:

  • First meaningful paint: This measures how long it takes for the primary content of a page to appear on screen. The lower the score, the faster the page appears.
  • First interactive: This measures when a page first becomes minimally interactive.
  • Consistently interactive: This measures when a page is fully interactive.
  • Perceptual Speed Index: This shows how quickly the contents of a page are visibly populated, with a target loading time of less than 1,250 ms.
  • Estimated input latency: This measures how long it takes your page to respond to user input.
  • Critical request chains: This shows which resources are needed to initially render the page.

Even though Lighthouse is not as popular as PageSpeed Insights, its functionality cannot be ignored. It is a tool worth trying for anyone who wants to analyze their site speed: it is more fine-grained and gives you immediate feedback based on real-world usage.

How to Create a Design that Satisfies


Believe it or not, working as a designer or web designer requires you to know your users and motivate them. This may sound more like psychology than design, but it is actually important for designers to have a little knowledge of psychology, especially about motivation and how to apply it in their designs: by using motivation well in your design, you give users strong reasons to take action and buy your product. Let's take a closer look at how motivation can affect your design.

What is Motivation?

Motivation is what encourages people: the motives, needs, or wants that drive behavior and explain why people do something. Motivation matters in design, too; it helps designers see the most direct way to make a product correspond to users' expectations and solve their problems. However, before your product can help users, you first have to create a product that motivates users to try it.

There are two types of motivation: extrinsic motivation and intrinsic motivation.

Extrinsic motivation

These motives come from outer sources, such as family, professional environment, competitions, contests, etc. In most cases, people with extrinsic motivation are seeking a reward: money, prizes, diplomas, certificates, trophies, medals, praise, support, recognition, or the desire to compete with others. As designers, we can discover users' extrinsic motives through user research. Once we know their extrinsic motivation, we can create a design that stimulates it.

Intrinsic motivation

Intrinsic motivations, by contrast, come from inner sources, such as the need for self-improvement. This type of motivation is stronger than extrinsic motivation; if you can tap into it, it becomes a significant factor in retaining users. Therefore, it is important to get to know the target audience during the user research stage; that way, designers can discover what their motives are and what kind of designs will work for those specific users.

Knowing the types of motivation will surely help designers create designs that attract and serve users best. In fact, this knowledge is necessary for UI/UX designers for several reasons, such as the following:

  • Building navigation and call-to-action elements that truly engage users and motivate them to take action.
  • Designing better layouts that can demonstrate key benefits or rewards.
  • Creating a process that can motivate users to try the product and test its functions.
  • Presenting aesthetic satisfaction so that users feel comfortable with your product.
  • Providing the copy that can stimulate users through describing the benefits and achievements of your product.
  • Encouraging users to share their experience via various social networks; this can be a powerful extrinsic motive for other people nowadays.

Motivation, then, is tremendously important in design: by understanding users' motivation, you can build the right design and reach many more users. So don't forget to analyze your users' motivation before designing for them.

 

Techniques to Optimize MySQL: Indexes, Slow Queries, Configuration


Many web developers see MySQL as the world's most popular relational database. However, in many installations you will find that large parts of it haven't been optimized; instead of investigating further, many people simply leave everything at default values. The techniques presented below combine earlier tips with newer methods that have come out since:

Configuration Optimization

One of the most important things every MySQL user should do is upgrade the configuration; MySQL 5.7 has better defaults than its previous versions. If you use a Linux-based host, your configuration file will typically be at /etc/mysql/my.cnf. Your installation might also load a secondary configuration file into that one, so if the my.cnf file doesn't contain much content, the file /etc/mysql/mysql.conf.d/mysqld.cnf might.
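On Debian/Ubuntu-style installations, the main file usually just pulls in those secondary files with include directives. A sketch of what my.cnf may look like (the exact paths vary by distribution):

!includedir /etc/mysql/conf.d/
!includedir /etc/mysql/mysql.conf.d/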

Editing Configuration

Before learning how to edit the configuration, it is important to be comfortable with the command line. If you're editing locally on a Vagrant box, you can copy the file out to the main filesystem by copying it into the shared folder with cp /etc/mysql/my.cnf /home/vagrant/Code, edit it with a regular text editor, and copy it back into place when done. Otherwise, you can edit it directly with a simple text editor such as vim by executing sudo vim /etc/mysql/my.cnf.

Manual Tweaks

In this config file, under the [mysqld] section, you should make the following manual tweaks right out of the box:

innodb_buffer_pool_size = 1G # (adjust value here, 50%-70% of total RAM)

innodb_log_file_size = 256M

innodb_flush_log_at_trx_commit = 1 # may change to 2 or 0

innodb_flush_method = O_DIRECT

  • innodb_buffer_pool_size: The buffer pool is where InnoDB caches data and indexes in memory, so frequently accessed data can be served without hitting the disk. When you're running a dedicated or virtual server where the database is often the bottleneck, it makes sense to give this part of your app(s) the most RAM, up to 70% of all RAM.

  • innodb_log_file_size: The important point here is how much data can be stored in a log before a checkpoint has to be made. With MySQL, writes happen in the background, but they still affect foreground performance. Bigger log files mean better performance because fewer checkpoints are created, but recovery takes longer when there is a crash.
  • innodb_flush_log_at_trx_commit: This controls what happens with the log file. The value 1 is the safest setting, because the log is flushed to disk after every transaction; with 0 or 2 it's less ACID, but more performant. In this case, the difference isn't big enough to outweigh the stability benefits of the setting of 1.
  • innodb_flush_method: This is set to O_DIRECT to avoid double buffering. You should always do this unless the I/O system performs very poorly. Once MySQL has been restarted with the new configuration, you can verify the applied values with the queries shown after this list.
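A quick check from the mysql client confirms that the new values are actually in effect after the restart:

SHOW VARIABLES LIKE 'innodb_buffer_pool_size';
SHOW VARIABLES LIKE 'innodb_flush_method';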

Variable Inspector

Here are the steps to install the variable inspector on Ubuntu:

wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo apt-get update
sudo apt-get install percona-toolkit

 

You can also apply the instructions for other systems.

Then, run the toolkit with:

pt-variable-advisor h=localhost,u=homestead,p=secret

The output will likely include warnings and notes like these:

# WARN delay_key_write: MyISAM index blocks are never flushed until necessary.
# NOTE max_binlog_size: The max_binlog_size is smaller than the default of 1GB.
# NOTE sort_buffer_size-1: The sort_buffer_size variable should generally be left at its default unless an expert determines it is necessary to change it.
# NOTE innodb_data_file_path: Auto-extending InnoDB files can consume a lot of disk space that is very difficult to reclaim later.
# WARN log_bin: Binary logging is disabled, so point-in-time recovery and replication are not possible.

You don't have to fix any of these, as none of them are critical. The only one we could add is binary logging, for replication and snapshot purposes:

max_binlog_size = 1G
log_bin = /var/log/mysql/mysql-bin.log
server-id=master-01
binlog-format = 'ROW'

  • The max_binlog_size setting determines how large the binary logs will be. These logs record your transactions and queries and make checkpoints. If a transaction is bigger than the maximum, a log may grow beyond that limit; otherwise, MySQL will keep them at that size.
  • The log_bin option turns on binary logging altogether; without it, you can't do snapshotting or replication. Note that this can be very strenuous on disk space. When activating binary logging, you also need a server ID, which tells the logs which server they came from (see the quick check after this list).
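Once binary logging is configured and MySQL has been restarted, you can confirm that it is active and list the generated log files from the mysql client:

SHOW VARIABLES LIKE 'log_bin';
SHOW BINARY LOGS;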

With its sane defaults, the new MySQL is nearly production ready out of the box. Every app is different, of course, and may have additional custom tweaks that apply.

MySQL Tuner

The main purpose of MySQL Tuner is to monitor a database over longer intervals and suggest changes based on what it has seen in the logs.

You can install it by simply downloading it:

wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
chmod +x mysqltuner.pl

When you run it with ./mysqltuner.pl, you will be asked for the admin username and password for the database, and the tool will then output information from a quick scan. You can see an example below.

[] InnoDB is enabled.
[] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 1.0G/11.2M
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (50 %): 256.0M * 2/1.0G should be equal 25%
[!!] InnoDB buffer pool <= 1G and Innodb_buffer_pool_instances(!=1).
[] Number of InnoDB Buffer Pool Chunk : 8 for 8 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 96.65% (19146 hits/ 19809 total)
[!!] InnoDB Write Log efficiency: 83.88% (640 hits/ 763 total)
[OK] InnoDB log waits: 0.00% (0 waits / 123 writes)

 

Keep in mind that this tool should be run about once per week, once the server has been running for a while. You can also set up a cronjob to send you the results periodically. Also, make sure that after every configuration change you restart the mysql server:

sudo service mysql restart

Indexes

The easiest way to understand MySQL indexes is to think of the index in a book. When a book has an index, you don't have to go through the whole book to find a subject; the index helps you find it much faster. In the same way, MySQL indexes speed up your SELECT queries. However, an index also has to be created and stored, which makes UPDATE and INSERT queries slower and costs a bit more disk space. In general, you won't notice the difference with updating and inserting if you have indexed your table correctly, so it's advisable to add indexes in the right locations.

If a table only contains a few rows, it doesn't really benefit from indexing. So how can we discover which indexes to add, and which types of indexes exist?
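One quick way to see whether a given query would benefit from an index is EXPLAIN. A minimal sketch, assuming a hypothetical users table with a username column:

EXPLAIN SELECT * FROM `users` WHERE `username` = 'john';

If the key column in the output is NULL, MySQL isn't using any index for that query, which makes the column a candidate for one of the index types below.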

Unique/Primary Indexes

Primary indexes are the main indexes of data: for a user account, that might be a user ID, a username, or even the main email. Primary indexes are unique, meaning the indexed value cannot be repeated in a set of data.

For example, once a user has selected a specific username, nobody else should be able to use it. Adding a "unique" index to the username column solves this: MySQL will complain if someone else tries to insert a row with a username that already exists.

ALTER TABLE `users` ADD UNIQUE INDEX `username` (`username`);

 

You can create unique indexes on both single columns and multiple columns. For example, you may need a unique index on both the username and country columns to make sure a username can only be taken once per country.

ALTER TABLE `users` ADD UNIQUE INDEX `usercountry` (`username`, `country`);

 

Regular Indexes

Regular indexes are the easiest type to look data up with. They are very useful when you need to find data by a specific column or combination of columns fast, without needing the data to be unique.

ALTER TABLE `users` ADD INDEX `usercountry` (`username`, `country`);

 

Fulltext Indexes

If you need full-text searches, you can use FULLTEXT indexes. Only the InnoDB and MyISAM storage engines support FULLTEXT indexes, and only for CHAR, VARCHAR, and TEXT columns.

You will find these indexes very useful for all kinds of text searching. Finding words inside bodies of text is FULLTEXT's specialty, so you can use it on posts, comments, descriptions, reviews, and so on.
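As a sketch, assuming a hypothetical posts table with a body column, creating and querying a full-text index looks like this:

ALTER TABLE `posts` ADD FULLTEXT INDEX `ft_body` (`body`);

SELECT * FROM `posts` WHERE MATCH(`body`) AGAINST('mysql performance' IN NATURAL LANGUAGE MODE);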

Descending Indexes

Descending indexes are an addition in version 8+. When you have enormous tables that keep growing, you will find this kind of index handy. Previously, sorting in descending order was possible but came at a small performance penalty; a descending index speeds it up.

CREATE TABLE t (
  c1 INT, c2 INT,
  INDEX idx1 (c1 ASC, c2 ASC),
  INDEX idx2 (c1 ASC, c2 DESC),
  INDEX idx3 (c1 DESC, c2 ASC),
  INDEX idx4 (c1 DESC, c2 DESC)
);

 

Furthermore, when dealing with logs written to the database, or with posts and comments that are read newest to oldest, and similar cases, consider applying DESC to an index.
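For example, a feed that always reads the newest entries first can be served straight from a descending index in MySQL 8+; the posts table and created column below are just illustrative:

ALTER TABLE `posts` ADD INDEX `idx_created_desc` (`created` DESC);

SELECT * FROM `posts` ORDER BY `created` DESC LIMIT 10;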

Bottlenecks

This part explains how to detect and monitor bottlenecks in a database.

slow_query_log = /var/log/mysql/mysql-slow.log
long_query_time = 1
log-queries-not-using-indexes = 1

Adding the above settings to the configuration makes MySQL log slow queries and queries not using indexes. Once this log has some data, you can analyze it for index usage with the pt-index-usage tool from the Percona Toolkit installed earlier, or apply the pt-query-digest tool, whose results look like this:

pt-query-digest /var/log/mysql/mysql-slow.log

# 360ms user time, 20ms system time, 24.66M rss, 92.02M vsz
# Current date: Thu Feb 13 22:39:29 2014
# Hostname: *
# Files: mysql-slow.log
# Overall: 8 total, 6 unique, 1.14 QPS, 0.00x concurrency ________________
# Time range: 2014-02-13 22:23:52 to 22:23:59
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time            3ms   267us   406us   343us   403us    39us   348us
# Lock time          827us    88us   125us   103us   119us    12us    98us
# Rows sent             36       1      15    4.50   14.52    4.18    3.89
# Rows examine          87       4      30   10.88   28.75    7.37    7.70
# Query size         2.15k     153     296  245.11  284.79   48.90  258.32
# ==== ================== ============= ===== ====== ===== ===============
# Profile
# Rank Query ID           Response time Calls R/Call V/M   Item
# ==== ================== ============= ===== ====== ===== ===============
#    1 0x728E539F7617C14D  0.0011 41.0%     3 0.0004  0.00 SELECT blog_article
#    2 0x1290EEE0B201F3FF  0.0003 12.8%     1 0.0003  0.00 SELECT portfolio_item
#    3 0x31DE4535BDBFA465  0.0003 12.6%     1 0.0003  0.00 SELECT portfolio_item
#    4 0xF14E15D0F47A5742  0.0003 12.1%     1 0.0003  0.00 SELECT portfolio_category
#    5 0x8F848005A09C9588  0.0003 11.8%     1 0.0003  0.00 SELECT blog_category
#    6 0x55F49C753CA2ED64  0.0003  9.7%     1 0.0003  0.00 SELECT blog_article
# ==== ================== ============= ===== ====== ===== ===============
# Query 1: 0 QPS, 0x concurrency, ID 0x728E539F7617C14D at byte 736 ______
# Scores: V/M = 0.00
# Time range: all events occurred at 2014-02-13 22:23:52
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         37       3
# Exec time     40     1ms   352us   406us   375us   403us    22us   366us
# Lock time     42   351us   103us   125us   117us   119us     9us   119us
# Rows sent     25       9       1       4       3    3.89    1.37    3.89
# Rows examine  24      21       5       8       7    7.70    1.29    7.70
# Query size    47   1.02k     261     262  261.25  258.32       0  258.32
# String:
# Hosts        localhost
# Users        *
# Query_time distribution
#   1us
#  10us
# 100us  ################################################################
#   1ms
#  10ms
# 100ms
#    1s
#  10s+
# Tables
#    SHOW TABLE STATUS LIKE 'blog_article'\G
#    SHOW CREATE TABLE `blog_article`\G
# EXPLAIN /*!50100 PARTITIONS*/
SELECT b0_.id AS id0, b0_.slug AS slug1, b0_.title AS title2, b0_.excerpt AS excerpt3, b0_.external_link AS external_link4, b0_.description AS description5, b0_.created AS created6, b0_.updated AS updated7 FROM blog_article b0_ ORDER BY b0_.created DESC LIMIT 10

 

You can also analyze these logs by hand, but first you have to export the log into a more "analyzable" format, which can be done like this:

mysqldumpslow /var/log/mysql/mysql-slow.log

You can add extra parameters to filter the data and make sure only the important things are exported. For example, here are the top 10 queries sorted by average execution time:

mysqldumpslow -t 10 -s at /var/log/mysql/localhost-slow.log
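mysqldumpslow accepts other sorting and filtering options as well; for instance, -s c sorts by the number of occurrences and -g filters entries by a grep-style pattern (the pattern below is just an example):

mysqldumpslow -s c -t 10 /var/log/mysql/localhost-slow.log

mysqldumpslow -g 'blog_article' /var/log/mysql/localhost-slow.log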

Summary

The techniques above are meant to make MySQL fly. So when you have to deal with configuration optimization, indexes, and bottlenecks, don't hesitate to apply them.