3 Techniques that Minimize Downtime

Because users expect a stable and reliable service, many web developers and system administrators try to build infrastructures that are more reliable and minimize downtime. In fact, minimizing downtime is essential for increasing customer satisfaction and decreasing support requests. Below, we look at three areas that are crucial when it comes to downtime, and we offer some improvements you can apply to each. Check this out!

  1. Monitoring and Alerts

Nothing works better than properly monitoring your infrastructure: it lets you discover issues before they grow and affect your customers. Monitoring infrastructure also aggregates and retains a record of stats such as application performance metrics and system resource utilization. The main purpose is to look for anything out of the ordinary.

Usually, a monitoring client is installed on each host; it collects metrics and reports back to a central server. The metrics are stored in a database and become available for services like searching, alerting, and graphing. Fortunately, there is software that can help you monitor your infrastructure, such as:

  • Graphite

Graphite provides an API supported by dozens of applications and programming languages. Metrics are pushed to, stored in, and graphed by the central Graphite installation (a minimal push example follows this list).

  • Prometheus

Prometheus, by contrast, pulls data from a variety of official and community-supported clients. It has a built-in alerting system, is highly scalable, and comes with client libraries for several programming languages.
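For illustration, here is a minimal sketch of pushing a single metric to Graphite over its plaintext protocol. The hostname and metric name are placeholders, and a Carbon listener is assumed on the default port 2003:

# send "metric-path value unix-timestamp" to Carbon's plaintext listener
echo "web01.cpu.load 1.25 $(date +%s)" | nc graphite.example.com 2003

Likewise, assuming a Prometheus server on its default port 9090, you can query its HTTP API for the built-in up metric to see which scrape targets are healthy:

# returns 1 for targets that are up, 0 for those that are down
curl -s 'http://localhost:9090/api/v1/query?query=up'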

  2. Software Deployment Improvement

Believe it or not, your software deployment strategy is one area that plays an important role in your downtime. Unfortunately, many people overlook it.

Bear in mind that a complex deployment process makes the production environment lag behind the development environment. This leads to risky software releases: each deployment becomes a much larger set of changes, which naturally brings a much higher risk of problems arising. No wonder this process can easily lead to numerous bugs that slow down development and cause resources to be unavailable.

Therefore, the best solution for this situation is some up-front planning. To keep your production environment in sync with your development environment, figure out a strategy that lets you automate code integration, testing, deployment, and the overall workflow.

Here are some best practices for continuous integration and delivery (CI/CD) and software testing that help you start automating deployments:

  • Maintaining a Single Repository

Maintain a single repository to make sure that every person on the development team works on the same code and can test their changes easily.

  • Automating Testing and Build Processes

Don’t forget to automate your builds and testing, as this simplifies deployment into an environment similar to the final use case. You will also find it helpful when debugging platform-specific issues (a minimal pipeline sketch follows).
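As a minimal sketch of what such automation can look like, here is a shell pipeline; the make targets are placeholders for whatever build, test, and deploy commands your project uses:

#!/bin/sh
set -e         # abort on the first failing step
make build     # compile/assemble the application (placeholder target)
make test      # run the automated test suite; a failure stops the pipeline here
make deploy    # only reached when the build and all tests pass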

  3. Implementing High Availability

Another strategy you can apply to minimize downtime is to use the concept of high availability in your infrastructure, which encompasses the principles used in designing resilient and redundant systems.

In this case, the system should be able to detect and analyze its own health; it has to know precisely where an error is located. Furthermore, the system must be able to redirect traffic away from the failing component, which minimizes downtime by reducing interruption.

To upgrade to a highly available infrastructure, you have to move from a single server to multiple web servers fronted by a load balancer. The load balancer performs regular health checks on the web servers and routes traffic away from those that are failing.
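Conceptually, a health check is just a periodic probe. Here is a hedged shell sketch (the hostnames and the /health endpoint are placeholders; real load balancers such as HAProxy or nginx do this internally):

# probe each backend and report which ones should stay in rotation
for host in web01 web02 web03; do
  if curl -fsS --max-time 2 "http://$host/health" > /dev/null; then
    echo "$host: healthy"
  else
    echo "$host: failing - remove from rotation"
  fi
done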

Moreover, you can add resilience and redundancy at the database layer using database replication; different database models come with different configurations of replication. Group replication is arguably the most interesting one, as it allows both read and write operations on a redundant cluster of servers. Failing servers can be detected and traffic routed away from them to prevent downtime.
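For instance, if you happen to run MySQL Group Replication (an assumption for this example, not a requirement), you can inspect the health of the cluster members with:

-- list each group member and its state (ONLINE, RECOVERING, ERROR, ...)
SELECT member_host, member_state
FROM performance_schema.replication_group_members;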

In conclusion, these are three areas that can lead you to less downtime. If you truly pay attention to them, you will have happier clients, and of course that will lead you to more revenue.

8 Brilliant Tips for Developing Applications for the Internet of Things

Before we go deeper, it is imperative to understand what the Internet of Things (IoT) actually is. Basically, it is a platform that enables networks, smart devices, and users to connect to the internet via definite identifiers. In general, IoT relies on embedded technology to communicate with external environments. No wonder it is seen as one of the revolutionary mobile app development trends of this generation, as it can manage everything from an array of cohesive events to a number of interconnected devices.

In fact, many have predicted that 2 billion devices will be connected to IoT by 2020, and that $6 trillion will be invested in IoT over the next five years. This shows that the Internet of Things has influenced a diverse set of industries, including healthcare, entertainment and gaming, automotive, home automation, and logistics. This is why IoT has great potential in the future.

The Three Pillars of IoT

There are three major pillars to note regarding the overall structure of the Internet of Things:

  • Network: The network performs a function similar to what a router does, connecting devices to the cloud. The infrastructure stationed at data centers receives the information; the things provide the data stream and manage it, while software helps organize the things.
  • The things themselves: a device acts as an internet gateway that lets other devices communicate through a single protocol or many. In fact, you often won’t see a screen once the device is connected to the network.
  • The cloud: a server that primarily aims to secure users’ confidential data. When users reach a critical juncture, the data still gets processed, while the processing of a program usually happens during the concluding stages.

Tips for Developing Applications for IoT

Now that you have a better understanding of IoT, you may have lots of ideas for Internet of Things apps in mind. However, before you create one, here are some factors to take into consideration:

  1. Choose an Appropriate and Convenient Platform

The first step every developer should take is to select the appropriate platform for the development process. One thing is for sure: the platform should support IoT applications and their components. Some IoT-proven platforms that offer the scope to design best-in-class apps are Ubidots, Xively, and ThingWorx. With the help of these platforms, you don’t have to start anything from scratch.

  2. Consider the Industry for the IoT Application

As stated above, the Internet of Things has much widened and extended its services, but there are still some fields that aren’t connected with it. Therefore, you have to discover the set of industries that can be optimally connected, such as healthcare, transportation, energy resources, sports, manufacturing, etc. For instance, with an IoT application, people can easily find transportation, such as connecting buses or trains.

  3. Segregate Services from the API Interface

Bear in mind that you need to separate the services from the API interface while you are developing apps for IoT. This ensures your app runs smoothly both on mobile and on the desktop web. You will surely get better opportunities once you manage your IoT applications well.

  4. IoT Data Must Be Strongly Secured

Every application developer knows how important a strongly secured environment is, especially for protecting IoT data from physical attacks. In fact, security becomes even more important when it comes to building GPS networks or banking apps.

  5. The Different Levels of IoT Apps

In order to understand the system and function of IoT applications, you have to know their various levels. Basically, there are four layers: the devices, the ingestion tier, the analytics area, and the end user.

First of all, you have to consider the devices that you will be connecting. Then comes the ingestion tier: the infrastructure or software that receives the data and organizes it. The next layer is all about the data itself, which is processed in the analytics area. The last layer is the end users for whom the app is being developed.

  6. Keep an Eye on IoT Device Firmware Security

What differentiates the Internet of Things from traditional web and mobile apps is its hardware, which is always suspected of having security issues in the firmware. This is why firmware updates must be authenticated and signed before they are applied.

  7. Do Not Compromise on Speed and Quality

You cannot compromise on speed and quality at any cost when creating an application for the IoT. Therefore, you have to focus on transforming ideas into practice and providing a stable working prototype.

  8. Ensure Scalability of the Application

It is important to build scalable applications: IoT is still a new concept, yet many believe it is going to get bigger than ever in the time to come. Good scalability will allow your app to remain in the spotlight for a long period of time.

In conclusion

The Internet of Things is still considered a newcomer in the technology arena, but we will soon see it expand and reach new heights. It will surely help people access information easily and get connected to each other at a low price. On the other hand, it is also a challenge for developers, as it differs from other, conventional methods.

Techniques to Optimize MySQL: Indexes, Slow Queries, Configuration

Many web developers see MySQL as the world’s most popular relational database. Yet in many installations, large parts of it are left unoptimized; instead of investigating further, many people simply leave everything at the default values. Below, we combine long-standing tips with newer methods that have come out since.

Configuration Optimization

One of the most important things every MySQL user should do is upgrade the configuration. MySQL 5.7 ships with better defaults than its previous versions. If you use a Linux-based host, your configuration will live in /etc/mysql/my.cnf. Your installation might load a secondary configuration file into that configuration file, so if the my.cnf file doesn’t contain much content, the file /etc/mysql/mysql.conf.d/mysqld.cnf might.

Editing Configuration

Before learning how to edit the configuration, it is important to feel comfortable using the command line. For example, if you’re editing locally on a Vagrant box, you can copy the file out into the shared folder with cp /etc/mysql/my.cnf /home/vagrant/Code, edit it with a regular text editor, and copy it back into place when done. Alternatively, you can use a simple text editor such as vim by executing sudo vim /etc/mysql/my.cnf.

Manual Tweaks

In the config file, under the [mysqld] section, make the following manual tweaks out of the box:

innodb_buffer_pool_size = 1G # (adjust value here, 50%-70% of total RAM)
innodb_log_file_size = 256M
innodb_flush_log_at_trx_commit = 1 # may change to 2 or 0
innodb_flush_method = O_DIRECT

  • innodb_buffer_pool_size

The buffer pool stores cached data and indexes in memory, keeping frequently accessed data readily available. When you’re running a dedicated or virtual server where the DB is often the bottleneck, it makes sense to give this part of your app(s) the most RAM, up to 70% of the total.

  • innodb_log_file_size: the important point here is how much data to store in a log before wiping it. A log in this case holds checkpoint data, because with MySQL, even though writes happen in the background, they still affect foreground performance. Bigger log files mean better performance, due to fewer and larger checkpoints being created, but also a longer recovery time when there is a crash.
  • innodb_flush_log_at_trx_commit determines what happens with the log file. 1 is the safest setting, because the log is flushed to disk after every transaction. 0 or 2 mean less ACID compliance but more performance. In this case, the difference is not big enough to outweigh the stability benefits of the setting of 1.
  • innodb_flush_method is set to O_DIRECT to avoid double buffering. You should always do this, unless the I/O system has very low performance (a quick runtime check follows this list).
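After restarting MySQL, a quick way to confirm the live values (the variable names are exactly those used in the config above) is:

SHOW VARIABLES WHERE Variable_name IN
  ('innodb_buffer_pool_size', 'innodb_log_file_size',
   'innodb_flush_log_at_trx_commit', 'innodb_flush_method');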

Variable Inspector

Here are the steps to install the variable inspector on Ubuntu:

wget https://repo.percona.com/apt/percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo dpkg -i percona-release_0.1-4.$(lsb_release -sc)_all.deb
sudo apt-get update
sudo apt-get install percona-toolkit

You can also apply the instructions for other systems.

Then, run the toolkit with:

pt-variable-advisor h=localhost,u=homestead,p=secret

Ideally, the output won’t show anything critical; on our setup, it showed these:

# WARN delay_key_write: MyISAM index blocks are never flushed until necessary.
# NOTE max_binlog_size: The max_binlog_size is smaller than the default of 1GB.
# NOTE sort_buffer_size-1: The sort_buffer_size variable should generally be left at its default unless an expert determines it is necessary to change it.
# NOTE innodb_data_file_path: Auto-extending InnoDB files can consume a lot of disk space that is very difficult to reclaim later.
# WARN log_bin: Binary logging is disabled, so point-in-time recovery and replication are not possible.

None of these are critical, so you don’t have to fix them. The only one worth adding is binary logging, for replication and snapshot purposes:

max_binlog_size = 1G
log_bin = /var/log/mysql/mysql-bin.log
server-id = master-01
binlog-format = 'ROW'

  • The max_binlog_size setting determines how large binary logs will be. These logs record your transactions and queries and make checkpoints. If a transaction is bigger than the max, a log may be bigger than the max as well; otherwise, MySQL keeps logs at that limit.
  • The log_bin option turns on binary logging altogether; without it, snapshotting and replication are impossible. Note that this can be very strenuous on disk space. Activating binary logging also requires a server ID, which tells the logs which server they came from (a quick check follows below).
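Once binary logging is enabled and the server restarted, a quick sanity check is the following (standard statements; the output naturally depends on your server):

-- confirm binary logging is active and list the current log files
SHOW MASTER STATUS;
SHOW BINARY LOGS;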

With these sane defaults, the new MySQL is nearly production-ready out of the box. Still, every app is different and will have custom tweaks of its own.

MySQL Tuner

The main purpose of MySQL Tuner is to monitor a database at longer intervals and suggest changes based on what it has seen in the logs.

To install it, simply download it:

wget https://raw.githubusercontent.com/major/MySQLTuner-perl/master/mysqltuner.pl
chmod +x mysqltuner.pl

When you run it with ./mysqltuner.pl, it will ask you for the admin username and password for the database and output information from a quick scan. For example, here is the InnoDB section:

[--] InnoDB is enabled.
[--] InnoDB Thread Concurrency: 0
[OK] InnoDB File per table is activated
[OK] InnoDB buffer pool / data size: 1.0G/11.2M
[!!] Ratio InnoDB log file size / InnoDB Buffer pool size (50 %): 256.0M * 2/1.0G should be equal 25%
[!!] InnoDB buffer pool <= 1G and Innodb_buffer_pool_instances(!=1).
[--] Number of InnoDB Buffer Pool Chunk : 8 for 8 Buffer Pool Instance(s)
[OK] Innodb_buffer_pool_size aligned with Innodb_buffer_pool_chunk_size & Innodb_buffer_pool_instances
[OK] InnoDB Read buffer efficiency: 96.65% (19146 hits/ 19809 total)
[!!] InnoDB Write Log efficiency: 83.88% (640 hits/ 763 total)
[OK] InnoDB log waits: 0.00% (0 waits / 123 writes)

Keep in mind that this tool is best run about once per week, once the server has been running for a while. You can also set up a cronjob to send you the results periodically. Make sure to restart the mysql server after every configuration change:

sudo service mysql restart

Indexes

The easiest way to understand MySQL indexes is to look at the index of a book: instead of going through the whole book to find a subject, the index lets you find it much faster. MySQL indexes work the same way: they speed up your select queries. However, an index also has to be created and stored, which makes update and insert queries slower and costs a bit of extra disk space. In general, you won’t notice the difference with updating and inserting if you have indexed your table correctly, so it’s advisable to add indexes in the right locations.

Tables that only contain a few rows don’t really benefit from indexing. So how do we discover which indexes to add, and which types of indexes exist?
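As a quick illustration (the users table and email column here are hypothetical), EXPLAIN shows whether a query uses an index:

-- without an index on `email`, EXPLAIN shows type=ALL (a full table scan)
EXPLAIN SELECT * FROM `users` WHERE `email` = 'bruno@example.com';

-- after adding one, the same EXPLAIN should show type=ref and the index name
ALTER TABLE `users` ADD INDEX `email` (`email`);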

Unique/Primary Indexes

Primary indexes are the main indexes of data: the primary way of identifying a record. For a user account, that might be a user ID, a username, or the main email. Primary indexes are unique: an indexed value cannot be repeated in a set of data.

For example, once a user has selected a specific username, nobody else should be able to use it. Adding a “unique” index to the username column solves this: MySQL will complain if someone else tries to insert a row with a username that already exists.

ALTER TABLE `users` ADD UNIQUE INDEX `username` (`username`);

You can make indexes on both single columns and multiple columns. For example, to make sure a given username can only be taken once per country, you can add a unique index on both of those columns:

ALTER TABLE `users`
ADD UNIQUE INDEX `usercountry` (`username`, `country`);

Regular Indexes

Regular indexes are the easiest type to look up. They are very useful when you need to find data fast by a specific column or combination of columns, without that data needing to be unique.

ALTER TABLE `users`
ADD INDEX `usercountry` (`username`, `country`);

Fulltext Indexes

If you are looking for full-text searches, FULLTEXT indexes are what you need. Only the InnoDB and MyISAM storage engines support FULLTEXT indexes, and they can only be created on CHAR, VARCHAR, and TEXT columns.

You will find these indexes very useful for text searching; finding words inside bodies of text is FULLTEXT’s specialty. Use it on posts, comments, descriptions, reviews, etc.
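As a small sketch (the posts table and its body column are hypothetical), creating and querying a FULLTEXT index looks like this:

ALTER TABLE `posts` ADD FULLTEXT INDEX `body_ft` (`body`);

-- natural-language search for rows whose body mentions these words
SELECT `id`, `title` FROM `posts`
WHERE MATCH(`body`) AGAINST('mysql performance' IN NATURAL LANGUAGE MODE);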

Descending Indexes

Descending indexes are an addition in version 8+. When you have enormous tables to query, this kind of index comes in handy: it keeps the index sorted in descending order, which comes at a small performance penalty on writes but surely speeds up reads of the latest entries.

CREATE TABLE t (
  c1 INT, c2 INT,
  INDEX idx1 (c1 ASC, c2 ASC),
  INDEX idx2 (c1 ASC, c2 DESC),
  INDEX idx3 (c1 DESC, c2 ASC),
  INDEX idx4 (c1 DESC, c2 DESC)
);

Furthermore, when dealing with logs written in the database, or posts and comments that are read last-to-first, and similar cases, consider applying DESC to an index, as in the query sketch below.
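For instance, a newest-first read over the table above can be served directly by idx4:

-- both columns descending match idx4 (c1 DESC, c2 DESC), so no extra sort is needed
SELECT * FROM t ORDER BY c1 DESC, c2 DESC LIMIT 10;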

Bottlenecks

This part will explain how to detect and monitor for bottlenecks in a database.

slow_query_log = 1
slow_query_log_file = /var/log/mysql/mysql-slow.log
long_query_time = 1
log-queries-not-using-indexes = 1

Add the above to the configuration: MySQL will then log queries that take longer than 1 second to execute, as well as those not using indexes. Once this log has some data, you can analyze it for index usage with the pt-index-usage tool, or run the pt-query-digest tool, whose results look like this:

pt-query-digest /var/log/mysql/mysql-slow.log

# 360ms user time, 20ms system time, 24.66M rss, 92.02M vsz
# Current date: Thu Feb 13 22:39:29 2014
# Hostname: *
# Files: mysql-slow.log
# Overall: 8 total, 6 unique, 1.14 QPS, 0.00x concurrency ________________
# Time range: 2014-02-13 22:23:52 to 22:23:59
# Attribute          total     min     max     avg     95%  stddev  median
# ============     ======= ======= ======= ======= ======= ======= =======
# Exec time            3ms   267us   406us   343us   403us    39us   348us
# Lock time          827us    88us   125us   103us   119us    12us    98us
# Rows sent             36       1      15    4.50   14.52    4.18    3.89
# Rows examine          87       4      30   10.88   28.75    7.37    7.70
# Query size         2.15k     153     296  245.11  284.79   48.90  258.32
# ==== ================== ============= ===== ====== ===== ===============
# Profile
# Rank Query ID           Response time Calls R/Call V/M   Item
# ==== ================== ============= ===== ====== ===== ===============
#    1 0x728E539F7617C14D  0.0011 41.0%     3 0.0004  0.00 SELECT blog_article
#    2 0x1290EEE0B201F3FF  0.0003 12.8%     1 0.0003  0.00 SELECT portfolio_item
#    3 0x31DE4535BDBFA465  0.0003 12.6%     1 0.0003  0.00 SELECT portfolio_item
#    4 0xF14E15D0F47A5742  0.0003 12.1%     1 0.0003  0.00 SELECT portfolio_category
#    5 0x8F848005A09C9588  0.0003 11.8%     1 0.0003  0.00 SELECT blog_category
#    6 0x55F49C753CA2ED64  0.0003  9.7%     1 0.0003  0.00 SELECT blog_article
# ==== ================== ============= ===== ====== ===== ===============
# Query 1: 0 QPS, 0x concurrency, ID 0x728E539F7617C14D at byte 736 ______
# Scores: V/M = 0.00
# Time range: all events occurred at 2014-02-13 22:23:52
# Attribute    pct   total     min     max     avg     95%  stddev  median
# ============ === ======= ======= ======= ======= ======= ======= =======
# Count         37       3
# Exec time     40     1ms   352us   406us   375us   403us    22us   366us
# Lock time     42   351us   103us   125us   117us   119us     9us   119us
# Rows sent     25       9       1       4       3    3.89    1.37    3.89
# Rows examine  24      21       5       8       7    7.70    1.29    7.70
# Query size    47   1.02k     261     262  261.25  258.32       0  258.32
# String:
# Hosts        localhost
# Users        *
# Query_time distribution
#   1us
#  10us
# 100us  ################################################################
#   1ms
#  10ms
# 100ms
#    1s
#  10s+
# Tables
#    SHOW TABLE STATUS LIKE 'blog_article'\G
#    SHOW CREATE TABLE `blog_article`\G
# EXPLAIN /*!50100 PARTITIONS*/
SELECT b0_.id AS id0, b0_.slug AS slug1, b0_.title AS title2, b0_.excerpt AS excerpt3, b0_.external_link AS external_link4, b0_.description AS description5, b0_.created AS created6, b0_.updated AS updated7 FROM blog_article b0_ ORDER BY b0_.created DESC LIMIT 10

You can also analyze these logs by hand, but you have to export the log into a more “analyzable” format which can be done like this:

mysqldumpslow /var/log/mysql/mysql-slow.log

You can add parameters to filter the data and make sure only important things are exported. For example, here are the top 10 queries sorted by average execution time:

mysqldumpslow -t 10 -s at /var/log/mysql/localhost-slow.log

Summary

The above techniques should make MySQL fly. So, when you have to deal with configuration optimization, indexes, and bottlenecks, don’t hesitate to apply them.