Ways to Make Your Apps Serverless

The rise of a new buzzword has made many people think that servers no longer exist, but the fact is that a server is still needed somewhere. This is why the term “serverless” can mislead many people. What “serverless” really means is that you can build your applications without deploying code to your own servers. As a web developer, your dream of spending less time worrying about servers and more time building software can come true.

Serverless in Action

When your site serves many readers a month, traffic arrives at significant scale and can spike suddenly, since articles can go viral at any moment. As a result, you may have trouble keeping up, and your engineers may spend too much time on operations. As a solution, you can take a look at serverless platforms, which can make your projects more maintainable, easier to operate, and cheaper to run.

Amazon Web Services

Serverless has a close relationship with Amazon Web Services (AWS). In fact, AWS answers one critical question: where does the custom code go? The concept of using third-party services and platforms is not new; databases, push notifications, caching, and many other layers of an application have all been available “as a service” for a while, but they sat on the edge of your application. A server was still needed as a place for core application code, usually a server responding to external requests. Through AWS Lambda and AWS API Gateway, you can deploy custom application code without the overhead of managing your own servers.

AWS Lambda

Using Lambda is quite simple: you write code and upload it. Lambda is Amazon’s version of functions-as-a-service (FaaS). AWS then runs the code in response to events such as HTTP requests, S3 uploads, DynamoDB updates, Kinesis streams, and many others. Scaling happens automatically, and you are only charged while your functions are running.
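To give a feel for how little code is involved, here is a minimal sketch of a Lambda handler in Python. The function name, event fields, and log message are illustrative assumptions for the example, not part of any particular application.

```python
import json

def handler(event, context):
    """Minimal AWS Lambda handler sketch (Python runtime).

    Lambda passes the triggering event as a dict: an S3 upload arrives
    under event["Records"], while an API Gateway request carries its
    payload under event["body"]. The names below are illustrative.
    """
    records = event.get("Records", [])
    if records and "s3" in records[0]:
        # Triggered by an S3 upload: log the uploaded object's key.
        key = records[0]["s3"]["object"]["key"]
        print(f"New object uploaded: {key}")
        return {"processed": key}

    # Otherwise treat it as an HTTP request proxied through API Gateway.
    body = json.loads(event.get("body") or "{}")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": "Hello from Lambda", "input": body}),
    }
```

You would upload this function, point an API Gateway route or an S3 bucket notification at it, and AWS handles the rest of the scaling and execution.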

None of these features is strictly a requirement for serverless, but AWS has certainly set the bar high. Because of the precedent set by AWS, any serverless platform will likely have a stateless FaaS offering with very granular billing.

Other Platforms

Right now, Amazon may still be the leader in the arena, but other providers are showing up quickly. All the major cloud platforms have recently launched services targeted at serverless applications. Here are a few of them:

  • Google Cloud Functions: Still in alpha, it offers almost the same functionality as AWS Lambda and can also be triggered by HTTP requests.
  • Azure Functions: This platform is still relatively new and similar to Lambda. Azure also has a pleasant UI and makes it easy to expose functions over HTTP without needing a separate routing service.
  • IBM OpenWhisk: This is the only open-source platform of the group. You will want to investigate it if you are interested in deploying your own serverless platform, or if you are just curious about how these services work under the hood.

Challenges

If you think serverless is the solution to every problem, you might be wrong, for serverless does not come without its challenges. The space is new, and the community is still discovering best practices, especially when it comes to operations. The platform still requires tools for deploying, maintaining, and monitoring applications. However, many believe that new startups and third-party services will emerge to solve these problems for serverless developers.

Tools

Thanks to an active open-source community, it is possible to manually build and deploy serverless applications yourself, but we suggest that you use an existing framework: once you have more than a few endpoints, building, packaging, zipping, uploading, and versioning all become difficult to manage. Here are some frameworks that you might want to consider:

  • Serverless Framework: This framework has a robust plugin system and integrates with many community-developed plugins. Its stated goal is to eventually support deployment to any of the major cloud platforms.
  • Apex: Although the tool itself is written in Go, it supports the Python, Node.js, and Java runtimes. Furthermore, its creator, TJ Holowaychuk, is a well-known fixture in the open-source community and has a great sense of what makes for good developer tools.
  • Chalice: This is the only framework created and maintained by AWS itself, and it currently supports Python (see the sketch after this list).
  • Shep: If you are looking for a framework that can be used for all your production services, Bustle’s own open-source framework can be a great choice. It focuses on the Node.js runtime and strives to be opinionated about how you should structure, build, and deploy applications.
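As a taste of how small such a deployment can be, here is a minimal Chalice sketch. The app name and route are placeholders, and it assumes AWS credentials are already configured on your machine.

```python
# app.py -- deploy with `chalice deploy`
from chalice import Chalice

app = Chalice(app_name="hello-serverless")  # hypothetical app name

@app.route("/")
def index():
    # Chalice wires this function up to API Gateway and Lambda for you.
    return {"message": "Hello from a serverless endpoint"}
```

Running `chalice deploy` packages the code, creates the Lambda function and API Gateway route, and returns a public URL, which is exactly the kind of busywork these frameworks exist to hide.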

It seems that in 2017, “serverless” technology will keep growing, and you will see rapid adoption from startups to Fortune 500 companies. This is because many developers have realized that the serverless movement is the best way to build better software.

Using CDNs to Reduce Network Latency

Network latency can be understood in two ways. In relation to overall network performance, latency is the number of milliseconds it takes for your web content to begin rendering in a visitor’s browser.

In relation to network computing, latency is the time it takes for a site visitor to make an initial connection with your web server.

So, by minimizing latency, you can correspondingly reduce page load time and enhance your site visitors’ experience. Minimizing latency is therefore highly recommended for any e-commerce site. If you are a web developer, this article is for you.

How to Measure Latency

There are several methods that you can use to measure latency, such as:

Round-trip time (RTT): You can measure round-trip time with ping, a command-line tool that bounces a request from a user’s system off any targeted server. RTT is determined by the interval it takes for the packets to be returned to the user.

While the ping value generally provides a reliable assessment of latency, network congestion or throttling can occasionally produce a false reading.
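If you prefer to measure RTT programmatically rather than from the command line, here is a rough Python sketch that times a TCP connection to a host. It approximates ping-style RTT with a TCP handshake rather than ICMP, and the host name is just an example.

```python
import socket
import time

def measure_rtt(host: str, port: int = 443, samples: int = 4) -> float:
    """Approximate round-trip time (in seconds) by timing TCP connections."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        # A TCP handshake costs roughly one network round trip.
        with socket.create_connection((host, port), timeout=5):
            pass
        timings.append(time.perf_counter() - start)
    return min(timings)  # the minimum sample is least affected by congestion

print(f"RTT ~ {measure_rtt('example.com') * 1000:.1f} ms")  # example host
```

Taking the minimum of several samples helps filter out the occasional false readings caused by congestion mentioned above.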

Time to first byte (TTFB): After the web server receives an initial user request, the time it takes for the visitor’s browser to begin rendering the requested page is known as time to first byte (TTFB). There are two ways to measure it (a quick sketch follows the list):

  • Actual TTFB: The amount of time taken for the first data byte from your server to reach a visitor’s browser. Network speed and connectivity affect this value.
  • Perceived TTFB: The amount of time it takes for a site visitor to perceive your web content as being rendered in their browser. The time it takes for an HTML file to be parsed impacts this metric, which is critical to both SEO and UX.
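As a rough way to observe actual TTFB yourself, the following Python sketch times how long the first byte of a response takes to arrive. The third-party requests library and the example URL are assumptions for the illustration.

```python
import time
import requests  # third-party HTTP client, assumed to be installed

def measure_ttfb(url: str) -> float:
    """Return the seconds elapsed until the first response byte arrives."""
    start = time.perf_counter()
    with requests.get(url, stream=True, timeout=10) as resp:
        # Reading a single byte forces the wait for the first chunk of the body.
        next(resp.iter_content(chunk_size=1), b"")
        return time.perf_counter() - start

print(f"TTFB ~ {measure_ttfb('https://example.com/') * 1000:.1f} ms")
```

This captures actual TTFB as defined above; perceived TTFB additionally depends on how quickly the browser can parse and start rendering the HTML.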

How CDNs Reduce Your Network Latency

To reduce network latency, you can use a CDN, which works in several ways:

  • Content caching: You get this benefit through a CDN’s global network of strategically placed points of presence (PoPs), where exact copies of your web pages are cached and compressed. Because your site visitors are generally served content from the PoP closest to their location, this greatly decreases RTT and latency (see the sketch after this list).
  • Connection optimization: Session reuse and network peering optimize connections between visitors and origin servers.
  • Progressive image rendering: For any image, a progressive series is overlaid one over another in the visitor’s browser, each overlay at a higher resolution. The visitor’s perception is that the page is rendering more quickly in their browser than it would otherwise.
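For a CDN PoP to cache your pages, your origin has to tell it what is cacheable. Here is a minimal sketch of setting cache headers from a Python (Flask) origin; the route, app name, and header values are illustrative assumptions, not a recommendation for any particular site.

```python
from flask import Flask, make_response  # assumes Flask is installed

app = Flask(__name__)

@app.route("/article/<slug>")
def article(slug):
    resp = make_response(f"<h1>{slug}</h1>")
    # Let the CDN edge (s-maxage) cache this page for 10 minutes while
    # browsers revalidate after 60 seconds. Values are illustrative only.
    resp.headers["Cache-Control"] = "public, max-age=60, s-maxage=600"
    return resp
```

With headers like these in place, the PoP closest to the visitor can serve the cached copy directly instead of making a round trip to your origin.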

Reducing network latency is very important for keeping your website at its best, as latency determines your website’s performance and its ability to attract more visitors. With these techniques, you can build an awesome website without having to worry too much about slow page load times.

Why Crawl Budget and URL Scheduling Might Impact Rankings in Website Migrations

During a migration, many webmasters notice turbulence in PageRank because not all of the signals impacting rankings have passed to the new pages yet, so they assume that PageRank was lost. In addition, Googlebot needs to collect huge amounts of data to be collated in logs, mapped, and updated internally, and rankings can fluctuate throughout this process. If you are an SEO engineer or web developer, read on to understand why a website migration can impact PageRank.

Crawl Budget = host load + URL scheduling combined

URL scheduling is important because it determines “which URLs does Googlebot want to visit, and how often?”, while host load is based around “what can Googlebot visit from an IP/host, given its capacity and server resources?” Both still matter in migrations; together, they make up the “crawl budget” for an IP or host.

This will not have much impact if your website only has a few pages, but it matters terribly when you have an e-commerce or news site with tens of thousands, hundreds of thousands, or more URLs. Sometimes, crawling tools run before the migration goes live cannot detect anything wrong, yet rankings and overall visibility still drop afterwards.

This can be caused by “late and very late signals in transit,” rather than “lost signals.” In fact, some signals could take months to pass, since Googlebot does not crawl large websites the way crawling tools do.

Change Management/Freshness is Important

Everyone knows that change frequency impacts crawl frequency, and URLs change all the time on the web. Keeping the probability of embarrassment for search engines (the “embarrassment metric”), meaning the risk of returning stale content in search results, below acceptable thresholds is key, and it must be managed efficiently. To avoid such “embarrassment,” scheduling systems are built to prioritize crawling important pages that change frequently over less important pages, such as those with insignificant changes or low-authority pages.

These key pages are the ones search engine users will see most often, versus pages that rarely surface in search engine results pages. This also shows that search engines learn over time how frequently web pages change by comparing the latest copy with previous copies of a page to detect patterns of material change.
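To make the scheduling idea concrete, here is a toy Python sketch of how a crawler might rank URLs by a combination of importance and observed change frequency, then cap the visit by host load. The scoring formula, field names, and numbers are invented for illustration; this is not how Googlebot actually works.

```python
from dataclasses import dataclass

@dataclass
class UrlRecord:
    url: str
    importance: float        # e.g. a PageRank-like score, 0..1
    change_frequency: float  # fraction of past crawls where content had changed

def crawl_priority(record: UrlRecord) -> float:
    # Toy scoring: important pages that change often float to the top.
    return record.importance * (0.5 + record.change_frequency)

queue = [
    UrlRecord("https://example.com/", importance=0.9, change_frequency=0.8),
    UrlRecord("https://example.com/old-press-release", importance=0.2, change_frequency=0.01),
    UrlRecord("https://example.com/category/shoes", importance=0.6, change_frequency=0.5),
]

# The "bucket list" for this visit: highest-priority URLs first.
bucket_list = sorted(queue, key=crawl_priority, reverse=True)

# Host load caps how much of the bucket list is actually crawled this visit.
host_load_capacity = 2  # illustrative limit based on server resources
for record in bucket_list[:host_load_capacity]:
    print("crawl:", record.url)
```

The point of the sketch is simply that low-importance, rarely changing URLs sit at the bottom of the queue, which is why their migration signals can take so long to be picked up.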

Why can’t Googlebot visit migrated pages all at once?

The explanation above leads to two conclusions. First, Googlebot usually arrives at a website with a purpose, a “work schedule,” and a “bucket list” of URLs to crawl during the visit. Googlebot will complete its bucket list and then check around to see whether there is anything more important than the URLs on the original bucket list that may also need collecting.

Furthermore, if there are such important URLs, Googlebot may go a little further and crawl them as well. If nothing further of importance is discovered, Googlebot returns with another bucket list the next time it visits your site.

Whether you have recently migrated a site or not, Googlebot mostly focuses on a very few (important) URLs, with only occasional visits to those deemed least important or not expected to have changed materially very often.

Moreover, when Googlebot comes across lots of redirection response codes, that signals a migration of some sort is underway. Once again, mostly only the most important migrating URLs will get crawled as a priority, and perhaps more frequently than they normally would be. Because of this, it is important to know the factors, aside from page importance and change frequency, that determine when URLs get visited: limited search engine resources, host load, URL queues, and the low importance of migrating pages.