Ways to Make your Apps Serverless

The rise of a new buzzword has made many people think that servers no longer exist, but the fact is, a server is still needed somewhere. That is why the term "serverless" can be misleading. What "serverless" really means is that you can build your applications without deploying code to your own servers. As a web developer, your dream of spending less time worrying about servers and more time building software can come true.

Serverless in Action

When your site serves many readers a month, traffic at that scale is significant and sudden, as articles can go viral at any moment. As a result, you may have trouble keeping up, and your engineers may spend too much time on operations. As a solution, you can look at serverless platforms, which can make your projects more maintainable, easier to operate, and cheaper to run.

Amazon Web Services

Serverless has a close relationship with Amazon Web Services (AWS). In fact, AWS answers one critical question: where does the custom code go? The concept of using third-party services and platforms is not new; databases, push notifications, caching, and many other layers of an application have all been available "as a service" for a while, but they sat on the edge of your application. A server was still needed as a home for the core application code, usually a server responding to external requests. With AWS Lambda and AWS API Gateway, you can deploy custom application code without the overhead of managing your own servers.

AWS Lambda

Using Lambda is quite simple: you write code and upload it. Lambda is Amazon's version of functions as a service (FaaS). AWS then runs the code in response to events such as HTTP requests, S3 uploads, DynamoDB updates, Kinesis streams, and many others. Scaling happens automatically, and you are charged only while your functions are running.
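
To make this concrete, here is a minimal sketch of a Node.js Lambda function behind API Gateway's proxy integration (the greeting logic and parameter names are illustrative, not from any particular project):

// handler.js: a minimal AWS Lambda function in Node.js.
exports.handler = (event, context, callback) => {
  // `event` carries the trigger payload; for an API Gateway proxy
  // integration it includes the HTTP request's query parameters.
  const name = (event.queryStringParameters || {}).name || 'world';

  // A proxy-integration response needs a statusCode and a string body.
  callback(null, {
    statusCode: 200,
    body: JSON.stringify({ message: 'Hello, ' + name + '!' })
  });
};

Upload this as a function, point an API Gateway route at it, and AWS takes care of provisioning, scaling, and per-invocation billing.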

None of these features is strictly a requirement for serverless, but AWS has certainly set the bar high. Because of the precedent set by AWS, any serverless platform will likely have a stateless FaaS offering with very granular billing.

Other Platforms

Right now, Amazon may still be the front-runner in the arena, but other providers are catching up quickly. All the major cloud platforms have recently launched services targeted at serverless applications. Here are a few of them:

  • Google Cloud Functions: Still in alpha, it offers almost the same functionality as AWS Lambda and can also be triggered by HTTP requests.
  • Azure Functions: Still relatively new and similar to Lambda. Azure also has a pleasant UI and makes it easy to expose functions over HTTP without needing a separate routing service.
  • IBM OpenWhisk: The only open source platform of the bunch. It is worth investigating if you are interested in deploying your own serverless platform or just curious about how they work under the hood.

Challenges

If you think serverless is the solution to every problem, you might be wrong, for serverless does not come without its challenges. The space is new, and as such the community is still discovering best practices, especially when it comes to operations. The platform still requires tools for deploying, maintaining, and monitoring applications, but many believe that new startups and third-party services will emerge to solve these problems for serverless developers.

Tools

Thanks to the open source community, it is possible to build and deploy serverless applications manually, but we suggest using an existing framework: once you have more than a few endpoints, building, packaging, zipping, uploading, and versioning all become difficult to manage. Here are some frameworks you might want to consider:

  • Serverless framework: This framework has a robust plugin system and integrates with many community-developed plugins. Its stated goal is to eventually support deployment to any of the major cloud platforms.
  • Apex: Even though it is written in Go, it supports the Python, Node.js, and Java runtimes. Furthermore, its creator, TJ Holowaychuk, is a well-known fixture in the open source community and has a great sense of what makes for good developer tools.
  • Chalice: The only framework created and maintained by AWS; it currently supports Python.
  • Shep: If you are looking for a framework that can carry all of your production services, Bustle's own open source framework can be a great choice. It focuses on the Node.js runtime and is deliberately opinionated about how you should structure, build, and deploy applications.

It seems that in 2017, "serverless" technology will keep growing, and you will see rapid adoption everywhere from startups to Fortune 500 companies, because many developers have realized that the serverless movement is the best way to build better software.

Json-api-normalizer: Why JSON API and Redux Work Best When Used Together

As web developers, we have to manage the data needed for every application we work on. Doing so involves several steps:

  1. Fetch data from the back end.
  2. Store it somewhere locally in the front-end application.
  3. Retrieve the data from the local store and format it as needed by the specific view or screen.

In this article, we will discuss consuming data from JSON, JSON API, and GraphQL back ends, and from that, learn a practical way to manage front-end application data. As a running example, imagine that we have carried out a survey that asks the same questions of many users. After each user has given their answers, other users can comment on them if they want to. Our web app will perform a request to the back end, store the gathered data in the local store, and render the content on the page. To keep things simple, we will leave out the answer-creation flow.

Redux Best Practices

One of Redux's strengths is that it is agnostic about the kind of API you consume. You can even change your API from JSON to JSON API or GraphQL and back during development; as long as you keep your data model, it will not affect the implementation of your state management. Below are the best practices for using Redux:

  1. Keep Data Flat in the Redux Store

First, here's the data model: a question data object may have many post objects (the users' answers), each post may have many comment objects, and each post and comment has exactly one author.

Let's say we have a back end that returns a JSON response with a deeply nested structure. If you store the data in the same nested form in your Redux store, you will face problems later; for instance, you might store the same object many times, like this:

{
  "text": "My Post",
  "author": {
    "name": "Yury",
    "avatar": "avatar1.png"
  },
  "comments": [
    {
      "text": "Awesome Comment",
      "author": {
        "name": "Yury",
        "avatar": "avatar1.png"
      }
    }
  ]
}

The example above stores the same author object in several places, which is bad: not only does it consume more memory, it also has negative side effects. If somebody changed the user's avatar in the back end, you would have to traverse the whole state and update every instance of the same object.

To prevent something like that from happening, we can store the data in a flattened structure. This way, each object would be stored only once and would be easily accessible.

{
  "post": [{
    "id": 1,
    "text": "My Post",
    "author": { "id": 1 },
    "comments": [{ "id": 1 }]
  }],
  "comment": [{
    "id": 1,
    "text": "Awesome Comment"
  }],
  "author": [{
    "name": "Yury",
    "avatar": "avatar1.png",
    "id": 1
  }]
}
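
To see the payoff, here is a hypothetical reducer sketch (the action type and shape are made up for illustration): updating the author's avatar now touches exactly one object, and every post and comment that references { "id": 1 } picks up the change on read.

// A sketch of a reducer over the flat "author" collection.
function authorReducer(state = [], action) {
  switch (action.type) {
    case 'AUTHOR_AVATAR_UPDATED':
      // Replace the avatar on the single matching author entry.
      return state.map(author =>
        author.id === action.id
          ? Object.assign({}, author, { avatar: action.avatar })
          : author
      );
    default:
      return state;
  }
}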

  2. Store Collections as Maps Whenever Possible

Once we have the data in a good flat structure, we can gradually accumulate what we receive, which lets us reuse it as a cache, improve performance, or support offline use. However, when new data is merged into existing storage, we have to be able to select only the data objects relevant to a specific view. To achieve this, we can store the structure of each JSON document separately, recording which data objects were delivered by each request: a list of data object IDs that we can use to gather the data from storage.

Let's say there are lists of friends for two different users, Alice and Bob. We will perform two requests to fetch those lists and then review the contents of our storage at each step. Assume the storage is empty at the start.

/ALICE/FRIENDS RESPONSE

The first response contains a User data object with an ID of 1 and the name Mike:

{
  "data": [{
    "type": "User",
    "id": "1",
    "attributes": {
      "name": "Mike"
    }
  }]
}

/BOB/FRIENDS RESPONSE

The second request returns a User with an ID of 2 and the name Kevin:

{
  "data": [{
    "type": "User",
    "id": "2",
    "attributes": {
      "name": "Kevin"
    }
  }]
}

STORAGE STATE

This is what our storage state would look like:

{
  "users": [
    {
      "id": "1",
      "name": "Mike"
    },
    {
      "id": "2",
      "name": "Kevin"
    }
  ]
}

STORAGE STATE WITH META DATA

To distinguish which data objects in storage are relevant to which request, we have to keep the structure of each JSON API document. With that in mind, we can change the storage to this:

{
  "users": [
    {
      "id": "1",
      "name": "Mike"
    },
    {
      "id": "2",
      "name": "Kevin"
    }
  ],
  "meta": {
    "/alice/friends": [
      {
        "type": "User",
        "id": "1"
      }
    ],
    "/bob/friends": [
      {
        "type": "User",
        "id": "2"
      }
    ]
  }
}

With this, we can now read the meta data and gather all of the mentioned data objects. To recap the complexity of the operations: with an array, finding, updating, or deleting an object by ID costs O(n), while a map does each of these in O(1), so maps work better than arrays here. If we use a map instead of an array for the User data objects, the storage looks like this:

STORAGE STATE REVISED

{
  "users": {
    "1": {
      "name": "Mike"
    },
    "2": {
      "name": "Kevin"
    }
  },
  "meta": {
    "/alice/friends": [
      {
        "type": "User",
        "id": "1"
      }
    ],
    "/bob/friends": [
      {
        "type": "User",
        "id": "2"
      }
    ]
  }
}

Now with this simple method, we can find a specific user by ID almost instantly.
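
Here is a hypothetical selector sketch over the revised storage state (the function names are made up for illustration); the map lookup is the O(1) access mentioned above, and the second function shows how the meta data drives the gathering step:

// Look up one user: O(1) on the map, versus O(n) with
// state.users.find(...) on an array.
function getUser(state, id) {
  return state.users[id];
}

// Gather every user mentioned by a given endpoint's meta data.
function getUsersForEndpoint(state, endpoint) {
  return state.meta[endpoint].map(ref => state.users[ref.id]);
}

// Usage: getUsersForEndpoint(storage, '/alice/friends')
// returns [{ "name": "Mike" }].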

Processing the Data and JSON API

There are many solutions for converting JSON documents to a Redux-friendly form. Normalizing a JSON document against a schema defined in advance works great if your data model is known up front and does not change significantly during the application's lifecycle; it fails, however, if things are too dynamic.
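
For example, normalizr is a widely used library that takes this schema-first approach. A minimal sketch for our post/comment/author model might look like this (it assumes every object carries an id, and postDocument stands in for a fetched JSON response):

// Define the schema up front, then normalize documents against it.
import { normalize, schema } from 'normalizr';

const author = new schema.Entity('authors');
const comment = new schema.Entity('comments', { author });
const post = new schema.Entity('posts', { author, comments: [comment] });

// `entities` holds flat maps keyed by ID; `result` holds the top-level ID.
const { entities, result } = normalize(postDocument, post);

The price of this convenience is exactly the limitation above: the schema has to be known before the data arrives.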

Using GraphQL might be possible and interesting as well; however, if our APIs are consumed by many third parties, we cannot simply switch to it.

JSON API and Redux

Redux and JSON API work best when used together. JSON API provides data in a flat structure by definition, which conforms nicely with the Redux best practices above. The data is also typed, so it can be naturally stored in Redux in maps with the format type → map of objects.

There are things to consider, though. Storing the "data" and "included" data objects as two separate entities in the Redux store would violate Redux best practices, because the same data objects would then be stored more than once.

To solve these problems, we can use the main features of json-api-normalizer, shown in the sketch after this list:

  • Merge the data and included fields, normalizing the data.
  • Convert collections into maps in the form id → object.
  • Store the response's original structure in a special meta object.
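
Here is a sketch of json-api-normalizer applied to the /alice/friends response from earlier. It assumes the library's default export and its endpoint option; consult the library's README for the exact output shape:

// Normalize a JSON API document into a Redux-friendly shape.
import normalize from 'json-api-normalizer';

const normalized = normalize(aliceFriendsResponse, {
  endpoint: '/alice/friends'
});
// `normalized` should contain a map of the returned User objects keyed
// by ID, plus a meta entry recording which objects '/alice/friends'
// returned.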

Two notes on this design. First, the distinction between data and included data objects was introduced in the JSON API specification precisely to solve problems with redundant structures and circular dependencies. Second, data in Redux is updated incrementally, which helps improve performance.

Now that you know why JSON API and Redux work best when used together, it should be clear that this approach helps us prototype much faster and stay flexible in the face of changes to the data model. If you were in doubt about whether to use Redux with JSON API, this article should help you resolve it.

Best Practices for API Error Handling

As web developers, we don't want to see error codes in an API response, and neither do you. When one appears, it can mean one of two things: there was something wrong in your request or your handling, so the API simply couldn't parse the passed data, or the API itself has problems. In either situation, traffic comes to a sudden stop, and we start trying to discover the cause and a solution.

Although unpleasant, errors, whether in code form or a simple error response, are incredibly useful. Error codes are probably the most useful diagnostic element in the API space.

Today, we're going to discuss the importance and usefulness of error responses and error-handling approaches. We'll look at some common error code classifications the average user will encounter, as well as some examples of these codes in action.

The Value of Error Codes
As explained above, error codes are surprisingly, but incredibly, useful. Error codes in the response stage of an API are essential for communicating failure from a developer to a user. This stage is a direct communication between client and API, and it is the most important step toward informing the user of a failure, as well as speeding up the error-resolution process.

Errors occur unpredictably, and sometimes their causes are beyond our knowledge. That's why error responses are the only constant, consistent communication that we, as users, can rely on when an error has occurred. Error codes can both clarify the situation and communicate the intended functionality.

Take, for example, an error code such as 401 Unauthorized – Please Pass Token. You understand the point of failure in that response: the user is unauthorized. But you also learn the intended functionality: the API requires a token, and that token must be passed as part of the request in order to obtain authorization.

With a simple error code and resolution explanation, you have identified the cause of the error, the intended functionality, and the method for fixing the error. That is a lot of value for the small amount of data actually returned.

HTTP Status Codes
Before we discuss error codes in depth and what makes a code good, we need to sort out the HTTP status code format. These are the status codes that the average user encounters most frequently, not only in terms of APIs but in general internet usage. Although other protocols exist and have their own systems of codes, HTTP status codes dominate API communication, and vendor-specific codes tend to be derived from these ranges.

  • 1XX – Informational

The 1XX range has two basic functions. The first is transferring information about the protocol state of the connected devices; for example, 101 Switching Protocols is a status code noting that the client has requested a protocol change from the server, and that the request has been approved. The 1XX range also clarifies the state of the initial request. For example, 100 Continue notes that a server has received the request headers from a client, and that the server is waiting for the request body.

  • 2XX – Success

The 2XX range notes a range of successes in communication, and groups several responses into specific codes. The first three status codes illustrate this range well: 200 OK means that a GET or POST request was successful; 201 Created indicates that a request has been completed and a new resource created for the client; and 202 Accepted shows that the request has been accepted and that processing has begun.

  • 3XX – Redirection

The 3XX range is all about the status of the resource or endpoint. When this kind of status code is sent, it confirms that the server is still accepting communication, but that the contacted point is not the correct point of entry into the system. 301 Moved Permanently denotes that the client request did reach the correct system, but that this request and all future requests should be directed to a different URI. This is very convenient both for subdomains and for moving a resource from one server to another.

  • 4XX – Client Error

The 4XX series of error codes is probably the most well known, thanks to the famous 404 Not Found status, a prominent marker for malformed URLs and URIs. However, this range also contains other status codes that are more useful for APIs.

414 URI Too Long is a common status code, indicating that the data pushed through in a GET request is too long and should be converted into a POST request. Another common code is 429 Too Many Requests, which is used for rate limiting, to note that a client is attempting so many simultaneous requests that its traffic is being rejected.

  • 5XX – Server Error

The 5XX range is reserved for error codes specifically related to server functionality. Whereas the 4XX range is the client's responsibility (a client failure), the 5XX range explicitly notes failures on the server. Error codes such as 502 Bad Gateway, which notes that an upstream server has failed and that the current server is acting as a gateway, further expose server functionality as a way of showing where the failure is occurring.

Making a Good Error Code
With a solid understanding of HTTP status codes, we can start analyzing what actually makes for a good error code and what makes for a bad one. Quality error codes not only convey what went wrong but also why it went wrong.

Excessively obscure error codes are very inconvenient. Imagine that you are trying to make a GET request to an API that manages digital music inventory. You’ve submitted your request to an API that usually accepts your traffic, you’ve passed the correct authorization and authentication credentials, and you think the server is ready to respond.

You send your data and receive this error code: 400 Bad Request. Without additional data and more information, what does this mean? It's in the 4XX range, so you know the problem was on the client side, but it explains and solves nothing beyond "bad request."

This shows how a nominally "helpful" error code can fall short of actually being helpful. We could easily make that same response clear with little extra effort. Good error codes must meet three essential criteria in order to be functional:

  • An HTTP status code, so that the source and realm of the problem can be identified with ease;
  • An internal reference ID for documentation-specific notation of errors. In some cases, this can replace the HTTP status code, as long as the internal reference sheet includes the HTTP status code scheme or similar reference material;
  • Human-readable messages that summarize the context, cause, and solution for the error at hand.
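
For illustration, a response that meets all three criteria might look like the following sketch (the field names, the AUTH-001 reference ID, and the documentation URL are hypothetical, not from any particular API):

{
  "status": 401,
  "error_code": "AUTH-001",
  "message": "Unauthorized: no token was passed. Include your API token in the Authorization header and retry the request.",
  "documentation": "https://example.com/docs/errors#AUTH-001"
}

One small body covers all three criteria: the HTTP status, an internal reference into the documentation, and a human-readable message that points at the fix.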

From this discussion, we can conclude that error codes are useful when they convey what went wrong and why, topped off with human-readable messages emphasizing the solution we can apply. That way, when future errors occur unexpectedly, we can handle them with little effort.