
Json-api-normalizer: Why JSON API and Redux Work Best When Used Together

As web developers, we have to manage the data needed for every application we work on, which typically involves these tasks:

  1. Fetch data from the back end.
  2. Store it somewhere locally in the front-end application.
  3. Retrieve the data from the local store and format it as needed by the specific view or screen.

In this article, we will discuss consuming data from JSON, JSON API and GraphQL back ends, and from that we will learn a practical way to manage front-end application data. As a real-world example, let’s imagine that we have carried out a survey that asks the same questions of many users. After each user has given their answers, other users can comment on them if they want to. Our web app will perform a request to the back end, store the gathered data in the local store and render the content on the page. To keep things simple, we will leave out the answer-creation flow.

Redux Best Practices

One of Redux’s strengths is that it is agnostic to the kind of API you consume. It doesn’t matter whether you switch your API from JSON to JSON API or even GraphQL and back during development: as long as you keep your data model, it will not affect the implementation of your state management. Below are the best practices for using Redux:

  1. Keep Data Flat in the Redux Store

First, here’s the data model:

We have a question data object that may have many post objects. Each post may have many comment objects, and each post and comment has exactly one author.

Let’s say we have a back end that returns a response with a carefully nested structure. If you store the data in your Redux store in the same shape, you will run into several problems later. For instance, you might store the same object many times, like this:

{
  "text": "My Post",
  "author": {
    "name": "Yury",
    "avatar": "avatar1.png"
  },
  "comments": [
    {
      "text": "Awesome Comment",
      "author": {
        "name": "Yury",
        "avatar": "avatar1.png"
      }
    }
  ]
}

In the example above, we store the same Author object in several places. That is bad: not only does it take more memory, but it also has negative side effects. If somebody changed the user’s avatar in the back end, you would have to walk the whole state and update every instance of the same object.

To prevent something like that from happening, we can store the data in a flattened structure. This way, each object would be stored only once and would be easily accessible.

{
  "post": [{
    "id": 1,
    "text": "My Post",
    "author": { "id": 1 },
    "comments": [ { "id": 1 } ]
  }],
  "comment": [{
    "id": 1,
    "text": "Awesome Comment"
  }],
  "author": [{
    "id": 1,
    "name": "Yury",
    "avatar": "avatar1.png"
  }]
}
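
To see why this pays off, here is a minimal sketch of updating an author’s avatar in a flat store like the one above. The updateAuthorAvatar helper and the flatState variable are illustrative names, not part of any library; the point is that the author lives in exactly one place.

const flatState = {
  post: [{ id: 1, text: "My Post", author: { id: 1 }, comments: [{ id: 1 }] }],
  comment: [{ id: 1, text: "Awesome Comment" }],
  author: [{ id: 1, name: "Yury", avatar: "avatar1.png" }]
};

// Hypothetical update helper: because each author is stored only once,
// changing the avatar is a single, local change.
function updateAuthorAvatar(state, authorId, avatar) {
  return {
    ...state,
    author: state.author.map((a) =>
      a.id === authorId ? { ...a, avatar } : a
    )
  };
}

// Every post and comment referencing author 1 now resolves to the new avatar.
const nextState = updateAuthorAvatar(flatState, 1, "avatar2.png");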

  2. Store Collections as Maps Whenever Possible

Once the data is in a nice flat structure, we can gradually accumulate the received data so that we can reuse it as a cache, improve performance or support offline use. However, when we merge new data into the existing storage, we still need to select only the data objects relevant to a specific view. To achieve this, we can store the structure of each JSON document separately, so that we know which data objects were delivered by a particular request. That structure is essentially a list of data object IDs, which we can use to fetch the data from the storage.

Let’s say we request the friend lists of two different users, Alice and Bob. We will perform the two requests and review the contents of our storage after each one. Let’s assume that the storage is empty at the start.

/ALICE/FRIENDS RESPONSE

The first response contains a User data object with an ID of 1 and the name Mike:

{
  "data": [{
    "type": "User",
    "id": "1",
    "attributes": {
      "name": "Mike"
    }
  }]
}

/BOB/FRIENDS RESPONSE

This is another request that would return a User with the ID of 2 and Kevin as the name:

{
  "data": [{
    "type": "User",
    "id": "2",
    "attributes": {
      "name": "Kevin"
    }
  }]
}

STORAGE STATE

This is what our storage state would look like:

{
  "users": [
    {
      "id": "1",
      "name": "Mike"
    },
    {
      "id": "2",
      "name": "Kevin"
    }
  ]
}

STORAGE STATE WITH META DATA

In order to distinguish which data objects in storage are relevant to which request, we have to keep the structure of each JSON API document. With that in place, we can change the storage to this:

{
  "users": [
    {
      "id": "1",
      "name": "Mike"
    },
    {
      "id": "2",
      "name": "Kevin"
    }
  ],
  "meta": {
    "/alice/friends": [
      {
        "type": "User",
        "id": "1"
      }
    ],
    "/bob/friends": [
      {
        "type": "User",
        "id": "2"
      }
    ]
  }
}

With this, we can now read the meta data and gather all of the mentioned data objects. As for how to store the objects themselves, it is worth comparing the complexity of the basic operations: maps work better than arrays here, because finding, adding or replacing an element by ID is O(1) in a map instead of O(n) in an array. If we use a map instead of an array for the User data objects, the storage looks like this:

STORAGE STATE REVISED

{
  "users": {
    "1": {
      "name": "Mike"
    },
    "2": {
      "name": "Kevin"
    }
  },
  "meta": {
    "/alice/friends": [
      {
        "type": "User",
        "id": "1"
      }
    ],
    "/bob/friends": [
      {
        "type": "User",
        "id": "2"
      }
    ]
  }
}

Now with this simple method, we can find a specific user by ID almost instantly.
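
To make the benefit concrete, here is a small sketch of a selector that reads the meta entry for an endpoint and resolves each reference against the users map in constant time. The getUsersForEndpoint name is illustrative; the state shape mirrors the example above.

const storageState = {
  users: {
    "1": { name: "Mike" },
    "2": { name: "Kevin" }
  },
  meta: {
    "/alice/friends": [{ type: "User", id: "1" }],
    "/bob/friends": [{ type: "User", id: "2" }]
  }
};

// Hypothetical selector: resolve the objects referenced by one endpoint's meta.
function getUsersForEndpoint(state, endpoint) {
  const refs = state.meta[endpoint] || [];
  // Each lookup in the users map is O(1), unlike scanning an array.
  return refs.map((ref) => state.users[ref.id]);
}

getUsersForEndpoint(storageState, "/alice/friends"); // => [{ name: "Mike" }]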

Processing the Data and JSON API

There are many solutions for converting JSON documents to a Redux-friendly form. A normalizing function that is given a JSON document together with its schema works great if your data model is known in advance and does not change significantly during the application’s lifecycle; however, it fails if things are too dynamic.

Using GraphQL might be possible and interesting as well; however, if our APIs are consumed by many third parties, we can’t simply adopt it.

JSON API and Redux

Redux and the JSON API work best when used together. The data provided by a JSON API back end is flat by definition, which conforms nicely to Redux best practices. The data is also typed, so it can naturally be saved in the Redux store as a map with the format type → map of objects.

There are things to consider, though. Storing the data objects from the “data” and “included” fields of the response as two separate entities in the Redux store would violate Redux best practices, because the same data objects would end up being stored more than once.
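
For reference, a JSON API document separates the primary objects (“data”) from the related objects they reference (“included”), so each related object is delivered only once. A small illustrative response, with made-up values, might look like this:

{
  "data": [{
    "type": "post",
    "id": "1",
    "attributes": { "text": "My Post" },
    "relationships": {
      "author": { "data": { "type": "user", "id": "1" } }
    }
  }],
  "included": [{
    "type": "user",
    "id": "1",
    "attributes": { "name": "Yury", "avatar": "avatar1.png" }
  }]
}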

To solve these problems, we can use the main features of json-api-normalizer:

  • Data and included fields are merged, and the data is normalized.
  • Collections are converted into maps of the form id => object.
  • The response’s original structure is stored in a special meta object.

First, the JSON API specification introduced the distinction between data and included data objects to solve problems with redundant structures and circular dependencies, so it is safe to merge them back together in the store. Second, data in Redux is updated constantly, although gradually, and this normalized, map-based shape helps improve performance.
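
Here is a minimal usage sketch. The normalize function and the endpoint option follow the json-api-normalizer README as I recall it, so double-check the current documentation; the response value is the /bob/friends document from earlier.

import normalize from 'json-api-normalizer';

// The raw JSON API document returned by GET /bob/friends (see above).
const response = {
  data: [{ type: "User", id: "2", attributes: { name: "Kevin" } }]
};

// The endpoint option asks the normalizer to also record, under "meta",
// which objects this particular request returned.
const normalized = normalize(response, { endpoint: '/bob/friends' });

// normalized now holds a Redux-friendly shape (type -> map of id -> object,
// plus the meta section) that a reducer can merge into the store.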

Now that you know why the JSON API and Redux work so well together, it can be concluded that this approach helps us prototype a lot faster and stay flexible with changes to the data model. If you were in doubt about whether to use Redux with the JSON API, this article should have given you good reasons to stop doubting this method.

Myths and Realities of Replaced Elements in HTML


According to the official specs, replaced elements are content outside the scope of the CSS formatting model, such as an image, embedded document, or applet. For instance, the content of the HTML IMG element is often replaced by the image that its src attribute designates. Replaced elements also often have intrinsic dimensions: an intrinsic width, an intrinsic height, and an intrinsic ratio. Now you have a general description of what a replaced element is, but as a web developer you need to look deeper into how replaced elements actually behave.

Replaced Elements in the Real World
To get the full picture of replaced elements, we need to go to a different resource, namely the Rendering section of the HTML Living Standard. But when you look closer, the spec can be confusing, because some HTML elements operate as replaced elements all the time, while others do so only in specific circumstances.

Embedded Content
Embedded content is the first category of replaced elements. Embedded content means any element that imports another resource into the document, or content from another vocabulary that is inserted into the document. These external resources come with intrinsic dimensions, so they match the requirements of the definition.

Embed, iframe, and video are the main elements in this category. Since they always import external content into your document, these elements are always treated as replaced elements. A few more elements, which are a bit more complicated, fall into this category only in special circumstances:

  • applet – Treated as a replaced element when it represents a plugin, otherwise it’s treated as an ordinary element.
  • audio – Treated as a replaced element only when it is “exposing a user interface element”. Will render about one line high, as wide as is necessary to expose the user agent’s user interface features.
  • object – Treated as a replaced element when it represents an image, plugin, or nested browsing context (similar to an iframe).
  • canvas – Treated as a replaced element when it represents embedded content. That is, it contains the element’s bitmap, if any, or else a transparent black bitmap with the same intrinsic dimensions as the element.

Images
Images are other elements that are treated as replaced elements, with the intrinsic dimensions of the image. This category also includes input elements with a type="image" attribute.

When the image is not rendered on the page, things get a bit more complicated, for various reasons. For instance, <input type="image"> is then displayed as a normal button.

Default Size of Replaced Elements
We can understand the default sizing of these elements through three basic rules:

  • if the object has explicit width, height and ratio values, use them;
  • if the object only has a ratio, use auto for both width and height while maintaining that ratio;
  • if none of these dimensions are available:
    – use width: 300px; height: 150px when the viewport is larger than 300px;
    – use auto for both width and height and a ratio of 2:1 when the viewport is smaller than 300px.

What About the Other Types of Form Controls?
There is a common misconception that the other types of form controls are replaced elements too. After all, these elements are also rendered with a default width and height. In fact, the idea that they have intrinsic dimensions comes from the following line in the spec:

Each kind of form control is also described in the Widgets section, which describes the look and feel of the control. That section is also the reason why form controls look different from one browser to the next and from one OS to another:

The elements defined in this section can be rendered in a variety of manners, within the guidelines provided below. User agents are encouraged to set the ‘appearance’ CSS property appropriately to achieve platform-native appearances for widgets, and are expected to implement any relevant animations, etc., that are appropriate for the platform.

Conclusion
It is easy to confuse replaced elements and form controls, but they are different categories of HTML elements, with <input type="image"> being the only form control that is also a replaced element.

 

Best Practices for API Error Handling


As web developers, none of us wants to see error codes in an API response. When it happens to you, it can mean one of two things: there was something wrong in your request, so the API simply couldn’t parse the passed data, or the API itself has problems. In either situation, traffic comes to a sudden stop, and we start trying to discover the cause and the solution.

Although unpleasant, errors, whether in code form or as a simple error response, are incredibly useful. Error codes are probably the most useful diagnostic element in the API space.

Today, we’re going to discuss the importance and usefulness of error responses and error-handling approaches. We’ll cover some common error code classifications the average user will encounter, as well as some examples of these codes in action.

The Value of Error Codes
As explained above, error codes are surprisingly, but incredibly, useful. Error codes in the response stage of an API are essential for communicating failure from a developer to a user. This stage is a direct communication between client and API, and it is the most important step towards informing the user of a failure, as well as speeding up the error-resolution process.

Errors occur randomly and sometimes beyond our knowledge. That’s why error responses are the only constant, consistent communication we, as users, can rely on when an error has occurred. Error codes can both clarify the situation and communicate the intended functionality.

Take, for example, an error code such as “401 Unauthorized – Please Pass Token.” From that response, you understand the point of failure, namely that the user is unauthorized. You also learn the intended functionality: the API requires a token, and that token must be passed as part of the request in order to obtain authorization.

With a simple error code and resolution explanation, you have identified the cause of the error, the intended functionality and the method to fix it. That is a lot of value from the small amount of data actually returned.
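
As a quick illustration of putting that information to work on the client side, here is a hedged sketch; the URL and the shape of the error body are hypothetical.

// Hypothetical client-side handling of the 401 described above.
async function loadProfile(token) {
  const response = await fetch("https://api.example.com/profile", {
    headers: { Authorization: "Bearer " + token }
  });

  if (response.status === 401) {
    // The status identifies the failure; the body explains how to fix it,
    // e.g. { "message": "Unauthorized - Please Pass Token." }
    const body = await response.json();
    throw new Error("Authentication failed: " + body.message);
  }

  return response.json();
}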

HTTP Status Codes
Before we discuss error codes in depth and what makes a code good, we need to sort out the HTTP status code format. These codes are the most frequent status codes that the average user will encounter, not only in terms of APIs but also in terms of general internet usage. Although other protocols exist and have their own systems of codes, HTTP status codes dominate API communication, and vendor-specific codes tend to be derived from these ranges.

  • 1XX – Informational

The 1XX range has two basic functionalities. The first is the transfer of information concerning the protocol state of the connected devices: for example, 101 Switching Protocols is a status code noting that the client has requested a protocol change from the server, and that the request has been approved. The 1XX range also clarifies the state of the initial request. For example, 100 Continue notes that a server has received the request headers from a client and is waiting for the request body.

  • 2XX – Success

The 2XX range notes a range of successes in communication and bundles several responses into specific codes. The first three status codes illustrate this range well: 200 OK means that a GET or POST request was successful, 201 Created indicates that a request has been completed and a new resource has been created for the client, and 202 Accepted shows that the request has been accepted and that processing has begun.

  • 3XX – Redirection

The 3XX range is all about the status of the resource or endpoint. When this kind of status code is sent, it confirms that the server is still accepting communication, but that the contacted point is not the correct point of entry into the system. 301 Moved Permanently denotes that the client request did reach the correct system, but that this request and all future requests should be handled by a different URI. This is very convenient both for subdomains and for moving a resource from one server to another.

  • 4XX – Client Error

The 4XX series of error codes is probably the most well-known, thanks to the famous 404 Not Found status, which is a prominent marker for malformed URLs and URIs. However, this range also contains other status codes that are more useful for APIs.

414 URI Too Long is a common status code, indicating that the data pushed through in a GET request is too long and should be converted to a POST request. Another common code is 429 Too Many Requests, which is used for rate limiting to note that a client is attempting so many simultaneous requests that its traffic is being rejected.
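
As a brief sketch of how a client might react to 429, the following assumes the API sends a Retry-After header, which is common but not guaranteed.

// Hypothetical handling of 429: wait before retrying the request.
async function fetchWithBackoff(url) {
  const response = await fetch(url);

  if (response.status === 429) {
    // Many APIs send a Retry-After header (in seconds) with 429 responses;
    // if it is absent, fall back to a fixed five-second delay.
    const retryAfter = Number(response.headers.get("Retry-After")) || 5;
    await new Promise((resolve) => setTimeout(resolve, retryAfter * 1000));
    return fetch(url);
  }

  return response;
}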

  • 5XX – Server Error

The 5XX range is reserved for error codes specifically related to server functionality. Whereas the 4XX range is the client’s responsibility (meaning a client failure), the 5XX range explicitly notes failures on the server. Error codes such as 502 Bad Gateway, which notes that an upstream server has failed and that the current server is acting as a gateway, further expose server functionality as a way of showing where the failure is occurring.

Making a Good Error Code
With a solid understanding of HTTP status codes, we can start analyzing what actually makes for a good error code and what makes for a bad one. Quality error codes convey not only what went wrong, but also why it went wrong.

Excessively obscure error codes are very inconvenient. Imagine that you are trying to make a GET request to an API that manages digital music inventory. You’ve submitted your request to an API that usually accepts your traffic, you’ve passed the correct authorization and authentication credentials, and you think the server is ready to respond.

You send your data and receive this error code: 400 Bad Request. Without additional data and further information, what does this mean? It’s in the 4XX range, so you know the problem was on the client side, but it explains and solves nothing beyond “bad request.”

This shows how a nominally “helpful” error code can fall short of being truly helpful; we could easily make that same response helpful and clear with little extra effort. Good error codes must meet three essential criteria in order to be functional (see the sketch after this list):

  • An HTTP status code, so that the source and realm of the problem can be identified with ease;
  • An internal reference ID for documentation-specific notation of errors. In some cases, this can replace the HTTP status code, as long as the internal reference sheet includes the HTTP status code scheme or similar reference material;
  • Human-readable messages that summarize the context, cause, and solution for the existing error.
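
Put together, an error response that satisfies all three criteria might look like the following sketch; the field names and the internal reference ID are illustrative, not part of any standard.

{
  "status": 400,
  "error": "Bad Request",
  "internalReference": "ERR-1042",
  "message": "The 'releaseDate' field must be an ISO 8601 date, for example 2017-03-01. Correct the field and resend the request."
}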

From this discussion, we can conclude that error codes are most useful when they convey what went wrong and why, topped off with human-readable messages that emphasize the solution we can apply. That way, we can handle future errors efficiently whenever they occur.