Json-api-normalizer: Why JSON API and Redux Work Best When Used Together

As web developers, we have to manage the data needed for every application we work on. Doing so typically involves three steps:

  1. Fetch data from the back end.
  2. Store it somewhere locally in the front-end application.
  3. Retrieve the data from the local store and format it as needed by the specific view or screen.

In this article, we will discuss consuming data from JSON, JSON API, and GraphQL back ends, and draw practical lessons about managing front-end application data. As a running example, imagine we have carried out a survey that asks the same questions of many users. After each user has answered, other users can comment on those answers if they wish. Our web app will request the data from the back end, store it in the local store, and render the content on the page. To keep things simple, we will leave out the answer-creation flow.

Redux Best Practices

What makes Redux great is that it is agnostic about the kind of API you consume. You can switch your API from plain JSON to JSON API or even GraphQL and back during development, and as long as your data model stays the same, your state management implementation is unaffected. Here are the main best practices for using Redux:

  1. Keep Data Flat in the Redux Store

First, here's the data model:

[Figure: the data model - a question has many posts, each post may have many comments, and every post and comment has one author.]

As the diagram shows, we have a question data object that may have many post objects, and each post may have many comment objects. Each post and comment has exactly one author.

Let's say we have a back end that returns a deeply nested JSON response. If you store the data in the same nested form, you will run into problems later. For instance, you might store the same object many times, like this:

{
  "text": "My Post",
  "author": {
    "name": "Yury",
    "avatar": "avatar1.png"
  },
  "comments": [
    {
      "text": "Awesome Comment",
      "author": {
        "name": "Yury",
        "avatar": "avatar1.png"
      }
    }
  ]
}

In the example above, the same author object is stored in several places. This is bad not only because it consumes more memory, but also because it has negative side effects: if somebody changed the user's avatar on the back end, you would have to traverse the whole state and update every instance of that object.

To prevent something like that from happening, we can store the data in a flattened structure. This way, each object would be stored only once and would be easily accessible.

{
  "post": [{
    "id": 1,
    "text": "My Post",
    "author": { "id": 1 },
    "comments": [ { "id": 1 } ]
  }],
  "comment": [{
    "id": 1,
    "text": "Awesome Comment"
  }],
  "author": [{
    "name": "Yury",
    "avatar": "avatar1.png",
    "id": 1
  }]
}
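One practical benefit of this flat shape: updating a shared author means changing a single object. Here is a minimal reducer sketch, not from the original article; the UPDATE_AUTHOR action type and its payload shape are hypothetical names for illustration:

// A minimal sketch: with a flat store, updating a shared author
// touches exactly one object instead of every nested copy.
// UPDATE_AUTHOR and its payload shape are hypothetical.
function authorReducer(state = [], action) {
  switch (action.type) {
    case 'UPDATE_AUTHOR':
      return state.map(author =>
        author.id === action.payload.id
          ? { ...author, ...action.payload.attributes } // e.g. { avatar: 'avatar2.png' }
          : author
      );
    default:
      return state;
  }
}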

  2. Store Collections as Maps Whenever Possible

Once the data is in a nice flat structure, we can gradually accumulate what we receive, so that it can be reused as a cache to improve performance or for offline use. However, if we merge new data into existing storage, we need a way to select only the data objects relevant to a specific view. To achieve this, we can store the structure of each JSON document separately, recording which data objects were delivered by each request. That gives us a list of data object IDs, which we can use to fetch the corresponding objects from storage.

Let's say we need to fetch the friends of two different users, Alice and Bob. We will perform two requests and review the contents of our storage after each one. Suppose the storage is empty to begin with.

/ALICE/FRIENDS RESPONSE

The first request returns a User data object with an ID of 1 and the name Mike:

{
  "data": [{
    "type": "User",
    "id": "1",
    "attributes": {
      "name": "Mike"
    }
  }]
}

/BOB/FRIENDS RESPONSE

The second request returns a User with an ID of 2 and the name Kevin:

{
  "data": [{
    "type": "User",
    "id": "2",
    "attributes": {
      "name": "Kevin"
    }
  }]
}

STORAGE STATE

This is what our storage state would look like:

{
  "users": [
    {
      "id": "1",
      "name": "Mike"
    },
    {
      "id": "2",
      "name": "Kevin"
    }
  ]
}
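A minimal sketch of how this accumulation might look in a reducer; the FRIENDS_FETCHED action type and its payload shape are hypothetical names for illustration:

// Hypothetical sketch of accumulating responses in the store:
// users from a new response replace any stale copies we already hold.
function usersReducer(state = [], action) {
  switch (action.type) {
    case 'FRIENDS_FETCHED': { // hypothetical action type
      const incoming = action.payload.users;
      const incomingIds = new Set(incoming.map(user => user.id));
      return [
        ...state.filter(user => !incomingIds.has(user.id)),
        ...incoming,
      ];
    }
    default:
      return state;
  }
}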

STORAGE STATE WITH META DATA

To distinguish which data objects in storage are relevant to which request, we have to keep the structure of each JSON API document as well. With that in mind, we can change the storage to this:

{
  "users": [
    {
      "id": "1",
      "name": "Mike"
    },
    {
      "id": "2",
      "name": "Kevin"
    }
  ],
  "meta": {
    "/alice/friends": [
      {
        "type": "User",
        "id": "1"
      }
    ],
    "/bob/friends": [
      {
        "type": "User",
        "id": "2"
      }
    ]
  }
}

With this, we can read the metadata and gather all of the data objects it mentions. Here is a recap of the complexity of the relevant operations:

[Table: operation complexities - finding, adding, or deleting an element by ID is O(n) for an array, but O(1) for a map.]

Maps clearly work better than arrays here, since every operation has a complexity of O(1) instead of O(n). If we use a map instead of an array for the User data objects, the storage looks like this:

STORAGE STATE REVISED

{
  "users": {
    "1": {
      "name": "Mike"
    },
    "2": {
      "name": "Kevin"
    }
  },
  "meta": {
    "/alice/friends": [
      {
        "type": "User",
        "id": "1"
      }
    ],
    "/bob/friends": [
      {
        "type": "User",
        "id": "2"
      }
    ]
  }
}

Now with this simple method, we can find a specific user by ID almost instantly.
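Reading from this shape is straightforward. A minimal selector sketch, with function and state names invented here for illustration:

// Hypothetical selectors over the map + meta shape shown above.
const getUser = (state, id) => state.users[id]; // O(1) lookup by ID

// Resolve the object references recorded for an endpoint into full users.
const getUsersForEndpoint = (state, endpoint) =>
  (state.meta[endpoint] || []).map(ref => getUser(state, ref.id));

// Example: getUsersForEndpoint(storage, '/alice/friends') -> [{ name: 'Mike' }]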

Processing the Data and JSON API

There are many solutions for converting JSON documents to a Redux-friendly form. Normalizing a JSON document against a predefined schema works great if your data model is known in advance and doesn't change significantly during the application's lifecycle, but it fails if things are too dynamic.

Using GraphQL might be possible and interesting as well; however, if our APIs are consumed by many third parties, we can't simply switch to it.

JSON API and Redux

Redux and the JSON API work very well together. The JSON API provides data in a flat structure by definition, which conforms nicely with Redux best practices. The data is also typed, so it can be naturally stored in Redux in a map with the format type → map of objects.

There are things to consider, though. Storing the JSON API document's "data" and "included" objects as two separate entities in the Redux store would violate Redux best practices, because the same data objects could end up stored more than once.

To solve these problems, we can use json-api-normalizer, whose main features are:

  • It merges the data and included fields, normalizing the data.
  • Collections are converted into maps in the form id => object.
  • The response's original structure is stored in a special meta object.
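For example, here is a minimal sketch of normalizing the /alice/friends response from earlier. The endpoint option and the exact output shape shown in the comment should be double-checked against the library's documentation:

import normalize from 'json-api-normalizer';

const json = {
  data: [{
    type: 'User',
    id: '1',
    attributes: { name: 'Mike' }
  }]
};

// Normalize the JSON API document; the endpoint option tells the library
// under which key to record this response's structure in meta.
const normalized = normalize(json, { endpoint: '/alice/friends' });

// Roughly: { user: { '1': { id: '1', type: 'User', attributes: { name: 'Mike' } } },
//           meta: { '/alice/friends': { data: [{ id: '1', type: 'User' }] } } }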

The JSON API specification distinguishes between data and included data objects precisely to avoid redundant structures and circular dependencies, and json-api-normalizer merges them back together for storage. And because data in Redux is updated gradually rather than replaced wholesale, this approach also helps with performance.

Now that you know why JSON API and Redux work so well together, you can see that this approach helps us prototype faster and stay flexible about changes to the data model. If you were in doubt about using Redux with the JSON API, hopefully this article has given you good reasons not to be.

What is Minification?


Web browsers don't care about the readability of code when generating a page or running a script. Minification strips a code file of all data that isn't required for the file to be executed. Unlike files processed with traditional compression techniques, minified files don't need to be decompressed before they can be read, modified, or executed. If you are a web developer or web designer, it's worth knowing more about minification.

Minification is performed after the code for a web application is written and before the application is deployed. Since the minified version is sent instead of the full version, it results in faster response times and lower bandwidth costs. Nowadays, minification is widely used, on websites ranging from small personal blogs to multi-million-user services.

Minification Techniques and Tools

Minify is one of the more comprehensive minification tools. It handles minification, caching, and compression of CSS, HTML, and JavaScript files, and it offers integration with popular web applications and frameworks, including WordPress and the Zend Framework.

HTML minification

If you are looking for website performance improvements, you can use Google's PageSpeed Insights Chrome extension. PageSpeed Insights provides a "Minify HTML" rule, which generates a minified version of the open website.

CSS minification

Since CSS is less likely to change frequently, several online tools provide instant CSS minification. For a quick and simple option, Refresh-SF uses multiple tools to minify CSS, HTML, and JavaScript.

JavaScript minification

To create a more efficient copy of any JavaScript file, you can use Google's JavaScript optimization tool, the Closure Compiler. A developer typically runs the Closure Compiler over a JavaScript file to produce a minified version, then deploys that new file to the web server, where users' browsers can access it.
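As an illustration (this is not actual Closure Compiler output, which varies by tool and settings), minifying a small JavaScript function might look like this:

// Before minification: readable names, comments, and whitespace.
function addNumbers(firstNumber, secondNumber) {
  // Add the two numbers and return the result.
  var result = firstNumber + secondNumber;
  return result;
}

// After minification: comments and whitespace removed,
// local identifiers shortened (exact output varies by tool).
function addNumbers(n, r) { return n + r }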

Example of Minification

The following code block shows an example of plain HTML and CSS:

<html>
  <head>
    <style>
      #myContent { font-family: Arial }
      #myContent { font-size: 90% }
    </style>
  </head>
  <body>
    <!-- start myContent -->
    <div id="myContent">
      <p>Hello world!</p>
    </div>
    <!-- end myContent -->
  </body>
</html>

Here is the same code after minification:

<html><head><style>#myContent{font-family:Arial;font-size:90%}</style></head><body><div id="myContent"><p>Hello world!</p></div></body></html>

Benefits of Minification

  1. Faster load times, as less unnecessary data needs to be downloaded. Users get an identical experience without the additional overhead.
  2. Lower bandwidth costs, as less data is transmitted over the network. Developers no longer send extra content that users don't care about.
  3. Lower resource usage, since less data needs to be processed for each request. The minified content needs to be generated only once, but can serve an unlimited number of requests.

Setting Up Vim for Modern JavaScript Development


You can do all of your JavaScript editing in Vim. Vim is a text editor: if you're used to Notepad (Windows), Sublime Text (Windows/Mac), Atom (Windows/Mac), Nano (Linux), or any other text editor, Vim is just another program for writing and editing text, but one that can make your JavaScript work much more pleasant. In this tutorial, we will show you how to set up Vim for modern JavaScript development.

JavaScript Syntax

The main plugin used here is pangloss/vim-javascript. There are many options for JavaScript highlighting in Vim, but this one is the most reliable. The plugin also supports Flow and its types, which you can enable in your vimrc:

let g:javascript_plugin_flow = 1

In addition, I use mxw/vim-jsx to add JSX syntax support to my JavaScript files. By default, this plugin requires JSX files to have a .jsx extension, but I usually stick with .js. If you do the same, the config below is for you:

let g:jsx_ext_required = 0

Finding Files and Navigating

Of the many options for fuzzy finding, I use FZF and the corresponding FZF.vim plugin. They let me quickly navigate through projects to find the files I'm after. Not JavaScript-specific, but definitely worth a mention.
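A minimal setup sketch, assuming you use the vim-plug plugin manager; the key mapping is just a suggestion:

" Install FZF and its Vim integration (vim-plug syntax).
Plug 'junegunn/fzf', { 'do': { -> fzf#install() } }
Plug 'junegunn/fzf.vim'

" Suggested mapping: open the fuzzy file finder with Ctrl-P.
nnoremap <C-p> :Files<CR>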

ESLint + Flow Integration

ESLint and Flow are two command-line tools I use all the time. They check my code to ensure that everything is typed correctly (Flow) and written and formatted correctly (ESLint).

Until recently, integrating these with Vim was far from ideal, for two reasons:

  1. If you install command-line tools locally instead of globally, npm puts the executable in ./node_modules/.bin. This can break editor integrations, because they expect the executable to be available on your path, so they try to run eslint rather than ./node_modules/.bin/eslint, for example.
  2. Vim didn't have support for async, background jobs until the release of Vim 8. This meant that when you saved your file and ESLint ran, the editor would be unresponsive for a second or two until ESLint returned. That's only a small amount of lag, but it's really noticeable.

With Vim 8, both of those problems have been solved. Ale is a linting plugin that comes out of the box with support for a variety of linting tools across different filetypes and languages; most importantly for me, that includes ESLint and Flow. Moreover, it automatically detects which linters to run, so you barely need to configure it at all.
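That said, if you want to pin exactly which linters run for JavaScript, Ale exposes an option for that. A minimal sketch; double-check the option names against Ale's help:

" Restrict Ale to ESLint and Flow for JavaScript files.
let g:ale_linters = {
\   'javascript': ['eslint', 'flow'],
\}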