
Knowing What’s Missing in WordPress Functionality


WordPress has become one of the most well-known website platforms in the world. Many people love it for its flexibility, security, and the range of plugins that can be installed to provide additional functionality. However, for all the benefits it offers, WordPress still lacks a number of things. If you are a web developer, the information below can be useful: here are a few frequently requested pieces of missing functionality in WordPress, along with some workarounds for the meantime.

Ability to Duplicate Posts

WordPress has no built-in way to duplicate a post, so recreating all of a post's settings from scratch to get the desired output can be unnecessarily time consuming. For now, this functionality is limited to the use of a plugin: Duplicate Post.

With a duplicate post plugin, you can "clone" a post or "create a new draft". The latter copies the post and opens the copy for editing, while the former simply creates a duplicate post without opening it.

The plugin also has settings that control what gets copied, such as:

  • Original date,
  • Original status (saved to draft, published, pending),
  • Original excerpt and attachments,
  • Children of the original page,
  • Taxonomies/custom fields.

You are also able to work with custom post types in this plugin. Unfortunately, not every plugin is compatible with it, and these incompatibilities aren't necessarily called out on the front end. In the worst case, a conflict with this plugin can crash your website, so it is important to keep a backup.

Bundle Settings and Plugins for New Installs

If you are planning to create multiple sites in WordPress, it would be particularly helpful to have functionality that combines all the desired features into a file that could be uploaded to the site you're building. If you work with clients in a similar industry, you will see that many WordPress websites share the same base features. However, installing and activating the launch-list plugins one by one is tedious. As a workaround, you can install the WordPress Install Profiles plugin. Once it is installed and activated, go to Plugins > Bulk Install Profiles.

The plugin comes with a default list of plugins, and you can easily add entries to or remove them from that list. To add a new plugin, use the name from the plugin's URL; then give the profile a name and download it to your computer.

To reuse the profile on another website, you'll need the WordPress Install Profiles plugin installed and activated on that site as well; then import the profile you want to install. However, since this plugin hasn't been updated in many years, there may be compatibility or security issues associated with its use.

Site Caching

Site caching helps your site load faster by storing the output of the website's processes in an HTML file that can be served as needed, avoiding repeated server calls. Page load speed is also a major ranking factor in technical SEO, which is why developers wish that WordPress had site caching built in.
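
As a rough, generic illustration of the idea (not how WordPress or any particular caching plugin actually implements it), a page cache simply stores rendered HTML and serves the stored copy on subsequent requests:

import { createServer } from "node:http";

// In-memory cache of rendered pages, keyed by URL.
const pageCache = new Map<string, string>();

// Stand-in for the expensive work (database queries, templating, etc.).
function renderPage(url: string): string {
  return `<html><body><h1>Rendered ${url} at ${new Date().toISOString()}</h1></body></html>`;
}

createServer((req, res) => {
  const url = req.url ?? "/";
  let html = pageCache.get(url);
  if (html === undefined) {
    html = renderPage(url);   // do the expensive work only on a cache miss
    pageCache.set(url, html);
  }
  res.writeHead(200, { "Content-Type": "text/html" });
  res.end(html);
}).listen(8080);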

Even though site caching isn't built into the platform, there are many plugins that can provide it, such as W3 Total Cache and WP Super Cache. Some WordPress hosting companies also offer site caching, and many find that caching handled by the web host is more efficient than these plugins, so if that's an option, don't install a caching plugin.

Built-In Form Builder

You can find a lot of form builder plugins, but since most businesses use forms anyway, why not add this functionality to the WordPress core code? Rather than waiting for this missing functionality to arrive in WordPress, try an all-purpose contact form plugin like Contact Form 7. With Contact Form 7, you can manage multiple contact forms, and you can easily customize form and email content with simple markup. Contact Form 7 also supports Ajax-powered submitting, CAPTCHA, Akismet spam filtering, and other important security features. It is simple to set up, flexible, and offers customizable default messages and easily defined mail messages.

Improved Theming System

In terms of the theming system, WordPress still needs a lot of improvement. There is still "sloppy code" and a "disastrous mix of business and display logic". In the current version, the template hierarchy does not take plugins into account. For example, if you have a plugin that registers a custom post type for movies, the plugin has to override the template system or create a workaround to provide a default template for displaying that post type.

The more complex the code base becomes, the greater the opportunity to improve coding practices, eliminate shortcodes, and fix the template hierarchy for a more efficient base theme.

Custom User Permissions

In general, WordPress serves its users with 5 roles:

  • Administrator
  • Editor
  • Author
  • Contributor
  • Subscriber

Each of these roles has its own specific limits on what it can manage. Many developers have suggested that WordPress allow site owners to set, specify, or limit what each individual user can do, which would be especially useful on a multi-author or multi-user site.

To work around this missing functionality, you can use the Advanced Access Manager plugin, which manages both frontend and backend access.

File Browsing Interface

Even with the many plugins available, you have to be careful in choosing the right ones for your website, since certain plugins may slow it down or break it. When an error code is shown, you can identify the culprit, but when it isn't, you have to manually deactivate all plugins and then reactivate them one by one in the admin area to determine what is causing the error. And if the error locks you out of the admin area entirely, you have to fall back on an FTP client such as FileZilla to back up and manage the plugin files.

With a built-in file browsing interface, developers could access files directly from WordPress and quickly fix such issues without needing cPanel or FTP access.

Even on its own, WordPress remains a powerful platform. It is no wonder that various plugins have been developed to cover the functionality missing in WordPress, even though many developers are still hoping that these issues will eventually get built-in solutions.

Ways to Make your Apps Serverless


The rise of a new buzzword has made many people think that servers no longer exist, but a server is in fact still needed somewhere; that is why the term "serverless" can be misleading. What "serverless" really means is that you can build your applications without deploying code to your own servers. For a web developer, this makes the dream of spending less time worrying about servers and more time building software come true.

Serverless in Action

When your site serves many readers a month, traffic at that scale can be significant and sudden, as articles can go viral at any moment. As a result, you may have trouble keeping up, and your engineers may end up spending too much time on operations. As a solution, you can take a look at serverless platforms, which can make your projects more maintainable, easier to operate, and cheaper to run.

Amazon Web Services

Serverless has a close relationship with Amazon Web Services (AWS). In fact, AWS answers one critical question: where does the custom code go? The concept of using third-party services and platforms is not new; databases, push notifications, caching, and many other layers of an application have all been available "as a service" for a while, but they sat at the edge of your application. A server was still needed as the home for the core application code, typically responding to external requests. Through AWS Lambda and AWS API Gateway, you can deploy custom application code without the overhead of managing your own servers.

AWS Lambda

Lambda is Amazon's version of functions as a service (FaaS), and using it is quite simple: you only need to write code and upload it. AWS then runs the code in response to events such as HTTP requests, S3 uploads, DynamoDB updates, Kinesis streams, and many others. Scaling happens automatically, and you are only charged while your functions are running.
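
As a minimal sketch, assuming the Node.js runtime and API Gateway's Lambda proxy integration (the parameter names below are illustrative), a function that answers an HTTP request can be as small as this:

// handler.ts - a minimal Lambda handler for an API Gateway proxy event.
export const handler = async (event: {
  path: string;
  queryStringParameters?: Record<string, string> | null;
}) => {
  // Read an optional "name" query parameter; everything here is illustrative.
  const name = event.queryStringParameters?.name ?? "world";

  // API Gateway's Lambda proxy integration expects a response of this shape.
  return {
    statusCode: 200,
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ message: `Hello, ${name}!`, path: event.path }),
  };
};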

None of these features is strictly a requirement for serverless, but AWS has certainly set the bar high. Because of the precedent it has set, any serverless platform is likely to offer stateless FaaS with very granular billing.

Other Platforms

Right now, Amazon may still be the leading player in the arena, but other providers are catching up quickly. All the major cloud platforms have recently launched services targeted at serverless applications. Here are a few of them:

  • Google Cloud Functions: Still in alpha, it offers almost the same functionality as AWS Lambda and can also be triggered by HTTP requests.
  • Azure Functions: This platform is still relatively new and similar to Lambda. Azure also has a pleasant UI and makes it easy to expose functions over HTTP without needing a separate routing service.
  • IBM OpenWhisk: This is the only open-source platform of the three. If you are interested in deploying your own serverless platform, or just curious about how these platforms work under the hood, OpenWhisk is worth investigating.

Challenges

If you think serverless is the solution to every problem, you might be wrong: serverless does not come without its challenges. The space is new, and the community is still discovering best practices, especially when it comes to operations. The platforms still require tools for deploying, maintaining, and monitoring applications. However, many believe that new startups and third-party services will emerge to solve these problems for serverless developers.

Tools

Thanks to a large open-source community, it is possible to manually build and deploy serverless applications yourself, but we suggest that you use an existing framework: once you have more than a few endpoints, building, packaging, zipping, uploading, and versioning all become difficult to manage. Here are some frameworks you might want to consider:

  • Serverless framework: This framework has a robust plugin system and integrates with many community-developed plugins. Its stated goal is to eventually support deployment to any of the major cloud platforms.
  • Apex: Even though it is written in Go, it supports the Python, Node.js, and Java runtimes. Furthermore, the creator of this tool, TJ Holowaychuk, is a well-known fixture in the open-source community and has a great sense of what makes for good developer tools.
  • Chalice: It is the only framework created and maintained by AWS, and it currently supports Python.
  • Shep: Bustle's own open-source framework, which it uses for all of its production services, can be a great choice. It focuses on the Node.js runtime and strives to be opinionated about how you should structure, build, and deploy applications.

It seems that in 2017, "serverless" technology will keep growing, and you will see rapid adoption from startups to Fortune 500 companies, as many developers have come to believe that the serverless movement is the best way to build better software.

Json-api-normalizer: Why JSON API and Redux Work Best When Used Together

As web developers, we have to manage the data needed for every application we work on. Doing so involves several steps:

  1. Fetch data from the back end.
  2. Store it somewhere locally in the front-end application.
  3. Retrieve the data from the local store and format it as needed by the specific view or screen.

In this article, we are going to discuss consuming data from JSON, JSON API, and GraphQL back ends, and from that, learn a practical way to manage front-end application data. As a real-world example, let's imagine that we have carried out a survey that asks the same questions of many users. After each user has given their answers, other users can comment on them if they want to. Our web app will perform a request to the back end, store the gathered data in the local store, and render the content on the page. To keep things simple, we will leave out the answer-creation flow.

Redux Best Practices

What makes Redux great is that it is agnostic about the kind of API you consume. It doesn't matter whether you change your API from JSON to JSON API or even GraphQL and back during development; as long as you keep your data model the same, it will not affect the implementation of your state management. Below are the best practices for using Redux:

  1. Keep Data Flat in the Redux Store

First, here's the data model: we have a question data object that might have many post objects, each post might have many comment objects, and each post and comment has exactly one author.
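
Sketched as TypeScript interfaces (the names and fields below are illustrative, inferred from the JSON examples that follow), the model could look like this:

interface Author {
  id: number;
  name: string;
  avatar: string;
}

interface Comment {
  id: number;
  text: string;
  author: Author;
}

interface Post {
  id: number;
  text: string;
  author: Author;
  comments: Comment[];
}

interface Question {
  id: number;
  text: string;
  posts: Post[];
}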

Let's say we have a back end that returns a JSON response with a deeply nested structure. If you store the data in the Redux store in that same shape, you will run into problems later; for instance, you might store the same object many times, like this:

{
  "text": "My Post",
  "author": {
    "name": "Yury",
    "avatar": "avatar1.png"
  },
  "comments": [
    {
      "text": "Awesome Comment",
      "author": {
        "name": "Yury",
        "avatar": "avatar1.png"
      }
    }
  ]
}

The example above shows that we store the same author object in several places. This is bad not only because it uses more memory, but also because it has negative side effects: if somebody changed the user's avatar in the back end, you would have to traverse the whole state and update every instance of the same object.

To prevent something like that from happening, we can store the data in a flattened structure. This way, each object would be stored only once and would be easily accessible.

{
  "post": [{
    "id": 1,
    "text": "My Post",
    "author": { "id": 1 },
    "comments": [ { "id": 1 } ]
  }],
  "comment": [{
    "id": 1,
    "text": "Awesome Comment"
  }],
  "author": [{
    "name": "Yury",
    "avatar": "avatar1.png",
    "id": 1
  }]
}

  2. Store Collections as Maps Whenever Possible

Once we have the data in a good flat structure, we can gradually accumulate the received data and reuse it as a cache, to improve performance or for offline use. However, if we merge new data into existing storage, we need a way to select only the data objects relevant to a specific view. To achieve this, we can store the structure of each JSON document separately, so we know which data objects were provided by a specific request. This gives us a list of data object IDs that we can use to pull the data from storage.

Let's say there is a list of friends of two different users, Alice and Bob. We will perform two requests to fetch their friend lists and then review the contents of our storage. Let's suppose that the storage is empty to begin with.

/ALICE/FRIENDS RESPONSE

The first request returns a User data object with an ID of 1 and the name Mike:

{
  "data": [{
    "type": "User",
    "id": "1",
    "attributes": {
      "name": "Mike"
    }
  }]
}

/BOB/FRIENDS RESPONSE

This is another request that would return a User with the ID of 2 and Kevin as the name:

{
  "data": [{
    "type": "User",
    "id": "2",
    "attributes": {
      "name": "Kevin"
    }
  }]
}

STORAGE STATE

This is what our storage state would look like:

{
  "users": [
    {
      "id": "1",
      "name": "Mike"
    },
    {
      "id": "2",
      "name": "Kevin"
    }
  ]
}

STORAGE STATE WITH META DATA

To distinguish which data objects in storage are relevant to which request, we also have to keep the structure of each JSON API document. With that in mind, we can change the storage to this:

{
  "users": [
    {
      "id": "1",
      "name": "Mike"
    },
    {
      "id": "2",
      "name": "Kevin"
    }
  ],
  "meta": {
    "/alice/friends": [
      {
        "type": "User",
        "id": "1"
      }
    ],
    "/bob/friends": [
      {
        "type": "User",
        "id": "2"
      }
    ]
  }
}

With this, we can now read the metadata and gather all of the data objects it mentions. In terms of complexity, maps work better than arrays here: looking up, adding, or deleting an element by ID is O(1) with a map, versus O(n) for a lookup or deletion in an array. If we use a map instead of an array for the User data objects, it would look like this:

STORAGE STATE REVISED

{
  "users": {
    "1": {
      "name": "Mike"
    },
    "2": {
      "name": "Kevin"
    }
  },
  "meta": {
    "/alice/friends": [
      {
        "type": "User",
        "id": "1"
      }
    ],
    "/bob/friends": [
      {
        "type": "User",
        "id": "2"
      }
    ]
  }
}

Now with this simple method, we can find a specific user by ID almost instantly.
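
For illustration, here is a hypothetical pair of read helpers (not part of any library) written in TypeScript: a user is fetched by ID with a direct key access into the map, and the objects returned by a given endpoint are gathered from its metadata:

type User = { name: string };

interface Storage {
  users: Record<string, User>;                          // map: id -> user
  meta: Record<string, { type: string; id: string }[]>; // endpoint -> object refs
}

// O(1): read a user straight out of the map by ID.
const getUser = (storage: Storage, id: string): User | undefined =>
  storage.users[id];

// Gather all User objects referenced by a given endpoint's metadata.
const getUsersForEndpoint = (storage: Storage, endpoint: string): User[] =>
  (storage.meta[endpoint] ?? [])
    .filter((ref) => ref.type === "User")
    .map((ref) => storage.users[ref.id])
    .filter((user): user is User => user !== undefined);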

Processing the Data and JSON API

There are many solutions for converting JSON documents to a Redux-friendly form. Normalizing a document against a schema that you define in advance works great if your data model is known up front and does not change significantly over the application's lifecycle, but it fails if things are too dynamic.

Using GraphQL might be possible and interesting as well; however, if our APIs are being consumed by many third parties, we can't simply switch to it.

JSON API and Redux

Redux and the JSON API work well together. The data provided by a JSON API comes in a flat structure by definition, which conforms nicely with Redux best practices. The data is also typified, so it can naturally be saved in Redux's storage in a map of the form type → map of objects.

There are things to consider, though. It should be noted that storing the "data" and "included" data objects as two separate entities in the Redux store can violate Redux best practices, as the same data objects could end up being stored more than once.

To solve these problems, we can use the main features of json-api-normalizer, such as:

  • Data and included fields are merged, and the data is normalized.
  • Collections are converted into maps of the form id → object.
  • The response's original structure is stored in a special meta object.
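
A rough sketch of how this fits together is shown below. The import and the endpoint option reflect json-api-normalizer's documented usage as best we recall it, and the action type and dispatch signature are purely illustrative; check the library's README for the exact API:

import normalize from "json-api-normalizer";

// Hypothetical action type and dispatch signature, for illustration only.
async function loadAliceFriends(
  dispatch: (action: { type: string; payload: unknown }) => void
) {
  const response = await fetch("/alice/friends");
  const json = await response.json();

  // Merges "data" and "included", converts collections to id -> object maps,
  // and (with the endpoint option) records which objects this request returned.
  const normalized = normalize(json, { endpoint: "/alice/friends" });

  dispatch({ type: "FRIENDS_LOADED", payload: normalized });
}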

There are two reasons for these features. First, the distinction between data and included data objects was introduced in the JSON API specification to solve problems with redundant structures and circular dependencies. Second, data in Redux is updated constantly but gradually, which helps improve performance.

Now that you know why the JSON API works well with Redux, it can be concluded that this approach helps us prototype a lot faster and stay flexible with changes to the data model. If you are in doubt about whether to use Redux with the JSON API, this article should give you reasons not to doubt the method.