Deal with JS - https://dealwithjs.io/

ES6 Features - 10 Use Cases for Proxy
https://dealwithjs.io/es6-features-10-use-cases-for-proxy/
Mon, 06 Mar 2017


Today we look into the possible use cases of Proxy, one of the features of ES6.

TL; DR

Source code can be found on GitHub, in the jsProxy repo.


ES6 Features

Use cases for Proxy


Table of Contents

  1. Basic example of using a Proxy
  2. Default values
  3. Hiding private properties
  4. A better enum
  5. OnChange event for objects and arrays
  6. Cache with property specific TTL
  7. Using the "in" operator like "includes"
  8. Singleton pattern
  9. Validation and revocable access
  10. Cookie object
  11. Python-like array slicing
  12. Proxy as a proxy handler
  13. Performance measurements

Preface

By now, you have probably heard about, or even used, ES6 Proxy. There is a lot of information around about what Proxy is and how to use it, but I found surprisingly little on what to use it for. So I tried to find some real(ish) use cases.

Proxy is supported in all modern browsers. (Hint: Internet Explorer is not a modern browser.)

This post is pretty code heavy, so I didn't put the full sources here as usual, but you will find a link to the relevant source at the beginning of every section. You can also open index.html to browse and try out all of them.

Some examples throw errors. I know that every developer says the same, but don't worry, this is intentional: the lines where I expect an error are wrapped in try-catch blocks and marked with comments too.

Before we dive in, here's a little refresher on the syntax and usage.

Basic example of using a Proxy ^

Source code: 00-basic-example.js

Proxy is a way to wrap an object to intercept its basic operations, like getting a property value. We can provide a handler object with traps for the operations we want to intercept. The operations we don't define a trap for will be forwarded to the original object.

When we ask for proxy.a in our example, our handler.get trap is called. It can do whatever it wants before/after forwarding the call to the original object. Of course, it doesn't have to forward the call; it can do something else entirely.
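The repo has the full example; the core idea fits in a few lines (a minimal sketch, with object and property names of my choosing):

let target = { a: 1, b: 2 };

let handler = {
    // trap for property reads
    get: function (obj, prop) {
        console.log(`getting ${prop}`);
        return obj[prop]; // forward to the original object
    }
};

let proxy = new Proxy(target, handler);

console.log(proxy.a); // logs "getting a", then 1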

Users of the Proxy don't have direct access to the original object, which makes it a good tool for encapsulation, validation, access control, and a whole bunch of other things. Keep on reading to see some interesting examples.

Let's see what we can use this for...

Default values ^

Source code: 01-prop-defaults.js

The first real example is adding default values to an object. When we ask for a missing key, we will get the corresponding value from the defaults object we provided. Note that the in operator still sees what keys are or aren't there. We could define a trap for that too, you will see that later, but in this example, this is the correct behavior because this way client code can differentiate between real and default values.

You probably noticed the Reflect.get call. We can use the Reflect module to call the original function that would have run if we hadn't proxied the object. We could use obj[prop] too, like in the previous example, but this is cleaner, and it uses the same implementation a non-proxied object would use. This may not sound important for a simple trap like get, where we can easily replicate the original behavior, but some other traps, like ownKeys, would be more difficult and error-prone to replicate, so in my opinion it's best to get into the habit of using Reflect.
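Pieced together from the description above, such a defaults proxy could look like this sketch (withDefaults is my own helper name, not necessarily the repo's):

function withDefaults(obj, defaults) {
    return new Proxy(obj, {
        get: function (target, prop, receiver) {
            if (Reflect.has(target, prop)) {
                return Reflect.get(target, prop, receiver); // the real value wins
            }
            return defaults[prop]; // fall back to the default
        }
    });
}

let config = withDefaults({ port: 8080 }, { port: 80, host: 'localhost' });
console.log(config.port);      // 8080 (real value)
console.log(config.host);      // 'localhost' (default)
console.log('host' in config); // false - `in` still sees only the real keys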

Hiding private properties ^

Source code: 02-private-properties.js

Proxy is also good for limiting access to properties, in this case, hiding properties starting with an underscore, making them really private.

As you see, there are five different traps we need to define to achieve this, but all of them follow the same logic. We ask the provided filterFunc whether we should forward the call or pretend that the property isn't there.

By the way, you can see the advantages of using Reflect better now. It would have required a lot more code to implement the proper forwarding in all these traps.

One thing to note is that if we call methods on our Proxy, this will refer to the Proxy by default, and not the original object, so the method won't be able to access the private property either. We can resolve this by binding the methods to the original object in the get trap.
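Condensed from the description above, a sketch of the idea might look like this (hideProperties and the trap selection are my reading of it; the repo's version is the authoritative one):

function hideProperties(obj, filterFunc) {
    return new Proxy(obj, {
        get: function (target, prop, receiver) {
            if (!filterFunc(prop)) { return undefined; }
            let value = Reflect.get(target, prop, receiver);
            // bind methods to the target so they can still reach the privates
            return typeof value === 'function' ? value.bind(target) : value;
        },
        set: function (target, prop, value) {
            // returning false throws in strict mode
            return filterFunc(prop) && Reflect.set(target, prop, value);
        },
        has: function (target, prop) {
            return filterFunc(prop) && Reflect.has(target, prop);
        },
        ownKeys: function (target) {
            return Reflect.ownKeys(target).filter(filterFunc);
        },
        getOwnPropertyDescriptor: function (target, prop) {
            return filterFunc(prop)
                ? Reflect.getOwnPropertyDescriptor(target, prop)
                : undefined;
        }
    });
}

let user = hideProperties(
    { name: 'Ann', _password: 'secret' },
    prop => typeof prop !== 'string' || !prop.startsWith('_')
);
console.log(user._password);    // undefined
console.log(Object.keys(user)); // ['name']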

A better enum ^

Source code: 03-enum-basic.js

A lot of JavaScript code uses strings, plain objects, or frozen objects as enums. As you probably know, these solutions have their type safety problems and are generally error-prone.

Proxy can offer a viable alternative. We take a plain key-value object and make it more robust by protecting it from (even inadvertent) modifications, even harder than Object.freeze would. (Object.freeze disallows modification, but outside strict mode it doesn't throw an error, which can lead to silent bugs.)

In the source, there are three sections, showing how a plain object, a frozen object, and our enum Proxy handles the same mistakes we could make in client code.
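A possible sketch of such an enum proxy (makeEnum is my name for it; the repo's version may differ in details):

function makeEnum(obj) {
    return new Proxy(obj, {
        get: function (target, prop) {
            if (!Reflect.has(target, prop)) {
                throw new ReferenceError(`Unknown enum key "${String(prop)}"`);
            }
            return Reflect.get(target, prop);
        },
        set: function () { throw new TypeError('Enums are read-only'); },
        deleteProperty: function () { throw new TypeError('Enums are read-only'); }
    });
}

let Color = makeEnum({ RED: 0, GREEN: 1, BLUE: 2 });
console.log(Color.RED); // 0
// Color.RED = 42;      // TypeError
// Color.RDE;           // ReferenceError - typos fail loudly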

We could even take it a step further and use the Proxy to augment our enum with some "simulated" methods. (These methods aren't really there, but with the Proxy, we live-bind them.) For example, we could add a method for getting back the name of the enum from its value.

Source code: 03-enum-nameof.js

We could use plain prototype inheritance to add methods in the usual way, but the method names would then show up when we get the keys of the enum, with Object.keys for example. Also, I wanted to show that we can do this.
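A sketch of one such simulated method (nameOf is used here just for illustration):

let handler = {
    get: function (target, prop) {
        if (prop === 'nameOf') {
            // the method isn't on the object - we fabricate it on the fly
            return value =>
                Object.keys(target).find(key => target[key] === value);
        }
        return Reflect.get(target, prop);
    }
};

let Color = new Proxy({ RED: 0, GREEN: 1, BLUE: 2 }, handler);
console.log(Color.nameOf(1));    // 'GREEN'
console.log(Object.keys(Color)); // ['RED', 'GREEN', 'BLUE'] - no 'nameOf'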

OnChange event for objects and arrays ^

Source code: 04-onchange-object.js

Proxy is also good for subscribing to events happening with objects. In our example, we forward everything to the original object, but after setting or deleting a property, we also call the onChange event handler.
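A sketch of that idea (observe is my helper name):

function observe(obj, onChange) {
    return new Proxy(obj, {
        set: function (target, prop, value, receiver) {
            let ok = Reflect.set(target, prop, value, receiver);
            if (ok) { onChange('set', prop, value); }
            return ok;
        },
        deleteProperty: function (target, prop) {
            let ok = Reflect.deleteProperty(target, prop);
            if (ok) { onChange('delete', prop); }
            return ok;
        }
    });
}

let point = observe({ x: 0 }, function (op, prop, value) {
    console.log(op, prop, value);
});
point.x = 10;   // logs: set x 10
delete point.x; // logs: delete x undefined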

By the same method, we could implement other events too. For example an onValidate event for validating changes before applying them.

This pattern can be useful for arrays as well. And besides item changes, it also tracks property changes, like length.

Cache with property specific TTL ^

Source code: 05-proxy-as-handler.js

With Proxy, we can control the value that we store in our object too. This is useful if we need to store some metadata with our values. In this example, we put a TTL (time to live) beside the real value, so when we set a property cache.a = 42, instead of 42 we store {ttl: 30, value: 42}. (Of course, we need to do the opposite when getting a property.) Then we decrease the ttl every second, and when it reaches zero, we remove the property.

In addition, we get the TTL from a function that can return different values for different property names, so for example, we could cache config values for a minute, but ajax responses for only ten seconds.
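Assembled from that description, a compact sketch could look like this (ttlCache and ttlFor are my names; the real example is surely more careful about cleanup, since this interval is never cleared):

function ttlCache(ttlFor) {
    let store = {};
    let proxy = new Proxy(store, {
        set: function (target, prop, value) {
            // wrap the value with its metadata on the way in
            return Reflect.set(target, prop, { ttl: ttlFor(prop), value: value });
        },
        get: function (target, prop) {
            let entry = Reflect.get(target, prop);
            return entry && entry.value; // unwrap on the way out
        }
    });
    // tick down every second and evict expired entries
    setInterval(function () {
        for (let key of Object.keys(store)) {
            if (--store[key].ttl <= 0) { delete store[key]; }
        }
    }, 1000);
    return proxy;
}

let cache = ttlCache(prop => (prop.startsWith('config') ? 60 : 10));
cache.a = 42;
console.log(cache.a); // 42 - but only for the next ~10 seconds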

Of course, this can also be achieved by storing the TTLs in a separate object, but I think this is a more elegant solution. And I wanted to show this "inline metadata" capability because it can come in handy in many situations. One thing coming to mind is an object that interfaces some database, using the metadata to store management data, like when was the data saved, or has it been modified since.

Using the "in" operator like "includes" ^

Source code: 06-array-in.js

We can also use Proxy to do some limited operator overloading. Limited, because we can't overload any operator, only in, of, delete, and new. We will overload new in the next example.

But in this one, we just hijack the in operator to use it like Array.includes, that is, to check if a value is in the array.
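A sketch of the trick; note that the in operator converts its left operand to a string, so the trap has to coerce it back:

let list = new Proxy([1, 2, 3], {
    has: function (target, value) {
        // `2 in list` arrives here as the string '2'
        let num = Number(value);
        return target.includes(Number.isNaN(num) ? value : num);
    }
});

console.log(2 in list); // true
console.log(5 in list); // false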

Ok, this probably isn't very useful in real projects, but I think it's kind of cool.

Singleton pattern ^

Source code: 07-singleton-pattern.js

We can implement the singleton pattern (and other creational patterns) with Proxy. In this example, we use the construct trap to modify the behavior of the new operator to return the singleton instance every time.
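A sketch of trapping new with construct (singletonize is my name for the wrapper):

function singletonize(Class) {
    let instance = null;
    return new Proxy(Class, {
        construct: function (target, args) {
            if (!instance) {
                instance = Reflect.construct(target, args);
            }
            return instance; // every `new` returns the same object
        }
    });
}

class Config { constructor() { this.createdAt = Date.now(); } }
let SingletonConfig = singletonize(Config);

console.log(new SingletonConfig() === new SingletonConfig()); // true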

Using singletons in JavaScript is a controversial matter. Many developers think it's an obsolete design pattern now that we use Dependency Injection. Regardless, it's a nice and easy pattern to show how we can trap object construction with Proxy. And you can use the same method for factories, builders, etc.

Validation and revocable access ^

Source code: 08-revocable-validated.js

We can use Proxy to validate properties and property values before adding them to the object. Let's imagine we are making a library that wants to get some options from the client code, so we fire an event handler and pass it an empty options object (or one pre-filled with defaults), and the client code can set the options. But we want to accept only valid options and values; otherwise, we throw an error.

We could do the validation after we get the object back, but baking the validation into the object itself has some advantages: our library code can safely assume the object is always valid, so we don't need validation and error handling code everywhere we use it.

By using a revocable proxy, we can even revoke the client code's access, so it can't modify it after the event handler returns. In more general terms, when we pass internal objects to external code for modification, we can protect it against later tampering.
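Put together from that description, a sketch with Proxy.revocable (the function names and the option list are mine):

let VALID_OPTIONS = ['timeout', 'retries'];

function collectOptions(optionsHandler) {
    let options = { timeout: 1000, retries: 3 }; // pre-filled defaults
    let { proxy, revoke } = Proxy.revocable(options, {
        set: function (target, prop, value) {
            if (!VALID_OPTIONS.includes(prop)) {
                throw new TypeError(`Unknown option "${String(prop)}"`);
            }
            return Reflect.set(target, prop, value);
        }
    });
    optionsHandler(proxy); // client code sets what it wants
    revoke();              // any later access through the proxy throws
    return options;        // the library trusts this object from now on
}

let opts = collectOptions(o => { o.retries = 5; });
console.log(opts.retries); // 5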

Cookie object ^

Source code: 09-cookie-object.js

In this example, we augment an object with cookie persistence. When we create the object, we load the cookies to properties, and from then on, we save every change to the cookies too.
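A browser-only sketch of the idea (with deliberately naive cookie parsing):

function cookieObject() {
    let obj = {};
    // load the existing cookies into plain properties
    document.cookie.split('; ').filter(Boolean).forEach(function (pair) {
        let [key, value] = pair.split('=');
        obj[key] = decodeURIComponent(value);
    });
    return new Proxy(obj, {
        set: function (target, prop, value) {
            document.cookie = `${prop}=${encodeURIComponent(value)}`;
            return Reflect.set(target, prop, value);
        },
        deleteProperty: function (target, prop) {
            document.cookie = `${prop}=; expires=Thu, 01 Jan 1970 00:00:00 GMT`;
            return Reflect.deleteProperty(target, prop);
        }
    });
}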

Instead of cookies, we could use a real database too, but those usually have async APIs (as they should), which leaves us no choice but to return promises instead of values, and that would make the client code weird. Instead of console.log(dbObject.x, dbObject.y) we would have to write Promise.all([dbObject.x, dbObject.y]).then(console.log), so our Proxy wouldn't really make anything simpler than using the original database API. But I will definitely do some more thinking in this area, because an API like this would be very cool.

Python-like array slicing ^

Source code: 10-python-slicing.js

Those of you familiar with Python probably love the list slicing syntax and, like myself, would love to have something similar in JavaScript.

For those of you who aren't familiar, Python allows you to:

  1. Take a subset of a list (array) by using the list[start:end:step] syntax. So, for example, you can get every 3rd item from index 10 to 20 by list[10:20:3]. (Note: end is exclusive, so it's really just going through items 10-19.)
  2. You can use negative indices if you need to index from the end of the list. So -1 is the last item, -2 is the second to last, etc.

Of course, you can do the same in JavaScript, but not in this concise and elegant way. But as you probably guessed by now, we can use Proxy.

JavaScript syntax doesn't allow us to put colons in the array index, but it's perfectly fine if we use a string. So we can write the previous example like list["10:20:3"], and in the get trap, we can forward the call if the index is a number, but do the slicing and return the results if the index is a string.
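A simplified sketch that skips some edge cases (empty slice parts, out-of-range indices):

function sliceable(arr) {
    return new Proxy(arr, {
        get: function (target, prop, receiver) {
            if (typeof prop === 'string' && prop.includes(':')) {
                let [start, end, step] = prop.split(':').map(Number);
                step = step || 1;
                if (start < 0) { start += target.length; } // negative indices
                if (end < 0) { end += target.length; }
                let result = [];
                for (let i = start; step > 0 ? i < end : i > end; i += step) {
                    result.push(target[i]);
                }
                return result;
            }
            return Reflect.get(target, prop, receiver); // normal indexing
        }
    });
}

let list = sliceable([0, 1, 2, 3, 4, 5, 6, 7, 8, 9]);
console.log(list['2:8:2']); // [2, 4, 6]
console.log(list[3]);       // 3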

There's nothing special in this example, most of the code is just the splitting logic. The Proxy handler itself is about 3 lines of code, with only a get trap.

Proxy as a proxy handler ^

Source code: 11-proxy-as-handler.js

This last one is admittedly not that useful, but it's a fun little pattern I didn't want to exclude, so I've added it as a bonus 11th example.

Here, we use a Proxy object (logProxy) to act as the handler for another Proxy (myObj), generating all the traps dynamically. So when we do myObj.a = 3, normally the handler's set trap would fire. But since the handler (logProxy) is a Proxy itself, its handler's get trap fires instead, trying to look up the set trap like this: logHandler.get(logProxy, 'set'). There, we generate and return the relevant trap, built dynamically by putting logs before and after the Reflect[trap] call. This might be confusing on first read, so here is an illustration of the calls happening:
[Illustration: the chain of trap calls between myObj, logProxy, and Reflect]
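Reconstructed from that description, the whole thing fits in a short sketch:

let logHandler = {
    get: function (target, trapName) {
        // fabricate any requested trap on the fly
        return function (...args) {
            console.log(`before ${trapName}`);
            let result = Reflect[trapName](...args);
            console.log(`after ${trapName}`);
            return result;
        };
    }
};

let logProxy = new Proxy({}, logHandler); // a Proxy...
let myObj = new Proxy({}, logProxy);      // ...used as another Proxy's handler

myObj.a = 3;          // logs "before set", "after set"
console.log(myObj.a); // logs "before get", "after get", then 3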

I can only think of one use for it: if you want to make a Proxy with all of the traps looking the same (i.e., the same code before and after the Reflect call), you can generate them instead of writing a bunch of boilerplate code.

As you will see, this double proxying is especially slow (which is not that surprising), so I don't recommend it in production code at all.

Performance measurements ^

Source code: performance.js

I made this little performance measurement and ran it in 3 current browsers (on a 2015 MacBook Pro).
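The real code is in performance.js; the gist of such a measurement is a tight get/set loop run against each object variant, roughly like this sketch (the function name and iteration count are mine):

function measure(label, obj) {
    let start = performance.now(),
        sum = 0;
    for (let i = 0; i < 1000000; i += 1) {
        obj.a = i;    // one set...
        sum += obj.a; // ...and one get per iteration
    }
    console.log(label, Math.round(performance.now() - start), 'ms', sum);
}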

The results:

Object / Proxy type       Chrome (ms)   Safari (ms)   Firefox (ms)
Normal object                     623           764           1510
No-op forwarding proxy           1506          1770           1950
Proxy using Reflect              3335          4531           8435
Proxy as dynamic handler         5626          6005          10947

There is a significant difference between the browsers, but in general, the more abstract proxying patterns are a lot slower. In my opinion, unreasonably slower. I could understand some performance hit for the abstraction, but 5-8x slower for a Proxy with Reflect is way too much. This will probably improve over time, though, as the implementations mature.

Interestingly, Firefox has supported this feature the longest (since May 2015), yet it was the slowest in my test.

Conclusion

My opinion is that Proxy is a useful tool in our arsenal. It can be used for a wide range of problems and has the potential to make code simpler and more readable. But we should keep in mind that if we change the way objects behave at such a low level, we won't just confuse other parts of the code, but probably our fellow developers too. We should document these objects (and patterns) properly, maybe separate them into their own files, or even come up with a naming convention to make it crystal clear that these objects are special.

Regarding performance, it is usable in production code, but not for objects under extreme usage. An options object (like in example 08) receives only a couple of calls at a time; in that case, the performance hit is negligible. In contrast, I wouldn't put a Proxy on an Angular $scope, a Redux store, or a similarly heavily used object. At least not in production code. It might be an interesting experiment though... ;)


If you enjoyed reading this blog post, please share it or like the Facebook page of the website to get regular updates. You can also follow us @dealwithjs.

Happy proxying!

Functional Programming in JavaScript
https://dealwithjs.io/functional-programming-in-javascript/
Thu, 26 Jan 2017

TL; DR

Functional programming needs a different mindset than most developers/software engineers/programmers are used to. In this blog post, I explain the main concepts of functional programming with examples written in JavaScript. Additional articles will follow with more in-depth details of functional programming in JS.

Source code can be found on GitHub, in the jsFunctionalProgramming repo.

I would like to thank Csaba Hellinger for the support and input he offered for writing this article.


PART 1

Table of Contents

  1. Why Functional Programming
  2. f(x) === J(s)
  3. First Class Functions
  4. Tail Call Optimization
  5. Testing Recursive Calls

Functional programming evolved from the lambda calculus, which (on a very high level) is a mathematical abstraction of function representation and of how these representations should be evaluated.

Functional programming is a declarative programming paradigm.

Why Functional Programming? ^

Functional programming has some specific characteristics:

  1. Avoiding State Change (and mutable data) - one of the characteristics of functional programming is that functions do not change the state of the application; rather, they create a new state from the old one.
  2. Declarations vs statements - in functional programming, as in mathematics, a declarative approach is used for defining/describing functions.
  3. Idempotence - invoking a function (any number of times) with the same arguments will always produce the same result; this also goes hand in hand with avoiding state change.

These three points may not seem to mean much at first glance, but if we analyze them more in depth, we find that because of these characteristics, the following use cases are good candidates for functional programming:

  1. Parallel code execution - because of the Idempotence and Avoiding State Change properties of functional programming, code written in a functional manner can be parallelized more easily, since there will be no synchronization issues.
  2. Concise/Compact Code - since functional programming takes a declarative approach, the code does not carry the additional algorithmic steps that procedural programming does.
  3. Different mindset - once you have coded in a real functional programming language, you will develop a new mindset and a new way of thinking when writing applications.

f(x) === J(s) ^

Is JavaScript a real (purely) functional programming language?

NO! JavaScript is not a pure functional programming language...

First-class Functions ^

BUT it can be used well for functional programming, because it has first-class functions. By definition, a programming language has first-class functions if it handles functions like any other type within the language. For example, functions can be passed as arguments to other functions or assigned to variables.

We will examine some first-class functions, but let's first start with the building blocks we need in order to use JS as a real functional programming language.

In most pure functional programming languages (Haskell, Clean, Erlang) there are no for or while loops, so iterating over lists needs to be done with recursive functions. Pure functional programming languages have language support for, and are optimized for, list comprehension and list concatenation.

Here is an implementation of a functional for loop. We will use it in our code later on, but as you will see, it has limitations in JS, since tail call optimization is not widely supported (yet); more on this later.

function funcFor(first, last, step, callback) {

    //
    // recursive inner function
    //
    function inner(index) {
        if ((step > 0 && index >= last) || (step < 0 && index < last)) {
            return;
        }

        callback(index);

        //
        // next (proper tail call)
        //
        inner(index + step);
    }

    //
    // start with the first
    //
    inner(first);
}

The inner function handles the stop condition, invokes the callback with the index value, and the recursive call inner(index + step) moves the loop on to the next step.

Recursion is an important aspect of functional programming.

Now let's see some real first-class functions:

function applyIfAllNumbers(items, fn) {  
    if(areNumbers(items)) {
        return funcMap(items, fn);
    }
    return [];
}

The purpose of the applyIfAllNumbers function is to invoke the fn function for each item in the items parameter, but ONLY if all the elements of the items array are numbers.

Below are the validator functions:

function areNumbers(numbers) {  
    if (numbers.length === 0) {
        return true;
    }
    else {
        return isNumber(numbers[0]) &&
               areNumbers(numbers.slice(1));
    }
}

function isNumber(n) {  
    return isFinite(n) && +n === n;
}

The code is straightforward: isNumber returns true if the parameter is a number, otherwise false. areNumbers reuses isNumber to determine whether all the elements in the numbers array are numbers or not (again, recursion was used to implement the logic).

Another example is applyForNumbersOnly:

function applyForNumbersOnly(items, fn) {  
    let numbers = filter(items, isNumber);
    return funcMap(numbers, fn);
}

Which can be simplified even more:

function applyForNumbersOnly(items, fn) {  
    return funcMap(filter(items, isNumber), fn);
}

applyForNumbersOnly invokes the fn method only for numbers in the items collection.
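A quick usage example of these two (the values are just for illustration):

console.log(applyIfAllNumbers([1, 2, 3], n => n * 2));     // [2, 4, 6]
console.log(applyIfAllNumbers([1, 'a', 3], n => n * 2));   // [] - 'a' is not a number
console.log(applyForNumbersOnly([1, 'a', 3], n => n * 2)); // [2, 6] - 'a' is filtered out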

The funcMap function is a re-implementation of the well-known map function from functional programming, but here I used the funcForEach implementation to create it:

function funcForEach(items, fn) {  
    return funcFor(0, items.length, 1, function(idx) {
        fn(items[idx]);
    });
}

function funcMap(items, fn) {  
    let result = [];
    funcForEach(items, function(item) {
        result.push(fn(item));
    });
    return result;
}

Lastly, we have the filter function, which, again, uses recursion to apply the filter logic.

function filter(input, callback) {  
    function inner(input, callback, index, output) {
        if (index === input.length) {
            return output;
        }
        return inner(input, 
                     callback, 
                     index + 1, 
                     callback(input[index]) ? output.concat(input[index]) : output);
    }
    return inner(input, callback, 0, []);
}

Tail call optimization (TCO) in JS ^

The ECMAScript 2015 spec defines the cases in which the language should support tail call optimization. One of the most important requirements is that your code runs in "use strict" mode; otherwise, JS cannot apply TCO.

There is no built-in way to detect whether a browser supports TCO or not; this has to be done through code:

"use strict";

function isTCOSupported() {  
    const outerStackLen = new Error().stack.length;
    // name of the inner function mustn't be longer than the outer!
    return (function inner() {
        const innerStackLen = new Error().stack.length;
        return innerStackLen <= outerStackLen;
    }());
}

console.log(isTCOSupported() ? "TCO Available" : "TCO N/A");  

Here is an example: a functional re-implementation of the Math.pow function, which can benefit from TCO in ECMAScript 2015.

This pow implementation uses ES6 default parameters to simplify the implementation.

function powES6(base, power, result=base) {  
    if (power === 0) {
        return 1;
    }

    if (power === 1) {
        return result;
    }

    return powES6(base, power - 1, result * base);
}

The first thing to note is that we have three arguments instead of two. The third parameter holds the result of the calculation. We have to carry the result around in order to have an implementation where our recursive call really is a tail call, so JS can apply its TCO technique.
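To see why the accumulator matters, here is how a call unfolds; every step is a plain tail call, so the engine can reuse the stack frame instead of growing the stack:

// powES6(2, 4)          result defaults to base = 2
//   -> powES6(2, 3, 4)
//   -> powES6(2, 2, 8)
//   -> powES6(2, 1, 16)  power === 1, return result
console.log(powES6(2, 4)); // 16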

If we cannot use ES6 features (in which case I do not recommend a recursive implementation anyway, because the language will not offer support for optimization), the implementation is more complex:

function recursivePow(base, power, result) {  
    if (power === 0) {
        return 1;
    }
    else if(power === 1) {
        return result;
    }

    return recursivePow(base, power - 1, result * base);
}

function pow(base, power) {  
    return recursivePow(base, power, base);
}

We separated the recursive calculation into another function, recursivePow, which takes three arguments just like powES6 did. By using a wrapper function that passes the base parameter along, we implemented the same default-parameter initialization logic ES6 gives us.

On this page you can check how widely TCO is implemented by different browsers and platforms.

Since Safari 10 is the browser which fully supports TCO (at the time of writing this article), I will run some tests with the pow function to see how well it performs.

Testing Recursive Calls ^

I used the powES6 and pow functions with the following test code:

"use strict";

function stressPow(n) {  
    var result = [];
    for (var i=0; i<n; ++i) {
        result.push(
          pow(2, 0),
          pow(2, 1),
          pow(2, 2),
          pow(2, 3),
          pow(2, 4),
          pow(2, 5),
          pow(2, 10),
          pow(2, 20),
          pow(2, 30),
          pow(1, 10000),
          pow(2, 40),
          pow(3, 10),
          pow(4, 15),
          pow(1, 11000),
          pow(3.22, 125),
          pow(3.1415, 89),
          pow(7, 2500),
          pow(2, 13000)
        );
    }

    return result;
}

var start = performance.now();  
var result_standard = stressPow(2500);  
var duration = performance.now() - start;  
console.log(result_standard);  
console.log(`Duration: ${duration} ms.`);

I executed this JS code on Chrome v55, Firefox v50, Safari v9.2 and Safari v10.

Nr.    Chrome v55 (ms)   FF v50 (ms)   Safari 9.2 (ms)   Safari 10 (ms)
1              1076.12        499.10            431.94           398.51
2              1061.20        499.36            479.20           373.59
3              1069.27        518.75            415.98           368.53
4              1056.84        499.98            423.72           373.79
5              1043.06        511.05            429.64           361.37
6              1056.72        505.59            424.78           361.26
7              1096.32        507.68            427.83           358.30
8              1099.12        515.85            436.83           356.68
9              1016.51        504.91            425.74           362.77
10             1056.65        530.31            430.95           368.32
AVG            1063.18        509.25            432.66           368.33
Conclusion

Looking at the measurements, it can be seen that Safari is the best performer when it comes to recursive function execution. Safari 10, with its TCO support for JavaScript, is the fastest: roughly 2.9 times faster than running the code in Chrome (368 ms vs 1063 ms on average). Firefox performs almost as well as Safari 9.2, which was a surprise for me.

If you enjoyed reading this blog post, please share it or like the Facebook page of the website to get regular updates. You can also follow us @dealwithjs.

Let's stay functional!

PART 2 will follow shortly, with higher-order functions and samples of how to implement procedural code in a functional style.

Task Automation - Building with Gulp
https://dealwithjs.io/task-automation-building-with-gulp/
Mon, 02 Jan 2017


Today we will set up automated tasks to build our project and to make our life easier while we are developing.

TL; DR

Source code can be found on GitHub, in the jsGulp repo.


Task Automation

Building with Gulp


Table of Contents

  1. Compiling LESS
  2. Source maps for LESS
  3. Auto prefixing LESS
  4. Watch for LESS changes
  5. Validating JS
  6. Building

Grunt vs Gulp

The most widely used task runners for node.js are Grunt and Gulp. Basically, both do the same thing, but with a different mindset. The main difference is that Grunt favors configuration over code, while Gulp is the other way around. You can read a more detailed comparison here.

I chose Gulp for the following reasons:

  • I have used Gulp in several projects in the past, but it was always set up by some other team member, so I was curious how it's done.
  • Code over configuration. I like that.
  • It's faster, because tasks can run in parallel, and you don't need to do as many (temp) file operations.
  • It's the newer, hip task runner.

Goals

I will set up two general categories of tasks:

  • To support development: compile LESS files and validate JavaScript files with JSHint automatically, when any of the relevant files changes.
  • To make a build: concatenate and minify all the sources, and put them in a separate build directory.

I'm also aiming to create a simple, but easily extendable project template that you can use to jumpstart your new project.

Getting started

It will be easier to follow what's happening if you take a look at the folder structure of our (very) simple application.
[Screenshot: the project's folder structure]

The examples assume you have gulp-cli installed globally. (The recommended way of using Gulp is to install gulp-cli globally, then install gulp locally in your project.)

npm install gulp-cli -g  
npm install gulp --save-dev  

All other dependencies are "devDependencies", meaning they are only needed for development, not in production. That's why we use --save-dev for everything from now on.

All our Gulp tasks will live in gulpfile.js in the project root. It will need to require gulp, of course. And since Gulp can't really do much by itself, we will have to install and require plugins for everything we want to do.

Let's start with creating our first task.

1. Compiling LESS ^

This task will compile our main.less, and by extension, every other LESS file it imports. We will use the gulp-less plugin for this:

npm install gulp-less --save-dev  

Code:

var gulp = require('gulp'),  
    less = require('gulp-less');

gulp.task('less', function () {  
    return gulp
        .src('./less/main.less')
        .pipe(less())
        .pipe(gulp.dest('./css'));
});

I think most of this is quite straightforward. We require the plugins and define a task, which takes our main LESS file, compiles it and every other LESS file it imports, then saves the result in the css folder. Our HTML will use the CSS from here.

We can run the task from the command line by:

gulp less  

That's not terribly useful yet. This command is not much shorter than if we did it manually. So let's make it better.

2. Source maps for LESS ^

Let's suppose we are using the compiled CSS file in development, and an element is getting the wrong color. The developer tools in the browser tell us a line number in the CSS, but we have no idea where this color is defined in the LESS files. This is what source maps are for: basically, a way to know which LESS line was compiled into which CSS line.

We need the gulp-sourcemaps plugin:

npm install gulp-sourcemaps --save-dev  

And to extend our less task:

var gulp = require('gulp'),  
    less = require('gulp-less'),
    sourcemaps = require('gulp-sourcemaps');

gulp.task('less', function () {  
    return gulp
        .src('./less/main.less')
        .pipe(sourcemaps.init())
        .pipe(less())
        .pipe(sourcemaps.write())
        .pipe(gulp.dest('./css'));
});

We haven't changed much. We just required the plugin, initialize the source maps (store where things were) before compiling, and write them (find out where things ended up) after.

In this setup, the source maps are appended to the CSS file itself. They could be saved to a separate file, but I don't really see the advantage of that here. We only use this in development, so file size is not an issue.
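If you do want an external map file, gulp-sourcemaps accepts a path relative to the destination, so the task would only change in one line; sketched here with a maps subfolder as an example:

gulp.task('less', function () {
    return gulp
        .src('./less/main.less')
        .pipe(sourcemaps.init())
        .pipe(less())
        .pipe(sourcemaps.write('./maps')) // external .map files under css/maps
        .pipe(gulp.dest('./css'));
});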

Before-after:
[Screenshot: the compiled CSS before and after, with the inline source map appended]

3. Auto prefixing LESS ^

I'm sure you know that some CSS features in some browsers can only be used with browser prefixes. For example, display: flex in IE10 is display: -ms-flexbox. So if we want (or have) to support older browsers, we have to pay attention to what should be prefixed and how.

This problem can be solved with an autoprefixer. We just write standard CSS, tell the autoprefixer what browsers we want to support, and it takes care of the prefixing.

We will use the less-plugin-autoprefix:

npm install less-plugin-autoprefix --save-dev  

Code:

var gulp = require('gulp'),  
    less = require('gulp-less'),
    sourcemaps = require('gulp-sourcemaps'),
    LessAutoprefix = require('less-plugin-autoprefix'),
    autoprefix = new LessAutoprefix({ browsers: ['last 2 versions'] });

gulp.task('less', function () {  
    return gulp
        .src('./less/main.less')
        .pipe(sourcemaps.init())
        .pipe(less({
            plugins: [autoprefix]
        }))
        .pipe(sourcemaps.write())
        .pipe(gulp.dest('./css'));
});

Most of the code is unchanged. We just required the autoprefixer, set it up, and passed it to the less function.

Before-after:
[Screenshot: the compiled CSS before and after autoprefixing]

4. Watch for LESS changes ^

Our first task became quite useful, but we still need to run it by hand every time we change something. This is what watchers are for. Basically, it runs a task when it detects a change in the watched files.

We don't need a plugin for this, watch is a built-in feature of Gulp.

gulp.task('watch', ['less'], function () {  
    gulp.watch('./less/*.less', ['less'])
        .on('change', function(event) {
            console.log(`Watch: ${event.path} was ${event.type}.`);
        });
});

As you can see, I've put this in another task, so I can run the less task once (will be useful later for building), or I can run watch to constantly update my CSS (useful while developing).

Note how I run the less task at the beginning of the watch task (as a task dependency), so it compiles the current LESS files before it starts watching for changes.

Now, if we run gulp watch, it will re-run the less task after every change:
[Screenshot: terminal output of gulp watch re-running the less task on changes]

The LESS tasks are finished for now, but later we will do minification with the build task.
Let's go and do something for the JavaScript code.

5. Validating JS ^

One useful thing during development would be to run JSHint on our JavaScript code to validate it.

We will use the gulp-jshint for this:

npm install jshint gulp-jshint --save-dev  

Code:

var jshint = require('gulp-jshint');  
...
gulp.task('jshint', function () {  
    return gulp
        .src('./scripts/*.js')
        .pipe(jshint({
            'esversion': 6
        }))
        .pipe(jshint.reporter('default'));
});

I've also added a new watcher to our watch task to run JSHint every time the code changes. This way we can catch many errors as soon as possible.

gulp.watch('./scripts/*.js', ['jshint'])  
    .on('change', function (event) {
        console.log(`Watch: ${event.path} was ${event.type}.`);
    });    

If we introduce an error in one of our script files, JSHint will complain about it instantly:
[Screenshot: JSHint complaining about the error in the terminal]

We are finished supporting the development, let's move on and create our build task.

6. Building ^

When we are satisfied with our current code and want to release a new version for hosting, we typically need to do some processing. At a minimum, we have to minify everything and concatenate our separate script files into one. This obviously improves the page load time.

We could do this the same way as before, but there is an extra step we need to take care of: the multiple script tags we use to load our JavaScript files should be replaced with a single one that uses the concatenated file. And the same is true for the CSS. Although we only have one CSS file in our setup, the minified one will be on a different path.

Luckily there is a gulp-usemin plugin for this. And we will use gulp-concat, gulp-uglify, gulp-minify-html, and gulp-minify-css for concatenation and minification.

npm install gulp-usemin gulp-concat gulp-uglify gulp-minify-html gulp-minify-css --save-dev  

Code:

var usemin = require('gulp-usemin'),  
    concat = require('gulp-concat'),
    uglify = require('gulp-uglify'),
    minifyHtml = require('gulp-minify-html'),
    minifyCss = require('gulp-minify-css');
...
gulp.task('build', ['less'], function () {  
    return gulp
        .src('./*.html')
        .pipe(usemin({
            html: [ minifyHtml({ empty: true }) ],
            css: [ minifyCss() ],
            js: [ uglify() ]
        }))
        .pipe(gulp.dest('./build'));
});

For usemin to work, we need to mark the parts in the html that should be replaced in the building process:

<!-- build:css main.css -->  
<link rel="stylesheet" href="css/main.css">  
<!-- endbuild -->  
...
<!-- build:js scripts.js -->  
<script src="scripts/library.js"></script>  
<script src="scripts/main.js"></script>  
<!-- endbuild -->  

Usemin basically takes the files in each marked block, runs them through the plugins we specified in its parameter object, and saves the result in the location we gave to gulp.dest, under the name we chose in the HTML marker comment. So, for example, our two JavaScript files marked with build:js run through the js: [ uglify() ] pipeline, then get saved to ./build under the name scripts.js. Finally, the marked part in the HTML is replaced with a single script tag using this file.

The result is a build directory with three files: one .js, one .css, and one .html, all minified. So we only need to host this folder.
[Screenshot: the build directory with the three minified files]

With this, I covered everything I planned to. I think this is a reasonable Gulp setup that you can use to jump-start your own. Our example app is extremely simple, and every project has its own specific needs, but now you have the basic knowledge of how to automate essentially any repeating task in your development process.

Where to go next

There are thousands of useful plugins, and you can even write your own. Some I find useful, but had to exclude to keep this post from growing to an infinite length:

  • gulp-livereload - Reloading the page when something is changed.
  • gulp-bump - Autoincrement version numbers.
  • gulp-babel - Use the newest JavaScript features and transpile to the browser compatible representation.
  • gulp-imagemin - Minify images.

Conclusion

As you can see, Gulp is a very easy-to-use task automation system. Even if you spend some time perfecting your tasks, it will save you serious time in the long run, not to mention the boredom of manually doing the same tasks repeatedly. Automated tasks can also prevent the human errors that come from not really paying attention to a task you are doing for the thousandth time.


If you enjoyed reading this blog post, please share it or like the Facebook page of the website to get regular updates. You can also follow us @dealwithjs.

Happy Gulping!

Scaling - Clustering in node.js
https://dealwithjs.io/scaling-clustering-in-node-js/
Sun, 27 Nov 2016


Recently I ran into a scaling problem with my web server: it slowed down significantly under load. If I could make it use all the CPU cores, performance would increase substantially.

TL; DR

Source code can be found on GitHub, in the jsCluster repo.


Scaling

Clustering in node.js

JavaScript and node.js are single-threaded by default. That means applications run on a single CPU core and can't take advantage of the modern multicore machines they're hosted on.

Possible solutions

There are multiple ways to fix this.

  1. Run multiple instances of our server, listening on different ports and route traffic to those from a reverse proxy (like Nginx).
    Pros:
    • Full fledged web server.
    • Best performance.
    • Advanced load balancing.
    Cons:
    • Quite difficult to set it up.
    • Needs some devops skills.
    • We have to configure it differently on machines with a different number of CPU cores.
    • Increased file descriptor and memory usage.
  2. Use the built-in cluster module of node.js to fork worker processes, all listening on the same port.
    Pros:
    • Very easy to set up.
    • Do everything from code, no configuration needed.
    • Dynamically scale according to the CPU cores available.
    • Easy communication between the processes.
    Cons:
    • A little less performance.
    • We might have to take care of some advanced features ourselves, like managing the lifecycle of our processes.

Since I didn't have much time for this, and I'm not particularly fluent in devops tasks, I chose the cluster module. You can read a more detailed comparison, including a third option (iptables) here.

Our basic server

Here is the code we are starting from. It's a dead simple Express web server, with a single hello world endpoint.

let express = require('express'),  
    app = express(),
    port = 3000;

app.get('/', function (req, res) {  
    res.send('Hello World!');
});

app.listen(port);  
console.log(`application is listening on port ${port}...`);  

The cluster module

Node.js has had a built-in cluster module (docs) for quite a long time. It had some problems in its past, but the kinks got ironed out, and it became the recommended way of scaling for multicore machines. It's surprisingly simple to use; we just need a couple of lines for the basic setup.

let cluster = require('cluster');  
if (cluster.isMaster) {  
    let cpus = require('os').cpus().length;
    for (let i = 0; i < cpus; i += 1) {
        cluster.fork();
    }    
} else {
    let express = require('express'),
        app = express(),
        port = 3000;
    app.get('/', function (req, res) {
        res.send('Hello World!');
    });
    app.listen(port);
    console.log(`worker ${cluster.worker.id} is listening on port ${port}...`);
}

We just have to separate the code into a master part and a worker part. The first process starts up as the master, and it forks as many worker processes as the number of CPU cores we have. The cpus value we get means logical cores (so our 4-core i7 with hyperthreading = 8). The worker code is basically the same as in the previous example.

With this little work, we gave our app the capability to use all cores. For hobby or small-scale projects, this may already be good enough. But if we want to use this in larger-scale applications, we need to take care of some other things to make it more reliable and robust.

  1. If a worker process dies, we need to fork a new one. If we don't, a long-running application might lose all its workers over time.

  2. If the workers don't die, we should kill them occasionally. I know, this may sound odd. But if we don't, even a small memory leak will use up more and more memory over time, and eventually we will run out. So I kill each process somewhere between 10K and 20K requests. (In real life, we would do this much more rarely.)

let cluster = require('cluster');  
if (cluster.isMaster) {  
    let cpus = require('os').cpus().length;
    for (let i = 0; i < cpus; i += 1) {
        cluster.fork();
    }
    cluster.on('exit', function (worker) {
        console.log(`worker ${worker.id} exited, respawning...`);
        cluster.fork();
    });    
} else {
    let express = require('express'),
        app = express(),
        port = 3000,
        counter = 10000 + Math.round(Math.random() * 10000);
    app.get('/', function (req, res) {
        res.send('Hello World!');
        if (counter-- === 0) {
            cluster.worker.disconnect();
        }        
    });
    app.listen(port);
    console.log(`worker ${cluster.worker.id} is listening on port ${port}...`);
}

Our server is now capable of running indefinitely, but it's still not very robust. As you can probably imagine, the more precisely we want to manage the processes, and the more edge cases we want to handle, the more complex our code gets. For example: if the newly forked processes keep dying for some reason, we end up in a kind of infinite fork loop. The standard solution is to wait longer and longer before re-forking a worker, as sketched below. There are many edge cases like this, and if we want to handle all of them, we will probably go insane before we succeed.
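A minimal sketch of that backoff idea (the delay numbers are arbitrary, and a real implementation would be more careful):

let respawnDelay = 100; // ms; grows on every exit, resets once a worker is up

cluster.on('exit', function (worker) {
    console.log(`worker ${worker.id} exited, respawning in ${respawnDelay} ms...`);
    setTimeout(function () {
        cluster.fork();
        respawnDelay = Math.min(respawnDelay * 2, 30000); // cap the backoff
    }, respawnDelay);
});

cluster.on('listening', function () {
    respawnDelay = 100; // a worker came up fine, reset the delay
});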

Process Managers

Process managers to the rescue. Specifically I tried PM2, but StrongLoop has similar features. This is our third option.

  1. Using a process manager.
    Pros:
    • You don't have to modify your code. At all.
    • Manages process lifecycle.
    • Much more robust than some naively coded clustering with the cluster module.
    • Gives you advanced features like process restarting, monitoring, and zero downtime reloading.
    Cons:
    • Your code doesn't really have control over your worker processes; the management is automatic.
    • Inter process communication can only be done with a different module.
    • A tiny bit slower than the cluster module.

Honestly, this solution was an afterthought on my part. If I had found PM2 before writing this post, I probably wouldn't have chosen the cluster module to start with. But no regrets; the journey is often more important than the destination.

PM2 is an npm package. Install it globally, so you can use it anywhere:

npm install pm2@latest -g  

Then, simply start our basic, non-clustered server with it, and scale to the number of cores:

pm2 start serverSingle.js -i max  

Now our processes are running, and keep running indefinitely. We can even monitor the load on the processes visually:

pm2 monit  

[Screenshot: pm2 monit showing the load on the worker processes]

PM2 can do a lot more; I won't go into much detail here, but you can read more about its features here. I recommend using it instead of trying to come up with your own solution, except when you really know what you are doing.

Benchmarking

For comparing the performance of the different solutions, I used ApacheBench (ab). It's a nice tool for simple stress testing, and it's already available on macOS.

I had some problems, namely running out of ephemeral ports, which I fixed by using the KeepAlive option (-k). I don't want to go into much detail about this, but you can read more here if you are interested.

The command used to run the test.
(Meaning: Do a million requests, maximum a hundred concurrent, and keep the connection alive.)

ab -n 1000000 -c 100 -k http://localhost:3000/  

Results:
(Measured on a 2015 MacBook Pro, 4 core, 2.2 GHz Intel Core i7, 16 GB RAM.)

Metric                      Single   Clustered     PM2
Time taken for tests (s)        89          27      28
Requests per second          11239       37357   35403
Transfer rate (KB/s)          2316        7698    7295
Longest request (ms)           164          70      61

As you can see:

  • The cluster module is the fastest in most metrics.
  • PM2 is just a tiny bit slower, which is very good considering the robustness and the advanced features we get compared to the cluster module.
  • The single process is not even in the ballpark. No surprise here.

Conclusion

If you need to scale, you have to do clustering. Use a process manager if you can; it saves you a lot of trouble. If you can't, or you feel it's overkill for your use case, use the cluster module. Even in its basic setup, it's far better than nothing.


If you enjoyed reading this blog post, please share it or like the Facebook page of the website to get regular updates. You can also follow us @dealwithjs.

Happy clustering!

Design Patterns - The Adapter Pattern in JavaScript
https://dealwithjs.io/design-patterns-the-adapter-pattern-in-javascript/
Wed, 23 Nov 2016

TL; DR

Source code can be found on GitHub, jsAdapter repo


Design Patterns - The Adapter

A picture is worth a thousand words; that is certainly true when the Adapter design pattern needs to be explained.

Most of us have seen something like this:

[Photo: a travel plug adapter]

This is a travel plug, sometimes called a plug adapter. The plug adapter adapts (transforms) one electrical interface to another; in this case, the EU standard to the US standard.

It is important to highlight that without the adapter, the two interfaces cannot communicate with each other. Another important aspect of using/creating adapters is that adapters are always created for existing components which need to interact.

There are only minor differences between the Adapter, Decorator, Facade, and Bridge design patterns; we will cover these in a future blog post.

Let's assume there is a project with a logger library. It was written by the initial project team and is not the most developer-friendly to use. Since this project-specific logger library was developed, new logger libraries have appeared on the market, and the project should be able to use these.
One way to solve this is to write adapter(s) for the logger libraries and use those.

The current logger in our imagined app is this:

function BadLogger(name) {  
    this.name = name;
    var LOG_HEADER = '[' + name + ']:';
    var self = this;

    return {
        getName: function getName() {
            return self.name;
        },
        getType: function getType() {
            return 'BadLogger';
        },
        information: function information(message) {
            console.info(LOG_HEADER + message + '- INFORMATION' );
        },
        debg: function debg(message) {
            console.log(LOG_HEADER + message + '- DEBG');
        },
        w: function w(message) {
            console.warn(LOG_HEADER + message + '- W' );
        },
        err: function err(message) {
            console.error(LOG_HEADER + message+ '- ERR' );
        }
    }
}

module.exports = {  
    getLogger: BadLogger
}

I don't want to go into details about why this is a bad logger (the module is named BadLogger too), but it can be seen that the function names are not quite intuitive and don't really respect any naming convention.

There is another logger which we might want to use:

function ShortLogger(name) {  
    this.name = name;
    var LOG_HEADER = '[' + name + ']';
    var self = this;
    var getTime = function() {
        return '[' + new Date().toISOString() + ']';
    }
    return {
        getName: function getName() {
            return self.name;
        },
        getType: function getType() {
            return 'ShortLogger';
        },
        i: function i(message) {
            console.info(LOG_HEADER + getTime() + '[I]: ' + message);
        },
        d: function d(message) {
            console.log(LOG_HEADER + getTime() + '[D]: ' + message);
        },
        w: function w(message) {
            console.warn(LOG_HEADER + getTime() + '[W]: ' + message);
        },
        e: function e(message) {
            console.error(LOG_HEADER + getTime() + '[E]: ' + message);
        }
    }
}

module.exports = {  
    getLogger: ShortLogger
}

As we can see, this follows an Android-Logger-like convention, where the name of each log method is the first letter of the log message type; for example, w stands for warning.

In this case, the role of the adapter is to give the development team the possibility to use any kind of logger they want.

When building adapters there are two approaches:

  1. Build adapters for every pair of components which have to interact with each other. In this case it means building two adapters: one for ShortLogger and one for BadLogger.

  2. Build one adapter which can adapt any component of the same type. In this case it means building one adapter which can handle any logger type.

Let's stick with the second approach; below is the code for the LoggerAdapter module:

var ShortLogger = require('./ShortLogger');  
var BadLogger = require('./BadLogger');

function LoggerAdapter(loggerObj) {  
    if (!loggerObj) {
        throw Error('Parameter [loggerObj] is not defined.');
    }
    console.log('[LoggerAdapter] is using Logger with name: ' + loggerObj.getName());
    var CONSTANTS = {
        DEBUG: 'DEBUG',
        WARNING: 'WARNING',
        INFORMATION: 'INFORMATION',
        ERROR: 'ERROR',
        BAD_LOGGER: 'BadLogger',
        SHORT_LOGGER: 'ShortLogger'
    };
    var loggerFunctionMapper = {};

    if(loggerObj.getType() === CONSTANTS.BAD_LOGGER) {
        loggerFunctionMapper[CONSTANTS.DEBUG] = loggerObj.debg;
        loggerFunctionMapper[CONSTANTS.INFORMATION] = loggerObj.information;
        loggerFunctionMapper[CONSTANTS.WARNING] = loggerObj.w;
        loggerFunctionMapper[CONSTANTS.ERROR] = loggerObj.err;
    }
    else if (loggerObj.getType() === CONSTANTS.SHORT_LOGGER) {
        loggerFunctionMapper[CONSTANTS.DEBUG] = loggerObj.d;
        loggerFunctionMapper[CONSTANTS.INFORMATION] = loggerObj.i;
        loggerFunctionMapper[CONSTANTS.WARNING] = loggerObj.w;
        loggerFunctionMapper[CONSTANTS.ERROR] = loggerObj.e;
    }

    function information(message) {
        try {
          loggerFunctionMapper[CONSTANTS.INFORMATION](message);
        }
        catch(err) {
            throw Error('No implementation for Logger: ' + loggerObj.toString());
        }
    };

    function debug(message) {
        try {
          loggerFunctionMapper[CONSTANTS.DEBUG](message);
        }
        catch(err) {
            throw Error('No implementation for Logger: ' + loggerObj.toString());
        }
    };
...
    return {
        debug: debug,
        information: information,
        warning: warning,
        error: error
    }
}

module.exports = {  
    LoggerAdapter: LoggerAdapter
}

When creating the adapter, a loggerObj has to be passed in; this is what performs the real logging.

Once the project is refactored to use the LoggerAdapter for logging, the underlying logger can be changed much more easily.

All the magic in the adapter is in the if/else part at the beginning:

if(loggerObj.getType() === CONSTANTS.BAD_LOGGER) {  
        loggerFunctionMapper[CONSTANTS.DEBUG] = loggerObj.debg;
        loggerFunctionMapper[CONSTANTS.INFORMATION] = loggerObj.information;
        loggerFunctionMapper[CONSTANTS.WARNING] = loggerObj.w;
        loggerFunctionMapper[CONSTANTS.ERROR] = loggerObj.err;
    }
    else if (loggerObj.getType() === CONSTANTS.SHORT_LOGGER) {
        loggerFunctionMapper[CONSTANTS.DEBUG] = loggerObj.d;
        loggerFunctionMapper[CONSTANTS.INFORMATION] = loggerObj.i;
        loggerFunctionMapper[CONSTANTS.WARNING] = loggerObj.w;
        loggerFunctionMapper[CONSTANTS.ERROR] = loggerObj.e;
    }

This is where we adapt the interfaces: depending on the type of log message and the loggerObj, we map the correct function to each log message type.

In case a new logger library needs to be supported (let's say a remote logger library which logs to a RESTful API), only this mapping needs to be adjusted, and the new logger library can be used.

Below is a use case of the LoggerAdapter:

var ShortLogger = require('./ShortLogger');  
var BadLogger = require('./BadLogger');  
var LoggerAdapter = require('./LoggerAdapter');  
var shortLog = ShortLogger.getLogger('ShortLoger');  
var badLogger = BadLogger.getLogger('BadLogger');

var loggerAdapter = LoggerAdapter.LoggerAdapter(badLogger);  
loggerAdapter.information('This is logged through LoggerAdapter');  
loggerAdapter.debug('This is logged through LoggerAdapter');  
loggerAdapter.warning('This is logged through LoggerAdapter');  
loggerAdapter.error('This is logged through LoggerAdapter');

console.log();

var loggerAdapter2 = LoggerAdapter.LoggerAdapter(shortLog);  
loggerAdapter2.information('Now This is logged through LoggerAdapter');  
loggerAdapter2.debug('Now This is logged through LoggerAdapter');  
loggerAdapter2.warning('Now This is logged through LoggerAdapter');  
loggerAdapter2.error('Now This is logged through LoggerAdapter');

Once this finished running, the output was the following:

[LoggerAdapter] is using Logger with name: BadLogger
[BadLogger]:This is logged through LoggerAdapter- INFORMATION
[BadLogger]:This is logged through LoggerAdapter- DEBG
[BadLogger]:This is logged through LoggerAdapter- W
[BadLogger]:This is logged through LoggerAdapter- ERR

[LoggerAdapter] is using Logger with name: ShortLoger
[ShortLoger][2016-11-23T21:10:59.729Z][I]: Now This is logged through LoggerAdapter
[ShortLoger][2016-11-23T21:10:59.729Z][D]: Now This is logged through LoggerAdapter
[ShortLoger][2016-11-23T21:10:59.730Z][W]: Now This is logged through LoggerAdapter
[ShortLoger][2016-11-23T21:10:59.730Z][E]: Now This is logged through LoggerAdapter

If you enjoyed reading this post, please share it or like the Facebook page of the website to get regular updates. You can also follow us @dealwithjs.

Design Patterns - Decorating in JavaScript
https://dealwithjs.io/design-patterns-decorating-in-javascript/
Mon, 14 Nov 2016


Today we are going to talk about the decorator design pattern. More specifically, how to decorate functions in JavaScript.

TL; DR

Source code can be found on GitHub, jsDecorator or jsFiddle


Design Patterns

Decorating in JavaScript

The decorator design pattern is for extending the functionality or state of an object by repeatedly wrapping it.

That's quite a broad definition; it can mean many different things. In JavaScript - and other languages with first-class functions, like Python - there is a specific use case for this: namely, extending a function by wrapping it in an outer function.

This can be useful for temporarily augmenting functions while developing or debugging an application (logging, call counting, performance profiling, etc.). But even production code can be greatly simplified and made more readable by adding frequently used extra functionality via decorators (authentication, caching, permission handling, analytics, etc.).

Let's say we have a suspected bottleneck function:

function isPrime(num) {  
    if (num === 2 || num === 3) { 
        return true; 
    }
    if (num % 2 === 0) { 
        return false; 
    }
    let max = Math.ceil(Math.sqrt(num));
    for (let i = 3; i <= max; i += 2) {
        if (num % i === 0) {
            return false;
        }
    }
    return true;
}

We want to measure the time it takes to execute this function, but we don't really want to modify its code manually and revert it after the measurement. Not to mention, this is a fairly common thing we do, so the ideal solution would be write once, reuse any time. We can achieve that with a decorator function.

function logDecorator(func) {  
    let decorated = function () {
        let start = performance.now(),
            result = func.apply(this, arguments),
            time = Math.round((performance.now() - start) * 1000);
        console.log(`${time} μs`);
        return result;
    };  
    return decorated;
}

As you can see, our decorator function gets a func argument, then creates and returns a decorated replacement function, which performs the extra functionality we want (logging time in this case) and calls the original func, much like we used to call super in overridden methods.

We can apply our decorator to any function like this:

let isPrimeLog = logDecorator(isPrime);  
isPrimeLog(22586101147);    

// output:
// 4555 μs

I think it's easier to understand if we give it a separate name; also, we will compare two differently decorated functions later, and in that case they need different names anyway. But of course we could just as easily decorate the isPrime function in place:

isPrime = logDecorator(isPrime);  

By adding two more things, we can make our log decorator more generic and useful:

  1. The anonymous decorated function should be a named one. I generally try not to use anonymous functions unless it's justifiable, because they lead to unreadable stack traces. Also, I would like to log the function name. So I derive a name from the original name of func.
  2. I compile a string from the arguments, so I can log that too.

These functionalities will also be useful in our second example, so I've put them in separate utility functions (which I omitted for brevity - see the source code for details).
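
If you're curious, here is a minimal sketch of what those two helpers might look like (the implementations in the repo may differ):

function formatArguments(args) {  
    // builds a printable string from an arguments object,
    // e.g. (22586101147) or ('a', [1, 2])
    return Array.prototype.map.call(args, function (arg) {
        return JSON.stringify(arg);
    }).join(', ');
}

function renameFunction(func, name) {  
    // Function.prototype.name is read-only but configurable,
    // so it can be changed through defineProperty
    Object.defineProperty(func, 'name', { value: name, configurable: true });
    return func;
}

With those helpers in place, the improved decorator looks like this: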

function logDecorator(func) {  
    let decorated = function () {
        let params = formatArguments(arguments),
            start = performance.now(),
            result = func.apply(this, arguments),
            time = Math.round((performance.now() - start) * 1000);
        console.log(`${decorated.name}(${params}) => ${result} (${time} μs)`);
        return result;
    };  
    return renameFunction(decorated, func.name + 'Log');
}

// output for the same call:
// isPrimeLog(22586101147) => true (4555 μs)

As you can see, the output became richer and more useful. And we can put this on any function we want. Now it's probably easier for you to imagine the things we can do with this. And we aren't limited to adding functionality; we can also add state via closure.

Consider the second example: we want to add caching to functions. If we call our function with a set of arguments, it calculates the result and stores it. Whenever we call it with the same set of arguments again, it will get the result from the cache instead of recalculating it every time. This is called memoization, and it can bring a massive performance boost in many situations. And if we have an easily applicable solution, we can try it on any function without messing with our code too much.

function memoDecorator(func) {  
    let cache = {},
        decorated = function () {
            let params = formatArguments(arguments);
            if (params in cache) {
                return cache[params];
            } else {
                let result = func.apply(this, arguments);
                cache[params] = result;
                return result;
            }
        };
    return renameFunction(decorated, func.name + 'Memo');
}

Let's try it out:

let num = 22586101147;

let isPrimeLog = logDecorator(isPrime);  
isPrimeLog(num);  
isPrimeLog(num);  
isPrimeLog(num);

// output
// isPrimeLog(22586101147) => true (4555 μs)
// isPrimeLog(22586101147) => true (5645 μs)
// isPrimeLog(22586101147) => true (3925 μs)

let isPrimeMemoLog = logDecorator(memoDecorator(isPrime));  
isPrimeMemoLog(num);  
isPrimeMemoLog(num);  
isPrimeMemoLog(num);

// output
// isPrimeMemoLog(22586101147) => true (3485 μs)
// isPrimeMemoLog(22586101147) => true (5 μs)
// isPrimeMemoLog(22586101147) => true (5 μs)

Mostly we did the same thing as in the first example. One key difference to note is how we create the cache object in the decorator, so the decorated function can access it via closure. In other words, we gave shared state to the different calls of the decorated function.
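
To make the closure-state idea more concrete, here is a hypothetical third decorator that counts how many times the wrapped function has been called:

function countDecorator(func) {  
    let count = 0, // shared state, captured by the closure below
        decorated = function () {
            count += 1;
            console.log(`${decorated.name} call #${count}`);
            return func.apply(this, arguments);
        };
    return renameFunction(decorated, func.name + 'Count');
}

let isPrimeCount = countDecorator(isPrime);  
isPrimeCount(7); // isPrimeCount call #1  
isPrimeCount(8); // isPrimeCount call #2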

Also note how we double-decorated our second function.

As you can see memoDecorator makes subsequent calls return the cached result instantly.

When you wrap your head around it, the decorator is a simple but very powerful design pattern in JavaScript. It can be used effectively and elegantly, but - like almost everything - it can be abused too. I encourage you to try it, play with it, come up with use cases in your own line of work, and see what you can build with it.

Finally, I want to mention that there is a proposal to add a dedicated decorator syntax to a future version of ECMAScript (at the time of writing it is often labeled an ES7 feature, but it is still just a proposal). You can read more about that here.
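
For a taste, this is roughly what the proposed syntax looks like for class methods. Treat it as a sketch: in the proposal a method decorator receives a property descriptor rather than a bare function, so our logDecorator would need a small adapter (here called logged, a hypothetical name) to be used this way.

class MathUtils {  
    // hypothetical: 'logged' is a descriptor-based adapter around logDecorator
    @logged
    isPrime(num) {
        // ... same implementation as before
    }
}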


If you enjoyed reading this blog post, please share it or like the Facebook page of the website to get regular updates. You can also follow us @dealwithjs.

Happy decorating!

]]>
<![CDATA[Design Patterns - The Builder Pattern in JavaScript]]>https://dealwithjs.io/design-patterns-the-builder-pattern-in-javascript/279d53a8-c78a-4b05-b161-07a0e4d08a1eFri, 11 Nov 2016 06:40:22 GMT

TL; DR

Source code can be found on GitHub, in the jsBuilder repo


Design Patterns - The Builder

The Builder pattern is a creational design pattern which should be applied when complex objects have to be created - that is, when object creation is complicated and involves a lot of code. The goal of the pattern is to separate the object creation (or initialization) logic from the representation.

UML diagram of the Builder pattern:1

[Figure: UML class diagram of the Builder pattern - Director, Builder, Concrete Builders and the Product]

This pattern has three components:

  1. The Director, which uses the Builder to create objects. The role of the Director is to coordinate the object creation in case there are multiple Builders involved; it also provides an extra abstraction layer when implementing this pattern.
  2. The Builder, an abstraction layer above all the Concrete Builders which create the objects. In JavaScript the Builder is not separated clearly, and in some cases it can be left out completely (although I forced it to be present in the sample code).
  3. The Concrete Builders, which implement the real object creation. Internally, Concrete Builders can use Factories for building parts of the final object (I used factories in my implementation to demonstrate this).

I present the Builder pattern using an analogy.

Let's imagine the process of "building" a cake. In this analogy a cake is considered a complex object, because it is hard to create. The cake object has three parts: the layers, the cream and the topping.

I modeled this scenario in code. Let's start from the Director implementation. In this scenario the Director or Coordinator is represented by the PastryCook function, this handles everything related to "building" the cake.

function PastryCook() {  
    var chocolateCakeBuilder = ChocolateCakeBuilder.getBuilder();
    var strawberryCakeBuilder = StrawberryCakeBuilder.getBuilder();
    var traditionalCakeBuilder = TraditionalCakeBuilder.getBuilder();

    return {
        buildCake: function(flavor) {
            var cake = null;
            switch (flavor) {
                case 'chocolate':
                    cake = chocolateCakeBuilder.buildCake();
                    break;
                case 'strawberry':
                    cake = strawberryCakeBuilder.buildCake();
                    break;
                default:
                    cake = traditionalCakeBuilder.buildCake();
                    break;
            }

            return cake;
        }
    };
}
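
One detail not shown above: the demo code later calls PastryCook.getBuilder(), so PastryCook.js presumably ends with a module wrapper along these lines, mirroring the getInstance convention of the factories (the individual cake builders would expose getBuilder the same way):

// assumed module wrapper, inferred from how the demo code uses PastryCook
module.exports = {  
    getBuilder: PastryCook
};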

The PastryCook has a buildCake function which receives a flavor argument. Based on the value of the argument, a different Builder is used to build the cake. Creating different cake objects from a parameter passed to the Director demonstrates how to work with multiple Builders.

The implementation of ChocolateCakeBuilder handles the steps of building a chocolate cake. In this Builder I used factories to simplify the implementation and to provide an extra layer of abstraction.

function ChocolateCakeBuilder() {  
    var layerFactory = LayerFactory.getInstance();
    var creamFactory = CreamFactory.getInstance();
    var toppingFactory = ToppingFactory.getInstance();

    return {
        buildCake: function() {
            return {
                layer: layerFactory.getStandard(),
                cream: creamFactory.getChocolate(),
                topping: toppingFactory.getChocolate()
            }
        }
    };
}

Every Builder implements the buildCake function; this is the method which delivers the Product (referring to the UML diagram).

The StrawberryCakeBuilder is very similar in implementation:

function StrawberryCakeBuilder() {  
    var layerFactory = LayerFactory.getInstance();
    var creamFactory = CreamFactory.getInstance();
    var toppingFactory = ToppingFactory.getInstance();

    return {
        buildCake: function() {
            return {
                layer: layerFactory.getLowCarb(),
                cream: creamFactory.getWhipped(),
                topping: toppingFactory.getStrawberry()
            }
        }
    };
}
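
The TraditionalCakeBuilder (the default case) is omitted from the post, but judging by the demo output further below, a sketch of it would be along these lines (getPeanutButter is my assumption for the method name):

function TraditionalCakeBuilder() {  
    var layerFactory = LayerFactory.getInstance();
    var creamFactory = CreamFactory.getInstance();
    var toppingFactory = ToppingFactory.getInstance();

    return {
        buildCake: function() {
            return {
                layer: layerFactory.getStandard(),     // "Layer: [Standard]"
                cream: creamFactory.getPeanutButter(), // "Cream: [Peanut Butter]" (assumed method)
                topping: toppingFactory.getStrawberry()
            };
        }
    };
}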

For the Builders I used Factories, because they (in general) add more flexibility to the code and reduce the maintenance cost of the codebase.

The source code of ToppingFactory is below; all it does is return a string with the selected topping.

function ToppingFactory() {  
    return {
        getChocolate: function() {
            return 'Topping: [Chocolate]';
        },

        getVanilla: function() {
            return 'Topping: [Vanilla]';
        },

        getStrawberry: function() {
            return 'Topping: [Strawberry]';
        },
    };
}

module.exports = {  
    getInstance: ToppingFactory
};
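
LayerFactory and CreamFactory follow the same shape as ToppingFactory. For illustration, here is a sketch of CreamFactory with the method names inferred from the demo output (getPeanutButter being an assumption):

function CreamFactory() {  
    return {
        getChocolate: function() {
            return 'Cream: [Chocolate]';
        },

        getWhipped: function() {
            return 'Cream: [Whipped]';
        },

        // assumed method, matching the default cake's output below
        getPeanutButter: function() {
            return 'Cream: [Peanut Butter]';
        }
    };
}

module.exports = {  
    getInstance: CreamFactory
};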

The sample code was written for Node.js; index.js contains the demo code:

var PastryCook = require('./PastryCook');

var cakeBuilder = PastryCook.getBuilder();

var chocolateCake = cakeBuilder.buildCake('chocolate');  
var strawberryCake = cakeBuilder.buildCake('strawberry');  
var cake = cakeBuilder.buildCake();

console.log("The Chocolate cake is compound of:");  
console.log(JSON.stringify(chocolateCake, null, 2));

console.log("The Strawberry cake is compound of:");  
console.log(JSON.stringify(strawberryCake, null, 2));

console.log("The cake is compound of:");  
console.log(JSON.stringify(cake, null, 2));

Once the code is executed the output should be:

The Chocolate cake is composed of:  
{
  "layer": "Layer: [Standard]",
  "cream": "Cream: [Chocolate]",
  "topping": "Topping: [Chocolate]"
}
The Strawberry cake is composed of:  
{
  "layer": "Layer: [LowCarb]",
  "cream": "Cream: [Whipped]",
  "topping": "Topping: [Strawberry]"
}
The cake is composed of:  
{
  "layer": "Layer: [Standard]",
  "cream": "Cream: [Peanut Butter]",
  "topping": "Topping: [Strawberry]"
}

I tried to keep the implementation clean and developer-friendly, so that the real use case of the Builder design pattern can be seen and understood.

If you enjoyed reading this blog post, please share it or like the Facebook page of the website to get regular updates.

Thank you for reading!

  1. Image source: Wikipedia

]]>
<![CDATA[Design Patterns - The Factory Pattern in JavaScript]]>https://dealwithjs.io/design-patterns-the-factory-pattern-in-javascript/523f7fcc-b9a0-422c-9ace-a9ca49370a39Mon, 07 Nov 2016 22:01:33 GMT

TL; DR

Source code can be found on GitHub, in the jsFactory repo


Design Patterns - The Factory

Table of Contents

  1. Design Patterns
  2. The Factory Pattern
  3. Factory Pattern in JS
  4. Simple Factory
  5. Configurable Factory
  6. Demo

Design Patterns ^

A design pattern is the re-usable form of a solution to a design problem.1

In 1977 Christopher Alexander, at that time a well-known architect, published a book titled A Pattern Language. In it he explained his ideas about incremental and organic design and laid the foundations of Design Patterns.

In 1994 the classic of the field was released: Design Patterns: Elements of Reusable Object-Oriented Software, written by the Gang of Four - the book which is the symbol of the whole topic. All the examples in it are written in C++ and presented using C++'s language features; understanding the book and its examples is not a trivial task.

You may ask: why do we need design patterns? The answer is simple. Design patterns offer solutions to problems which might appear in our apps. Of course, there are multiple solutions to these problems - BUT the solutions design patterns offer have been proven and tested by thousands of engineers, so we can rely on them.

The Factory Pattern ^

The Factory design pattern is one of the simplest design patterns. The main idea of this pattern is to have a common place (function, object, class, module) which is used to initialize (create) objects within the application. Imagine you want to update an existing type/object - let's say you want to add two more properties to an Employee type - and this Employee type is used all over the modules inside your app (login, products and services, salary module, employee tracking, career management and so on).

If the creation of these objects is spread over hundreds of files within the application, maintaining that many files is very error-prone whenever the object creation logic needs to be updated. Failing to adjust some of the places where the objects are created can cause misbehavior, crashes, or data corruption in the application.

If the creation of the objects (in our case of type Employee, but not only) is not controlled by a single component - a factory - then you would need to change the code everywhere in the application where a new Employee is created. Once the creation of the objects is centralized, the number of affected code files, classes and modules is much smaller when this type of change is needed.
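
As a hypothetical illustration (employeeFactory here is just a stand-in for the factories implemented below), contrast scattered creation with a centralized factory call:

// scattered: every module builds its own Employee literal;
// adding a property means hunting down every call site
var employee = { firstName: 'Jane', lastName: 'Doe', type: 'sales' };

// centralized: call sites stay unchanged when the Employee
// shape evolves - only the factory has to be updated
var employee2 = employeeFactory.getSales('Jane', 'Doe');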

The Factory design pattern is a creational design pattern.

Creational design patterns are the ones usually everyone knows about, mostly because during development most of us have faced situations where multiple objects had to be created, and everyone knew (or felt) that these objects should be created in the same place, or at least in the same manner. Some of the commonly known creational patterns are Factory, Abstract Factory, Builder and, of course, Singleton.

Factory Pattern in JS ^

There are multiple ways to implement the Factory pattern in JavaScript. I implemented two factories, SimpleFactory and ConfigurableFactory. Both serve the same purpose: creating different types of Employee objects.

Simple Factory ^

The SimpleFactory is a node module. Inside the constructor function I defined three methods: getSalesEmployee, getEngineerEmployee and getName. The first two return plain objects with different properties (based on the type of Employee); the last one returns the name of the factory object.

function SimpleFactory(name) {  
    var factoryName = name;

    var getSalesEmployee = function(firstName, lastName) {
        return {
            firstName: firstName,
            lastName: lastName,
            comission: 0,
            salary: 100,
            projects: [],
            type: 'sales'
        };
    }

    var getEngineerEmployee = function(firstName, lastName) {
        return {
            firstName: firstName,
            lastName: lastName,
            salary: 150,
            manager: '',
            technologies: [],
            projects: [],
            type: 'engineer'
        }
    }

    var getName = function() {
        return factoryName;
    }

    return {
        getSales: getSalesEmployee,
        getEngineer: getEngineerEmployee,
        getName: getName
    }
}

module.exports = {  
    getInstance: SimpleFactory
}

Configurable Factory ^

The ConfigurableFactory can be used in a different way: it can be extended with object creation logic dynamically, so the implementation of the factory itself does not need to be changed.

function ConfigurableFactory(name) {  
    var typeMapper = {};
    var factoryName = name;

    var addTypeSupport = function(typeName, buildFunc) {
        // typeof covers both missing and wrong-typed buildFunc values;
        // the original (!buildFunc && ...) check let truthy non-functions through
        if (!typeName || typeof buildFunc !== 'function') {
            throw Error('typeName parameter needs to be defined and buildFunc parameter has to be a function');
        }
        var lcTypeName = typeName.toLowerCase();
        if (typeMapper[lcTypeName]) {
            throw Error('There is already a [' + typeName + '] type defined');
        }
        typeMapper[lcTypeName] = buildFunc;
    }

    var getObject = function(typeName) {
        var lcTypeName = typeName.toLowerCase();
        var builder = typeMapper[lcTypeName];

        if (builder !== undefined) {
            return builder();
        }

        throw Error('Cannot build type:[' + typeName + '], does not know how to...');
    }

    var getName = function() {
        return factoryName;
    }

    return {
        addTypeSupport: addTypeSupport,
        get: getObject,
        getName: getName
    }
}

module.exports = {  
    getInstance: ConfigurableFactory
}

The implementation is more complex than that of SimpleFactory. There are again three methods - addTypeSupport, getObject (exposed as get) and getName - but the logic within addTypeSupport and getObject is more involved.

When addTypeSupport is invoked, a new property is added to the typeMapper object: the property name is the typeName parameter and the value is the buildFunc parameter.

The buildFunc parameter has to be a function; it contains the object creation logic for the given type.

Demo ^

How to use SimpleFactory?

In FactoryDemo there is the demoSimpleFactory function, where I created janeDoe and billDoe using the getSales and getEngineer functions.

demoSimpleFactory: function() {  
    var simpleFactory = SimpleFactory.getInstance('SimpleEmployeeFactory');

    console.log(SEPARATOR);
    console.log(HEADER_PREFIX, 'SimpleFactory sample')
    console.log(SEPARATOR);

    console.log('The factory is called: [' + simpleFactory.getName() + ']');

    console.log('We have a new sales colleague:');
    var janeDoe = simpleFactory.getSales('Jane', 'Doe');
    console.log(JSON.stringify(janeDoe, null, 2));

    console.log();
    console.log('We have a new engineer colleague:');
    var billDoe = simpleFactory.getEngineer('Bill', 'Doe');
    console.log(JSON.stringify(billDoe, null, 2));
}

The console output of this demo code is:

========================================================
           SimpleFactory sample
========================================================
The factory is called: [SimpleEmployeeFactory]  
We have a new sales colleague:  
{
  "firstName": "Jane",
  "lastName": "Doe",
  "comission": 0,
  "salary": 100,
  "projects": [],
  "type": "sales"
}

We have a new engineer colleague:  
{
  "firstName": "Bill",
  "lastName": "Doe",
  "salary": 150,
  "manager": "",
  "technologies": [],
  "projects": [],
  "type": "engineer"
}

How to use ConfigurableFactory?

In the same FactoryDemo file, there is the demoConfigurableFactory function:

demoConfigurableFactory: function() {  
    console.log(SEPARATOR);
    console.log(HEADER_PREFIX, 'ConfigurableFactory sample')
    console.log(SEPARATOR);

    var employeeFactory = ConfigurableFactory.getInstance('EmployeeFactory');

    employeeFactory.addTypeSupport('sales', function createSales() {
        return {
            firstName: '',
            lastName: '',
            comission: 0,
            projects: [],
            type: 'sales'
        };
    });

    employeeFactory.addTypeSupport('engineer', function createEngineer() {
        return {
            firstName: '',
            lastName: '',
            salary: 150,
            manager: '',
            technologies: [],
            projects: [],
            type: 'engineer'
        };
    });

    console.log('The factory is called: [' + employeeFactory.getName() + ']');

    var johnDoe = employeeFactory.get('sales');
    console.log('John Doe is:');
    console.log(JSON.stringify(johnDoe, null, 2));

    try {
        console.log('Trying to build CEO...');
        var dannyDoe = employeeFactory.get('ceo');
    } catch (e) {
        console.error(e.message);
    }
}

I used the addTypeSupport function to extend the factory's functionality, so that it can create sales and engineer employees.

The console content after running the demoConfigurableFactory is:

========================================================
           ConfigurableFactory sample
========================================================
The factory is called: [EmployeeFactory]  
John Doe is:  
{
  "firstName": "",
  "lastName": "",
  "comission": 0,
  "projects": [],
  "type": "sales"
}
Trying to build CEO...  
Cannot build type:[ceo], does not know how to...  

I hope you enjoyed this blog post. If you think it contains useful information and can help others learn, please share it.

If you want to get notifications about my upcoming articles please follow me on @gergob.

]]>
<![CDATA[Why deal with JS?]]>https://dealwithjs.io/why-deal-with-js/f8530a7f-bcc6-42d6-a321-a1e439c26fb6Fri, 04 Nov 2016 15:53:00 GMT

Why deal with JS? It's a fair question: why is the tech world nowadays so JS-oriented?


I know, I know what you are saying right now: there are other things, like big data, cloud computing, AI and so on, which developers are excited about...


Well...

Yes, there are other interesting and important technologies too, but JS has such a big community around it that it's really overwhelming, and year by year JS tackles more and more domains.

If we take a step back and look at the role of JS in the tech community and in the industry at the beginning of the 2000s, we can see that it was mostly used for making websites dynamic - and that's it, period.

In 2009 there was a turning point: node.js was released, and this triggered an avalanche. Suddenly developers started to use JS - a single-threaded language with an asynchronous programming model - on the server side (where Java and .NET were used before), and as history has proven, this worked out quite well.

In 2010 angular.js was released. At that time this client-side MVC framework was so different from (better than?) all the others available on the market (like knockout.js or backbone.js) that the community quickly started to adopt it. Angular 1.x has been around for ~7 years, which I think signifies how good that framework is and how much kudos the Angular team deserves. Angular 2 was released recently, and it can be used from JS, TypeScript and Dart. The creation of Angular 2 was again a great joint effort of the JS community and enterprises (Google on the Angular side and Microsoft on the TypeScript side).

react.js appeared in 2013 and has evolved rapidly. With its virtual DOM and JSX syntax it got a lot of attention and criticism from the JS community, but it's still there and it's working very well for its users. Of course the JSX syntax has its pros and cons, but that is a different topic which we will cover later on.

In 2015, again, a big step was made towards JS conquering the tech world: RN (React Native) was released, which glues web, Android and iOS development together. Many developers consider RN a competitor to hybrid mobile app solutions, which might be true, but as far as I can see, RN is the one that can be easily adopted by web developers, and it has managed to get the most out of the community.

I just mentioned three milestones which resulted in JS being widely known, used and embraced by the tech community; I didn't even mention the release of express.js or Twitter's Bootstrap components, which were significant, but not as important as the ones mentioned before.

So...

To give my answer to the question Why deal with JS?

I think that nowadays, and in the next decade, JS will be the programming language (along with the mentioned technologies) which gives IT companies more flexibility from a human-resourcing point of view than any other technology has since the creation of the C programming language. There are two main reasons why I say this.

The first reason is that today, if you have good JS knowledge (it is your main technology, your main asset), you have all the necessary technologies and frameworks at hand to deliver a product: you can create a website or a mobile application, and even a console or a SmartTV application can be built using only JS (of course these applications need some platform-specific features, and sometimes other technologies have to be used besides JS, but the core can be done with JS). So we can say that JS is the foundation of a (more or less) universal application development platform.

The second reason is related to the first one: career development and human resourcing. Think of any tech company; one of the biggest issues they face is not having the necessary developer-power to deliver some projects, because the current employees are not competent in the required technology, or because they cannot recruit people with the necessary technical skills fast enough. So, if the technology stack has a common base - the same programming language and the same development environment - and offers the possibility to deliver products on many different platforms, then extending and managing delivery teams becomes much, much easier, because there is common knowledge and not much time is needed to ramp up technology-wise. I assume I do not need to explain how much flexibility this offers to a tech company.

I really believe the points I detailed are true and will shape our future and the tech industry's future too, but until we get there, there is still (much?) work to do in the JS world.

]]>