Making asynchronous code look synchronous in JavaScript

Why go asynchronous

Asynchronous programming is a great paradigm, offering a key benefit over its synchronous counterpart – non-blocking I/O within a single-threaded environment. This is achieved by allowing I/O operations such as network requests and reading files from disk to run outside the normal flow of the program, which enables responsive user interfaces and highly performant code.

The challenges faced

To people coming from a synchronous language like PHP, the concept of asynchronous programming can seem both foreign and confusing at first, which is understandable. One moment you were programming one line at a time in a nice sequential fashion; the next thing you know, you’re skipping entire chunks of code, only to jump back up to those chunks at some point later. Goto, anyone? OK, it’s not *that* bad.
Then, you have the small matter of callback hell, a name given to the mess you can find yourself in when you have asynchronous callbacks nested within asynchronous callbacks several times deep – before you know it all hell has broken loose.
Promises came along to do away with callback hell, but for all the good they did, they still didn’t let you read your code in a nice sequential fashion.

Generators in ES6

With the advent of ES6, along came a seemingly unrelated paradigm – generators. Generators are a powerful construct, allowing a function to “yield” control along with an (optional) value back to the calling code, which can in turn resume the generator function, passing an (optional) value back in. This process can be repeated indefinitely.

Consider the following generator function (note the special syntax), and look at how it’s called:

function *someGenerator() {
  console.log(5); // 5
  const someVal = yield 7.5;
  console.log(someVal); // 10
  const result = yield someVal * 2;
  console.log(result); // 30
}

const it = someGenerator();
const firstResult = it.next();
console.log(firstResult.value); // 7.5
const secondResult = it.next(10);
console.log(secondResult.value); // 20
it.next(30);

Can you see what’s going on? The first thing to note is that when a generator is called, an iterator is returned. An iterator is an object that knows how to access items from a collection, one item at a time, keeping track of where it is in that collection. From there, we call next on the iterator, passing control over to the generator and running code up until the first yield statement. At this point, the yielded value is passed to the calling code, along with control. We then call next again, passing in a value, and with it we pass control back to the generator function; this value is assigned to the variable someVal within the generator. This process of passing values in and out of the generator continues, with the console.log statements providing a clearer picture of what’s going on.

One thing to note is how we read value from the object returned by each call to next on the iterator. This is because the iterator returns an object containing two properties: done and value. done indicates whether the iterator has completed, and value contains the value passed to the yield statement. You can also de-structure this object directly, as shown below.
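
For illustration, reusing someGenerator from above with a fresh iterator:

const gen = someGenerator();
const { value, done } = gen.next(); // runs up to the first yield (logging 5 along the way)
console.log(value, done); // 7.5 false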

Using generators with promises

This mechanism of passing control out of the generator, then at some time later resuming control should sound familiar – that’s because this is not so different from the way promises work. We call some code, then at some time later we resume control within a thenable block, with the promise result passed in.

It therefore only seems reasonable that we should be able to combine these two paradigms in some way, to provide a promise mechanism that reads synchronously, and we can!

Implementing a full library to do this is beyond the scope of this article, however the basic concepts are:

  • Write a library function that takes one argument (a generator function)
  • Within the provided generator function, each time a promise is encountered, it should be yielded (to the library function)
  • The library function manages the promise fulfillment, and depending on whether it was resolved or rejected passes control and the result back into the generator function using either next or throw
  • Inside the generator, yielded promises can be wrapped in a try catch, so that rejections can be handled like ordinary thrown errors

For a full working example, check out a bare bones library I wrote earlier in the year called awaiting-async, complete with unit tests providing example scenarios.
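
To give a flavour of how such a library can work, here is a simplified sketch of the general technique (not the awaiting-async implementation itself). The runner drives the iterator, resolving each yielded promise and passing the result back in with next, or the error back in with throw:

const run = generatorFn => {
  const it = generatorFn();

  const step = result => {
    if (result.done) return Promise.resolve(result.value);

    // Wrap the yielded value so plain values work too, then resume the
    // generator with the outcome
    return Promise.resolve(result.value).then(
      value => step(it.next(value)), // resolved: pass the value back in
      error => step(it.throw(error)) // rejected: surface it as a thrown error
    );
  };

  return step(it.next());
};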

How this looks

Using a library such as this (there are plenty of them out there), we can go from this:

const somePromise = Promise.resolve('some value');

somePromise
  .then(res => {
    console.log(res); // some value
  })
  .catch(err => {
    // (Error handling code would go in here)
  });

To this:

const aa = require('awaiting-async');

aa(function *() {
  const somePromise = Promise.resolve('some value');
  try {
    const result = yield somePromise;
    console.log(result); // some value
  } catch (err) {
    // (Error handling code would go in here)
  }
});

And with it, we’ve made asynchronous code look synchronous in JavaScript!

tl;dr

Generator functions can be used in ES6 to make asynchronous code look synchronous.

Microservices make hotfixes easier

Microservices can ease the pain of deploying hotfixes to live due to the small and bounded context of each service.

Setting the scene

For the sake of this post, imagine that your system at work is written and deployed as a monolith. Now, picture the following situation: stakeholder – “I need this fix in before X, Y, and Z”. It’s not an uncommon one.
But let’s say that X, Y, and Z are all already in the mainline branch and deployed to your systems integration environment. This presents a challenge. There are various ways you could go about approaching this – some of them messier than others.

The nitty gritty

One approach would be to individually revert the X, Y, and Z commits in Git, implement the hotfix straight onto the mainline, and deploy the latest build from there. Then, when ready (and your hotfix has been deployed to production), you would need to go back and individually revert the reverts. A second deployment would be needed to bring your systems integration environment back to where it was (now with the hotfix in there too), and life carries on. Maybe there are better ways to do this, but one way or another it’s not difficult to see how much of a headache this can potentially cause.

Microservices to the rescue!

But then you remember that you are actually using microservices and not a monolith after all. After checking, it turns out that X, Y and Z are all changes to microservices not affected by the hotfix. Great!
Simply fix the microservice in question, deploy this change through your environments ahead of the microservices containing X, Y, and Z, and voilà. To your stakeholders it looks like a hotfix, but to you it just feels like every other release!

Conclusion

Of course, you could still end up in a situation where a change or two needs to be backed out of one or more of your microservice mainlines in order for a hotfix to go out. However, I’m betting it will happen less often, and be less of a headache, than with your old monolith.

Leading a team to deliver an MVP at breakneck speed

tl;dr

We were given the task of delivering an MVP within a short timeframe. We chose a microservice based architecture, allowing concurrent development of functionality. We adopted the popular MERN stack, unifying the language across the system. Working closely with the customer and stakeholders was key to delivering what was really wanted. High engineering standards underpinned the work that we did, providing us with fast feedback when the system broke and high confidence in deployments, and leaving us with a robust and extensible system.

The challenge

Several months ago we were given the task of delivering an MVP within only a handful of months, for a product that at its core is an over-the-phone variation of one of our existing offerings. It initially appeared that we could try to crowbar it into a cloned version of our existing offering, but we realized during our initial design sessions that this approach would be drastically hindered by the limitations of our existing system (which was not designed with this in mind). It would also not live up to the high engineering standards we have grown used to within our department.

We chose to adopt the strangler pattern, writing parts of our system from scratch and interfacing with our legacy system where appropriate, to make a deliverable feasible in only a handful of months, and to also build out a platform that is extensible.

In a nutshell, the plan was to quickly spin up a plain but functional UI, a new set of RESTful APIs (to overcome some technical debt we have been carrying with our legacy APIs) and a new set of data stores to keep our new product’s data separate from our existing offerings’ – all within a resilient, microservice-based architecture. We planned to marshal data between this new system and our legacy system where required, being careful to limit the number of microservices that would communicate with the legacy system to as few as possible. In fact, we managed to limit this to only one!

The MERN stack

The MERN stack consists of the MongoDB database, the Express web framework, the React front-end library (often coupled with Redux), and the Node.js server environment.

Using this stack has several great advantages, including having only one language across the stack (JavaScript), being lightweight with server configuration within the code (making it ideal for microservices), being fast, and being well backed within the software industry.

Benefits of the approach

For starters, working with a microservice based architecture lends itself well to working on different parts of the system concurrently, due to the system’s decoupled nature. Assuming contracts are agreed up front, stub servers can be spun up easily using technologies such as Dyson, allowing us to isolate development and testing of components within our system without upstream components even existing yet. That’s pretty awesome.
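
To make the stub idea concrete, here is a minimal hand-rolled sketch using plain Express (Dyson gives you much the same thing via declarative route configs); the /quotes/:id route and its response shape are invented stand-ins for whatever contract has been agreed:

const express = require('express');

const app = express();

// Stubbed endpoint honouring an agreed contract – the real route and payload
// shape would come from the upstream service's API spec
app.get('/quotes/:id', (req, res) => {
  res.json({ id: req.params.id, premium: 42.5, status: 'stubbed' });
});

app.listen(3001, () => console.log('Stub server listening on port 3001'));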

Other great benefits include a highly decoupled back end with a high level of cohesion, alongside a front end that separates concerns well and avoids the tie-in of a large framework such as AngularJS. React is as much a philosophy as it is a library – declare the view and manage state separately, in our case with the popular Redux library. Redux enforces a unidirectional data flow and encourages a functional style, making code easier to reason about.

Engineering standards

Throughout development, we have prided ourselves on maintaining a high level of engineering standards. This goes much further than simply baking code linting into our pipeline.

For one, it has involved writing meaningful unit tests with a high level of test coverage. This allows us to validate that new functions work as desired and provide fast feedback about any regressions that have been introduced.

It has also involved writing tests at the component or integration level, providing us with confidence that the building blocks of our system fit together as components in the way we expect.

In addition, we have included several system tests, serving as a useful smoke test within our pipeline to verify that the system is up and running and not obviously broken.

Monitoring and logging are another key area for us and should be for anyone adopting a similar approach. We’ve used a “ping” dashboard to show the health of each microservice at any given time, providing us with another useful mechanism to spot failures fast.
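
To show how little each service needs to support this, a health endpoint for the dashboard to poll can be as simple as the following sketch (the route name and payload shape are assumptions, not our actual dashboard contract):

const express = require('express');

const app = express();

// Minimal health check that a "ping" dashboard could poll – the route name
// and payload shape here are illustrative only
app.get('/health', (req, res) => {
  res.json({ status: 'ok', uptime: process.uptime() });
});

app.listen(3000);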

It goes without saying that manual exploratory testing has remained a tool within our arsenal.

Working with the customer

It’s not surprising that customers will often ask for a fixed scope with a fixed time, but you can’t have both. We have tried to manage this by communicating well and prioritizing effectively. A key challenge for us has been working with the customer to establish which features are essential to delivering the key business value and which are not.

What next

As the project continues to move forward, it’s key that we continue to gain fast and frequent feedback and involvement from the customer to help guide the future direction of the product. It’s also key that we continue to adhere to, and even improve upon, our high engineering standards.

In addition to incrementally adding features, I envisage us looking to add contract testing between services within our system. We know that tests at the unit and component/integration level are great to verify that those parts of the system work as expected, but it’s a great improvement to know that these parts of the system satisfy the contracts expected of them by the other services within the system. This will enable us to individually deploy microservices with truly high confidence.

Getting functional in JS with Ramda

Introduction

Lately, I’ve begun programming in JS using an increasingly functional style, with the help of Ramda (a functional programming library). What does this mean? At its core, it means writing predominantly pure functions, handling side effects, and making use of techniques such as currying, partial application and functional composition. You can choose to take it further than this; however, that’s a story for another day.

The pillars of functional programming in JS

Pure functions

One key area of functional programming is the concept of pure functions. A pure function is one that takes an input and returns an output. It does not depend on external system state, and it does not have any side effects. For a given input, a pure function will always return the same output, making it predictable and easy to test.
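
As a quick illustration (the functions here are invented for the example):

// Pure: the output depends only on the input, and nothing outside is touched
const addVat = price => price * 1.2;

// Impure: reads and mutates external state, so the same input can give
// different results – a side effect
let runningTotal = 0;
const addToTotal = price => {
  runningTotal += price;
  return runningTotal;
};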

Side effects

It’s worth mentioning that side effects are sometimes unavoidable, and there are different techniques you can adopt to deal with these. But the key objective here is minimising side effects and handling these away from your pure functions.

Currying

One of the key building blocks of functional programming is the technique of currying. This is where you take a polyadic function (one with multiple arguments) and translate it into a sequence of monadic functions (functions that take a single argument). Each function in the sequence returns a new function that takes the next argument, until all of the arguments have been supplied. This allows you to partially apply functions by fixing a number of arguments. Importantly, this also enables you to compose functions together, which I’ll get onto later.

Example of partial application:

// Function for multiplying two numbers together
const multiplyTogether = (x, y) => {
  return x * y;
};
multiplyTogether(2, 5);
// => 10

// Curried multiplication of two numbers
const multiplyTogetherCurried = x => y => {
  return x * y;
};
multiplyTogetherCurried(2)(5);
// => 10

// Partial application used to create double number function
const doubleNumber = multiplyTogetherCurried(2);
doubleNumber(5);
// => 10

Composition

Building on currying, and adopting another functional discipline of making data the last argument of your functions, you can now begin to make use of functional composition, and this is where things start to get pretty awesome.

With functional composition, you create a sequence of functions in which each function (after the first in the sequence) must be monadic; each function feeds its return value into the next function in the sequence as its argument, and the result comes out at the end of the sequence. We do this in Ramda using compose. Adopting this style can make code not only easier to reason about, but also easier to read and write.

In my opinion, where this style really shines is in data transformation, allowing you to break down potentially complex transformations into logical steps. Ramda is a big help here: although you could simply make use of compose and write your own curried monadic functions, Ramda happens to be a library of super useful, (mostly) curried functions, with functions for mapping over data, reducing data, omitting data based on keys, flattening and unflattening objects, and so much more!
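
Here is a small, self-contained sketch of the idea (the helper functions and data are invented for the example; compose, map and sum are Ramda functions):

import R from 'ramda';

const double = x => x * 2;
const shout = s => `${s}!`;

// R.map is curried – giving it only the function returns a new function
// that waits for the data
const doubleAll = R.map(double);

// Reads right to left: double every number, sum them, then append an
// exclamation mark
const format = R.compose(shout, R.sum, doubleAll);

format([1, 2, 3]); // => '12!'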

Imperative vs functional

Now that you’ve (hopefully) got a better idea of what functional programming is, the question becomes: is following an imperative style wrong? In my opinion, no. When it comes down to choosing between imperative and functional programming in JS, I believe you have to be pragmatic – whilst functional may be your go-to choice, there are times when you have to ask yourself if a simple if-else statement will do the job. That said, adopting the discipline of writing pure functions where possible and managing side effects, along with handling data transformations using functional composition, will likely make your life as a developer a lot easier and more enjoyable. It sure has for me!

A worked example using Ramda

I’ve included a worked example of a function which I rewrote from a predominantly imperative style to a functional style, as I felt the function was becoming increasingly difficult to reason about, and with further anticipated additions I was concerned it would become increasingly brittle.

Original function:

import R from 'ramda';

const dataMapper = factFindData => {
  const obj = {};

  Object.keys(factFindData).forEach(k => {
    if (k === 'retirement__pensions') {
      obj.retirement__pensions = normalizePensions(factFindData);
      return;
    }

    if (k !== 'db_options' && k !== 'health__applicant__high_blood_pressure_details__readings') {
      obj[k] = factFindData[k];
      return;
    }

    if (k === 'health__applicant__high_blood_pressure_details__readings') {
      if (factFindData.health__applicant__high_blood_pressure !== 'no') {
        obj.health__applicant__high_blood_pressure_details__readings = factFindData[k];
      }
    }
  });

  return {
    ...emptyArrays,
    ...R.omit(['_id', 'notes', 'created_at', 'updated_at'], obj),
  };
};

Refactored function:

import R from 'ramda';

const normalizeForEngine = x => ({ ...emptyArrays, ...x });
const omitNonEngineKeys = R.omit(['_id', 'notes', 'created_at', 'updated_at', 'db_options']);

const normalizeBloodPressure =
  R.when(
    x => x.health__applicant__high_blood_pressure === 'no',
    R.omit(['health__applicant__high_blood_pressure_details__readings'])
  );

const mapNormalizedPensions =
  R.mapObjIndexed((v, k, o) => k === 'retirement__pensions' ? normalizePensions(o) : v);

const dataMapper =
  R.compose(
    normalizeForEngine,
    omitNonEngineKeys,
    normalizeBloodPressure,
    mapNormalizedPensions
  );

As you can see, when trying to figure out what the data mapper function is doing in the original version, I have to loop through an object, update and maintain the state of a temporary variable (in my head), check multiple conditions on each iteration, and then take this result and stick it into an object, remembering to remove certain keys.

With the refactored function, at a glance I can say that I’m normalising pensions, then normalising blood pressure, then omitting non-engine keys, before finally normalising the data for the engine. Doesn’t that feel easier to reason about? If a new requirement came in to normalise, let’s say, cholesterol readings, I would simply slot another curried function in after normalizeBloodPressure, called, for argument’s sake, normalizeCholesterol.
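
Purely as a sketch of that hypothetical change (normalizeCholesterol is invented here, standing in for whatever new step would be needed; the other functions are those defined above), the composition would simply grow by one line:

const dataMapper =
  R.compose(
    normalizeForEngine,
    omitNonEngineKeys,
    normalizeCholesterol, // hypothetical new curried step, defined elsewhere
    normalizeBloodPressure,
    mapNormalizedPensions
  );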

Conclusion

Functional programming in JS using Ramda can not only reduce your codebase in size, but it can also increase its readability and testability, and make it easier to reason about.

Writing my first microservice!

In the Wealth Wizards software team, we’ve recently embarked upon a journey to break down, or strangle, the monolith, so to speak, and adopt a microservice-based architecture. In a nutshell, this means moving away from one large server-side application with lots of often highly coupled classes, to a system where the functionality is divided up into small, single-purpose services that communicate with each other via some type of API.

One of the beauties of adopting this architecture is that different microservices can not only be written in different languages, but can even use different types of databases. For the time being, however, we decided to roll with Node.js, starting out framework-less but quickly coming to the conclusion that using a framework such as Express was going to make our lives that little bit easier!

Whilst there’s naturally a learning curve that comes with all of this, I found it fascinating how your thought processes begin to shift when it comes to technology in general, across the board. You find yourself intrinsically looking to avoid monolithic code repos and applications across the stack, thinking more carefully about your technology and framework choices, trying to build single-purpose, cohesive applications, and thinking ahead to avoid becoming tied into frameworks where migrating away may become increasingly costly as your ties to them grow!

One thing I’ve enjoyed and noticed as a particular benefit is how testable the code is, and the governance of ensuring that services are fully tested has now been built into our software delivery pipeline, which is super nice! By the time our new platform is deployed, we will have fully unit tested, component tested, and system tested code, making software releases far more automated, far less costly, and with a high level of confidence, something which is really powerful.

The ops side of microservices is, in a way, almost the key part and challenge. Whilst it’s not something I’m heavily involved in, at Wealth Wizards we’re trying to promote a culture of teams of engineers, as opposed to separate ops and dev teams, and as such I’ve been lucky enough to learn about and play with a whole bunch of new and exciting technology within this space, including but not limited to Docker, Docker Compose, Jenkins, and Kubernetes. One of the side effects of our new stack that I’ve really liked is the concept of configuration and infrastructure as code, and this is something I would definitely recommend to others as a best practice.

In general, some of my key takeaways/learnings when it comes to implementing microservices with Node.js are:

  • spec out APIs up front using YAML files
  • plug the API YAML files into an API generation package such as Swagger UI (see the sketch after this list)
  • write unit tests and component tests as you go, integrating them into your pipeline, and failing builds when the tests fail, or if the coverage is not deemed sufficient
  • extract code which is reused commonly across services into npm packages to DRY up code, although accept that there will be a maintenance cost associated with this
  • lock down versions within npm to ensure that different dev machines and server environments are using the exact same versions of dependencies, to avoid headaches!
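
As a rough sketch of the first two points (the file name is invented, and we’re assuming the js-yaml and swagger-ui-express packages here – check their docs for the exact options):

const express = require('express');
const fs = require('fs');
const yaml = require('js-yaml');
const swaggerUi = require('swagger-ui-express');

// Load the API spec that was written up front as YAML (file name is illustrative)
const apiSpec = yaml.load(fs.readFileSync('./api.yaml', 'utf8'));

const app = express();

// Serve interactive API docs generated from the spec
app.use('/api-docs', swaggerUi.serve, swaggerUi.setup(apiSpec));

app.listen(3000);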

As a final point, whilst there is a whole bunch of great knowledge and information out there on the topic, one thing I’ve found is that some level of planning and design up front is essential, but you should accept that some design decisions – such as the code structure, library selections, DRYing up code with packages, and other such governance tasks – will naturally evolve over the initial weeks. At some point you need to just take the plunge into the microservice world and learn as you go!

  • Mark Salvin, Senior Developer