There’s No Show Business in Business?

Of all the trite terminology in the Scrum cookbook, Show & Tell has always been my least favourite.  Couple it with a desire to entertain and you’ve got my idea of a miserable time.  I just don’t see the place for Showbiz in a serious Engineering team.

Or at least I didn’t, and now like a converted zealot I’m happy to evangelise about the benefits of making Show & Tell as engaging as possible!

The Accidental Format

Let’s start with how we run Show & Tell after six months of practice.  It’s a Company-wide event across three locations, run for an hour every other Tuesday (‘Turnaround Tuesday’, when our sprints end and start).  We have audio and video in each office, people can dial in from home, and we screen share from whichever location is leading a talk.

It’s used by every team to showcase what’s going on in their world (and to ask for what they want!).  PowerPoint is hardly used; people cast real-time demos and content from their laptops.

Each talk is between five and eight minutes long and all presenters stick strictly to the timing.  I can’t emphasise enough what a key bit of choreography this is.  Timing sharpens messages and really makes everyone think about the key point they’re trying to get across.

Arriving here has taken a lot of thought and feedback and certainly isn’t how I expected it to turn out.

Constant Vigilance

The shared Show & Tell arose from two prompts.  The first was the passion and energy of our Agile evangelist Andy Deighton.  The second was our then-new CTO Peet’s challenge to make ourselves “great at everything we do”.  Andy made me take a step back and think about all our Agile practices.

That phrase from Peet was a lightbulb moment – of course I want to be great at everything I do, but there are only so many hours in the day, right?  And as I mentioned above, I’d always disliked even the phrase “Show & Tell”, but at that point we resolved that if we were going to devote energy to it, then we would make it something really useful.

We started Show & Tell sessions pretty much as conventional feedback on each Engineering Team’s sprint.  The only difference was that we treated the whole Company as stakeholders and invited them all along.  The first few sessions were pretty shambolic.  Tech failed, we sometimes rambled a bit, and we bamboozled our audience with technical jargon.  But one thing we did do well was ask for, and act upon, feedback from each session.

The first improvement was to do with timing.  I’ve already made the point above but I’ll make it again.  The evolution to short, sharp, strictly time-boxed talks was rapid and made the event shorter overall (and therefore less expensive!) and far more compelling.

Another crucial early item of feedback was that other teams within the Company should get involved.  Obvious, right?  It feels that way now, but at the time we weren’t that sure.  However, as soon as the sessions alternated between marketing, customer services, sales, engineering etc., the value of the ceremony increased exponentially.

Next, and most uncomfortably for me, there was a consistent refrain of “Show more, Tell less”.  My road to Damascus moment on this came after a series of quite brilliant presentations which used humour and some strong visual elements to get their point across.  The effect that they had on the audience, and on the way future Show & Tell sessions were conducted was profound.

Finally, we’ve set ourselves a target of evolving the session in at least one way every month.  Whether it’s real time voting, cake or casting, we will always try something new to keep the meeting fresh.

Stage Fright

Going back to the introduction, I’m an Engineer, not an after-dinner speaker, and the vast majority of people in the Company feel the same way.  The prospect of presenting to the whole Company fills most of us with dread, but the fact is that most of us have now done it.  Coupled with a hugely friendly audience, this shared experience has normalised getting up and talking in front of a crowd.  I’m not saying everyone looks forward to it, but the fear factor has largely gone.

So there you have it, how we Show & Tell.  None of it’s rocket science, but it’s become an important event in the Company calendar.  Every two weeks, teams across the Company get to share their justified pride in the work that they’re delivering.  And if there’s a little bit of Showbiz involved then – grudgingly – I admit that that’s no bad thing!

Webpack 2 and code splitting

This week I’ve been updating our front end applications to Webpack 2. The upgrade proved to be fairly straightforward, and one of the main benefits so far has been the ability to code split: dynamically loading JavaScript at runtime and thus building a progressive web app.

I would recommend reading the following article for more information on how to do it:

https://hackernoon.com/straightforward-code-splitting-with-react-and-webpack-4b94c28f6c3f#.ikyl2htnu

The benefits for us are that we will be able to maintain different implementations of sub-modules of our front end applications and serve them up based on the tenant who is logged in. These can either be maintained in one code base or pulled in via different npm packages.

This will also enable us to split code out into separate bundles to improve initial load speed.
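
As a rough sketch of what this looks like (the module path here is hypothetical, and it assumes the dynamic import() syntax is enabled in your Babel/Webpack setup), an import() call marks a split point, so Webpack 2 emits the module as its own chunk and only fetches it when requested:

// Webpack 2 treats import() as a split point and emits a separate chunk.
// './reports/tenant-a' is a made-up sub-module used purely for illustration.
const loadReports = tenant =>
  import(`./reports/${tenant}`)
    .then(module => module.default)
    .catch(err => console.error('Failed to load reports module', err));

// Load the tenant-specific implementation on demand at runtime.
loadReports('tenant-a').then(Reports => {
  // render or register the tenant-specific component here
});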

Gotchas so far

Only one so far. In production, if you are serving up your JS bundles as gzipped files from your Express server like so:


app.get('*.js', function (req, res, next) {
    // Point the request at the pre-compressed .gz file on disk...
    req.url = req.url + '.gz';
    // ...and tell the browser that the payload is gzip-encoded.
    res.set('Content-Encoding', 'gzip');
    next();
});

Then you will need to exclude the JS you wish to load dynamically at runtime from this rewrite, since when the front end pulls the file down itself it won’t be able to handle it in gzip format.
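
One way to handle this (a sketch, assuming you give your dynamically loaded chunks a recognisable name via Webpack’s output.chunkFilename option, e.g. '[name].chunk.js') is to adjust the middleware so it skips those files:

app.get('*.js', function (req, res, next) {
    // Dynamically loaded chunks are fetched by Webpack at runtime and must
    // be served as plain JavaScript, so leave them untouched.
    if (req.url.indexOf('.chunk.js') !== -1) {
        return next();
    }
    req.url = req.url + '.gz';
    res.set('Content-Encoding', 'gzip');
    next();
});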

I’ll keep you posted.

Our DevOps Toolbox

DevOps is rapidly being adopted within the IT industry. It’s a set of practices that helps bridge the gap between developers and operations, and the industry has introduced various tools to help organisations achieve this.

Our toolbox

Here at Wealth Wizards we use a wide range of technologies which help us form a DevOps culture.

As our products matured, we realized how important it was to be able to perform operational tasks in an agile way, not only to keep ahead of the competition but also to ensure we release world-class software such as Pension Wizards.

This led us down a road where we began to find various tools that we thought needed to be added to our DevOps toolbox.

Some of the tools we found along the way were open source technologies such as Ansible, Kubernetes, Node.js, Elasticsearch, Kibana, Logstash, Filebeat, Packetbeat, Consul and many more.

How our DevOps toolbox benefits Wealth Wizards

It was imperative that the tools added to our DevOps toolbox were open source, especially as we are a start-up.

Investing in open source technologies allows us to carry out daily tasks successfully whilst keeping costs down, as we don’t want to spend money unnecessarily on things like licenses, or to worry about complying with strict licensing policies.

With automation fueling the IT industry, we decided to take full advantage of the DevOps tools available to us. Our DevOps toolbox enabled us to automate many of our manual tasks, which freed up engineers’ time to focus on work that adds value to our products and ultimately improves the customer experience.

Ups and Downs

Although using open source technologies has benefited us, allowing us to release software updates rapidly, it has also introduced unforeseen problems.

As part of keeping our systems secure, we make it a priority to remove any vulnerabilities that may pose a risk to our applications. This means we have had to schedule regular updates for any software used in our applications, which can at times be time-consuming.

As you may already be aware, out-of-date software can introduce inconsistencies in the way it functions.

A perfect example of this is Kubernetes. We initially started using Kubernetes 1.3, which was the latest version at the time.

Initially, we were over the moon with the amount of out-of-the-box functionality it had, but that was sadly short-lived. We quickly began to run into problems as the open source community released rapid updates to fix bugs and add new features.

Now running version 1.5, we are happy with the results, but we are always looking out for new problems so that we can address them as soon as possible.

Although we have encountered problems, we have learnt a lot along the way, so we are happy to say it was worth it.

Conclusion

Our DevOps toolbox has helped us bridge the gap between developers and operations. It has also helped us simplify our build and deployment processes, which has rewarded our engineers with more time to invest in other areas of the business.

Now ask yourself, are you doing DevOps? What’s in your DevOps toolbox? Can you benefit from what’s in our DevOps toolbox?

Leading a team to deliver an MVP at breakneck speed

tl;dr

We were given the task of delivering an MVP within a short timeframe. We chose a microservice based architecture, allowing concurrent development of functionality. We adopted the popular MERN stack, unifying the language across the system. Working closely with the customer and stakeholders was key to delivering what was really wanted. High engineering standards underpinned the work that we did, providing us with fast feedback when the system broke and high confidence in deployments, and leaving us with a robust and extensible system.

The challenge

Several months ago we were given the task of delivering an MVP within only a handful of months, for a product that at its core is an over-the-phone variation of one of our existing offerings. It initially appeared that we could try to crowbar it into a cloned version of our existing offering, but we realized during our initial design sessions that this approach would be drastically hindered by the limitations of our existing system (which was not designed with this in mind). It would also not live up to the high engineering standards we have grown used to within our department.

We chose to adopt the strangler pattern, writing parts of our system from scratch and interfacing with our legacy system where appropriate, to make a deliverable feasible in only a handful of months, and to also build out a platform that is extensible.

In a nutshell, the plan was to quickly spin up a plain but functional UI, a new set of RESTful APIs (to overcome some technical debt we have been carrying with our legacy APIs) and a new set of data stores to keep our new products’ data separate from our existing offerings’. All this within a resilient microservice based architecture. We planned to marshal data between this new system and our legacy system where required, being careful to limit the number of microservices that would facilitate this communication with the legacy system to as few as possible. In fact, we managed to limit this to only one!

The MERN stack

The MERN stack consists of the MongoDB database, the Express web framework, the React front end library (often coupled with Redux), and the Node.js server environment.

Using this stack has several great advantages, including having only one language across the stack (JavaScript), being lightweight with server configuration within the code (making it ideal for microservices), being fast, and being well backed within the software industry.
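
To give a flavour of the “server configuration within the code” point, here is a minimal sketch of the kind of Express service this enables (the route and port below are illustrative, not our actual API):

const express = require('express');

const app = express();

// A single, narrowly focused resource: the essence of a small service.
app.get('/api/v1/health', (req, res) => {
  res.json({ status: 'ok' });
});

// All of the server configuration lives in code; no external web server config is needed.
app.listen(process.env.PORT || 3000, () => {
  console.log('Service listening');
});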

Benefits of the approach

For starters, working with a microservice based architecture lends itself well to working on different parts of the system concurrently, due to the system’s decoupled nature. Assuming contracts are agreed up front, stub servers can be spun up easily using technologies such as Dyson, allowing us to isolate development and testing of components within our system without upstream components even existing yet. That’s pretty awesome.
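
As an example of how lightweight those stubs are, a Dyson stub is little more than a small config file per endpoint; the endpoint and fields below are hypothetical:

// stubs/get-customer.js - one stub definition per endpoint, picked up by
// pointing the dyson CLI at the stubs directory.
module.exports = {
  path: '/api/v1/customers/:id',
  method: 'GET',
  template: {
    id: 'abc-123',
    name: 'Test Customer',
    status: 'active',
  },
};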

Other great benefits include a highly decoupled back end system with a high level of cohesion, alongside a front end that separates concerns well and avoids the tie in of a large framework such as AngularJS. With React, it’s as much a philosophy as it is a library – declare the view and manage state separately, in our case with the popular Redux library. Redux enforces a unidirectional data flow and encourages a functional style, making code easier to reason about.
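
As a tiny illustration of that unidirectional, functional style (the action type and state shape here are invented for the example), a Redux reducer is just a pure function from the previous state and an action to the next state:

// State flows one way: action -> reducer -> new state -> view.
const initialState = { submitted: false };

const applicationReducer = (state = initialState, action) => {
  switch (action.type) {
    case 'APPLICATION_SUBMITTED':
      // Never mutate; return a new state object instead.
      return { ...state, submitted: true };
    default:
      return state;
  }
};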

Engineering standards

Throughout development, we have prided ourselves on maintaining a high level of engineering standards. This goes much further than simply baking code linting into our pipeline.

For one, it has involved writing meaningful unit tests with a high level of test coverage. This allows us to validate that new functions work as desired and provide fast feedback about any regressions that have been introduced.
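
As a trivial sketch of what we mean (assuming a Mocha/Chai style setup and a hypothetical normalizeName helper; neither is prescribed by our stack):

const { expect } = require('chai');
const normalizeName = require('../src/normalizeName'); // hypothetical helper under test

describe('normalizeName', () => {
  it('trims whitespace and capitalises the first letter', () => {
    expect(normalizeName('  alice ')).to.equal('Alice');
  });

  it('returns an empty string for undefined input', () => {
    expect(normalizeName(undefined)).to.equal('');
  });
});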

It has also involved writing tests at the component or integration level, providing us with confidence that the building blocks of our system fit together as components in the way we expect.

In addition, we have included several system tests, serving as a useful smoke test within our pipeline to verify that the system is up and running and not obviously broken.

Monitoring and logging are another key area for us and should be for anyone adopting a similar approach. We’ve used a “ping” dashboard to show the health of each microservice at any given time, providing us with another useful mechanism to spot failures fast.

It goes without saying that manual exploratory testing has remained a tool within our arsenal.

Working with the customer

It’s not surprising that customers will often ask for a fixed scope with a fixed time, but you can’t have both. We have tried to manage this by communicating well and prioritizing effectively. It’s been a key challenge for us, working with the customer, to establish which features are essential to delivering the key business value and which are not.

What next

As the project continues to move forward, it’s key that we continue to gain fast and frequent feedback and involvement from the customer to help guide the future direction of the product. It’s also key that we continue to adhere to, and even improve upon, our high engineering standards.

In addition to incrementally adding features, I envisage us looking to add contract testing between services within our system. We know that tests at the unit and component/integration level are great to verify that those parts of the system work as expected, but it’s a great improvement to know that these parts of the system satisfy the contracts expected of them by the other services within the system. This will enable us to individually deploy microservices with truly high confidence.

Getting functional in JS with Ramda

Introduction

Lately, I’ve begun programming in JS using an increasingly functional style, with the help of Ramda (a functional programming library). What does this mean? At its core, this means writing predominantly pure functions, handling side effects and making use of techniques such as currying, partial application and functional composition. You can choose to take it further than this, however, that’s a story for another day.

The pillars of functional programming in JS

Pure functions

One key area of functional programming is the concept of pure functions. A pure function is one that takes an input and returns an output. It does not depend on external system state and it does not have any side effects. Pure functions will, for a given input, always return the same output, making them predictable and easy to test.
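
A quick, contrived illustration of the difference:

// Impure: depends on external state and mutates it (a side effect).
let total = 0;
const addToTotal = x => {
  total += x;
  return total;
};

// Pure: the output depends only on the inputs, with no side effects.
const add = (x, y) => x + y;
add(2, 3);
// => 5, every time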

Side effects

It’s worth mentioning that side effects are sometimes unavoidable, and there are different techniques you can adopt to deal with these. But the key objective here is minimising side effects and handling these away from your pure functions.
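
For example, rather than fetching or logging inside your transformation logic, you can keep the transformation pure and push the side effects out to the edge (the endpoint below is purely illustrative):

// Pure: easy to test in isolation.
const toDisplayName = user => `${user.firstName} ${user.lastName}`.trim();

// Impure shell: the side effects (network call, logging) live here, at the edge.
const showUser = id =>
  fetch(`/api/users/${id}`)
    .then(res => res.json())
    .then(toDisplayName)
    .then(name => console.log(name));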

Currying

One of the key building blocks of functional programming is the technique of currying. This is where you take a polyadic function (one with multiple arguments) and translate it into a sequence of monadic functions (functions that take a single argument). This works by each function in the sequence returning a new function that takes the next argument, until all the arguments have been supplied. This allows you to partially apply functions by fixing a number of arguments. Importantly, this also enables you to compose functions together, which I’ll get onto later.

Example of partial application:

// Function for multiplying two numbers together
const multiplyTogether = (x, y) => {
  return x * y;
};
multiplyTogether(2, 5);
// => 10

// Curried multiplication of two numbers
const multiplyTogetherCurried = x => y => {
  return x * y;
};
multiplyTogetherCurried(2)(5);
// => 10

// Partial application used to create a double-number function
const doubleNumber = multiplyTogetherCurried(2);
doubleNumber(5);
// => 10

Composition

Building on currying, and adopting another functional discipline of moving data to be the last argument of your function, you can now begin to make use of functional composition, and this is where things start to get pretty awesome.

With functional composition, you create a sequence of functions which (after the first in the sequence) must be monadic; each function feeds its return value into the next function in the sequence as its argument, and the result is returned at the end of the sequence. We do this in Ramda using compose. Adopting this style can make code not only easier to reason about but also easier to read and write. In my opinion, where this style really shines is in data transformation, allowing you to break down potentially complex transformations into logical steps. Ramda is a big help here: although you could simply make use of compose and write your own curried monadic functions, Ramda is a library of super useful, (mostly) curried functions, with functions for mapping over data, reducing data, omitting data based on keys, flattening nested data and so much more!
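
A small example (the price-formatting functions here are invented for illustration; remember that R.compose applies its functions from right to left, or bottom to top as written):

import R from 'ramda';

const formatPrices = R.compose(
  R.join(', '),                      // 3. join into a display string
  R.map(p => `£${p.toFixed(2)}`),    // 2. format each price
  R.filter(p => p > 0)               // 1. drop invalid prices
);

formatPrices([10, -2, 4.5]);
// => '£10.00, £4.50'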

Imperative vs functional

Now that you’ve (hopefully) got a better idea of what functional programming is, the question becomes: is following an imperative style wrong? In my opinion, no. When it comes down to choosing between imperative and functional programming in JS, I believe you have to be pragmatic – whilst functional may be your go-to choice, there are times when you have to ask yourself if a simple if/else statement will do the job. That said, adopting the discipline of writing pure functions where possible and managing side effects, along with handling data transformations using functional composition, will likely make your life as a developer a lot easier and more enjoyable. It sure has for me!

A worked example using Ramda

I’ve included a worked example of a function which I rewrote from a predominantly imperative style to a functional style, as I felt the function was becoming increasingly difficult to reason about, and with further anticipated additions I was concerned it would become increasingly brittle.

Original function:

import R from 'ramda';

const dataMapper = factFindData => {
  const obj = {};

  Object.keys(factFindData).forEach(k => {
    if (k === 'retirement__pensions') {
      obj.retirement__pensions = normalizePensions(factFindData);
      return;
    }

    if (k !== 'db_options' && k !== 'health__applicant__high_blood_pressure_details__readings') {
      obj[k] = factFindData[k];
      return;
    }

    if (k === 'health__applicant__high_blood_pressure_details__readings') {
      if (factFindData.health__applicant__high_blood_pressure !== 'no') {
        obj.health__applicant__high_blood_pressure_details__readings = factFindData[k];
      }
    }
  });

  return {
    ...emptyArrays,
    ...R.omit(['_id', 'notes', 'created_at', 'updated_at'], obj),
  };
};
Refactored function:

import R from 'ramda';

const normalizeForEngine = x => ({ ...emptyArrays, ...x });
const omitNonEngineKeys = R.omit(['_id', 'notes', 'created_at', 'updated_at', 'db_options']);

const normalizeBloodPressure =
  R.when(
    x => x.health__applicant__high_blood_pressure === 'no',
    R.omit(['health__applicant__high_blood_pressure_details__readings'])
  );

const mapNormalizedPensions =
  R.mapObjIndexed((v, k, o) => k === 'retirement__pensions' ? normalizePensions(o) : v);

const dataMapper =
  R.compose(
    normalizeForEngine,
    omitNonEngineKeys,
    normalizeBloodPressure,
    mapNormalizedPensions
  );

As you can see, when trying to figure out what the data mapper function is doing in the original function, I have to loop through an object, update and maintain the state of a temporary variable (in my head), in each loop checking against multiple conditions, before then taking this result and sticking it into an object, remembering to remove certain keys.

With the refactored function, at a glance I can say that I’m normalising pensions, then normalising blood pressure, then omitting non-engine keys, before finally normalising the data for the engine. Doesn’t that feel easier to reason about? If a new requirement came in to normalise, let’s say, cholesterol readings, I would simply slot another curried function in after normalizeBloodPressure, called, for argument’s sake, normalizeCholesterol.

Conclusion

Functional programming in JS using Ramda can not only reduce your codebase in size, but it can also increase its readability and testability, and make it easier to reason about.