The Inaugural Birmingham AI Meetup

Wealth Wizards have been doing AI R&D in the WW lab for about a year, and our AI Guild has recently grown to 12 members. Now that our developments are starting to make their way into production, we decided to join the Birmingham AI community to share what we’ve learned and get some cross-pollination of ideas.

When we found there wasn’t an AI community in Birmingham, we decided to start one!

In the four weeks since we launched the group, around a hundred members have signed up.

We ran our first Meetup in Digbeth – an area which is quickly becoming the Silicon Roundabout of Birmingham. And it was awesome.

As the organiser, I was nervous that no one would turn up, but as 18:30 approached more and more people arrived and helped themselves to beer and pizza. In the end, we had twenty-one attendees.

We had three fascinating talks from PushDoctor, 383 and Wealth Wizards (represent!), plus loads of great discussion. One thing that struck us was just how engaged everyone was.

Almost everyone there was working in AI, machine learning or data science, and I loved hearing the war stories from actual practitioners in the field.


Holly Emblem from 383 kicked us off with a talk “Real World applications for Bayesian Statistics and Machine Learning”. 383 have been pushing the boundaries of how to use Bayesian classification to enhance their understanding of their web analytics data.
She also gave some warnings about the next potential AI winter.

Then Josh Sephton from PushDoctor shared their experiences from the trenches of running a chatbot in anger for the past year. They’d condensed their wisdom down to four key personality traits they’ve found it useful for a chatbot to have.

You can see the whole talk here on the new Brum AI YouTube channel.

We hope to get our AV woes sorted out and have videos of all of our talks in the future.


Finally, Kojo Hinson gave an awesome and mind-bending talk on “Understanding the ‘Natural’ in Natural Language Processing”. We hope to publish his slides on Slideshare soon.


Several of the AI Boffins who attended our first Meetup are already signed up to deliver talks at future Meetups. The next one is in Digbeth on the 14th December. Why not sign up and come along!

There’s a waitlist already, but we’ll get a larger venue if enough people sign up 🙂


Microservices make hotfixes easier

Microservices can ease the pain of deploying hotfixes to live due to the small and bounded context of each service.

Setting the scene

For the sake of this post, imagine that your system at work is written and deployed as a monolith. Now, picture the following situation: a stakeholder says, “I need this fix in before X, Y, and Z.” It’s not an uncommon one.
But let’s say that X, Y, and Z are all already on the mainline branch and deployed to your systems integration environment. This presents a challenge. There are various ways you could approach this – some of them messier than others.

The nitty gritty

One approach would be to individually revert the X, Y, and Z commits in Git, implement the hotfix straight onto the mainline, and deploy the latest build from there. Then, when ready (and once your hotfix has been deployed to production), you would need to go back and individually revert the reverts. A second deployment would be needed to bring your systems integration environment back to where it was, now with the hotfix in there too, and life carries on. Maybe there are better ways to do this, but one way or another it’s not difficult to see how much of a headache this can potentially cause.
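
To make the revert dance concrete, here’s a rough sketch of the Git commands involved – the commit references are placeholders rather than real SHAs:

# Back the pending changes out of the mainline (newest first)
git revert <sha-of-Z> <sha-of-Y> <sha-of-X>

# Implement the hotfix on the mainline, then deploy the resulting build
git commit -am "Hotfix for the urgent issue"

# Once the hotfix is live, reinstate the original work by reverting the reverts,
# then deploy again to bring systems integration back to where it was
git revert <sha-of-revert-of-Z> <sha-of-revert-of-Y> <sha-of-revert-of-X>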

Microservices to the rescue!

But then you remember that you are actually using microservices and not a monolith after all. After checking, it turns out that X, Y and Z are all changes to microservices not affected by the hotfix. Great!
Simply fix the microservice in question and deploy that change through your environments ahead of the microservices containing X, Y, and Z – voilà. To your stakeholders it looks like a hotfix, but to you it feels like every other release!

Conclusion

Of course, you could still end up in a situation where a change or two needs to be backed out of one or more of your microservice mainlines in order for a hotfix to go out. But I’m betting it will happen less often, and that when it does it will be less of a headache than with your old monolith.

 

Webpack 2 and code splitting

This week I’ve been updating our front end applications to Webpack 2. The upgrade proved fairly straightforward, and one of the main benefits so far has been the ability to code-split: dynamically loading JavaScript at runtime and thus building a progressive web app.

I would recommend reading the following article for more information on how to do it:

https://hackernoon.com/straightforward-code-splitting-with-react-and-webpack-4b94c28f6c3f#.ikyl2htnu

The benefits for us are that we will be able to maintain different implementations of sub modules of our front end applications and serve them up based on the tenant who is logged in. These can either be maintained in one code base or pulled in via different npm packages.

This will also enable us to split code out into separate bundles to improve initial load speed.
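
As a rough illustration (the module paths and tenant names here are hypothetical, not our actual code), Webpack 2’s dynamic import() syntax marks a split point, so each dynamically imported module is emitted as its own bundle and only fetched at runtime:

// Hypothetical sketch: pick a tenant-specific implementation at runtime.
// Webpack 2 emits each dynamically imported module as a separate chunk.
const loadTenantModule = tenant =>
  tenant === 'acme'
    ? import('./tenants/acme')     // its own bundle
    : import('./tenants/default'); // another bundle

loadTenantModule('acme').then(module => {
  // the chunk has now been fetched over the network
  module.default();
});

Depending on your Babel setup you may also need a syntax plugin for dynamic import.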

Gotchas so far

Only one so far: in production, if you are serving up your JS bundles as gzipped files from your Express server like so:


app.get('*.js', function (req, res, next) {
    // Rewrite the request to point at the pre-gzipped file and tell the
    // browser what encoding it is getting back
    req.url = req.url + '.gz';
    res.set('Content-Encoding', 'gzip');
    next();
});

Then you will need to exclude the JS you wish to load dynamically at runtime from the rewrite above, since when the front end pulls that JS down it won’t be able to handle it in gzip format.
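
One way to handle that (a sketch, assuming your dynamically loaded bundles follow a recognisable naming convention – for example, filenames containing ‘chunk’ via Webpack’s output.chunkFilename setting) is to skip the rewrite for those files:

app.get('*.js', function (req, res, next) {
    // Skip the gzip rewrite for dynamically loaded chunks
    // (assumes their filenames contain 'chunk' – adjust to your own naming)
    if (req.url.indexOf('chunk') !== -1) {
        return next();
    }
    req.url = req.url + '.gz';
    res.set('Content-Encoding', 'gzip');
    next();
});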

I’ll keep you posted.

Our DevOps Toolbox

DevOps is rapidly being adopted within the IT industry. It’s a set of practices that help bridge the gap between developers and operations, and a wide range of tools has grown up around it to help organisations put those practices into effect.

Our toolbox

Here at Wealth Wizards we use a wide range of technologies which help us form a DevOps culture.

As our products matured, we realised how important it was to be able to perform operational tasks in an agile way – not only to keep ahead of the competition, but also to ensure we release world-class software such as Pension Wizards.

This led us down a road where we began to find various tools that we thought needed to be added to our DevOps toolbox.

Some of the tools we found along the way were open source technologies such as Ansible, Kubernetes, Node.js, Elasticsearch, Kibana, Logstash, Filebeat, Packetbeat, Consul and many more.

How our DevOps toolbox benefits Wealth Wizards

It was imperative that the tools added to our DevOps toolbox were open source, especially as we are a start-up.

Investing in open source technologies allows us to carry out our daily tasks successfully whilst keeping costs down: we don’t want to spend money unnecessarily on things like licences, or to worry about complying with strict licensing policies.

With automation fuelling the IT industry, we decided to take full advantage of the DevOps tools available to us. Our DevOps toolbox has enabled us to automate a lot of our manual tasks, freeing up engineers’ time to focus on work that adds value to our products and, ultimately, improves the customer experience.

Ups and Downs

Although using open source technologies has benefited us, allowing us to release software updates rapidly, it has also introduced unforeseen problems.

As part of keeping our systems secure, we make it a priority to remove any vulnerabilities that may pose a risk to our applications. This means we have had to schedule regular updates for any software used in our applications, which can at times be time-consuming.

As you may already be aware, out-of-date software can introduce inconsistencies in the way it functions.

A perfect example of this is Kubernetes. We initially started using Kubernetes 1.3, which was the latest version at the time.

Initially, we were over the moon with the amount of out-of-the-box functionality it had, but that was sadly short-lived. We quickly began to run into problems as the open source community released rapid updates to fix bugs and add new features.

Now running version 1.5, we are happy with the results, but we are always looking out for new problems so that we can address them as soon as possible.

Although we have encountered problems, we have learnt a lot along the way, so we are happy to say it was worth it.

Conclusion

Our DevOps toolbox has helped us bridge the gap between developers and operations. It has also helped us simplify our build and deployment processes, which has rewarded our engineers with more time to invest in other areas of the business.

Now ask yourself, are you doing DevOps? What’s in your DevOps toolbox? Can you benefit from what’s in our DevOps toolbox?

Getting functional in JS with Ramda

Introduction

Lately, I’ve begun programming in JS in an increasingly functional style, with the help of Ramda (a functional programming library). What does this mean? At its core, it means writing predominantly pure functions, handling side effects, and making use of techniques such as currying, partial application and functional composition. You can choose to take it further than this, but that’s a story for another day.

The pillars of functional programming in JS

Pure functions

One key area of functional programming is the concept of pure functions. A pure function is one that takes an input and returns an output. It does not depend on external system state and it does not have any side effects. For a given input, a pure function will always return the same output, making it predictable and easy to test.
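
A quick illustrative sketch (the functions are made up for this post):

// Pure: the output depends only on the input, and nothing outside is touched
const addVat = price => price * 1.2;

// Impure: the output depends on external state, which it also mutates
let runningTotal = 0;
const addToTotal = price => {
  runningTotal += price;
  return runningTotal;
};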

Side effects

It’s worth mentioning that side effects are sometimes unavoidable, and there are different techniques you can adopt to deal with these. But the key objective here is minimising side effects and handling these away from your pure functions.

Currying

One of the key building blocks of functional programming is the technique of currying. This is where you take a polyadic function (one with multiple arguments) and translate it into a sequence of monadic functions (functions that take a single argument). Each function in the sequence returns a new function that accepts the next argument, until all of the arguments have been supplied. This allows you to partially apply functions by fixing a number of arguments. Importantly, it also enables you to compose functions together, which I’ll get onto later.

Example of partial application:

// Function for multiplying two numbers together
const multiplyTogether = (x, y) => {
  return x * y;
};
multiplyTogether(2, 5);
// => 10

// Curried multiplication of two numbers
const multiplyTogetherCurried = x => y => {
  return x * y;
};
multiplyTogetherCurried(2)(5);
// => 10

// Partial application used to create a double number function
const doubleNumber = multiplyTogetherCurried(2);
doubleNumber(5);
// => 10

Composition

Building on currying and adopting another functional discipline of moving data to be the last argument of your function, you can now begin to make use of functional composition, and this is where things start to get pretty awesome.
With functional composition, you create a sequence of functions in which each function feeds its returned value into the next function as its argument, with the result returned at the end of the sequence. (After the first function to be applied, every function in the sequence must be monadic.) We do this in Ramda using compose. Adopting this style can make code not only easier to reason about but also easier to read and write.

In my opinion, where this style really shines is in data transformation, allowing you to break down potentially complex transformations into logical steps. Ramda is a big help here: although you could simply use compose and write your own curried monadic functions, Ramda is a library of super useful, (mostly) curried functions for mapping over data, reducing data, omitting data based on keys, flattening and unflattening objects and so much more!
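
As a tiny sketch of the idea (the functions are invented for illustration):

import R from 'ramda';

// compose applies right to left: trim, then upper-case, then add the exclamation
const exclaim = s => `${s}!`;
const shout = R.compose(exclaim, R.toUpper, R.trim);

shout('  hello there  ');
// => 'HELLO THERE!'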

Imperative vs functional

Now that you’ve (hopefully) got a better idea of what functional programming is, the question becomes: is following an imperative style wrong? In my opinion, no. When it comes down to choosing between imperative and functional programming in JS, I believe you have to be pragmatic – whilst functional may be your go-to choice, there are times when you have to ask yourself whether a simple if/else statement will do the job. That said, adopting the discipline of writing pure functions where possible and managing side effects, along with handling data transformations using functional composition, will likely make your life as a developer a lot easier and more enjoyable. It sure has for me!

A worked example using Ramda

I’ve included a worked example of a function which I rewrote from a predominantly imperative style to a functional style, as I felt the function was becoming increasingly difficult to reason about, and with further additions anticipated, I was concerned it would become increasingly brittle.

Original function:

import R from 'ramda';

const dataMapper = factFindData => {
  const obj = {};

  Object.keys(factFindData).forEach(k => {
    if (k === 'retirement__pensions') {
      obj.retirement__pensions = normalizePensions(factFindData);
      return;
    }

    if (k !== 'db_options' && k !== 'health__applicant__high_blood_pressure_details__readings') {
      obj[k] = factFindData[k];
      return;
    }

    if (k === 'health__applicant__high_blood_pressure_details__readings') {
      if (factFindData.health__applicant__high_blood_pressure !== 'no') {
        obj.health__applicant__high_blood_pressure_details__readings = factFindData[k];
      }
    }
  });

  return {
    ...emptyArrays,
    ...R.omit(['_id', 'notes', 'created_at', 'updated_at'], obj),
  };
};
Refactored function:

import R from 'ramda';

const normalizeForEngine = x => ({ ...emptyArrays, ...x });
const omitNonEngineKeys = R.omit(['_id', 'notes', 'created_at', 'updated_at', 'db_options']);

const normalizeBloodPressure =
  R.when(
    x => x.health__applicant__high_blood_pressure === 'no',
    R.omit(['health__applicant__high_blood_pressure_details__readings'])
  );

const mapNormalizedPensions =
  R.mapObjIndexed((v, k, o) => k === 'retirement__pensions' ? normalizePensions(o) : v);

const dataMapper =
  R.compose(
    normalizeForEngine,
    omitNonEngineKeys,
    normalizeBloodPressure,
    mapNormalizedPensions
  );

As you can see, to figure out what the original dataMapper function is doing, I have to loop through an object, update and maintain the state of a temporary variable in my head, check multiple conditions on each iteration, and then take that result and stick it into an object, remembering to remove certain keys.

With the refactored function, at a glance I can say that I’m normalising pensions, then normalising blood pressure, then omitting non-engine keys, before finally normalising the data for the engine. Doesn’t that feel easier to reason about? If a new requirement came in to normalise, let’s say, cholesterol readings, I would simply slot another curried function – call it, for argument’s sake, normalizeCholesterol – into the pipeline after normalizeBloodPressure, as sketched below.
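
A hypothetical sketch of what that might look like (the cholesterol field names are invented for illustration):

const normalizeCholesterol =
  R.when(
    x => x.health__applicant__high_cholesterol === 'no',
    R.omit(['health__applicant__high_cholesterol_details__readings'])
  );

const dataMapper =
  R.compose(
    normalizeForEngine,
    omitNonEngineKeys,
    normalizeCholesterol, // applied after normalizeBloodPressure (compose runs right to left)
    normalizeBloodPressure,
    mapNormalizedPensions
  );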

Conclusion

Functional programming in JS using Ramda can not only reduce your codebase in size, but it can also increase its readability and testability, and make it easier to reason about.