Getting functional in JS with Ramda

Introduction

Lately, I’ve begun programming in JS in an increasingly functional style, with the help of Ramda (a functional programming library). What does this mean? At its core, it means writing predominantly pure functions, isolating side effects, and making use of techniques such as currying, partial application and functional composition. You can take it further than this, but that’s a story for another day.

The pillars of functional programming in JS

Pure functions

One key area of functional programming is the concept of pure functions. A pure function is one that takes an input and returns an output. It does not depend on external system state and it has no side effects. For a given input, a pure function will always return the same output, making it predictable and easy to test.

Side effects

It’s worth mentioning that side effects are sometimes unavoidable, and there are different techniques you can adopt to deal with them. The key objective is to minimise side effects and to handle them away from your pure functions.

Currying

One of the key building blocks of functional programming is the technique of currying. This is where you take a polyadic function (one with multiple arguments) and translate it into a sequence of monadic functions (functions that take a single argument). Each function in the sequence returns a new function that takes the next argument, until all of the arguments have been supplied. This allows you to partially apply a function by fixing some of its arguments. Importantly, it also enables you to compose functions together, which I’ll get onto later.

Example of partial application:

// Function for multiplying two numbers together
const multiplyTogether = (x, y) => {
  return x * y;
};
multiplyTogether(2, 5);
// => 10

// Curried multiplication of two numbers
const multiplyTogetherCurried = x => y => {
  return x * y;
};
multiplyTogetherCurried(2)(5);
// => 10

// Partial application used to create double number function
const doubleNumber = multiplyTogetherCurried(2);
doubleNumber(5);
// => 10
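
You don’t have to hand-roll the curried form: Ramda provides R.curry, which produces a function you can call with its arguments one at a time or all at once (the names below are just illustrative variations on the example above).

import R from 'ramda';

// Curried automatically by Ramda
const multiplyTogetherAuto = R.curry((x, y) => x * y);

multiplyTogetherAuto(2, 5);
// => 10
multiplyTogetherAuto(2)(5);
// => 10

// Partial application works just the same
const doubleNumberAuto = multiplyTogetherAuto(2);
doubleNumberAuto(5);
// => 10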

Composition

Building on currying, and adopting another functional discipline of making data the last argument of your functions, you can now begin to make use of functional composition, and this is where things start to get pretty awesome.
With functional composition, you create a sequence of functions in which each function (after the first to run) must be monadic; each function feeds its return value into the next function in the sequence as its argument, and the result comes out at the end of the sequence. We do this in Ramda using compose. Adopting this style can make code not only easier to reason about, but also easier to read and write.

In my opinion, where this style really shines is in data transformation, allowing you to break down potentially complex transformations into logical steps. Ramda is a big help here: although you could simply use compose and write your own curried monadic functions, Ramda is a library of super useful, (mostly) curried functions for mapping over data, reducing data, omitting data based on keys, flattening data and so much more!
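
As a small illustration (assuming Ramda is installed, and using a made-up input shape), here is a transformation broken into composed, curried, data-last steps:

import R from 'ramda';

// compose applies right-to-left: extract the users array, keep the active
// users, then pull out their email addresses
const activeUserEmails = R.compose(
  R.map(R.prop('email')),
  R.filter(user => user.active),
  R.prop('users')
);

activeUserEmails({
  users: [
    { name: 'Ada', email: 'ada@example.com', active: true },
    { name: 'Bob', email: 'bob@example.com', active: false },
  ],
});
// => ['ada@example.com']

Each step is small enough to test on its own, and the pipeline reads as a simple list of transformation steps.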

Imperative vs functional

Now that you’ve (hopefully) got a better idea of what functional programming is, the question becomes: is following an imperative style wrong? In my opinion, no. When it comes down to choosing between imperative and functional programming in JS, you have to be pragmatic – whilst functional may be your go-to choice, there are times when you have to ask yourself whether a simple if/else statement will do the job. That said, adopting the discipline of writing pure functions where possible and managing side effects, along with handling data transformations using functional composition, will likely make your life as a developer a lot easier and more enjoyable. It sure has for me!

A worked example using Ramda

I’ve included a worked example of a function which I rewrote from a predominantly imperative style to a functional style, as I felt the function was becoming increasingly difficult to reason about, and with further anticipated additions, I was concerned it would become increasingly brittle.

Original function:

import R from 'ramda';

const dataMapper = factFindData => {
  const obj = {};

  Object.keys(factFindData).forEach(k => {
    if (k === 'retirement__pensions') {
      obj.retirement__pensions = normalizePensions(factFindData);
      return;
    }

    if (k !== 'db_options' && k !== 'health__applicant__high_blood_pressure_details__readings') {
      obj[k] = factFindData[k];
      return;
    }

    if (k === 'health__applicant__high_blood_pressure_details__readings') {
      if (factFindData.health__applicant__high_blood_pressure !== 'no') {
        obj.health__applicant__high_blood_pressure_details__readings = factFindData[k];
      }
    }
  });

  return {
    ...emptyArrays,
    ...R.omit(['_id', 'notes', 'created_at', 'updated_at'], obj),
  };
};

Refactored function:

import R from 'ramda';

const normalizeForEngine = x => ({ ...emptyArrays, ...x });
const omitNonEngineKeys = R.omit(['_id', 'notes', 'created_at', 'updated_at', 'db_options']);

const normalizeBloodPressure =
  R.when(
    x => x.health__applicant__high_blood_pressure === 'no',
    R.omit(['health__applicant__high_blood_pressure_details__readings'])
  );

const mapNormalizedPensions =
  R.mapObjIndexed((v, k, o) => k === 'retirement__pensions' ? normalizePensions(o) : v);

const dataMapper =
  R.compose(
    normalizeForEngine,
    omitNonEngineKeys,
    normalizeBloodPressure,
    mapNormalizedPensions
  );

As you can see, when trying to figure out what the data mapper is doing in the original function, I have to loop through an object, update and maintain the state of a temporary variable (in my head), check against multiple conditions on each iteration, and then take this result and stick it into an object, remembering to remove certain keys.

With the refactored function, at a glance I can say that I’m normalising pensions, then normalising blood pressure, then omitting non-engine keys, before finally normalising the data for the engine. Doesn’t that feel easier to reason about? If a new requirement came in to normalise, let’s say, cholesterol readings, I would simply slot another curried function into the composition after normalizeBloodPressure, called, for argument’s sake, normalizeCholesterol.
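
For illustration only (the cholesterol field names below are invented), that new step could look something like this, slotting straight into the composition:

// Hypothetical field names, purely for illustration
const normalizeCholesterol =
  R.when(
    x => x.health__applicant__high_cholesterol === 'no',
    R.omit(['health__applicant__high_cholesterol_details__readings'])
  );

const dataMapper =
  R.compose(
    normalizeForEngine,
    omitNonEngineKeys,
    normalizeCholesterol, // runs after normalizeBloodPressure, as compose applies right-to-left
    normalizeBloodPressure,
    mapNormalizedPensions
  );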

Conclusion

Functional programming in JS using Ramda can not only reduce the size of your codebase, but also increase its readability and testability, and make it easier to reason about.

Running Jenkins on Kubernetes

by Sion Williams

tl;dr

This guide will take you through the steps necessary to continuously deliver your software to end users by leveraging Amazon Web Services and Jenkins to orchestrate the software delivery pipeline. If you are not familiar with basic Kubernetes concepts, have a look at Kubernetes 101.

In order to accomplish this goal you will use the following Jenkins plugins:

  • Jenkins EC2 Plugin – starts Jenkins build slaves in AWS when builds are requested and terminates those instances when builds complete, freeing up resources for the rest of the cluster
  • Bitbucket OAuth Plugin – allows you to add your Bitbucket OAuth credentials to Jenkins

In order to deploy the application with Kubernetes you will use the following resources:

  • Deployments – replicates our application across our Kubernetes nodes and allows us to do a controlled rolling update of our software across the fleet of application instances
  • Services – load balancing and service discovery for our internal services
  • Volumes – persistent storage for containers

Credit

This article is an AWS variant of the original Google Cloud Platform article found here.

Prerequisites

  1. An Amazon Web Services Account
  2. A running Kubernetes cluster

Containers in Production

Containers are ideal for stateless applications and are meant to be ephemeral. This means no data or logs should be stored in the container otherwise they’ll be lost when the container terminates.

– Arun Gupta

The data for Jenkins is stored in the container filesystem. If the container terminates then the entire state of the application is lost. To ensure that we don’t lose our configuration each time a container restarts we need to add a Persistent Volume.

Adding a Persistent Volume

From the Jenkins documentation we know that the directory we want to persist is the Jenkins home directory, which in the container is located at /var/jenkins_home (assuming you are using the official Jenkins container). This is the directory where all our plugins, jobs and config information are kept.

At this point we’re faced with a chicken and egg situation; we want to mount a volume where Jenkins Home is located, but if we do that the volume will be empty. To overcome this hurdle we first need to add our volume to a sacrificial instance in AWS, install Jenkins, copy the contents of Jenkins Home to the volume, detach it, then finally add it to the container.

Gotchas

Make sure that the user and group permissions in the Jenkins home are the same. Failure to do so will cause certain write operations in the container to fail. We will discuss the security context in more detail later in this article.

To recursively set the group permissions equal to the owner’s, use:

$ sudo chmod -R g=u .
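
If the copy left the files owned by a different user, you may also need to reset ownership first; the official Jenkins image runs as UID and GID 1000, so something along these lines should do it (the mount point /mnt/jenkins_home is just an example):

$ sudo chown -R 1000:1000 /mnt/jenkins_home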

Now that we have our volume populated with the Jenkins data, we can start writing the Kubernetes manifests. The main things of note are the name, volumeID and storage.

jenkins-pv.yml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-data
spec:
  capacity:
    storage: 30Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: aws://eu-west-1a/vol-XXXXXX
    fsType: ext4

With this manifest we have told Kubernetes where our volume is held. Now we need to tell Kubernetes that we want to make a claim on it. We do that with a Persistent Volume Claim.

jenkins-pvc.yml

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: jenkins-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 30Gi

In the file above we are telling Kubernetes that we would like to claim the full 30GB. We will associate this claim with a container in the next section.

Create a Jenkins Deployment and Service

Here you’ll create a deployment running a Jenkins container with a persistent disk attached containing the Jenkins home directory.

jenkins-deployment.yml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
  labels:
    app: jenkins
  name: jenkins
spec:
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  template:
    metadata:
      labels:
        app: jenkins
    spec:
      containers:
      - image: jenkins:2.19.2
        imagePullPolicy: IfNotPresent
        name: jenkins
        ports:
        - containerPort: 8080
          protocol: TCP
          name: web
        - containerPort: 50000
          protocol: TCP
          name: slaves
        resources:
          limits:
            cpu: 500m
            memory: 1000Mi
          requests:
            cpu: 500m
            memory: 1000Mi
        volumeMounts:
        - mountPath: /var/jenkins_home
          name: jenkinshome
      securityContext:
        fsGroup: 1000
      volumes:
      - name: jenkinshome
        persistentVolumeClaim:
          claimName: jenkins-data

There’s a lot of information in this file. As the post is already getting long, I’m only going to pull out the most important parts.

Volume Mounts

Earlier we created a PersistentVolume and a PersistentVolumeClaim. We made a claim on the PersistentVolume using the PersistentVolumeClaim, and now we need to attach that claim to our container. We do this using the claim name, which ties the manifests together – in this case, jenkins-data.

Security Context

This is where I had the most problems. When I used the surrogate-instance method of getting the files onto the volume, I forgot to set the correct ownership and permissions. By setting the group permissions to be the same as the user’s, we can use the fsGroup feature when we deploy to Kubernetes. This feature gives the Jenkins user in the container the correct permissions on the directories via the group-level permissions. We set this to 1000, as per the documentation.

If all is well, you should now be able to start each of the resources:

kubectl create -f jenkins-pv.yml -f jenkins-pvc.yml -f jenkins-deployment.yml
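
Before exposing anything, it’s worth a quick sanity check that the claim has bound and the Jenkins pod has started:

kubectl get pv,pvc
kubectl get pods -l app=jenkins
kubectl logs <jenkins-pod-name>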

As long as you don’t have any issues at this stage, you can now expose the instance using a load balancer. In this example we are provisioning an AWS load balancer with our AWS-provided certificate.

jenkins-svc.yml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: jenkins
  name: jenkins
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:eu-west-1:xxxxxxxxxxxx:certificate/bac080bc-8f03-4cc0-a8b5-xxxxxxxxxxxxx"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  ports:
  - name: securejenkinsport
    port: 443
    targetPort: 8080
  - name: slaves
    port: 50000
    protocol: TCP
    targetPort: 50000
  selector:
    app: jenkins
  type: LoadBalancer
  loadBalancerSourceRanges:
  - x.x.x.x/32

In the snippet above we also use the loadBalancerSourceRanges feature to whitelist our office. We aren’t making our CI publicly available, so this is a nice way of making it private.

I’m not going to get into the specifics of DNS here, but if that’s all configured you should now be able to access your Jenkins. You can get the ingress URL using the following:

kubectl get -o jsonpath="{.status.loadBalancer.ingress[0].hostname}" svc/jenkins

EC2 Plugin

I guess you’re wondering: “why, after all that effort with Kubernetes, are you creating AWS instances as slaves?” Well, our cluster has a finite pool of resources. We want elasticity with the Jenkins slaves, but equally, we don’t want a large pool sat idle waiting for work.

We are using the EC2 Plugin so that builder nodes are launched automatically when the Jenkins master requests them. Upon completion of their work they are automatically terminated, and we don’t get charged for anything that isn’t running. This does come with a time penalty for spinning up new VMs, but we’re OK with that. We mitigate some of that cost by leaving them up for 10 minutes after a build, so that any new builds can jump straight onto the resource.

There’s a great article on how to configure this plugin here.

Bitbucket OAuth

Our Active Directory is managed externally, so integrating Jenkins with AD was a little bit of a headache. Instead, we opted to integrate Jenkins with Bitbucket OAuth, which is useful because we know all of our engineers will have accounts. The documentation is very clear and accurate, so I would recommend following that guide.

My Test Rig

When the time of year came around for me to rebuild my test rig, I spent longer than ever gathering components to build something that would last more than 12 months (for my daily usage!).

I decided on an “open-rig” style of build and this had more than one benefit:

  • I was able to chop and change components as I wished, without having to open and close a case.
  • Cooling and airflow issues that I had with a standard box disappeared.
  • I can remove my SSDs and take them with me.

The specs of the machine are as follows:

  • Intel i7 6-core Skylake processor overclocked to 4.9GHz (runs a little hot!)
  • Closed-circuit processor water-cooling
  • 256GB M.2 SSD with the main Windows 10 OS installed on it
  • 4TB (2x 2TB) Samsung Evo Pro SSDs
  • 64GB Corsair Quad Channel DDR4 RAM
  • EVGA Nvidia GeForce 980 Ti (overclocked)

This setup allows the machine to load test systems without falling over itself. It also runs around 10 separate VMs, each with a different role. I also have rainbow tables on my NAS drive at home, which the PC can crunch through at a remarkable rate for my security testing.

The Wizard’s Toolkit – Part 1 – SHED & BOATS


Since March we’ve been working with strategic adviser and coach, Andy Salmon, to develop our understanding of how to build high-performance teams. I’m sure many of you reading this blog will have heard this all before: the consultant coming in to borrow your watch and tell you the time, a load of buzz phrases which mean little to start with and then rapidly lose their usefulness and impact when you return to the office.

Well, like many changes we’ve made over the last year at Wealth Wizards, this new approach has worked out a little differently.

Firstly there’s Andy. In my lazy mind’s eye I imagined a competent version of General Melchett – a man used to command, loyalty and obedience. Instead, we had a charismatic collaborator in our midst, sharing simple ideas and tools which we can instantly apply to improve the way we work. These kinds of tools and ideas work because they become habits – this is the way we like to do things around here.

Lesson #1 – Look After Your SHED

Your mind won’t work if your body is broken. Sounds obvious right?  So how come so many of us are chained to our desks, working long stressful hours and finding excuses not to get to the gym? Of course we need to put in sometimes extreme bursts of effort to get things done, but if we don’t consider our team’s physical well-being then our work is shaping our body, not the other way around.  So our first credo is look after your SHED.


What does this mean in practice?  Firstly it provides a great shorthand which everyone in the team understands.  Mention your SHED to anyone in the team and they’ll know what you’re talking about.  Making this a part of our every day conversations has led to many small changes in the way in which we approach our work.  For example:

  • Walking Meetings – If there are only two of you in a meeting, then conduct it on the hoof. It’s such a simple change to make and it gets you out of the office, adding to your daily step count.
  • Health Wizards – This is the ultimate manifestation of SHED management. We’ve formed a group to capture and implement ideas to help with this. For example, each month we host a step count challenge. Everyone in the company has been given a pedometer wristband and each month we are formed into teams – and yes, the team with the highest average step count gets a prize!
  • Commitment Buddies – At our last team offsite visit we all committed to change one thing to improve our SHED and buddied up with a colleague outside our immediate team to act as conscience and supporter.
  • Stand-up Desks – We all have desks which can be raised and lowered and we all use them to stand up for a portion of the day.

All simple stuff I hear you say! Of course it is, however, it’s a mantra that’s stuck and one that continues to drive changes in the way we behave every day.

Lesson #2 – Pilot your BOATS

Bill S. Preston, Esq. told us to “be excellent to each other”. This has become one of our key Principles, which I’m sure we’ll discuss more in a future blog post. It’s difficult to live by this principle if you’re stressed; however, we believe that you can choose your mood. Whatever is going on at work, you can make a positive impact rather than being a mood hoover.

BOATS is a bit more abstract than SHED so probably bears a little more explanation.


Body Posture sets the tone in a room or an interaction. Walk into a room of people slouched over their desks, arms crossed, and you feel your energy draining from you. At a previous company, our in-house legal advisor observed that we were like Dementors (a Harry Potter reference for any fans out there) during contract negotiations, gradually sucking the joy from the air as we battered suppliers into weary submission. We aim to be the opposite here! If you stand up straight and your posture is open, you will feel better – fact!

Breathing (O2) helps you think and control your emotions if they’re running high.  Breathing exercises help, and it’s not uncommon to see people closing their eyes and taking a few, meditative deep breaths. It helps, really!

We have other tools to encourage people to be excellent to each other (more in future blogs) but if you don’t Appreciate yourself, then who else will?  Stepping back and recognising that you’re doing a good job is very therapeutic – if you don’t feel you are, then you probably need to change something!

Do you know what Triggers positive and negative thoughts in you (or releases your chimp, or the metaphor of your choice)?  Understanding these can really help improve the way you work, and how you feel about interactions with your colleagues.

Lastly, do you think that talking to yourself (Self Talk) is the first sign of madness?  We don’t.  We think that the ability to stop, take a step back and have a word with yourself can make all the difference!

It’s become a thing – the Wizard’s Toolkit

These tools and others that I’ll discuss in future posts all come together in the Wizard’s Toolkit. We didn’t set out to create it, but as we built these ideas with Andy it became increasingly obvious that we needed somewhere to bring them together, so we could remind ourselves and share them with new team members. The Wizard’s Toolkit was born, and it lives on our intranet, on the wall and in our behaviour and thoughts every day. Well OK, every other day then.

Wow, what an insight – get me back to the Tech!

So if you’ve made it this far hoping that I’d get back into kube and microservices…then I’m sorry! We’re a tech company and we’re passionate about engineering; it’s what excites us. Our ambition is to be great at everything we do, and thinking about our physical and mental wellbeing is another important part of making this a great place to work.


Learning to live with Kubernetes clusters

In my previous post I wrote about how we’re provisioning Kubernetes clusters using kube-up, and some of the problems we’ve come across during the deployment process. I now want to cover some of the things we’ve found while running the clusters. We’re in an ‘early’ production phase at the moment: our legacy apps are still running on our LAMP systems and our new microservices are not yet live. We only have a handful of microservices so far, so you can get an idea of what stage we’re at. We’re learning that some of the things we need to fix mean going back and rebuilding the clusters, but we’re lucky that right now this isn’t breaking anything.

We’ve been trying to build our Kubernetes cluster following a pretty conventional model: internet-facing components (e.g. NAT gateways and ELBs) in a DMZ, and the nodes sitting in a private subnet using NAT gateways to access the internet. Although the kube-up scripts support the minion nodes being privately addressed, the ELBs also get created in the ‘private subnet’, preventing them from serving public traffic. There have been several comments online around this, and the general consensus seems to be that it’s not currently possible. We have since found that there are a number of annotations available for ELBs suggesting it may be possible by using appropriate tags on subnets (we’re yet to try this though – I’ll post an update if we have any success).

Getting ELBs to behave the way we want with SSL has also been a bit of a pain. Like many sites, we need the ELB to serve plain text on port 80 and TLS on port 443, with both listeners serving from a single backend port. As before, the docs aren’t clear on this: the Service documentation tells us about the https and cert annotations, but doesn’t tell you the other bits that are necessary. Again, looking at the source code was a big help and we eventually got a config which worked for us.

kind: Service
apiVersion: v1
metadata:
  name: app-name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-1:xxxxxxxxxx:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  selector:
    app: app-name
  ports:
  - protocol: TCP
    name: secure
    port: 443
    targetPort: 80
  - protocol: TCP
    name: insecure
    port: 80
    targetPort: 80
  type: LoadBalancer


Kubernetes comes with a bunch of add-ons ‘out of the box’. By default when running kube-up you’ll get heapster/InfluxDB/Grafana installed, along with Fluentd/Elasticsearch/Kibana, a dashboard and a DNS system. The DNS system is pretty much necessary (in fact, remember to run more than one replica – in one cluster iteration the DNS system stopped and wouldn’t start again, rendering the cluster mostly useless), but the other add-ons are perhaps less valuable. We’ve found heapster consumes a lot of resource and gives limited (by which I mean, not what I want) info. InfluxDB is also very powerful, but it gets deployed onto non-persistent storage. Instead we’ve found it preferable to deploy Prometheus into the cluster along with our own updated Grafana container. Prometheus actually gives far better cluster metrics than heapster, and there are lots of pre-built dashboards for Grafana, meaning we can get richer stats faster.

Likewise, Fluentd -> Elasticsearch gives an in-built log collection system, but the provisioned Elasticsearch is non-persistent, and by having Fluentd ship the logs straight to Elasticsearch you lose many of the benefits of dropping in Logstash and running grok filters to make sense of the logs. It’s trivial (on the assumption you don’t already have Fluentd deployed – see below!) to drop in Filebeat to replace Fluentd, and this makes it easy to add Logstash before sending the indexed logs to Elasticsearch. In this instance we decided to use AWS-provided Elasticsearch to save ourselves the trouble of building the cluster. When deploying things like log collectors, make sure they’re deployed as a DaemonSet, as in the sketch below. This ensures you have an instance of the pod running on each node, regardless of how many nodes you have running, which is exactly what you want for this type of monitoring agent.
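
As a rough sketch only (the image tag and host path are illustrative, and a real deployment also needs a Filebeat configuration mounted in), a log-collector DaemonSet looks something like this:

apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
spec:
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        # Illustrative image tag; pin whatever version you actually run
        image: docker.elastic.co/beats/filebeat:5.6.0
        volumeMounts:
        - name: varlibdockercontainers
          mountPath: /var/lib/docker/containers
          readOnly: true
      volumes:
      - name: varlibdockercontainers
        hostPath:
          path: /var/lib/docker/containers

Because it’s a DaemonSet, Kubernetes schedules exactly one of these pods onto every node, including nodes added later.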

Unfortunately, once you’ve deployed a k8s cluster (through kube-up) with these add-ons enabled, it’s actually pretty difficult to remove them. It’s easy enough to remove a running instance of a container, but if a minion node gets deleted and a new one provisioned, the add-ons will turn up again on the new minion. This is because kube-up uses Salt to manage the initial minion install, and Salt kicks in again for the new machines. To date I’ve failed to remove the definition from Salt, and have found the easiest option is just to rebuild the k8s cluster without the add-ons. To do this, export the following variables when running kube-up:

export KUBE_ENABLE_CLUSTER_MONITORING=false
export KUBE_ENABLE_NODE_LOGGING=false
export KUBE_ENABLE_CLUSTER_LOGGING=false

Of course, this means re-provisioning the cluster, but I did say we’re fortunate enough to be able to do this (at the moment at least!)

Our next task is to sufficiently harden the nodes, ideally running the system on CoreOS.

Writing my first microservice!

In the Wealth Wizards software team, we’ve recently embarked upon a journey to break down, or strangle, the monolith, so to speak, and adopt a microservice-based architecture. In a nutshell, this means moving away from having one large server-side application with lots of often highly coupled classes, to a system where the functionality is divided up into small, single-purpose services that communicate with each other via some type of API.

One of the beauties of adopting this architecture is that not only can different microservices be written in different languages, they can even use different types of databases. For the time being, however, we decided to roll with Node.js, starting out framework-less but quickly coming to the conclusion that using a framework such as Express was going to make our lives that little bit easier!

Whilst there’s naturally a learning curve that comes with all of this, I found it fascinating how your thought processes begin to shift when it comes to technology in general, across the board. You find yourself instinctively avoiding monolithic code repos and applications across the stack, thinking more carefully about your technology and framework choices, trying to build single-purpose, cohesive applications, and thinking ahead to avoid becoming tied into frameworks further down the line, where migrating away becomes increasingly costly as your ties to them grow!

One thing I’ve enjoyed and noticed as a particular benefit is how testable the code is, and the governance of ensuring that services are fully tested has now been built into our software delivery pipeline, which is super nice! By the time our new platform is deployed, we will have fully unit-tested, component-tested and system-tested code, making software releases far more automated, far less costly and backed by a high level of confidence, which is really powerful.

The ops side of microservices is, in a way, almost the key part and challenge. Whilst it’s not something I’m heavily involved in, at Wealth Wizards we’re trying to promote a culture of teams of engineers, as opposed to separate ops and dev teams, and as such I’ve been lucky enough to learn about and play with a whole bunch of new and exciting technology within this space, including, but not limited to, Docker, Docker Compose, Jenkins and Kubernetes. One of the side effects of our new stack that I’ve really liked is the concept of configuration and infrastructure as code, and it’s something I would definitely recommend to others as a best practice.

In general, some of my key takeaways/learnings when it comes to implementing microservices with Node.js are:

  • spec out APIs up front using YAML files
  • plug the API YAML files into an API generation package such as Swagger UI
  • write unit tests and component tests as you go, integrating them into your pipeline, and failing builds when the tests fail, or if the coverage is not deemed sufficient
  • extract code which is reused commonly across services into npm packages to DRY up code, although accept that there will be a maintenance cost associated with this
  • lock down dependency versions within npm to ensure that different dev machines and server environments are using the exact same versions, to avoid headaches!

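To make that a little more concrete, here’s a minimal sketch of the kind of small, single-purpose Express service described above (the route, port and fee calculation are purely illustrative, and it assumes express has been installed):

// A tiny, single-purpose service: one job, exposed over HTTP
const express = require('express');
const app = express();

// Health check endpoint, handy for load balancers and Kubernetes probes
app.get('/health', (req, res) => res.json({ status: 'ok' }));

// The service's single responsibility, e.g. a simple fee quote
app.get('/quote/:amount', (req, res) => {
  const amount = Number(req.params.amount);
  if (Number.isNaN(amount)) {
    return res.status(400).json({ error: 'amount must be a number' });
  }
  return res.json({ amount, fee: amount * 0.01 });
});

app.listen(3000, () => console.log('quote service listening on port 3000'));
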
As a final point, whilst there is a whole bunch of great knowledge and information out there on the topic, one thing I’ve found is that some level of planning and design up front is essential, but to some degree you should accept that design decisions such as code structure, library selections, DRYing up code with packages and other such governance tasks will naturally evolve over the initial weeks, and so at some point you need to just take the plunge into the microservice world and learn as you go!

– Mark Salvin, Senior Developer