The Wizard’s Toolkit – Part 1 – SHED & BOATS


Since March we’ve been working with strategic adviser and coach, Andy Salmon, to develop our understanding of how to build high performance teams. I’m sure many of you reading this blog will have heard this all before: the consultant coming in to borrow your watch and tell you the time, a load of buzz phrases which mean little to start with and then rapidly lose their usefulness and impact when you return to the office.

Well, like many changes we’ve made over the last year at Wealth Wizards, this new approach has worked out a little differently.

Firstly there’s Andy. In my lazy mind’s eye I imagined a competent version of General Melchett – a man used to command, loyalty and obedience. Instead, we had a charismatic collaborator in our midst, sharing simple ideas and tools which we can instantly apply to improve the way we work. These kinds of tools and ideas work because they become habits – this is the way we like to do things around here.

Lesson #1 – Look After Your SHED

Your mind won’t work if your body is broken. Sounds obvious, right? So how come so many of us are chained to our desks, working long stressful hours and finding excuses not to get to the gym? Of course we sometimes need to put in extreme bursts of effort to get things done, but if we don’t consider our team’s physical well-being then our work is shaping our body, not the other way around. So our first credo is: look after your SHED.


What does this mean in practice? Firstly, it provides a great shorthand which everyone in the team understands. Mention your SHED to anyone in the team and they’ll know what you’re talking about. Making this a part of our everyday conversations has led to many small changes in the way in which we approach our work. For example:

  • Walking Meetings – If there are only two of you in a meeting, conduct it on the hoof. It’s such a simple change to make and it gets you out of the office, adding to your daily step count.
  • Health Wizards – This is the ultimate manifestation of SHED management. We’ve formed a group to capture and implement ideas to help with this. For example, each month we host a step count challenge. Everyone in the company has been given a pedometer wristband and every month we are formed into teams – and yes, the team with the highest average step count for the month gets a prize!
  • Commitment Buddies – At our last team offsite visit we all committed to change one thing to improve our SHED and buddied up with a colleague outside our immediate team to act as conscience and supporter.
  • Stand-up Desks – We all have desks which can be raised and lowered and we all use them to stand up for a portion of the day.

All simple stuff I hear you say! Of course it is, however, it’s a mantra that’s stuck and one that continues to drive changes in the way we behave every day.

Lesson #2 – Pilot your BOATS

Bill S. Preston, Esq. told us to “be excellent to each other”. This has become one of our key Principles, which I’m sure we’ll discuss more in a future blog post. It’s difficult to live by this principle if you’re stressed, however we believe that you can choose your mood. Whatever is going on at work, you can make a positive impact rather than hoovering up everyone else’s mood.

BOATS is a bit more abstract than SHED so probably bears a little more explanation.


Body Posture sets the tone in a room or an interaction. Walk into a room of people slouched over their desks, arms crossed, and you feel your energy draining from you. At a previous company, our in-house legal advisor observed that we were like Dementors (a Harry Potter reference for any fans out there) during contract negotiations, gradually sucking the joy from the air as we battered suppliers into weary submission. We aim to be the opposite here! If you stand up straight and your posture is open, you will feel better – fact!

Breathing (O2) helps you think and control your emotions if they’re running high.  Breathing exercises help, and it’s not uncommon to see people closing their eyes and taking a few, meditative deep breaths. It helps, really!

We have other tools to encourage people to be excellent to each other (more in future blogs) but if you don’t Appreciate yourself, then who else will?  Stepping back and recognising that you’re doing a good job is very therapeutic – if you don’t feel you are, then you probably need to change something!

Do you know what Triggers positive and negative thoughts in you (or releases your chimp, or the metaphor of your choice)?  Understanding these can really help improve the way you work, and how you feel about interactions with your colleagues.

Lastly, do you think that talking to yourself (Self Talk) is the first sign of madness?  We don’t.  We think that the ability to stop, take a step back and have a word with yourself can make all the difference!

It’s become a thing – the Wizard’s Toolkit

These tools and others that I’ll discuss in future posts all come together in the Wizard’s Toolkit. We didn’t set out to create it, but as we built these ideas with Andy it became increasingly obvious that we needed somewhere to bring them together, so we could remind ourselves and share them with new team members. The Wizard’s Toolkit was born, and it lives on our intranet, on the wall and in our behaviour and thoughts every day. Well OK, every other day then.

Wow, what an insight – get me back to the Tech!

So if you’ve made it this far hoping that I’d get back into kube and microservices…then I’m sorry! We’re a tech company and we’re passionate about engineering – it’s what excites us. Our ambition is to be great at everything we do, and thinking about our physical and mental wellbeing is another important part of making this a great place to work.


Learning to live with Kubernetes clusters

In my previous post I wrote about how we’re provisioning Kubernetes clusters using kube-up and some of the problems we’ve come across during the deployment process. I now want to cover some of the things we’ve found while running the clusters. We’re in an ‘early’ production phase at the moment: our legacy apps are still running on our LAMP systems and our new microservices are not yet live. We only have a handful of microservices so far, so you can get an idea of what stage we’re at. We’re learning that some of the things we need to fix mean going back and rebuilding the clusters, but we’re lucky that right now this isn’t breaking anything.

We’ve been trying to build our Kubernetes cluster following a pretty conventional model: internet-facing components (e.g. NAT gateways and ELBs) in a DMZ, and the nodes sitting in a private subnet using the NAT gateways to access the internet. While the kube-up scripts support the minion nodes being privately addressed, the ELBs also get created in the ‘private subnet’, which prevents them from serving public traffic. There have been several comments online about this and the general consensus seems to be that it’s not currently possible. We have since found, though, that there are a number of annotations available for ELBs, suggesting it may be possible by using appropriate tags on the subnets (we’re yet to try this; I’ll post an update if we have any success).
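
For the record, the mechanism those discussions point at is tagging the public (DMZ) subnets so the AWS cloud provider knows where to place internet-facing ELBs. We haven’t tried it, and the tag keys have moved around between releases, so treat the below as a hypothetical sketch rather than a recipe:

# UNTESTED sketch: tag the public subnets so internet-facing ELBs land there.
# Tag keys vary by Kubernetes version -- verify against your release first.
aws ec2 create-tags \
  --resources subnet-0123abcd \
  --tags Key=kubernetes.io/role/elb,Value=1 Key=KubernetesCluster,Value=dev-cluster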

Getting ELBs to behave the way we want with SSL has also been a bit of a pain. Like many sites, we need the ELB to serve plain text on port 80 and TLS on port 443, with both listeners serving from a single backend port. As before, the docs aren’t clear on this: the Service documentation tells us about the https and cert annotations but doesn’t tell you about the other bits that are necessary. Again, looking at the source code was a big help and we eventually got a config which worked for us.

kind: Service
apiVersion: v1
metadata:
  name: app-name
  annotations:
    # terminate TLS on the ELB using this ACM certificate
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-west-1:xxxxxxxxxx:certificate/xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxx"
    # the backend pods speak plain HTTP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
    # only the 443 listener carries TLS
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
spec:
  selector:
    app: app-name
  ports:
  - protocol: TCP
    name: secure
    port: 443
    targetPort: 80
  - protocol: TCP
    name: insecure
    port: 80
    targetPort: 80
  type: LoadBalancer
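
To sanity-check the result – assuming the manifest above is saved as app-name-service.yaml – the following shows whether the ELB came up with both listeners:

# create the service and inspect the provisioned ELB
kubectl create -f app-name-service.yaml
kubectl describe service app-name   # look for the LoadBalancer Ingress hostname and the port mappings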


Kubernetes comes with a bunch of add-ons ‘out of the box’. By default when running kube-up, you’ll get heapster/InfluxDB/Grafana installed, along with FluentD/Elasticsearch/Kibana, a dashboard and a DNS system. The DNS system is pretty much essential (and remember to run more than one replica: in one cluster iteration the DNS system stopped and wouldn’t start again, rendering the cluster mostly useless). The other add-ons are perhaps less valuable. We’ve found heapster consumes a lot of resource and gives limited (by which I mean, not what I want) info. InfluxDB is also very powerful but gets deployed onto non-persistent storage. Instead we’ve found it preferable to deploy Prometheus into the cluster and deploy our own updated Grafana container. Prometheus actually gives far better cluster metrics than heapster, and there are lots of pre-built dashboards for Grafana, meaning we can get richer stats faster.
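
On the DNS point, bumping the replica count is a one-liner once you know the name of the add-on’s controller; the name below is a guess that will depend on your add-on version, so check first:

# find the DNS add-on's replication controller, then scale it up
kubectl get rc --namespace=kube-system
kubectl scale rc kube-dns-v11 --replicas=2 --namespace=kube-system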

Likewise FluentD -> Elasticsearch gives an in-built log collection system, but the provisioned ES is non-persistent, and by having FluentD ship the logs straight to ES you lose many of the benefits of dropping in Logstash and running grok filters to make sense of the logs. It’s trivial (on the assumption you don’t already have FluentD deployed, see below!) to drop in Filebeat to replace FluentD, and this makes it easy to add Logstash before sending the indexed logs to ES. In this instance we decided to use AWS-provided Elasticsearch to save ourselves the trouble of building the cluster. When deploying things like log collectors, make sure they’re deployed as a DaemonSet. This will make sure you have an instance of the pod running on each node, regardless of how many nodes you have running, which is exactly what you want for this type of monitoring agent.
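
As a rough illustration of the DaemonSet point, a Filebeat collector looks something like the sketch below. The image, labels and mount paths are placeholders to show the shape of the thing, not our actual manifest:

# Hypothetical log-collector DaemonSet: one pod per node, shipping the node's logs
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: filebeat
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: filebeat
    spec:
      containers:
      - name: filebeat
        image: filebeat:1.3.1          # placeholder image/tag
        volumeMounts:
        - name: varlog
          mountPath: /var/log
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log               # the node's log directory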

Unfortunately, once you’ve deployed a k8s cluster (through kube-up) with these add-ons enabled, it’s actually pretty difficult to remove them. It’s easy enough to remove a running instance of a container, but if a minion node gets deleted and a new one provisioned, the add-ons will turn up again on the new minion. This is because kube-up makes use of Salt to manage the initial minion install, and Salt kicks in again for the new machines. To date I’ve failed to remove the definition from Salt and have found the easiest option is just to rebuild the k8s cluster without the add-ons. To do this, export the following variables when running kube-up:
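
The exact names depend on the kube-up version you’ve downloaded (check cluster/aws/config-default.sh), but they’re along these lines:

# disable the optional add-ons before running kube-up
# (names can differ between kube-up releases -- check config-default.sh)
export KUBE_ENABLE_CLUSTER_MONITORING=none   # no heapster/InfluxDB/Grafana
export KUBE_ENABLE_CLUSTER_LOGGING=false     # no FluentD/Elasticsearch/Kibana
export KUBE_ENABLE_NODE_LOGGING=false
export KUBE_ENABLE_CLUSTER_UI=false          # no dashboard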


Of course, this means re-provisioning the cluster, but I did say we’re fortunate enough to be able to do this (at the moment at least!).

Our next task is to sufficiently harden the nodes, ideally running the system on CoreOS.

Writing my first microservice!

In the Wealth Wizards software team, we’ve recently embarked upon a journey to break down, or strangle the monolith, so to speak, and adopt a microservice-based architecture. In a nutshell this means moving away from having one large server side application with lots of often highly coupled classes, to a system where the functionality is divided up into small, single purpose services, that communicate with each other via some type of API.

One of the beauties of adopting this architecture is that not only can different microservices be written in different languages, but they can even communicate with different types of databases. For the time being however, we decided to roll with Node.js, starting out framework-less but quickly coming to the conclusion that using a framework such as Express was going to make our lives that little bit easier!

Whilst there’s naturally a learning curve that comes with all of this, I found it fascinating how your thought processes begin to shift when it comes to technology in general, across the board. You find yourself intrinsically looking to avoid monolithic code repos or applications across the stack, thinking more carefully about your technology/framework choices, trying to make single-purpose, cohesive applications, and thinking ahead to avoid becoming tied into frameworks further down the line, where migrating away may become increasingly costly as your ties to them grow!

One thing I’ve enjoyed and noticed as a particular benefit is how testable the code is, and the governance of ensuring that services are fully tested has now been built into our software delivery pipeline, which is super nice! By the time our new platform is deployed, we will have fully unit tested, component tested, and system tested code, making software releases far more automated, far less costly, and with a high level of confidence, something which is really powerful.

The ops side of microservices is, in a way, almost the key part (and challenge), and whilst it’s not something I’m heavily involved in, at Wealth Wizards we’re trying to promote a culture of teams of engineers, as opposed to separate ops and dev teams. As such, I’ve been lucky enough to learn about and play with a whole bunch of new and exciting technology within this space, including but not limited to Docker, Docker Compose, Jenkins, and Kubernetes. One of the side effects of our new stack that I’ve really liked is the concept of configuration and infrastructure as code, and this is something I would definitely recommend to others as a best practice.

In general, some of my key takeaways/learnings when it comes to implementing microservices with Node.js are:

  • spec out APIs up front using YAML files (a minimal spec sketch follows this list)
  • plug the API YAML files into an API generation package such as Swagger UI
  • write unit tests and component tests as you go, integrating them into your pipeline, and failing builds when the tests fail, or if the coverage is not deemed sufficient
  • extract code which is reused commonly across services into npm packages to DRY up code, although accept that there will be a maintenance cost associated with this
  • lock down versions within npm to ensure that different dev machines and server environments are using the exact same versions of dependencies, to avoid headaches!
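
To make the first point concrete, here’s the shape of a minimal Swagger/OpenAPI 2.0 spec in YAML. The paths and names are made up for illustration, not one of our actual services:

# Hypothetical API spec for a tiny service (names invented for illustration)
swagger: "2.0"
info:
  title: customer-service
  version: "1.0.0"
basePath: /v1
paths:
  /customers/{id}:
    get:
      summary: Fetch a single customer by id
      parameters:
      - name: id
        in: path
        required: true
        type: string
      responses:
        200:
          description: The customer record
        404:
          description: No customer with that id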

As a final point, whilst there is a whole bunch of great knowledge and information out there on the topic, one thing I’ve found is that some level of planning and design up front is essential, but to some degree you should accept that design decisions such as the code structure, library selections, DRYing up code with packages, and other such governance tasks will naturally evolve over the initial weeks. At some point you need to just take the plunge into the microservice world and learn as you go!

  • Mark Salvin, Senior Developer

Building Kubernetes Clusters

We’re in the early stages of deploying a platform built with microservices on Kubernetes. While there are a growing number of alternatives to k8s (all the cool kids are calling it k8s, or kube) – Mesos, Nomad and Swarm being some of the bigger names – we came to the decision that k8s has the right balance of features, maturity and out-of-the-box ease of use to make it the right choice for us. For the time being at least.

You may know k8s was gifted to the world by Google. In many ways it’s a watered-down version of the Omega and Borg engines they use to run their own apps worldwide across millions of servers, so it’s got good provenance. The cynical among you (myself included) may suggest that Google gave us k8s as a way of enticing us onto Google Compute Engine: k8s is clearly designed to run on GCE first and foremost, with a number of features only working there. That’s not to say it can’t be run in other places – it can, and does get used everywhere from laptops and bare metal to other clouds. For example, we run it in AWS, and while functionality on AWS lags behind GCE, k8s still provides pretty much everything we need at this point.

As you can imagine, setting up k8s can be pretty complicated, given the number of elements involved in something you can trust to run your applications in production with little more than a definition in a YAML file. Fortunately the k8s team provide a pretty thorough build script to set it up for you, called ‘kube-up’.

Kube-up is a script* capable of installing a fully functional cluster on anything from your laptop (Vagrant), through Rackspace, Azure, AWS and VMware, and of course onto Google Compute and Container Engine (GCE & GKE). Configuration and customisation for your requirements is done by modifying values in the scripts, or preferably by exporting the appropriate settings into your env vars before running the script.

For a couple of reasons, which seemed good at the time, we’re running in AWS. While support for AWS is pretty good, the main feature we’ve noticed missing at the moment is the ingress resource, which provides advanced L7 control such as rate limiting. It’s actually pretty difficult to find good information on what is and isn’t supported, both in the kube-up script and once k8s is running and in use. The best option is to read through the script, see what environment variables are mentioned and then have a play with them.
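
If you want a shortlist to start from, grepping the downloaded scripts for override variables works well enough (the path assumes the default layout of the downloaded release):

# list the KUBE_* knobs mentioned in the AWS provider scripts
grep -rEoh 'KUBE_[A-Z_]+' kubernetes/cluster/aws/ | sort -u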

Along with a kube-up script, there is also a kube-down script (supplied in the tar file downloaded by kube-up). This can be very handy if you’re building and rebuilding clusters to better understand what you need but be warned, it also means it’s perfectly feasible to delete a cluster you didn’t want deleted.

So far I’ve found a few guidelines which I think should be stuck to when using kube-up. These, with a reason why, are:

Create a stand-alone config file (a list of export ENV=VAR lines) and source that file before running kube-up, instead of modding the downloaded config files.

Having gone through the build process a couple of times now, I’ve come to the conclusion that the best route is to define all the EnvVar overrides in a stand-alone file and source that file before running the main kube-up script. By default, kube-up will re-download the tar and replace the script directory, blowing away any overrides you may have configured. Downloading a new version of the tar file means you benefit from any fixes and improvements; keeping your config outside it means you don’t have to keep re-defining it. I should add too that I have had to hack the contents of various scripts to get them to run without errors, so using the latest version does help minimise this.
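
For illustration, our override file is just a plain list of exports along these lines (the values are placeholders, and a couple of the variable names change between kube-up releases, so check them against your download):

# kube-env.sh -- sourced before running kube-up (placeholder values)
export KUBERNETES_PROVIDER=aws
export KUBE_AWS_ZONE=eu-west-1a
export NUM_NODES=3                           # NUM_MINIONS in older releases
export MASTER_SIZE=m3.medium
export NODE_SIZE=m3.medium
export KUBE_AWS_INSTANCE_PREFIX=dev-cluster  # the 'logical name' mentioned below

and then:

source ./kube-env.sh
./kubernetes/cluster/kube-up.sh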

Don’t use the default Kubernetes cluster name; create a logical name (something that makes sense to use, and that stands the test of having 3-4 other clusters running alongside while still making it obvious what this one is).

Kube-up/down both rely on the information held in ~/.kube. This directory is created when you run kube-up and lets the kubectl script know where to connect and what credentials to use to manage the system through the API. If you have multiple clusters and have the details for the ‘wrong’ cluster stored in this file, kube-down will merrily delete the wrong cluster.
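
A cheap habit that saves a lot of pain here: check what kubectl currently points at before doing anything destructive.

# confirm which cluster your kubeconfig is pointing at
kubectl config current-context
kubectl config view --minify    # shows just the active cluster and credentials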

In addition to this, in AWS, kube-up/down both rely heavily on AWS name tags. These tags are used during the whole lifecycle of the cluster, so are important at all times. When kube-up provisions the cluster it will tag items to know which resources it’ll manage. The same tags are used by the master to control the cluster – for example, to add the appropriate instance-specific routes to the AWS route tables. If the tags are missing, or duplicated (which can happen if you are building and tearing down clusters frequently and miss something in the tear-down), you can end up with a cluster which is reported as fully functional, but applications running in the cluster will fail to run.

One problem I found was that, having laid out a nice VPC config (including subnets and route tables) with Terraform and provisioned the system, when I came to deploying the k8s cluster the script failed to bind its route table to the subnet which I had told it to use. It failed because I had already defined one myself in Terraform. kube-up did report this as an error, but carried on and provisioned what looked like a fully functioning cluster. It wasn’t until the following day that we identified that important per-node routes were missing. kube-up had provisioned and tagged a route table; because that table was tagged, that’s the table the kube master was updating when minions were getting provisioned. The problem was that this route table was not associated with my subnet. Once I had tagged my terraformed subnet with the appropriate k8s tag, the master would then update the correct table with new routes for minions. I had to manually copy across the routes from the other table for the existing minions.
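
For anyone hitting the same thing, the tag in question is the cluster tag kube-up applies to the resources it manages; from memory the key is KubernetesCluster with the cluster name as the value, but check what kube-up has put on its own route table and copy that. Something along these lines:

# copy the cluster tag kube-up uses onto the Terraform-managed subnet/route table
# (verify the exact key/value on resources kube-up created before relying on this)
aws ec2 create-tags \
  --resources subnet-0123abcd rtb-0456efgh \
  --tags Key=KubernetesCluster,Value=dev-cluster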

Understand your network topology before building the cluster and define IP ranges for the cluster that don’t collide with your existing network and allow for more clusters to be provisioned alongside in the future. 

If, for example, you choose to deploy 2 separate clusters using the kube-up scripts, they will both end up with the same IP addressing, and they will also only be accessible over the internet. While this isn’t the end of the world, it’s not ideal, and being able to access them using their private IP/name space is a huge improvement. Of course, if the kube-up provisioned IP range is the same as one of your internal networks, or you have 2 VPCs with the same IP ranges, this becomes impossible. Having well thought-out networks and IP ranges also makes routing and security far simpler: if you know all your production services sit in one range, you can easily configure your firewalls to restrict access to that whole range.

Although you can pre-build the VPC, networks, gateways, route tables, etc., if you do, make sure they’re kube-up friendly, adding the right tags (which match the custom name you defined above).

When building with default configs, kube-up will provision a new VPC in AWS. While this is great when you want to just get something up and running, it’s pretty likely you’ll actually want to build a cluster in a pre-existing VPC. You may also already have a way of building and managing these. We like to provision things with Terraform, and so we found a way to configure kube-up to use an existing VPC (and to change its networking accordingly), but there are still a number of caveats.

K8s makes heavy use of some networking tricks to provide an easy-to-use interface, but this means that to really understand k8s (you’re running your production apps on this, right? So you want a good idea of how it’s running, right?) you should also have a good understanding of its networks. In essence, Kubernetes makes use of 2 largely distinct networks. The first provides IPs to the master and nodes and allows you to reach the surface of the cluster (letting you manage it, deploy apps onto it, and have those apps served to the world). It uses the second network to manage where the apps are within the cluster and to allow the scheduler to do what it needs to without you having to worry about which node an app is deployed to and what port it’s on. If either of these network ranges collides with one of your existing networks you can get sub-optimal behaviour, even if that just means you have to jump through hoops to reach your cluster.
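
Both ranges can be set before running kube-up. The variable names below are my best reading of the AWS config scripts, so double-check them against your version; the values are examples picked to stay clear of our real networks:

# pod network and service (cluster IP) network -- keep clear of your existing ranges
export CLUSTER_IP_RANGE=10.246.0.0/16          # pod IPs (assumed variable name)
export SERVICE_CLUSTER_IP_RANGE=10.247.0.0/16  # cluster-internal service IPs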

Update the security groups as soon as the system is built to restrict access to the nodes. We’ve built ours in a VPC with a VPN connection to our other systems, so we can restrict access to private ranges only. 

Also note that by default, although kube-up will provision a private network for you in AWS, all the nodes end up getting public addresses, along with a security group which allows SSH access to the nodes from anywhere, and HTTP/S to the master from anywhere. This strikes me as a little scary.
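
Tightening this up is a couple of aws CLI calls once you know the security group IDs kube-up created (the group ID and CIDR below are placeholders):

# drop the world-open SSH rule and re-add it restricted to our private/VPN range
aws ec2 revoke-security-group-ingress --group-id sg-0123abcd \
  --protocol tcp --port 22 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-0123abcd \
  --protocol tcp --port 22 --cidr 10.0.0.0/8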

  * Kube-up is in fact far more than just a single script – it downloads a whole tar file of scripts – but let’s keep it simple.