Network Policies

Making use of Kubernetes Network Policies to add security to your clusters

There’s a neat feature in Kubernetes called Network Policies which we’ve been using for a little while now. Network Policies (or NetPols) are much like AWS security groups, but within the kube cluster.

The really great thing about NetPols is that they “understand” the labels you may already be applying to your deployments, ingress controllers, etc. This means you can quite easily group related components, wrap a NetPol around them and limit access to these resources.

A great example would be to define a NetPol which restricts all access to a namespace except for traffic which originates from that namespace. This gives you a great level of isolation at the namespace level. You can then grant ingress to that namespace exclusively through a designated ingress controller on a specific port.

Likewise, you can create a network policy to restrict traffic to a database so that it may only come from pods which actually need to access the database. Using Kubernetes labels in your NetPols provides a really powerful and flexible way of locking down your intra-cluster, inter-namespace traffic for very little cost.
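As a sketch of what the database example might look like (the label names and port here are illustrative, not taken from our cluster), a policy along these lines only admits traffic from pods carrying an agreed label:

```yaml
kind: NetworkPolicy
  name: database-access
      role: database        # the pods being protected
  - from:
    - podSelector:
          db-access: "true" # only pods opted in with this label get through
    - port: 5432            # e.g. Postgres
      protocol: TCP
```

Any pod without the `db-access: "true"` label is simply dropped at the network layer, with no application changes needed.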

While Network Policies are managed through the kube API, much like ingress resources, they’re not actually implemented by kube. Instead you need something which understands these kube NetPols to actually turn them into something which can be enforced.

In our cluster we make use of the Weave plugin. Weave, or something similar, e.g. Calico, tends to come out of the box with kube deployments these days, so the chances are you already have something which is capable of implementing Network Policies (do check, though – not every network plugin enforces them; Flannel, for example, doesn’t on its own).

Weave is an agent which is deployed onto each kube node. It manages things like inter-pod routing, but because it’s installed at the OS layer, it also has access to manipulate the iptables rules. This is how it implements the access restrictions defined by the network policies. Each policy is converted to a collection of iptables rules, co-ordinated across each machine, which translate the kube labels into something that can be recognised on each machine.

One thing we’re hoping to achieve with Weave Net in the future, though we’ve not tried it yet, is to include standalone hosts outside the kube cluster in the network policy, by running the Weave agent on those nodes. I’ll write a future post on how we get on with that!

As you may have picked up, because network policies are implemented using iptables, they operate at layers 3/4 of the OSI model (i.e. IP/TCP), so you can define which IPs/ports may or may not be accessed, but they’re not able to operate on URLs, for example, which sit at layer 7.

In order to do that you need to build rules into an ingress controller, or look at something more powerful like a service mesh, e.g. Istio. By combining the two you can build up a great degree of security, with two independent, complementary systems managing access to resources.

So, what does a network policy look like?

The following policy will prevent ingress access into a namespace for everything except traffic originating in that namespace (note that while Network Policies do support controlling egress, we’ve not yet tried this, but it’s on our roadmap!):

kind: NetworkPolicy
  name: namespace-isolation-live
  namespace: live
  podSelector: {}
  - from:
    - namespaceSelector:
          name: live

The policy is pretty simple:

Allow ingress from resources where the namespace label matches “live”, and apply this to all pods (the ‘{}’ denotes ALL pods in the namespace).

In order to actually serve traffic from the namespace we have another policy:

kind: NetworkPolicy
  name: elb-isolation-live
  namespace: live
      name: ingress-controller
  - ports:
    - port: 443
      protocol: TCP
    - port: 80
      protocol: TCP

Likewise, this policy is also pretty simple:

Allow ingress from any source, on ports TCP:443 and TCP:80, and apply the policy to pods which have the label matching ingress-controller (which our ingress controllers do).

Voila – “security groups” within the kube cluster. What are you waiting for? Get securing!

On Continual Education

I never went to university.

Well I did – I went to Leeds to study music – but I dropped out after 3 months to go and join a metal band. And had a rip-roaring time of it too.

During most of my 20 years in tech, I’ve often regretted not sticking it out and getting a proper education with letters after my name and all the trimmings. But to be honest, it hasn’t really held me back – I’ve worked as a consultant for the likes of Sun Microsystems and BEA, I’ve run a couple of tech companies, taught programming and architecture at Oracle University (internationally) and spoken at several conferences.

The thing that has allowed me to do this has been the habit of learning all the time.

When I wanted to make the transition from teaching English to working in tech a couple of decades ago, I started a small web startup and bought Teach Yourself HTML and devoured it. On the train to work, at lunchtime, between lessons, on the train home, in bed…everywhere! And I’ve been learning constantly ever since.

This has not only opened up all sorts of doors for me in my career and taken me all over the world, but has made my life WAY more interesting.

The golden age of learning

The best universities have all got courses online for free, almost free or pretty cheap. What a time to be alive! This means that you’ve got access, right now, to a wealth of instruction from some of the best teachers in the world!

I’m a techie, so I’m going to focus here on how to get a great technical education, but the same applies across a lot of disciplines.


MIT OpenCourseWare

This is where it all started. In 2002, the Massachusetts Institute of Technology published 50 of their courses online for free. Videos, slides, exercises. All for free.

Their Math for Computer Science course is taught in the same lecture theatre that you’ll have seen Richard Feynman teaching in. The lecturers are truly world-class, and everything that comes out of their mouths seems like the truest thing that’s ever been said.

There are incredible courses on most of the areas you’d expect to see on a top-notch computer science degree, plus all the maths and physics you could ever want.

Check out Gilbert Strang’s seminal course on Linear Algebra.


Coursera

This is the platform that really made MOOCs (Massive Open Online Courses) household names.

Co-founded by AI boffin and Stanford professor Andrew Ng, Coursera started off with courses from Stanford, and then blossomed to include courseware from several Ivy League universities. Ng’s Machine Learning course is still one of the best introductions you can get to the field of AI.

What Coursera improves on, above what MIT provides, is that you have access to teaching assistants, a community of fellow students and graded exercises. Since the courses are run over a fixed time-frame (e.g. 11 weeks), you’re given a push to absorb a large amount of new information or skills in a short amount of time. And the fact that you need to complete both exercises and labs means that you end up understanding the material at a really deep level.

My favourite courses from this platform are:


Khan Academy

I don’t know where I’d be without Khan Academy.
It’s actually made for kids, but it’s saved my life, professionally.

Founder Sal Khan is one of the greatest teachers I’ve ever come across.
After getting two bachelor’s degrees, in Maths and CS, from MIT, and an MBA from Harvard, Sal worked in a hedge fund as an analyst for several years before starting to make videos to help his many nephews and nieces with maths. After seeing how popular they were with kids who were not related to him, he quit his job, postponed buying a house and dedicated a year to trying to get Khan Academy off the ground.

Now that it’s had investment from Bill Gates and other large charitable organisations, this not-for-profit foundation has created an amazing education platform to revolutionise the way that kids (and occasionally adults) are taught.

Take any maths or science subject that you’ve always struggled with, and I can almost guarantee that you’ll understand it deeply after completing the KA section on that subject. Do it now! 🙂

Plus it’s a case study in how to really do Gamification well. Check out this dashboard with unlock-your-next-avatar, points, and streak counters.


I do a lot of computer science courses in ML, AI and algorithms, and have to keep running back to KA to bone up on calculus, and the finer points of geometry and probability.

What next?

In a future blog, I’ll talk about some of the premium online education providers like Udacity and the Stanford Center for Professional Development. These providers charge a substantial fee, but offer qualifications from prestigious educational bodies for a fraction of the price of attending the actual universities.

In the meantime, why not carve out 6-8 hours a week for a couple of months and learn something brand new for fun and profit!


Wealth Wizards sponsors Silicon Canal

We’re pleased to be ‘Terabyte’ sponsors of Silicon Canal, a not-for-profit organisation whose aim is to create a tech ecosystem in the Midlands. With our HQ in Leamington Spa, we want to encourage tech talent in the area, promote the Midlands as a tech hub and get together with like-minded people. We want to show that if you want a successful career in tech, you don’t have to move to London! We will be involved in and supporting Silicon Canal’s events throughout the year, including sponsoring the ‘Most Influential Female in Technology’ award at the Silicon Canal Tech Awards.

Spot instances for the win!

Cloud computing is supposed to be cheap, right?

No longer do we need to fork out £5-10k for some silicon and tin, and pay for the space and the power, the cables and the install, etc. Building in the cloud meant we could go and provision a host, leave it running for a few hours and remove it when we were done. No PO/finance hoops to jump through, no approvals needed – just provision the host and do your work.

So, in some ways this was true: there was little or no upfront cost, and it’s easier to beg forgiveness than ask permission, right? But the fact is we’ve moved on from the times when AWS was a demo environment, or a test site, or something the devs were just toying with. Now it’s common for AWS (or Azure, or GCE) to be your only compute environment, and the bills are much bigger now. AWS has become our biggest platform cost, so we’re always looking for ways to reduce our cost commitments there.

At the same time that AWS and cloud have become mainstream for many of us, so too have microservices, and while the development and testing benefits of microservices are well recognised, the little-recognised truth is that they also cost more to run. Why? Because as much as I may be doing the same amount of ‘computing’ as I was in the monolith (though I suspect we were actually doing less there), each microservice now wants its own pool of memory. The PHP app that we ran happily on a single 2GB server with 1 CPU has now been split out into 40 different components, each with its own baseline memory consumption of 100MB, so I’ve already doubled my cost base just by using a more ‘efficient’ architecture.

Of course, AWS offers many ways of reducing your compute costs. There are many flavours of machine available, each with memory and CPU offerings tuned to your requirements. You can get 50%+ savings on the cost of compute power by committing to paying for the system for 3 years (you want the flexible benefits of cloud computing, right?). Beware the no-upfront reservations though – you’ll lose most of the benefits of elastic computing, with very little cost-saving benefit.

You could of course use an alternative provider – Google bends over backwards to prove they have a better, cheaper IaaS – but the truth is we’re currently too in bed and too busy to move provider (we’ve only just finished migrating away from Rackspace, so we’re in no hurry to start again!).

So, how can we win this game? Spot instances. OK, so they may get turned off at any moment, but the fact is that for the majority of common machine types you will pay around 20% of the on-demand price for a spot instance. Looking at the historical pricing of spot instances also gives you a pretty good idea of how likely it is that a spot instance will be abruptly terminated. The fact is, if you bid at the on-demand price for a machine – i.e. what you were GOING to pay anyway – but put it on a spot instance instead, you’ll end up paying ~20% of what you would have, and your machine will almost certainly still be there in 3 months’ time. As long as your bid price remains above the spot price, your machine will stay on and you will pay the spot price, not your bid!
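To put rough numbers on that bid-at-on-demand approach (the prices below are illustrative examples, not current AWS rates):

```javascript
// Rough monthly cost comparison, assuming ~730 hours in a month.
// Prices are illustrative examples only, not real AWS rates.
const HOURS_PER_MONTH = 730;

function monthlyCost(hourlyPrice) {
  return hourlyPrice * HOURS_PER_MONTH;
}

const onDemand = 0.10;       // $/hour – assumed on-demand price
const spot = onDemand * 0.2; // ~20% of on-demand, as described above

// Bid at the on-demand price, pay the spot price:
console.log(monthlyCost(onDemand).toFixed(2)); // "73.00"
console.log(monthlyCost(spot).toFixed(2));     // "14.60"
```

The bid only caps what you’re willing to pay; while the spot price stays below it, the spot price is what lands on the bill.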

AWS Spot Price History

What if this isn’t certain enough for you? If you really want to take advantage of spot instances, build your system to accommodate failure and then hedge your bids across multiple compute pools of different instance types. You can also reserve a baseline of machines, which you calculate to be the bare minimum needed to run your apps, and then use spots to supplement that baseline pool in order to give your systems more burst capacity.

How about moving your build pipeline on to spot instances or that load test environment?

Sure, you can’t bet your house on them, but with the right risk approach you can certainly save a ton of money on your compute costs.

Making asynchronous code look synchronous in JavaScript

Why go asynchronous

Asynchronous programming is a great paradigm which offers a key benefit over its synchronous counterpart: non-blocking I/O within a single-threaded environment. This is achieved by allowing I/O operations such as network requests and reading files from disk to run outside the normal flow of the program. This enables responsive user interfaces and highly performant code.

The challenges faced

To people coming from a synchronous language like PHP, the concept of asynchronous programming can seem both foreign and confusing at first, which is understandable. One moment you were programming one line at a time in a nice sequential fashion; the next thing you know you’re skipping entire chunks of code, only to jump back up to those chunks some time later. Goto, anyone? OK, it’s not *that* bad.
Then you have the small matter of callback hell, the name given to the mess you can find yourself in when you have asynchronous callbacks nested within asynchronous callbacks several levels deep – before you know it, all hell has broken loose.
Promises came along to do away with callback hell, but for all the good they did, they still did not make code readable in a nice sequential fashion.

Generators in ES6

With the advent of ES6, along came a seemingly unrelated paradigm – generators. Generators are a powerful construct, allowing a function to “yield” control along with an (optional) value back to the calling code, which can in turn resume the generator function, passing an (optional) value back in. This process can be repeated indefinitely.

Consider the following function, which is a generator function (note the special syntax), and look at how it’s called:

function *someGenerator() {
  console.log(5); // 5
  const someVal = yield 7.5;
  console.log(someVal); // 10
  const result = yield someVal * 2;
  console.log(result); // 30
}

const it = someGenerator();
const firstResult =;        // logs 5
console.log(firstResult.value); // 7.5
const secondResult =;     // logs 10
console.log(secondResult.value); // 20; // logs 30

Can you see what’s going on? The first thing to note is that when a generator is called, an iterator is returned. An iterator is an object that knows how to access items from a collection, one item at a time, keeping track of where it is in the collection. From there, we call next on the iterator, passing control over to the generator, and running code up until the first yield statement. At this point, the yielded value is passed to the calling code, along with control. We then call next, passing in a value, and with it we pass control back to the generator function. This value is assigned to the variable someVal within the generator. This process of passing values in and out of the generator continues, with the console.logs providing a clearer picture of what’s going on.

One thing to note is how we read value from the result of each call to next on the iterator. This is because the iterator returns an object containing two key-value pairs: done and value. done represents whether the iterator is complete; value contains the result of the yield statement.

Using generators with promises

This mechanism of passing control out of the generator, then at some time later resuming control should sound familiar – that’s because this is not so different from the way promises work. We call some code, then at some time later we resume control within a thenable block, with the promise result passed in.

It therefore only seems reasonable that we should be able to combine these two paradigms in some way, to provide a promise mechanism that reads synchronously, and we can!

Implementing a full library to do this is beyond the scope of this article, however the basic concepts are:

  • Write a library function that takes one argument (a generator function)
  • Within the provided generator function, each time a promise is encountered, it should be yielded (to the library function)
  • The library function manages the promise fulfillment, and depending on whether it was resolved or rejected passes control and the result back into the generator function using either next or throw
  • Yielded promises should be wrapped in a try catch
For a full working example, check out a bare-bones library I wrote earlier in the year called awaiting-async, complete with unit tests providing example scenarios.

How this looks

Using a library such as this (there are plenty of them out there), we can take code from this:

const somePromise = Promise.resolve('some value');

  .then(res => {
    console.log(res); // some value
  .catch(err => {
    // (Error handling code would go in here)
To this:
const aa = require('awaiting-async');

aa(function *() {
  const somePromise = Promise.resolve('some value');
  try {
    const result = yield somePromise;
    console.log(result); // some value
  } catch (err) {
    // (Error handling code would go in here)
  }
});

And with it, we’ve made asynchronous code look synchronous in JavaScript!


Generator functions can be used in ES6 to make asynchronous code look synchronous.

What went wrong? Reverse-engineering disaster

Last week, we nearly pushed a bad configuration into production, which would have broken some things and made some code changes live that were not ready. Nearly, but not quite: while we were relieved that we’d caught it in time, it was still demoralising to find out how close we had come to trouble, and a few brave souls had to work into the evening to roll back the change and make it right.

Rather than shouting and pointing fingers, the team came together, cracked open the Post-Its and Sharpies and set to engineering. The problem to be solved: what one thing could we change to make this problem less likely, or less damaging?

What happened?

The first step was for the team to build a cohesive view of what happened. We did that by using Post-Its on the wall to construct a timeline: everybody knew what they individually had done and had seen, and now we could put all of that together to describe the sequence of events in context. Importantly, we described the events that occurred not the people or feelings: “the tests passed in staging” not “QA told me there wouldn’t be a problem”.

Yes, the tests passed, but was that before or after code changes were accepted? Did the database migration start after the tests had passed? What happened between a problem being introduced, and being discovered?

Why was that bad?

Now that we know the timeline, we can start to look for correlation and insight. So the tests passed in staging, is that because the system was OK in staging, because the tests missed a case, because the wrong version of the system ran in testing, or because of a false negative in the test run? Is it expected that this code change would have been incorporated into that migration?

The timeline showed us how events met our expectations (“we waited for a green test run before starting the deployment”) or didn’t (“the tests passed despite this component being broken”, “these two components were at incompatible versions”). Where expectations were not met, we had a problem, and used the Five Whys to ask what the most…problemiest…problem was that led to the observed effect.

What do we need to solve?

We came out of this process with nine different things that contributed to our deployment issue. Nine problems are a lot to think about, so which is the most important or urgent to solve? Which one problem, if left unaddressed, is most likely to go wrong again or will do most damage if it does?

More sticky things were deployed as we dot-voted on the issues we’d raised. Each member of the team was given three stickers to distribute across the one to three issues that seemed highest priority to solve: if one’s a stand-out catastrophe, you can put all three dots on that issue.

This focused us a great deal. After the dots were counted, one problem (gaps in our understanding of what changes went into the deployment) stood out above the rest. A couple of other problems had received a few votes, but weren’t as (un)popular: the remaining six issues had zero or one dot each.

I got one less problem without ya

Having identified the one issue we wanted to address, the remaining question was how? What shall we do about it? The team opted to create a light-weight release checklist that could be used in deployment to help build the consistent view we need of what is about to be deployed. We found that we already have the information we need, so bringing it all into one place when we push a change will not slow us down much while increasing our confidence that the deployment will go smoothly.

A++++ omnishambles; would calamity again

The team agreed that going through this process was a useful activity. It uncovered some process problems, and helped us to choose the important one to solve next. More importantly, it led us to focus on what we as a team did to get to that point and what we could do to get out of it, not on what any one person “did wrong” and on finding someone to blame.

Everyone agreed that we should be doing more of these root cause analyses. Which I suppose, weirdly, means that everybody’s looking forward to the next big problem.

Using Ansible with WordPress

WordPress is a great tool to use when creating websites as it provides flexibility when managing content.

As you may be aware, one of the operational downsides of managing websites run on WordPress is how frequently new releases come out to patch vulnerabilities. This comes with the overhead and cost of having to upgrade your WordPress instance every other week.

As we are living in a world that thrives on automation, here at Wealth Wizards we thought it would be a good idea to automate the upgrade process using configuration management tools like Ansible, combined with the power of the AWS APIs.

As our WordPress sites are deployed in AWS, we decided to use the AWS APIs to provision instances, manage snapshots, and configure and apply security groups. We then use various Ansible modules to install packages, update configs, encrypt and decrypt files pushed to and retrieved from AWS S3, and change permissions on files and directories as part of the upgrade process.
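To give a flavour of the Ansible side (a sketch only – the module arguments, bucket name and paths below are illustrative, not our actual playbook), a few upgrade tasks might look like this:

```yaml
# Illustrative tasks from a WordPress upgrade play (names/paths are examples)
- name: Snapshot the data volume before upgrading
    instance_id: "{{ instance_id }}"
    device_name: /dev/xvda
    description: pre-upgrade snapshot

- name: Fetch the encrypted config backup from S3
    bucket: example-wordpress-backups
    object: wp-config.php.enc
    dest: /tmp/wp-config.php.enc
    mode: get

- name: Download and unpack the new WordPress release
    src: ""
    dest: /var/www/html
    remote_src: yes

- name: Reset ownership and permissions on the web root
    path: /var/www/html
    state: directory
    owner: www-data
    group: www-data
    recurse: yes
```

Because each task is idempotent, the same play can be re-run safely, and running it against a group of hosts is what lets us upgrade multiple instances at once.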

Switching from the traditional method of manually moving files using plugins and bash commands to an automated approach has allowed us to gain more control over our upgrades, as well as reduce the time an upgrade takes from a day to around two hours, with most of that being dedicated to AWS provisioning. Automating the process with Ansible has also given us the ability to upgrade multiple instances at once, rather than the traditional method of doing one instance at a time.