Network Policies

Making use of Kubernetes Network Policies to add security to your clusters

There’s a neat feature in Kubernetes called Network Policies which we’ve been using for a little while now. Network Policies (or NetPols) are much like AWS security groups, but within the kube cluster.

The really great thing about NetPols is that they “understand” the labels you may already be applying to your deployments, ingress controllers, etc. This means you can quite easily group related components, wrap a NetPol around them and limit access to those resources.

A great example would be to define a NetPol which restricts all access to a namespace except for traffic which originates within that namespace. This gives you a great level of isolation at the namespace level. You can then grant ingress to that namespace exclusively through a designated ingress controller on a specific port.

Likewise, you can create a network policy which restricts traffic to a database so that it may only come from pods which actually need to access that database. Using labels in your NetPols provides a really powerful and flexible way of locking down your intra-cluster, inter-namespace traffic for very little cost.
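As a sketch of that database example, a policy along these lines would do it (the names, labels and port here are hypothetical – adjust them to whatever your pods actually carry):

```yaml
# Hypothetical sketch: only pods labelled app=billing may reach the
# database pods in the live namespace, and only on the Postgres port.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: db-access
  namespace: live
spec:
  podSelector:
    matchLabels:
      app: database      # the policy applies to the database pods
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: billing   # only these pods may connect
    ports:
    - port: 5432
      protocol: TCP
```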

While Network Policies are managed through the kube API, much like ingress resources, they’re not actually implemented by kube. Instead you need something which understands these kube NetPols to actually turn them into something which can be enforced.

In our cluster we make use of the Weave plugin. Weave, or something similar (e.g. Calico, Flannel, etc.), tends to come out of the box with kube deployments these days, so the chances are you already have something capable of implementing Network Policies.

Weave is an agent which is deployed onto each kube node. It manages things like inter-pod routing, but because it’s installed at the OS layer, it also has access to manipulate the iptables rules. This is how it implements the access restrictions defined by the network policies. Each policy is converted to a collection of iptables rules, co-ordinated across each machine, which translate the kube labels into something that can be recognised on each machine.

One thing we’re hoping to achieve with Weave Net in the future, though we’ve not tried it yet, is to include standalone hosts outside the kube cluster in the network policy by running the Weave agent on those nodes. I’ll write a future post on how we get on with that!

As you may have picked up, because network policies are implemented using iptables, they operate at layers 3/4 of the OSI model (i.e. IP/TCP), so you can define which IPs/ports may or may not be accessed, but they’re not able to operate on URLs, for example, which live at layer 7.

In order to do that you need to build rules into an ingress controller, or look at something more powerful like a service mesh, e.g. Istio. By combining the two you can build up a great degree of security, with two independent, complementary systems managing access to resources.

So, what does a network policy look like?

The following policy blocks all ingress into a namespace except traffic originating within that namespace (note that while Network Policies do also support controlling egress, we’ve not yet tried this, but it’s on our roadmap!)

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: namespace-isolation-live
  namespace: live
spec:
  podSelector: {}
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: live

The policy is pretty simple:

Allow ingress from resources where the namespace matches “live”, and apply the policy to all pods (the ‘{}’ denotes ALL pods in the namespace).

In order to actually serve traffic from the namespace we have another policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: elb-isolation-live
  namespace: live
spec:
  podSelector:
    matchLabels:
      name: ingress-controller
  ingress:
  - ports:
    - port: 443
      protocol: TCP
    - port: 80
      protocol: TCP

Likewise, this policy is also pretty simple:

Allow ingress from all sources on ports TCP:443 and TCP:80, and apply the policy to pods which have the label ingress-controller (which our ingress controllers do).

Voila – “security groups” within the kube cluster. What are you waiting for? Get securing!

On Continual Education

I never went to university.

Well I did – I went to Leeds to study music – but I dropped out after 3 months to go and join a metal band. And had a rip-roaring time of it too.

During most of my 20 years in tech, I’ve often regretted not sticking it out and getting a proper education with letters after my name and all the trimmings. But to be honest, it hasn’t really held me back – I’ve worked as a consultant for the likes of Sun Microsystems and BEA, I’ve run a couple of tech companies, taught programming and architecture at Oracle University (internationally) and spoken at several conferences.

The thing that has allowed me to do this has been the habit of learning all the time.

When I wanted to make the transition from teaching English to working in tech a couple of decades ago, I started a small web startup and bought Teach Yourself HTML and devoured it. On the train to work, at lunchtime, between lessons, on the train home, in bed…everywhere! And I’ve been learning constantly ever since.

This has not only opened up all sorts of doors for me in my career and taken me all over the world, but has made my life WAY more interesting.

The golden age of learning

The best universities have all got courses online for free, almost free or pretty cheap. What a time to be alive! This means that you’ve got access, right now, to a wealth of instruction from some of the best teachers in the world!

I’m a techie, so I’m going to focus here on how to get a great technical education, but the same applies across a  lot of disciplines.


MIT OpenCourseWare

This is where it all started. In 2002, the Massachusetts Institute of Technology published 50 of their courses online for free. Videos, slides, exercises. All for free.

Their Math for Computer Science course is taught in the same lecture theatre that you’ll have seen Richard Feynman teaching in. The lecturers are truly world-class, and everything that comes out of their mouths seems like the truest thing that’s ever been said.

There are incredible courses on most of the areas you’d expect to see on a top-notch computer science degree, plus all the maths and physics you could ever want.

Check out Gilbert Strang’s seminal course on Linear Algebra.


Coursera

This is the platform that really made MOOCs (Massive Open Online Courses) household names.

Co-founded by AI Boffin and Stanford professor Andrew Ng, Coursera started off with courses from Stanford, and then blossomed to include courseware from several Ivy League Universities. Ng’s Machine Learning course is still one of the best introductions you can get to the field of AI.
What Coursera improves on, above what MIT provides, is that you have access to teaching assistants, a community of fellow students and graded exercises. Since the courses are run over a fixed time-frame (e.g. 11 weeks), you’re given a push to absorb a large amount of new information or skills in a short amount of time. And the fact that you need to complete both exercises and labs means that you end up understanding the material at a really deep level.

My favourite courses from this platform are:


Khan Academy

I don’t know where I’d be without Khan Academy.
It’s actually made for kids, but it’s saved my life, professionally.

Founder Sal Khan is one of the greatest teachers I’ve ever come across.
After getting two bachelor’s degrees, in Maths and CS, from MIT, and an MBA from Harvard, Sal worked at a hedge fund as an analyst for several years before starting to make videos to help his many nephews and nieces with maths. After seeing how popular they were with kids who were not related to him, he quit his job, postponed buying a house and dedicated a year to trying to get Khan Academy off the ground.

Now that it’s had investment from Bill Gates and other large charitable organisations, this not-for-profit foundation has created an amazing education platform to revolutionise the way that kids (and occasionally adults) are taught.

Take any maths or science subject that you’ve always struggled with, and I can almost guarantee that you’ll understand it deeply after completing the KA section on that subject. Do it now! 🙂

Plus it’s a case study in how to really do Gamification well. Check out this dashboard with unlock-your-next-avatar, points, and streak counters.


I do a lot of computer science courses in ML, AI and algorithms, and have to keep running back to KA to bone up on calculus, and the finer points of geometry and probability.

What next?

In a future blog, I’ll talk about some of the premium online education providers like Udacity and the Stanford Center for Professional Development. These providers charge a substantial fee, but offer qualifications from prestigious educational bodies for a fraction of the price of attending the actual universities.

In the meantime, why not carve out 6-8 hours a week for a couple of months and learn something brand new for fun and profit!


The Inaugural Birmingham AI Meetup

Wealth Wizards have been doing AI R&D in the WW lab for about a year, and our AI Guild has recently grown to 12 members. Now that our developments are starting to make their way into production, we decided to join the Birmingham AI community to share what we’ve learned and get some cross-pollination of ideas.

When we found there wasn’t an AI community in Birmingham, we decided to start one!

In the four weeks since we launched the group, around a hundred members have signed up.

We ran our first Meetup in Digbeth – an area which is quickly becoming the Silicon Roundabout of Birmingham. And it was awesome.

As the organiser, I was nervous that no-one would turn up, but as 18:30 approached more and more people arrived and helped themselves to beer and pizza. In the end, we had twenty-one attendees.

We had three fascinating talks from PushDoctor, 383 and Wealth Wizards (represent!) and loads of great discussion, and one thing that struck us is just how engaged everyone was.

Almost everyone there was working in AI, machine learning or data science, and I loved hearing the war stories from actual practitioners in the field.


Holly Emblem from 383 kicked us off with a talk “Real World applications for Bayesian Statistics and Machine Learning”. 383 have been pushing the boundaries of how to use Bayesian classification to enhance their understanding of their web analytics data.
She also gave some warnings about the next potential AI winter.



Then Josh Sephton from PushDoctor shared their experiences from the trenches running a chatbot in anger for the past year. They’d condensed their wisdom down to four key personality traits they’ve found useful to see in a chatbot.

You can see the whole talk here on the new Brum AI YouTube channel.

We hope to get our AV woes sorted out and have videos of all of our talks in the future.


Finally, Kojo Hinson gave an awesome and mind-bending talk on “Understanding the ‘Natural’ in Natural Language Processing”. We hope to publish his slides on Slideshare soon.



Several of the AI Boffins who attended our first Meetup are already signed up to deliver talks at future Meetups. The next one is in Digbeth on the 14th December. Why not sign up and come along!

There’s a waitlist already, but we’ll get a larger venue if enough people sign up 🙂



Wealth Wizards sponsors Silicon Canal

We’re pleased to be ‘Terabyte’ sponsors of Silicon Canal, a not-for-profit organisation whose aim is to create a tech ecosystem in the Midlands. With our HQ in Leamington Spa, we want to encourage tech talent in the area, promote the Midlands as a tech hub and get together with like-minded people. We want to show that if you want a successful career in tech, you don’t have to move to London! We will be involved in and support Silicon Canal’s events throughout the year, including sponsoring the ‘Most Influential Female in Technology’ award at the Silicon Canal Tech Awards.

Spot instances for the win!

Cloud computing is supposed to be cheap, right?

No longer do we need to fork out £5-10k for some silicon and tin, and pay for the space and the power, the cables and the install, and so on. Building in the cloud meant we could provision a host, leave it running for a few hours and remove it when we were done. No PO/finance hoops to jump through, no approvals needed – just provision the host and do your work.

So, in some ways this was true: there was little or no upfront cost, and it’s easier to beg forgiveness than ask permission, right? But we’ve moved on from the times when AWS was a demo environment, or a test site, or something the devs were just toying with. Now it’s common for AWS (or Azure, or GCE) to be your only compute environment, and the bills are much bigger. AWS has become our biggest platform cost, so we’re always looking for ways to reduce our cost commitments there.

At the same time that AWS and the cloud have become mainstream for many of us, so too have microservices, and while the development and testing benefits of microservices are well recognised, the little-recognised truth is that they also cost more to run. Why? Because as much as I may be doing the same amount of ‘computing’ as I was in the monolith (though I suspect we were actually doing less there), each microservice now wants its own pool of memory. The PHP app that we ran happily on a single 2GB server with 1 CPU has now been split out into 40 different components, each with its own baseline memory consumption of 100MB, so I’ve already doubled my cost base just by using a more ‘efficient’ architecture.

Of course, AWS offers many ways of reducing your compute costs with them. There are many flavours of machine available, each with memory and CPU offerings tuned to your requirements. You can get 50%+ savings on the cost of compute power by committing to paying for the system for 3 years (you want the flexible benefits of cloud computing, right?). Beware the no-upfront reservations though – you’ll lose most of the benefits of elastic computing, with very little cost-saving benefits.

You could of course use an alternative provider, Google bends over backward to prove they have a better, cheaper, IaaS, but the truth is we’re currently too in-bed and busy to move provider (we’ve only just finished migrating away from Rackspace, so we’re in no hurry to start again!)

So, how can we win this game? Spot instances. OK, so they may get turned off at any moment, but for the majority of common machine types you will pay around 20% of the on-demand price for a spot instance. Looking at the historical pricing of spot instances also gives you a pretty good idea of how likely it is that a spot instance will be abruptly terminated. If you bid at the on-demand price for a machine – i.e. what you were GOING to pay anyway – but put it on a spot instance instead, you’ll end up paying ~20% of what you would have, and your machine will almost certainly still be there in three months’ time. As long as your bid price remains above the spot price, your machine will stay on and you will pay the spot price, not your bid!
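To put some numbers on that ~20% figure, here’s a quick back-of-envelope sketch (the prices are illustrative, not real AWS figures):

```python
# Back-of-envelope comparison of on-demand vs spot cost for one machine,
# using illustrative prices (not real AWS figures).
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(hourly_price, hours=HOURS_PER_MONTH):
    """Cost of keeping one instance running for a month."""
    return hourly_price * hours

on_demand = monthly_cost(0.10)  # say $0.10/hr on-demand
spot = monthly_cost(0.02)       # spot often sits around 20% of that

print(f"on-demand: ${on_demand:.2f}/month")      # on-demand: $73.00/month
print(f"spot:      ${spot:.2f}/month")           # spot:      $14.60/month
print(f"saving:    {1 - spot / on_demand:.0%}")  # saving:    80%
```

Multiply that saving across a fleet of build agents or test environments and it adds up quickly.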

AWS Spot Price History

What if this isn’t certain enough for you? If you really want to take advantage of spot instances, build your system to accommodate failure and then hedge your bids across multiple compute pools of different instance types. You can also reserve a baseline of machines, which you calculate to be the bare minimum needed to run your apps, and then use spots to supplement that baseline pool in order to give your systems more burst capacity.

How about moving your build pipeline on to spot instances or that load test environment?

Sure, you can’t bet your house on them, but given the right risk approach you can certainly save a ton of money on your compute costs.

How to turn good ideas into great ones

To have a great idea, you first need to have several hundred crazy ideas. This is as true of organisations as it is of people. A culture where people feel free to speak up in meetings and throw crazy ideas about is a really useful thing to develop, as long as we can somehow sort the good from the bad, the great from the good, and the exceptional from the great.

Does your company have a culture of exploring, analysing and weighing new ideas? Typically, wild ideas are treated in one of two ways:

“Mhmm. Sounds good Jeff” (while thinking, that’s a terrible idea, hopefully this guy will stop talking about this soon).

“Yes, that sounds great” [doesn’t really understand]. The book The Mom Test describes how people will often heap superficial praise on the idea, without really understanding it well, because they like you as a person.

Instead of immediately forming an opinion on whether an idea is good or bad, a better approach is to try to put our initial gut-reactions on hold and spend 2 or 3 minutes exploring the concept in a more rigorous way.

A useful way of doing this is to use the ‘dialectic method’, otherwise known as Thesis, Antithesis, Synthesis.

In this paradigm you listen to what is being proposed and take the opposite stance playing ‘devil’s advocate’, to try and find arguments against whatever has been proposed. This criticism creates a tension which you and the proposer then try to resolve. This reconciliation between the two viewpoints usually ends up in something brand new called the ‘synthesis’.

For example:

Thesis: People can only get financial advice from a qualified financial planner, and this is expensive

Antithesis: People don’t need access to a qualified financial planner to get financial advice.

Synthesis: Can we create some software that gives the advice that a qualified financial planner would give, but at a lower price?

Thesis: We should get a ball pit for the office so that people can take naps during the working day

Antithesis: We can’t get a ball pit, the balls will get everywhere and will make the office look terrible

Synthesis: Let’s get a futon!

Thesis: Oil is becoming more and more scarce, we need to develop more efficient engines

Antithesis: Oil is becoming more scarce, but I wonder if developing more efficient engines is the only solution?

Synthesis: Build electric cars

Thesis: No one will buy electric cars, they look like a shoe and perform like a cow

Antithesis: People need to start converting to electric cars

Synthesis: Build a sexy high-end electric supercar to show everyone the possibilities and use the proceeds to fund a mass-production car


How can we practice this as a company? First, someone needs to be the person in the team who starts flinging ideas about like Billy-o. Could this be you?

Then, here are some ideas about how we can use the dialectic method to explore and filter those ideas:

As a giver
If someone presents an idea, immediately take the contrary view. Do it with a cheeky smile, so everyone knows what the game is. Challenge the idea instead of the proposer. Treat it as an intellectual game.

As a receiver
If you have an idea, seek out people who are likely to think it’s a bad idea. Do you know any grumpusses? Go and find them, and pour your heart out. Keep your cool as they stomp your beautiful idea under their vile jackboots and write down all of their pearls of wisdom.  See if you can satisfy all of their beefs. If you can, you have a winner. If you can’t, then maybe we’ve all learned something today.

As a team
Run brainstorming sessions where you produce ideas on post-its. Then as a team rank them by the most controversial. Then explore the top two or three in depth in this paradigm.


TLDR; let’s take the lead in showing how we can challenge ideas with a smile, and use this as a way to create new even better ideas!

Making asynchronous code look synchronous in JavaScript

Why go asynchronous

Asynchronous programming is a great paradigm which offers a key benefit over its synchronous counterpart – non-blocking I/O within a single-threaded environment. This is achieved by allowing I/O operations such as network requests and reading files from disk to run outside of the normal flow of the program. This enables responsive user interfaces and highly performant code.

The challenges faced

To people coming from a synchronous language like PHP, the concept of asynchronous programming can seem both foreign and confusing at first, which is understandable. One moment you were programming one line at a time in a nice sequential fashion, the next thing you know you’re skipping entire chunks of code, only to jump back up to those chunks at some time later. Goto anyone? Ok, it’s not *that* bad.
Then, you have the small matter of callback hell, a name given to the mess you can find yourself in when you have asynchronous callbacks nested within asynchronous callbacks several times deep – before you know it all hell has broken loose.
Promises came along to do away with callback hell, but for all the good they did, they still did not address the issue of code not being readable in a nice sequential fashion.

Generators in ES6

With the advent of ES6, along came a seemingly unrelated paradigm – generators. Generators are a powerful construct, allowing a function to “yield” control along with an (optional) value back to the calling code, which can in turn resume the generator function, passing an (optional) value back in. This process can be repeated indefinitely.

Consider the following function, which is a generator function (note the special syntax), and look at how it’s called:

function *someGenerator() {
  console.log(5); // 5
  const someVal = yield 7.5;
  console.log(someVal); // 10
  const result = yield someVal * 2;
  console.log(result); // 30
}

const it = someGenerator();
const firstResult = it.next();
console.log(firstResult.value); // 7.5
const secondResult = it.next(10);
console.log(secondResult.value); // 20
const thirdResult = it.next(30);
console.log(thirdResult.value); // undefined

Can you see what’s going on? The first thing to note is that when a generator is called, an iterator is returned. An iterator is an object that knows how to access items from a collection, one item at a time, keeping track of where it is in the collection. From there, we call next on the iterator, passing control over to the generator and running code up until the first yield statement. At this point, the yielded value is passed to the calling code, along with control. We then call next, passing in a value, and with it we pass control back to the generator function. This value is assigned to the variable someVal within the generator. This process of passing values in and out of the generator continues, with the console.logs providing a clearer picture of what’s going on.

One thing to note is how value is read from the result of each call to next on the iterator. This is because the iterator returns an object containing two key-value pairs: done and value. done represents whether the iterator is complete; value contains the result of the yield statement.

Using generators with promises

This mechanism of passing control out of the generator, then at some time later resuming control should sound familiar – that’s because this is not so different from the way promises work. We call some code, then at some time later we resume control within a thenable block, with the promise result passed in.

It therefore only seems reasonable that we should be able to combine these two paradigms in some way, to provide a promise mechanism that reads synchronously, and we can!

Implementing a full library to do this is beyond the scope of this article, however the basic concepts are:

  • Write a library function that takes one argument (a generator function)
  • Within the provided generator function, each time a promise is encountered, it should be yielded (to the library function)
  • The library function manages the promise fulfillment, and depending on whether it was resolved or rejected passes control and the result back into the generator function using either next or throw
  • Yielded promises should be wrapped in a try catch
For a full working example, check out a bare-bones library I wrote earlier in the year called awaiting-async, complete with unit tests providing example scenarios.
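To make those steps concrete, here’s a minimal sketch of such a library function (the name run is my own for illustration – it is not the awaiting-async API):

```javascript
// Minimal sketch of a generator-driving library function (hypothetical
// name: run). It resolves each yielded promise and feeds the result back
// into the generator, so the generator body reads synchronously.
function run(genFn) {
  const it = genFn();
  return new Promise((resolve, reject) => {
    function step(result) {
      if (result.done) return resolve(result.value);
      Promise.resolve(result.value).then(
        res => step(it.next(res)),  // resume with the resolved value
        err => {
          try {
            step(it.throw(err));    // let the generator's try/catch handle it
          } catch (e) {
            reject(e);              // the generator didn't catch it
          }
        }
      );
    }
    try {
      step(it.next());
    } catch (e) {
      reject(e);
    }
  });
}
```

Calling run with a generator function returns a promise that resolves with the generator’s return value, and rejected promises surface inside the generator as thrown exceptions.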

How this looks

Using a library such as this (there are plenty of them out there), we can take the following code from this:

const somePromise = Promise.resolve('some value');

somePromise
  .then(res => {
    console.log(res); // some value
  })
  .catch(err => {
    // (Error handling code would go in here)
  });
To this:
const aa = require('awaiting-async');

aa(function *() {
  const somePromise = Promise.resolve('some value');
  try {
    const result = yield somePromise;
    console.log(result); // some value
  } catch (err) {
    // (Error handling code would go in here)
  }
});

And with it, we’ve made asynchronous code look synchronous in JavaScript!
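It’s worth noting that this pattern proved so useful that it was later built into the language itself: async/await, standardised in ES2017, gives you the same synchronous-looking style natively, with no library required. The example above becomes:

```javascript
// The same flow with native async/await (ES2017) – no library needed.
async function main() {
  const somePromise = Promise.resolve('some value');
  try {
    const result = await somePromise;
    console.log(result); // some value
    return result;
  } catch (err) {
    // (Error handling code would go in here)
  }
}

main();
```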


Generator functions can be used in ES6 to make asynchronous code look synchronous.