Taming the Beast

‘Security is a journey’ – we’ve all heard it said, but how many of us believe it, and who knows where they’re trying to go? I think we do, and that’s our next audit: we want to breeze through each audit like passing street lamps on a motorway.

At Wealth Wizards we deal with personal data. We provide financial guidance to customers, and to do this we need customers’ personal data – their valuable personal data. Not just address and email (which are actually considered freely available, or business card data) but information on savings, investments, tax details, health conditions, etc. We don’t store credit card or bank details, but we do have all the really personal stuff. What this means is that over time we will build up a large dataset of things that con men, attackers, villains and others want to get hold of. We know this is valuable, as do our customers, and we want our customers to trust us. We want to instil confidence that when a customer tells us something, it’s private and remains so. So we know security is important: no matter how much effort we put into building up our business, if the data is stolen and exposed it could be our downfall.

One of the best ways for us to show prospective clients and customers that we’re serious about security is to show our credentials and accreditation – evidence that we have a rigorous process that stands up to a rigorous audit. ISO 27001 is designed to do just this, which is why we are working towards achieving it this year. However, anyone who knows ISO 27001 will know it’s a beast and not for the faint of heart, so the trick is learning how to use ISO to our advantage instead of letting it work against us. We can use ISO as a framework to build up the policies and processes we use as a business. Instead of trying to fight it, we’re going to make it help us.

We don’t just want to bolt security on to what we’ve done; we want to build security into what we do. We deal with a lot of big companies, and when we’re selling our products and things are getting close to signing contracts, those big companies (we’re talking tens of thousands of employees) start asking us about our processes, about our data security and, more importantly in their eyes, their data security. In other words, they start auditing us. No one likes an audit, but if you can show an auditor that you do care about things and that you do have processes, then they tend to avoid asking the really difficult questions. And when you really do care, it doesn’t matter if they do ask the difficult questions, because you have an answer for them.

I’m currently going through the ‘Technical Measures’ questions with our team here and it feels endless: how can I prove that we did X, how can I prove why we did Y, how can I show what something looked like on this date compared to that date? Those are difficult questions to answer at the best of times, but more so when you’re running in an elastic environment where a server instance may only exist for a day or two. What’s becoming apparent, though, is that ISO is asking questions I actually want answered myself, regardless of which certification we go for. I, as a sysadmin, want to have a record of what happened, when and why. I also want to know that something happened because we made it happen. If I know this then I can start to answer questions about why something doesn’t work at 3 am. So already I’m starting to find that while ISO is a beast, it can be tamed into a friendly beast. On our path to ISO, we will build the framework that defines the tasks we need to do to build security into our platform – the framework that shows the auditors what they want to see, as well as the meat behind it to prove it’s not just paperwork.

By doing a true risk assessment of our business and technical environment, we start to build an accurate picture of our weaknesses, both in our security and in our processes. Once we have identified these, we can start to build suitable responses. It looks overwhelming to begin with, but before long it becomes clear that the automation we’re building to allow hands-off delivery of our applications is also the solution we need to record what was deployed, when and why. The automation scripts are the perfect mechanism for building these audit trails, rather than relying on someone to manually ensure these actions are identified!

How do we ensure there is a separation of concerns? That no one is putting back-door code into production? Peer review of the code (both application and infrastructure) allows us to enforce this programmatically! Suddenly ISO has become my friend. Sure, it’s still a beast, but it’s not blocking our delivery; it’s helping to define what processes we need, and therefore it’s starting to write our automation algorithms. How cool is that!? OK, perhaps cool is a little strong.

So while we’re still very much en route, I’m confident we’re on the right path and that the next audit will be us proving we’re secure, not hiding the things we don’t want seen. Don’t be afraid of the beast called ISO; embrace it and use it to your advantage.

Be part of our engineering team

If you’re a software developer who wants to swap the urban jungle of London for the rolling hills of Warwickshire, look no further than Wealth Wizards. You’ll join a dynamic team of clever people, be absorbed in an energetic atmosphere, and benefit from an excellent work-life balance. Plus, you’ll be working in a team that’s breaking ground in the world of robo advice and artificial intelligence.

If Wealth Wizards sounds like your sort of company, take a look at our careers page for all the info.

Are you based in the Midlands already? Do you like beer? And DevOps? Come to our DevHops MeetUp.

Using Ansible with WordPress

WordPress is a great tool to use when creating websites as it provides flexibility when managing content.

As you may be aware, one of the operational downsides of managing websites run on WordPress is how frequently new releases come out to patch vulnerabilities. This brings the overhead, pain and cost of having to upgrade your WordPress instance every other week.

As we are living in a world that thrives on automation, here at Wealth Wizards we thought it would be a good idea to automate the upgrade process using a configuration management tool like Ansible, combined with the power of the AWS APIs.

As our WordPress sites are deployed in AWS, we decided to use the AWS APIs to provision instances, manage snapshots, and configure and apply security groups. We then used various Ansible modules to install packages, update configs, encrypt and decrypt files pushed to and retrieved from AWS S3, and change permissions on files and directories as part of the upgrade process.
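To give a flavour of what those tasks look like, here is a heavily simplified, hypothetical playbook fragment – the host group, bucket name and paths are made up for illustration and are not taken from our real playbooks:

```yaml
# Hypothetical WordPress upgrade tasks; all names and paths are illustrative
- hosts: wordpress
  become: true
  tasks:
    - name: Ensure web packages are at the latest patched versions
      ansible.builtin.apt:
        name: [nginx, php-fpm]
        state: latest

    - name: Pull the pre-upgrade site snapshot back from S3
      amazon.aws.s3_object:
        bucket: example-wp-backups
        object: site-backup.tar.gz
        dest: /tmp/site-backup.tar.gz
        mode: get

    - name: Reset ownership and permissions on the web root after the upgrade
      ansible.builtin.file:
        path: /var/www/html
        state: directory
        owner: www-data
        group: www-data
        recurse: true
```

Because each step is a declarative task rather than a hand-typed command, the same playbook can be pointed at one instance or a whole inventory of them.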

Switching from the traditional method of manually moving files using plugins and bash commands to an automated approach has given us more control over our upgrades and reduced the time taken from a day to around two hours, most of which is AWS provisioning. Automating the process with Ansible also lets us upgrade multiple instances at once, rather than one instance at a time.

Microservices make hotfixes easier

Microservices can ease the pain of deploying hotfixes to live due to the small and bounded context of each service.

Setting the scene

For the sake of this post, imagine that your system at work is written and deployed as a monolith. Now picture the following situation: a stakeholder says, “I need this fix in before X, Y, and Z”. It’s not an uncommon one.
But let’s say that X, Y, and Z are all already in the mainline branch and deployed to your systems integration environment. This presents a challenge. There are various ways you could go about approaching this – some of them messier than others.

The nitty gritty

One approach would be to individually revert the X, Y, and Z commits in Git, implement the hotfix straight onto the mainline, and deploy the latest build from there. Then, when ready (and your hotfix has been deployed to production), you would need to go back and individually revert the reverts. A second deployment would be needed to bring your systems integration environment back to where it was (now with the hotfix in there too), and life carries on. Maybe there are better ways to do this, but one way or another it’s not difficult to see how much of a headache this can potentially cause.
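The revert dance above can be sketched in a throwaway repo. The file and commit names here are purely illustrative, and each “feature” touches its own file so the reverts apply cleanly – in a real monolith the overlapping changes are exactly what makes this painful:

```shell
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.com
git config user.name Dev

# Mainline with three unreleased features, X, Y and Z
echo base > app.txt && git add . && git commit -qm "base"
for f in X Y Z; do
  echo "$f" > "feature_$f.txt" && git add . && git commit -qm "feature $f"
done

# 1. Individually revert the unreleased feature commits (newest first)
z=$(git rev-parse HEAD); y=$(git rev-parse HEAD~1); x=$(git rev-parse HEAD~2)
git revert --no-edit "$z"
git revert --no-edit "$y"
git revert --no-edit "$x"

# 2. Implement the hotfix straight onto the mainline and deploy from here
echo fix > hotfix.txt && git add . && git commit -qm "hotfix"

# 3. Once the hotfix is live, revert the reverts to restore X, Y and Z
rx=$(git rev-parse HEAD~1); ry=$(git rev-parse HEAD~2); rz=$(git rev-parse HEAD~3)
git revert --no-edit "$rx"
git revert --no-edit "$ry"
git revert --no-edit "$rz"

# History now reads: base, 3 features, 3 reverts, hotfix, 3 reverts-of-reverts
git log --oneline
```

Eleven commits and two deployments to ship one fix – which is the headache the next section avoids.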

Microservices to the rescue!

But then you remember that you are actually using microservices and not a monolith after all. After checking, it turns out that X, Y and Z are all changes to microservices not affected by the hotfix. Great!
Simply fix the microservice in question and deploy this change through your environments ahead of the microservices containing X, Y, and Z – et voilà. To your stakeholders it looks like a hotfix, but to you it just felt like every other release!


Of course, you could still end up in a situation where a change or two needs to be backed out of one or more of your microservice mainlines in order for a hotfix to go out. However, I’m betting it will not only happen less often, it will also be less of a headache than with your old monolith.


Mars Attacks!!! Ack, Ack-Ack!

Last Tuesday we saw our first recognised DDoS attack. At 12:09 GMT we started to see an increase in XML-RPC GET requests against our marketing site, hosted on WordPress. We don’t serve XML-RPC, so we knew from the start this was non-valid traffic.

By 12:11 GMT traffic volumes were well above what the system could handle and the ELBs started to return 503 responses. By 12:20 GMT the request rate was over 250 times higher than usual. At this point we were trying to establish what was causing the demand. We don’t currently have the highest coverage of monitoring over our marketing sites, so this took us a little while. Eventually, by 12:30, using the ELB logs, we had established we were seeing requests from all over the world, all making GET requests to /xmlrpc.php. We don’t typically see requests from China, Serbia, Thailand and Russia, among others, so it was pretty obvious this was a straightforward DDoS attack.
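As a rough illustration of that triage, something along these lines counts hits on /xmlrpc.php per client IP. The log lines are fabricated samples in the classic (pre-ALB) ELB access-log format, and the field positions assume that format – this is a sketch, not our actual tooling:

```shell
# Two fabricated sample lines in the classic ELB access-log format
cat > elb-access.log <<'EOF'
2017-06-20T12:10:03.268Z my-elb 203.0.113.7:54321 10.0.0.5:80 0.000 0.001 0.000 503 503 0 0 "GET http://example.com:80/xmlrpc.php HTTP/1.1" "-" - -
2017-06-20T12:10:04.100Z my-elb 198.51.100.9:40000 10.0.0.5:80 0.000 0.001 0.000 200 200 0 512 "GET http://example.com:80/ HTTP/1.1" "-" - -
EOF

# Field 3 is client:port and field 13 the request URL; count /xmlrpc.php
# requests per client IP, busiest sources first
awk '$13 ~ /xmlrpc\.php/ {split($3, a, ":"); print a[1]}' elb-access.log \
  | sort | uniq -c | sort -rn
```

Fed the real logs, a list like this makes the geographic spread of the botnet obvious in seconds.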

Shortly after 12:30 GMT the request rate dropped off just as quickly as it had started, and by 12:35 GMT it was over and the site had recovered. Either the botnet got bored, it had achieved its purpose (investigation into the consequences of the attack continues with our security partner), or AWS Shield did its free, little-known job and suppressed the attack…

Whatever led to the attack, it passed as quickly as it arrived, and from initial assessment had little purpose. At least we’ve had our first taste of an attack and will be able to better tackle the next one. In the meantime, we continue to analyse logs to determine if there was any more to the attack than a simple DDoS, or if there was something more malicious intended.

Automated financial advice – it’s more complicated than you think!

As an engineering problem, it’s deceptively simple to explain how Wealth Wizards gives financial advice: we take data from an applicant, process it, and then generate some artefacts that explain what we’ve recommended. Sounds easy enough, eh? Not quite…

First off, it’s worth noting that financial advice is hard! Giving out the wrong financial advice can bring people to financial ruin and no-one wants to be responsible for that. So producing advice that’s suitable to an applicant’s circumstances (and fully-regulated by the FCA) is essential.

So in engineering our advice we first need to find all the relevant facts about an applicant’s circumstances so we’re not making a decision based on incomplete or inaccurate information. Once we’ve done that, we analyse whether any of those facts might make automated advice unsuitable for our applicant – although we aim to provide instant advice to all of our end-users, there are always going to be individuals with special circumstances which are more suited to a human adviser who can craft a suitable outcome. If the facts strongly support providing automated advice then we look at current market conditions and the range of options we can take, and map the best set of outcomes to the applicant’s personal and financial circumstances.

And in cases where we can’t provide automated advice, we need to be able to pass what we’ve learned on to qualified advisers so they can take the case forward without starting from scratch.

The advice process needs to suit today’s financial realities, but also be suitable and reusable for tomorrow across a range of activities.

One thing we always need is engagement from our applicant throughout the process. Putting your personal preferences and financial history into a computer can be a long and drawn out process and it’s very easy to lose a user as they move through it, so we adhere to UX practices that allow it to be done in small, two-way interactive chunks that engage the user with graphical feedback to show how we’ve interpreted their situation.

We also need everything to be performant, because there’s nothing worse than typing loads of information into a system and then having to wait while a spinner rolls on and on. We aim to make our units of functionality small so they can be fast and unconstrained by other components of our system. This also helps us release changes to the advice process to production much more quickly, and having an efficient DevOps culture enables us to deliver that change.

We ensure the information we’re dealing with is stored, processed and delivered with measures that wouldn’t be out of place in a high street bank. Aside from the standard practices of encryption at rest, strong access control for our staff and a strict separation of concerns between developers and operations for applicant data, we also run internal security testing to ensure our authentication and security model is as tight as possible.

Everything we open to our applicants is tested and approved by the best software engineers and financial advisers in the UK.

We develop our products in agile teams. Agility is important: when government regulations change, we need to respond appropriately and deploy that change as quickly as possible to ensure our advice remains suitable. On top of that, we ensure every advice outcome is carefully audited. In case we need to trace the decision-making process behind a piece of advice, we need a high degree of understandable documentation outlining the rationale for giving it. This is especially useful as we often release several iterations of our advice model in a short amount of time, so understanding which versions of our systems produced a piece of advice is critical for compliance activities.

So all in all there’s a lot that goes into providing advice that requires top-notch engineering practices. As a technology firm we’re always evolving our practices as well as our products, and we love anything that makes giving our end-users the best outcomes we possibly can.