In short, this blog post is a call for compliance teams to start paying attention to DevOps. For those unfamiliar with DevOps, do take a look at our report on the topic.
More recently, security professionals have been pushing for increased automation and testing of security principles as part of the DevOps process, an approach often referred to as DevSecOps. It takes DevOps and its automation and overlays a security dimension. In the past, security professionals have often served as a governance gate, reviewing code that has already been built and business models that are already well established: too late to have real impact on what is delivered. By automating security tests and security principles, security professionals can influence how something is built, and what is built, while the team is building it. Further, automation frees security professionals to spend their time on higher-value tasks, which is particularly important given the difficulty most insurers have in finding these experts.
Moving away from pure IT development tasks and thinking more about insurance, I can’t help but draw a parallel between how compliance is often applied and how security is applied. Too often compliance is a governance gate, seen by the team building the proposition as another box to tick before go-live, when in fact most compliance tests reflect good business practice or the interests of the customer.
Now, automating compliance testing is tricky. Much like security, there is rarely a clear right and wrong, a black-and-white approach, but there are things the system simply should not do. What complicates compliance testing is that regulatory rules are often stated as constraints, and further, the regulator might decide that something is non-compliant because it violates the spirit of the rule even while satisfying its letter. Let’s work through an example.
In the EU, the Gender Directive states that insurers should not use gender to determine the price of insurance. This would be quite easy to test for: build a data set of varying scenarios in two parts, one with male customers and one with female customers, but otherwise identical. Price the whole set and the price should not vary by gender. Simple? However, this doesn’t test for the spirit of the rule.
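As a minimal sketch, that check could be automated as a test that prices matched pairs of otherwise-identical customers. The `price_quote` function below is a hypothetical stand-in for the insurer’s real rating engine (the attribute names and loadings are invented for illustration); a real test would call the production pricing API instead.

```python
# Sketch: automated check that gender has no direct effect on price.
# `price_quote` is a hypothetical pricing engine; a compliant one
# ignores the gender field entirely, as this toy version does.

def price_quote(customer: dict) -> float:
    """Hypothetical rating engine: base premium plus illustrative loadings."""
    premium = 500.0
    premium += max(0, 25 - customer["age"]) * 40   # young-driver loading
    premium += customer["claims"] * 120            # claims-history loading
    return premium

def test_gender_has_no_price_impact():
    # Scenarios vary only in attributes other than gender.
    scenarios = [
        {"age": 19, "claims": 0},
        {"age": 19, "claims": 2},
        {"age": 45, "claims": 1},
    ]
    for s in scenarios:
        male_price = price_quote({**s, "gender": "M"})
        female_price = price_quote({**s, "gender": "F"})
        assert male_price == female_price, f"price varies by gender for {s}"

test_gender_has_no_price_impact()
```

A test like this slots naturally into a DevOps pipeline: it runs on every change to the pricing logic, failing the build the moment gender leaks directly into a quote.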
With the rise of machine learning and AI in pricing in Europe, some systems have found interesting correlations that are not in the spirit of the Gender Directive. For instance, teenage drivers who are above average height or weight are more likely to crash their car. This reflects two underlying patterns: young male drivers tend to have more crashes than their female counterparts (in most European countries where I have observed the statistics over the last five years), and male drivers of 18 or 19 are typically taller and heavier than female drivers of the same age. So this pricing benefits people who happen to be a little lighter and shorter than the rest of the population, but it still favours female drivers over male drivers. The link to gender is not explicit, but it is not in the spirit of the directive.
Increasingly, it will become hugely important to test that our new developments and adaptive systems comply with the letter of the regulation and observe its spirit, as the local country and the insurer themselves choose to interpret it. This will require compliance teams to think through how they could automate such testing.
In the example above, we could fix the test by varying more of the data. Height, weight, and first name, for instance, may all vary by gender, and we might decide that prices across these variations should be within 5% for different genders. Such a data set and test would demonstrate good will on the insurer’s part to the regulator, while also allowing automated checks that neither an AI nor a human is trying to get around the regulation, intentionally or otherwise.
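A sketch of this broadened test follows: it draws height, weight, and first name from gender-typical distributions (the distributions and names are invented for illustration), prices a sample of each population, and asserts the average prices stay within the 5% band. Again, `price_quote` is a hypothetical engine standing in for the real rating API.

```python
# Sketch: test that gender-correlated proxy attributes (height, weight,
# first name) do not let pricing drift apart by more than 5% between
# genders. All distributions, names, and loadings here are assumptions.
import random

random.seed(42)  # deterministic samples so the test is repeatable

def price_quote(customer: dict) -> float:
    """Hypothetical engine that compliantly ignores gender and its proxies."""
    return 400.0 + max(0, 25 - customer["age"]) * 35

def make_profile(gender: str, age: int) -> dict:
    """Draw a customer whose proxy attributes correlate with gender,
    mirroring the real-world height/weight pattern described above."""
    if gender == "M":
        height, weight = random.gauss(178, 7), random.gauss(78, 9)   # cm, kg
        name = random.choice(["James", "Oliver", "Lukas"])
    else:
        height, weight = random.gauss(165, 6), random.gauss(65, 8)
        name = random.choice(["Emma", "Sofia", "Amelia"])
    return {"gender": gender, "age": age, "height": height,
            "weight": weight, "first_name": name}

def test_proxy_attributes_within_tolerance(tolerance: float = 0.05):
    for age in (18, 19, 25, 40):
        male = [price_quote(make_profile("M", age)) for _ in range(200)]
        female = [price_quote(make_profile("F", age)) for _ in range(200)]
        m_avg, f_avg = sum(male) / len(male), sum(female) / len(female)
        gap = abs(m_avg - f_avg) / min(m_avg, f_avg)
        assert gap <= tolerance, f"prices diverge by {gap:.1%} at age {age}"

test_proxy_attributes_within_tolerance()
```

An engine that had learned the height/weight proxy would fail this test even though it never reads the gender field, which is exactly the spirit-of-the-rule check the paragraph above calls for.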