The conundrum of Robo Responsibility

Panacea comment for Financial Advisers and Paraplanners

21 Nov 2017


Earlier this month, Professor Stephen Hawking issued a chilling warning about the imminent rise of artificial intelligence. In the interview, Professor Hawking warned that AI will soon reach a level where it will be a ‘new form of life that will outperform humans’.

There is a move afoot to bring the delivery of financial advice into the 21st century. After all, with the smartphone, the tablet and virtual reality all breaking through boundaries, why should financial advice not find itself in the vanguard of change?

It should work, and could work, but it will not work until something very simple, yet clearly requiring a considerable volte-face, takes place.

So, here’s a thought for you lovers of Steve Jobs and even Ned Ludd.

This may take a little of your time but bear with me please.

Steve Jobs reckoned that “Older people sit down and ask, ‘What is it?’ but the boy asks, ‘What can I do with it?’”

Smart technology exists and is readily available in the average home. Algorithm-based analytics are there, right now, to give the mass market an automated way for the average family to self-medicate their financial ailments and prescribe a solution.

This already happens in many areas of web-based life, so why not in financial services?

The elephant in the room of progress is the word ‘advice’. In the financial services world, where products are delivered, sold or distributed through the intermediated channel, the buck of responsibility always stops with the financially weakest part of the process: the advisory firm.

Product failure, rather like design failure in modern airliners, is apparently unheard of: when an aircraft crashes, the blame is almost always directed at the pilot.

Robo or automated solutions should work; it is all in the ‘math’. Very complicated algorithms drive the customer to a very specific outcome.
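To make the point concrete, here is a purely hypothetical sketch of the kind of rule-based logic a robo model might use: questionnaire answers are scored, and the score maps deterministically to one model portfolio. All function names, thresholds and weightings below are illustrative assumptions, not any real firm's algorithm.

```python
# Hypothetical robo-advice scoring sketch -- illustrative only.

def risk_score(age: int, horizon_years: int, loss_tolerance: int) -> int:
    """Combine questionnaire answers into a single score (0-100).

    loss_tolerance is the client's self-rated comfort with losses (1-5).
    Younger clients and longer horizons push the score up.
    """
    score = max(0, 60 - age) + min(horizon_years, 20) + loss_tolerance * 4
    return min(score, 100)

def recommend_portfolio(score: int) -> str:
    """Map the score to a fixed outcome -- the 'very specific outcome'."""
    if score < 20:
        return "cautious"
    elif score < 45:
        return "balanced"
    else:
        return "adventurous"

print(recommend_portfolio(risk_score(age=30, horizon_years=15, loss_tolerance=4)))
# prints "adventurous"
```

The point is that the outcome is fully determined by the programme: the same inputs always produce the same recommendation, which is precisely why responsibility for a glitch in that logic is the issue.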

This is where it gets complicated, because should the algorithm prove, in five, ten or fifteen years, to have had an unforeseen glitch, regulatory retrospective retribution will rain down on the advisory firm, not the maker of the programme.

There is a simple solution to a complex problem.

That is to have the algorithms certified as fit for the purpose for which they were designed.

Fit-for-purpose accreditation already exists in other areas of regulation. Aircraft cannot fly in UK airspace without CAA approval. Drugs are certified as fit for purpose and prescription by the Medicines & Healthcare products Regulatory Agency.

So why can the FCA not approve automated advice models as fit for purpose?

The answer, according to Andrew Mansley at the FCA, whom I spoke to at some length at the PFS Festival, is that it would be “anti-competitive”.

What?!

There are examples of this statement being used to create chaos and detriment in this industry; the Maximum Commission Agreement springs to mind. For those new to the world of financial services, this is an essential read.

For those without enough time served in this industry: from the late eighties, larger distribution channels sought increased commission levels after the OFT abolished the Maximum Commission Agreement (MCA) on the grounds that it was anti-competitive.

I suspect the real reason is that, in the words of Hector Sants (apparently not known to Mr. Mansley), “if the regulator was to take responsibility for its actions, nobody would want to do the job”.

The FCA needs to consider the following simple steps to improve the embrace of automated opportunities.

  1. All robo models should apply to the FCA for approval; that approval would certify what the programme can and cannot do, rather like the type approval of a fully automated vehicle
  2. The FCA approval would apply to both the algorithms and the programme
  3. Any changes or upgrades would require a certification upgrade
  4. The robo technology provider, not the adviser firm, would require PI cover for any unforeseen failures
  5. The advisory firm would NOT be responsible for any advice or guidance failure of the robo programme, as part of the FCA sign-off
  6. In October last year, Professor Stephen Hawking warned that artificial intelligence could develop a will of its own that is in conflict with that of humanity. With this in mind, the advice responsibility buck stops with the technology provider and not the adviser
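The "certify, then re-certify on any change" idea in the steps above can be sketched in code. This is a hypothetical illustration, not any real FCA process: the approved programme is fingerprinted with a hash, and the running version is checked against that fingerprint before advice is given. The register and all names here are invented for the example.

```python
# Hypothetical certification-check sketch -- illustrative only.
import hashlib

# Imagine a regulator publishing the hash of each approved algorithm version.
approved_register = {}

def certify(name: str, algorithm_source: str) -> str:
    """Record the approved version's fingerprint (the 'sign-off')."""
    digest = hashlib.sha256(algorithm_source.encode()).hexdigest()
    approved_register[name] = digest
    return digest

def is_certified(name: str, algorithm_source: str) -> bool:
    """Any change to the source -- even one character -- fails the check,
    which would trigger a certification upgrade (step 3 above)."""
    digest = hashlib.sha256(algorithm_source.encode()).hexdigest()
    return approved_register.get(name) == digest

v1 = "score = max(0, 60 - age)"
certify("robo-model", v1)
print(is_certified("robo-model", v1))           # True: approved version
print(is_certified("robo-model", v1 + " + 1"))  # False: upgrade needed
```

The design choice is that certification attaches to an exact version of the algorithm, so responsibility for an approved version can sit with whoever signed it off rather than with the adviser who merely runs it.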


Put these in place, and both the regulator and the software house would think very carefully about failure, while the adviser could engage with more consumers, confidence restored.

We can always dream.
