written by Mark Polson
Wednesday, 06 July 2022
Most weeks I write the Update on a Tuesday night; the last few have been early Wednesday mornings. This is a Tuesday night one and as I write the entire British political system is convulsing in a highly satisfying manner.
This has made it hard to concentrate on financial stuff, but I’ll do what I can and if occasionally you read a ‘HAHAHAHAHA’ then please take pity on me.
(And if you want to know what the financial advice twitterati think, FT Adviser has all the news that’s fit to not print here.)
On with the show, such as it is.
Now, over the last couple of weeks we’ve been showing off the MPS module in Analyser at a couple of demos (available here and here but we’ll put them on YouTube soon). Last week’s was on performance comparisons, and – much like price for platforms – it’s a contentious area.
Here’s what I think after having developed (well, Terry and Abbey developed it) and launched (well, Sam and Natalie launched it) an MPS comparison system: the only way to get a truly accurate MPS performance figure is from a client’s individual performance report.
I’ve looked at everything that’s out there, from specialist stuff the fund analysis powerhouses have done through to provider factsheets through to…well, that’s it really which is a shame as I was building a rhythm there. And nothing can give a completely accurate picture of past performance.
The reasons for this are valid.
First of all, the exact performance a client experiences will always vary from the numbers on a factsheet. This is for all sorts of reasons, but they include portfolio ‘drift’, where investors become detached from the core portfolio weightings depending on when subscriptions arrive.
Some platforms operate a type of floating allocation to minimise this; others believe it pretty much all comes out in the wash. Where you stand on it is up to you.
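If you want a feel for how that drift arises, here’s a very rough sketch (two made-up assets, invented returns and an imaginary top-up; nothing to do with any real portfolio or platform):

```python
# Illustrative only: two made-up assets and invented returns, showing how
# the timing of a top-up pulls a client away from the published model weights.

model = {"equity": 0.60, "bonds": 0.40}        # the weights on the factsheet
returns_h1 = {"equity": 0.08, "bonds": -0.02}  # invented first-half returns

# Client invests 10,000 at the model weights, then markets move.
holdings = {asset: 10_000 * w for asset, w in model.items()}
holdings = {asset: v * (1 + returns_h1[asset]) for asset, v in holdings.items()}

# A mid-year top-up of 5,000 goes in at the *model* weights, but the existing
# holdings have already drifted, so the blend lands somewhere in between.
for asset, w in model.items():
    holdings[asset] += 5_000 * w

total = sum(holdings.values())
weights_now = {asset: round(v / total, 3) for asset, v in holdings.items()}
print(weights_now)  # {'equity': 0.616, 'bonds': 0.384} vs the 60/40 on the tin
```

Run it forward over a few more contributions and withdrawals and every client ends up holding a slightly different version of the ‘same’ model, which is the whole point.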
This also means that the same model may have different performance on different platforms – other factors contributing to this include how rebalancing works, when trades are placed, trading charges and more. And of course not every asset is available on every platform; more little differences.
Systems exist to try and narrow that gap by creating their own bottom-up performance (snark) and in the main they’re good, but of course there are some things they can’t control.
Many folk are exercised about this. “The differences can be huge!” they say; my response is “aye?”. I’m not sure that’s true; where they are huge, something else is usually going on, like a platform charge being included in one calculation and not another. Even then, we’re normally in dancing-on-the-head-of-a-pin territory.
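To put some (entirely invented) numbers on that, here’s roughly what a charge-inclusion mismatch does to a quoted figure:

```python
# Entirely invented numbers: the same portfolio return quoted with and
# without a platform charge, which is where most 'huge' gaps come from.

gross_return = 0.062       # portfolio return before charges (made up)
platform_charge = 0.0025   # 25bps a year, purely illustrative

# Simple subtraction is a rough approximation, which is fine for the point.
print(f"excluding platform charge: {gross_return:.2%}")                    # 6.20%
print(f"including platform charge: {gross_return - platform_charge:.2%}")  # 5.95%
```

A 25bps gap looks dramatic in a league table, but it’s a difference in methodology, not in how the portfolio actually behaved.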
I know past performance is an element of suitability. But it seems strange to me that when a firm has decided to outsource investment construction because, presumably, they lack the time, skills or energy to do it themselves, they then strain every sinew to get performance accurate down to the basis point.
It’s certainly very hard to do that across dozens of providers, scores of ranges and hundreds of portfolios, not to mention across 25 different platforms.
I think suitability exists independently of past performance, in the same way as it exists independently of price – some suitable offerings will have performed well, or have a low price, or both; others the opposite.
Beyond that we’re looking – at least at the first pass – at outliers. If something looks really great or really awful, what’s the reason? Most portfolios work within a risk budget and so tend to cluster anyway.
As we get into the sharp end of Consumer Duty, the need to evidence suitability in a broader sense is going to become ever more important. I think we’re going to have to accept some kind of compromise on parsing past performance for some time to come.
(And if you’d like to see what we mean by ‘a broader sense’ come to this demo on Friday and we’ll go through it).
NIGHT OF THE LONG LINKS
See you next week unless I’ve been appointed to a major office of state
Mark