What makes advisory portfolio managers believe they can provide better returns than a professionally run, risk-targeted multi-asset fund? And how would we know if they were better?
There is no central repository of advisory or discretionary fund manager (DFM) portfolio performance data that is publicly visible and accurate, let alone complete.
The Financial Conduct Authority (FCA) says authorised fund prices have to be made public in an appropriate manner. This could be in a national newspaper, on the internet or via a publicly available database of prices. Previous prices must also be made available.
Shrouded in secrecy
However, model portfolio managers, whether advisory or DFMs, do not have to abide by this.
Investors get their valuations with limited frequency, and they have no idea how their manager is faring relative to others, since portfolio data is not harvested, let alone published.
Many such portfolios do not even offer investors factsheets showing performance. Those that do are under no obligation to feature a performance comparator or benchmark.
Some DFMs will argue they offer performance data to third parties such as Asset Risk Consultants (ARC) and FE Transmission. However, this data is only available to advisers.
Many smaller DFMs do not use these databases because they are deemed too expensive. Since adviser clients are captive, there appears to be no need to publish performance through ARC and the like, unless you are keen to market your investment skills to other advisers.
In the absence of any database, advisory models’ performances cannot be checked independently and exposed to wider scrutiny. Model portfolios are not funds, so do not have a fund price; the calculation of comparative and absolute performance can only be performed by choosing a sample portfolio and tracking its daily valuation.
However, a single advisory model portfolio will have hundreds of price histories associated with it, since portfolio changes have to be agreed with each client, and clients can be slow to respond, so the same switch is executed on different dates at different prices across the client base.
DFM sample portfolios should be more coherent, but investors will be making withdrawals, adding money and so on. This makes time- or money-weighted portfolio performance measurement more complex to calculate, and probably impenetrable for the investor. These vagaries mean a ‘clone’ portfolio has to be built (or often simulated) in order to maintain a performance record.
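The distinction above between time- and money-weighted returns can be made concrete. A minimal sketch, using invented valuations and a single hypothetical client deposit: time-weighted return chains the growth factors of the sub-periods between cash flows, so the result reflects the manager's decisions rather than the timing of the client's money.

```python
# Hypothetical sketch of a time-weighted return (TWR) calculation.
# All figures are invented for illustration, not taken from any real portfolio.

def time_weighted_return(valuations, flows):
    """valuations: portfolio value at the start, immediately BEFORE each
    cash flow, and at the end of the period.
    flows: cash in (+) or out (-) at each interior valuation point."""
    twr = 1.0
    start = valuations[0]
    for i, flow in enumerate(flows):
        end_before_flow = valuations[i + 1]
        twr *= end_before_flow / start   # growth factor for this sub-period
        start = end_before_flow + flow   # new base once the flow lands
    twr *= valuations[-1] / start        # final sub-period
    return twr - 1.0

# Portfolio starts at 100,000, is worth 104,000 when the client adds 10,000,
# and ends the period at 119,700: 4% growth, then 5% growth.
r = time_weighted_return([100_000, 104_000, 119_700], [10_000])
print(f"{r:.2%}")  # 9.20% regardless of when the deposit arrived
```

A money-weighted (internal rate of return) figure for the same history would differ, because it weights the second sub-period by the larger capital base; that divergence is exactly what makes the raw numbers impenetrable for the investor.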
Software such as FE Analytics can be used to construct this. However, the clone portfolio needs accurate data on rebalancing dates, fund changes and so on.
Devil in the detail
Even then, inaccuracies prevail because sales and repurchases of funds have a settlement period that can vary by four days or more. The software assumes switches occur on the same day.
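The size of that settlement error is easy to illustrate. A hypothetical sketch, with invented prices: proceeds from a sale settle four days later, by which time the fund being bought has moved, so the clone's assumed same-day switch books more units than the investor actually receives.

```python
# Hypothetical illustration of settlement-lag error in a clone portfolio.
# Prices and amounts are invented for illustration only.

proceeds = 10_000.0     # cash raised by selling fund A on the switch date
fund_b_day0 = 2.00      # fund B price on the switch date (software's assumption)
fund_b_day4 = 2.04      # fund B price four days later, when proceeds settle

units_assumed = proceeds / fund_b_day0  # same-day switch, as the software models it
units_actual = proceeds / fund_b_day4   # units actually bought after settlement

overstatement = units_assumed / units_actual - 1
print(f"clone overstates the fund B holding by {overstatement:.1%}")
```

A 2 per cent price move over the settlement window feeds straight through as a 2 per cent error in the clone's holding, which is why clone portfolios are indicators rather than records.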
It has been said that while all models are wrong, they can be measured on a scale of usefulness. Clone portfolios are the nearest thing to an indicator, given the error margins will be similar across the cohort of portfolios you might compare.
Clients have no access to a wider universe of models against which to compare, and the issue is compounded by the inconsistent use of benchmarks. Most model portfolios are merely risk labelled (as opposed to risk targeted) and have no specific objective, so the lack of a common benchmark makes things more confused. Benchmarks can be selected to present the portfolio in the best light, and changed with impunity.
Investors in advisory and discretionary model portfolios have no opportunity to compare and contrast. They are slaves to the whims of the portfolio manager.
The FCA’s asset management market study seeks to prescribe new rules for asset managers regarding better-articulated objectives, defined and immutable benchmarks, full transparency on portfolio turnover and associated transaction charges, and more. The platform study may persuade the regulator to impose the same rules on model portfolio managers.
Graham Bentley is managing director of investment consultancy gbi2