The publication of APRA’s inaugural performance test results has proved confronting, not only for funds that failed the test but also for their members. Anomalies and mixed messages abound, and as the lessons sink in, we’re seeing funds trying to balance the long-term interests of their members against the shorter-term focus on passing the test. While we appreciate the policy intent to remove underperforming funds, we have some concerns about the implementation.
Outcomes of the 2021 performance test
In this year’s test, APRA analysed the performance of 80 MySuper products. Each fund was assessed against a benchmark portfolio with a matching asset allocation, using broad market indices for each asset sector. Returns were measured net of tax and administration fees, using fees from the most recent financial year.
To pass the test, funds had to be within 50bp of the benchmark return, adjusted for tax, less the median administration fee. Thirteen funds failed. As a consequence, those funds have had to write to all members, informing them that they’re in an underperforming fund and suggesting they consider an alternative. Funds are also required to direct members to the ATO’s YourSuper comparison tool for further information, which is in itself problematic because the tool shows only funds’ raw performance, with no indication of the level of risk involved. It also shows a range of returns for lifecycle products, with no indication of where in that range the enquiring member would sit.
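To make the mechanics described above concrete, here is a minimal sketch of the pass/fail logic. This is an illustration of the description in this article, not APRA’s exact methodology; the function name and all figures are hypothetical assumptions.

```python
# Illustrative sketch of the performance-test pass/fail logic described
# above. All figures are invented for the example; this is not APRA's
# actual calculation.

def passes_test(fund_return, benchmark_return, admin_fee, median_admin_fee,
                threshold=-0.0050):
    """Return True if the fund is within 50bp of its benchmark.

    fund_return and benchmark_return are net-of-tax annualised returns
    over the lookback period; fees are annual rates (0.0020 = 20bp).
    Per the description above, the fund's administration fee is measured
    against the median administration fee.
    """
    margin = (fund_return - benchmark_return) - (admin_fee - median_admin_fee)
    return margin >= threshold

# A fund 40bp behind its benchmark with a median-level fee passes;
# the same fund charging 30bp above the median fee fails.
print(passes_test(0.070, 0.074, 0.0020, 0.0020))  # True  (-40bp margin)
print(passes_test(0.070, 0.074, 0.0050, 0.0020))  # False (-70bp margin)
```

The second call shows how sensitive the outcome is to the fee comparison: a fund can sit inside the 50bp performance band yet still fail once an above-median administration fee is counted against it.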
We’re yet to see the full impact of the ‘underperforming fund’ letters, but there’s already plenty of anecdotal evidence that they’re causing uncertainty and even alarm. It’s reasonable to assume that some members, on receiving the letter, will take action and switch funds. At worst, which is possible but unlikely, a large outflow of members and assets may force a significant change in the fund’s investment strategy, with short-term liquidity becoming a priority at the expense of long-term performance. If that occurs, the remaining members will suffer and the fund would be set up for a vicious circle of ongoing underperformance, failure of the next performance test and likely extinction. While transfers out of these funds have not yet reached those levels (about 7% of members and 4% of assets, according to APRA), APRA is actively encouraging more members to transfer out of these funds, so there may be more to come.
Technical and timing issues contributed to some failures
The reasons the 13 funds failed the test are many and varied. Some were inherently sub-standard and clearly destined to fail by any objective measure. Others failed because of the weight of history, in that their recent performance was not quite good enough to overcome weakness earlier in the seven-year ‘lookback’ period. The test is very much a blunt instrument, so the fact that previous underperformance might have been attributable to a different investment team and/or investment strategy is not taken into account.
Other funds were seeking to do the right thing by their members but were simply unlucky in their timing. There were those, for example, that put in place protection strategies due to their older member base, as the long-term bull market (interrupted only by the short ‘blip’ of COVID) looked to be nearing a correction point. Arguably that was the right thing to do for their members, but that protection came at a cost and being too conservative – or perhaps being conservative too soon – didn’t stand up as an excuse when the test numbers were crunched. Technical issues came into play, too, including:
- defensive alternatives, which some funds turned to as more attractive than conventional cash and bonds, were classified by APRA as 50% growth, which was an unattainable and inappropriate benchmark target for these assets
- real return multi-asset strategies were treated by APRA as 50% equities, despite them being much lower risk in reality, so again the actual return fell well short of the benchmark
- some funds took a conservative approach within asset sectors, eg focusing on defensive equities (mainly for the more conservative lifestages) which nonetheless were treated the same as other equities in terms of the benchmark return, and
- ‘rack rate’ administration fees were used, when in practice the major corporate master trusts charged much lower fees for medium and large employers.
These technical issues, coupled with share markets having performed strongly over the seven years to June 2021, mean that the performance test has punished some funds that have built investment strategies aimed at providing their members with a smoother return journey.
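The growth/defensive classification issues listed above can be illustrated with a simple hedged sketch. The index returns below are invented for the example (not actual market figures), but they show how benchmarking a genuinely defensive asset as 50% growth sets a hurdle it was never designed to clear.

```python
# Hypothetical illustration of the classification issue described above.
# Index returns are assumptions for the example, not actual market data.

GROWTH_INDEX_RETURN = 0.09     # assumed broad equity index return
DEFENSIVE_INDEX_RETURN = 0.03  # assumed cash/bond index return

def benchmark_return(growth_weight):
    """Benchmark return implied by a given growth/defensive weighting."""
    return (growth_weight * GROWTH_INDEX_RETURN
            + (1 - growth_weight) * DEFENSIVE_INDEX_RETURN)

# A defensive-alternatives allocation behaving like ~10% growth in
# practice can reasonably be expected to earn about this much...
actual_hurdle = benchmark_return(0.10)   # 3.6%
# ...but once classified as 50% growth, it is measured against this:
test_hurdle = benchmark_return(0.50)     # 6.0%

print(f"classification gap: {test_hurdle - actual_hurdle:.2%}")
```

Under these assumed figures, the classification alone puts the asset roughly 2.4% a year behind its benchmark before any performance is measured, which is the kind of built-in shortfall the funds above faced.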
Whatever the explanation, we believe there were funds that failed the test that deserved to pass and conversely, funds that passed the test that probably deserved to fail. And with the test results being made so public, we now need to observe what impact they have on funds that failed, and on their members.
Weighing it all up
It would have been near-impossible for Treasury and APRA to come up with a perfect test – one that identifies the truly underperforming funds while not penalising a few other funds that are doing a good job for their members. We need to judge whether, on balance, the process and the outcomes are broadly acceptable. But how much collateral damage is too much?
Some of the technical issues can, hopefully, be cleared up through further dialogue between APRA and the industry. Our main focus, therefore, should be on whether the test is fit for purpose and likely to achieve its objective.
While the 13 failures (generally smaller funds) amount to 16% of MySuper products, they account for just 6% of total assets and 7% of total members. To that extent, it can be argued that the exercise has achieved its objective of ‘addressing underperformance’ by identifying small, uneconomic funds that really deserve to be wound up or merged.
We still have several serious reservations. Firstly, it’s already evident that the test is influencing how funds behave, and in ways that are not in members’ best interests. The test encourages short-termism and managing towards a benchmark, which is a distraction from the main game of maximising long-term outcomes for members. It also stifles innovation in the use of some asset sectors, especially those that have ‘unfriendly’ benchmarks.
Above all, the test fails to measure the most important thing, which is whether the fund is pursuing an investment strategy (ie asset allocation) that’s appropriate for its members. By taking those crucial asset allocation decisions out of the picture and simply focusing on implementation, it ignores the most potent source of added value. The consequence is that funds can deliver great outcomes for their members but still fail the test, as may be the case for some well-constructed lifecycle products. Meanwhile other funds may deliver inferior outcomes for their members, compared with some of those that fail the test, but they still pass.
Because the test publicly discredits funds that fail, it’s harder for those funds to ‘right the ship’ given the inevitable leakage of members and assets (which is already occurring, according to APRA). At a broader level, it diminishes confidence in the super system as a whole and gives ammunition to those who seek to undermine it.
It would be far preferable for APRA to engage in a rigorous review process with those funds that fail the test once – a ‘first strike’ – and only to name them once it becomes clear that their shortcomings are entrenched and unlikely to be rectified, and only when there’s a clear plan in place for the fund to exit the system in an orderly manner.
As the initial performance test has been completed, it’s highly unlikely there will be wholesale changes to the underlying methodology and consequences of the performance test. The focus of Chant West, and many others in the industry, has rightly turned to trying to make a flawed test a little better and reduce the unintended consequences.