Accuracy, transparency, objectivity: just some of the themes no doubt in the minds of executives at Moody's when it decided to embark on another review of the methodology underpinning its sovereign credit ratings, the results of which have just been published.
Such exercises have been undertaken previously, in September 2008 and February 2010, but to limited effect. This one took on board users' comments.
Addressing criticism of its approach in the wake of the 2008 global crisis, Moody's has revealed more information about how it assigns relative weights to each of the main bond default indicators; assuming, that is, the agencies are predicting a default at all.
Moody's scorecard approach to credit risk seems to be moving in the direction of Euromoney's Country Risk Survey, which includes, among other information, an evaluation by country experts, scoring 15 of the most important economic, political and structural indicators believed to fit a picture of creditworthiness.
Moody's notes its scorecard revision includes greater granularity to the scores and, more specifically, new sub-factors, changes to score combinations, a greater weight attached to economic growth and event risk (assuming that can be mapped at all) and, crucially, a scorecard-indicated rating range.
No longer, it seems, is Moody's satisfied with providing a single rating. It wants to build in a degree of confidence, or a range of error, around its assessment, just to reassure its users, in the form of a three-notch alpha-numeric rating range. It adds, perhaps less reassuringly, that the range applies in most cases.
The argument proposed by Moody's, that its revised methodology is a refinement, a means toward greater openness and a way to improve the forward-looking nature of its output, is surely welcome. Who wouldn't argue that greater scientific rigour is an appropriate path forward?
Surprisingly, however, the new approach has not led to any ratings changes. In other words, either Moody's was right all along, obviating the need for any methodological change in the first place (with the revisions so small, say, as to have no meaningful effect), or it has failed to apply the maths correctly. Still, it helps to know what it is looking for.
Yet here's the rub.
The ratings are not purely quantitative; a degree of qualitative judgement is used. That isn't especially damning per se, or revelatory as it happens. Economists are known to tweak their models based on gut instinct; the rating agencies do so too, and the ECR approach similarly builds in judgemental elements.
Plus, few would argue that Moody's four main pillars (economic strength, institutional strength, fiscal strength and susceptibility to event risk) should not be the principal focus of default-risk analysis.
However, its analyst-based adjustment factors (the qualitative bit) amount to fiddling, whichever way you look at it, implying that its model is useful at times, but not at others.
In any event, what do these judgements consist of and how are they made? Moody's has not made this clear, other than stating its new model allows analysts to rely on undisclosed adjustment factors at their discretion, rather than leveraging the mathematical pretensions of the model.
Leaving aside the question as to whether historical ratings can be compared with those based on the new methodology, what should concern investors is the vagueness of these model adjustments and whether Moody's is tying itself in knots in an attempt to justify its record.
The rating agency has indicated that additional economic adjustment factors might be included; it just hasn't isolated them yet. So what use is the current model in the first place? And why does Moody's react more slowly and then in larger, discrete jumps? It seems Moody's will follow its own methodology when it suits, but react differently, and with a lack of clarity, when risk aversion intensifies.
Of course, Moody's-bashing is not an exclusive sport, nor always justified. It was the first agency to begin the methodological introspection, when its top brass were anxious to improve transparency (for that, read trust) in the wake of Argentina's 2002 default.
Its approach was new and original at the time, and the other agencies have made mistakes along the way. Indeed, to be fair, Moody's called Mexico correctly in the mid-1990s when it was convinced of a liquidity/currency crisis, but not a default, unlike S&P, which was tempted into a downgrade.
However, all that changed in the choppy wake of 2008.
Statistical analysis by Norbert Gaillard, an economist and independent consultant (see 'Credit rating agencies and the eurozone crisis: What is the value of sovereign ratings?'), shows that the Greek crisis was particularly problematic for Moody's.
His computed accuracy ratios reveal that all of the rating agencies were slow to downgrade Greece before its debt restructuring (implied default) in February 2012, compared with market perceptions measured by credit default swap spreads, a phenomenon he terms the 'sticky sovereign rating syndrome'.
However, unlike S&P, which was more pessimistic and in line with Greek CDS, Moody's was the last to move: it had the lowest accuracy ratio.
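For readers unfamiliar with the metric: an accuracy ratio, in its standard form, is a Gini-style statistic (2 × AUC − 1) measuring how well risk scores or ratings rank eventual defaulters ahead of non-defaulters; a perfect ranking scores 1, a random one 0. A minimal sketch of that standard calculation (not necessarily Gaillard's exact implementation, and with purely illustrative data):

```python
def accuracy_ratio(risk_scores, defaults):
    """Gini-style accuracy ratio: 2*AUC - 1.

    risk_scores: higher value = deemed riskier (e.g. a numeric rating scale).
    defaults: 1 if the obligor subsequently defaulted, 0 otherwise.
    """
    pos = [s for s, d in zip(risk_scores, defaults) if d]       # defaulters
    neg = [s for s, d in zip(risk_scores, defaults) if not d]   # non-defaulters
    # Count defaulter/non-defaulter pairs ranked correctly (ties count half)
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    return 2 * auc - 1

# Illustrative only: a rater who scored both eventual defaulters riskiest
print(accuracy_ratio([5, 4, 3, 2, 1], [1, 1, 0, 0, 0]))  # 1.0 (perfect)
print(accuracy_ratio([1, 2, 3, 4, 5], [1, 1, 0, 0, 0]))  # -1.0 (inverted)
```

In this framing, "lowest accuracy ratio" means Moody's ranking of sovereigns discriminated least well between the eventual defaulter (Greece) and the rest, relative to its peers.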
In a conversation with Euromoney, Gaillard, author of A Century of Sovereign Ratings, said: "Moody's played a strange game, not in the interest of investors.
"S&P did what was expected. It altered its rating because Greece was riskier. But Moody's did not. It refused to downgrade, because by doing so it would have exacerbated the crisis. Being the last agency to move would have triggered the crisis, ie the ECB not accepting Greek sovereign bonds."
Moody's attempt to refine and update its practice is undoubtedly welcome. It's just that, to date, its efforts raise more questions than answers.
The author, Jeremy Weltman, is an economist for Euromoney Country Risk.