AI and ESG: the new trend in climate reporting
The use of AI for ESG reporting and assessments is spreading, and regulators can’t keep up. Lenders need to factor in a new set of governance risks that are hard to identify.
Artificial intelligence (AI) is the new buzzword in environmental, social and governance (ESG) reporting. So much so, in fact, that the term is misused to characterise anything that goes beyond the collection and treatment of traditional ESG data.
But as disclosure rules become more granular, corporates and their financial partners are looking both at alternative data sources and at AI technologies as ways of enhancing extra-financial reporting.
For the financial sector, AI’s main selling point is efficiency. Machine learning identifies patterns in mass data to make predictions. Natural language processing (NLP) gives computers the ability to understand and respond to textual and voice data. Together, they could automate lengthy ESG assessment processes.
Alternative data (‘alt data’), on the other hand, which includes sources such as satellite imaging, has the capacity to address the thorny issue of missing climate data while potentially improving climate risk estimates.
Data-savvy financial institutions are tapping into these emerging resources to improve their assessment of corporate sustainability credentials for investment and lending decisions.
The question is whether they understand all the new risks. One of the biggest liabilities of AI-driven ESG reporting isn’t what users are doing with the technology. Rather, it is whether those users understand what it does not do.
AI and alt data could complement the work done by ratings agencies that study and classify companies according to their ESG credentials.
“The difference in ESG rating methodologies brings up a governance question, which drives this interest and enthusiasm for alternative data and alternative technologies,” says Marie Brière, head of investor intelligence and academic partnerships at Amundi Institute.
AI is not a revolution, but it is a way to have more sophisticated analytical methodologies
The French asset manager has several research initiatives underway, but the use of AI and alternative data is not currently standardised across all portfolio-management decision making.
“AI is not a revolution, but it is a way to have more sophisticated analytical methodologies,” Brière adds.
Questions of methodological integrity have preoccupied ESG scorers for years now. ESG ratings are notoriously uncorrelated because agencies use different methodologies to assess firms and there is no standard way to deal with missing ESG data. They are also only reviewed periodically.
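That divergence is easy to quantify. As a purely illustrative sketch (the scores and agencies below are invented, and real providers use far richer methodologies), the rank correlation between two agencies scoring the same firms can be computed from scratch:

```python
# Illustrative only: two hypothetical agencies score the same five firms.
# The scores are invented to show how differing methodologies can produce
# weakly correlated ratings of identical companies.

def ranks(xs):
    """Rank values from 1 (lowest) to n; no tie handling, for simplicity."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(a, b):
    """Spearman rank correlation via the classic d-squared formula (no ties)."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    d2 = sum((x - y) ** 2 for x, y in zip(ra, rb))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

agency_a = [72, 55, 88, 40, 63]   # e.g. an emissions-weighted methodology
agency_b = [58, 70, 61, 52, 85]   # e.g. a disclosure-weighted methodology

print(f"rank correlation: {spearman(agency_a, agency_b):.2f}")  # prints 0.20
```

A correlation of 0.20 between two scorings of the same firms is the kind of disagreement that fuels the interest in alternative data Brière describes.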
As AI tools become more readily available, agencies and financial services companies speculate that it will become easier to evaluate larger numbers of companies more quickly and on a rolling basis.
“ESG data is hard to exploit,” writes Carmine de Franco, head of research and ESG at Ossiam, a subsidiary of French banking group Natixis. “An analyst will be better at it than an AI if they are covering a dozen companies, but the algorithm will work more efficiently when it comes to analysing millions of companies.”
Knowledge is power
In ESG reporting, data blind spots are a weakness for companies and their investors. Having access to AI tools may bring a competitive advantage.
“The most important [thing] is your raw material, which is data,” says Pierre-Olivier Haye, chief technology officer at Iceberg data lab, a French environmental data provider. “Any technology that accelerates data extraction is beneficial.”
For him, AI technologies like NLP can create a virtuous circle in climate reporting by encouraging companies to think about what the technology will be asking of them.
“If I’m [a corporate social responsibility] manager of a big company and I have access to such tools, one of the first things I’d want to do is input my competitors’ annual or ESG reports to know how they are answering key investor questions, and to know how I could do just a little bit better,” he adds.
AI also widens the scope of raw information that can be fed into ESG assessments undertaken by investors. One example is using NLP to filter media references to a company for potential controversies and to assess the company’s public reputation against how it likes to portray itself.
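As a toy stand-in for that kind of screening (real systems rely on trained language models, and the headlines and keyword list here are invented), the idea of flagging media mentions for review can be sketched as a simple keyword scan:

```python
# Toy stand-in for NLP-based controversy screening. Real systems use
# trained language models; this keyword scan only illustrates the idea
# of flagging a company's media mentions for analyst review.
# The headlines and risk terms below are invented for the example.

CONTROVERSY_TERMS = {"spill", "fine", "lawsuit", "greenwashing", "violation"}

def flag_controversies(company, headlines):
    """Return headlines that mention the company alongside a risk term."""
    hits = []
    for h in headlines:
        words = set(h.lower().split())
        if company.lower() in words and words & CONTROVERSY_TERMS:
            hits.append(h)
    return hits

headlines = [
    "Acme reports record renewables output",
    "Regulator weighs fine against Acme over emissions violation",
    "Acme sustainability report wins award",
]
print(flag_controversies("Acme", headlines))
# prints ['Regulator weighs fine against Acme over emissions violation']
```

The NLP tools the interviewees describe go much further, scoring tone and topic relevance rather than matching words, but the pipeline shape is the same: ingest coverage, flag candidates, surface them to a human.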
“We can look at how the company communicates on ESG and if it deals with the topics that are relevant to its business model,” Brière says. “This can improve our critical analysis of the ways companies communicate on ESG.”
There is a huge risk that companies and their investors rushing to meet disclosure regulation start using a technology that they don’t understand
And for any ESG information that companies don’t report on, AI tools and alt data providers are getting better at making estimates.
“We work a lot on the topic of physical risks, particularly the impact of weather changes,” Brière says. “Having these types of alt data really helps us to understand exactly what the companies are exposed to.”
This raises the question of whether companies have begun to anticipate the use of AI and alt data by their investors and to change their reporting style as a result.
As the corporate landscape becomes more familiar with disclosure regulations, new service providers offering automated ESG reporting solutions that are compliant with the EU's Corporate Sustainability Reporting Directive (CSRD) or other frameworks are coming to market.
Typically, extra-financial reporting falls under the work of CSR teams who won’t necessarily look at ESG compliance through a financial lens. But that is now changing.
“With the arrival of CSRD, we’ve started engaging more with the CFOs who are concerned about what kind of information investors will be expecting in the reports,” says Matthieu Renard, climate change and sustainability services consultant at EY.
It might be too soon for companies to start reacting to this trend, however.
“They are not yet seeing the need to adapt their reporting to these algorithms as a priority, given the slow uptake of such tools for the moment,” Renard adds.
Proceed with caution
AI and alt data sourcing technologies are advancing so rapidly that regulators have a hard time catching up. As a result, there are governance risks that financiers have little visibility over.
“This is about how the world exploits large data,” says David Duffy, chief executive and co-founder of the Corporate Governance Institute (CGI). “There is a huge risk that companies and their investors rushing to meet disclosure regulation start using a technology that they don’t understand.”
There is the familiar question of data quality and the risk of poor-quality inputs generating faulty outputs – the so-called 'garbage in, garbage out' problem. With modelling for example, it is unclear if estimates generated to make up for missing climate data can be considered of the same value as traditional data.
“When you use an innovative and sophisticated solution to gather information, it can feel like the measurements will be very precise, but actually the margin of error can be huge,” says Duffy.
Alt data can be a useful indicator of how a corporate is performing against its peer group, but putting that information to practical use in financial decision making is far more complicated, because so many variables can influence the results of the estimation.
This governance risk is forcing lenders to keep a close eye on how AI and alt data are spreading across core sectors, as well as within their own operations.
“What we want is to get a view of the consequences of using the technology, what are its limits, what are the risks, and what this integration within business practices means,” says Yannick Ouaknine, head of sustainability research at Societe Generale.
The bank has a data and AI strategy in place as part of its digital transformation. The group’s portfolio has over 600 use cases in production, with an expected value creation of €500 million by 2026.
Importantly, the myth that AI can work independently of the analysts themselves needs to be debunked.
“The algorithm works based on what input you give it, so human biases are transferred,” says Ouaknine.
In fact, as AI tools proliferate, the risk is that human biases will become systemic.
Financial institutions with access to these technologies will need to make sure that there are checks and balances in place before integrating these tools into investment or lending decision making.
“We shouldn’t lose sight of the technical and sector expertise needed to make investment decisions,” adds Ouaknine. “The math might be right, but there are other elements or variables to consider.”
He points to the potential ramifications of using AI without understanding it: “The users of these technologies have the responsibility to verify, to make sure that those biases are identified and that the risks are mitigated by recalibrating the machine.”
Users of AI must also remember that these are complementary tools, nothing more. Regulators can double down on ensuring that the quality of data in AI-assisted reports remains high, but if a poor investment decision results from AI-generated ESG assessments, the investor is still to blame.
“I believe that the danger of this kind of thing is letting the machine make the decision for you,” says Haye at Iceberg.