Finding a more effective method of protecting banks and companies against cyberattack has become a pressing issue.
Andrew Davies, vice-president of global market strategy, financial crime risk management at fintech provider Fiserv, says: “There has been a 61% increase in attacks on accounts in the US. It has become a major talking point.”
Alongside this comes the threat of attacks in real time. As systems and those attacking them become more sophisticated, so too must the lines of defence.
Size of the problem
The vulnerability of various points across the banking network may mean some banks have to put in extra resources, in accordance with their place in the value chain.
Davies says: “Central banks are operating with the same technology that is being used by the commercial banks. But, really, they should be working on a Doomsday scenario. If liquidity in the markets is compromised, then money cannot move around.”
The pool of groups involved with tackling crime and regulatory issues also needs to expand beyond the financial services, says Davies.
“Corporates are still waking up to the problems they face,” he says. “Big corporates have hundreds of bank relationships and a central funding location to manage. They face very complex issues.”
The use of machine learning and artificial intelligence (AI) is being touted as the solution to handle the vast amounts of data generated, especially when information is being shared as part of a collaborative approach. But regulations need to catch up.
Matt Armstrong-Barnes, chief technologist, AI, at Hewlett Packard Enterprise, says: “Collaborations are becoming a successful route for developing financial services, and AI is a key technology to bridge the gap across industries. Existing regulation faces a significant challenge because of the new markets that AI will open up.”
The benefits of AI do not lie in trying to replace all banking operations, but in removing some of the most labour-intensive operations.
Armstrong-Barnes says: “In the short term, AI will not replace core banking processes, but it will take over some of the most labour-intensive operations.”
Much of that data will come from historical records, which can be mined to uncover patterns of behaviour.
Armstrong-Barnes says: “Financial service organizations are required to record their customer interactions. AI offers a new way that compliance can be demonstrated to the regulator, not only on current interactions, but also historical ones.”
Tool for accuracy
It is this ability to utilise the most complex of data pools that could lead to AI finding its strongest use case.
Says Armstrong-Barnes: “AI can be used to look at historic data and use what is learnt to predict future behaviour, thus it is a fantastic tool for spotting fraudulent activities, or for increasing the accuracy with which they are predicted.”
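The idea of learning from historic data to flag future anomalies can be sketched very simply. The code below is a minimal illustration, not a description of any system Fiserv or Hewlett Packard Enterprise actually runs: it learns an account's baseline from past transaction amounts and scores new transactions by how far they deviate from that norm. The function names and the three-standard-deviation threshold are illustrative assumptions.

```python
from statistics import mean, stdev

def build_profile(amounts):
    # Learn a simple baseline from an account's historical transaction amounts.
    return {"mean": mean(amounts), "stdev": stdev(amounts)}

def fraud_score(profile, amount):
    # z-score: how many standard deviations this amount sits from the norm.
    if profile["stdev"] == 0:
        return 0.0
    return abs(amount - profile["mean"]) / profile["stdev"]

def is_suspicious(profile, amount, threshold=3.0):
    # Flag transactions far outside the learned pattern (threshold is arbitrary).
    return fraud_score(profile, amount) >= threshold

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]  # hypothetical past transactions
profile = build_profile(history)
print(is_suspicious(profile, 50.0))   # a typical amount → False
print(is_suspicious(profile, 900.0))  # far outside the pattern → True
```

Real fraud models would of course learn from many more signals than amounts alone, but the principle is the same: the historical record defines "normal", and the system scores departures from it.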
While the systems are working through the data pools of archive material, the operators can focus their attention on the issues affecting the industry in the here and now, explains Armstrong-Barnes.
“Through the use of AI,” he says, “it is possible to free up people’s valuable time and allow them to focus where and when it is needed in the explosion of digital data. AI can be used to help build insights that previously were unattainable.”
For attacks that are happening now, though, people are still necessary for examining the data and understanding the human behaviour that has taken place.
The assumption has to be that if there has been a breach, then the first layer of protection has been penetrated, and it can take a human eye to work out what is going on.
Says Fiserv’s Davies: “The data breach detection software being used has to be better and has to be supplemented with behavioural monitoring in order to provide an effective layered approach to detection.”
Armstrong-Barnes too says that experts monitoring systems are a vital final step.
“The regulator will not accept ‘the computer told me to do it’ as an answer,” he says, “so human beings need to be involved in the decision-making process.”
The UK-based security think-tank Royal United Services Institute noted at the end of 2017 that information sharing has become a core part of cyber defence.
It called for a wider range of institutions to share information, saying there was a need to “include smaller, regional and domestic banks, as well as those organizations that champion privacy and data protection, if we are to ensure that the benefits are recognized and shared by all stakeholders and not merely by those within this echo chamber.”
The arrival of AI is encouraging this push for more collaboration.
Says Armstrong-Barnes: “Financial institutions need to make the right choice on how they are implementing AI. The AI use-cases for financial services are so broad that no organization can do it on their own; working with an ecosystem of partners is essential.”