ChatGPT has been celebrated for its ability to generate human-like text, but concerns about data privacy and accuracy have tempered the hype, and many companies have already prohibited its use in corporate settings. ChatGPT may not be ready for use in compliance functions, where security and reliability are essential, but it is just one of many language AI-enabled solutions that compliance professionals should be enthusiastic about. This technology will ultimately offer real opportunities to improve the effectiveness of compliance functions, provided its limitations are carefully taken into account. As new language AI-enabled solutions are developed and introduced securely into corporate environments, compliance professionals will see tools that can generate human-like text and effectively analyze large sets of text-heavy data, including policies, emails, and instant messages, helping with everyday compliance activities in a meaningful way.
Language AI-enabled solutions have the potential to automate many of the highly manual and inefficient activities that compliance professionals grapple with daily. They are particularly useful for routine compliance tasks, such as writing initial drafts of documents or analyzing emails for potential policy or regulatory concerns, allowing compliance professionals to focus on more analytically oriented activities (for example, risk assessments). However, language AI-enabled solutions cannot "think" the way a human does, and they should not replace the judgment of qualified compliance professionals.
The use cases in compliance for language AI-based solutions are exciting, but in a highly regulated and technically complex field like compliance, there are important limitations to consider. Specifically, these solutions sometimes provide inaccurate, superficial, or incomplete responses, or even give different responses to the same question, making it unwise to rely on their answers without stringent oversight. In addition, the risks of misuse and of data privacy breaches are serious barriers to implementation.
Compliance professionals today should be excited by the potential of language AI-based solutions, but they must remain cautious about what information is shared with these tools and how the tools are governed within the bank. Language AI-enabled tools have the potential to help compliance professionals, but they are best suited to specific use cases rather than a one-size-fits-all approach.
• Language AI-enabled tools can be helpful in generating drafts (such as procedures or training materials) or in summarizing and analyzing large volumes of text data to jump-start compliance processes
• Language AI-enabled tools cannot be relied upon for decisions and cannot replace people in compliance functions. Compliance professionals must exercise appropriate oversight and develop governance for responsible AI
• Using language AI-enabled tools for compliance processes will not eliminate existing tools and practices any time soon. Language AI-enabled tools could be useful for prioritizing work alongside the technologies already in place to manage risks, but they should not be relied on fully since they remain error-prone