The Opportunities, Risks, And Future Of Generative AI

Mainstream use of generative artificial intelligence (AI) has arrived, and with it the promise of transformative potential for business. 

Generative AI is increasingly part of many individuals’ daily lives, speeding up personal tasks at home, at school, and at work. Businesses and large organizations see potential everywhere they look to transform complex and expensive processes and to tackle tasks that were out of practical reach until now. At the same time, although it is advancing rapidly, generative AI still has significant limitations in certain areas, and widespread adoption brings a host of risks. Companies need a clear understanding of the strengths and weaknesses of these tools, as well as the future opportunities and pitfalls they’ll create.

How businesses are using generative AI to boost productivity

For guidance in this ever-shifting landscape, on June 7 and 8, Oliver Wyman held Generative AI — Revolution Or Evolution?, a webinar where leading experts from multiple regions and practices across the firm offered their insights. The event — featuring remarks from Rainer Glaser, Julian Granger-Bevan, John Lester, Mario Rizk, Sian Townson, David Waller, Michael Zeltkevic, and Na Zhou — began with an explanation of how generative AI’s capabilities differ from those of older AI applications. Examples of the new wave of apps include the now-ubiquitous ChatGPT, a talented chatbot that helps users write, brainstorm, learn, and code; MidJourney, which creates high-quality images from a simple text description; and Synthesia, a platform that can produce professional videos with humanlike avatars.

Generative AI possesses the remarkable ability to interpret open-ended human commands, write, summarize, code, brainstorm, and remix any ideas or skills that humans have demonstrated on the internet over the last 20 years. The technology has found an immediate application in domains where people spend significant amounts of time reading and writing, aiming to streamline information gathering and synthesis. By harnessing generative AI, organizations seek to optimize productivity and revolutionize how information is processed and assembled.

The key limitations and risks of generative AI, according to Oliver Wyman’s experts

1. Generative AI has advanced rapidly and garnered tremendous public interest. New models can now generate content, not just make predictions or classifications. They can produce illustrations, essays, code, and more. These interactions feel increasingly humanlike, even though the models themselves lack human values, common sense, and true understanding. 

2. So far, companies have primarily been using generative AI to enhance individual productivity, such as drafting emails, organizing pitches, and searching documents. Widespread automation will take more time as companies adapt governance and workflows. Many companies are experimenting to better understand the technology's capabilities and limitations in context.

3. The remarkable ability of generative AI systems to produce and understand language fluently can give the mistaken impression that they possess humanlike skills, or even the reliability of conventional computer programs. In truth, current iterations have a host of crucial shortcomings. Their logic and recall can be flawed, for example, and their reasoning ability is prone to unexpected failure. The systems are also susceptible to “hallucinations,” outputting material that is factually incorrect but presented with a high level of confidence and polish. These issues may compound the already considerable risks associated with traditional AI systems, such as accountability and oversight, transparency, data privacy and security, and bias. The technology also heightens cyber risk, with bad actors using generative AI for voice cloning, deepfakes, and other techniques to penetrate information security defenses. Finally, generative AI tools may produce content that infringes on copyrighted source material, leaks confidential data, or proves defamatory. Companies must proceed carefully by proactively assessing risks and developing new governance approaches.

4. Regulations on AI are still developing and vary regionally. Oliver Wyman’s experts argued that regulators should recognize that having guardrails in place can foster innovation within companies. Guardrails, along with stronger monitoring, also become critical when AI is applied to sensitive areas such as employment, access to utilities and credit, and border control. The EU is setting an example with its particular focus on higher-risk uses of AI, developing regulations that set higher-level expectations rather than prescriptive metrics and bright lines, in recognition that fairness and other metrics depend heavily on context. Companies should monitor evolving regulations to ensure compliance and understand the operational implications.

5. Individuals and companies should start experimenting with generative AI but recognize that it is a journey. While bleeding-edge experimentation by researchers, entrepreneurs, and others continues to create a deluge of new models and techniques, enterprise adoption will be gradual. Companies’ early experiments are more about learning firsthand about this strange new technology than achieving a predictable return on investment (ROI). Still, companies that don't start controlled testing risk lagging behind others when widespread productivity gains eventually arrive. With prudent management of risks, generative AI can be tremendously empowering. But it also challenges existing governance and IT management frameworks, which may need to be redeveloped, and blurs the line between human and machine.

In summary, generative AI is poised for mainstream adoption if governance and responsible development can keep pace. But it will remain an iterative learning process for the foreseeable future. With due diligence and an open, yet cautious mindset, individuals and companies alike can benefit from this promising but perilous new frontier.