In my May column on generative AI use cases and the cybersecurity risks they could pose, CISOs told me their organizations hadn't deployed many (if any) generative AI-based solutions at scale.
What a difference a few months makes. Now, generative AI use has infiltrated the enterprise with tools and platforms like OpenAI’s ChatGPT / DALL-E, Anthropic’s Claude.ai, Stable Diffusion, and others in ways both expected and unexpected.
In a recent post, McKinsey noted that generative AI is expected to have a "significant impact across all industry sectors." As an example, the consultancy estimates the technology could add $200 billion to $400 billion in annual value to the banking industry if full implementation moves ahead on various use cases. The potential value across a broader range of institutions could be enormous. But while some organizations look to incorporate generative AI to strengthen efficiency and productivity, others worry about controlling its use within the enterprise.
Generative AI on the loose in enterprises
I contacted a range of leading CIOs, CISOs, and cybersecurity experts across industries to get their take on the recent surge in unmanaged use of generative AI in company operations. Here's what I learned.
Organizations are seeing a dramatic rise in informal adoption of gen AI – tools and platforms used without official sanction. Employees are using it to develop software, write code, create content, and prepare sales and marketing plans. A few months ago, monitoring these unsanctioned uses was not on a CISO's to-do list. Today it is, because these tools create a murky new risk and attack surface to defend against.
One cybersecurity expert told me, “Companies are unprepared for the influx of AI-based products today – from a people, process, or technology perspective. Furthermore, heightening the issue is that a lot of the adoption of AI is not visible at the product level but at a contractual level. There are no regulations around disclosure of ‘AI Inside.’”