How CIOs navigate generative AI in the enterprise

In its infancy, gen AI is already transforming organizations and profoundly impacting IT strategies. But while large language models (LLMs) accelerate engineering agility, they also open the floodgates to unprecedented accumulation of technical debt. “Generative systems are likely to accelerate the amount of code that gets produced, so on that basis alone, technical debt will increase,” says Stephen O’Grady, principal analyst and co-founder at analyst firm RedMonk.

But this shouldn’t deter CIOs from exploring and implementing AI, adds Juan Perez, EVP and CIO at Salesforce. He views AI as just another application requiring the appropriate governance, security controls, maintenance and support, and lifecycle management. And since the number of AI products is increasing, he says, selecting the most suitable models and underlying data will be crucial to support an AI journey.

If implemented correctly, gen AI can be positioned to produce higher-quality products at a lower cost. “It’s not a question of if AI will positively affect the overall business, it’s a question of by how much and how fast,” says Neal Sample, CIO of Walgreens Boots Alliance. Yet he notes that both government regulation and corporate governance will be necessary to realize responsible AI development.

Gen AI: central to IT strategy

Machine learning models have the potential to unlock more rapid IT iteration. At the very least, they can automate the burden of mundane, repetitive tasks, freeing up software developers’ bandwidth to focus on more creative, higher-level operations, says Andrea Malagodi, CIO of Sonar, a code testing platform. “Investing in generative AI tools to support these teams is an investment in their growth, productivity, and general satisfaction,” he says.

Gen AI will dramatically accelerate development, especially code generation for well-established programming languages like Java, Python, and C++, adds Meerah Rajavel, CIO of Palo Alto Networks. But its power doesn’t end there. She sees AI as instrumental in shifting code testing left to assist with unit testing, debugging, and identifying misconfigurations earlier in the software development cycle. “As a CIO, providing our developers with the best tools to be successful is a critical component of the job, and AI will undoubtedly improve efficiency,” she says.

AI is also positioned to significantly advance operations across departments. For Carter Busse, CIO of no-code enabled automation platform company Workato, AI is at the center of his company’s IT strategy this year. But its benefits extend beyond the realm of IT, aiding areas such as customer support, increasing productivity, and driving cross-team innovation. “CIOs are tasked with helping grow the business efficiently, and AI is how we’ll do it moving forward,” he says.

So code generation isn’t the only area to benefit from the latest AI wave. According to Sunny Bedi, CIO and CDO of Snowflake, a cloud-based data-warehousing company, employee productivity will see the greatest impact. He foresees a future in which all employees work closely with an AI copilot to assist with actions like personalizing the onboarding experience for new hires, coordinating internal communication, and prototyping innovative ideas. By leveraging the out-of-the-box capabilities from LLMs, he adds, enterprises could also reduce third-party reliance for operations like search, document extraction, content creation and review, and chatbots.

How AI can contribute to technical debt

It isn’t the generative AI models themselves but how they’re applied in practice that will be the biggest determining factor in IT debt creation. “Where and how AI is implemented in organizations needs careful thought to avoid generating technical debt moving forward,” says Sample, adding that the risk of accumulating debt is higher when applying AI models to an existing technology ecosystem, such as reworking integrations and wiring gen AI models into an aging stack.

On the other hand, if used appropriately, gen AI could help eliminate old technical debt by rewriting legacy applications and automating a backlog of tasks. That said, CIOs shouldn’t jump in headfirst without the right cloud environment and strategy. “If organizations prematurely implement generative AI, existing technical debt may continue to grow or, in some cases, become chronic,” says Steve Watt, CIO at Hyland, developer of the OnBase enterprise management software suite. Therefore, he advises setting a plan to address existing technical debt so new AI-driven initiatives don’t crumble.

At first, companies might increase IT debt while experimenting with AI and LLMs. But Busse believes LLMs will decrease it in the long run, although this hinges on AI’s ability to respond dynamically to changing requirements. “With AI embedded into your business process, you’ll be able to adjust quicker to process changes, so less technical debt,” he says.

Assessing the quality of AI-made code

Questions have been raised recently about the quality of AI-generated code, with one report highlighting an uptick in code churn and code reuse since the advent of AI pair assistants. According to RedMonk’s O’Grady, the quality of code produced by AI will depend on many factors, including the model deployed, the use case at hand, and the developer’s skill set. “Just as with human developers, artificial systems do and will continue to output code with defects,” he says.

For example, Sonar’s Malagodi references a recent study from Microsoft Research that evaluated 22 models and found they generally falter when tested on its benchmark, hinting at fundamental blind spots in training setups. While artificial assistants can produce functional code, they don’t always go beyond functional correctness to consider other contexts like efficiency, security, and maintainability, not to mention adherence to code conventions, the report explains.
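The gap between "it runs" and "it's maintainable" is easy to illustrate. The following contrived Python sketch (not drawn from the report) shows two functionally equivalent ways an assistant might deduplicate a list: the first is correct but quadratic and opaque, the kind of output the study flags; the second is what a review for efficiency and readability would produce.

```python
# Two functionally equivalent ways to deduplicate a list while
# preserving order. An assistant may emit the first: it passes a
# functional test, but rescans the result on every iteration (O(n^2)).
def dedupe_naive(items):
    result = []
    for item in items:
        if item not in result:   # linear scan each time through the loop
            result.append(item)
    return result

# The reviewed version: same output, O(n), with a set tracking what
# has already been seen.
def dedupe_clean(items):
    seen = set()
    return [x for x in items if not (x in seen or seen.add(x))]
```

Both functions return the same answer for any input; only a review that looks past functional correctness, as the report urges, distinguishes them.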

The takeaway for Malagodi is there’s still ample room for improvement. “While generative AI can produce more lines of code more quickly, if it’s not good quality, it can become a time-consuming nuisance,” he says. He urges CIOs and CTOs to take the necessary steps to ensure AI-generated code is clean. “This means it’s consistent, intentional, adaptable, and responsible, which leads to secure, maintainable, reliable, and accessible software.”

Quality concerns at the root of these models could adversely affect code output. While gen AI has the potential to produce superior technical artifacts, the quality of data, the model architecture, and training procedures could all lead to subpar outcomes, says Alastair Pooley, CIO of Snow Software, a cloud technology intelligence platform. “Inadequately trained models or unforeseen edge cases may lead to lower quality outputs, posing operational risks and compromising system reliability,” he says. All of this necessitates continual review and verification of output quality.

AI is like any other tool, and the outcome depends on which tool you use and how you use it, adds Rajavel at Palo Alto Networks. To her, without the proper AI governance in place, your chosen model could create lower-quality artifacts that don’t adhere to the product’s architecture and intended outcomes. Another significant factor is which model you select for the job at hand, since no model is one-size-fits-all, she adds.

The laundry list of potential AI risks

Outside of IT debt and code quality, there’s a spectrum of potential adverse outcomes to consider when deploying gen AI. “These could be around data privacy and security, algorithmic biases, job displacement, and ethical quandaries regarding AI-generated content,” says Pooley.

One concern is how malicious actors might capitalize on gen AI to further their efforts. Rajavel notes that cybercriminals are already using the technology to execute attacks at scale, drafting convincing phishing campaigns and spreading disinformation. Attackers could also target gen AI tools and the models themselves, leading to data leakage or poisoned outputs.

“It’s possible that generative systems could accelerate and enable attackers,” says O’Grady. “Arguably, the biggest concern for many enterprises, however, is the exfiltration of private data from closed vendor systems.”

These technologies can produce very convincing results that are nonetheless riddled with inaccuracies. Beyond bugs within the models, there are also cost implications to consider: it’s easy to unknowingly or unnecessarily overspend on gen AI, whether by using the wrong models, lacking visibility into consumption costs, or not using the models effectively.

“AI is not without risk,” says Perez. “It needs to be built from the ground up with humans in control of the areas that ensure anyone can trust its outcomes — from the most basic user to the most experienced engineer.” Another open question for Perez is who owns AI development and maintenance. The technology is also putting pressure on IT teams to keep up with the demand for innovation, as many IT workers lack the time to implement and train AI models and algorithms.

The elephant in the room: employment

Then there’s the outcome that’s stirred up the mainstream media: the replacement of human labor by AI. But how gen AI will affect employment in IT groups has yet to be determined. “Impacts on employment are, at present, difficult to forecast, so that’s a potential concern,” says O’Grady.

While there’s undoubtedly a mix of opinions in this debate, Walgreens’ Sample doesn’t believe AI poses an existential threat to humanity. Instead, he’s optimistic about the potential for gen AI to improve the lives of employees. “The glass-half-empty viewpoint is AI will impact a lot of jobs, but the glass-half-full viewpoint is it’ll make humans better at what they do,” he says. “Ultimately, I think AI will eliminate people from having to do repetitive tasks, which can be automated, and allow them to focus on higher level jobs.”

How to soothe AI concerns

Responding to the deluge of concerns AI poses will take a manifold approach. For Perez, the quality of gen AI hinges on the data these models ingest. “If you want quality, trusted AI, you need quality, trusted data,” he says. The problem, however, is that data is often riddled with errors, requiring tooling to integrate unstructured data in disparate formats from various sources. He also stresses going beyond “human in the loop” approaches to put humans more firmly in the driver’s seat. “I see AI as a trusted advisor but not the sole decision maker,” he adds.

To uphold software quality, rigorous testing will also be required to check that AI-generated code is accurate and bug-free. To that end, Malagodi encourages companies to adopt a “clean as you code” approach that involves static analysis and unit testing to ensure proper quality checks. “When developers focus on clean code best practices, they can be confident their code and software is secure, maintainable, reliable, and accessible,” he says.
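As an illustration of what such a quality gate might look like, here is a minimal Python sketch; the `normalize_email` helper and its tests are hypothetical, standing in for any AI-generated function a "clean as you code" pipeline would check before merge.

```python
# Hypothetical AI-generated helper under review; the name and behavior
# are illustrative, not taken from any cited codebase.
def normalize_email(address: str) -> str:
    """Lowercase and strip whitespace so duplicate addresses compare equal."""
    return address.strip().lower()

# Under "clean as you code," new or generated functions ship with unit
# tests that run in CI alongside static analysis, so defects surface
# before the code reaches the main branch.
def test_strips_and_lowercases():
    assert normalize_email("  Jane@Example.COM ") == "jane@example.com"

def test_idempotent():
    once = normalize_email("jane@example.com")
    assert normalize_email(once) == once
```

Run with a test runner such as pytest; the point is less the specific checks than that generated code faces the same gate as human-written code.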

As with any new technology, adds Bedi, the initial enthusiasm needs to be tempered with proportionate caution. As such, IT leaders should consider steps to use AI assistants effectively, such as adopting observability tools that can detect architectural drift and help teams prepare for changing application requirements.

Applying governance around AI adoption

“Generative AI represents a new era in technological advancement with the potential to bring substantial benefits if properly managed,” says Pooley. However, he advises CIOs to balance innovation with the inherent risks. Controls and guidelines must especially be applied to limit data exposure through uncontrolled usage of these tools. “As with many technology opportunities, CIOs will find themselves accountable should it go wrong,” he adds.

For Sample, the onus partially lies on regulators to adequately address the risks AI poses to society. For instance, he references a recent executive order from the Biden administration to establish new AI safety and security standards. The other aspect is spearheading corporate guidelines to govern this fast-paced technology. Walgreens, for example, has embarked on a journey to define a governance framework around AI that includes considerations like fairness, transparency, security, and explainability, he says.

Busse at Workato similarly advocates for setting internal directives prioritizing security and governance as AI adoption accelerates. He advises educating employees with training, developing internal playbooks, and implementing an approval process for AI experimentation. Pooley notes that many firms have established an AI working group to help navigate the risks and harness the benefits of gen AI. Some security-aware organizations are taking even more stringent measures: to combat exfiltration, many buyers prioritize on-premises systems, adds O’Grady.

“CIOs should be leading the charge to ensure their teams have the right training and skills to identify, build, implement, and use generative AI in a way that benefits the organization,” says Perez. He describes how at Salesforce, product and engineering teams have implemented a trust layer between AI inputs and outputs to minimize the risks that come from using this powerful technology.

That said, being intentional with AI is just as important as governing it. “Organizations are rushing to implement AI without a clear understanding of what it does and how it’ll benefit their business the most,” says Hyland’s Watt. AI won’t fix every problem. So understanding the problems the technology can and can’t fix is fundamental to knowing how to maximize it, he says.

Positively impacting the business

With the proper checks in place, gen AI is set to catalyze greater agility across countless areas, and CIOs foresee it being used to realize tangible business outcomes, like improved user experiences. “Generative AI is going to allow companies to create experiences for their customers that once felt impossible,” says Perez. “AI is no longer just a tool for niche teams. Everyone will have opportunities to use it to be more productive and efficient.”

But UX benefits don’t end with external customers. Internal employee experience will benefit as well, adds Rajavel. AI copilots trained on internal data could cut IT ticket requests in half, he predicts, simply by instantly sourcing answers already found on internal company pages.

Walgreens is also improving customer experience with gen AI-driven voice assistants, chatbots, and text messaging, says Sample. By reducing call volume and improving customer satisfaction, team members can better focus on their in-store customers. Plus, the company is also deploying gen AI to optimize in-store operations, such as supply chain, floor space, and inventory management, helping leaders make decisions regarding the top and bottom lines of the business. But vigilance is key.

“As with all prior technical waves, AI is undoubtedly going to be accompanied by significant downsides and collateral damage,” says O’Grady. “Overall, it will accelerate development and augment human abilities while dramatically expanding the scope of problems.”

© Foundry