Navigating the Generative AI Era: An Essential Guide for CEOs

The release of ChatGPT in late 2022 sparked intense interest in generative AI, bringing organizations a wide range of productivity gains along with new challenges. The technology’s potential to disrupt almost every industry, much as the Mosaic web browser ushered in the internet era, creates a pressing need for business-model innovation.

CEOs must develop a strategic plan around three key pillars: understanding the innovations made possible by AI’s effectively infinite memory, redefining and managing employee roles in an AI-driven context, and addressing the risk of false or biased AI-generated output. The rapid evolution of generative AI raises numerous open questions, but CEOs must prepare for the imminent obsolescence of current business models and plan strategically for the future.

Democratizing Power of AI: A Double-Edged Sword

Accessible AI tools like ChatGPT, DALL-E 2, Midjourney, and Stable Diffusion empower individuals to generate content across domains ranging from websites and advertising to video production. The “low-code, no-code” nature of these tools facilitates widespread adoption and scalability within organizations, yielding significant cost reductions by automating tasks such as document summarization. However, the democratizing effect of generative AI also levels the playing field, granting competitors similar access and capabilities. Keeping pace requires adopting existing large language model applications such as GitHub Copilot and Jasper.ai, but these off-the-shelf tools offer little differentiation, since the only variability comes from the prompts users feed into the system.
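To make the differentiation point concrete, here is a minimal sketch, assuming a hosted LLM API such as OpenAI’s chat-completions endpoint, of the kind of document-summarization call any competitor could also make. The endpoint, model name, and prompt are illustrative assumptions rather than a recommendation, and the prompt is effectively the only lever the user controls.

```python
import os
import requests

# Sketch: calling a hosted LLM API to summarize a document.
# The endpoint and model name are illustrative; check the provider's
# current documentation before relying on them.
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # assumed to be set in the environment

def summarize(document: str, model: str = "gpt-4o-mini") -> str:
    """Ask the hosted model for a three-bullet summary of `document`."""
    # The prompt is the only real lever here: any competitor calling the
    # same endpoint with a similar prompt gets a comparable result.
    prompt = f"Summarize the following document in three bullet points:\n\n{document}"
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={"model": model, "messages": [{"role": "user", "content": prompt}]},
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarize("Quarterly revenue rose 12% on strong demand for cloud services..."))
```

Because the heavy lifting happens inside the provider’s model, swapping the prompt or the model name is roughly the full extent of customization available at this layer.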

Navigating the Trade-offs in Generative AI Implementation

The main challenge for CEOs is identifying the “golden” use cases for their companies that provide substantial competitive advantages and impact compared to existing solutions. These use cases could originate from any point in the value chain, offering growth through improved offerings, reduced time-to-market, cost savings, and innovative idea generation. For instance, automating customer engagement or speeding up R&D in biopharma through AI can significantly enhance a company’s value.

Upon identifying their golden use cases, leaders must decide between fine-tuning existing large language models (LLMs) and training custom models. Fine-tuning existing models is cost-effective and can save substantial time, making it well suited to quick experimentation. However, fine-tuned models remain heavily dependent on the underlying model’s training data and offer limited options for protecting proprietary data.
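As a rough sketch of what the fine-tuning path involves, the example below adapts a small open-source model with the Hugging Face Transformers library. The base model, the public dataset standing in for proprietary documents, and the hyperparameters are assumptions chosen only to keep the illustration light.

```python
# Sketch: fine-tuning an existing open model on domain text.
# distilgpt2 and the wikitext sample are placeholders for a real base model
# and a proprietary corpus; hyperparameters are illustrative only.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "distilgpt2"  # small open model, chosen to keep the example light

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style models have no pad token
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Public placeholder corpus standing in for proprietary documents.
raw = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")
raw = raw.filter(lambda row: row["text"].strip() != "")  # drop empty lines

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = raw.map(tokenize, batched=True, remove_columns=raw.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="finetuned-model",
        per_device_train_batch_size=8,
        num_train_epochs=1,
        logging_steps=50,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()
trainer.save_model("finetuned-model")
```

Even this toy setup shows why the approach suits quick experimentation: the pretrained weights do most of the work, and only a modest training run on domain data is layered on top.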

In contrast, training a custom LLM offers more flexibility, but it incurs high costs and demands substantial capabilities, even when managed through a partnership. For instance, AI21 Labs estimates the cost of training a 1.5-billion-parameter model at about $1.6 million. While the investment bar is high, a truly differentiated use case could generate value that outweighs the cost.

Strategic Considerations in Generative AI Adoption

Leaders must prudently assess the timing of investments in generative AI, balancing the potential costs of premature adoption against the risks of falling behind. Given the technology’s current limitations, chiefly its propensity for error, it should be deployed first in use cases that can tolerate variability in output. If custom development is deemed time-sensitive and vital, deciding where funding for data and infrastructure will come from, whether IT, R&D, or elsewhere, is critical.

The “fine-tune versus train” conundrum further influences long-term competitive advantage. As generative AI research transitions from public and open-source domains to proprietary settings, open-source models risk falling behind cutting-edge solutions, indicating an impending generative AI arms race. As these algorithms grow increasingly sophisticated, data scientists will require enhanced training, skills, and expertise to keep pace. Large organizations aiming for independence while utilizing the latest AI technologies must build robust internal tech teams.

Navigating the Human Impact of Generative AI

Generative AI, as a disruptive technology, prompts CEOs and HR leaders to redefine roles and responsibilities and adjust operating models within their organizations. Generative AI is set to augment many job roles by enhancing productivity, performance, and creativity. Employees in clerical roles could use generative AI for initial drafts, freeing up their time for refinement and solution identification. However, the effect of AI on employees’ emotional well-being and professional identity should not be ignored.

This AI impact is a crucial cultural and workforce issue. CEOs should collaborate with HR to comprehend how roles will evolve, conduct regular pulse checks to monitor employee sentiment, and design transparent change-management initiatives. The message should underscore the indispensable role of humans in effectively and ethically deploying AI.

As AI adoption picks up speed, CEOs need to develop a strategic workforce plan, ensuring the right people and management are in place to maintain competitiveness and optimize AI investments. This requires addressing questions around the necessary competencies for project leaders, the creation of the optimal experience curve for future talent development, and adjustments in training and recruitment strategies.

In adjusting the operating model, agile (or bionic) models with centralized IT and R&D departments equipped with experts are predicted to be most effective and scalable. A chief AI officer role could be beneficial to oversee the business and technical requirements for AI initiatives. Structurally, this might involve department-focused teams with cross-functional members or, preferably, cross-departmental and cross-functional teams aligned to the business and technical platforms.

Managing Risks and Ensuring Quality in the Era of Generative AI

Generative AI, despite its potential, presents significant risks because it cannot reliably distinguish factually correct from incorrect information, a tendency known as “hallucination”. Other risks include copyright infringement, proprietary data leaks, and unexpected functionality, or “capability overhang”. Companies need to establish policies that keep generative AI within well-defined parameters: tracking experiments, clarifying data ownership, establishing content review processes, and protecting proprietary data.

Employees should be trained to use generative AI within their areas of expertise, since its low-code, no-code properties can make users overconfident and lead them to expose sensitive information. Leaders should encourage healthy skepticism of AI-generated content and ensure all content is reviewed by data owners. Quality and security measures should include stringent documentation, institutional review boards, and licensing for downstream uses. Employees should also be cautioned against entering sensitive information into public chatbots.

Currently, it is difficult to use large language models (LLMs) without disclosing data to the model provider. For data privacy, options include hosting the full model on-site or on a dedicated server, though this may limit access to cutting-edge solutions. Leaders should also consider techniques such as named entity recognition to strip sensitive information from text before it is sent to a model. As LLMs evolve, solutions for protecting sensitive data will improve as well, necessitating frequent updates to security protocols and policies.
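As one illustration of the named-entity-recognition approach, the sketch below uses the open-source spaCy library to replace people, organizations, and other sensitive entities with placeholders before text leaves the organization. The chosen entity labels and placeholder format are assumptions, and a production pipeline would add pattern-based rules for identifiers such as account numbers.

```python
# Sketch: scrubbing sensitive entities from text before it reaches an
# external LLM, using spaCy's named entity recognition.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

# Illustrative choice of entity labels to treat as sensitive.
SENSITIVE_LABELS = {"PERSON", "ORG", "GPE", "DATE", "MONEY"}

def redact(text: str) -> str:
    """Replace sensitive named entities with generic placeholders."""
    doc = nlp(text)
    redacted = text
    # Replace from the end of the string so earlier character offsets stay valid.
    for ent in reversed(doc.ents):
        if ent.label_ in SENSITIVE_LABELS:
            redacted = redacted[:ent.start_char] + f"[{ent.label_}]" + redacted[ent.end_char:]
    return redacted

if __name__ == "__main__":
    print(redact("Acme Corp agreed to pay Jane Doe $2 million on March 3, 2023."))
    # Expected output (exact entities depend on the model):
    # [ORG] agreed to pay [PERSON] [MONEY] on [DATE].
```

The redacted text, rather than the original, is what would be submitted to an externally hosted model; the mapping from placeholders back to real values stays inside the organization.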

Despite the inherent risks and unknowns, generative AI provides unprecedented opportunities. Leaders must strategize effectively to exploit these opportunities, reorganize their workforce and operating models to support generative AI innovation, and ensure that experimentation doesn’t compromise security and ethics, thereby securing long-term competitive advantage.

Source:
The CEO’s Guide to the Generative AI Revolution


Get in touch

Whether you’re looking for expert guidance on an AI initiative or want to share your AI knowledge with others, our network is the place for you. Let’s work together to build a brighter future powered by AI.