A new article by Wharton professor Kartik Hosanagar and Ramayya Krishnan, Dean of the Heinz College of Information Systems and Public Policy at Carnegie Mellon University, explores the key players, opportunities, and challenges within generative AI technology.
The generative AI landscape is evolving rapidly; understanding the value chain, deciding between building and borrowing, and navigating emerging challenges are critical for success. Companies that leverage their unique strengths and capitalize on user experience are best positioned to thrive in this transformative era.
Below is a snapshot.
The Generative AI Stack
Imagine a technological ladder, with each rung representing a crucial component of the generative AI ecosystem:
- Hardware layer: Powerful graphics processing units (GPUs) form the foundation, providing the processing power to train and run complex models.
- Data: The lifeblood of AI, massive datasets like internet crawls and domain-specific information fuel the training process.
- Computing infrastructure: Cloud giants like Amazon Web Services and Google Cloud offer the infrastructure to build and run models.
- Foundation models: The giants of the stack, such as OpenAI’s GPT-4 and Google’s Gemini, serve as versatile, pre-trained models for a wide range of tasks.
- RAG and fine-tuned models: To adapt foundation models to specific contexts and tasks, this layer applies techniques such as retrieval-augmented generation (RAG) and fine-tuning to improve performance (a minimal sketch follows this list).
- LLM applications: At the top, applications like GitHub Copilot (code generation) and Jumpcut Media (content summarization) utilize these models to serve specific user needs.
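For readers less familiar with retrieval-augmented generation, the toy sketch below shows the basic pattern behind that layer of the stack: fetch the most relevant documents for a query, then hand them to a language model as context. It is only an illustration, not code from the article; the sample corpus, the keyword-overlap retriever, and the call_llm placeholder are all hypothetical.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# The corpus, scoring function, and call_llm placeholder are illustrative only.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    ranked = sorted(
        corpus,
        key=lambda doc: len(q_terms & set(doc.lower().split())),
        reverse=True,
    )
    return ranked[:k]

def call_llm(prompt: str) -> str:
    """Placeholder for a call to any foundation model API."""
    return f"[model response to a {len(prompt)}-character prompt]"

def answer(query: str, corpus: list[str]) -> str:
    """Retrieve supporting context, then ask the model to answer using it."""
    context = "\n".join(retrieve(query, corpus))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    return call_llm(prompt)

if __name__ == "__main__":
    docs = [
        "Q3 revenue grew 12% year over year, driven by cloud services.",
        "The new privacy policy takes effect in January.",
        "GPU supply constraints eased in the second half of the year.",
    ]
    print(answer("How did revenue change in Q3?", docs))
```

A production system would replace the keyword retriever with embedding-based search over a vector store, but the retrieve-then-prompt structure is the same.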
The Battle for Value
The authors see potential for consolidation in the foundation model market, with a few dominant players likely to emerge. Several forces point in that direction:
- High barriers to entry: The colossal costs of data, computing resources, and technical expertise create a significant hurdle for new entrants.
- Demand-side network effects: As ecosystems around popular models grow, switching costs rise and new entrants struggle to compete with established players.
- Economies of scale: Leading players benefit from lower per-query costs due to resource pooling and demand aggregation.
Building vs. Borrowing
Companies entering the generative AI market face three crucial choices:
- Build on existing models: Convenient and cost-effective, but raises security concerns and leaves a company reliant on others’ advancements (this path and the open-source one are sketched in code after this list).
- Use open-source models: Offers transparency and potential cost savings, but may require in-house expertise and lag behind closed-source models in performance.
- Build their own models: Provides complete control and potential differentiation, but demands significant resources and ongoing upkeep.
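In practice, the difference between the first two options is often the difference between a few lines of integration code and owning the serving stack. The hedged sketch below contrasts the two paths; the choice of the OpenAI client and Hugging Face transformers libraries, and the specific model names, are illustrative assumptions rather than recommendations from the article.

```python
# Two ways to "borrow" generative AI capability; model names are illustrative only.

# Path 1: build on a hosted foundation model behind a vendor API.
# Requires an API key and sends data to a third party (the security concern noted above).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Summarize our Q3 earnings call."}],
)
print(response.choices[0].message.content)

# Path 2: run an open-weight model in-house with Hugging Face transformers.
# Keeps data on your own infrastructure but requires GPUs and ML expertise to operate.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="mistralai/Mistral-7B-Instruct-v0.2",  # illustrative open-weight model
)
print(generator("Summarize our Q3 earnings call.", max_new_tokens=200)[0]["generated_text"])
```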
Domain-Specific Advantage
- Organizations with access to large, high-quality data in specific domains, like finance or healthcare, can leverage this advantage to build specialized models that outperform general-purpose ones (a fine-tuning sketch follows this list).
- Because application functionality is often easy to replicate, companies need to differentiate at the user interface (UI) level.
- Incumbents with established user bases, like GitHub with Copilot, have a significant advantage due to easier adoption and data collection for further improvements.
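As a rough illustration of what “building a specialized model” can mean in practice, the sketch below fine-tunes a small open model on a domain-specific text corpus using Hugging Face libraries. The base model, the clinical_notes.txt file, and the hyperparameters are hypothetical placeholders; the article itself does not prescribe any particular toolchain.

```python
# Hedged sketch: fine-tuning a small open model on proprietary domain text.
# Model name, data file, and hyperparameters are illustrative assumptions.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_name = "distilgpt2"  # small illustrative base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # GPT-2-style models have no pad token by default
model = AutoModelForCausalLM.from_pretrained(model_name)

# Plain-text corpus of proprietary domain documents (hypothetical file).
dataset = load_dataset("text", data_files={"train": "clinical_notes.txt"})
tokenized = dataset.map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=512),
    batched=True,
    remove_columns=["text"],
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="domain-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized["train"],
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```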
Key Takeaways for Managers
- Incumbents: Leverage your domain expertise and data to build specialized models for unique value propositions. Integrate AI into existing products and services to capitalize on your established user base.
- Entrepreneurs: Build on top of existing models for initial offerings, focusing on agility and exploiting the inertia of larger players.
- Organizations: Carefully evaluate use cases for generative AI, considering factors like regulations, error management capabilities, and access to unique data for differentiation.
NOTE: One of the central emerging challenges is copyright. In particular, concerns about training models on copyrighted material may favor established players, who have the resources to navigate legal battles and licensing agreements.
Read the full article here.
(Outlet: MIT Sloan Management Review)