Key takeaways
We believe AI demand will create growth opportunities for companies across the semiconductor value chain that enable greater computing power, as well as for hyperscale cloud providers and software companies through enhanced services and new product development.
The surge in generative AI usage is accelerating demand for graphics processing units (GPUs), the building blocks of high-volume parallel processing, as well as for makers of data center chips, foundries and advanced equipment suppliers.
Mega cap companies that have captured the lion’s share of growth in public cloud are equally well-positioned in generative AI, as they own both the foundational language models and the raw compute power needed to apply generative AI at scale.
The potential to infuse AI into a broad range of existing applications and across every layer of the software stack should increase the industry’s total addressable market as software automates more manual tasks.
Large language models signal inflection point in AI development
The World Wide Web was released to the public four years after its creation and more than 20 years after the initial development of network communications. Artificial intelligence (AI) is experiencing a similar inflection point with the rollout of generative AI. While AI has been in commercial use for over a decade, continuous advances in natural language processing and computing power over the last four to five years have led to increasingly sophisticated capabilities. Whether in voice assistants like Siri and Alexa or in autonomous driving, AI has unlocked a new cycle of rapid innovation.
Looking past the enthusiasm and calls for caution spawned by ChatGPT and similar large language models (LLMs), we believe AI is entering a period of broad adoption and application that will enhance business efficiency and expand existing end markets. As with any emerging innovation, the AI development ball remains in constant motion, with new opportunities and competitive risks emerging on an ongoing basis.
From an investment standpoint, we believe AI demand will create growth opportunities over the short to medium term for companies in the semiconductor value chain that enable greater computing power, as well as for hyperscale cloud providers and software companies through enhanced services and new product development. Generative AI could create new competitive risks in some areas of Internet usage and force incumbents to increase spending to catch up with peers. Early-mover advantages could make a difference in some areas, while others could become commoditized through competition. How LLMs develop, for example, and whether open source becomes a competitive threat could have significant long-term business implications for first-to-market hyperscalers.
Generative AI driving explosive demand for GPUs
AI refers to the development of computing systems and related technologies, such as robots, that can emulate and even surpass human capabilities. Computers gain these capabilities by training on enormous amounts of data, which requires substantial processing power. Generative AI refers to the ability of natural language processing models to generate textual and graphical responses to queries.
The most efficient way for servers to analyze data is through a large number of cores (or processing units) embedded within a GPU, a specialized chip that can process a high volume of low-precision calculations efficiently and in parallel. The massive parallel computing requirements for training LLMs are spurring a tidal shift away from serial processors, also known as central processing units (CPUs), toward GPUs (Exhibit 1). GPUs are the enablers of AI, and the surge in interest in and usage of generative AI is accelerating demand for these building blocks. ChatGPT has driven an inflection in AI adoption, with various industries leveraging AI algorithms and machine learning to improve productivity and enhance revenue generation.
Exhibit 1: AI Servers Rely on GPUs

Source: J.P. Morgan estimates.
Within data centers, which house a variety of server types for different computing needs, the growing penetration of AI is driving an acceleration in AI server shipments. AI adoption within the data center is expected to increase substantially, from mid-single-digit percentages today to roughly one-third of data center servers carrying AI-related semiconductor content over the medium term.
Exhibit 2: AI Server Shipment Growth Runway

Source: IDC, J.P. Morgan estimates. There is no assurance that any estimate, forecast or projection will be realized.
The dominant GPU provider, with an estimated 95%-100% share of the AI training semiconductor market, is expected to retain its leadership as generative AI demand expands. Its advantages include a full-stack computing platform, high-performance GPUs with a cost-of-compute edge over competing chips, and a head start in software, such as industry-specific libraries and pre-trained models that facilitate enterprise adoption. Another semiconductor designer is a distant second in the data center server market, while cloud providers are also developing chips in-house. Several privately held companies offering enhanced computing technology could also vie for enterprise customers, but they currently lack the full ecosystem crucial to deploying effective AI infrastructure and addressing niche use cases.
Exhibit 3: AI Server Penetration Uplift from Generative AI

Source: Bank of America Merrill Lynch, J.P. Morgan, UBS, Visible Alpha. There is no assurance that any estimate, forecast or projection will be realized.
Heightened demand trends also benefit semiconductor makers serving cloud hyperscalers with other products related to AI infrastructure deployment. These include custom chips and networking solutions, semiconductor foundries and semiconductor equipment makers that are critical to producing the leading-edge chips required for AI.
Cloud adoption to accelerate with AI usage
Well before the recent rollout of ChatGPT and advanced LLMs, compute workloads were rapidly migrating to the cloud, making large hyperscalers the most important providers of sophisticated technology infrastructure to enterprise customers. Scale matters in public cloud, which has enabled a small group of companies to capture the lion’s share of growth in the space. These companies are just as well-positioned in the generative AI era, as they own both the foundational language models and the raw compute power needed to apply generative AI at scale. We therefore see the infrastructure layer behind generative AI development shaping into an oligopoly over time.
Exhibit 4: Cloud Hyperscalers Poised to Maintain Leadership in AI

Source: Morgan Stanley Research.
As the pace of cloud adoption normalizes from its pandemic-era surge, we see generative AI catalyzing the next leg of its growth. Public cloud provides both the speed and flexibility needed to apply AI to business problems. Early adopters can build AI-driven applications in a matter of weeks using hyperscalers’ APIs and infrastructure-as-a-service (IaaS) layers, rather than the months or years it would take to build from scratch on on-premise infrastructure. Customizing LLMs involves vast amounts of data that are often housed in the cloud, expanding the pie for hyperscale cloud providers and the ecosystem behind them, including startups and services firms.
Hyperscalers, however, could be challenged by increasing competition from open-source LLMs. Some within the cloud industry believe open source could eventually make LLMs a commodity, with many companies able to provide fairly undifferentiated LLMs at a low cost. But users of open-source models must consider “who owns the data” that drive the models. While it is still early days in LLM development, we believe concerns over security and usage of proprietary data present a significant risk for open-source vendors/technologies, which should favor public clouds with existing safeguards in place. While some customers will likely experiment with open-source LLMs, many larger enterprises are unlikely to incur the risks associated with this model.
Beyond cloud services, AI has the potential to reshape trillion-dollar industries such as online advertising. From a web search perspective, chatbots like ChatGPT can drastically compress the time it takes to answer complex questions versus a traditional search engine (e.g., “What is the best canyon in Colorado to hike with a dog?”). This could have a negative impact on search monetization for incumbents, at least in the near term, much like the desktop-to-mobile transition in the early 2010s. The incremental investment to implement generative AI at scale could also lead to higher capital expenditures for leading online advertising platforms, weighing on cash flows as margins compress.
Once we get past the growing pains, AI tools are expected to provide tailwinds to both platforms and advertisers by allowing better targeting of ads. Generative AI can be used to dynamically generate advertising content tailored to individual users of search and YouTube. Online ad platforms that have had to rethink ad targeting following Apple’s Identifier for Advertisers (IDFA) privacy changes should regain some of those capabilities with generative AI. For instance, Instagram could use these tools to generate video ads from a brand’s static images, driving up conversion rates. Chatbots built into WhatsApp can help small businesses connect with more of their customers in real time. We are closely watching shifts in consumer Internet usage to understand how these headwinds and tailwinds might play out for Internet firms of all sizes as they incorporate generative AI.
Another key area to watch regarding LLMs is the application layer, which will entail development of vertical and company-specific software. While the largest models are good at providing generalized knowledge gleaned from massive data sets, models trained on domain-specific data will have an advantage over larger, less targeted models for most enterprise applications. This will require access to proprietary first-party data as well as real-world usage by millions of end users to refine the quality of an LLM through human feedback. A good example is a conversational search engine powered by generative AI, whose users implicitly help improve the model over time through their clicks, engagement levels and follow-up questions. As LLMs themselves get commoditized over time, we believe companies that leapfrog their peers in leveraging generative AI will also possess superior design and user experience skills. This is one of the key areas to consider when evaluating AI’s impact on software and services providers.
Generative AI to drive next software innovation wave
A handful of leading software makers are already marketing AI-enhanced versions of their software, offering a preview of the requirements for successful software integration of AI: good data, domain expertise and the ability to apply LLMs to solve specific customer problems. The potential to infuse AI into a broad range of existing applications and across every layer of the software stack should increase the industry’s total addressable market as software automates more manual tasks. Code development, data management and analytics in particular appear well-suited to see significant improvements from AI integration. Software vendors serving areas with high barriers to entry should also command pricing power as their products enable greater customer productivity.
Exhibit 5: AI Share of IT, Software Spend to Become Meaningful

Source: ClearBridge Investments. 2026 projections based on October 2022 IT and software spending estimates from Gartner. Total IT spending excludes devices. There is no assurance that any estimate, forecast or projection will be realized.
Software-as-a-service (SaaS) vendors have quickly embraced AI to leverage opportunities to remain competitive, unleashing a rapid innovation cycle in generative AI applications. While seeing fewer users (or “seats”) per enterprise customer remains a risk in some cases, we see that as more than offset by higher pricing on AI-enabled offerings over time. Moreover, SaaS companies with large amounts of customer data and significant regulatory barriers to entry, such as in human resources and financial applications, are best positioned to maintain their competitive advantage as AI automates more functions. We believe the risk of software disintermediation, on the other hand, will be highest in categories that are driven by manual processes, focused on consumers and content, and characterized by low barriers to entry and low customer retention rates.
Services companies will play an important role in guiding customers through initial integration of AI, an exercise that could last three to five years. What is unknown at this point is how much AI automation will take over going forward, potentially lessening the need for ongoing services and IT consulting support.
What’s next
Taking into account the early adoption of generative AI across enterprise IT and consumer markets, the integration of AI into the global economy is still in the very early innings. From a business model and investment standpoint, we believe some key areas to watch as generative AI gains wider usage include the implementation cost curve, consumer Internet behavior with AI-enabled search, and actions by regulators and publishers to control and likely limit the proprietary data available to train LLMs. In addition to vertical company- and industry-specific impacts, generative AI will have broader impacts as use cases expand across more segments of the economy. We plan to take a closer look at the macroeconomic impacts of generative AI and how it could affect long-term productivity and inflation expectations in a follow-up report.
Definitions
Artificial intelligence (AI), also known as machine intelligence, is a branch of computer science that focuses on building and managing technology that can learn to autonomously make decisions and carry out actions on behalf of a human being.
Generative AI is a broad label that’s used to describe any type of artificial intelligence (AI) that can be used to create new text, images, video, audio and code.
The World Wide Web (also known as the web or WWW) refers to all the public websites or pages that users can access on their local computers and other devices through the internet.
A large language model (LLM) is a type of artificial intelligence (AI) algorithm that uses deep learning techniques and massively large data sets to understand, summarize, generate and predict new content.
ChatGPT is an AI chatbot that uses natural language processing to create humanlike conversational dialogue.
A central processing unit (CPU) is the core component that defines a computing device.
A graphics processing unit (GPU) is a computer chip that renders graphics and images by performing rapid mathematical calculations.
Infrastructure as a service (IaaS) is a form of cloud computing that provides virtualized computing resources over the internet.
Software as a service (SaaS) is a licensing model in which access to software is provided on a subscription basis, where the software is located on external servers rather than on servers located in-house.
Identifier for Advertisers (IDFA) is a unique, random device identifier that Apple assigns to every iOS device, similar to a cookie on a webpage.
WHAT ARE THE RISKS?
All investments involve risks, including possible loss of principal. The value of investments can go down as well as up, and investors may not get back the full amount invested. Stock prices fluctuate, sometimes rapidly and dramatically, due to factors affecting individual companies, particular industries or sectors, or general market conditions.
Investments in fast-growing industries like the technology sector (which historically has been volatile) could result in increased price fluctuation, especially over the short term, due to the rapid pace of product change and development and changes in government regulation of companies emphasizing scientific or technological advancement or regulatory approval for new drugs and medical instruments.
The opinions are intended solely to provide insight into how securities are analyzed. The information provided is not a recommendation or individual investment advice for any particular security, strategy, or investment product and is not an indication of the trading intent of any Franklin Templeton managed portfolio. This is not a complete analysis of every material fact regarding any industry, security or investment and should not be viewed as an investment recommendation. This is intended to provide insight into the portfolio selection and research process. Factual statements are taken from sources considered reliable but have not been independently verified for completeness or accuracy. These opinions may not be relied upon as investment advice or as an offer for any particular security.
Any companies and/or case studies referenced herein are used solely for illustrative purposes; any investment may or may not be currently held by any portfolio advised by Franklin Templeton. The information provided is not a recommendation or individual investment advice for any particular security, strategy, or investment product and is not an indication of the trading intent of any Franklin Templeton managed portfolio. Past performance does not guarantee future results.