Taiwan Semiconductor (TSMC) is the chip supplier for major design companies such as Apple, Nvidia, AMD, Arm, Qualcomm, Broadcom, MediaTek and Marvell. TSMC is a foundry that manufactures the world’s most advanced chips, designated by node size. The most advanced node in production today is 3nm, which is primarily used by Apple in iPhones and MacBooks. The 5nm/4nm node is used by Nvidia and others for AI accelerators, with high-performance computing quickly moving to 3nm and even 2nm.

Taiwan Semiconductor reported earnings on May 15th. The company topped analyst estimates and its internal guide, with revenue growing 12.9% YoY to US$18.9 billion. EPS of $1.38 beat the $1.32 consensus by 4.5%.
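For readers who want to check the headline math, here is a quick sketch in Python. The reported and consensus figures come from the paragraph above; the year-ago revenue is implied by the growth rate, not a separately reported number:

```python
# Quick sanity check of the headline Q1 figures cited above.
revenue_q1 = 18.9          # US$ billions, reported
yoy_growth = 0.129         # 12.9% YoY growth

# Implied year-ago quarterly revenue (derived here, not a reported figure)
revenue_prior_year = revenue_q1 / (1 + yoy_growth)
print(f"Implied year-ago Q1 revenue: ${revenue_prior_year:.1f}B")  # ~$16.7B

# EPS beat vs. consensus
eps_reported, eps_consensus = 1.38, 1.32
beat_pct = (eps_reported / eps_consensus - 1) * 100
print(f"EPS beat: {beat_pct:.1f}%")  # ~4.5%
```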

Advanced node revenue remains strong, though 3nm revenue dipped sequentially. Per the opening remarks: “3-nanometer process technology contributed 9% of wafer revenue in the first quarter.” This is down from 15% last quarter. The decline looks temporary, with TrendForce expecting 3nm production capacity utilization to be up 80% by year end. This quarter, the revenue contributions from 5nm and 7nm each expanded by 2 percentage points.

Despite warning of a slowdown in the broader semiconductor industry this year, TSMC’s April sales surged 60% YoY and 21% MoM. This marks a positive start to the 20-percentage-point acceleration to 33% revenue growth that analysts expect as soon as the September quarter.

Background on Advanced Nodes: 5nm, 3nm and upcoming 2nm

Currently, AI accelerators use TSMC’s 5nm-class process. Nvidia’s Hopper and Blackwell are built on customized 4nm-class variants of the 5nm family that are tailored for high-performance computing applications (Nvidia calls these “4N” and, for Blackwell, “4NP”), yet TSMC recognizes this as 5nm revenue in its earnings reports. AI accelerators are expected to quickly move to smaller nodes to help lower power consumption. TSMC’s 3nm process is more energy efficient, and energy efficiency will improve further with the 2nm process.

3nm (N3) and 2nm (N2) technology:

The 3nm process is currently the most advanced semiconductor technology in production, representing a full node advance from the 5nm generation. At the foundry level, the 3nm process offers 15% better performance than the 5nm process at the same power and transistor count. TSMC also states the 3nm process can lower power consumption by as much as 30%. Die sizes are also an estimated 42% smaller than on 5nm.

In 2023, TSMC made 3nm chips for Apple’s iPhone 15 Pro and iPhone 15 Pro Max, as well as the M3 chips used in MacBooks. In 2024, TSMC will expand its 3nm customer base to include AMD and Intel. What is interesting is that Nvidia is not using the 3nm node in 2024, despite industry-wide expectations that Blackwell would feature the most advanced node. Instead, Blackwell relies on architectural improvements over Hopper for its performance leap.

TSMC offers enhanced 3nm processes, such as N3E, N3P and N3X, which allow a company like Apple to customize its 3nm chips differently than those for hyperscalers. N3E is the baseline for IP design, with 18% higher performance and 34% lower power consumption; N3P pushes performance and power efficiency further; and N3X targets high-performance computing with maximum performance at the cost of substantially higher (up to 250%) power leakage.

The 3nm node marks the end of FinFET transistors, which stands for fin field-effect transistor. With FinFET, the gate wraps the channel on three sides, whereas with gate-all-around (GAA), as the name implies, the gate wraps around the channel on all sides. FinFET is used in the 14nm, 10nm and 7nm nodes, and TSMC still uses FinFETs in 5nm, yet will phase out FinFET after 3nm. As TSMC moves to GAA for 2nm, wrapping the gate “all around” creates greater gate-to-channel surface area for better electrostatic control and reduced leakage.

Regarding FinFET, TSMC’s FinFlex technology allows chip designers to customize the number of fins per transistor, with three configurations that trade off performance against power consumption. Hybrid CPUs use FinFlex to pair high-performance cores with power-efficient cores, activating whichever cores a given workload needs most. The end result is that chip designers gain fine-grained control over the performance/power configuration.

2nm: Nanosheet Transistors and Backside Power Delivery

The 2nm node will be TSMC’s first to use gate-all-around field-effect transistors (GAAFETs), which will increase chip density. GAA nanosheet transistors have channels surrounded by the gate on all sides to reduce leakage, and the nanosheet channels can be widened to provide a performance boost or narrowed to optimize power and cost. The goal is to increase performance-per-watt to enable higher levels of output and efficiency. The N2 node is expected to be faster while requiring less power, with a 10%-15% performance increase and 25%-30% lower power consumption.

For TSMC, the 2nm node will feature NanoFlex technology, which is similar to FinFlex in that designers can mix cells from different libraries. However, because of the new gate-all-around (GAA) nanosheet transistors, there are additional benefits, such as the ability to customize the width and height of cells.

Intel’s 20A will be the first process to feature backside power delivery, which enables faster switching and alleviates routing congestion. With this release, Intel is introducing the “angstrom era,” which refers to future process generations where the nodes are not necessarily smaller, but the transistors they are built with are improved. Intel’s implementation of gate-all-around is called RibbonFET, in which multiple flat nanosheet ribbons are stacked to enable better current flow.

In the future, we will dive deeper for our free newsletter subscribers into the fierce competition heating up between TSMC and Intel at the foundry level. For now, the main points are that TSMC’s N3 relies on FinFET, with GAAFET being introduced for N2. The expectation is that N2 will be available in the second half of 2025. Intel is emerging as a more capable competitor to TSMC, with 20A featuring RibbonFET gate-all-around transistors and backside power delivery, due late 2024 to early 2025.

Here is what TSMC’s management has stated about the competition, which communicates that TSMC is not sweating Intel right now:

“In fact, let me repeat again, our 2nm technology without backside power (N2) is more advanced than both N3P and 18A, and will be the semiconductor industry’s most advanced technology when it is introduced in 2025.”

In terms of timing, management recently offered the following: “Randy, the N2’s ramp profile we say is very similar to N3 because of, look at the cycle time, we start the N2 production in the second half of 2025, actually in the last quarter of 2025. And because of the cycle time and all the kind of back-end process, and so we expect the meaningful revenue will start from the end of the first quarter or beginning of the second quarter of 2026.”

Advanced Nodes Contribute 65% of Revenue in Q1

TSMC’s advanced nodes (3nm to 7nm) contributed 65% of revenue in Q1, up from 51% last year. This was driven primarily by the 5nm node, at 37% of revenue, as well as the continual ramp of the 3nm node, although 3nm revenues dipped quite heavily QoQ.

Revenue contribution from TSMC’s most advanced 3nm node dropped QoQ from 15% to 9% in Q1, with 3nm revenue falling nearly 40% QoQ from $2.9B to $1.7B. This is not necessarily unusual in the early ramp stages: the 5nm node saw a similar pattern in Q4 2020 and Q1 2021, when its revenue contribution dipped before accelerating for multiple quarters. TSMC has indicated it will allocate more resources to 3nm, and this capacity will come from converting 5nm fabrication equipment. Therefore, it may be in 2025 that we see 3nm exceed 20% of revenue, a level management forecast in a previous earnings call.
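For reference, the dollar figures follow directly from the revenue-contribution percentages. Here is a rough sketch of that conversion, using approximate quarterly revenue totals:

```python
# Approximate conversion from 3nm revenue-contribution percentages to dollars.
q4_total, q1_total = 19.6, 18.9        # US$ billions, approximate quarterly revenue
n3_share_q4, n3_share_q1 = 0.15, 0.09  # 3nm share of wafer revenue

n3_q4 = q4_total * n3_share_q4         # ~$2.9B
n3_q1 = q1_total * n3_share_q1         # ~$1.7B
qoq_change = (n3_q1 / n3_q4 - 1) * 100
print(f"3nm revenue: ${n3_q4:.1f}B -> ${n3_q1:.1f}B ({qoq_change:.0f}% QoQ)")  # roughly -40%
```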

Management explained on the call why this conversion is feasible: “[The reason] we can convert one technology node capacity to the next one is because of our GIGAFAB’s physical advantage, meaning, let me give you one example, our 3-nanometer and 5-nanometer are adjacent to each other, the fabs, and they are all connected. So it’s much easier for TSMC to convert from 5 to 3. And that doesn’t mean that every node can do the same.”

In dollar terms, advanced nodes notched their two best quarters in Q4 and Q1, generating $13.2 billion and $12.3 billion, respectively. Q1’s soft 3nm sales were offset by sequential dollar gains in 5nm and 7nm, with advanced node revenue falling just 6.8% QoQ.

CEO C.C. Wei clarified in the earnings call that most of the current AI accelerators on the market “are in the 5- or 4-nanometer technology,” hence why we’re seeing strong 5nm sales and sequential growth in a seasonally slower quarter.

Despite most AI accelerators currently being produced on 5nm-class nodes, including the 4NP process for Nvidia’s upcoming Blackwell lineup, TSMC sees a clear path to increasing the 3nm node’s revenue contribution through the rest of the year. This will include converting 5nm tools to support 3nm capacity and demand. Some of the capacity constraints are coming from HBM3e and the surge in CoWoS advanced packaging, which we’ve covered in more detail in our analysis: “Nvidia Q1 Earnings Preview: Blackwell and the $200 Billion Data Center.”

While 3nm’s ramp so far has been strong, management has been dropping hints that customer adoption on its upcoming 2nm node, set for production by year end 2025, will be even stronger.

On the most recent earnings call, management stated that TSMC is “observing a high level of customer interest and engagement at N2 and expect[s] the number of the new tape-outs from 2-nanometer technology in its first 2 years to be higher than both 3-nanometer and 5-nanometer in their first 2 years.”

Margins Guided Sequentially Weaker

Despite strong HPC growth in Q1, which bucked the typical seasonal decline to post sequential growth, margins face some headwinds through the rest of the year.

TSMC reported a 53.1% gross margin and a 42% operating margin in Q1. For Q2, TSMC guided to a lower gross margin of 51% to 53%, primarily impacted by the recent 25% electricity price hike in Taiwan, some costs from the earthquake, and the 3nm ramp, as 3nm is not yet at the corporate average gross margin. Operating margin was guided to 40% to 42%, pointing to a slight 1-point QoQ decline at the midpoint.

Here’s what management said about Q2’s guide and some lasting headwinds through the rest of the year:

“After last year’s 17% electricity price increase from April 1, TSMC’s electricity price in Taiwan [has] increased by another 25% starting April 1 this year. This is expected to take out 70 to 80 basis points from our second quarter gross margin. Looking ahead to the second half of the year, we expect the impact from higher electricity costs continue and dilute our gross margin by 60 to 70 basis points […]

In addition, we expect our overall business in the second half of the year to be stronger than the first half. And revenue contribution from 3-nanometer technologies is expected to increase as well, which will dilute our gross margin by 3 to 4 percentage points in second half ’24 as compared to 2 to 3 percentage points in first half of ’24.

Finally, as we have said before, we have a strategy to convert some 5-nanometer tools to support 3-nanometer capacity given the strong multiyear demand. We expect this conversion to dilute our gross margin by about 1 to 2 percentage points in the second half of 2024.”

Overall, the largest headwinds to gross margin stem from ramping the 3nm node. This is to be expected: TSMC has historically seen 3 to 5 percentage points of gross margin headwind during the initial ramp phase (roughly the first 3 to 4 quarters) of a new node before ultimately realizing higher margins once the node has scaled. This occurred with both the 7nm and 5nm nodes.
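As a rough sanity check on the Q2 guide, we can compare the guided midpoint against Q1’s reported gross margin and the electricity headwind management quantified above. A simple sketch of that arithmetic (the midpoint and residual are our derivations, not company figures):

```python
# How much of the guided QoQ gross margin decline does the electricity hike explain?
gm_q1 = 53.1                      # Q1 reported gross margin, %
gm_q2_guide = (51.0, 53.0)        # Q2 guided gross margin range, %
gm_q2_mid = sum(gm_q2_guide) / 2  # 52.0% at the midpoint

total_decline_bps = (gm_q1 - gm_q2_mid) * 100   # ~110 bps at the midpoint
electricity_bps = (70, 80)                      # per the management quote above
residual_bps = (total_decline_bps - electricity_bps[1],
                total_decline_bps - electricity_bps[0])  # left for 3nm ramp, earthquake, etc.

print(f"Guided midpoint decline: {total_decline_bps:.0f} bps")
print(f"Not explained by electricity: {residual_bps[0]:.0f}-{residual_bps[1]:.0f} bps")
```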

AI-Related Revenue Reaches Fresh Record, Driving Strong Outlook

As the leading foundry for AI accelerators, TSMC is riding the enormous wave of demand from Big Tech. The chipmaker’s high-performance computing (HPC) revenues rose 3% QoQ to ~$8.68 billion, a fresh record despite the first quarter typically being seasonally weaker. HPC revenues (the segment that includes AI-related demand) also increased 18% YoY.

Q2 is already off to a strong start. TSMC’s April sales rose nearly 60% YoY and 21% MoM to NT$236.02 billion, or roughly US$7.28 billion to $7.30 billion.

TSMC had guided revenue for Q2 between $19.6 billion and $20.4 billion, and April’s surge puts it on track to land in the upper half of or above the guided range.
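A back-of-the-envelope sketch shows why April puts Q2 on that track. Note the NT$/US$ rate is implied from the figures above, and the flat three-month run-rate is a simplifying assumption, since monthly sales are lumpy:

```python
# Back-of-the-envelope: what April's monthly sales imply for the Q2 guide.
april_ntd = 236.02              # NT$ billions, reported April sales
april_usd = (7.28, 7.30)        # US$ billions, per the figures above

# Implied NT$/US$ exchange rate (derived, not an official figure)
implied_fx = (april_ntd / april_usd[1], april_ntd / april_usd[0])
print(f"Implied NT$/US$ rate: {implied_fx[0]:.1f}-{implied_fx[1]:.1f}")  # ~32.3-32.4

# Naive flat run-rate (simplifying assumption: May and June match April)
q2_run_rate = (3 * april_usd[0], 3 * april_usd[1])
q2_guide = (19.6, 20.4)         # US$ billions, TSMC's guided range
print(f"Flat run-rate: ${q2_run_rate[0]:.1f}B-${q2_run_rate[1]:.1f}B "
      f"vs. guide ${q2_guide[0]}B-${q2_guide[1]}B")
```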

Much of this surge is likely attributable to HPC applications, given that Big Tech discussed increased capex spending this year, predominantly for AI infrastructure. Our firm has placed a strong emphasis on correlating capex to AI investments for our paid research members, including a 1-hour webinar in April discussing our expectation that capex would increase in Q1 in support of AI stocks. We followed this up with free analysis in our newsletter tracking a 35% YoY increase to $200 billion in capex across Big Tech companies. A disproportionate amount of this will go to Nvidia.

We’re closely tracking Big Tech’s capex plans for 2024 and how this will flow downstream to AI hardware companies. The I/O Fund had a 45% allocation to AI going into 2023, one of the highest on record. Today, the AI allocation is higher with many lesser-known names. Learn more here.

There are also reports of Nvidia and AMD fully booking out TSMC’s advanced packaging capacity through the end of 2025, signaling strong demand from some of TSMC’s primary HPC customers. This lends support to a strong AI-driven outlook.

Notably, TSMC’s management was much more cautious on the broader semiconductor industry. CEO C.C. Wei explained that for 2024, “We lowered our forecast for the 2024 overall semiconductor market, excluding memory, to increase by approximately 10% year-over-year.”

That caution does not translate through to AI, with TSMC seeing a “strong AI-related demand outlook.” Wei noted that the “continued surge in AI-related demand supports our already strong conviction that structural demand for energy-efficient computing is accelerating.”

TSMC’s positioning and value to the AI supply chain are expected to increase in the age of AI and high-performance computing. Wei added that TSMC forecasts “revenue contribution from several AI processors to more than double this year and account for low-teens percent of our total revenue in 2024. For the next 5 years, we forecast it to grow at 50% CAGR and increase to higher than 20% of our revenue by 2028.” This includes more than just data center GPUs; it will also include on-device AI.
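To put that forecast in perspective, here is a simple illustration of the compounding math. The 13% starting share is our assumed midpoint for “low teens”; everything else follows from the quoted 50% CAGR:

```python
# Illustration of the AI-processor revenue forecast quoted above.
ai_share_2024 = 0.13   # assumed midpoint for "low teens percent" of 2024 revenue
ai_cagr = 0.50         # management's forecast CAGR for AI processor revenue
years = 4              # 2024 -> 2028

# AI revenue multiple over four years at a 50% CAGR
ai_multiple = (1 + ai_cagr) ** years  # ~5.1x
print(f"AI revenue multiple by 2028: {ai_multiple:.1f}x")

# Implied ceiling on total-revenue growth for AI to still exceed 20% of 2028 revenue
max_total_multiple = ai_multiple * ai_share_2024 / 0.20
max_total_cagr = max_total_multiple ** (1 / years) - 1
print(f"AI exceeds 20% of revenue as long as total revenue CAGR stays below ~{max_total_cagr:.0%}")
```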

The I/O Fund has been covering on-device AI on our research site to prepare for the next leg up in AI with many lesser-known names.

Analyst Estimates Falling Slightly

What’s interesting is that consensus revenue estimates have not only failed to move higher, but have actually been revised lower, despite a top- and bottom-line beat in Q1 and a strong guide above consensus for Q2.

Analysts are expecting revenue growth of 29.1% YoY to $19.94 billion in Q2, slightly below the midpoint of TSMC’s guided range, before accelerating to 32.1% YoY to $22.32 billion in Q3. This is expected to be ‘peak’ growth, with revenue growth rates decelerating back into the low 20% range heading into 2025.
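For context on the implied acceleration, here is a quick sketch that backs out the year-ago base from the consensus figures above (the implied base-period revenues are derived, not reported):

```python
# Back out the implied year-ago revenue from the consensus growth rates above.
estimates = {
    "Q2": {"revenue": 19.94, "yoy": 0.291},  # US$ billions, consensus
    "Q3": {"revenue": 22.32, "yoy": 0.321},
}

for quarter, est in estimates.items():
    base = est["revenue"] / (1 + est["yoy"])  # implied year-ago quarter (derived)
    print(f"{quarter}: ${est['revenue']:.2f}B implies ~${base:.1f}B a year ago "
          f"({est['yoy']:.1%} YoY growth)")

# The step-up from Q1's reported 12.9% growth to Q3's expected ~32% is roughly the
# 20-percentage-point acceleration referenced earlier in this analysis.
print(f"Acceleration vs. Q1: ~{(0.321 - 0.129) * 100:.0f} percentage points")
```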

Now compare this to analyst estimates from late January: while Q2’s estimate has moved higher, Q3’s revenue estimate has been revised $110 million lower, even with a $50 million increase to Q4’s estimate. January’s figures are shown below for reference:

It’s not unusual to see EPS estimates come down slightly, given the quantified gross margin headwinds TSMC expects in Q2 and the 3nm ramp headwind persisting through the rest of the year.

The $110 million softening in Q3 revenue estimates may be linked to management not raising full-year guidance, which was addressed in the Q&A on the recent earnings call:

Mehdi Hosseini (SIG):

You had a very nice upside to revenue expectation for the first half of ’24, but has kept the year-end unchanged. Is that a reflection of that slow recovery that you were highlighting? Or would you prefer to wait to have more visibility before updating 2024 target?

[…]

Wendell Huang (CFO, TSMC):

Yes. Mehdi, our guidance for the quarterly profile did not change. We always said that quarter-over-quarter, there will be growth. And also, the full year guidance will stay the same. So I don’t think there is a so-called upside, as you just said.

—End Quote

Conclusion:

As we’ve emphasized in this analysis and many others on AI stocks, the weakness is coming from non-AI segments. TSMC is a bellwether for semiconductors and can offer unparalleled visibility. In other commentary, management described where the lack of upside is coming from, which matches our understanding:

“Yes, smartphone end-market demand is seeing gradual recovery and not a steep recovery, of course. PC has been bottomed out and the recovery is slower. However, AI-related data center demand is very, very strong. And the traditional server demand is slow, lukewarm. IoT and consumer remain sluggish. Automotive inventory continues to weaken.” -TSMC

Our firm closed our TSMC position late last year for a 22% gain, when the stock was at $92, deciding instead to focus on stocks with heavier AI concentration and less geopolitical risk. The stock has risen an impressive 62% since then. Around that time, we re-allocated and built an AI position that is up 51% over a similar time frame. We continue to focus on stocks with high AI concentration, and TSMC will remain on our watchlist as we build out our AI portfolio with many lesser-known names.

At the I/O Fund, we had five positions with returns over 100% and seven positions that beat the Nasdaq in 2023. This contributed to cumulative returns of 131% since May of 2020. For more in-depth research from Beth, including 15-page+ deep dives on the stock positions that the I/O Fund owns, subscribe here.

