PENG April 1, 2026

Penguin Solutions Q2 FY2026 Earnings Call - Memory Surge Offsets Falling Advanced Computing as CEO Recasts Company as AI Factory Platform

Summary

Penguin reported mixed Q2 results, with total net sales of $343 million, non-GAAP gross margin of 31.2%, and non-GAAP EPS of $0.52. Management framed the storyline as clear: memory demand driven by inference workloads is powering a 63% year-over-year jump in Integrated Memory, while Advanced Computing continues to be weighed down by the wind down of hyperscaler and Penguin Edge business, leaving revenue lumpy and timing dependent.
The new CEO, Kash Shaikh, is repositioning Penguin as an AI factory platform company centered on compute, scalable memory, software, services, and partner integration. Management raised full year guidance thanks to a stronger memory outlook, but warned that supply constraints, higher memory input costs, and deployment timing will keep Advanced Computing revenue volatile and put pressure on second half gross margins.

Key Takeaways

  • Q2 net sales were $343 million, down 6% year over year, with non-GAAP gross margin at 31.2% and non-GAAP diluted EPS of $0.52, flat year over year.
  • Management raised full year midpoint guidance, now targeting approximately 12% net sales growth and $2.15 of non-GAAP diluted EPS, up from prior guidance of 6% and $2.00.
  • Integrated Memory was the growth engine, with Q2 net sales of $172 million, up 63% year over year, representing 50% of company revenue.
  • Advanced Computing fell to $116 million in Q2, down 42% year over year, and management now expects full year Advanced Computing sales to decline between 25% and 15% year over year.
  • Non-hyperscale AI HPC is expanding: net sales were down 35% in the quarter but up 50% for the first half, with five new AI HPC customer logos in Q2 and seven in the first half, versus three a year ago.
  • The CFO said much of the near-term memory upside is pricing driven, with demand also strong but constrained by component availability, prompting strategic inventory buys.
  • Inventory rose to $322 million from $200 million a year ago, days inventory moved to 51 days, and accounts payable climbed to $401 million, reflecting ahead-of-need memory purchases and timing of payments.
  • Penguin is positioning as an AI factory platform built on six elements: ICE ClusterWare, Memory AI systems, Advanced Computing systems, OriginAI architectures, end-to-end services, and a partner ecosystem including NVIDIA and Dell.
  • New product initiatives include Memory AI servers and a CXL-based Memory AI KV cache server, sold to customers including a tier one financial institution and a generative AI buyer focused on inference.
  • Management emphasized a structural shift: inference and agentic AI are more memory and latency sensitive than training, which increases the importance of CXL, KV cache and future photonic memory appliances.
  • The company received roughly $32 million in proceeds from Marvell's acquisition of Celestial AI; it closed the quarter in a net cash position, with $489 million in cash and short-term investments against $450 million of debt.
  • Gross margin outlook was trimmed to 28% plus or minus 0.5 points for the full year, driven by a higher mix of lower margin memory sales and rising memory costs in AI hardware.
  • OpEx guidance remains $250 million plus or minus $5 million, and non-GAAP diluted share count is expected near 53 million shares following buybacks; $32 million of repurchases occurred in the quarter.
  • Management flagged deployment timing and supply chain lead times as the primary sources of revenue lumpiness, noting typical AI HPC sales cycles of 12 to 18 months and longer lead times for memory components.
  • Photonic memory is viewed as an enabler rather than a prerequisite; CXL adoption is already delivering use cases and revenue, while photonics would amplify memory pooling and scale.

Full Transcript

Operator: I will now hand the conference over to Suzanne Schmidt, Investor Relations. Suzanne, please go ahead.

Suzanne Schmidt, Investor Relations, Penguin Solutions: Thank you, operator. Good afternoon, and thank you for joining us on today’s earnings conference call and webcast to discuss Penguin Solutions’ second quarter fiscal 2026 results. On the call today are Kash Shaikh, Chief Executive Officer, and Nate Olmstead, Chief Financial Officer. You can find the accompanying slide presentation and press release for this call on the investor relations section of our website. We encourage you to go to the site throughout the quarter for the most current information on the company. I would also like to remind everyone to read the note on the use of forward-looking statements that is included in the press release and the earnings call presentation.

Please note that during this conference call, the company will make projections and forward-looking statements including, but not limited to, statements about the market demand, technology shifts, industry trends, and the company’s growth trajectory and financial outlook, business plans and strategy, including investment plans, product development and roadmap, anticipated sales, orders, revenue, and customer growth and diversification, and existing and potential strategic agreements and collaborations. Forward-looking statements are based on current beliefs and assumptions and are not guarantees of future performance and are subject to risks and uncertainties, including, without limitation, the risks and uncertainties reflected in the press release and the earnings call presentation filed today, as well as in the company’s most recent annual and quarterly reports.

The forward-looking statements are representative only as of the date they are made, and except as required by applicable law, we assume no responsibility to publicly update or revise any forward-looking statements. We will also discuss both GAAP and non-GAAP financial measures. Non-GAAP measures should not be considered in isolation from, as a substitute for, or superior to our GAAP results. We encourage you to consider all measures when analyzing our performance. A reconciliation of the GAAP to non-GAAP measures is included in today’s press release and accompanying slide presentation. With that, let me now turn the call over to Kash Shaikh, CEO. Kash?

Kash Shaikh, Chief Executive Officer, Penguin Solutions: Good afternoon. Thank you for joining our second quarter FY 2026 earnings call. This is my first earnings call as CEO of Penguin Solutions, and I’m excited to step into this role. I want to start by thanking Mark Adams for his leadership and for the strong foundation he built. Since joining in early February, I’ve spent significant time with customers, partners, and our teams around the world. I’ve witnessed the strength of the company, both in our technology and our customer relationships. What is clear is this: AI is moving from experimentation to production, with workloads increasingly shifting towards real-time inference. We are already seeing this translate into customer demand beyond hyperscale across enterprise, Neocloud, and sovereign AI markets. We expect this transition to expand our addressable market and drive increased demand for integrated AI infrastructure, where Penguin is already winning.

We see this firsthand in the breadth of our deployments, from a sovereign AI factory, Hain, in South Korea, to enterprise voice AI with Deepgram, to large-scale research systems with Georgia Tech, along with a growing pipeline across all three market segments. What makes this opportunity so significant is that the architecture of AI is also changing. Model training was largely compute bound. Inference, however, and agentic AI in particular, is memory bound and latency sensitive. We believe this is driving a re-architecture of the data center across compute, memory, interconnect, and software. We also see AI driving memory demand, not only for the high bandwidth memory, or HBM, used with GPUs and other accelerators, but also for general purpose memory. General purpose compute wraps around every GPU build-out, whether it's reinforcement learning pipelines or inference serving. That workload runs on processors backed by significant memory content across the entire system.

While memory markets are cyclical, we believe AI is adding a more durable layer of demand for memory. As AI factories scale, I expect customers to increasingly prioritize partners that deliver with speed and precision, along with full stack AI factory platform capabilities, including compute, scalable memory systems, cluster management software, end-to-end services, and a partner ecosystem to deliver a differentiated solution. Time to deployment is now directly tied to time to first token. Against this backdrop, we are building Penguin into an AI factory platform company. Our AI factory platform is built around six core elements. First, ICE ClusterWare, our AI infrastructure management software. Second, our new Penguin Memory AI line of systems, designed specifically for AI inference workloads. Third, Penguin Advanced Computing systems optimized for AI workloads. Fourth, Penguin OriginAI factory architectures, our reference designs for AI factories.

Fifth and sixth, end-to-end services and our partner ecosystem. Production-grade AI factories require full stack design across compute, memory, storage, networking, and software. We partner with leading AI companies, including NVIDIA and SK Telecom, and with partners like Dell. We also offer complete end-to-end services spanning design, build, deploy, and managed services. We are strategically positioned at the intersection of AI infrastructure and memory with a long track record in both. Few, if any, companies combine these capabilities at scale. We believe that together, our AI infrastructure and memory expertise position us to meet the evolving requirements of AI infrastructure as it shifts towards inference workloads. This supports our ability to develop differentiated solutions. Given the momentum we are seeing in our AI infrastructure business and the significant market opportunity ahead of us, we are very focused in this area.

We plan to invest more in our AI factory platform to accelerate our AI business growth, specifically in product innovation, go-to-market, and customer engagement. In March at the NVIDIA GTC conference, we announced two AI inference-centric solutions aligned with this strategy. First, the Penguin Memory AI server. Building upon our Compute Express Link, or CXL-based, memory expansion capabilities, we introduced a new line of scalable memory systems called Memory AI. CXL is a high-speed interconnect that enables scalable shared memory across GPUs and CPUs. We also announced the immediate availability of our new Memory AI KV cache server. Here, the KV, or key-value, cache stores inference context to accelerate large language model responses. Second, the expansion of our OriginAI factory architecture portfolio, which now includes blueprints that address the larger workloads and the low latency demands of AI inference.

We also continue to expand the capabilities of ICE ClusterWare toward a unified control plane for AI factory infrastructure, integrating the open ecosystem to deliver repeatable production scale deployments. To accelerate innovation and strengthen our leadership team, we recently appointed Ian Colle as Senior Vice President and Chief Product Officer. Ian brings more than two decades of experience building AI infrastructure platforms and scaling high-performance computing, most recently at Amazon Web Services. He was recently named by HPCwire to its People to Watch 2026 list, reflecting his reputation in the industry. Now, let me briefly address our second quarter performance. In Q2, we delivered net sales of $343 million. Non-GAAP gross margin was 31.2%. Non-GAAP diluted earnings per share were $0.52. These results reflect strong demand and execution in memory and continued progress in our AI HPC business.

Before turning to the segments, I would like to address our updated outlook. As Nate will describe in further detail, following our solid Q2 net sales and EPS performance, we are raising the midpoint of our full year net sales and EPS outlook. We are raising our outlook for our Integrated Memory business, fueled by AI-driven demand, strong execution by our team, and favorable pricing dynamics. While our second half Advanced Computing net sales outlook is lower than our prior expectations, we are encouraged by strong year-over-year Q2 bookings growth for our non-hyperscaler AI HPC business, which included five new AI HPC customer wins, bringing our first half total this year to seven new AI HPC logos, compared to three in the first half of last year. With that context, let me take a closer look at each of the segments.

Starting with Advanced Computing, net sales for the quarter were $116 million, representing 34% of total company net sales and declined year-over-year. Advanced Computing net sales for the second quarter reflect both the timing of large deployments and our transition away from hyperscaler concentration. They also reflect the previously disclosed wind down of our Penguin Edge business. We believe diversification of net sales and wind down of Penguin Edge will strengthen the long-term quality of the business. As I mentioned, we are transitioning our AI infrastructure business from hyperscaler concentration toward a more diversified customer base across enterprise, Neocloud, and sovereign AI. This transition is showing very encouraging progress, but we still have more work to do.

Non-hyperscale AI HPC net sales grew 50% year-over-year for the first half of the year, representing over 40% of first half segment net sales, supported by strong non-hyperscale year-over-year booking growth in the quarter, including five new AI HPC logos across financial services, biomedical research, and energy. We expect further diversification in the second half of the fiscal year. Our AI HPC pipeline continues to strengthen, with opportunities to acquire additional logos in the second half of the fiscal year across enterprise, Neocloud, and sovereign AI customers. As previously discussed, these engagements typically progress over many months from prospecting to design to award, followed by contracting and ultimately system build and deployment. While this sales cycle can be long, often 12-18 months, and can introduce quarterly net sales variability, it also supports deeper customer relationships, repeat business, and more durable long-term growth.

I’m encouraged by the trajectory of the business and the signals we are seeing in the market. Beyond the numbers, we are also seeing increased activity in specific enterprise verticals. For example, we recently announced our collaboration with Deepgram and Dell to support enterprise voice AI deployments. This win highlights the growing demand for low latency, production scale inference infrastructure in real-time applications. In this engagement, Penguin designed and deployed an optimized inference environment built on Dell PowerEdge servers and NVIDIA RTX PRO 6000 Blackwell GPUs. This solution facilitates Deepgram’s speech-to-text, text-to-speech, and voice agent functionalities for applications within healthcare and retail sectors. This case study also demonstrates how design and integration expertise delivers differentiated value. As inference workloads scale, we expect these types of deployments to become an increasingly important driver of AI infrastructure demand. Georgia Tech’s AI makerspace, developed in partnership with NVIDIA, is a strong example.

Our relationship with Georgia Tech continues to grow and validates Penguin’s ability to help organizations move efficiently from concept to production-grade AI infrastructure. Now turning to Integrated Memory. Net sales for the quarter were $172 million, representing 50% of total company net sales, and grew 63% year-over-year. AI-driven demand remains strong across networking, telecommunications, and computing market segments. Pricing dynamics were favorable, and although supply remained tight, we continued to manage constraints effectively through our supplier relationships and disciplined procurement. Stepping back, our AI HPC and memory segments taken together enable us to integrate compute and memory architectures in ways that meet the requirements of production AI environments. Memory architecture is becoming increasingly central to AI performance, particularly as inference workloads scale. Our early investments in CXL position us well as customers evaluate more dynamic memory architectures.

Furthermore, we are beginning to see this demand translate into customer deployments, including a recent substantial order for CXL cards from a generative AI company building solutions for inference workloads. This reinforces our strategic position at the intersection of memory and AI infrastructure to capitalize on the next phase of AI, focused on inference powering agentic AI workloads. These solutions are sold to enterprise AI infrastructure buyers, the same customers we serve in our AI HPC business. For example, we sold our CXL-powered KV cache servers to a tier one financial institution for their on-premise AI factory. In parallel, we continue to advance development of our photonic memory appliance or PMA, formerly referred to as OMA, which is designed to extend memory capacity and bandwidth for large scale AI environments.

We were an early investor in a photonic memory company, Celestial AI, reflecting our long-standing focus on memory architecture innovation and our early conviction in the importance of optical interconnects for next-generation AI systems. Celestial AI was recently acquired by Marvell in a multi-billion-dollar deal. Beyond the portion of proceeds we received from the acquisition as an investor, we are positioning ourselves for future growth in this market. As inference workloads expand, technologies like PMA can help address key memory scaling challenges in next-generation AI systems. Last but not least, LED. Net sales for the quarter were $56 million, representing 16% of total company net sales, and were down 7% year-over-year. The business continues to operate with focused leadership and dedicated operational discipline. While market conditions remain mixed, we are maintaining a disciplined approach to investment and capital allocation.

We are focused on optimizing portfolio value while concentrating resources on areas where we see the strongest long-term returns. In closing, the demand for data center AI infrastructure and memory is expanding rapidly. AI factories are becoming infrastructure that powers artificial intelligence across a range of industries. As AI shifts toward inference and agentic systems and scales across large enterprise, neocloud and sovereign AI environments, we expect demand to accelerate. At the same time, memory is becoming a defining constraint and a defining opportunity. Penguin sits at the intersection of AI infrastructure and memory innovation, and we believe that is a powerful position to be in. Our focus is clear. We are prioritizing four areas. First, to invest in product innovation across our AI factory platform, particularly at the intersection of AI infrastructure and memory to drive profitable growth. Second, to execute with speed and precision.

Third, to deepen customer engagement and our ecosystem to support long-term growth. Fourth, to continue diversifying our customer base while building toward more consistent and predictable growth. We believe this focus positions us well to execute in a rapidly evolving market while continuing to build a durable and scalable business. With that, I’ll turn it over to Nate.

Nate Olmstead, Chief Financial Officer, Penguin Solutions: Thanks, Kash. I will focus my remarks on our non-GAAP results, which are reconciled to GAAP in our earnings release tables and in the investor materials available on our website. With that, let me now turn to our second quarter results. In the quarter, total Penguin Solutions net sales were $343 million, down 6% year-over-year. Non-GAAP gross margin came in at 31.2%, which was up 0.4 percentage points versus Q2 last year. Non-GAAP operating margin was 13.2%, down 0.2 percentage points versus last year. Non-GAAP diluted earnings per share were $0.52, flat year-over-year. In the second quarter of fiscal 2026, our overall services net sales totaled $64 million, up 1% versus the prior year.

Product net sales were $279 million in the quarter, down 8% versus the prior year. Net sales by business segment were as follows: In Advanced Computing, Q2 net sales were $116 million, which was 34% of total company net sales and down 42% year-over-year. This sales decline reflects both the ongoing wind down of our Penguin Edge business and hyperscale hardware sales in Q2 last year, which did not recur in Q2 this year. Drilling down deeper into our Advanced Computing results, our non-hyperscale AI HPC net sales were down 35% year-over-year in the quarter, but up 50% for the first half of the year.

Given the project nature of the business, where sales can be lumpy from one quarter to the next, we believe looking at the multi-quarter trend is a helpful way to evaluate the growth in this portion of our business. In addition to solid first half growth in our non-hyperscale AI HPC business, we continue to make good progress on diversifying our net sales to new customer segments. For the first half of the year, the non-hyperscale AI HPC business represented more than 40% of total Advanced Computing net sales versus approximately 20% in the first half of last year. We expect to see our mix of net sales from enterprises, Neoclouds, and sovereign AI customers increase further in the second half of this fiscal year.

In Integrated Memory, Q2 net sales were $172 million, which was 50% of total company net sales and up strongly with 63% growth year-over-year. In Optimized LED, Q2 net sales were $56 million, which was 16% of total company net sales and down 7% versus the same quarter last year. Non-GAAP gross margin for Penguin Solutions in the second quarter was 31.2%, up 0.4 percentage points year-over-year, and up 1.2 percentage points sequentially with strong margin performance in each business driven primarily by product mix in Advanced Computing, favorable pricing in memory, and tariff recovery in LED.

We currently project lower gross margins in the second half, driven by a higher mix of lower margin AI hardware and memory sales, rising memory costs in our AI factory solutions, and less tariff cost recovery in LED. Non-GAAP operating expenses for the second quarter were $62 million, down 3% year-over-year and relatively flat sequentially. We expect a modest sequential increase in operating expenses in the second half, reflecting normal seasonality and increased investments in R&D, including for our ClusterWare software and Memory AI solutions. Q2 non-GAAP operating income was $45 million, down 8% year-over-year and up 9% versus last quarter. Operating margins were down 0.2 percentage points versus the prior year, but up 1.1 points sequentially, driven by higher sequential gross margins in both Memory and Advanced Computing.

Non-GAAP diluted earnings per share for the second quarter were $0.52, flat versus Q2 last year and up 7% versus the prior quarter. Adjusted EBITDA for the second quarter was $50 million, down 6% year-over-year and up 11% versus the prior quarter. Turning to the balance sheet. For working capital, our net accounts receivable totaled $371 million compared to $330 million a year ago, with the increase driven by higher memory sales volumes and variations in sales linearity across the quarters. Day sales outstanding were healthy at 50 days, consistent with the prior year and down one day versus last quarter.

Inventory totaled $322 million at the end of the second quarter, up from $200 million a year ago, reflecting increased memory costs, growth in our memory business, and strategic purchases to fulfill memory and AI demand in the second half of the year. Days of inventory was 51 days, up from 37 days a year ago and 38 days last quarter, primarily due to our strategic memory purchases and the timing of receipts and shipments. Accounts payable were $401 million at the end of the quarter, up from $238 million a year ago, due primarily to higher memory costs, growth in our memory business, and the timing of purchases and payments. Days payable outstanding was 63 days compared to 44 days last year and 55 days last quarter.

The year-over-year and quarter-over-quarter movements were due to the timing of purchases and payments. Our cash conversion cycle was 38 days, an improvement of 5 days compared to Q2 last year and up 3 days versus last quarter due to the timing of purchases and payments. Consistent with past practice, days sales outstanding, days payables outstanding, and inventory days are calculated on a gross sales and a gross cost of goods sold basis, which were $672 million and $578 million respectively in the second quarter. As a reminder, the difference between gross and net sales is primarily related to our memory business’ logistics services, which are accounted for on an agent basis, meaning that we only recognize the net profit on logistics services as net sales.

Cash, cash equivalents, and short-term investments totaled $489 million at the end of the second quarter, down $158 million versus Q2 last year and up $28 million sequentially. The year-over-year fluctuation was primarily due to proceeds from the issuance of preferred shares in Q2 of last year, offset by debt repayments for our term loan in Q4 of last year. Sequentially, the cash increase was due to cash generated from operating activities, as well as approximately $32 million received from proceeds from the disposition of our investment in Celestial AI in connection with its sale to Marvell Technology. These sources of cash were partially offset by our share repurchase activity in the quarter. We ended the quarter with $450 million of debt, down $20 million versus last quarter due to the retirement of our 2026 convertible notes.

In total, we closed the quarter in a net cash position, and based on our current debt maturity schedule, have no further scheduled debt payments due until 2029. Second quarter cash flows provided by operating activities totaled $55 million, compared to $73 million provided by operating activities in the prior year quarter. The decrease in cash flow in the quarter versus last year was due primarily to investments in net working capital to support growth for the second half of this fiscal year. For those of you tracking capital expenditures and depreciation, capital expenditures were $2 million in the second quarter, and depreciation was $5 million for the quarter. Wrapping up our cash flow activities, we spent $32 million to repurchase approximately 1.7 million shares in the second quarter under our stock repurchase program.

As of February 27, 2026, an aggregate of $64.5 million remained available for the repurchase of our common stock under the current authorizations. Now turning to our outlook. Given our solid half one performance and an improved half two outlook for our memory business, we are raising our full company net sales and non-GAAP diluted EPS outlook for the year, which at the midpoint now calls for 12% net sales growth and $2.15 of non-GAAP diluted EPS, up from our previous outlook of 6% net sales growth and $2 of non-GAAP diluted EPS. As a reminder, our full year outlook assumes that we will continue to diversify our customer sales mix and does not include any Advanced Computing AI hardware sales to hyperscale customers.

Consistent with our assumptions from last quarter, our FY 2026 financial outlook reflects the ongoing wind down of our high margin Penguin Edge business. We expect sales from this business to essentially cease by the end of fiscal 2026. The combined effect of these two assumptions in our FY 2026 outlook remains approximately a 14 percentage point unfavorable year-over-year impact to our total company net sales growth and approximately a 30 percentage point unfavorable impact to Advanced Computing. With that said, our full year net sales outlook reflects the following full year growth ranges by segment. For Advanced Computing, we now expect full year net sales to change between -25% and -15% year-over-year.

While our Advanced Computing net sales outlook for this fiscal year is lower than our previous forecast, we are encouraged by our AI HPC bookings, including several new logos and pipeline growth. As it has previously, this outlook reflects the Penguin Edge and hyperscale hardware sales impacts mentioned earlier. For memory, we now expect net sales to grow between 65% and 75% year-over-year, driven by strong demand and a favorable pricing environment. For LED, we continue to expect net sales to decline between -15% and -5% year-over-year. Our non-GAAP gross margin outlook for the full year is now 28% ±0.5 percentage points.

We adjusted our gross margin outlook down by 1 percentage point to account for a higher mix of memory sales, which have a lower gross margin than our company average, and higher memory costs in our AI hardware business. Our full year expectation for total non-GAAP operating expenses remains $250 million, and we have narrowed that range to ±$5 million. For FY 2026, we now expect a non-GAAP diluted share count of approximately 53 million shares, down from our prior outlook, primarily reflecting the impact of our recent share repurchases. Our non-GAAP full year diluted earnings per share is now expected to be approximately $2.15 ±$0.15. Our forecasted FY 2026 non-GAAP tax rate remains at 22%.

While we expect to use this normalized non-GAAP tax rate throughout FY 2026 and beyond, the long-term non-GAAP tax rate may be subject to changes for a variety of reasons, including the rapidly evolving global and U.S. tax environment, significant changes in our geographic earnings mix, or changes to our strategy or business operations. Our outlook for fiscal year 2026 is based on the current environment, which contemplates, among other things, the global macroeconomic environment and ongoing supply chain constraints, especially as they relate to our Advanced Computing and Integrated Memory businesses. This includes extended lead times for certain components that are incorporated into our overall solutions, impacting how quickly we can ramp existing and new customer projects and fulfill customer orders.

Our outlook also contemplates the industry-wide higher costs for memory, which may slow customer demand for our products and solutions and may lower our gross margins in our Advanced Computing and memory businesses. Overall, we believe our focused execution, disciplined expense management, and balance sheet strength provide a strong foundation for sustained profitable growth. We expect these qualities to support our continued progress as we pursue opportunities to enhance long-term shareholder value. Please refer to the non-GAAP financial information section and the reconciliation of GAAP to non-GAAP measures tables in our earnings release and the investor materials on our website for further details. With that, operator, we are ready for Q&A.

Operator: We will now begin the question and answer session. Please limit yourself to one question and one follow-up. If you would like to ask a question, please press star one to raise your hand. To withdraw your question, press star one again. We ask that you pick up your headset when asking a question to allow for optimum sound quality. If you are muted locally, please remember to unmute your device. Please stand by while we compile the Q&A roster. Your first question comes from the line of Michael Ng from Goldman Sachs. Your line is open. Please go ahead.

Michael Ng, Analyst, Goldman Sachs: I wanted to ask about the raised memory segment outlook for 65%-75% growth. How much of this is from increased favorable pricing versus demand for new product categories? As a follow-up, how should we think about the impacts to the operating margin outlook for this segment and the investments that need to be made into new technologies like CXL and photonic memory appliances? Thank you very much.

Nate Olmstead, Chief Financial Officer, Penguin Solutions: Hey, Mike, it’s Nate. On the memory outlook, listen, we’re really pleased with the demand that we’re seeing as well as the favorability that we see in the pricing environment. I would say, for the increase that we’re seeing in the second half, that’s majority pricing, but demand is also very strong across telco and networking. AI-driven demand is just very strong. In fact, you know, getting to the high end of that outlook really just comes down to our ability to secure materials, which is really the only inhibitor we see right now to raising that outlook here in the second half. We’re chasing materials. We’re using the balance sheet to strategically purchase ahead where we can. The demand is very strong in memory.

In terms of the investments, you know, we’ve reflected it in the outlook, so I’ve kept the OpEx for the year at $250 million ± $5 million. We’re balancing the portfolio, as we always do, to look for opportunities to accelerate our investments in innovation in AI or in the memory solutions that we’ve been talking about. That’s all included in the outlook. I expect the operating margins for memory to remain pretty healthy in the back half of the year. I do expect some pressure on gross margins in AI as we see a higher mix of new hardware shipments in the second half, as well as factoring in some of the higher memory input costs that we have in that business.

Michael Ng, Analyst, Goldman Sachs: Thank you very much, Nate.

Nate Olmstead, Chief Financial Officer, Penguin Solutions: You bet.

Operator: My apologies, we’re experiencing some mild technical difficulties. Your next question comes from the line of Brian Chin from Stifel. Your line is open. Please go ahead.

Brian Chin, Analyst, Stifel: Hi. Great. Thank you for letting us ask a few questions, and good afternoon. Maybe first question, I guess in Advanced Computing, what changed that caused you to lower the midpoint of your prior guidance to the new range you’ve communicated? And can you describe how booked you are to that midpoint of that new range?

Nate Olmstead, Chief Financial Officer, Penguin Solutions: One of the main factors is the lag between our bookings and the revenue. Our revenue lags bookings by about 3-6 months, primarily driven by the timing of the deployments in some cases.

Kash Shaikh, Chief Executive Officer, Penguin Solutions: There is also the material availability and so on and so forth. Given where we are in terms of our fiscal year, we have five months remaining. Going forward, most of the bookings that we are expecting may not materialize into revenue in the second half of this fiscal year, but we believe that they will have a positive impact, obviously, going into the first half of the next fiscal year. That’s one of the reasons we are lowering the guidance for Advanced Computing: deployment timing. We are seeing strong momentum in our pipeline as well as bookings. Bookings grew very significantly in Q2 for the non-hyperscale AI HPC business, which is very strategic for us, and we are encouraged to see the progress.

We closed five new AI HPC logos in Q2, which takes the first-half total to seven new logos, compared to three new logos last year. We are very confident in our ability to execute. The main issue at this point is timing.

Brian Chin, Analyst, Stifel: Okay. Yeah, I appreciate that, Kash. Sounds like you’re pretty well booked into the lowered fiscal second half outlook, and that some of these new bookings are more kind of beyond a six-month window. Also thinking about, you know, growth in the business. Obviously, there’s that sort of headwind that you helped to clarify in terms of the reduction in hardware revenue to the new hyperscaler and the wind down of Penguin Edge. You know, that’s a 30 percentage point impact; if we kind of net that against the guidance, you know, maybe 10% growth for this year in that segment, net of that.

Moving forward, you know, as you survey the business, and you haven’t been in the role that long, and you think about what that sort of apples-to-apples growth rate was or is tracking to for this fiscal year, how are you thinking about sort of target growth rates for the Advanced Computing business moving forward?

Kash Shaikh, Chief Executive Officer, Penguin Solutions: Overall, let me give you a data point. In the first half of this fiscal year, our net sales grew about 50% year-over-year for the non-hyperscale AI HPC business, representing 40% of the overall Advanced Computing mix, which is almost 2x what we closed last fiscal year. The growth is substantial in terms of the bookings as well as the revenue that we see, and we expect that to continue as we keep closing bookings and converting the pipeline. We see a strong pipeline across all three segments I mentioned: enterprise on-prem AI deployments, significant activity with sovereign AI customers, as well as Neocloud customers.

Brian Chin, Analyst, Stifel: Okay. Thank you.

Kash Shaikh, Chief Executive Officer, Penguin Solutions: Thanks, Brian.

Operator: Your next question comes from the line of Matthew Calitri from Needham & Company. Please go ahead.

Matthew Calitri, Analyst, Needham & Company: Hey, guys. Matthew Calitri here from Needham. Thanks for taking our question. Do the new memory launches mark a shift in strategy on that front? Just curious because in the past the company has talked kinda more about the niche parts of the Integrated Memory business and noted it’s early on things like the CXL front. Now it sounds like memory is expected to be a larger driver as part of this AI factory platform. Just wondering if anything has changed there and what gives you confidence there’s durable demand here.

Kash Shaikh, Chief Executive Officer, Penguin Solutions: Yeah. It is a part of our strategy. The Memory AI appliances that we launched about a month ago, starting with GTC, are a part of us investing more in our AI factory platform strategy. There are six elements to this strategy, and Memory AI is one of the strategic elements. It is very timely if you look at how AI is transitioning from model training to inference. In workloads focused on inference, memory becomes an increased requirement because of lower latency as well as larger context sizes for inference powering agentic AI. This is very strategic for our business.

In fact, we are leading the market in this area, taking advantage of our unique position at the intersection of memory and AI infrastructure and combining our deep understanding of both with architecture. We introduced this Memory AI KV cache server as one of the products in the Memory AI line. We are working on other products, and we will continue to invest, and in fact invest more, in this area to take advantage of the market opportunity, because the timing is perfect and we have leadership in the Memory AI line of products.

To give you a proof point, with one of the new logos we acquired, a tier one financial institution, not only are we deploying AI infrastructure for them, an AI factory deployment, they also purchased our CXL-based KV cache server. That is a proof point of customers transitioning from training, bringing AI on premise into their factories, focusing on inference, and powering agentic AI. It is very strategic for us, and the timing is just right. We expect to see this demand, and we plan to continue to invest in this area.

Matthew Calitri, Analyst, Needham & Company: Awesome. That’s great to hear. Nate, with a new CEO in the seat and some moving pieces around sales cycles and supply chains, did you change the guidance philosophy at all or embed any additional conservatism? Any color on the puts and takes there would be helpful.

Nate Olmstead, Chief Financial Officer, Penguin Solutions: Yeah. Hey, Matt. No change in the philosophy. You know, Kash and I very quickly aligned, I think, on how we think about tracking the business and looking at things. In fact, I think our new CRO, who came in a couple quarters ago, has done a nice job of adding some more rigor to the planning process in our AI business and just improving the visibility there a little bit. It’s a challenging environment from a supply chain standpoint, and we’ve, of course, got a lot of experience managing supply chain in our memory business. I think that, you know, that’s an advantage for us in an environment like this.

Matthew Calitri, Analyst, Needham & Company: Awesome. Thank you guys so much.

Nate Olmstead, Chief Financial Officer, Penguin Solutions: Thank you.

Operator: Your next question comes from the line of Samik Chatterjee from JP Morgan. Please go ahead.

MP (Samik Chatterjee), Analyst, JP Morgan: Hi. Thank you for taking my question. This is MP on behalf of Samik Chatterjee. My first question is I just want to double-click on your Advanced Computing guidance. You mentioned a lag of 3-6 months for the revenue which you will book in your second half. But was there a change observed in the bookings you did in the first quarter, or any change relative to what you were expecting to do in 2Q? I have a follow-up as well.

Nate Olmstead, Chief Financial Officer, Penguin Solutions: Yeah. Hey, MP. You know, I think bookings were strong in Q2, really good growth sequentially and year over year. I do think that the deployment cycle has lengthened a little bit with some of the supply constraints. In particular on memory, things have gotten a little bit longer. But we’re really pleased with the five new logos. And, you know, I think demand is good. We’re seeing good strength in the pipeline, and it’s also diversifying nicely across, you know, the non-hyperscale segments such as enterprise and Neocloud and Sovereign. I think we feel really good about the demand. I think this is just an issue of a little bit of timing as we, you know, can convert bookings into revenue.

MP (Samik Chatterjee), Analyst, JP Morgan: Okay. My second question would be also on Advanced Computing and your AI factory related business. Like, does NVIDIA coming up with their own reference designs for factory level solutions, like, how does that play relative to you? Like, is that a tailwind for you, or is that a headwind for you? Like, can you please help us understand?

Kash Shaikh, Chief Executive Officer, Penguin Solutions: We believe this is an advantage for us. We work very, very closely with NVIDIA on some of the wins that I mentioned, for example, the tier one financial institution recently, which included our Memory AI product in that transaction. NVIDIA worked very closely with us, and we are working with NVIDIA, leveraging their reference design, combining that with our AI factory platform, and complementing NVIDIA’s NVL, as an example, to provide a full stack to our customers. Their blueprints are complementary to our AI factory platform and the components that make it up. We are actually quite excited about those blueprints and working very closely with NVIDIA to capture the opportunities, especially as NVIDIA is increasingly focused on enterprise. It aligns with our strategy and go-to-market.

MP (Samik Chatterjee), Analyst, JP Morgan: Thank you. Thank you very much.

Nate Olmstead, Chief Financial Officer, Penguin Solutions: Thanks, MP.

Operator: Your next question comes from the line of Ananda Baruah from Loop Capital. Please go ahead.

Ananda Baruah, Analyst, Loop Capital: Thanks for taking the question. Good afternoon. A couple if I could. Kash, and maybe Nate as well. Earlier remarks were that you’re seeing increased momentum across Neocloud, Sovereign, and Enterprise. You mentioned one or two of the new wins. I think, Kash, you made some specific, or at least general, inferencing remarks, including around agentic. Do you have any specific context you can give us around what your customers are telling you their thrust in inferencing is right now, and maybe the degree to which agentic is showing up there? Like, if we just wanted to get a sense of what the customer activity tone is like behaviorally, say, over the last 90-180 days.

Do you have anything there you can share with us to make it a little bit more experiential for us? I have a quick follow-up. Two things.

Kash Shaikh, Chief Executive Officer, Penguin Solutions: Sure. We believe we are early in the adoption of inference with these customers, but it is increasingly being deployed as customers move towards agentic AI. Inference provides the opportunity for powering agentic workloads. When you think about inference, I’ll give you an example of why the architecture is changing and why memory is becoming increasingly critical in inference as compared to model training. For example, let’s say you are writing a book. If you have to write a new sentence without memory as a supporting component, you will have to reread the entire book before writing the next sentence.

In inference, you know, you’re doing inference on a lot of data you already have. If you have a component where the book you have written so far is stored, then before writing the new sentence, you don’t have to reread the book. That’s kind of how it is changing for enterprises and other segments. We see customers already deploying it, and the architecture is changing. That is why we have not only the opportunity and advantage to provide them our AI infrastructure as well as services; increasingly, we are seeing demand for our Memory AI portfolio. As customers deploy AI infrastructure and increasingly run inference, they need products like these to provide that memory component for inference, so that LLM responses can be much faster than they would be otherwise.

Ananda Baruah, Analyst, Loop Capital: I got it. That’s helpful. Just one last quick follow-up from me for the time here, in case there’s anybody behind me. The CXL product, it sounds like from the earlier question that you guys are a little bit more enthusiastic about the CXL sleeve today than you were maybe 90 days ago. You have the new products out at GTC. Is that an accurate statement? Maybe it’s because of these new products, and certainly some of the NVIDIA announcements at CES as well. But are you expecting a little bit more revenue a little bit sooner, CXL-wise, than maybe you were 90 days ago? A quick second part to that: do you need photonics to work before you really get CXL amplification?

Like, do you need CPO or photonics to work before you can really amplify CXL and scale out or scale up? Thanks. That’s it for me.

Kash Shaikh, Chief Executive Officer, Penguin Solutions: Yep. Let me address your CXL question first. I think CXL adoption is timely given the transition to inference. As I mentioned, with inference, you need increased memory for faster LLM responses. What CXL, Compute Express Link, provides is the ability to share memory between GPUs and CPUs. It allows memory pooling, which is an advantage in inference workloads. While CXL has obviously been available for, I’d say, the last few quarters, it is inference adoption that is driving the adoption of CXL, including this transaction that I mentioned where we received an order.

It’s actually an enterprise generative AI company working on inference workloads, so you can imagine CXL cards make sense for them, because those workloads need increased memory, and the memory pooling capabilities provided by CXL between GPUs and CPUs are an advantage for those kinds of customers. In terms of the photonic memory appliance that we are working on in our partnership with Celestial AI, which is now obviously Marvell, that provides increased capability, because obviously when you have photonic connectivity, you have increased capacity to share memory. It takes it to the next level. However, CXL in itself is an advantage. We can take it to the next level with the photonic appliance.

There is another element, which is the KV cache that I mentioned, the Memory AI KV cache server, which essentially provides much more responsiveness for larger-context workloads, again used in inference. You can think of it as inference having various requirements related to memory and the type of workloads it runs, and some of it is latency. These components, between CXL, the CXL-based KV cache, which provides faster responses and larger memory sizes, larger context sizes, and then, taking it to the next level, photonic memory, address various use cases for inference. As inference goes mainstream, we will have the advantage of this full portfolio helping with various use cases of inference.

Ananda Baruah, Analyst, Loop Capital: That’s great context. Thanks so much.

Kash Shaikh, Chief Executive Officer, Penguin Solutions: Thanks. Thank you.

Operator: Your next question comes from the line of Kevin Cassidy from Rosenblatt. Your line is open. Please go ahead.

Kevin Cassidy, Analyst, Rosenblatt: Hi. Yes. Thanks for squeezing me in. Just on the gross margin for memory. You know, your gross margin was up in the quarter, and memory revenue was up strongly. I just want to understand what the dynamics were there.

Nate Olmstead, Chief Financial Officer, Penguin Solutions: Yeah. Sure, Kevin. You know, we saw a little favor building in memory margins. Some of that is mixed. A little bit stronger demand in flash actually, which is a little bit higher margin product for us within the portfolio. Then also some of the pricing increases, we were able to capture a little bit of margin upside on that just based on the timing of our inventory purchases relative to the timing of, you know, shipments and sales to customers.

Kevin Cassidy, Analyst, Rosenblatt: Okay. And as you look out to the second half of the year, do you see that catching up to, you know, the price increases?

Nate Olmstead, Chief Financial Officer, Penguin Solutions: I think, yeah, if price increases slow, right, if that’s the assumption that you use, then we would expect to see less margin favorability from that, because there would be less of a price variation between the timing of purchasing inventory and selling it. You know, we have been using the balance sheet to try to secure inventory where we can. It’s a tight market, so it’s not unlimited supply. Where we can, we’re using the balance sheet to try to gain a little bit of an advantage.

Kevin Cassidy, Analyst, Rosenblatt: Okay. Maybe just as we’re talking about memory, as you get to these CXL systems, would you expect that’s gonna be a higher margin than the module business?

Nate Olmstead, Chief Financial Officer, Penguin Solutions: Yeah, we do. It’s really a solution. It’s got software aspects to it, and some good differentiation on the hardware as well. I see that as a nice margin opportunity for us down the road.

Kevin Cassidy, Analyst, Rosenblatt: Okay, great. Thanks.

Nate Olmstead, Chief Financial Officer, Penguin Solutions: Thank you.

Operator: At this time, there are no further questions. I will now hand the call over to Kash Shaikh, CEO, for closing remarks.

Kash Shaikh, Chief Executive Officer, Penguin Solutions: Thank you, operator. We see AI shifting toward inference, with demand expanding beyond hyperscalers to enterprise, Neocloud, and sovereign AI customers. We are still early in this transition, but the combination of customer demand, product innovation, and booking momentum gives us confidence in the path ahead. We believe we are well positioned at the intersection of AI compute infrastructure and memory, and we are making good progress diversifying our customer base. My focus is on strong execution across product innovation, customer engagement, and diversification, disciplined capital allocation, and investment in our AI HPC business to support long-term growth. We look forward to updating you on our progress.

Operator: This concludes today’s call. Thank you for attending. You may now disconnect.