Aehr Test Systems Q2 Fiscal 2026 Earnings Call - Robust AI-Driven Bookings Signal Potential Revenue Surge in Fiscal 2027
Summary
Aehr Test Systems’ Q2 fiscal 2026 earnings revealed mixed near-term performance, with revenue softening to $9.9 million, down 27% year-over-year, hampered by lower wafer pack shipments. However, the company signaled a robust future, underpinned by substantial AI market momentum and key customer forecasts suggesting $60-$80 million in bookings in the second half of fiscal 2026, well above current revenue levels. The growth engines are wafer-level and package-part burn-in systems, critical for reliability testing of AI processors, flash memory, and silicon photonics amid the AI and data center infrastructure boom. Management emphasized wafer-level burn-in’s cost-saving edge over package-part testing and outlined capacity to manufacture up to 20 systems per month in each segment. Despite increased R&D investment weighing on margins, Aehr’s focused execution on expanding AI-related solutions, alongside diversification into gallium nitride, silicon carbide, and data storage, positions it for a strong fiscal 2027 rebound.
Key Takeaways
- Q2 revenue was $9.9 million, down 27% year-over-year, influenced by decreased wafer pack shipments but offset somewhat by stronger Sonoma system demand.
- Management reinstated fiscal 2026 guidance, expecting H2 revenues between $25 million and $30 million with bookings projected between $60 million and $80 million.
- AI processors are the primary driver of strong bookings, both in wafer-level and package-part burn-in segments, representing a major growth opportunity.
- The company’s Sonoma ultra-high-power package-part burn-in systems secured $5.5 million in recent orders including from a premier Silicon Valley test lab.
- Wafer-level burn-in technology is gaining traction for high-current AI processors, offering yield and cost advantages by testing at the wafer stage rather than packaged components.
- Benchmark testing for wafer-level burn-in with top AI processor customers is progressing, with added evaluations from two other AI processor firms underway.
- Strategic partnership expansion with ISE Labs and ASE aims to accelerate production ramp of wafer-level burn-in for high-performance computing and AI.
- Gallium nitride and silicon photonics remain significant markets with some shipment delays addressed; silicon carbide demand remains conservative with potential growth deferred to fiscal 2027.
- Gross margin declined to 29.8% due to lower volumes and product mix shifts; operating expenses decreased slightly but R&D investment increased to support AI and memory initiatives.
- Manufacturing capacity is scalable to at least 20 burn-in systems per month per segment, positioning Aehr to meet potential surge in demand during 2026 and beyond.
Full Transcript
Chris Siu, Chief Financial Officer, Aehr Test Systems: Greetings. Welcome to the Aehr Test Systems Fiscal 2026 Second Quarter Financial Results Conference Call. At this time, all participants are in listen-only mode. A question-and-answer session will follow the formal presentation. If anyone should require operator assistance during the conference, please press star zero on your telephone keypad. Please note this conference is being recorded. I will now turn the conference over to your host, Jim Byers of MKR Investor Relations. You may begin.
Jim Byers, Investor Relations Representative, MKR Investor Relations: Thank you, Operator. Good afternoon and welcome to Aehr Test Systems Second Quarter Fiscal 2026 Financial Results Conference Call. With me on today’s call are Aehr Test Systems President and Chief Executive Officer Gayn Erickson and Chief Financial Officer Chris Siu. Before I turn the call over to Gayn and Chris, I’d like to cover a few quick items. This afternoon, right after market close, Aehr Test issued a press release announcing its Second Quarter Fiscal 2026 results. The release is available on the company’s website at aehr.com. This call is being broadcast live over the internet for all interested parties, and the webcast will be archived on the investor relations page of the company’s website.
I’d like to remind everyone that on today’s call, management will be making forward-looking statements that are based on current information and estimates and are subject to a number of risks and uncertainties that could cause actual results to differ materially from those in the forward-looking statements. These factors are discussed in the company’s most recent periodic and current reports filed with the SEC. These forward-looking statements, including guidance provided during today’s call, are only valid as of this date, and Aehr Test Systems undertakes no obligation to update the forward-looking statements. Now, with that, I’d like to turn the conference call over to Gayn Erickson, President and CEO.
Gayn Erickson, President and Chief Executive Officer, Aehr Test Systems: Thanks, Jim. Good afternoon, everyone, and welcome to our Second Quarter Fiscal 2026 Earnings Conference Call. I’ll begin with an update on the key markets we’re targeting for semiconductor test and burn-in, with a particular focus on the common growth driver we’re seeing across these markets, namely the massive explosion of AI and data center infrastructure. After that, Chris will walk through our financial performance for the quarter, and then we’ll open up the call for questions. While second quarter revenue was softer than anticipated, we made significant progress in both wafer-level burn-in and package-part burn-in segments and are very excited about our prospects moving forward.
Based on customer forecasts recently provided to Aehr, we believe our bookings in the second half of this fiscal year will be between $60 million and $80 million, which would set the stage for a very strong fiscal 2027 that begins on May 30th. During the quarter, we made substantial progress with wafer-level burn-in engagements in production installations across AI processors, flash memory, silicon photonics, gallium nitride, and hard disk drives. We’re encouraged to see that one of our key growth strategies, focused on reliability solutions for the exploding demand for AI and data center infrastructure, is beginning to bear fruit. In package-part burn-in, we secured key new device wins for our Sonoma system supporting high-temperature operating life qualifications for AI devices.
These wins are expected to drive additional capacity at test houses, including at least one customer that has elected to move into production in late calendar 2026, which we believe could result in meaningful volumes of Sonoma production systems. In addition, in the last month, we received a very large forecast from our lead Sonoma production customer for AI ASIC production capacity. This forecast is expected to drive very strong and potentially record bookings for the company this fiscal year and position us well for significant revenue growth next fiscal year, with their requested shipments starting in the first fiscal quarter of our next fiscal year. Taken together, our increased visibility across multiple end markets gives us great confidence in our outlook. As a result, we’re reinstating financial guidance in fiscal 2026, which we’ll touch on later in today’s call. Now, let’s talk about our key segments.
Starting with wafer-level burn-in, during the quarter we expanded engagements and completed additional production installations across several end markets. Our lead AI wafer-level burn-in customer continues development of its next-generation processor and is currently discussing additional capacity with us. They’re forecasting additional system and wafer pack capacity orders this fiscal year and plan to transition to our fully integrated automated wafer pack aligner for 300-millimeter wafers. We expect this customer to continue scaling and are excited to support their growth. We also announced a strategic expansion of our partnership with ISE Labs during the quarter to deliver advanced wafer-level test and burn-in services for next-generation high-performance computing and AI applications. This partnership accelerates time to market, improves performance, and gives customers the option of either package-part or wafer-level test and burn-in for their production volumes.
ISE, together with its parent company ASE, represents the world’s leading outsourced semiconductor assembly and test, or OSAT, platform, serving a global roster of top-tier semiconductor customers. As part of our benchmark evaluation program with a top-tier AI processor supplier we announced last quarter, we completed development of our new fine-pitch wafer packs for wafer-level burn-in of high-current AI processors. These are currently in test with this potential customer’s processors and are designed to validate our Fox XP production systems for wafer-level burn-in and functional test of their high-performance, high-power AI processors. We’re currently completing startup procedures such as power-up sequencing, thermal profiling, test vectors, timing, and high-speed differential clocks, and expect to complete data collection this quarter. While we’re demonstrating our new fine-pitch high-current wafer packs for this benchmark, many customers can utilize lower-cost wafer pack designs if certain design-for-test rules are incorporated upfront.
These approaches reduce cost and lead time and are especially attractive to customers focused on faster time to market or wafer-level high-temp operating life qualification. We also have two additional AI processor companies planning wafer-level benchmark evaluations since last quarter’s earnings call. These benchmarks typically take about six months, and we expect to make meaningful progress beginning this quarter. Both customers are evaluating wafer-level test and burn-in as an alternative to package-part or system-level test for large advanced AI modules that combine multiple AI accelerators and stacked high-bandwidth memory. Moving burn-in upstream to the wafer level significantly reduces cost and yield risk by avoiding scrapping expensive substrates and memory stacks when early failures occur later in the process. We have seen estimates that show the cost of the substrate is more than a single processor, and the cost of the high-bandwidth memory is even higher.
Turning to flash memory, we completed our wafer-level benchmark with a global leader in NAND flash just prior to the holidays. The customer has now taken the wafers back for further processing to validate correlation with their internal process. This benchmark demonstrated our ability to test flash memory wafers with significantly higher parallelism and power than is possible using traditional probers from companies such as TEL or Accretech. We’ve also proposed a next-generation solution enabling test of a new emerging flash memory device called high-bandwidth flash, or HBF, designed for AI workloads. This proposed solution leverages our Fox XP platform, wafer packs, and auto aligner technology and would support single-touchdown, high-power test on 300-millimeter wafers. While development of this system would take over a year following customer commitment, we believe this represents a compelling entry point into a large and evolving memory market.
We look forward to sharing more details as this progresses. Turning to silicon photonics, we believe that silicon photonics, used in data centers and for chip-to-chip IO, is going to be a significant market driving production burn-in capacity for our Fox wafer-level burn-in systems and wafer packs. Our lead customer has now firmed up its production ramp, which we expect to begin early next fiscal year. While this timing is later than previously expected, it aligns with recently announced AI processor platforms and positions us well for calendar 2026 orders and deliveries in fiscal 2027. We’ve also finalized a forecast with another major silicon photonics customer initially targeting data center applications with a roadmap toward optical IO. We expect to book their initial turnkey Fox system soon, with delivery planned for May of this year.
In gallium nitride power semiconductors, we continue to support our lead production customer, though we experienced delays related to unanticipated high-voltage fault conditions that required wafer packs and protection circuit redesigns. This delayed approximately $2 million in wafer pack shipments from last quarter into this quarter, along with some system enhancements. Shipments have now resumed, and lessons learned have significantly strengthened our GaN power supply burn-in capability. If anyone tells you that testing and burning in full wafers of GaN power semiconductors with up to 600 volts or more is easy, don’t listen to them.
We also continue to engage with multiple new potential GaN customers and are developing wafer packs for several new device designs that are expected to go to high-volume production for applications like data center infrastructure and power delivery, automotive electrical power distribution on both ICE and hybrid electric vehicles, and even power semiconductors used for electrical breakers. Aehr has a unique solution that can deliver full turnkey, fully automated wafer handling and probing for test and burn-in of GaN wafers in sizes from 6 to 8 inches and even 12-inch, or 300-millimeter, wafers. Turning to silicon carbide, as we’ve previously discussed, silicon carbide demand has been pushed way toward the end of this fiscal year. Customers continue to be optimistic about this market and their capacity needs, but we’ve tried to take a very conservative stance that is mostly “show us the orders before we believe them.”
Our lead customer recently transitioned from 150-millimeter to 200-millimeter wafers, nearly doubling output without adding new Fox XP systems, supported by Aehr’s proprietary wafer packs that we developed to accommodate both 150- and 200-millimeter wafers, contacting 100% of the die on each in a single touchdown. They’re now seeing additional needs for wafer packs this year, but additional capacity for systems appears to be a year out. We pushed expected orders out of our near-term forecast until next fiscal year, but have capacity in systems and wafer packs to continue to support their surge capacity needs as well as our other silicon carbide customers. While electric vehicle-related demand has slowed industry-wide, we remain well-positioned with the most competitive wafer-level burn-in solution available, and we expect to benefit when growth resumes.
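The “nearly doubling” from the 150 mm to 200 mm wafer transition follows directly from wafer area scaling with the square of the diameter; a minimal illustrative sketch (ignoring edge exclusion and die-size effects):

```python
# Usable wafer area scales roughly with the square of the wafer diameter,
# so a 150 mm -> 200 mm transition yields about (200/150)^2 ~= 1.78x the
# die per touchdown -- "nearly doubling" output on the same Fox XP systems.
area_ratio = (200 / 150) ** 2
assert 1.7 < area_ratio < 1.8  # ~1.78x
```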
In semiconductors used in data center hard disk drives, we’re installing the additional Fox XP systems for a major supplier of hard disk drives for wafer-level burn-in of their special components in their drives. They’ve indicated plans for additional purchases later this calendar year. While their device unit volumes are very large, the overall revenue opportunity remains modest due to short stress times and the massive parallelism achieved on our Fox XP system and proprietary high-power WaferPak wafer contactors. Now, let me talk about package-part burn-in. We’re seeing continued momentum in package-part qualification and production burn-in for AI processors, driving growth in our new Sonoma ultra-high-power package-part burn-in systems and consumables.
As we announced today in a separate press release, during our fiscal third quarter to date, we have received orders from multiple customers, totaling more than $5.5 million for our Sonoma ultra-high-power package-part burn-in systems, including initial orders from a premier Silicon Valley test lab for a newly introduced higher-power configured Sonoma system that can also support full automation. These orders already exceed the total Sonoma orders for the entire second quarter, highlighting the accelerating demand we’re seeing for our package-level burn-in of high-powered AI and compute devices. This quarter, we also secured key new device wins on the Sonoma platform for high-temp operating life qualification. These wins are expected to drive additional capacity at test houses, with at least one customer planning to transition to production later this calendar year, generating significant system demand.
Our lead package-part burn-in production customer for AI processors continues to ramp and is forecasting substantial growth in 2026 and beyond. Although we have not yet received the purchase order, we have received a substantial forecast from this customer for AI ASIC production capacity, with requested Sonoma production package-part burn-in system and BIM shipments beginning in the first fiscal quarter of 2027, which starts May 30th. We expect this to contribute to very strong bookings in fiscal 2026 and generate significant revenue growth in fiscal 2027. This customer also plans to introduce much higher-power ASICs later this year, for which we are already developing the high-temp operating life qualification burn-in modules and sockets to be used on the Sonoma systems at one of the premier Silicon Valley test services companies that has many of our systems installed.
This AI accelerator ASIC processor is also forecasted to go to production burn-in and drive even higher volume needs for production burn-in systems downstream at the OSATs in Asia. We feel we’re very well-positioned with our Sonoma system for this production capacity need and believe this could drive very substantial volumes of Sonoma systems in our next fiscal year. During the quarter, we completed development of a next-generation fully automated higher-power Sonoma system supporting up to 2,000 watts per device. This system enables continuous flow operation, improved throughput, and seamless transition from qualification to high-volume production using the same fixtures and sockets. These capabilities enable customers who are focused on high-temp operating life reliability testing to have a system that is fully software and hardware compatible with the Sonoma systems they have installed, which simplifies and accelerates time to market that is critical for HTOL testing of new AI processors.
This Sonoma burn-in system can also simply bolt on a fully automated handler developed and sold by Aehr Test as a turnkey solution to allow hands-free operation with less than a couple of minutes of overhead per burn-in cycle, which is amazing for production burn-in needs. We’re also seeing increased demand for our lower-power Echo and Tahoe package-part burn-in systems driven by our installed base of more than 100 systems across over 20 semiconductor companies worldwide. But I’ll wait for another call to discuss these systems and the markets they serve in more detail. As stated last quarter, the rapid advancement of generative AI and the accelerating electrification of transportation and global infrastructure represent two of the most significant macro trends impacting the semiconductor industry today.
These transformative forces are driving enormous growth in semiconductor demand while fundamentally increasing the performance, reliability, safety, and security requirements of the devices used across computing and data infrastructure, telecommunications networks, hard disk drive and solid-state storage solutions, electric vehicles, charging systems, and renewable energy generation. As these applications operate at ever higher power levels and in increasingly mission-critical environments, the need for comprehensive test and burn-in has become more essential than ever. Semiconductor manufacturers are turning to advanced wafer-level and package-level burn-in systems to screen for early life failures, validate long-term reliability, and ensure consistent performance under extreme electrical and thermal stress conditions. This growing emphasis on reliability testing reflects a fundamental shift in the industry from simply achieving functionality to guaranteeing dependable operation throughout a product’s lifetime, a requirement that continues to expand alongside the scale and complexity of next-generation semiconductor devices.
This year, we’re making significant progress expanding into additional key markets for our semiconductor test and burn-in solutions, including AI processors, gallium nitride power semiconductors, data storage devices, silicon photonics, integrated circuits, and flash memory. This diversification of our markets and customers is significant given our revenue concentration in silicon carbide for electric vehicles over the last two years. This progress on key initiatives expands our total addressable market, diversifies our customer base, and provides us with new products, capabilities, and capacity, all aimed at driving revenue growth and increasing profitability. The progress we made this quarter, with a significant number of customer engagements and production installations, provides improved visibility into future demand. As a result, we’re reinstating guidance for the second half of fiscal 2026.
For the second half of fiscal 2026, which began November 29th, 2025, and ends this May 29th of 2026, Aehr expects revenue between $25 million and $30 million. As stated earlier, although we’re not providing formal bookings guidance, based on customer forecasts recently provided to Aehr, we believe our bookings in the second half of this fiscal year will be much higher than revenue, between $60 million and $80 million, which would set the stage for a very strong fiscal 2027 that begins on May 30th of 2026. With that, let me turn it over to Chris, and then we’ll open up the lines for questions.
Chris Siu, Chief Financial Officer, Aehr Test Systems: Thank you again, and good afternoon, everyone. I’ll begin with bookings and backlog, then walk through our second quarter financial performance, cash position, outlook, and investor activity.
Your company recognized bookings of $6.2 million in the second quarter of fiscal 2026, compared to $11.4 million in the first quarter. At the end of the quarter, our backlog was $11.8 million. Importantly, during the first six weeks of the third quarter, we received an additional $6.5 million in bookings. This increase was driven primarily by an order from a premier Silicon Valley test lab for our newly introduced high-power configured Sonoma system, which we announced this afternoon. Including these recent bookings, our effective backlog has now grown to $18.3 million, providing increased visibility as we move through the remainder of fiscal 2026. Turning to our second quarter results, revenue was $9.9 million, down 27% from $13.5 million in the prior year period. The decline was primarily driven by lower shipments of wafer packs, partially offset by stronger demand for our Sonoma systems from our hyperscaler customer.
Contactor revenues, which include wafer packs for our wafer-level burn-in business and BIMs and BIPs for our package-part burn-in business, totaled $3.4 million, representing 35% of total revenue. This compares to $8.6 million, or 64% of revenue, in the second quarter last year. Non-GAAP gross margin for the second quarter was 29.8%, compared to 45.3% a year ago. The year-over-year decline reflects lower overall sales volume and a less favorable product mix, as last year’s quarter included a higher proportion of higher-margin wafer pack revenue. Non-GAAP operating expenses in the second quarter were $5.7 million, down 4% from $5.9 million in Q2 last year. The decrease was primarily due to lower personnel-related expenses, which were partially offset by higher research and development costs, including higher project spending, as we continue to invest resources in AI benchmark initiatives and memory-related programs.
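The reported figures are internally consistent; a minimal arithmetic sanity check (dollar amounts in millions, taken from the call as stated, with percentages derived and rounded):

```python
# Cross-checking the reported Q2 FY2026 figures against the derived percentages.
q2_fy26_revenue = 9.9   # $M, this quarter
q2_fy25_revenue = 13.5  # $M, prior-year quarter
yoy_decline = (q2_fy25_revenue - q2_fy26_revenue) / q2_fy25_revenue
assert round(yoy_decline * 100) == 27          # "down 27% year-over-year"

contactor_rev_fy26 = 3.4   # wafer packs + BIMs/BIPs, this quarter
contactor_rev_fy25 = 8.6   # same category a year ago
share_fy26 = contactor_rev_fy26 / q2_fy26_revenue   # ~0.343, reported as 35%
share_fy25 = contactor_rev_fy25 / q2_fy25_revenue   # ~0.637, reported as 64%
assert abs(share_fy26 - 0.35) < 0.01
assert abs(share_fy25 - 0.64) < 0.01

# Backlog roll-forward: quarter-end backlog plus early-Q3 bookings.
effective_backlog = 11.8 + 6.5
assert abs(effective_backlog - 18.3) < 1e-9     # matches the stated $18.3M
```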
As previously announced, we successfully closed the InCal facility on May 30th, 2025, and completed the consolidation of personnel and manufacturing into Aehr’s Fremont facility at the end of fiscal 2025. During the quarter, we negotiated an early lease termination with the landlord, reducing our obligation by five months of rent. As a result, we recorded a reversal of $213,000 related to a previously accrued one-time restructuring charge. During the quarter, we recorded an income tax benefit of $1.2 million, resulting in an effective tax rate of 27.3%. Non-GAAP net loss for the quarter, which excludes the impact of stock-based compensation, acquisition-related adjustments, and restructuring charges, was $1.3 million, or negative $0.04 per diluted share, compared to net income of $0.7 million, or $0.02 per diluted share in the second quarter of fiscal 2025. Turning to cash flow, we used $1.2 million in operating cash during the second quarter.
We ended the quarter with $31 million in cash, cash equivalents, and restricted cash, up from $24.7 million at the end of Q1. The increase was primarily due to proceeds from our at-the-market equity program. As a reminder, in the second quarter of fiscal 2025, we filed a new $100 million S-3 shelf registration, effective with the SEC for three years, followed by an ATM offering of up to $40 million. During the second quarter of fiscal 2026, we raised $10 million in gross proceeds through the sale of about 384,000 shares. At quarter end, $30 million remained available under the ATM. We intend to utilize the ATM selectively, with a disciplined approach focused on market conditions and shareholder value.
Looking ahead to the second half of fiscal 2026, which began on November 29th, 2025, and ends on May 29th, 2026, we expect total revenue between $25 million and $30 million, and non-GAAP net loss per diluted share between -$0.09 and -$0.05 for the six-month period. On the investor relations front, last month, on December 17th, 2025, Lake Street Capital initiated analyst research coverage on Aehr Test; along with equity research firm Freedom Broker, which initiated coverage last June, there are now a total of four research firms covering the company. Lastly, looking at the investor relations calendar, we will meet with investors at the 20th Annual Needham Growth Conference in New York on Tuesday, January 13th, and then return to New York in February for the 15th Annual Susquehanna Technology Conference on Thursday, February 26th.
We’ll also be participating virtually in the Oppenheimer Emerging Growth Conference on Tuesday, February 3rd. We hope to see you at these conferences. That concludes our prepared remarks. We’re now happy to take your questions. Operator, please go ahead.
Operator: Thank you. At this time, we will be conducting a question-and-answer session. If you would like to ask a question, please press star one on your telephone keypad. A confirmation tone will indicate your line is in the question queue. You may press star two if you would like to remove your question from the queue. For participants using speaker equipment, it may be necessary to pick up your handset before pressing the star keys. One moment, please, while we poll for questions. Once again, please press star one if you have a question or a comment. Our first question comes from Christian Schwab with Craig-Hallum. Please proceed.
Christian Schwab, Analyst, Craig-Hallum: Hey, again, thanks for all the details on the call. What wasn’t clear to me exactly is the booking strength, the potential booking strength of $60-$80 million in the second half of this fiscal year. Is that almost entirely on the AI accelerator processor line?
Gayn Erickson: There’s some silicon carbide, not much, not very much at all. There is some silicon photonics for sure, but the bulk of it is across wafer-level and package-part burn-in for AI processors, yes.
Christian Schwab: Okay. Perfect. And then, given such material bookings from the AI processor market, can you give us any indication or idea? I know we’ve talked about the opportunity in that marketplace being bigger than silicon carbide, but let’s narrow it down to kind of a multi-year time frame, including 2027 and 2028. Do you see that business, after initial orders, expanding meaningfully from there?
Gayn Erickson: We do. We do. And we’ve been taking a pretty conservative stance on how large AI is, particularly on the wafer-level side of it. And conservative may not be fair. Candidly, we’re still trying to get our arms around how big it is. What we get is visibility into a specific GPU or CPU or network processor or an ASIC. And then we hear these things from the customer, and then we look externally at what they’re telling the street and try to correlate through those lookups. And I’d say, pretty consistently, we hear bigger numbers from the customer than from the street. I’m not sure what that all means, okay? And then as they give us test time estimates of what the burn-in conditions are, we can start to put some numbers around it.
But a single processor for some of these big guys at wafer level burn-in is 20-30 systems or so. And these are $4-$5 million machines. So you get a feel for the size of what that looks like. And the estimates of today, if you were to look at AI spend in test between test and burn-in, is it $8-$10 billion, maybe $15 billion or so? I mean, it’s a really large number. So we don’t want to get ahead of ourselves here, but when customers ask you things like, "How many can you make?" Use your hands, okay? So can the AI business be measured in hundreds of millions of dollars for Aehr Test a few years out? Yes, for sure.
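To put those quoted ranges in context, a single processor program at 20-30 systems and $4-5 million per system implies roughly $80-$150 million in potential system revenue; a minimal illustrative sketch using only the figures quoted on the call (not company guidance):

```python
# Illustrative sizing of one wafer-level burn-in program from the quoted
# ranges: 20-30 Fox XP-class systems per processor at $4-5M per system.
systems_low, systems_high = 20, 30
asp_low_m, asp_high_m = 4.0, 5.0              # $ millions per system
program_low = systems_low * asp_low_m         # $80M
program_high = systems_high * asp_high_m      # $150M
assert (program_low, program_high) == (80.0, 150.0)
```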
Now, what’s interesting is that we’re in, I think, an awesome position, because the Sonoma system is a highly preferred system for HTOL, the high-temperature operating life reliability testing for these AI processors. It has the largest installed base in all the test houses around the world. We’re getting people who approach us because, I don’t want to say we’re the de facto standard, that’s probably bold, but we have more capacity than everybody else. And therefore, they’re saying, "You’re kind of the go-to guy." I like those words. And we can build lots of them. So customers are using that, and we get a front-row seat to actually bring them up.
Then we say, "Oh, by the way, if you want, you can take this machine, add production handling to it, and do production on it." In the meantime, if you come to our facility and you do a tour and you can see that production test cell for the Sonoma automation, we, of course, will walk you by a Fox wafer level burn-in test cell and mention, "Oh, by the way, that happens to be doing a benchmark on a 300-millimeter wafer. We can’t tell you who it is." And so they’re like, "Whoa, whoa, whoa. What is that?" So we’re in a position to be able to talk about both of them. And the ASPs are actually higher on the wafer level side of things, but the value proposition way outweighs that because of the yield advantage of doing it wafer level.
The yield savings dwarf the cost of testing at wafer-level burn-in. So as we get our arms around the market, the market data that’s out there would be package-part, because no one’s doing wafer level except for us. And so we’re creating our own models: okay, for that unit capacity, if you went to wafer-level burn-in, what would that look like? Kind of similar to what we had to go through on the original silicon carbide side of things. If the whole market, and we’re not saying it will, but if everybody, including NVIDIA and Google and Microsoft and Tesla and these guys, all went with us, how big is that market? We haven’t even tried to put our arms around that yet, but it’s substantial. Great.
Christian Schwab: And then I guess one last question, if I may, to follow up on your comment about capacity. How many systems do you think you’re capable of manufacturing in a year for wafer level?
Gayn Erickson: We have talked to customers about capacities exceeding 20 systems a month at either package or wafer level. If we had to, we could ship 20 systems a month of each during this calendar year. Now, that’s bigger than our forecast by a lot. But you know what? People are asking, "Could you do something like this and intercept something?" It’s like, if they gave you an order for 50 or 100 Sonomas, how long is it going to take you to build them? Makes sense?
Christian Schwab: Makes perfect sense. No other questions. Thanks, gang.
Gayn Erickson: You’re welcome.
Operator: The next question comes from Jed Dorsheimer with William Blair. Please proceed.
Gayn Erickson: Hey, Jed. Oh, Jed?
We got you on mute, Jed. Oh, okay. I was that guy. So anyways, you got me on mute. Oh, there you go. Yep, yep. All right. So thanks for taking my question. Yeah, I guess maybe just to start, on the wafer level, I think, per your prior comments around the timing of the benchmark, it seems like that's taken a little bit longer. And I'm just wondering, is that a function of, is it because it's new and what you're seeing from the customer? Is it that they're changing parameters that's extending that out? Because I think you had maybe talked about a February time frame, and we're almost there. Jed, do you want me to throw my customer under the bus? Is that what you're trying to tell me? But. No, no, no, no. All right. No, no, no, no, no, no. Let me answer that. No, I got it. I got it.
No, it's totally fair. Okay. What I do in all of these things is try and describe exactly what we feel, what we know, what we knew at the time. One of the things that's very interesting and fun about this particular customer, who is a very notable customer, okay? When they gave us - and I don't think I'm overstating it - when they gave us the vectors, the test vectors, etc., they were giving it off of a platform from package level, okay? Package and wafer are different. We had a huge arm wrestle with them related to what they could actually do at wafer level, and ultimately, we were able to demonstrate to them significant DFT, lower pin count modes, etc., to be able to do it at wafer level, which was a big deal because they'd never understood that because, of course, nobody's ever done this before with us, okay?
I'll just leave it at this. They actually gave us some things that were implied based upon package that weren't totally applicable to wafer level, and we struggled with some of that, so it actually did delay things a little bit. I think it's mutually understood. It's like, "Oh, sorry. We were thinking package; we forgot about wafer and sort," and that's a growing thing. We've seen this with other customers. The very first time you're doing wafer level burn-in, you just don't think about the challenges or the differences in what happens when you're talking about a device that shares common substrates, or in a probing environment. So is it longer? Maybe a little bit, measured in weeks or a couple of months or something.
But some of the things, from the mechanics of making physical wafer contact to the device, to using our auto aligner to handle these new fine-pitch WaferPaks, to the test plan itself and the vectors, those were all going along pretty well. So I wish it was a little bit sooner, but I think we're still very much on track to try and get them some data over the next couple of months here, or maybe even this month. So now the question, of course, parlays into, "What do they do with it? What's the timing? Do you understand what device they want to cut in?" We do. We're not going to share that with you guys. Are we going to make it? We believe we're still on track. There's lots of reasons to actually want to cut in wafer-level burn-in, and the sooner, the better.
So I’m actually, we’re really excited about this particular one. And then now we’ve got another couple of guys that are saying, "Pick me, pick me too," and are generating the information to give us so that we can actually do design reviews and walk through a wafer pack design for them as well. Got it. That’s helpful. Thanks. And I just want to address the potential of cannibalization between package and wafer level. And if I read through your comments, it seems like the AI processor is what’s moving along with this customer on the wafer level. You had mentioned briefly, actually, on the ASIC side. Are you anticipating that the ASICs basically run with package level and that AI processors are wafer level, or are you anticipating both at wafer level? Thanks. Yeah. Okay. Okay. So vocabulary for everybody that’s listening out there, right?
So when you talk about processors and AI, arguably there are at least two or three different broad flavors of them, okay? You're going to have the actual GPU if it's NVIDIA, or an ASIC when you talk about everybody else's. In reality, the GPU is kind of an ASIC at NVIDIA too; Jensen said that at one point. These are AI accelerator platforms, okay? And they can be used for large language models or for inference type things. There are also processors, CPUs like Intel's or Grace or Vera type CPUs and others making them, that are also going through a burn-in process. And then you could argue there are even network processors and things like that.
But generally, when we talk about AI processors, we're in the CPU and GPU or ASIC types that are combined together in these AI processor clusters. Things like the GB200 you hear about, which is a Grace CPU and two Blackwell AI accelerators in one package, if you will, or in one cluster. What's happening with the roadmap is that devices are going from a single AI accelerator or CPU in a package, to a package that includes embedded memory like high bandwidth memory and, over time, high bandwidth flash, and then to having more than one compute chip in it. So having two processors in it, or four, or eight; look at the Intel or the AMD roadmap. Everyone has a roadmap to two or four or more AI processors on a single substrate.
What's happening is that the qualification of those is all done today in a full package; the whole device on a big substrate. And it can take months to even get through the packaging to qual that. So there are people that would like to be able to qual the processor inside when it's still in wafer form, okay? From a production perspective, the value proposition is you're burning in these devices, and when one fails, you throw out the other compute chips and all the memory, plus the co-packaged substrate, which costs more than the silicon of the compute chip itself. So the roadmap is getting more intense. So there are people that are like, "Oh, I want to evaluate this for this device.
This would make sense." But boy, the next one makes twice as much sense, and the one after that four times as much sense, because of this evolution. So a lot of times we discuss, "Okay, is there a window? What happens if you just miss this one device?" It doesn't feel like that. It's a treadmill you can always step on. And the customers are like, "Okay, how do I cut you in?" I've said publicly that our large package part production customer, we've talked about them as an ASIC hyperscaler, is actually in Sonoma production. We're qualifying their next device that's going to go to production. We believe and hope it'll go on Sonoma as well, okay? The third one, they're giving us design files so we can make sure that Sonoma is ready for that. But they've also said, "You know what?
By then, maybe we want to consider Fox wafer-level burn-in." And the interesting thing is it’s like, "Well, what will you do with all the package systems from us? Who cares?" It’s like, "What?" Because if I could move it to wafer level, I don’t need to do it in package anymore. Now, will it cut over just like that? We’ll see. I think the world’s going to be both for a long time, and we’re in a great position to do both. But is there cannibalization? For sure. We had a customer come in who wanted to talk about what we thought was package-part burn-in. Alberto, our VP over the package-part business, and I met with them.
Fifteen minutes into the meeting, he goes, "I'd like to talk about wafer level." Alberto looked over at me, and I'm like, "Okay, new slides." So at least we've got both, and we're in a great position. Actually, I would say all three: we do high temp operating life, today only at package and over time at wafer level, and we do production burn-in at either package or wafer level. So a great front row seat. Gayn, that's helpful. I'll jump back in the queue. Thanks. Okay. Thanks, Jed. Our next question comes from Max Michaelis with Lake Street Capital. Please proceed. Hey, Max. Hey, guys. Thanks for taking my question. First one for me is just around the bookings guide.
I know you previously shared that the majority of it is around AI, but just given the distinction between the low end and the high end, if we just take the midpoint around $70 million, I mean, to get to that $80 million, is that all basically around AI, or does that suggest any improvement around silicon carbide or GaN? The least in that number is silicon carbide, okay? And then GaN's pretty close. Hard disk drive's a little bigger. Then silicon photonics is a chunk. I mean, we've got production systems in there for our lead customer. We have a new customer that wants a system. They want it shipped by May. We're suggesting to them that they really should get their order in before we ship it. Joke, joke. I'm kidding.
It's a challenge right now because they're like, "Please, please build it." We actually have a system on our floor, and if they get their PO in, if you're listening, you get to get it. If not, we'll give it to the next guy. But anyhow. And then it would be wafer level burn-in, and then I think package is the biggest. I'm sorry: wafer level burn-in AI, and then package part AI is the biggest. Okay. And yeah, that just speaks to the $60-$80 million. The $80 million just suggests greater volume orders from wafer level burn-in and package part. Okay. And then lastly, I haven't had time to run through the entire press release, but that $5.5 million order you noted in your prepared remarks, can you share some more detail on that?
Is there anything new that we should be looking for, or is it just kind of standard? You know what? It has a mix of some customers that already had Sonomas buying more that were AI related. It had some burn-in modules, which was important because it was for a new design that's expected to be a real high runner going to production. It has a big order from what we call a premier Silicon Valley test services company. We'll leave it at that. They actually bought a number of the new Sonoma configurations, which are the very high power ones that allow them to go to 2,000 watts. We have some devices that we're going to be testing this spring that are almost 2,000 watts per device, right?
Everybody's out there talking about what it takes to get to 1,000 watts. We're jumping right past that. And this is in a high volume Sonoma system, so they'll be able to test a large number of devices in that system. I should know this number; I think it's 44 devices, but, and by the way, it's either 22 or 44, sorry, folks, it's a large number of devices, given the number of resources and power supplies on that particular application. But it's the biggest part we've seen that's in development, and it's going to be going to production. So that's a big deal. So it's a combination of several different orders.
Every one of them is kind of sort of strategic to us. All right. Thanks for taking my questions. You’re welcome. Thanks, Max. The next question comes from Larry Chlebina with Chlebina Capital. Please proceed. Hey, Larry. Yes, and Gayn. Hey. We tried to line up your ramp or at least your demand for the systems that you’re working on developing for these customers on the AI processors with what’s publicly disclosed in terms of the product launch. Is there a case where they may start up on package-part, wherever they have the capacity to do that? And then when they feel comfortable, maybe if it’s after the product’s launched, would they cut over to wafer level burn-in because it’s so much more efficient and saves them money? Would they do that, or would they just do it initially on a brand new product launch at the beginning?
Do you have a sense of that? Okay. So there are two things in there. What I definitely see happening is, we know for a fact a customer was doing system level or rack test, okay? The only time they identified infant mortality or early life failures was when it was installed in the data center. Pretty nasty, okay? That's their version of test, or burn-in. So they said, "We'll run it for two weeks. If it hasn't died, we'll accept it," kind of thing. And then they'll actually plug it into the network. Pretty expensive way of doing it. Then there are companies like AEM and Advantest and Teradyne that have talked about system level test machines, which is a type of ATE machine that is designed to do a high-speed insertion and boot up the operating system.
It's a great way to get a very high degree of test coverage for a specific application. People were saying, "Oh, we're going to do burn-in with that." Well, that doesn't really fit. Those systems are designed for high speed. They're designed to be at the user mode. They're designed to run cold. They're not really designed for burn-in, and they're quite expensive and large, but the market was pulling on that because it's sure better than doing it in a rack. And there wasn't another system available in what a lot of people refer to as ovens, which is a large-scale system where you put in lots of burn-in modules or trays with lots of devices and test all at once. Those were from KYC or something, maybe 600 watts and below or something, and there really wasn't a tool out there for that.
This is where Sonoma was pulled up, because Intel was using it for the high temp operating life, but it's like, "Well, wait a minute. Can I use that in production? Can you add automation? Can you support these things? And can you quadruple, or 50x, your capacity?" So that's where Sonoma is coming in. When Sonoma enters that market, doing system level test or rack test makes no sense whatsoever. So it's highly competitive with that. Now, having said that, wafer level burn-in is even better. But a lot of people may say, "Well, I need to think through that. Where do I put that insertion?
I might need to implement some design-for-test modes to be able to implement it, at least to take advantage of the very low-cost full wafer contactors from Aehr Test and things like that." So I think it’s an evolution, but I think the conversation we have with customers is there’s like, "I need package-part burn-in. Let’s talk about that." But boy, wafer-level burn-in would be better. How do we engage on that? And then specifically on a per customer basis, I don’t want to get too carried away with our strategy, but if you have an installed base of something, package-part burn-in systems or I could go in and displace you with maybe Sonoma, but it’s probably better for me to go displace you with wafer-level burn-in because it’s not even a price thing in that sense. It’s yield or capacity.
So it depends on the customer, and we have some customers that have some devices they want to think about at wafer level, some they want to think about at package, and then eventually wafer level over time. I hope that wasn't too confusing; as I look back, that was pretty convoluted, but it's an evolution. And guess what we do? The customer's always right. You tell me what you want, and we're in. Well, with all these evaluations they have going on with wafer level burn-in, if it takes longer and the product ends up getting launched, would they still cut over some portion of the production to wafer level burn-in once it's proven out for the particular product or the predicted one? Would they do that midstream? I think it depends. It's not a slam dunk.
I mean, I think traditionally, people will start a product and do the release of that one product on one test platform or something, and then you cut in on the next one. I think that’d be fair to say, but there are certain devices we know that their intended application, there’s two or three different applications for it. So for a large language model, maybe they think about it one way, but if it’s going to be automotive, then that’s a different thing, right? So even within a product, there might be an evolution, or they get by until they can implement wafer level burn-in. That particularly comes into fact when you think about a multi-chip module, right?
As soon as you could do wafer level burn-in, if I could save you 1% yield per die on a four-die AI processor that has a $15,000 BOM, of course you would do that, right? So I'm not sure if they would. Yeah. So we're trying to be as open as we can. We know as much as we know, but there are definitely advantages to doing it at wafer level. I mean, ultimately, that's kind of the best place you could ever do it. And if you implement some DFT and you implement some of the things we do, I could build you a WaferPak in eight weeks and have you on wafer. Let's shift gears to the flash benchmark that you completed a little bit ago, right before the holidays. When do you expect the customer to get back to you?
And more importantly, when do you expect them to come with an order? I was waiting for somebody. Yeah. That's where my head's at too. My guess is, Larry, the next couple of months or so for them really to get back, depending on how they handle it; the wafer's going back to test, and it's tested at wafer level. I don't think they're going to package it up and go through some stress qualification. That might be something. But we've already had some design reviews with them on our new tester and planted the seeds. They were very impressed, is how I would describe it. The big shift here was when we first even started thinking about doing the benchmark with them, which was like a year ago, okay, if I get that right. Yeah. Over a year, a year and a half ago. Yeah. Yeah. Fair enough, right?
When we were starting to even build up to get the design files and what wafer we were going to be testing with them, it was not aimed at high bandwidth flash because that didn’t even exist, okay? They were looking at it for commodity data center SSDs. Now, with HBF, it broke their infrastructure, the power supplies, IO pins, etc., and parallelism. And now they have a power problem, which we love. Well, we’re good at power. So people that have power problems, that’s music to our ears. So yeah. Well, I recall you originally said the driver, their motivation was as the 3D NANDs got higher levels of what are they at? They’re even talking about getting to 400 levels. Layers. Layers. Yes. Layers. That required more power and exceeded the power in their existing systems so that they need your high power.
So here we are a year and a half later. And so how are they getting by to this point? And don't they need your high power capability for endpoint flash? They're having to compromise; they can't test a whole wafer in one touchdown, as an example. But what I described there, if you've been following along, was actually referred to as hybrid bonded flash. Same letters, by the way, okay? Hybrid bonded flash was a novel idea where the base substrate layer was logic done on a logic process. And then you build up just the stacked memory, and you do that in a memory process, and then you bond them together. The result is that the memory stack is a taller building with a smaller footprint, so you get more die per wafer. That's good, right? But the power was much higher.
HBF, as in high bandwidth flash, is in some ways architecturally similar, except it's more power. Because of its speed, its additional power supplies, and its taller stack, it's actually even more of a problem for them. Which I guess, if you're a tester guy, means the bigger the problem, the more you have to solve. But we had to go back and redesign the tester because we were originally aiming it at the other device. I would think they would need more capacity for the enterprise flash part of it before they ever start needing something for HBF. So for the enterprise flash, I'm wondering, when is something going to happen there? It seems like it's overdue, so. Yeah. I mean, our goal in this case, we had originally hoped to finish the benchmark at the end of last year, okay? So we're six months later.
I think, as I shared with you, if you read through all of the notes, around March, it felt like you’re pushing a rope. Something was going on. If you knew who the company was, it’d be very obvious what was going on, okay? But what really happened is they kind of shifted from enterprise focus to HBF. And so that slowed some things down in terms of even reviewing our tester. And then they came back to us in the summer and were like, "Okay, here’s the new tester we’d like." So okay, maybe that’s good. For people that you’re tapping your fingers, it’s taking a long time, but that’s part of what happened there. But at this point, again, we walked up.
They thought we were just going to take their wafer and stick it into one of our NPs with a manual setup, and we showed them a fully integrated machine. So they walked up, and we put their wafer in a FOUP, put the FOUP onto the Sierra automated WaferPak aligner, and ran the wafer. It opened up the blade, took the wafer, put the wafer in the WaferPak, put the WaferPak in the blade, closed the blade, ran the test, and gave them the results. It's pretty impressive. They said, "Well, so you're ready to go for production." So it seems like they're going to need more capacity based on everything that's going on in the memory market. Exactly. And right now, they're all flush with margins. How's that? Right? So I agree. You know what?
Larry, as people that follow us know, Larry is our greatest cheerleader, along with me, on the memory strategy for us. We are spending money, okay? It is part of, as Chris alludes to, why we could be doing better. At these revenue levels, we're not happy; we're not making money at these levels. But we would be making more money if we weren't investing; we're spending money. We've got our foot on the gas. And in fact, it's our expectation that we'll increase the R&D spend, particularly in AI wafer-level burn-in, and a little bit in package, because we've spent a lot of money on that in just this last year for package, getting this new product out. And then the memory system, which will basically be a blade in our Fox system. It should pay off, hopefully sooner rather than later.
I vote yes too. As a shareholder, I think it's good money to be spent. That's all I had. Thanks, Gayn. Thank you, Larry. Again, if there are any remaining questions, please indicate so by pressing star one on your touch-tone phone. Okay. I'm showing no further questions in the queue. I would like to turn the call back to management for closing remarks. Thank you, Operator. And thank you, everybody. We really appreciate you guys taking the time to spend an hour with us; I think we went just about exactly an hour again. And we'll keep you guys updated. Stay tuned. We're really excited about this and hope that the orders will come in soon enough to make this less dramatic as we go forward and set ourselves up for a really strong year heading into next year. So appreciate it.
If you are in town, we are in Fremont, California, Silicon Valley. Give us a call, set something up, come by, take a look at the facility. If you haven’t seen our tools, they’re very impressive, and you can get a feel of the capacity because we have a lot of systems on the manufacturing line right now. So take care and happy New Year to everyone. This concludes today’s conference, and you may disconnect your lines at this time. Thank you for your participation.