Celestial AI developed a fundamentally new architecture for AI computing in datacenters by using light to create ultrafast, energy-efficient data links between AI accelerators and memory. Their goal was not a mere incremental improvement but the removal of a fundamental performance bottleneck, the AI memory wall, that limited the entire industry. This disruptive solution was made possible through technology informed, in part, by imec’s leading-edge work in silicon photonics.
Celestial AI’s Photonic Fabric™ platform addresses one of the most pressing constraints in modern AI infrastructure: data movement. Turning this concept into a manufacturable silicon solution required not only technical expertise but an ecosystem capable of supporting a deep-tech journey over many years—perfectly setting the stage for Marvell’s acquisition of Celestial AI in February 2026.
In this case study, Marvell executives Preet Virk and Subal Sahni discuss why optical fabrics are becoming essential for scale-up AI systems, the technical challenges they had to overcome, and how collaboration with imec accelerated the journey from concept to manufacturable technology. Preet, who co-founded Celestial AI and served as its COO, became Senior Vice President and General Manager of the Photonic Fabric Business Unit at Marvell following the acquisition. Subal served as Vice President of Photonics Engineering at Celestial AI, and is now Vice President of Technology for Marvell.
What problem originally motivated you to start Celestial AI?
Preet: When we started Celestial AI, we saw that AI systems were becoming limited not by compute, but by data movement.
As models grew from hundreds of billions to trillions of parameters, the amount of data exchanged between processors and memory exploded. GPUs were increasingly stalled waiting for data rather than performing computation. At the same time, data centers were consuming enormous amounts of power just to move information around.
Electrical interconnects have fundamental constraints. You can either have higher bandwidth but limited reach, or longer reach but limited bandwidth—but not both. The use of high-performance DSPs for copper interconnects also introduces higher latency and higher power consumption. Another issue is limited beachfront: all ingress and egress I/O must pass through the edge of the silicon die. That “beachfront” real estate is finite, while demands for more memory and bandwidth keep rising.
Early conversations with key customers confirmed that our focus on data movement was exactly the right place for our team.
As AI models kept growing, the most critical bottleneck increasingly became the scale-up networking needed to build larger XPU clusters. The interconnects from XPU to XPU, from XPU to the switch, and to memory were inadequate. Studies have shown that over 60% of AI data center energy is spent on data movement, and most of that data movement is in the scale-up domain. The side effect of this inefficiency was very low XPU utilization. If we could improve efficiency there, we would directly increase XPU utilization, lower the power needed for data movement, and deliver more usable compute per watt. That insight became the foundation of Celestial AI.
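The leverage described here can be illustrated with a back-of-envelope model. The 60% figure comes from the studies cited above; the 4x efficiency gain used below is purely a hypothetical assumption for illustration:

```python
# Back-of-envelope model: if a fraction f of cluster power goes to data
# movement, cutting that energy by a factor k frees power for compute.
# The 60% fraction is from the studies cited above; the 4x gain is a
# hypothetical illustration, not a measured product figure.

def compute_power_fraction(data_movement_fraction: float, reduction_factor: float) -> float:
    """Fraction of total power left for compute after reducing
    data-movement energy by `reduction_factor`."""
    data_movement = data_movement_fraction / reduction_factor
    return 1.0 - data_movement

before = 1.0 - 0.60                      # today: only 40% of power does compute
after = compute_power_fraction(0.60, 4)  # hypothetical 4x more efficient links
print(f"compute share of power: {before:.0%} -> {after:.0%}")  # 40% -> 85%
```

Under these assumed numbers, the share of power available for computation more than doubles, which is the "more usable compute per watt" effect described above.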
What did the competitive landscape for optical technology look like at the time?
Preet: In 2020 and 2021, most optical innovation focused on scale-out networking: pluggable transceivers connecting servers to top-of-rack switches, and data center interconnects. That remains an important segment, but we shifted our focus to the biggest bottleneck in data movement: the scale-up domain.

AI clusters were rapidly shifting toward tightly coupled, multi-GPU architectures. Performance depended on how quickly processors could exchange data with one another and with memory inside a pod. That’s a very different problem from longer-reach networking.
We decided to build scale-up links that prioritize bandwidth and bandwidth density, latency, and energy per bit. Since a scale-up network is typically deployed within a closed XPU-switch-XPU ecosystem, we had more architectural freedom than traditional standards-based pluggables allow.
There was also timing involved. AI models scaled faster than most infrastructure roadmaps anticipated. Workloads that once ran on a single accelerator quickly evolved to clusters of thousands of XPUs, then tens of thousands, and now hundreds of thousands of XPUs working together. That dramatically increased the importance of efficient processor-to-processor connectivity.
What differentiated you from the competition?
Preet: Our differentiation started with an intent to solve the problem at the system level. We optimized specifically for scale-up AI fabrics: environments that are thermally intense, space-constrained, and extremely sensitive to latency and power.
That led us to focus on three core metrics:
- Bandwidth density: maximizing bits per millimeter of die edge
- Energy efficiency: minimizing picojoules per bit
- Latency: reducing synchronization overhead between processors
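The first two metrics follow directly from a link's raw parameters. As a minimal sketch with illustrative, hypothetical numbers (not actual Celestial AI or Marvell link specifications):

```python
# Figures of merit for an optical scale-up link.
# All numbers below are hypothetical examples chosen for illustration only.

def bandwidth_density_gbps_per_mm(lanes: int, gbps_per_lane: float, edge_mm: float) -> float:
    """Aggregate bandwidth per millimeter of die edge ('beachfront')."""
    return lanes * gbps_per_lane / edge_mm

def energy_per_bit_pj(link_power_mw: float, throughput_gbps: float) -> float:
    """Picojoules per transferred bit; note that mW / Gbps == pJ/bit."""
    return link_power_mw / throughput_gbps

# Hypothetical example: an 8-lane link at 112 Gbps/lane over 2 mm of die edge,
# consuming 2 W (2000 mW) in total.
density = bandwidth_density_gbps_per_mm(lanes=8, gbps_per_lane=112, edge_mm=2.0)
epb = energy_per_bit_pj(link_power_mw=2000, throughput_gbps=8 * 112)

print(f"{density:.0f} Gbps/mm")  # 448 Gbps/mm
print(f"{epb:.2f} pJ/bit")       # 2.23 pJ/bit
```

The unit identity mW/Gbps = pJ/bit is why link power budgets are so often quoted per bit: doubling lane rate at fixed power halves the energy per bit, while adding lanes at fixed die edge raises bandwidth density.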
Subal: Device selection was critical. In optical modulation, you typically consider Mach–Zehnder modulators, ring modulators, or electro-absorption modulators (EAMs).
Mach–Zehnders are thermally robust but relatively large. Ring modulators are compact but highly temperature-sensitive. In scale-up AI systems, temperatures fluctuate rapidly and significantly.
We chose EAMs because they offer the compact footprint needed for high bandwidth density while maintaining strong thermal stability. We drew on insights from imec’s research on integrated EAM technology as part of our broader technical approach. While EAMs are widely used in high-volume optical applications, integrating them into silicon photonics in a manufacturable way suitable for AI packaging environments was a key challenge.
Preet: Beyond the device choice, we co-designed the entire link, from high-speed analog to photonics to packaging. The goal wasn’t just to make optics work; it was to make optics manufacturable, testable, and deployable at volume in hyperscaler environments.
What was the biggest risk or unknown at that stage?
Preet: Market demand wasn’t the risk. By 2021, it was clear that investment in AI infrastructure was accelerating.
The real risk was execution across multiple engineering domains simultaneously: advanced CMOS for high-speed analog, silicon photonics devices, heterogeneous integration, packaging, testing, and supply chain.
Early on, we made a pragmatic decision not to chase a “perfect integration” solution. Other photonic companies attempted to integrate photonics and advanced CMOS on a single wafer. Instead, we used the most appropriate process node for each function: advanced nodes for mixed-signal electronics and more forgiving geometries for photonics. We then focused on solving the integration and packaging challenge.
Another major unknown was the volume ramp. Hyperscalers can move from initial evaluation to large-scale deployment very quickly. It’s not enough to demonstrate a lab prototype; we have since shown a credible path to high-yielding, high-volume manufacturing and supply chain capability.
What did the technology look like at the beginning?
Subal: The silicon photonics ecosystem is less mature than CMOS. You can’t always walk into a foundry and simply design against a fully standardized process library. Often, device performance, materials tuning, and process parameters need refinement, and this is precisely where our collaboration with imec was instrumental in helping us navigate that process.
A major part of our early work involved transforming photonics from a prototype-capable technology into something suitable for volume manufacturing. That required close collaboration between device engineering, process development, and packaging design.
Packaging was particularly important. The architecture had to support dense optical I/O, be testable at wafer and package levels, and scale economically. Many packaging configurations are theoretically possible; we made deliberate early choices that balanced performance, reliability, and manufacturability, and we’ve largely stayed aligned with those decisions.
How big was the team in the early days? What capabilities were represented?
Subal: We were approximately 25 people by the end of 2021 and grew strategically from there.
Preet: We deliberately kept the core team focused on differentiated capabilities that we must innovate on and own, such as mixed-signal design, silicon photonics devices, packaging architecture, and system integration.
Where we didn’t differentiate, we partnered. We outsourced certain design tasks early on, purchased non-strategic IP, and collaborated with universities for specialized testing infrastructure. That allowed us to remain capital-efficient while concentrating on areas of differentiation.
Solving the data movement problem requires excellence across multiple engineering disciplines. It’s not just a device problem or a networking problem. It spans silicon, photonics, firmware, and system architecture. Building a team that could operate across those layers was one of our most important accomplishments.
Where did the name come from?
Preet: We briefly called the company Inorganic Intelligence, but it didn’t last long. It was too hard to type and didn’t quite reflect what we were building.
When we changed it to Celestial AI, it felt more aligned with our direction. Light is the medium of our interconnect technology, and from the beginning, we were focused on infrastructure that could scale far beyond incremental improvements.
The name stayed, and so did the mission. AI progress increasingly depends on how efficiently we can move data. Advances in computing alone are no longer sufficient. The future of AI infrastructure interconnects will be defined by bandwidth, latency, energy efficiency, and the ability to manufacture and deploy at scale.
That’s the challenge we set out to solve at Celestial AI, and it’s the work we continue today at scale with the Marvell® Photonic Fabric platform.
Building the right partnerships early
As Celestial AI refined its early architecture, the executive team searched globally for the building blocks that could enable the company’s idea. Imec’s advanced silicon photonics platform quickly stood out because it offered three things:
- The high-performance devices Celestial AI needed
- The engineering rigor required to validate ambitious architecture decisions
- The credibility essential for a deep-tech company entering a competitive and emerging field
Around 2020, Celestial AI approached imec to seek access to specific photonics components, particularly those available in imec’s Process Development Kit (PDK).
With PDK access, Celestial AI began designing its own photonic ICs using imec’s validated building blocks, while imec fabricated the wafers needed to develop its early prototypes.
Imec.xpand
While the technical engagement unfolded, a parallel storyline began. Around the same period, imec.xpand was introduced to Celestial AI to evaluate whether the vision could translate into a scalable company.
Created in 2017, imec.xpand is an independent global venture capital fund that focuses on transformative semiconductor and nanotechnology innovations where imec’s contributions can positively impact the technology’s success.
Because imec.xpand had deep familiarity with the silicon photonics platform (its performance, limits, and future potential), it was uniquely positioned to evaluate Celestial AI’s claims.
A compelling picture emerged:
- a founder with a clear, validated vision,
- a strong technology foundation anchored in imec photonics,
- the formation of an early team handpicked from top technical and managerial talent, and
- early customer insights shaping the product direction.
Convinced by the technological feasibility and leadership strength, imec.xpand became Celestial AI’s original seed investor, assembling a syndicate that raised the first $5 million. They continued investing in subsequent rounds, giving the company stability, strategic guidance, and long-term conviction.
Crossing the manufacturing hurdle
One of the biggest challenges deep-tech startups face is the transition from R&D to manufacturing. Many technologies stall in this phase due to cost, complexity, and a lack of scalable pathways. Imec helped Celestial AI cross this gap early.
Through IC-Link by imec, Celestial AI gained access to:
- Seamless transitions from imec’s R&D environment to production-ready processes
- Access to commercial foundries
- Continuity of the validated PDK ecosystem
- Manufacturing-relevant support crucial for scaling a photonics-based architecture
For Celestial AI, working with imec wasn’t just about accessing world‑class technology. It was about entering a deep‑tech ecosystem capable of turning a high‑risk, high‑reward idea into a company that would eventually become an industry‑defining force.
This matters for one reason: engaging with imec early allowed Celestial AI to move faster, with fewer risks, and with a clear path toward industrialization.
IC-Link by imec
IC-Link by imec provides customized solutions for innovative chip manufacturing, leveraging the expertise and ecosystem of imec, a world-leading research and innovation center for nanoelectronics and digital technologies. IC-Link enables the scalable and reliable manufacturing of semiconductors to go from idea to product.
To bring products to life, IC-Link has a network of foundries, including TSMC, for high-volume manufacturing, and offers select services and prototyping through the imec fab. The technology offering includes:
- ASICs in CMOS, down to TSMC N2
- Photonic ICs, using imec's state-of-the-art PDK to go seamlessly to volume production via our commercial foundry partnerships
- Advanced 2.5D/3D packaging
- Custom wafer processing, including custom imagers and detectors, fine-resolution wafers and CMOS post-processing
The offering will continue to expand as new technologies are validated. To tailor to customer requirements and capabilities, IC-Link offers a range of services and business models.
Celestial AI is a strong example of how imec’s deep-tech venturing strategy—combining access to imec’s disruptive semiconductor technologies, unique infrastructure, foundry ecosystem, and funding—helps deep-tech startups scale globally.
Imec deep-tech venturing
Imec’s deep-tech venturing offering helps startups navigate the unique hurdles of bringing complex, science-driven innovations to market. Deep-tech ventures often face long development cycles, high capital requirements, and significant technical risk, particularly in areas such as semiconductors, photonics, and advanced materials.
Through its venture offering, imec provides semiconductor-driven deep-tech startups with access to its world-class R&D infrastructure, engineering expertise, and a proven framework for transforming early research into scalable technology platforms. This support helps teams validate their technology faster, reduce development risk, and reach critical technical milestones.
Beyond technology development, imec’s ecosystem also connects startups with technical experts, industry partners, and investors who understand the demands of deep-tech commercialization. imec helps founders refine strategies and prepare for fundraising, while also providing credibility that can accelerate partnerships and customer adoption. By combining technical expertise with business and investor support, imec’s deep-tech venturing gives startups the tools, network, and guidance needed to move from a breakthrough concept to a viable company.
How did imec’s technology expertise and infrastructure support Celestial AI’s early development?
Preet: Imec was both an investor and a hands-on development partner. Their team worked closely with us in translating silicon photonics research, particularly around EAM integration, into a path compatible with high-volume, manufacturable processes.
They weren’t just an R&D institute in the background; they functioned as an extension of our engineering team. As the platform matured, that foundation supported a broader, more resilient manufacturing strategy, including multi-fab engagement where appropriate.
Subal: Imec has been working on high-performance electro-absorption modulators in silicon photonics for well over a decade. That depth of experience significantly accelerated our device development and process tuning. It shortened iteration cycles and reduced technical risk at a critical stage.
Conclusion
In many ways, Celestial AI is a blueprint for the next generation of deep‑tech success stories. It shows how the right technology and the right partners can turn innovation into a company ready to scale and shape the future of AI compute.
Published on:
23 March 2026