Optical Interconnects: Lightmatter’s Optical Interposers Could Start Speeding Up AI in 2025


Fiber-optic cables are creeping closer to processors in high-performance computers, replacing copper connections with glass. Technology companies hope to speed up AI and lower its energy cost by moving optical connections from outside the server onto the motherboard and then having them sidle up alongside the processor. Now tech firms are poised to go even further in the quest to multiply the processor’s potential—by slipping the connections underneath it.

That’s the approach taken by Lightmatter, which claims to lead the pack with an interposer configured to make light-speed connections, not just from processor to processor but also between parts of the processor. The technology’s proponents claim it could significantly reduce the power consumed by complex computing, an essential requirement for today’s AI technology to progress.

Lightmatter’s innovations have attracted the attention of investors, who have seen enough potential in the technology to pour US $850 million into the company, lifting it well ahead of its competitors to a multi-unicorn valuation of $4.4 billion. Now Lightmatter is poised to get its technology, called Passage, running. The company plans to have the production version of the technology installed and running in lead-customer systems by the end of 2025.

Passage, an optical interconnect system, could be a crucial step toward increasing the computation speeds of high-performance processors beyond the limits of Moore’s Law. The technology heralds a future where separate processors can pool their resources and work in synchrony on the huge computations required by artificial intelligence, according to CEO Nick Harris.

“Progress in computing from now on is going to come from linking multiple chips together,” he says.

An Optical Interposer

Fundamentally, Passage is an interposer, a slice of glass or silicon upon which smaller silicon dies, often called chiplets, are attached and interconnected within the same package. Many top server CPUs and GPUs these days are composed of multiple silicon dies on interposers. The scheme allows designers to connect dies made with different manufacturing technologies and to increase the amount of processing and memory beyond what’s possible with a single chip.

Today, the interconnects that link chiplets on interposers are strictly electrical. They are high-speed and low-energy links compared with, say, those on a motherboard. But they can’t compare with the impedance-free flow of photons through glass fibers.

Passage is cut from a 300-millimeter wafer of silicon containing a thin layer of silicon dioxide just below the surface. A multiband, external laser chip provides the light Passage uses. The interposer contains technology that can receive an electric signal from a chip’s standard I/O system, called a serializer/deserializer, or SerDes. As such, Passage is compatible with out-of-the-box silicon processor chips and requires no fundamental design changes to the chip.

Computing chiplets are stacked atop the optical interposer. Lightmatter

From the SerDes, the signal travels to a set of transceivers called microring resonators, which encode bits onto different wavelengths of laser light. Next, a multiplexer combines the wavelengths onto an optical circuit, where the data is routed by interferometers and more ring resonators.

From the optical circuit, the data can be sent off the processor through one of the eight fiber arrays that line the opposite sides of the chip package. Or the data can be routed back up into another chip in the same processor. At either destination, the process runs in reverse: the light is demultiplexed and translated back into electricity by a photodetector and a transimpedance amplifier.
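The encode, route, and decode path described above can be illustrated with a toy simulation. The sketch below models only the logical flow of wavelength-division multiplexing; the function names, wavelengths, and data are illustrative assumptions, not Lightmatter’s design.

```python
# Toy model of a wavelength-division-multiplexed (WDM) link: microring
# resonators encode each data lane onto its own wavelength, a multiplexer
# combines the lanes onto one waveguide, and the receiver demultiplexes
# and detects each lane. Everything here is illustrative.

def modulate(bits, wavelength_nm):
    """Model a microring resonator: encode bits as on/off pulses of one
    wavelength, one pulse per time slot."""
    return [(wavelength_nm, bit) for bit in bits]

def multiplex(lanes):
    """Combine the per-wavelength pulse trains onto a single waveguide:
    each time slot carries one pulse from every wavelength."""
    return list(zip(*lanes))

def demultiplex(waveguide, wavelength_nm):
    """Model a drop filter plus photodetector and amplifier: pick out one
    wavelength's pulses and recover the bits as an electrical signal."""
    return [bit for slot in waveguide
            for wl, bit in slot if wl == wavelength_nm]

# Three lanes of data ride the same waveguide on different wavelengths,
# and each is recovered intact at the receiver.
lanes = {1550: [1, 0, 1, 1], 1551: [0, 0, 1, 0], 1552: [1, 1, 0, 0]}
waveguide = multiplex([modulate(bits, wl) for wl, bits in lanes.items()])
for wl, bits in lanes.items():
    assert demultiplex(waveguide, wl) == bits
```

The point of the sketch is that many independent data lanes can share one physical channel without interfering, which is what lets a single waveguide on the interposer carry far more bandwidth than a single electrical trace.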


A direct optical connection between any pair of chiplets in a processor reduces latency and saves energy compared with the typical electrical arrangement, in which interconnects are often limited to the perimeter of a die.

That’s where Passage diverges from other entrants in the race to link processors with light. Lightmatter’s competitors, such as Ayar Labs and Avicena, produce optical I/O chiplets designed to sit in the limited space beside the processor’s main die. Harris calls this approach the “generation 2.5” of optical interconnects, a step above the interconnects situated outside the processor package on the motherboard.

Advantages of Optics

The advantages of photonic interconnects come from removing limitations inherent to electricity, which expends more energy the farther it must move data.
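A back-of-the-envelope calculation shows why electricity pays this distance penalty: a wire’s capacitance grows roughly linearly with its length, and every bit transition must charge that capacitance. The numbers in the sketch below (0.2 picofarads per millimeter, a 1-volt swing) are illustrative assumptions, not measurements of any particular chip.

```python
# Why electrical links cost more energy with distance: wire capacitance
# grows roughly linearly with length, and the energy to charge the wire
# scales as E = 1/2 * C * V^2 per transition. Parameter values are
# illustrative assumptions.

def wire_energy_per_bit_joules(length_mm, cap_per_mm_pf=0.2, volts=1.0):
    """Energy to charge a wire of the given length for one 0-to-1
    transition, under the illustrative parameters above."""
    capacitance_f = cap_per_mm_pf * 1e-12 * length_mm
    return 0.5 * capacitance_f * volts ** 2

# Doubling the wire length doubles the energy per bit:
e10 = wire_energy_per_bit_joules(10)
e20 = wire_energy_per_bit_joules(20)
assert abs(e20 / e10 - 2.0) < 1e-9
```

An optical link, by contrast, pays most of its energy cost converting between electrical and optical signals at the endpoints, so its per-bit energy is largely insensitive to how far the light travels.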

Photonic interconnect startups are built on the premise that those limitations must fall in order for future systems to meet the coming computational demands of artificial intelligence. Many processors across a data center will need to work on a task simultaneously, Harris says. But moving data between them over several meters with electricity would be “physically impossible,” he adds, and also mind-bogglingly expensive.

“The power requirements are getting too high for what data centers were built for,” Harris continues. Passage can enable a data center to use between one-sixth and one-twentieth as much energy, with efficiency increasing as the size of the data center grows, he claims. However, the energy savings that photonic interconnects make possible won’t lead to data centers using less power overall, he says. Instead of scaling back energy use, they’re more likely to consume the same amount of power, only on more-demanding tasks.

AI Drives Optical Interconnects

Lightmatter’s coffers grew in October with a $400 million Series D fundraising round. The investment in optimized processor networking is part of a trend that has become “inevitable,” says James Sanders, an analyst at TechInsights.

In 2023, 10 percent of servers shipped were accelerated, meaning they contained CPUs paired with GPUs or other AI-accelerating ICs, the same chips that Passage is designed to connect. By 2029, TechInsights projects, a third of servers shipped will be accelerated. The money being poured into photonic interconnects is a bet that they are the accelerant needed to profit from AI.
