Building on my PhD work with large-scale nonlinear photonic lattices, I'm exploring their potential as a hardware platform for optical computing — using light's natural parallelism and topology's robustness to build a different kind of processor.
Exploring whether the same lattices that taught us topology can be the substrate for a different kind of processor.
The idea: the lattice itself is the computation. Send in an encoded input; the nonlinear coupling between rings creates a rich, high-dimensional response. A small trained readout layer at the output interprets that response.
The chip does not "know" about cats or dogs. It just transforms optical signals. A small neural network learns to tell them apart from that response.
Change the input and the lattice's response changes too — deterministically, but in ways no one has to hand-design. The same readout weights discriminate both classes. The chip stays fixed. Only the tiny classifier is trained.
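The scheme above is essentially reservoir computing: a fixed, untrained nonlinear system produces a high-dimensional response, and only a small linear readout is trained. A minimal software sketch, where a hypothetical random `tanh` projection stands in for the physical lattice (the real chip's transform is set by its optics, not by a weight matrix):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the fixed lattice: a random nonlinear projection.
# W_in is hypothetical; in hardware this transform is physical and never trained.
n_in, n_reservoir = 16, 200
W_in = rng.normal(size=(n_in, n_reservoir))

def lattice_response(x):
    """Fixed, untrained nonlinear transform of the encoded input."""
    return np.tanh(x @ W_in)

# Two synthetic input classes with a hidden rule the readout must discover.
X = rng.normal(size=(400, n_in))
y = (X[:, :8].sum(axis=1) > 0).astype(float)

H = lattice_response(X)  # high-dimensional response, shape (400, 200)

# Only the tiny readout is trained: ridge regression in closed form.
lam = 1e-2
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_reservoir), H.T @ y)

pred = (H @ W_out > 0.5).astype(float)
accuracy = (pred == y).mean()
```

The same `W_out` weights serve both classes, mirroring the point in the text: the substrate stays fixed, and only the readout adapts.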
Today's AI workloads are running into the limits of electronic computation — energy, bandwidth, latency. Photons don't have those same limits. The bet is that a photonic processor designed around topology and nonlinearity could do certain kinds of calculations vastly more efficiently than transistors. The TOPAI patent (Topological Photonics Architectures for Optical Computing and AI) is one early step in that direction.
Light–matter interactions: specifically, how integrating atomic systems might overcome the inherent limitations of photonics alone and enable scalable quantum networking and computing. Photons are fast and lossless, but they don't interact with each other. Atoms can hold information and couple to single photons. Combining the two is an old idea, but a hard one to actually engineer at chip scale.