Building the next generation of ultra-efficient compute for AI.
Modern AI is built on two pillars: the transformer architecture and high-performance CMOS chips. While AI models have grown exponentially in capability, the hardware running them—standard CPUs and GPUs—is approaching the physical limits of fabrication.
This pilot program will catalyze “next-wave” AI hardware that is fundamentally different from conventional digital accelerators. Our focus is on analog and physics-based computing approaches that can execute core neural network operations with extreme parallelism, low energy and very low latency. Examples include resistive crossbars, photonic and optical processors, and other non-CMOS architectures. The central opportunity is that modern deep neural networks can tolerate low numerical precision and significant noise. This creates an opening for hardware that trades perfect determinism for large gains in speed and efficiency.
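The tolerance to low precision mentioned above can be seen even in a toy matrix-vector product, the core operation analog accelerators target. The sketch below is purely illustrative (the 4-bit uniform quantizer and the sizes are assumptions, not part of the program): quantizing weights to 4 bits perturbs the output only modestly.

```python
import numpy as np

rng = np.random.default_rng(1)

# A random "layer": 16x16 weight matrix and a 16-dim input.
W = rng.normal(size=(16, 16))
x = rng.normal(size=16)

def quantize(a, bits=4):
    """Uniform symmetric quantization to the given bit width (illustrative)."""
    scale = np.max(np.abs(a)) / (2 ** (bits - 1) - 1)
    return np.round(a / scale) * scale

y_full = W @ x                      # full-precision matrix-vector product
y_q = quantize(W, bits=4) @ x       # same product with 4-bit weights

rel_err = np.linalg.norm(y_q - y_full) / np.linalg.norm(y_full)
```

Here `rel_err` stays small relative to the signal, which is the kind of slack that lets imperfect analog hardware stand in for exact digital arithmetic.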
A second core goal of the program is to co-design training methods and model architectures that are explicitly built for low-precision, noisy, drifting hardware. This includes noise-aware training, analog-friendly learning rules, and architectural adaptations that work under constraints like limited depth and restricted nonlinearity. The aim is not to compete with GPUs for rack space, but to develop credible "paths from below" where new hardware outperforms GPUs in specific applications constrained by factors like low energy, high throughput, low latency, or edge deployment.
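As one concrete instance of the co-design idea, noise-aware training can be sketched in a few lines: run the forward pass through a noisy copy of the weights, mimicking analog readout, and update the clean weights from the resulting gradient. Everything below (the toy regression task, the Gaussian noise model, the learning rate) is an assumption for illustration, not a method prescribed by the program.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression task: targets generated by a hidden linear map.
X = rng.normal(size=(256, 8))
w_true = rng.normal(size=8)
y = X @ w_true

w = np.zeros(8)          # trainable "digital" weights
lr = 0.05
noise_std = 0.05         # assumed analog read noise level (illustrative)

def loss(weights):
    """Mean squared error of the noiseless model."""
    return float(np.mean((X @ weights - y) ** 2))

initial_loss = loss(w)
for _ in range(200):
    # Forward pass through a *noisy* weight copy, as analog hardware would.
    w_noisy = w + rng.normal(scale=noise_std, size=w.shape)
    pred = X @ w_noisy
    # Gradient of the MSE at the noisy weights; applied to the clean copy.
    grad = 2.0 * X.T @ (pred - y) / len(X)
    w -= lr * grad

final_loss = loss(w)
```

Because the injected noise is zero-mean, the gradient is unbiased in expectation, so the model still converges while becoming accustomed to the perturbations it will see at inference time on noisy hardware.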
Check back at a later date for funding opportunities in 2026. Inquiries should be directed to [email protected].
Opportunities for Funding

2026 Unconventional Compute RFP (Unconventional Compute)