Nuro SDK — v0.7.0
Train on GPU.
Deploy to silicon.
Zero rewrites.
Nuro is an open-source Python SDK (Apache 2.0) for spiking neural networks. One API compiles to GPU, Intel Loihi 2, SpiNNaker 2, BrainChip Akida, or Vantar Cloud. NIR interop, ANN-to-SNN conversion, auto-quantization, synaptic delays. 168 tests passing.
5 backends · GPU, Loihi, SpiNNaker, Akida, Cloud
4 neuron models · LIF, IF, Izhikevich, AdEx
168 tests passing · full stack coverage
0 lines to change · when switching backends
Quick Start
Define once. The network definition never changes — only the compile target does.
import torch
import nuro
# Define your spiking network — this never changes
sensory = nuro.Population(size=100, dynamics="lif", params={"tau": 20e-3})
cortex = nuro.Population(size=50, dynamics="izhikevich", params={"preset": "fast_spiking"})
motor = nuro.Population(size=10, dynamics="lif", params={"tau": 10e-3})
c1 = nuro.Connection(source=sensory, target=cortex, pattern="dense", delay=1e-3)
c2 = nuro.Connection(source=cortex, target=motor, pattern="dense")
inp = nuro.Input(population=sensory, data=torch.rand(100, 100))
graph = nuro.Graph([sensory, cortex, motor], [c1, c2], inputs=[inp])
# Train on GPU with surrogate gradients
model = nuro.compile(graph, target="gpu", requires_grad=True)
output = model.run(duration=0.1)
# Deploy to neuromorphic hardware — zero code changes
model = nuro.compile(graph, target="loihi") # Intel Loihi 2
model = nuro.compile(graph, target="spinnaker2") # SpiNNaker 2
model = nuro.compile(graph, target="akida") # BrainChip Akida
# Or convert an existing PyTorch model to SNN
snn = nuro.convert_ann(pytorch_model, input_shape=(784,))
model = nuro.compile(snn, target="loihi") # auto-quantizes
Install
pip install nuro[gpu]
Python 3.10+ · PyTorch 2.0+
License
Apache 2.0
Free to use and modify
Version
v0.7.0
NIR + Akida + ANN-to-SNN + delays
What Makes It Different
Every other SNN framework solves one piece. Nuro solves the full pipeline.
One API, any backend
Define your network once using populations, connections, and inputs. Nuro's intermediate representation (IR) is the boundary — backends never touch your Python objects. Change one argument to switch hardware.
Surrogate gradients built in
Set requires_grad=True and train with backpropagation-through-time. ATan, sigmoid, and triangular surrogates included. Standard PyTorch optimizers work without modification.
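The idea behind a surrogate gradient can be sketched in a few lines of plain Python. The forward pass uses a hard threshold; the backward pass swaps in a smooth stand-in derivative. The ATan shape and `alpha` parameter below follow a common formulation from the SNN literature, not necessarily Nuro's exact kernel:

```python
import math

def spike_forward(v, threshold=1.0):
    """Forward pass: hard threshold (Heaviside). Its true derivative is
    zero almost everywhere, so backprop alone learns nothing."""
    return 1.0 if v >= threshold else 0.0

def atan_surrogate_grad(v, threshold=1.0, alpha=2.0):
    """Backward pass: smooth ATan-shaped stand-in derivative.
    Peaks at the threshold, decays for neurons far from firing."""
    x = v - threshold
    return alpha / (2.0 * (1.0 + (math.pi / 2.0 * alpha * x) ** 2))
```

During BPTT the surrogate lets gradient flow through neurons that were close to firing, which is what makes standard PyTorch optimizers usable on spiking networks.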
NIR interop
Import models from SpikingJelly, Norse, snnTorch, or any NIR-compatible framework via nuro.from_nir(). Export with nuro.to_nir(). Full ecosystem interoperability.
ANN-to-SNN conversion
Convert trained PyTorch models to spiking networks. nuro.convert_ann() walks your nn.Module, maps layers to IF populations, folds BatchNorm, and auto-quantizes for hardware.
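The reason this mapping works is that an integrate-and-fire neuron's firing rate approximates a ReLU: constant input drives a proportional spike rate, and negative input produces none. A minimal sketch of that equivalence (illustrative only, not Nuro's converter):

```python
def if_rate(input_current, threshold=1.0, steps=1000):
    """Simulate one IF neuron with constant input; return its firing rate.
    Soft reset (subtract threshold) preserves the rate code."""
    v, spikes = 0.0, 0
    for _ in range(steps):
        v += input_current
        if v >= threshold:
            spikes += 1
            v -= threshold
    return spikes / steps
```

Over enough timesteps, `if_rate(x)` converges to `max(0, x)` for inputs below threshold, which is why a ReLU layer can be replaced by an IF population with scaled weights.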
Auto-quantization
When compiling to Loihi, SpiNNaker 2, or Akida, weights are automatically quantized to match hardware precision. QAT support for training with quantization-aware fake gradients.
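A symmetric per-tensor scheme is the simplest form such a quantizer can take; the sketch below shows the core arithmetic (an illustration of the general technique, not Nuro's actual pass):

```python
def quantize_symmetric(weights, bits=8):
    """Map float weights to signed integers by scaling against max |w|,
    as a fixed-precision hardware backend might require."""
    qmax = 2 ** (bits - 1) - 1            # e.g. 127 for 8-bit
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for simulation or inspection."""
    return [x * scale for x in q]
```

Lower bit widths (Akida supports down to 1 bit) shrink `qmax` and coarsen the grid, which is where quantization-aware training becomes important.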
Synaptic delays
Connection(delay=1e-3) adds biologically realistic spike propagation delays. Ring buffer implementation on GPU, native hardware support on Loihi and SpiNNaker.
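The GPU-side ring buffer works by holding each timestep's spikes for a fixed number of slots before releasing them. A minimal standalone sketch of that mechanism (not Nuro's internal implementation):

```python
from collections import deque

class DelayLine:
    """Ring buffer for spike propagation delay.
    A 1 ms delay at dt = 0.1 ms needs a 10-slot buffer."""
    def __init__(self, delay, dt):
        steps = max(1, round(delay / dt))
        self.buf = deque([0.0] * steps, maxlen=steps)

    def push(self, spike):
        """Insert this timestep's spike; return the one from `delay` ago."""
        out = self.buf[0]          # oldest entry, about to be evicted
        self.buf.append(spike)     # maxlen deque drops the left element
        return out
```

On Loihi and SpiNNaker the same semantics come for free, since the hardware routes spikes with native per-synapse delays.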
Batch simulation
Run 32-128 networks in parallel on GPU. 10-50x throughput vs sequential simulation. Critical for training loops where single-sample simulation is the bottleneck.
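The speedup comes from the fact that one LIF timestep is the same elementwise update for every network in the batch. Written out with plain lists for clarity (on GPU this loop is a single vectorized kernel):

```python
import math

def lif_batch_step(v, inp, tau=20e-3, dt=1e-3, v_th=1.0):
    """One LIF timestep applied across a batch of membrane potentials.
    Each entry of `v` / `inp` belongs to an independent network instance."""
    decay = math.exp(-dt / tau)
    v_new, spikes = [], []
    for vi, xi in zip(v, inp):
        u = vi * decay + xi
        fired = u >= v_th
        spikes.append(1.0 if fired else 0.0)
        v_new.append(0.0 if fired else u)   # reset on spike
    return v_new, spikes
```

Because no entry depends on any other, widening the batch costs almost nothing on a GPU, while sequential simulation pays the full kernel-launch overhead per sample.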
Neuromorphic datasets
Built-in loaders for N-MNIST, DVS-CIFAR10, and DVS Gesture. Event streams converted to spike tensors ready for nuro.Input(). No preprocessing code needed.
Backends
Same IRGraph, five compile targets. Nuro wraps each hardware SDK so you never have to learn Lava, py-spinnaker2, or MetaTF directly.
target="gpu"v0.1+Development workbench. Surrogate gradients, batch training, BPTT.
target="loihi"v0.5+Nuro compiles to Lava — you never write Lava directly. On-chip STDP learning supported.
target="spinnaker2"v0.6+Full SpiNNaker 2 support. Sim works out of the box.
target="akida"v0.7+Most commercially deployed neuromorphic chip. 1-8 bit quantization.
target="cloud"v0.8No hardware required. Submit IRGraph, get results back.
NeuroCopilot — AI Coding Assistant
Describe your SNN task in plain English. NeuroCopilot generates complete, deployable Nuro code. Fine-tuned on Qwen2.5-Coder-7B. Runs locally via Ollama — no internet required.
import nuro
# Ask NeuroCopilot in plain English
code = nuro.copilot.ask(
"Build a recurrent SNN with Izhikevich neurons for pattern recognition on SpiNNaker2"
)
print(code)
# → Generates complete Nuro Python code:
# import nuro
# pop = nuro.Population(128, dynamics="Izhikevich", ...)
# conn = nuro.Connection(pop, pop, pattern="recurrent", ...)
# ...
Local (Ollama)
ollama pull vantar-ai/nuro-copilot
Runs offline · 7B model · RTX 3060+
SDK integration
nuro.copilot.ask("...")
Auto-detects Ollama · falls back to HF API
Open weights
VANTAR-AI/nuro-copilot-7b
Apache 2.0 · HuggingFace
Neuron Models
From simple baselines to biologically detailed. All supported across every backend.
LIF
Leaky Integrate-and-Fire. Standard workhorse: exponential membrane decay, threshold firing. Fast to simulate, well understood.
IF
Integrate-and-Fire. No leak term; accumulates input indefinitely. Simple baseline for benchmarks.
Izhikevich
Izhikevich (5 presets). Biologically rich. Presets: regular_spiking, intrinsically_bursting, chattering, fast_spiking, low_threshold_spiking.
AdEx
Adaptive Exponential LIF. Exponential spike initiation plus an adaptation current. Closest to biological cortical neurons.
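What "biologically rich" means for the Izhikevich model is easiest to see from its two coupled equations; the Euler sketch below uses the standard published form with the regular_spiking parameters (illustrative, not Nuro's backend kernels):

```python
def izhikevich_run(I, a=0.02, b=0.2, c=-65.0, d=8.0, t_ms=200):
    """Euler simulation of the Izhikevich model, dt = 1 ms.
    dv/dt = 0.04 v^2 + 5 v + 140 - u + I ;  du/dt = a (b v - u).
    Defaults are the regular_spiking preset; return the spike count."""
    v, u, spikes = c, b * c, 0
    for _ in range(t_ms):
        v += 0.04 * v * v + 5 * v + 140 - u + I
        u += a * (b * v - u)
        if v >= 30.0:              # spike, then reset
            spikes += 1
            v, u = c, u + d
    return spikes
```

Swapping the `a, b, c, d` parameters is all that separates the five presets, which is why one model covers regular spiking through chattering behavior.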
Get Started
Open source. No signup required.
Install Nuro and start training SNNs on GPU today. Join the waitlist for early access to hardware backends and Vantar Cloud.
Waitlist — hardware backends + Vantar Cloud