Vantar Dev Kit — Exploring

Event camera.
Neuromorphic chip.
One device.

The first edge AI module that processes event camera data natively on neuromorphic silicon. No GPU. No frame conversion. Sensor to inference at under 1mW — with Nuro as the programming layer.

Status

Early exploration — hardware partnerships in progress

Join the interest list to shape what we build.

Express Interest →

The Problem

Event cameras and neuromorphic chips are the right tools for edge AI. Nobody has connected them into a usable product.

Event cameras have no processing stack

Cameras like iniVation DAVIS or Prophesee EVK4 produce asynchronous spike streams. But every downstream system expects frames. Teams throw away the temporal structure by converting spikes back to images — and then run a GPU model on them. That defeats the entire point.

Neuromorphic chips have no sensor story

Intel Loihi 2 and SpiNNaker 2 are designed to process spikes natively. But they ship as bare chips with no sensor integration. Users wire together event cameras and neuromorphic boards manually, with custom glue code, every time.

The software layer doesn't exist

There is no SDK that takes event camera output and runs it on neuromorphic hardware end-to-end. Researchers maintain two or three separate codebases and spend weeks on integration instead of their actual work.

Why Hybrid Vision

Pure event cameras are powerful but hard to work with. A new generation of hybrid sensors changes that.

Pure event cameras

  • Microsecond temporal resolution
  • High dynamic range (120dB+)
  • Sparse output — low bandwidth
  • No spatial context in static scenes
  • Hard to integrate with existing vision pipelines

Hybrid vision sensors

  • Events + frames from the same pixel array
  • Spatially aligned — no registration needed
  • Frames provide context, events provide timing
  • SNN processes events; frames used for supervision
  • Works with existing computer vision tooling

The Dev Kit uses next-generation hybrid vision silicon with matched-resolution event and frame outputs at the pixel level.

The Full Stack

Four layers. Each one is useful standalone. Together they are the complete pipeline from photon to inference result.

01

Hybrid Vision Sensor

Frames + Events

A next-generation sensor outputs both a standard image frame and an asynchronous event stream simultaneously — pixel-aligned, from the same silicon. You get spatial context from frames and microsecond-precision temporal data from events. No information is lost.

1.3MP frames · 1.3MP events · pixel-aligned · low latency
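
To make the dual output concrete, here is a minimal Python sketch of what the two streams might look like on the host side. The `Event` and `HybridFrame` types, the field names, and the resolution comment are illustrative assumptions, not a published Vantar or Nuro interface.

```python
from dataclasses import dataclass
from typing import List

import numpy as np


@dataclass
class Event:
    """A single asynchronous event from the sensor's event channel (assumed layout)."""
    x: int             # pixel column (e.g. 0..1279 on a 1.3MP-class array)
    y: int             # pixel row
    timestamp_us: int  # microsecond timestamp
    polarity: int      # +1 brightness increase, -1 decrease


@dataclass
class HybridFrame:
    """One synchronous frame plus the events that arrived since the previous frame."""
    frame: np.ndarray        # H x W intensity image from the frame channel
    events: List[Event]      # pixel-aligned events, same coordinate system as `frame`
    frame_timestamp_us: int


def split_streams(packets: List[HybridFrame]):
    """Illustrative split: frames feed conventional CV tooling,
    events feed the SNN without any conversion to frames."""
    frames = [p.frame for p in packets]
    events = [e for p in packets for e in p.events]
    return frames, events
```
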
02

Nuro SDK

Dual-stream SNN

Nuro ingests both streams natively. Events feed directly into a spiking neural network — no conversion to frames. The frame channel provides spatial context for tasks that need it. Train the dual-stream model on GPU with surrogate gradients. Same model, same weights.

event stream → SNN · frame context · surrogate gradients · GPU training
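
A rough sketch of what dual-stream training with surrogate gradients can look like, written in plain PyTorch. This is a generic illustration under assumed shapes (events pre-binned into `[T, batch, n_in]` tensors, a fast-sigmoid surrogate, a simple leaky integrate-and-fire recurrence), not the actual Nuro SDK API.

```python
import torch
import torch.nn as nn


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth fast-sigmoid gradient in the backward pass."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        return grad_output / (1.0 + v.abs()) ** 2  # d(spike)/dv ~ 1 / (1 + |v|)^2


spike_fn = SurrogateSpike.apply


class DualStreamSNN(nn.Module):
    """Toy dual-stream model: binned events drive a leaky integrate-and-fire layer,
    while the frame channel is encoded once and injected as spatial context."""
    def __init__(self, n_in, n_hidden, n_out, beta=0.9):
        super().__init__()
        self.beta = beta                           # membrane decay per time step
        self.event_fc = nn.Linear(n_in, n_hidden)
        self.frame_fc = nn.Linear(n_in, n_hidden)
        self.readout = nn.Linear(n_hidden, n_out)

    def forward(self, event_bins, frame):
        # event_bins: [T, batch, n_in] event counts per time bin; frame: [batch, n_in]
        context = self.frame_fc(frame)
        v = torch.zeros_like(context)
        out = torch.zeros(frame.shape[0], self.readout.out_features, device=frame.device)
        for t in range(event_bins.shape[0]):
            v = self.beta * v + self.event_fc(event_bins[t]) + context
            s = spike_fn(v - 1.0)                  # spike where membrane exceeds threshold 1.0
            v = v * (1.0 - s)                      # reset membrane where a spike fired
            out = out + self.readout(s)
        return out / event_bins.shape[0]


# Trained on GPU with standard backprop thanks to the surrogate gradient;
# the same weights are then deployed for spike-native inference.
model = DualStreamSNN(n_in=64, n_hidden=128, n_out=10)
logits = model(torch.rand(20, 4, 64), torch.rand(4, 64))  # 20 time bins, batch of 4
```
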
03

Neuromorphic Processor

Intel Loihi 2 / SpiNNaker 2

The trained network runs on a neuromorphic chip co-located with the sensor. Neurons only compute when they receive a spike — no clock cycles wasted on silence. Always-on inference at under 1mW. The chip speaks the same language as the sensor: spikes.

<1mW inference · always-on · spike-native · zero idle power
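
To illustrate the claim that neurons only compute when spikes arrive, here is a toy event-driven leaky integrate-and-fire neuron in Python. It models the idea rather than any particular chip (Loihi 2 and SpiNNaker 2 are programmed through their own toolchains), and the constants are arbitrary.

```python
import math


class EventDrivenLIF:
    """Toy event-driven LIF neuron: state is updated only when an input spike
    arrives, so idle periods cost no computation at all."""
    def __init__(self, tau_us=10_000.0, threshold=1.0, weight=0.4):
        self.tau_us = tau_us           # membrane time constant in microseconds
        self.threshold = threshold
        self.weight = weight
        self.v = 0.0
        self.last_spike_time_us = 0

    def on_input_spike(self, t_us):
        # Apply the decay accumulated since the last input, then integrate the new spike.
        dt = t_us - self.last_spike_time_us
        self.v = self.v * math.exp(-dt / self.tau_us) + self.weight
        self.last_spike_time_us = t_us
        if self.v >= self.threshold:
            self.v = 0.0               # reset after firing
            return True                # emit an output spike
        return False


# Usage: feed only the timestamps of incoming events; silence between bursts costs nothing.
neuron = EventDrivenLIF()
for t in [100, 150, 180, 50_000, 50_030, 50_060]:  # microsecond timestamps
    if neuron.on_input_spike(t):
        print(f"output spike at t={t}us")
```
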
04

Vantar Cloud

Development + Monitoring

Develop and benchmark remotely via Vantar Cloud before deploying to the physical module. Monitor energy draw, spike rates, and latency from anywhere. Over-the-air model updates when you retrain.

remote compile · OTA updates · power monitoring · API access
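
As an example of the intended workflow, the hypothetical snippet below pushes a retrained model over the air and reads back telemetry. Every endpoint, field name, and URL here is a placeholder assumption; the cloud API has not been published.

```python
import requests

BASE_URL = "https://cloud.vantar.example/api/v1"  # placeholder URL, not a real endpoint
HEADERS = {"Authorization": "Bearer <api-token>"}

# Hypothetical: push a retrained model for over-the-air deployment to one module.
with open("dual_stream_snn.npz", "rb") as f:
    requests.post(f"{BASE_URL}/devices/devkit-001/models",
                  headers=HEADERS, files={"model": f}, timeout=30)

# Hypothetical: read back recent power, spike-rate, and latency telemetry.
r = requests.get(f"{BASE_URL}/devices/devkit-001/telemetry",
                 headers=HEADERS,
                 params={"metrics": "power_mw,spike_rate_hz,latency_us"},
                 timeout=30)
print(r.json())
```
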

Use Cases

Any application that needs always-on vision at the edge without a GPU.

Robotics — always-on perception

Drones and mobile robots need collision avoidance that never sleeps and never drains the battery. Event cameras detect motion at 1μs resolution. Neuromorphic inference runs on milliwatts. The Dev Kit is the perception module.

Industrial inspection

High-speed production lines move too fast for frame cameras. Event cameras capture micro-defects at microsecond resolution. Neuromorphic processing gives real-time classification without a GPU rack at every station.

AR/VR — low-latency tracking

Head and hand tracking requires sub-millisecond latency and cannot afford GPU inference on a battery-powered headset. Pairing event cameras with neuromorphic processing cuts latency by 10x and power by 100x versus frame-based tracking.

Automotive — edge ADAS

Event cameras handle high dynamic range and fast motion better than frame cameras. Running inference on neuromorphic silicon eliminates the need for power-hungry edge GPUs in the perception stack.

Why Not Just Use a GPU

GPUs are the right tool for training. At the edge, they are the wrong tool for inference.

|                                 | GPU Edge           | Dev Kit       |
|---------------------------------|--------------------|---------------|
| Idle power                      | 5–15W              | <0.1mW        |
| Inference power                 | 10–25W             | <1mW          |
| Latency (motion event → result) | ~10ms              | <1ms          |
| Boot time                       | 2–10s              | instant       |
| Input format                    | frames (converted) | native spikes |
| Always-on capable               | no (power budget)  | yes           |
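
One way to read the power rows is as battery life. The back-of-envelope calculation below uses the table's figures; the 10Wh battery is an illustrative assumption, not a spec.

```python
# Back-of-envelope runtime on a fixed battery, using the table's power figures.
battery_wh = 10.0             # assumed small robot/drone auxiliary battery
gpu_inference_w = 10.0        # low end of the 10-25W GPU inference range
devkit_inference_w = 0.001    # <1mW neuromorphic inference

print(f"GPU edge module: {battery_wh / gpu_inference_w:.1f} hours")       # ~1 hour
print(f"Vantar Dev Kit:  {battery_wh / devkit_inference_w:,.0f} hours")   # ~10,000 hours
```
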

Interest List

Help us build the right thing.

The Dev Kit is in early exploration. We're talking to robotics engineers, drone teams, and edge AI researchers to understand the exact form factor and API they need. If this is relevant to your work, join the list — we'll reach out directly.