Tags: technical, embedded, async, architecture, rust, tokio, embassy, wasm, no_std, runtime-abstraction

Portable Async Rust: Targeting Embassy, Tokio and WASM from One Codebase

March 27, 2026 · AimDB Team · 8 min read

The Hidden Cost of Distribution

Schema drift is a common problem in distributed systems. Solutions such as schema registries and serialization frameworks have become widespread, but they all feel like workarounds. Our production pipelines were packed with duplicated serialization and deserialization code, and the maintenance burden grew with every new service.

Typical distributed pipeline — every service boundary duplicates schema definitions and serialization logic

We were building a sensor mesh that collects thousands of readings per second, with services running on hosts ranging from Linux servers down to MCUs. Async Rust was an easy bet for this kind of IO-bound workload, and Rust's trait system opened further doors for us:

One codebase. Typed structs. The same API from MCU to Cloud.

What AimDB Is

The idea behind AimDB is a type-safe and platform-agnostic data pipeline, where the Rust type system is the schema and trait implementations define the behavior.

Data contract pipeline — schema, serialization and migration defined once, called from every service

By definition, a record consists of a key and a type. Because the key, not just the type, identifies a record, multiple records of the same type can be ingested into AimDB.

```rust
/// Compile-time safe record keys. Each variant maps to a unique string identifier
/// and optionally carries connector metadata like an MQTT topic.
#[derive(RecordKey, Clone, Copy, PartialEq, Eq, Debug)]
#[key_prefix = "sensor."]
pub enum SensorKey {
    #[key = "temp.indoor"]
    TempIndoor,
    #[key = "temp.outdoor"]
    TempOutdoor,
}

/// A plain Rust struct — no framework traits, no macros.
#[derive(Clone, Debug)]
pub struct Temperature {
    pub sensor_id: String,
    pub celsius: f32,
}
```

AimDB records, without additional trait implementations, can already drive a typed producer-consumer pipeline:

```rust
/// Producer: reads a sensor and pushes typed values into AimDB.
async fn sensor_producer<R: Runtime>(ctx: RuntimeContext<R>, producer: Producer<Temperature, R>) {
    loop {
        let reading = read_sensor().await;
        producer.produce(Temperature {
            sensor_id: "outdoor-001".into(),
            celsius: reading,
        }).await.ok();
        ctx.time().sleep(ctx.time().secs(1)).await;
    }
}

/// Consumer: subscribes to the buffer and reacts to every new value.
async fn temp_logger<R: Runtime>(ctx: RuntimeContext<R>, consumer: Consumer<Temperature, R>) {
    let mut reader = consumer.subscribe().unwrap();
    while let Ok(temp) = reader.recv().await {
        ctx.log().info(&format!("{}: {:.1}°C", temp.sensor_id, temp.celsius));
    }
}

// The builder ties key, type, buffer, producer and consumer together:
builder.configure::<Temperature>(SensorKey::TempOutdoor, |reg| {
    reg.buffer(BufferCfg::SpmcRing { capacity: 64 })
        .source(sensor_producer)
        .tap(temp_logger)
        .finish();
});
```

The thinking model behind AimDB is purely data-driven: the way records relate to each other is captured in a directed acyclic graph (DAG), validated at build time and introspectable at runtime. Records and their relationships form the software architecture:

```rust
/// An alert derived from a temperature reading.
#[derive(Clone, Debug)]
pub struct TemperatureAlert {
    pub sensor_id: String,
    pub celsius: f32,
    pub level: &'static str,
}

/// Stateless transform: Temperature → TemperatureAlert (only when above threshold).
fn to_alert(temp: &Temperature) -> Option<TemperatureAlert> {
    if temp.celsius > 35.0 {
        Some(TemperatureAlert {
            sensor_id: temp.sensor_id.clone(),
            celsius: temp.celsius,
            level: "high",
        })
    } else {
        None
    }
}

/// Consumer: logs every alert.
async fn alert_logger<R: Runtime>(ctx: RuntimeContext<R>, consumer: Consumer<TemperatureAlert, R>) {
    let mut reader = consumer.subscribe().unwrap();
    while let Ok(alert) = reader.recv().await {
        ctx.log().info(&format!(
            "{} alert for {}: {:.1}°C",
            alert.level, alert.sensor_id, alert.celsius
        ));
    }
}

// Wire transform and consumer into the builder:
builder.configure::<TemperatureAlert>("alert.temp.outdoor", |reg| {
    reg.buffer(BufferCfg::SingleLatest)
        .transform::<Temperature, _>(SensorKey::TempOutdoor, |t| t.map(to_alert))
        .tap(alert_logger)
        .finish();
});
```
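The build-time DAG validation mentioned above can be sketched with Kahn's algorithm. The function below is a hypothetical stand-alone version (`is_dag` is not AimDB's actual API) operating on pairs of record keys, where each edge points from a source record to a derived record:

```rust
use std::collections::{HashMap, HashSet, VecDeque};

/// Kahn's algorithm: returns true if the record graph is acyclic.
/// Each edge is (source_key, derived_key).
fn is_dag(edges: &[(&str, &str)]) -> bool {
    let mut nodes: HashSet<&str> = HashSet::new();
    let mut adj: HashMap<&str, Vec<&str>> = HashMap::new();
    let mut indeg: HashMap<&str, usize> = HashMap::new();
    for &(from, to) in edges {
        nodes.insert(from);
        nodes.insert(to);
        adj.entry(from).or_default().push(to);
        *indeg.entry(to).or_insert(0) += 1;
    }
    // Start from records with no incoming transform edges.
    let mut queue: VecDeque<&str> = nodes
        .iter()
        .copied()
        .filter(|n| !indeg.contains_key(n))
        .collect();
    let mut visited = 0;
    while let Some(n) = queue.pop_front() {
        visited += 1;
        for &next in adj.get(n).into_iter().flatten() {
            let d = indeg.get_mut(next).unwrap();
            *d -= 1;
            if *d == 0 {
                queue.push_back(next);
            }
        }
    }
    // If a cycle exists, some node never reaches in-degree zero.
    visited == nodes.len()
}

fn main() {
    // Temperature feeding an alert is acyclic:
    assert!(is_dag(&[("sensor.temp.outdoor", "alert.temp.outdoor")]));
    // Two records deriving from each other is a configuration error:
    assert!(!is_dag(&[("a", "b"), ("b", "a")]));
}
```

A real implementation would also report which keys participate in the cycle, but the counting argument is the same.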

The only platform-specific code is choosing which runtime adapter to pass into the builder:

```rust
// On Linux / Cloud — Tokio
let runtime = Arc::new(TokioAdapter::new()?);
let mut builder = AimDbBuilder::new().runtime(runtime);

// On Cortex-M4 — Embassy
let runtime = EmbassyAdapter::new();
let mut builder = AimDbBuilder::new().runtime(runtime);
```

Everything else — records, keys, producers, consumers, transforms — stays identical across platforms.

How the Runtime Abstraction Works

The runtimes share only the Future trait, nothing else. Our abstraction resolves to zero-cost concrete calls at compile time. To achieve this, we defined four traits with no platform dependencies: RuntimeAdapter (identity), Spawn (task creation), TimeOps (clocks and sleep), and Logger (structured output).

A generic Runtime trait glues them together and is erased at compile time, so ctx.time().sleep(ctx.time().secs(1)).await compiles directly to tokio::time::sleep or embassy_time::Timer::after.
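The composition can be sketched with simplified, synchronous stand-ins. The trait and adapter names below mirror the post, but the signatures are illustrative, not AimDB's actual API (the real traits are async):

```rust
use std::time::{Duration, Instant};

/// Clock capability (a synchronous TimeOps stand-in).
trait TimeOps {
    fn now(&self) -> Instant;
    fn secs(&self, s: u64) -> Duration {
        Duration::from_secs(s)
    }
}

/// Logging capability (a Logger stand-in).
trait Logger {
    fn info(&self, msg: &str);
}

/// The umbrella trait portable code is generic over.
trait Runtime: TimeOps + Logger {}
impl<T: TimeOps + Logger> Runtime for T {}

/// One concrete adapter, analogous to TokioAdapter / EmbassyAdapter.
struct StdAdapter;

impl TimeOps for StdAdapter {
    fn now(&self) -> Instant {
        Instant::now()
    }
}

impl Logger for StdAdapter {
    fn info(&self, msg: &str) {
        println!("[info] {msg}");
    }
}

/// Portable code: generic over `R: Runtime`, no platform imports.
/// Monomorphization replaces each call with a direct concrete call,
/// so the abstraction costs nothing at runtime.
fn heartbeat_interval<R: Runtime>(rt: &R) -> Duration {
    rt.info("alive");
    rt.secs(5)
}

fn main() {
    assert_eq!(heartbeat_interval(&StdAdapter), Duration::from_secs(5));
}
```

Swapping StdAdapter for a different adapter changes nothing in heartbeat_interval; only the adapter passed at the call site decides which concrete code is generated.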

Portable code never imports a concrete adapter. Instead it receives a RuntimeContext that carries all runtime capabilities. Two accessors are currently exposed:

  • ctx.time() → returns Time<R> with .sleep(), .now(), .secs(), etc.
  • ctx.log() → returns Log<R> with .info(), .warn(), etc.
```rust
/// This function compiles for Tokio, Embassy and WASM — unchanged.
/// `R: Runtime` is the only bound. No platform imports.
async fn heartbeat<R: Runtime>(ctx: RuntimeContext<R>, producer: Producer<Heartbeat, R>) {
    let time = ctx.time();
    let log = ctx.log();
    let start = time.now();
    loop {
        time.sleep(time.secs(5)).await;
        let uptime = time.duration_since(time.now(), start);
        log.info(&format!("alive — uptime: {:?}", uptime));
        producer.produce(Heartbeat { uptime }).await.ok();
    }
}
```

The RuntimeContext is injected automatically when you register a .source(), .tap() or .transform(), and it is easily extensible for future use cases.

Where It Stands Today

Dynamic spawning on a static executor

The default, idiomatic way to declare tasks in Embassy is fully static: each task is annotated with #[embassy_executor::task] and spawned by name. Embassy's pool_size attribute allows spawning multiple instances of the same task. We use this mechanism to create TASK_POOL_SIZE static task slots at compile time. Any future gets boxed, handed to one of the slots and driven to completion.

```rust
type BoxedFuture = Box<dyn Future<Output = ()> + Send + 'static>;

#[embassy_executor::task(pool_size = TASK_POOL_SIZE)]
async fn generic_task_runner(mut future: BoxedFuture) {
    let pinned = unsafe { Pin::new_unchecked(&mut *future) };
    pinned.await;
}
```

The unsafe Pin::new_unchecked in the runner is sound because the boxed future is owned by the task slot and never moved after pinning.
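When the future is heap-allocated, Box::into_pin offers the same guarantee without unsafe, since the box owns the future and pinning the box can never be violated by a move. The sketch below is a hypothetical, minimal busy-poll executor (not Embassy's machinery) just to drive a pinned boxed future to completion:

```rust
use std::future::{self, Future};
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

type BoxedFuture = Box<dyn Future<Output = u32> + Send + 'static>;

/// A no-op waker: sufficient for a demo that just busy-polls.
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

/// Busy-poll "executor" for the demo only.
fn block_on(fut: BoxedFuture) -> u32 {
    // Box::into_pin is the safe counterpart of Pin::new_unchecked
    // for owned, heap-allocated futures.
    let mut pinned: Pin<BoxedFuture> = Box::into_pin(fut);
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    loop {
        if let Poll::Ready(v) = pinned.as_mut().poll(&mut cx) {
            return v;
        }
    }
}

fn main() {
    let fut: BoxedFuture = Box::new(future::ready(42u32));
    assert_eq!(block_on(fut), 42);
}
```

In the Embassy runner the unchecked variant is used because the slot holds a `&mut` to the box rather than owning it, but the soundness argument (heap allocation, no moves after pinning) is the same.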

Send + Sync bounds leak into single-threaded targets

The core traits RuntimeAdapter, Spawn, TimeOps and Logger are designed for the most demanding target (multi-threaded Tokio), so all of them require Send + Sync. On WASM and Embassy this is satisfied with unsafe impl Send/Sync blocks scoped to the adapter crates, justified by the single-threaded executor contract: no concurrent access can ever occur.
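The pattern looks roughly like this; SingleThreadLogger is an invented stand-in, not an actual adapter type:

```rust
use std::cell::Cell;
use std::rc::Rc;

/// Illustrative adapter-internal type: Rc<Cell<_>> makes it
/// neither Send nor Sync by default.
struct SingleThreadLogger {
    lines_written: Rc<Cell<u32>>,
}

// SAFETY (illustrative): the single-threaded executor contract of
// Embassy / WASM guarantees this value is only ever touched from one
// thread, so opting back into Send + Sync cannot cause a data race.
unsafe impl Send for SingleThreadLogger {}
unsafe impl Sync for SingleThreadLogger {}

/// Compile-time check that the core-trait bounds are now satisfied.
fn assert_send_sync<T: Send + Sync>() {}

fn main() {
    assert_send_sync::<SingleThreadLogger>();
    let logger = SingleThreadLogger {
        lines_written: Rc::new(Cell::new(0)),
    };
    logger.lines_written.set(logger.lines_written.get() + 1);
    assert_eq!(logger.lines_written.get(), 1);
}
```

Keeping the unsafe impls inside the adapter crates means the safety argument lives next to the executor that justifies it, and portable code never has to reason about it.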

What ships today

Try It

cargo add aimdb-core aimdb-tokio-adapter

AimDB is open source under Apache 2.0.

GitHub · docs.rs · live demo

We'd love to hear what you think — open an issue or start a discussion on GitHub.


Graphics and spell checks in this post were created with the help of an LLM.