
Nordic nRF52840: Is Rust a Good Fit for Embedded Applications?

C has been the language of microcontroller embedded development for decades. It's a good fit for low-level programming, but C makes certain classes of errors, such as memory errors, more likely. Rust is a more recent programming language that aims to address these challenges while retaining the low-level control and performance of languages like C. Rust has seen steady adoption in systems programming and application development, and was recently integrated into the Linux kernel. Whether Rust is a suitable alternative to C or C++ for embedded development depends on a number of factors.

This article attempts to provide a balanced view of the pros and cons of using Rust for embedded development and gives an overview of the embedded ecosystem through examples.

Considering Rust for Embedded Applications

Choosing the appropriate programming language for a project is crucial, and the decision becomes more complicated when newer languages are under consideration, since they tend to have a limited track record in real-world applications. If you find yourself debating whether Rust is a good fit for your project, there are a few things to keep in mind:

The Good

Though Rust currently has limited real-world application, especially in embedded systems, its newness is also the source of its strengths: the language has been developed with an awareness of its predecessors' strengths and weaknesses.


Safety

One of Rust's big promises is safety, and it delivers in two major ways. The first is type safety. Rust is a strongly and statically typed language, eliminating the type errors common in weakly typed languages such as C. Furthermore, the conscious decision to eliminate certain constructs, such as null pointers, reduces the chance of memory issues like accidental null dereferences. The second is memory safety. Rust eliminates whole classes of memory bugs that are common in languages without memory safety. It does this through a unique feature called ownership. The Rust book provides an excellent overview:

“Ownership is a set of rules that governs how a Rust program manages memory. All programs have to manage the way they use a computer’s memory while running. Some languages have garbage collection that regularly looks for no longer used memory as the program runs; in other languages, the programmer must explicitly allocate and free the memory. Rust uses a third approach: memory is managed through a system of ownership with a set of rules that the compiler checks. If any of the rules are violated, the program won’t compile. None of the features of ownership will slow down your program while it’s running.”

Klabnik, Steve, and Carol Nichols. The Rust Programming Language. No Starch Press, 2019.

One key point to emphasize here is that this checking has no impact on runtime performance, which leads to the next key benefit: performance.
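To make ownership concrete, here is a minimal, host-runnable sketch; the function and variable names are illustrative, not from any library. The commented-out line shows the kind of use-after-move bug the compiler rejects outright:

```rust
// A minimal sketch of Rust's ownership rules, checked entirely at compile time.

fn consume(s: String) -> usize {
    // `consume` takes ownership of `s`; the String is freed when it goes out of scope.
    s.len()
}

fn main() {
    let message = String::from("hello");

    // A method call borrows `message` immutably; ownership stays with us.
    let borrowed_len = message.len();

    // Move: ownership of `message` transfers into `consume`.
    let consumed_len = consume(message);

    assert_eq!(borrowed_len, consumed_len);

    // Using `message` here would be rejected at compile time, so this whole
    // class of use-after-free bug never reaches a running program:
    // println!("{}", message); // error[E0382]: borrow of moved value: `message`
}
```

Because the check happens in the compiler, the resulting binary carries no bookkeeping for it at runtime.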


Performance

Performance can mean a lot of different things. Rust places heavy emphasis on "zero-cost abstractions," which allow it to deliver high-level features with no impact on runtime performance. The ownership feature mentioned earlier and Rust's asynchronous paradigm are both good examples, highlighting Rust's attempt to deliver the best of both worlds. This, of course, doesn't say much about Rust's actual performance compared to established languages like C. What does the memory footprint look like? For equivalent programs, does Rust produce equivalent or better-performing code than C? These questions are important when developing for constrained systems.

To start answering these questions, let's establish some basics. Rust is a compiled language with no runtime environment, no garbage collection, and options for heapless implementation and exclusion of the standard library. These features do not differentiate Rust, but they show it meets the minimum requirements for resource-constrained microcontroller embedded systems. The Rust compiler is built on LLVM and takes advantage of the optimizations it provides. Rust also offers a wide selection of tools for hand-tuning, such as support for SIMD intrinsics and control over inlining.
Realistically, this topic could fill an entire blog post on its own, but at a high level, Rust can be optimized for runtime performance and code size just as well as C. That said, Rust's higher-level abstractions and multitude of packages make code bloat a concern. For more information on this topic, check out the Rust Performance Book.
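As one small example of those hand-tuning hooks, the `#[inline(always)]` and `#[inline(never)]` attributes let a developer steer the optimizer per function. The attributes are real; the functions below are illustrative stand-ins:

```rust
// Hand-tuning sketch: inlining hints on hypothetical hot and cold helpers.

#[inline(always)]
fn fast_path(x: u32) -> u32 {
    // Hot helper: ask the compiler to inline this at every call site.
    x.wrapping_mul(3).wrapping_add(1)
}

#[inline(never)]
fn cold_path(x: u32) -> u32 {
    // Rarely-taken helper: keep it out-of-line to reduce code size.
    x / 7
}

fn main() {
    let v = fast_path(4); // 4 * 3 + 1 = 13
    assert_eq!(v, 13);
    assert_eq!(cold_path(v + 1), 2); // 14 / 7 = 2
}
```

Trading inlining (speed) against out-of-line calls (size) is exactly the kind of knob that matters on flash-constrained targets.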


Developer Productivity

Rust enables developers to be productive by bringing a unified set of tools to the embedded space. Rust ships with a build system and dependency manager (cargo), a formatter (rustfmt), a linter (clippy), a documentation generator (cargo doc), and a language server (RLS). This "batteries-included" approach makes environment setup and management incredibly smooth.

From a development perspective, Rust is also a feature-packed language compared to options such as C, more in line with C++ in some regards. Rust supports multiple programming paradigms and offers an expansive type system, generics, a powerful macro system, and high-level features not normally found in embedded development, such as iterators, closures, and an async paradigm. Rust's philosophy of zero-cost abstractions, mentioned in the previous section, means these features don't incur a runtime performance penalty.
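To illustrate, here is a minimal sketch (illustrative function names) of an iterator-and-closure pipeline next to the hand-written loop it is equivalent to; both forms are available in no_std code because the Iterator machinery lives in the core crate:

```rust
// Two equivalent implementations: a hand-written loop and an iterator chain.
// With optimizations on, the compiler typically lowers both to the same code.

fn sum_of_even_squares_loop(data: &[u32]) -> u32 {
    let mut total = 0;
    for &x in data {
        if x % 2 == 0 {
            total += x * x;
        }
    }
    total
}

fn sum_of_even_squares_iter(data: &[u32]) -> u32 {
    data.iter()
        .filter(|&&x| x % 2 == 0) // closure: keep even values
        .map(|&x| x * x)          // closure: square them
        .sum()
}

fn main() {
    let data = [1, 2, 3, 4, 5];
    // Even values are 2 and 4: 4 + 16 = 20.
    assert_eq!(sum_of_even_squares_loop(&data), 20);
    assert_eq!(sum_of_even_squares_iter(&data), 20);
}
```

The iterator form carries bounds-safety and intent in the types while costing nothing extra at runtime.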

From a developer productivity perspective, these features can make developers more effective, but they also contribute to Rust's steep learning curve (more on that below), which can hurt productivity in the short term. It's also worth noting that Real-Time Operating Systems are available in Rust, though the options are somewhat limited. Developers will need to invest time to determine whether an available RTOS meets their needs and to come up to speed on the selected framework.

Easy Incremental Adoption

Rust makes communicating with existing C applications viable via its Foreign Function Interface (FFI). The communication is zero-overhead: function calls between Rust and C perform identically to the same calls made within C. Using Rust in a C project also means the Rust portions of the codebase benefit from Rust's strong safety guarantees, though calls across the FFI boundary are still unsafe. Rust provides tools (bindgen and cbindgen) that generate Rust bindings to C and vice versa, removing much of the boilerplate associated with an FFI.
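As a minimal sketch of the C-calling direction, the snippet below declares and calls `abs` from the C standard library, which is linked by default on hosted targets; on an embedded target you would declare your own C functions the same way, typically letting bindgen generate the declarations:

```rust
// Calling into C from Rust via the FFI.

extern "C" {
    // Declaration of a function provided by the C standard library.
    fn abs(input: i32) -> i32;
}

fn main() {
    // The call is `unsafe` because the compiler cannot verify the C side;
    // the call overhead itself is identical to calling `abs` from C.
    let magnitude = unsafe { abs(-42) };
    assert_eq!(magnitude, 42);
}
```

Going the other way, a Rust function marked `#[no_mangle] pub extern "C" fn ...` is callable from C with the same zero overhead.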

The Bad

According to Stack Overflow's annual developer survey, Rust was the most loved programming language in 2022, taking first place for the seventh consecutive year. Rust has a strong community of vocal advocates. Though the love may be strong, it's still important to factor in some potential drawbacks:


Lack of Standardization

Currently, Rust is not standardized: there is no qualified specification describing how the compiler should behave. Additionally, the Rust compiler toolchain has not been certified by any regulatory body, which makes Rust a non-starter for certified safety-critical software components today. This doesn't, however, mean that Rust isn't safe. The Ferrocene project aims to provide a qualified Rust toolchain and an associated language specification. The project targets initial certification by the end of 2023 against ISO 26262, a regulatory standard for the automotive industry, with eventual certification in other regulatory domains as well. Keep an eye on this project if you're considering Rust for a safety-critical system in the future.

Lack of Vendor Support

There's currently no official Rust support from chip manufacturers, which means the SDKs supplied by vendors such as Nordic, Espressif, and ST Micro don't exist for Rust. That said, there are a large number of third-party packages (known as "crates" in Rust) that offer SDKs for these platforms. For example, to develop a BLE project on a Nordic nRF52, one could use open-source crates for both the SoftDevice bindings and the hardware abstraction layer. These libraries are typically well-maintained and robust, but because they are community-driven, maintenance is not guaranteed and support tends to lag behind the manufacturer's own releases. These factors make third-party crates higher-risk than manufacturer-supported libraries. Check whether your target platform has Rust support in some form, as this will have a major impact on project scope.

Fewer Targets

In the embedded space, applications semi-regularly target niche architectures. In practice, C support is essentially ubiquitous; Rust support is not. The Rust compiler uses LLVM for machine code generation, and LLVM supports fewer targets than GCC. Rust further divides its targets into tiers of support: "guaranteed to work," "guaranteed to build," and "may or may not work." Many common embedded targets, including the ubiquitous Cortex-M series, fall into the "guaranteed to build" tier. Confirm that the Rust compiler supports your target architecture and that you are comfortable with the level of support provided.

Learning Curve

It's a widely held belief that the learning curve for Rust is steep. This is subjective but generally seems to hold true based on feedback from the community. Rust's immaturity also means experienced developers may be difficult to find, especially for relatively specialized applications, embedded systems included. When considering whether Rust is the right language for your project, factor in the extra time and budget needed to get the team up to speed.

Though these issues are varied, they ultimately stem from the same root: Rust's relative immaturity. The language and its ecosystem have not had time to reach parity with more established alternatives, and though the language is growing rapidly, continued adoption is not guaranteed. The impact of these challenges needs thorough consideration before moving forward with Rust for your project.


Rust is already seeing enterprise-level adoption at the systems level and above, due to the unique strengths of the language. That day may come for embedded systems as well, but for now, the immaturity of the language presents obstacles that will likely deter most developers in a professional context, at least for safety-critical systems.

Now that we've discussed the pros and cons, you may be wondering, "What does embedded development using Rust actually look like?" Here's a glimpse of the Rust embedded ecosystem, obtained by running a simple application on an nRF52840 Development Kit.

Rust on a Nordic nRF52840

Let's examine two approaches to a simple nRF52840 DK application: blinking an LED based on a hardware timer, wired up via Nordic's Programmable Peripheral Interconnect (PPI). Before we dig into the specifics, let's talk about the required setup for embedded Rust projects.


Developing for a bare-metal environment requires certain adjustments compared to application software for a standard computer. The standard library must be disabled using the #![no_std] attribute. The application instead links against the core crate, a platform-agnostic subset of the standard library.

When working with an ARM Cortex-M CPU, as on the nRF52, the project will depend on the cortex-m and cortex-m-rt crates. Together they provide low-level CPU access, set up a minimal runtime, map memory, and define the application's entry point.

As a final note before diving in, this example uses a project called Knurling Tools to make setup straightforward. Specifically, we use the App Template tool – a cargo project template that sets up flashing, logging, and stack overflow protection in a new project based on a provided HAL and target. These tools are provided by Ferrous Systems, and the Readme on their site includes great instructions. Take a look there if you would like to try this for yourself.

A Basic Example

Embedded devices interact with the world using peripherals connected to an MCU. These peripherals are often memory-mapped, meaning they are configured and controlled by writing to specific addresses in memory. The MCU's datasheet defines the relevant addresses and values.

The example mentioned before relies on a few different nRF52 peripherals/features: GPIO, GPIOTE (GPIO tasks and events), PPI (Programmable Peripheral Interconnect), and a hardware timer. Configuring these without any abstraction requires writing specific values to the relevant registers, which we'll do with Rust's ptr::write_volatile function. These calls must be wrapped in an unsafe block to compile, because write_volatile bypasses Rust's memory-safety guarantees. Unsafe code demands careful use and scrutiny, as it reintroduces the risk of memory errors.

The code below enters our main function, which never returns, as enforced by its -> ! return type and the infinite loop at the end. The application configures Nordic hardware to toggle a GPIO each time the hardware timer expires. This behavior uses Nordic's GPIOTE and PPI peripherals to "wire" the timer and GPIO together in hardware.


use core::ptr;
use cortex_m_rt::entry;
use my_app as _;

#[entry]
fn main() -> ! {
    let led_pin_num = 13;

    // LED configuration registers
    let gpio0_base = 0x50000000;
    let gpio0_cfg = gpio0_base | (0x700 + (led_pin_num * 0x04));

    // GPIOTE configuration registers
    let gpiote_base = 0x40006000;
    let gpiote0_cfg = gpiote_base | 0x510;
    let gpiote0_cfg_val = 0x00130D03;

    // PPI configuration registers
    let ppi_base = 0x4001F000;
    let ppi0_enable = ppi_base | 0x504;
    let ppi0_event_endpoint = ppi_base | 0x510;
    let ppi0_task_endpoint = ppi_base | 0x514;

    // Timer configuration registers
    let timer0_base = 0x40008000;
    let timer0_event_register = timer0_base | 0x140;
    let timer0_shorts = timer0_base | 0x200; // write 1
    let timer0_bitmode = timer0_base | 0x508; // write 3
    let timer0_cc = timer0_base | 0x540;

    unsafe {
        ptr::write_volatile(gpio0_cfg as *mut u32, 0x03);
        ptr::write_volatile(gpiote0_cfg as *mut u32, gpiote0_cfg_val);
        ptr::write_volatile(ppi0_event_endpoint as *mut u32, timer0_event_register);
        ptr::write_volatile(ppi0_task_endpoint as *mut u32, gpiote_base);
        ptr::write_volatile(ppi0_enable as *mut u32, 1);
        ptr::write_volatile(timer0_shorts as *mut u32, 1);
        ptr::write_volatile(timer0_bitmode as *mut u32, 3);
        ptr::write_volatile(timer0_cc as *mut u32, 0xFFFFF);
        ptr::write_volatile(timer0_base as *mut u32, 1); // TASKS_START
    }

    loop {}
}

One must refer to the datasheet to determine the specific values needed. This code is difficult to read and maintain, difficult to debug, and forgoes the safety guarantees Rust provides. Luckily, there's a better way.

A Better Example

Rust has excellent community-driven Hardware Abstraction Layers (HALs) for a variety of platforms. In Rust, HALs are not built directly on top of hardware. Instead, they rely on peripheral access crates (PACs).

Peripheral Access Crate

PACs are one of the building blocks of a Rust HAL. A PAC contains a singleton Peripherals type that provides access to all of the peripherals associated with a device, along with functions for interacting with them. Because it's a singleton, the Peripherals instance can only be obtained once and must be passed by reference where needed, enforcing safe configuration and use of peripherals via Rust's ownership model. A tool called svd2rust generates PACs programmatically from the System View Description (SVD) files provided by the manufacturer.
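To illustrate the singleton idea, here is a simplified, host-runnable sketch of a take()-style constructor that can only succeed once. Everything here is a stand-in rather than actual PAC code; real svd2rust-generated PACs use a similar flag guarded by a critical section:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

// A stand-in for the generated Peripherals struct. Real PACs hold
// zero-sized handles to each memory-mapped peripheral here.
pub struct Peripherals {}

// Tracks whether the peripherals have already been handed out.
static TAKEN: AtomicBool = AtomicBool::new(false);

impl Peripherals {
    pub fn take() -> Option<Peripherals> {
        // `swap` returns the previous value: false only for the first caller.
        if TAKEN.swap(true, Ordering::SeqCst) {
            None
        } else {
            Some(Peripherals {})
        }
    }
}

fn main() {
    let first = Peripherals::take();
    let second = Peripherals::take();
    assert!(first.is_some());  // the first caller gets the peripherals
    assert!(second.is_none()); // every later call gets None
}
```

Because only one Peripherals value can ever exist, the ownership rules from earlier in the article apply to hardware access just as they do to ordinary data.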

Embedded HAL

The embedded HAL crate defines traits common to peripherals such as GPIO, timers, and clocks. Traits are part of Rust's type system and can be thought of as something like interfaces: they define the functionality a type provides. Trait bounds constrain generic types by requiring them to implement certain functionality. The traits defined by the embedded HAL crate are implemented by device-specific HALs and provide guarantees about the capabilities and methods offered by the types that implement them. The nrf52840-hal used below is an example of this.
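To make traits and trait bounds concrete, the sketch below defines a simplified stand-in for an embedded-hal-style OutputPin trait and a generic driver bound by it; all names here are illustrative, not the actual embedded-hal API:

```rust
// A simplified stand-in for an embedded-hal-style pin trait.
trait OutputPin {
    fn set_high(&mut self);
    fn set_low(&mut self);
}

// A driver written against the trait bound works with ANY pin type:
// an nRF52 HAL pin, an STM32 HAL pin, or a host-side test mock.
fn blink_once<P: OutputPin>(pin: &mut P) {
    pin.set_high();
    pin.set_low();
}

// A mock pin for host-side testing: it just records its state transitions.
struct MockPin {
    history: Vec<bool>,
}

impl OutputPin for MockPin {
    fn set_high(&mut self) {
        self.history.push(true);
    }
    fn set_low(&mut self) {
        self.history.push(false);
    }
}

fn main() {
    let mut pin = MockPin { history: Vec::new() };
    blink_once(&mut pin);
    assert_eq!(pin.history, vec![true, false]);
}
```

This is how device-independent drivers are published as crates: the driver depends only on the embedded HAL traits, and each platform's HAL supplies concrete types that implement them.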


use cortex_m_rt::entry;
use hal::{gpio, prelude::*, timer::Timer};
use my_app as _;
use nrf52840_hal as hal;

#[entry]
fn main() -> ! {
    // Take the device peripherals (a singleton; this succeeds exactly once).
    let p = hal::pac::Peripherals::take().unwrap();

    // Configure P0.13 (the DK's LED1) as a push-pull output.
    let p0 = gpio::p0::Parts::new(p.P0);
    let led = p0.p0_13.into_push_pull_output(gpio::Level::High).degrade();

    // GPIOTE, PPI, and a periodic hardware timer, wrapped by the HAL.
    let gpiote = hal::gpiote::Gpiote::new(p.GPIOTE);
    let mut ppi0 = hal::ppi::Parts::new(p.PPI).ppi0;
    let mut timer = Timer::periodic(p.TIMER0);

    // From here, the HAL's channel APIs wire the timer's compare event to a
    // GPIOTE toggle task on `led` through `ppi0`, replacing the raw register
    // writes from the previous example.

    loop {}
}

As demonstrated in the above example, using a HAL makes embedded code easier to write and read while retaining the safety guarantees Rust provides. Combined with Rust's expressive type system, these guarantees allow for novel implementations that are uniquely suited to embedded development.


At this point, it’s clear that Rust has the potential to see adoption in the embedded space, but likely needs more time to mature. Hopefully, this article provided a clear picture of what embedded development in Rust looks like, why it’s worth considering, and whether it’s a good choice for your specific needs.

Need Embedded Software Development Expertise? We Can Help!

At Punch Through, every article you read is a testament to our engineers' dedication and technical prowess, not just in embedded software but across the IoT development spectrum. We don't just share insights; we're in the trenches, bringing IoT devices to life with robust, scalable, and secure embedded software development.