coreboot Is Not a Bootloader
What is coreboot? If not a bootloader, does it fit into any other box?
I recently watched a recording of an OSFC talk[1] about advances in Intel's FSP,
where coreboot was referred to as a bootloader. The talk also presented
a new boot flow which didn't fit coreboot at all. This made
me wonder: is coreboot fundamentally misunderstood? And how does
FSP fit into the picture?
[1] https://www.osfc.io/2025/talks/intel-r-signed-fsp-and-verified-boot
The coreboot architecture separates hardware initialisation, which happens in coreboot, from a program called the payload, which is outside the scope of coreboot. The latter runs after coreboot to continue the boot process. I believe this separation clearly shows that coreboot is not a bootloader. But first, let’s have a look at what a bootloader technically is.
In PC history, probably the first thing that we could consider a bootloader was the boot sector or boot record of PC-DOS. It was a tiny program, fitting well within the first 512-byte sector of a floppy disk. Its purpose: find and load operating-system (OS) files from the FAT12 file system. It was soon superseded by the master boot record (MBR), which sometimes served a similar purpose, or loaded another program from the “active” partition. The term bootloader became more prominent when the PC outgrew DOS and its FAT file systems. There was no standardised way to locate the operating-system kernel, so every OS deployed its own bootloader. Linux, for instance, often used LILO, the Linux Loader. Another example is NTLDR, which came with Windows NT.
All the aforementioned programs follow a common pattern: they discover and load operating-system files, and they are loaded by a BIOS. The BIOS was the PC’s firmware. It served two purposes: 1. initialise the hardware, and 2. provide hardware drivers that the bootloader would use by calling into the BIOS. Beyond the PC, however, there wasn’t always a BIOS.
Das U-Boot (German for “The Submarine”) evolved from a PowerPC (PPC) bootloader and is probably the most prominent embedded bootloader today. Besides PPC, it runs on ARM, MIPS, x86, and many other architectures. Without a BIOS to help with hardware access, U-Boot implements its own device drivers and also has to perform hardware initialisation specific to the platform it runs on. Compared to the traditional PC bootloaders, this makes it a much more comprehensive project. Still, the focus and final goal stay the same: to discover and load OS files.
Now we have already crossed the boundary to firmware, in other words software that ships with and is tightly coupled to the hardware it runs on. And that’s also what is happening on the PC: since the introduction of UEFI, the classic separation between firmware, which provides hardware drivers but doesn’t know about file systems, and a software bootloader has been vanishing. If we look at the EDKII project (the UEFI reference implementation), for instance, it has a similar scope to U-Boot. It knows how to control the hardware, knows about different storage media and file systems, and sometimes even provides network access. Everything that is needed to discover and load OS files. While it’s often used to load another bootloader from a hard drive, it can also load a Linux kernel directly.
There is also one example, the ARM Trusted Firmware project, where the whole program from system reset to OS loading is called a bootloader. I think this interpretation is fair, as it still covers the original goal of loading the OS. Hence, I suggest the following definition for what a bootloader is and will use it throughout this article:
A bootloader discovers and loads the operating system.
If necessary, it can access local storage media and file
systems, or remote file services.
coreboot History
If we look into coreboot’s Git history, it actually didn’t start as coreboot; originally, it was called LinuxBIOS. And there is more: the repository contains two historic branches:
$ git branch -a | grep coreboot
remotes/origin/coreboot-v1
remotes/origin/coreboot-v3
For those who are wondering, “Isn’t there something missing, where’s v2?”: that’s actually today’s main branch. Some of the v3 development was eventually backported to v2, which thus became v4, the version that is still developed today.
The v1 branch shows us that LinuxBIOS didn’t do things much differently than coreboot does today:
- there’s raminit to get the DRAM working (back then mostly in assembler),
- PCI enumeration and resource allocation,
- a little bit of configuration per mainboard,
- additional configuration of the northbridge, southbridge and Super I/O,
- CPU initialisation, in particular if there was SMP,
- tables to communicate details to the OS: besides the LinuxBIOS table, for instance, also PIRQ or MP tables.
The purpose of it all? To do exactly what is needed to get Linux running so it could take over. The Linux kernel and its initrd were stored in the firmware flash. There was no file system, no storage driver, no discovery of OS parts. If any such bootloader features were needed, Linux would fill the gap.
If anyone is interested in diving deeper into the fundamentals, I can recommend Lennart Benschop’s Weekly Coreboot Column from 2011.
Execution Flow
Somewhere in the early 2000s, there was a wrinkle in the development: the Linux kernel grew faster than firmware flash chips. Legacy BIOS development was still dominant and didn’t need as much space as today’s firmware. So Linux couldn’t be used as the one and only bootloader for every mainboard. This was, supposedly, how the payload idea was born. Instead of a Linux kernel + initrd, a generic payload program would be stored in the firmware flash and continue the boot process. On coreboot.org it says:
coreboot doesn’t try to mandate how the boot process should look, it
merely does hardware init and then passes on control to another piece
of software that we carry along in firmware storage, the payload.
This “mere” hardware init that coreboot performs can be quite comprehensive today. The coreboot architecture tries to keep the process well structured while remaining flexible at the same time. Thus, the coreboot flow is separated into stages, all but one of which are optional:
bootblock --> verstage --> romstage --> ramstage (--> payload)
The coreboot bootblock is usually the first code that runs on the main CPU. It sets up a stack, either by configuring the processor cache to be used as RAM or in dedicated SRAM, and then continues with C code. The verstage is an optional, intermediate stage that verifies the digital signatures of the later stages.
The main purpose of the romstage is to bring the DRAM controller up. For everything since DDR3-1600, this is one of the most complex procedures within coreboot. To access modern DRAM at high rates, the link between the DRAM controller and the DRAM has to be “trained”. This involves special algorithms to find the best timing parameters, to compensate for different trace lengths on the mainboard, different connectors, DIMM designs, etc. The results of this training are stored in the firmware flash, so they can be reused on subsequent boots. Consequently, the romstage actually contains two paths: one for the DRAM training, and the regular boot path that uses the cached training results.
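As a minimal illustration of these two paths, here is a hedged C sketch. Every name in it is a made-up placeholder; the real implementations are vendor-specific, and coreboot routes the caching through its MRC-cache infrastructure.

#include <stdint.h>

/* All types and helpers below are hypothetical placeholders,
 * not coreboot's actual API. */
struct training_data {
	uint32_t timings[8];	/* e.g. delays per byte lane */
};

int  load_cached_training_data(struct training_data *data);
void train_dram_links(struct training_data *data);
void program_dram_controller(const struct training_data *data);
void save_training_data_to_flash(const struct training_data *data);

static void dram_init(void)
{
	struct training_data data;

	if (load_cached_training_data(&data) == 0) {
		/* Regular boot path: reuse results from a previous boot. */
		program_dram_controller(&data);
	} else {
		/* First boot, or the cache went stale: run full training. */
		train_dram_links(&data);
		program_dram_controller(&data);
		/* Store the results in flash for subsequent boots. */
		save_training_data_to_flash(&data);
	}
}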
Most important, and the only mandatory stage, is the ramstage. As the name suggests, it runs from main DRAM. On some platforms, the DRAM controller is initialised by a coprocessor; the ramstage can then be the first and only stage of coreboot. In this case, it’s still the first code that runs on the main CPU. The primary purpose of the ramstage is to configure the System-on-Chip (SoC) or CPU and chipset, and other chips for a particular mainboard. It continues with filling tables, e.g. ACPI / SMBIOS, that can later be consumed by the OS, and eventually runs the coreboot payload.
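All stages, and the payload, are stored as separate files in the flash image, in the CBFS file system. An abridged cbfstool listing of a typical image might look roughly like this (the exact entries depend on the configuration; offsets and sizes elided):

$ cbfstool coreboot.rom print
Name                 Offset   Type         Size
bootblock            ...      bootblock    ...
fallback/verstage    ...      stage        ...
fallback/romstage    ...      stage        ...
fallback/ramstage    ...      stage        ...
fallback/payload     ...      simple elf   ...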
At the Heart of coreboot: A Devicetree
Execution of the ramstage is centered around a devicetree structure, somewhat similar in nature to the Flattened Devicetrees (FDT) used in Linux. The coreboot devicetree, however, doesn’t provide information to drivers for using the devices, but primarily to initialise the hardware. The process walks over the devicetree multiple times, in distinct steps (an example devicetree follows the list):
chip init
enumeration
resource discovery
resource allocation
device init
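To make this concrete, here is a hedged sketch of a static devicetree, as found in a mainboard’s devicetree.cb; the chip path and the register option are invented for this example and don’t refer to a real SoC:

chip soc/vendor/example                      # chip-global configuration
	register "panel_power_delay_ms" = "200"  # illustrative board setting
	device domain 0 on                       # PCI domain, root of enumeration
		device pci 00.0 on end               # host bridge
		device pci 02.0 on end               # integrated graphics
		device pci 1f.3 off end              # audio, unused on this board
	end
end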
Chips can contain multiple devices or device functions. Sometimes these functions need to be enabled first, or should be hidden if they are unused. A modern SoC, for instance, can contain dozens of functions that may or may not be used on a particular mainboard. The chip init step allows performing chip-global configuration that doesn’t belong to a particular device function.
Not all devices on a mainboard have to be known at compile time. Some actually can’t be known: consider plug-in PCIe cards, for instance. They can change over the lifetime of a mainboard and hence have to be enumerated during every boot process. During this enumeration, new device nodes are added to the static devicetree that was known at compile time. Once all device nodes are known, coreboot continues in a rather object-oriented fashion: every device node has an initialisation driver attached that contains procedures for the later ramstage steps.
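In code, such a driver is a set of function pointers attached to the device node. The following is a minimal sketch loosely modelled on coreboot’s struct device_operations; the device-specific init function is invented for this example, while the generic PCI helpers exist in coreboot for exactly this purpose:

#include <device/device.h>
#include <device/pci.h>

/* Invented init procedure for one device function. */
static void example_dev_init(struct device *dev)
{
	/* Program this device's registers for the mainboard at hand. */
}

static struct device_operations example_dev_ops = {
	.read_resources   = pci_dev_read_resources,   /* resource discovery */
	.set_resources    = pci_dev_set_resources,    /* resource allocation */
	.enable_resources = pci_dev_enable_resources,
	.init             = example_dev_init,         /* device init */
};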
Many devices need addresses assigned to their resources, i.e. register spaces, device memory like VRAM, etc. We call this process resource allocation. It happens in two steps: first, coreboot asks all the drivers to add information about their resources to the devicetree. Then, once coreboot can see the global picture, it sorts the resources and assigns them addresses.
All this leads to the most important step: the device initialisation, or silicon initialisation as some call it today. Most device functions of an SoC and other chips need to be configured to adapt to
- the mainboard design,
- plug-in devices present,
- choices made during firmware development, and
- firmware settings made on each individual machine.
The coreboot devicetree allows us to separate the vast number of initialisation steps for an SoC or chipset and tackle them individually. It helps to structure hardware initialisation, and its object-oriented elements make it easy to extend coreboot.
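As a hedged illustration of that extensibility, and to close the loop with the devicetree sketch above: options set with register lines end up in a per-chip config struct that a driver can read during device init. The struct and the helper here are invented; only the chip_info pointer is coreboot’s actual mechanism.

#include <device/device.h>

/* Invented config struct; filled from 'register' lines in devicetree.cb. */
struct soc_example_config {
	unsigned int panel_power_delay_ms;
};

static void configure_panel_power(unsigned int delay_ms)
{
	/* Hypothetical: program the panel power sequencing. */
}

static void example_dev_init(struct device *dev)
{
	const struct soc_example_config *cfg = dev->chip_info;

	/* Adapt initialisation to this board's devicetree settings. */
	if (cfg->panel_power_delay_ms)
		configure_panel_power(cfg->panel_power_delay_ms);
}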
Evolution and Competition
As DRAM training became more complex over the years, and silicon vendors became less likely to provide documentation or even source code, coreboot decided to allow proprietary binary blobs for this purpose. Later, more blobs were used, which led to today’s situation on Intel platforms: coreboot relies on Intel’s Firmware Support Package (FSP) to perform some of the hardware initialisation. Like coreboot, FSP is composed of several binaries that roughly fit the requirements of the first firmware steps:
- FSP-T (temp raminit) sets up the CPU cache to be used as RAM (bootblock)
- FSP-M (memory init) performs DRAM training (romstage)
- FSP-S (silicon init) performs remaining hardware initialisation (ramstage)
Also specified, but not used with upstream coreboot so far, are FSP-I for SMM initialisation, and FSP-O (OEM additions), which is supposed to perform signature verification in the future.
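To give a rough idea of the integration, here is a simplified, hedged C sketch of a romstage calling FSP-M. The entry point’s shape follows the published FSP 2.x API (a UPD configuration pointer in, a HOB list out), but all helpers are invented placeholders rather than coreboot’s actual FSP glue:

#include <stdint.h>

/* FSP 2.x-style entry point: consumes "updatable product data" (UPD)
 * and returns a HOB list describing the results, e.g. the memory map.
 * Calling-convention details omitted for brevity. */
typedef uint32_t (*fsp_memory_init_fn)(void *fspm_upd, void **hob_list);

/* Invented placeholders for coreboot's actual FSP handling: */
void *prepare_fspm_upd(void);               /* copy defaults, patch in board settings */
fsp_memory_init_fn locate_fspm_entry(void); /* map FSP-M, find its entry */
void parse_memory_map(void *hob_list);
void die(const char *msg);

static void romstage_raminit(void)
{
	void *hob_list;
	void *upd = prepare_fspm_upd();

	fsp_memory_init_fn fsp_memory_init = locate_fspm_entry();
	if (fsp_memory_init(upd, &hob_list))
		die("FSP-M failed\n");

	/* The HOBs tell coreboot which DRAM ranges are usable. */
	parse_memory_map(hob_list);
}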
While this seems to fit coreboot’s needs, there is a wrinkle: not having to implement DRAM training, which is often specific to one generation of silicon, reduces the burden of bringing up a new platform. But everything else, all the smaller silicon-initialisation steps done in one huge opaque binary, creates more churn than benefit. With FSP-S, there is no way to divide and conquer during development; everything has to be carefully arranged for a single opaque step. This way, many of the benefits of coreboot’s design are lost.
When coreboot was mentioned as a bootloader in Intel’s verified-FSP talk, I started to wonder what coreboot is better compared to. There was also a slide (at [02:20]) about FSP enhancements over the years. Some of the points there looked quite familiar, so I decided to put them side by side with coreboot’s history:
coreboot
started 1999 with
- mostly hardware init
- PCI enum / allocation
it later gained
- Cache-As-RAM (CAR)
- SMP init
- devicetree driven flow
- native GFX init
- ACPI (2004)
- public standards, e.g. HDA (2007)
- SMI handler (2008)
- verified boot (2015)
- early GFX init (2022) (early sign of life)

FSP
started around 2013 with
- TempRamInit (CAR)
- DRAM training, plus some undefined CPU/chipset init
it later gained
- PCI enumeration (2015?)
- GOP driver (GFX init) (2015)
- SMP init (2015)
- public standards, e.g. HDA (2016)
- μGOP driver (2023) (early sign of life)
future FSP brings
- ACPI, SMI handler
- verified boot
So what is coreboot? I believe the following definition fits well:
coreboot is an open-source silicon-initialisation framework
And what is FSP? Well, it seems to strive to be just the same, only proprietary.
FSP is gaining more and more features that were first implemented in open-source coreboot. And the more it gains, the harder it becomes to integrate. The following comparison probably shows where we are: when using EDKII as the payload, for instance, an open-source coreboot would do almost the same things as a full 2026 FSP integrated with EDKII.
Final Thoughts
coreboot is not a bootloader. After writing it all down, I’m more convinced than ever. It’s a framework for hardware initialisation; its whole design is meant to break the configuration of complex silicon down into small, manageable steps. Looking at its history, I believe coreboot was far ahead of its time. With the ever-growing SoCs we deal with today, its design could probably help to speed up the introduction of new silicon a lot.
Alas, a lot of coreboot’s potential lies idle when silicon vendors try to integrate their own, different designs. That’s another point I haven’t written about so far: coreboot is vendor agnostic. And it does a very good job at it. In more than a decade of coreboot development, I haven’t encountered any silicon design that coreboot couldn’t handle.
Calling coreboot a bootloader, especially in the context of FSP, causes too much confusion; if coreboot were a bootloader, then so would be FSP. This confusion might explain why people try to push something into coreboot that doesn’t fit. I don’t think it’s all bad, though. FSP doesn’t look like it was meant to be a proprietary coreboot replacement, although it could serve as one, and that’s a risk. It lost its path when it started pushing the same binaries into actual bootloaders and into coreboot. Even if we needed more Intel-specific parts than the DRAM training in a proprietary blob, that blob could offer a more fine-grained interface that better fits coreboot’s devicetree model. Ideally, there would be one blob for each device function, and the same code could even be used to fill the traditional FSP-M/S binaries. That thought could fill another article, though.