Conflicted about sysroots

Dear Brother Yocto,
I have two images: production-image.bb and manufacturing-image.bb. The production-image includes a recipe, production-systemd-scripts, and the manufacturing-image includes a different recipe, manufacturing-systemd-scripts. The two recipes have different versions of a systemd service, startup-ui.service. One starts the production app and the other does not. If I start with a clean tmp directory they both build fine, but if I build one first and then build the other, I get an error.


kc8apf@blog$ bitbake production-image

Summary: There was 1 WARNING message shown.
(it works)
kc8apf@blog$ bitbake manufacturing-image

ERROR: manufacturing-systemd-scripts-0.0.1+gitAUTOINC+60423a2c5f-r0 do_populate_sysroot: The recipe manufacturing-systemd-scripts is trying to install files into a shared area when those files already exist. Those files and their manifest location are:
/home/user/fsl-release-bsp/x11_ull_build/tmp/sysroots/imx6ull14x14evk/lib/systemd/system/startup-ui.service
Matched in b’manifest-imx6ull14x14evk-pika.populate_sysroot’
Please verify which recipe should provide the above files.
The build has stopped as continuing in this scenario WILL break things, if not now, possibly in the future (we’ve seen builds fail several months later). If the system knew how to recover from this automatically it would however there are several different scenarios which can result in this and we don’t know which one this is. It may be you have switched providers of something like virtual/kernel (e.g. from linux-yocto to linux-yocto-dev), in that case you need to execute the clean task for both recipes and it will resolve this error. It may be you changed DISTRO_FEATURES from systemd to udev or vice versa. Cleaning those recipes should again resolve this error however switching DISTRO_FEATURES on an existing build directory is not supported, you should really clean out tmp and rebuild (reusing sstate should be safe). It could be the overlapping files detected are harmless in which case adding them to SSTATE_DUPWHITELIST may be the correct solution. It could also be your build is including two different conflicting versions of things (e.g. bluez 4 and bluez 5 and the correct solution for that would be to resolve the conflict. If in doubt, please ask on the mailing list, sharing the error and filelist above.
ERROR: manufacturing-systemd-scripts-0.0.1+gitAUTOINC+60423a2c5f-r0 do_populate_sysroot: If the above message is too much, the simpler version is you’re advised to wipe out tmp and rebuild (reusing sstate is fine). That will likely fix things in most (but not all) cases.
ERROR: manufacturing-systemd-scripts-0.0.1+gitAUTOINC+60423a2c5f-r0 do_populate_sysroot: Function failed: sstate_task_postfunc

The recipe production-systemd-scripts is not in the image I asked to build. Why doesn’t bitbake remove the output of unused recipes from tmp/sysroots before I get this error?

-Conflicted about sysroots

 

Dear Conflicted,
When I first started with Yocto, I assumed that a recipe is built in isolation and should have no side-effects on other packages.  Of course, that isn’t strictly true as a recipe has access to all of the files installed by its dependencies (both build-time and run-time).  If I were designing a new build system, I’d probably implement the build steps as:
  1. Create an empty directory
  2. Install all dependent packages into that directory
  3. Enter this directory as a “chroot”
  4. Build the recipe
  5. Package the results

The Problem

On the surface, this seems great: each package only has access to the files provided by its dependency chain (note that transitive dependencies are still a problem). There is a serious problem lurking under the surface though.  Assume I have two recipes: A and B.  A and B both depend on a common library C but they require different, incompatible major versions.  Let’s say A depends on C v1.0.0 while B requires C v2.0.0.  The builds of A and B will succeed because each build environment includes only its own dependency chain.  When the resulting packages are installed, however, one of the packages will be broken.  Why?  Because the two major versions of C use the same library and header names.  When C v2.0.0 is installed after C v1.0.0, B will work just fine but A will either fail to find some symbols or, worse, start up and then crash due to calling functions with the wrong arguments.

“Wait!” I hear you screaming, “this is already handled by library major versions using different library names.”  That’s true: most major libraries have encountered this exact problem and worked around it by effectively creating a whole new project for the new major version.  That’s only a solution if you know of the hazard, though.  What about smaller libraries whose developers haven’t encountered this situation before?  They can wreak havoc on an OS build. This type of conflict happens often enough that every OS build system I’ve encountered has some type of warning or error when it either happens or is possible.

Enter the sysroot

How does Yocto deal with this situation? As the error messages you encountered indicate, it has something to do with sysroots.  Before I dive into details, please note that Yocto’s sysroot behavior changed in 2.5 (Sumo).  I’ll be referencing documentation from 2.5 because it has much better descriptions of the build process.  When 2.5 behavior differs from prior releases, I’ll indicate this and explain the differences.

Section 4.3 of Yocto Project Overview and Concepts Manual provides a thorough introduction to Yocto’s various build stages, their inputs, and their outputs. While it does discuss sysroots, it doesn’t provide a succinct description of what they are used for.  For that, we need to look at Yocto Reference Manual’s staging.bbclass section. The rough concept is that sysroots are the mechanism used to share files between recipes.

As described in Section 4.3.5.3 of Yocto Project Overview and Concepts Manual, a recipe’s do_install task is responsible for installing the build products into the recipe’s staging directory (${D}).  Why doesn’t it install directly into the sysroot? Two reasons.

First, the sysroot already contains the files installed by every dependency.  While this might work to build a single image, isolating the recipe’s files allows the recipe’s output to be cached, bundled into a package (deb, rpm, opkg, etc), and used in multiple images that may have different collections of recipes installed.

Second, instead of simply creating a single package containing every file installed by the recipe, Yocto splits the installed files into multiple packages such as -dev and -dbg to allow finer-grained control over image contents. That way a production image only includes executables while the matching debug image can include symbol files and the developer image has all the headers and static libraries. In addition to the packages used to construct images, the do_populate_sysroot task copies files and directories specified by the SYSROOT_DIRS* variables to a staging area that is made available at build time to other recipes that depend on this recipe.  Note that this means a recipe that depends on another recipe, A, may have a different set of files available than if you installed an SDK that includes the A-dev package.  Put another way, sysroots are how files are shared between recipes during a build of an image while -dev packages are used to construct the final contents of an SDK image.

But that doesn’t seem to cause a problem….

As mentioned in The Problem, sysroot problems are rarely encountered during the build or install phases of a recipe.  There are two places after building where sysroots can run into problems: during staging (do_populate_sysroot) and during setup as a dependency of another recipe (do_prepare_recipe_sysroot).

During staging, the files matching SYSROOT_DIRS* are copied into a sysroot. If a dependency has already staged a file with the same path, it would be overwritten.  Because this is likely to be a Very Bad Thing™, Yocto raises an error.  I would expect this to be uncommon except for one big caveat: before Yocto 2.5, a single, global sysroot was used for all recipes.  Instead of file path uniqueness within a dependency chain being required, globally unique file paths were necessary.  As illustrated in the figure below, Yocto 2.5 generates a sysroot for each recipe based solely on its dependencies which avoids path collisions from recipes with independent dependency chains.

Diagram showing a recipe receives its own working directory under $TMPDIR/work/$PN/$PV-$PR. Inside is a build directory ($BPN-$PV), a destination (image), and two sysroots (recipe-sysroot, recipe-sysroot-native)
Recipe work directory layout from Yocto Project Overview and Concepts Manual, Section 4.3.5.3
The second place where collisions can happen is when constructing a recipe’s initial sysroot based on its dependency chain. As described in The Problem, the individual dependencies might build without issue but still result in a conflict when they are installed together.  Going back to that example, an image D that depends on both A and B will result in two different versions of C being installed that overwrite each other.  Note that this only happens in Yocto 2.5 and later. In earlier versions of Yocto, these collisions would be encountered during staging into the global sysroot.

Wrapping up

So, what’s going on in your build?  You described two recipes installing a file to the same path but those recipes are never used together in a single image.  This should work in general and it will work in Yocto 2.5 and later.  For earlier versions, the global sysroot will cause a conflict.

A common workaround is to use a single image with custom IMAGE_FEATURES to select which recipe is included. Any modification to IMAGE_FEATURES or DISTRO_FEATURES is a signal that a new build directory should be used.  The downside is that the production and manufacturing images will be built independently and may not be representative of each other.
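
As a rough sketch of what that single-image approach could look like (the feature and package names here are invented for illustration), an image recipe can map a custom IMAGE_FEATURES entry to the package it pulls in via FEATURE_PACKAGES:

# single-image.bb (sketch)
FEATURE_PACKAGES_manufacturing = "manufacturing-systemd-scripts"
FEATURE_PACKAGES_production = "production-systemd-scripts"

# Select exactly one mode (e.g. from local.conf) and use a fresh build directory per mode
IMAGE_FEATURES += "production"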

Another approach is to use separate systemd unit names for production-systemd-scripts and manufacturing-systemd-scripts. Assuming your main concern is controlling which services are started during boot, the names of the units shouldn’t matter.  In fact, a recipe named production-systemd-scripts points to a bigger, conceptual issue: systemd unit files should be coupled with the service that the unit controls.  That way, the systemd units started reflect the services available rather than a desired behavior.  Behaviors should be separate recipes that install a single systemd unit whose presence triggers that behavior.  If you have a systemd unit that automatically starts manufacturing tests, put that in an appropriately-named recipe separate from the tests themselves.  That way, the tests can be installed on top of a production image without triggering the manufacturing workflow.  Even better, create your own pseudo-targets for these behaviors and then use WantedBy and Conflicts to allow switching between modes by activating or deactivating the pseudo-targets.  That gets you a single image that can be used for both manufacturing test and production.
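
As a rough sketch of the pseudo-target idea (every unit and binary name here is invented for illustration), the behavior lives in a target unit and the tests merely hook onto it:

# manufacturing.target -- installed by a small, behavior-only recipe
[Unit]
Description=Manufacturing test mode
Conflicts=production.target

# manufacturing-tests.service -- installed alongside the tests themselves
[Unit]
Description=Run the manufacturing test suite
[Service]
ExecStart=/usr/bin/manufacturing-tests
[Install]
WantedBy=manufacturing.target

Switching modes then comes down to which pseudo-target gets activated at boot rather than which image was flashed.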

— Brother Yocto

 

Monitoring GitLab on Kubernetes

TL;DR

If you want decent monitoring of GitLab deployed via GitLab’s helm charts, skip the built-in Prometheus config and go straight to InfluxDB. Make sure you enable InfluxDB’s UDP interface and replace the autogenerated, default retention policy with a 1h policy. Use GitLab’s Grafana dashboards by using jq to extract the dashboard node from each .json file and then changing the datasource name to match yours.

Distributed systems have pros and cons

A few weeks ago, I migrated our self-hosted GitLab Omnibus install to a Kubernetes deployment using GitLab’s helm charts. Since moving everyone over to using this instance for their day-to-day activities, the whole system has fallen over once or twice.  The beauty of putting GitLab on Kubernetes is that horizontal autoscaling will spin up additional app servers as load increases.  The downside is that there are a lot more pieces that can fail compared to an Omnibus install.

After the 2nd outage, I realized I needed better ways to get data about the state of the system.  As it was, I had to guess what was going wrong based on the observed end behavior.  Well, GitLab’s chart sets up Prometheus monitoring out of the box.  Let’s see what we can do with that.

Skip Prometheus, go straight to InfluxDB

At first glance, not much.  Prometheus was happily scraping all the GitLab pods and storing the data in time series but there were no dashboards or charts.  The only available option was using Prometheus’s built-in Expression Browser which is oriented toward ad-hoc queries.  I took the suggestion from Prometheus’s and GitLab’s docs and deployed Grafana to handle data visualization.

I can’t be the only person wanting to monitor GitLab installs, right? GitLab’s documentation points you to https://gitlab.com/gitlab-org/grafana-dashboards which has a dashboard for monitoring Omnibus installs. GitLab chart deployments aren’t quite the same as Omnibus but they’re kinda close. Not close enough. This dashboard provides only a high-level overview of process uptime and it doesn’t seem to work with how Prometheus is labeling the data.

The other folder in that repo is a pile of dashboards used by GitLab.com to monitor their public infrastructure. If it’s good enough for them, maybe it’ll be good enough for me.  At first, I couldn’t get any of the dashboards to import. Reading the README, these are formatted as API requests rather than just the dashboard. A quick pass through jq to extract the dashboard node and I can import them into Grafana!  Woohoo!
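
For reference, the extraction is a one-liner (this assumes the exported files keep the dashboard under a top-level "dashboard" key, as the README describes):

mkdir -p importable; for f in *.json; do jq '.dashboard' "$f" > "importable/$f"; done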

Uh, all of the queries are failing. Peeking at a few, it seems GitLab.com has evolved from using Prometheus to InfluxDB and the dashboards have a mix of the two. Most of the more detailed dashboards seem to use InfluxDB so let’s go that route. After deploying InfluxDB via Helm (and turning on the UDP API and creating a database) and telling GitLab to send data there (https://docs.gitlab.com/ee/administration/monitoring/performance/gitlab_configuration.html), I can see measurements arriving in InfluxDB. Add InfluxDB as a datasource to Grafana, change the datasource name in the dashboards, and a few of the charts start to work!  But not all of them.

Diving in further to these queries, it seems GitLab.com uses InfluxDB’s retention policies and continuous queries rather extensively. These dashboards depend on the latter for a variety of things.  How am I going to recreate these? Poking around on GitLab’s repos, I stumbled across https://gitlab.com/gitlab-org/influxdb-management which seems to set all of this up. First attempt at running it throws an error:

rake aborted!
InfluxDB::QueryError: retention policy duration must be greater than the shard duration
/usr/local/bundle/gems/influxdb-0.3.5/lib/influxdb/client/http.rb:91:in `handle_successful_response'
/usr/local/bundle/gems/influxdb-0.3.5/lib/influxdb/client/http.rb:15:in `block in get'
/usr/local/bundle/gems/influxdb-0.3.5/lib/influxdb/client/http.rb:53:in `connect_with_retry'
/usr/local/bundle/gems/influxdb-0.3.5/lib/influxdb/client/http.rb:11:in `get'
/usr/local/bundle/gems/influxdb-0.3.5/lib/influxdb/query/core.rb:100:in `execute'
/usr/local/bundle/gems/influxdb-0.3.5/lib/influxdb/query/retention_policy.rb:26:in `alter_retention_policy'
/Rakefile:25:in `block in <top (required)>'
/usr/local/bundle/gems/rake-12.3.1/exe/rake:27:in `<top (required)>'
/usr/local/bin/bundle:23:in `load'
/usr/local/bin/bundle:23:in `<main>'
Tasks: TOP => default => policies
(See full trace by running task with --trace)

Huh? I didn’t set up any retention policies. Searching for the error message, I find https://gitlab.com/gitlab-org/influxdb-management/issues/3. Well, at least it’s a known issue.  InfluxDB’s default configuration autogenerates a default retention policy when a database is created. That retention policy keeps data forever, which implies a shard group duration of 168h. Shard durations cannot be changed, so I deleted that retention policy and created a new one with a 1h retention since that’s what these GitLab scripts try to do.
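
For reference, the swap looks roughly like this from the influx CLI (a sketch; the database name and the new policy’s name are placeholders for whatever you created):

DROP RETENTION POLICY "autogen" ON "gitlab"
CREATE RETENTION POLICY "default" ON "gitlab" DURATION 1h REPLICATION 1 DEFAULT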

Finally, I can deploy at least a few of these dashboards (after modifying the datasource name) and they just work.  Which ones are actually useful is still an exercise ahead of me.

Unpacking Xilinx 7-Series Bitstreams: Part 3

In Part 2, I detailed the configuration packet format and how the programming operation is conveyed as a sequence of register writes. As we move up to the configuration memory frame layer (see layer diagram in Part 1), the  construction of a Xilinx 7-series device becomes important. A major clue about this relationship comes from the Frame Address Register, a key register in any programming operation.

Frame Address Register (FAR)

Just before the first write of a configuration frame to the Frame Data Register, Input (FDRI), a 32-bit value is written to the Frame Address Register (FAR) to indicate where the new frame data should be placed in the configuration memory. What do these addresses tell us about the device construction?  UG470 tells us these 32-bit addresses are composed of the following fields:

Xilinx 7-Series Frame Address Register

Blocks are further described as only using a few specific values:

  • 000 CLB, I/O, CLK
  • 001 Block RAM content
  • 010 CFG_CLB

This looks very much like a geographical addressing scheme similar to the Bus/Device/Function scheme used for PCI Configuration Space. That is, the device is constructed of a hierarchy of component groupings. In the case of PCI, a system may contain 256 buses, each of which may contain up to 32 devices.  Further, each device may contain up to 8 functions. Identifying a specific function requires identifying the bus and device that contain it as well. Thus, function addresses in PCI are a tuple of (bus, device, function). Balancing the complexity of the address decoding logic against address compactness leads to representing each component of the tuple as a binary number with the minimum number of bits needed for the maximum allowed value, then concatenating those numbers into a single binary number padded to a common alignment size (8, 16, 32, or 64 bits).
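
As a quick illustration of that packing with PCI’s limits (8 bits for the bus, 5 for the device, 3 for the function; just a sketch, not from any particular library):

# Pack a PCI (bus, device, function) tuple into a single 16-bit address.
def pci_bdf(bus, device, function):
    assert 0 <= bus < 256 and 0 <= device < 32 and 0 <= function < 8
    return (bus << 8) | (device << 3) | function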

Inferring Device Architecture

What does this tell us about 7-series devices then? A device is constructed of some hierarchy of block types, device halves, rows, columns, and minor frames. The FAR field descriptions in UG470 give us a few more details:

  • Rows are numbered outward from the device center-line in the direction specified by the top/bottom bit
  • Columns are numbered with zero on the left and increasing to the right
  • Minor frames are contained within a column

Looking back at the FAR description, it seems that the fields are ordered such that each component contains all the components to the right of its field in FAR. That matches the traditional meanings of the terms used except for the relationship between block types and halves. If rows are numbered growing outward from the center-line of the device, that implies there are only two halves in a device, not two per block type. Recall that only three block type values are used. What if, instead of being part of the hierarchy, the block type selects one of multiple data buses going into the hierarchy? That would match the terms used better.

Combining those conclusions with the field bit-widths in FAR, we end up with the following addressing limits:

  • Block Types: 3
  • Halves: 2
  • Rows: up to 32 per half
  • Columns: up to 1024 per row
  • Minor frames: up to 128 per column
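
Here is a small sketch of how those widths carve up a FAR value; the bit positions assume the fields are packed from the least-significant bit up in the order minor, column, row, top/bottom, block type, which matches the FAR layout in UG470:

# Split a 32-bit FAR value into the fields sized above.
def decode_far(far):
    return {
        "minor":      far         & 0x7f,   # 7 bits: up to 128 frames per column
        "column":     (far >> 7)  & 0x3ff,  # 10 bits: up to 1024 columns per row
        "row":        (far >> 17) & 0x1f,   # 5 bits: up to 32 rows per half
        "top_bottom": (far >> 22) & 0x1,    # which half of the device
        "block_type": (far >> 23) & 0x7,    # 000 CLB/IO/CLK, 001 BRAM content, 010 CFG_CLB
    }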

Putting all of that together, the device looks something like this: each of the three block types has its own configuration memory space, split into top and bottom halves; each half contains rows, each row contains columns, and each column contains a set of minor frames.

Next: Verifying Against a Bitstream

In Part 4, I’ll look at the FAR and FDRI writes done by a Vivado-generated bitstream and see how well it matches my inferred device architecture. I’ll use a bitstream debugging feature to figure out the valid frame addresses in a device which results in a few surprises.

Unpacking Xilinx 7-Series Bitstreams: Part 2


In Part 1, I walked through the various file formats generated by Xilinx tools, the BIT file format header, and the physical interface layer of the bitstream protocol stack. In this part, I’ll dive into the gory details of the configuration packet format and how those packets control the overall programming operation.

As I briefly mentioned in Part 1, the physical interface layer transports a stream of packetized register read/write operations that constitute the configuration packet layer. The sync word that begins the packet stream also serves to establish 32-bit alignment within the overall byte stream carried by the physical interface layer. From that point on, all data formats are described in 32-bit, big-endian words.

Note that the physical interface used may impose limitations on the features available at the configuration packet layer. I’ll call out these limitations when describing features that are impacted.

Configuration Packet Format

Xilinx 7-Series Configuration Packet Header

Each configuration packet begins with a one-word header. The contents of the header change according to the header type which is contained in the top 3 bits. Only types 1 and 2 are officially documented though type 0 exists in practice as we’ll see later.

Xilinx 7-Series Configuration Packet Type 1 Header

Type 1 packets specify a complete operation to be performed with opcodes being defined as follows:

  • 00 – NOP
  • 01 – Read
  • 10 – Write

For a NOP, the remaining header fields are unused for the operation itself, but the address field still matters when a type 2 packet follows. Reads and writes are directed at a specific register specified in the address field. While 14 bits of address space are defined in the header, 7-series devices seem to only use the lower 5 bits. Payload length is the number of data words to be read or written. These data words immediately follow the header, with writes being sent to the device and reads being sent from the device. Note: reads are only available over the SelectMAP and JTAG physical interfaces.

Xilinx 7-Series Configuration Packet Type 2 Header

Type 2 packets are used when the payload length exceeds the 11 bits available in a type 1 packet. Note the lack of an address field. Remember how I mentioned the address field being important for a NOP? The address field of the last type 1 packet is reused as the target of a type 2 packet. Only the address is reused so, in theory, a type 1 read could be followed by a type 2 write. In practice, I’ve only seen type 2 used immediately after a zero-length type 1 write.
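
A rough decoder for the two documented header types, using the field layout described in UG470 (header type and opcode in the top bits, then address and word count); treat it as a sketch rather than a reference implementation:

# Decode a one-word configuration packet header.
def decode_packet_header(word):
    hdr_type = (word >> 29) & 0x7
    opcode   = (word >> 27) & 0x3              # 00 NOP, 01 read, 10 write
    if hdr_type == 1:
        return {"type": 1, "opcode": opcode,
                "address": (word >> 13) & 0x3fff,   # register address (low 5 bits used)
                "word_count": word & 0x7ff}         # up to 2047 payload words
    if hdr_type == 2:
        return {"type": 2, "opcode": opcode,
                "word_count": word & 0x7ffffff}     # address reused from the prior type 1 packet
    return {"type": hdr_type, "raw": word}          # e.g. the undocumented type 0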

Configuration Registers

Addresses specified in configuration packets are mapped 1-to-1 to a set of variable-width registers. Most of the registers are a single word wide but FDRI and FDRO are notable exceptions. I have not experimented with what happens if a packet attempts a short write or a write past the end of the register.

These registers provide low-level control over the chip including boot configuration and programming. Many of the available knobs are related to tuning physical interface behavior and which status/debug signals are available on pins. A few of the key registers used during programming are:

  • IDCODE
    Before writing to the configuration memory, a 32-bit device ID code must be written to this register. Reads from the register return the attached device’s ID code.
  • CRC
    When a packet is received by the device, it automatically updates an internal CRC calculation to include the contents of that packet. A write to the CRC register checks that the calculated CRC matches the expected value written to the register. This CRC check only provides integrity checking of the packet stream, not the configuration memory contents, and is not required for programming. If you are modifying a bitstream, CRC writes can simply be removed instead of recalculating them.
  • Command
    Most of the programming sequence is implemented as a state machine that is controlled via one-shot actions. Writes to this register arm an action that, depending on the action requested, may be triggered immediately or delayed until some other condition is met.
    Important Note: During autoincremented frame writes (described later), the current command is rewritten during every autoincrement. This has the effect of rearming the action on every frame written.
  • Frame Address Register (FAR)
    Writes to this register set the starting address for the next frame read or write.
  • FDRI
    When a frame is written to FDRI, the frame data is written to the configuration memory address specified by FAR. If the write to FDRI contains more than one frame, FAR is autoincremented at the end of each frame.

For more details on these registers and others I didn’t mention, refer to Table 5-23 in UG470.

Programming Sequence

I’ll only be providing a high-level overview of a programming sequence for a complete write of the configuration memory. Partial reconfiguration uses a slightly different sequence that I’ll document in a separate post. I highly suggest looking at a bitstream as there are details such as NOPs that I am omitting that may be important when actually programming a device.

  1. Write TIMER: 0x00000000
    Disable the watchdog timer
  2. Write WBSTAR: 0x00000000
    On the next device boot, start with the bitstream at address zero.  This may be different if the bitstream contains a multi-boot configuration.
  3. Write COMMAND: 0x00000000
    Switch to the NULL command.
  4. Write COMMAND: 0x00000007
    Reset the calculated CRC to zero.
  5. Write register 0x13: 0x00000000
    Undocumented register. No idea what this does yet.
  6. Write Configuration Option Register 0: 0x02003fe5
    Set up the timing of various device startup operations, such as which startup cycle to wait in until MMCMs have locked and which clock settings to use during startup.
  7. Write Configuration Option Register 1: 0x00000000
    Write defaults for various device options, such as the page size used to read from BPI and whether continuous configuration memory CRC calculation is enabled.
  8. Write IDCODE: 0x0362c093
    Tell the device that this is a bitstream for a XC7A50T. If the device is an XC7A50T, configuration memory writes will be enabled.
  9. Write COMMAND: 0x00000009
    Activate the clock configuration specified in Configuration Option Register 0. Up to this point, the device was using whatever clock configuration the last loaded bitstream used.
  10. Write MASK: 0x00000401
    Set a bit-wise mask that is applied to subsequent writes to Control 0 and Control 1. This seems unnecessary for programming but is used to toggle certain bits in those registers instead of using precomputed values. It might make more sense in a use case where the exact value of Control 0 or Control 1 is unknown but a bit needs to be flipped.
  11. Write Control 0: 0x00000501
    Due to the previous write to MASK, 0x401 is actually written to this register, which is the default value. This mostly disables fallback boot mode and masks out memory bits in the configuration memory during readback.
  12. Write MASK: 0x00000000
    Clear the write mask for Control 0 and Control 1
  13. Write Control 1: 0x00000000
    Control 1 is officially undocumented. See Part 3 for at least one bit I’ve figured out.
  14. Write FAR: 0x00000000
    Set starting address for frame writes to zero.
  15. Write COMMAND: 0x00000001
    Arm a frame write. The write will occur on the next write to FDRI.
  16. Write FDRI: <547420 words>
    Write desired configuration to configuration memory. Since more than 101 words are written, FAR autoincrementing is being used. 547420 words is 5420 frames. Between each frame, COMMAND will be rewritten with 0x1 which re-arms the next write. Note that the configuration memory space is fragmented and autoincrement moves to the next valid address. As we’ll see in Part 3, this is a rather annoying feature that makes reading bitstream configuration data a bit more challenging.
  17. Write COMMAND: 0x0000000A
    Update the routing and configuration flip-flops with the new values in the configuration memory. At this point, the device configuration has been updated but the device is still in programming mode.
  18. Write COMMAND: 0x00000003
    Tell the device that the last configuration frame has been received. The device re-enables its interconnect.
  19. Write COMMAND: 0x00000005
    Arm the device startup sequence. Documentation claims both a valid CRC check and a DESYNC command are required to trigger the startup. In practice, a bitstream with no CRC checks works just fine.
  20. Write COMMAND: 0x0000000D
    Exit programming mode. After this, the device will ignore data on the configuration interfaces until the sync word is seen again. This also triggers the previously armed device startup sequence.

Next: Configuration Memory

In Part 3, I’ll cover how configuration memory is addressed and how that gives us some clues about the physical chip structure. I’ll also look at a very curious detail that violates the protocol stack encapsulation.

Unpacking Xilinx 7-Series Bitstreams: Part 1

For the past few months, I’ve been writing Xilinx 7-series bitstream manipulation tools for SymbiFlow. After building a mostly-working implementation in C++, I started to wonder what a generic framework for FPGA development tools would look like. Inspired by LLVM and partly as an excuse to learn Rust, I started a new project, Gaffe, to prototype ideas. With Xilinx 7-series fresh in my mind, I chose to reimplement the bitstream parsing as a first step. While most of the bitstream format is documented in UG470, the 7 Series FPGAs Configuration User Guide, subtle details are omitted that I hope to clarify here.

File Formats Galore

Xilinx 7-series devices can be programmed through multiple interfaces (JTAG, SPI, BPI, SelectMAP) and multiple tools (iMPACT, SPI programmer, SVF player). This has led to multiple file formats being devised for different scenarios:

  • BIT – Binary file containing BIT header followed by raw bitstream
  • RBT – ASCII file with text header followed by raw bitstream written as literal ‘0’ and ‘1’ characters for each bit
  • BIN – Raw bitstream
  • MCS – PROM file format (includes address and checksum info)

Even though a BIN contains all the necessary data for programming a part, BIT is the default format generated by Vivado’s write_bitstream command and is what I’ll focus on.

BIT Header

Thankfully, this header format was documented on FPGA FAQ back in 2001.  It’s mostly a Tag-Length-Value (TLV) format but with a few quirks.  The information provided (design name, build date/time, target part name) is purely informational (ignored by the chip).  The main reason I mention this format is that most other tools (Vivado, openocd) require this header to be present.
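
A rough reader for that header, based on my recollection of the FPGA FAQ write-up (the fixed preamble length and per-tag length sizes are assumptions worth double-checking against that page):

# Parse the BIT header: a fixed preamble, then single-letter tags 'a' through 'e'.
import struct

def read_bit_header(data):
    pos = 2 + 9 + 2            # skip the fixed preamble (length word, magic bytes, 0x0001)
    fields = {}
    while pos < len(data):
        key = chr(data[pos]); pos += 1
        if key == 'e':          # 'e' has a 4-byte length and carries the raw bitstream
            (length,) = struct.unpack_from(">I", data, pos)
            fields["bitstream"] = data[pos + 4:pos + 4 + length]
            break
        (length,) = struct.unpack_from(">H", data, pos)   # 'a'-'d': 2-byte length, then string
        fields[key] = data[pos + 2:pos + 2 + length].rstrip(b"\x00").decode()
        pos += 2 + length
    return fields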

Layers of encapsulation

Past the BIT header, the raw bitstream is literally a stream of bytes that is interpreted by a 7-series part’s programming logic. Similar to networking protocols, part programming is built out of a protocol stack.

Stack of three layers from bottom to top: physical interface, packets, and frames.
Xilinx 7-Series Bitstream Layers

Starting at the base is the physical interface (JTAG, SPI, etc) used to connect to the part.  The physical interface carries a packetized format that controls the overall programming operation through a series of register reads/writes.  Part of the register set provides indexed access to the top layer of the stack: configuration memory frames.

Physical Interface Layer

As multiple physical interfaces are available, the electrical details depend on the specific interface you choose to use.  The only common piece of the physical interface layer is the detection of a sync word (0xAA995566) that begins the parsing of packets.

Any data received prior to the sync word will not be parsed as a packet but may have other effects.  For example, a few of the physical interfaces allow for multiple parallel bus widths.  The interface hardware looks for a magic sequence, called the bus width detection pattern, to determine the width of the parallel interface.  For details on how this works, see Chapter 5 of UG470.
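
As a small sketch of that boundary, here is one way to find the sync word in a raw byte stream (a BIN file, say) and start reading 32-bit big-endian words after it:

# Locate the sync word, then yield the 32-bit big-endian words that follow it.
import struct

SYNC_WORD = b"\xaa\x99\x55\x66"

def words_after_sync(data):
    start = data.find(SYNC_WORD)
    if start < 0:
        raise ValueError("no sync word found")
    start += len(SYNC_WORD)
    usable = (len(data) - start) // 4 * 4       # drop any trailing partial word
    for (word,) in struct.iter_unpack(">I", data[start:start + usable]):
        yield word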

Moving up the stack

In Part 2, I’ll be describing the packet format and the overall programming sequence.  Part 3 will focus on configuration memory frame addressing and a few places where this careful encapsulation gets violated.

Ubiquiti EdgeRouter Lite USB surgery

Keeping up with updates on all the devices in my home lab usually isn’t a big deal. Windows, macOS, Android, and iOS mostly take care of themselves. Ubuntu needs the occasional touch of ‘apt upgrade’ but is otherwise painless. My printer hasn’t had new firmware in over a year (there’s definitely a vuln or two that won’t ever be patched). Given the typical monotony, I was caught off guard when the firmware update on my router failed.

A/B firmware updates

Instead of a typical WiFi router, I use a Ubiquiti EdgeRouter Lite to get some fancy features like BGP routing. Being geared toward WISPs and other industrial applications, firmware upgrades are made safer by maintaining two OS images. When a new update is loaded onto the device, it overwrites the inactive OS image (the one not currently running). Once the update is written, checksummed, and ready to go, the device is rebooted and the bootloader loads the new OS image. If the new OS image fails for some reason, the bootloader will fall back to the old OS image that was running previously.

When failures repeat themselves

I SSH’d to the router and ran the update command:

kc8apf@router0:~$ add system image https://dl.ubnt.com/firmwares/edgemax/v1.9.7/ER-e100.v1.9.7+hotfix.4.5024004.tar
Trying to get upgrade file from https://dl.ubnt.com/firmwares/edgemax/v1.9.7/ER-e100.v1.9.7+hotfix.4.5024004.tar
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 80.8M 100 80.8M 0 0 4740k 0 0:00:17 0:00:17 --:--:-- 5082k
Download suceeded
Checking upgrade image...Done
Preparing to upgrade...Failed to mount partition

Uh, oh. This is about the time that I remember that this particular router failed to boot after a power outage a few months ago. The internal storage had been corrupted. I ran a recovery procedure, got it back online, and promptly forgot about it. Seems like the internal storage is failing. What can I do about it?

Surprisingly serviceable

From the Ubiquiti Community forums, I learned that the internal storage is actually a USB thumb drive and can be replaced. Perfect! I have a pile of extra USB drives that I keep around for OS installers. I grabbed an 8GB drive and promptly realized I had no idea how to load the OS onto the new drive. Sure, I could pull the old drive and copy it but that would copy any corruption already present. A little more hunting in the forums turned up mkeosimg, a script that prepares a USB drive from an EdgeRouter firmware update.

The README is pretty clear that I need a Linux machine to use it. A quick look at the source confirms that it relies on a handful of tools such as parted and mkfs.ext3. The only Linux machines I have at home are rack-mount servers running containers. Getting access to a USB port on those is a bit of a chore so I took the “easier” approach of firing up an Ubuntu VM on VirtualBox.

Why so slow?!?

Almost immediately, mkeosimg tells me it is “Creating ER-e100.v1.9.7+hotfix.4.5024004-configured.img” and then sits there for a few minutes. What the heck is it doing? Creating a zero-filled file of course! While waiting for dd to copy 8GB worth of zeros to a file inside a VM is an exciting way to spend time, I like to live a fast and dangerous life. I replaced dd with a call to fallocate. fallocate simply asks the filesystem to create a file entry without actually allocating sectors for data. That is, it creates an 8GB file almost instantly because it only updates the filesystem metadata. The data sectors will be allocated as data is written to the file. Now the script runs in seconds. I have an image! I write it to a USB drive. I’m nearing the end!
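
The change amounts to something like this (a sketch; mkeosimg’s actual variable names and sizes differ):

# Before: zero-fill the whole image up front (slow, especially inside a VM)
#   dd if=/dev/zero of=ER-e100.img bs=1M count=8192
# After: create a sparse 8GB file instantly; blocks are allocated as data is written
fallocate -l 8G ER-e100.img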

Opening the case

Yes, there is a USB stick inside this router. I’m not sure I’d call it a “standard” USB stick though.

EdgeRouter-Lite internal USB drive next to a 2.5" hard drive.
2.5″ hard drive for scale.

By removing all the normal casing, the router enclosure can be slightly smaller. The problem: my new USB drive has all the regular casing. A trip to the bench grinder was in order.

Now it just barely fits in the enclosure.

Well, that was fun.

Snakes in a Sysroot

Dear Brother Yocto,
Here’s a weird bug I found on my setup: pythonnative seems to be using
python3.4.3 whereas the recipe specified 2.7.

*What I tested:*
I found the code in phosphor-settings-manager, which inherits pythonnative,
seems to be working per python 3, so added the following line into a native
python function:

python do_check_version () {
  import sys
  bb.warn(sys.version)
}
addtask check_version after do_configure before do_compile

WARNING: phosphor-settings-manager-1.0-r1 do_check_version: 3.4.3 (default,
Nov 17 2016, 01:08:31)

This is with openbmc git tip, building MACHINE=zaius

*What the recipe should be using is seemingly 2.7:*
$ cat ../import-layers/yocto-poky/meta/classes/pythonnative.bbclass

inherit python-dir
[...]

$ cat ../import-layers/yocto-poky/meta/classes/python-dir.bbclass

PYTHON_BASEVERSION = "2.7"
PYTHON_ABI = ""
PYTHON_DIR = "python${PYTHON_BASEVERSION}"
PYTHON_PN = "python"
PYTHON_SITEPACKAGES_DIR = "${libdir}/${PYTHON_DIR}/site-packages"

*Why this is bad:*
well, python2.7 and python3 have a ton of differences. A bug I discovered
is due to usage of filter(), which returns different objects between
Python2/3.

Before diving into the layers of bitbake recipes to root cause what caused
this, I want to know if you have ideas why this went wrong.

-Snakes in a Sysroot

Dear Snakes in a Sysroot,
As you’ve already discovered, the version of Python used varies depending on context within a Yocto build. Recipes, by default, are built to run on the target machine and distro. For most recipes, that is what we want: we’re building a recipe so it can be included in the rootfs. In some cases, a tool needs to be run on the machine running bitbake. An obvious example is a compiler that generates binaries for the target machine. Of course, that compiler likely has dependencies on libraries that must be generated for the machine that runs that compiler. This leads to three categories of recipes: native, cross, and target. Each of these categories places its output into different directories (sysroots) both to avoid file collisions and to allow BitBake to select which tools are available to a recipe by altering the PATH environment variable.

native
Binaries and libraries generated by these recipes are executed on the machine running bitbake (host machine). Any binary that generates other binaries (compiler, linker, etc) will also generate binaries for the host machine.

cross
Similar to native, the binaries and libraries generated by these recipes are executed on the host machine. The difference is that any binary that generates other binaries will generate binaries for the target machine.

target
Binaries and libraries generated by these recipes are executed on the target machine only.

For a recipe like Python, it is common to have a native and a target recipe (python and python-native) that use a common include file to specify the version, most of the build process, etc. That’s what you see in yocto-poky/meta/classes/python-dir.bbclass.

So, why isn’t python-native being used to run the python function in your recipe? A key insight is that you are using ‘addtask’ to tell BitBake to run this python function as a task. That means it is being run in the context of BitBake itself, not as a command executed by a recipe’s task. The sysroots described above don’t apply since a new python instance is not being executed. Instead, the recipe is run inside the existing BitBake python process. This is also how in-recipe python methods can alter BitBake’s variables and tasks (see BitBake User Manual Section 3.5.2; specifically, the ‘bb’ package is already imported and ‘d’ is available as a global variable). Since bitbake is executed directly from the command line and is a script, its shebang line tells us it will be run under python3.

If you want to run a python script as part of the recipe’s build process (perhaps a tool written in python), you’d execute that script from within a task:

do_check_version() {
  python -c '
import sys
print(sys.version)
'
}
addtask check_version after do_configure before do_compile

Since python is being executed within a task, the version of python in the selected sysroot will be used. The output will not be displayed on the console in BitBake’s output. Instead, it will be written to the task’s log file within the current build directory. If you need to do this, I highly suggest writing the script as a separate file that is included in the package rather than trying to contain it all inside the recipe.

All about those caches

Dear Brother Yocto,
When building an image, I want to include information about the metadata repository that was used by the build.  The os-release recipe seems to be made for this purpose.  I added a .bbappend to my layer with the following:

def run_git(d, cmd):
  try:
    oeroot = d.getVar('COREBASE', True)
    return bb.process.run("git --work-tree %s --git-dir %s/.git %s"
        % (oeroot, oeroot, cmd))[0].strip('\n')
  except:
    pass

python() {
  version_id = run_git(d, 'describe --dirty --long')
  if version_id:
    d.setVar('VERSION_ID', version_id)
    versionList = version_id.split('-')
    version = versionList[0] + "-" + versionList[1]
    d.setVar('VERSION', version)

  build_id = run_git(d, 'describe --abbrev=0')
  if build_id:
    d.setVar('BUILD_ID', build_id)
}

OS_RELEASE_FIELDS_append = " BUILD_ID"

The first build using this .bbappend works fine: /etc/os-release is generated with VERSION_ID, VERSION, and BUILD_ID set to the ‘git describe’ output as intended. Subsequent builds always skip this recipe’s tasks even after a commit is made to the metadata repo. What’s going on?
-Flummoxed by the Cache

Dear Flummoxed by the Cache,

As you’ve already surmised, you’ve run afoul of BitBake/Yocto’s caching systems. Note the plural. I’ve discovered three and I’m far from convinced that I’ve found them all. Understanding how they interact can be maddeningly subtle, as your example shows.

Parse Cache

Broadly speaking, BitBake operates in two phases: recipe parsing and task execution. In a clean Yocto Poky repository, there are 1000s of recipes. Reading these recipes from disk, parsing them, and performing post-processing takes a modest amount of time so naturally the results (a per-recipe set of tasks, their dependency information, and a bunch of other data) are cached. I haven’t researched the exact invalidation criteria used for this cache but suffice it to say that modifying the file or any included files is sufficient. If the cache is valid on the next BitBake invocation, the already parsed data is loaded from the cache and handed off to the execution phase.

Stamps

In the execution phase, each task is run in dependency order, often with much parallelism. Two separate but related mechanisms are used to speed up this phase: stamps and the sstate (or shared state) cache. Keep in mind that a single recipe will generate multiple tasks (fetch, unpack, patch, configure, compile, install, package, etc). Each of these tasks assumes that it operates on a common work directory (created under tmp/work/….) in dependency order. Stamps are files that record that a specific task has been completed and under what conditions (environment, variables, etc). This allows subsequent BitBake invocations to pick up where previous invocations left off (for example, running ‘bitbake -c fetch’ and then ‘bitbake -c unpack’ won’t repeat the fetch).

Shared State cache (sstate)

You’re probably wondering what the sstate cache is for then. When I attempt to build core-image-minimal, one of the tasks is generating the root filesystem. To do so, that task needs the packages produced by all the included recipes. Worst case, everything is built from scratch as happens with a fresh build directory. Obviously that is slow so caching comes into play again. If the stamps discussed previously are available, the tasks can be restarted from the latest, valid stamp.

That helps with repeated, local builds but often Yocto is used in teams where changes submitted upstream will invalidate a bunch of tasks on the next merge. Since Yocto is almost hermetic, the packages generated by a submitter’s builds will usually match the packages I generate locally as long as the recipe and environment are the same. The sstate cache maps the task hash to the output of that task. When a recipe includes a _setscene() suffixed version of a task, the sstate cache is used to retrieve the output instead of rerunning the task. This, combined with sharing an sstate cache, allows build results to be shared between users on a team. Read the Shared State Cache section in the Yocto Manual for details on how this works and how to set up a shared cache.
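
For example, pointing a build at a team-shared cache is typically a single conf setting (the server URL here is a placeholder):

SSTATE_MIRRORS ?= "file://.* http://sstate.example.com/PATH;downloadfilename=PATH"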

Back to the original problem

So, what is going wrong in the writer’s os-release recipe? If you read the Anonymous Python Functions section in the BitBake Manual carefully, you’ll see they are run at parse time. That means the result of the function is saved in the parsing cache. Unless the cache is invalidated (usually by modifying the recipe or an imported class), the cached value of BUILD_ID will be used even if the stamps and sstate cache are invalidated. To get BUILD_ID to be re-evaluated on each run, the parse cache needs to be disabled for this recipe. That’s accomplished by setting BB_DONT_CACHE = "1" in the recipe.
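
In the .bbappend, that is just one more line (sketch):

# Skip the parse cache for this recipe so the anonymous function runs every time
BB_DONT_CACHE = "1"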

Note that the stamps and sstate cache are still enabled. There are some subtle details about making sure BitBake is aware that BUILD_ID is indirectly referenced by the do_compile task so that it gets included in the task hash (see how OS_RELEASE_FIELDS is used in os-release.bb). That ensures that the task hash changes whenever the SHA1 of the OEROOT git repo HEAD changes which means the caches will be invalidated then as well.

Confused by all the caches yet?

-Brother Yocto

On wiring diagrams

After I finished rewiring the Cobra drag car’s trunk, I found myself needing to pull a fuse to do a test and not remembering which fuse was for what. Having all the fuses and relays in one panel was a huge improvement over the original wiring but clearly it was time for labels and a wiring diagram.

Shopping for one-off labels

These labels are going to be affixed near the fuel cell and will likely see some abuse. I want a label that has a strong adhesive, is resistant to scratching, is UV resistant, and tolerates contact with fuel. This is no ordinary sticker. This is a high quality label.

I know Sticker Mule makes custom stickers. Maybe they have something that would work. Bad sign #1: minimum order quantity. I will need precisely one of each label. Having 49 extras is a bit much. A few quick searches show any custom label order is going to have a minimum order. It makes sense: setup takes time. Time to rethink my approach.

From somewhere in the back of my mind, the name Avery jumped forth. Avery makes printable labels in all sorts of shapes and sizes formatted on standard paper sizes so any inkjet or laser printer can be used. I fondly remember churning out hundreds of their mailing labels in the 90’s by running a Word mail merge and using Avery’s provided templates. Maybe I can put a coating over their labels.

Avery UltraDuty GHS Chemical Labels box

No need! Avery makes UltraDuty GHS Chemical Labels designed for labeling chemical containers. They are promoted as great for custom NFPA and HMIS labels. That should work perfectly in a trunk. Oh, look! They even have a template! Wait… it’s in Word? I don’t even own a copy of Word anymore. Hmm, what’s this? They have an online tool instead? Ok. I’m generally against these kinds of tools but I’m getting desperate.  Five minutes later, I close the tab in frustration. Well, I have a label but I’m on my own to lay out my designs.

Troublesome Tools

I am a programmer, not an artist.  I know how to draft with pencil, paper, and ruler but I’d much rather write code.  SVG seemed like a natural fit for the lineart and text I planned to use on these labels.  Not having used SVG before, I read a few how-to articles before diving into the specification.  Laying out the basic shape of the label was pretty easy but filling it with content became baffling. There were so many options for how to proceed, each with different ramifications.  Two evenings of trying to figure it out was all I could muster.  I deleted my work in progress and took a day off.

As I took some time working on other projects, I realized that wiring diagrams are a form of undirected graph rendered orthogonally. GraphViz is the common tool used by programmers far and wide for this.  I quickly wrote up a few lines of DOT that represented some of the major components in my diagram (battery, emergency disconnect, fuse panel) and the +12V and ground wiring between them.  GraphViz, specifically neato, generated a diagram that was technically correct but looked horrible.  Even with ortho layout selected, lines ran underneath boxes and boxes were laid out nonsensically.  As I researched how to pin boxes to specific locations in the diagram and use shapes other than boxes, I discovered these were mostly kludges.  Yet another tool was requiring huge amounts of fiddling to get anything close to reasonable output.  Ugh!

Fine.  I’ll draw it if I have to.

I’m a Mac user.  Part of being a Mac user is discovering The Omni Group’s fabulous collection of tools.  If I’m going to draw a wiring diagram, I’m going to do it with OmniGraffle, a tool similar in concept to Visio.  In an evening, I drew both a label for the fuse panel and a wiring diagram for the trunk.  By grouping primitive shapes, I could create complex shapes that help with identifying components.  Adding magnet points to those shapes anchors the ends of lines (wires) so they track as I move the shapes and line paths around.  For all that I wanted to avoid drawing, the process ended up being straightforward and effective.

Cobra Commander trunk labels made with OmniGraffle

I’ll post pictures of the installed labels when I get a chance.

 

HDDG22 Talk: ECUs and their sensors

Chris Gammell asked me to give a presentation at his Hardware Developers Didactic Galactic meetup in San Francisco.  I enjoy talking about things I work on so I didn’t hesitate to say yes.  I’m pretty sure he was expecting me to talk about Google’s Zaius server, an open-hardware POWER9 server design that I brought to a previous meetup.  Giving presentations about my day job takes some review and approvals, even for open designs.  Instead, I offered to talk about engine control, a subject I’ve been spending many nights on recently.

I’ve had to learn about engine tuning and EFI systems in particular to get Cobra Commander back on the track.  Since that knowledge seems to be spread in bits and pieces, I put together what I’ve learned into a presentation focusing on the sensors used and how they fit into the engine control systems.

Both slides and video are available.  Chris did a writeup for SupplyFrame’s blog as well.