Monitoring GitLab on Kubernetes

TL;DR

If you want decent monitoring of GitLab deployed via GitLab’s helm charts, skip the built-in Prometheus config and go straight to InfluxDB. Make sure you enable InfluxDB’s UDP interface and replace the autogenerated default retention policy with a 1h policy. To use GitLab’s Grafana dashboards, extract the dashboard node from each .json file with jq and change the datasource name to match yours.

Distributed systems have pros and cons

A few weeks ago, I migrated our self-hosted GitLab Omnibus install to a Kubernetes deployment using GitLab’s helm charts. Since I moved everyone over to this instance for their day-to-day work, the whole system has fallen over once or twice. The beauty of putting GitLab on Kubernetes is that horizontal autoscaling will spin up additional app servers as load increases. The downside is that there are a lot more pieces that can fail compared to an Omnibus install.

After the second outage, I realized I needed better ways to get data about the state of the system. As it was, I had to guess what was going wrong based on the observed end behavior. Well, GitLab’s chart sets up Prometheus monitoring out of the box. Let’s see what we can do with that.

Skip Prometheus, go straight to InfluxDB

At first glance, not much. Prometheus was happily scraping all the GitLab pods and storing the data in time series, but there were no dashboards or charts. The only available option was Prometheus’s built-in Expression Browser, which is oriented toward ad-hoc queries. I took the suggestion from Prometheus’s and GitLab’s docs and deployed Grafana to handle data visualization.

I can’t be the only person wanting to monitor GitLab installs, right? GitLab’s documentation points you to https://gitlab.com/gitlab-org/grafana-dashboards which has a dashboard for monitoring Omnibus installs. GitLab chart deployments aren’t quite the same as Omnibus but they’re kinda close. Not close enough. This dashboard provides only a high-level overview of process uptime and it doesn’t seem to work with how Prometheus is labeling the data.

The other folder in that repo is a pile of dashboards used by GitLab.com to monitor their public infrastructure. If it’s good enough for them, maybe it’ll be good enough for me. At first, I couldn’t get any of the dashboards to import. Reading the README, I learned these are formatted as Grafana API requests rather than bare dashboards. A quick pass through jq to extract the dashboard node and I can import them into Grafana! Woohoo!
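For reference, here’s a minimal Python equivalent of that jq pass (the one-liner is jq '.dashboard'). It assumes each .json file wraps the dashboard under a top-level "dashboard" key, which is the layout Grafana’s dashboard API uses:

import json
import sys

# Extract the "dashboard" node from an API-request-formatted export so
# Grafana's dashboard import will accept it.
with open(sys.argv[1]) as f:
    request = json.load(f)

json.dump(request["dashboard"], sys.stdout, indent=2)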

Uh, all of the queries are failing. Peeking at a few, it seems GitLab.com has evolved from using Prometheus to InfluxDB and the dashboards have a mix of the two. Most of the more detailed dashboards seem to use InfluxDB so let’s go that route. After deploying InfluxDB via Helm (and turning on the UDP API and creating a database) and telling GitLab to send data there (https://docs.gitlab.com/ee/administration/monitoring/performance/gitlab_configuration.html), I can see measurements arriving in InfluxDB. Add InfluxDB as a datasource to Grafana, change the datasource name in the dashboards, and a few of the charts start to work!  But not all of them.
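Confirming that measurements really are arriving is a one-liner with the influxdb Python client. A quick sketch; the hostname and database name are stand-ins for whatever your Helm deployment uses:

from influxdb import InfluxDBClient

# Hostname and database are assumptions; substitute your own values.
client = InfluxDBClient(host="influxdb.monitoring.svc", port=8086,
                        database="gitlab")

# If the UDP interface and database are set up correctly, GitLab's
# measurements should show up here shortly after traffic hits the app.
for point in client.query("SHOW MEASUREMENTS").get_points():
    print(point["name"])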

Diving further into these queries, it seems GitLab.com uses InfluxDB’s retention policies and continuous queries rather extensively. These dashboards depend on the latter for a variety of things. How am I going to recreate these? Poking around on GitLab’s repos, I stumbled across https://gitlab.com/gitlab-org/influxdb-management which seems to set all of this up. The first attempt at running it throws an error:

rake aborted!
InfluxDB::QueryError: retention policy duration must be greater than the shard duration
/usr/local/bundle/gems/influxdb-0.3.5/lib/influxdb/client/http.rb:91:in `handle_successful_response'
/usr/local/bundle/gems/influxdb-0.3.5/lib/influxdb/client/http.rb:15:in `block in get'
/usr/local/bundle/gems/influxdb-0.3.5/lib/influxdb/client/http.rb:53:in `connect_with_retry'
/usr/local/bundle/gems/influxdb-0.3.5/lib/influxdb/client/http.rb:11:in `get'
/usr/local/bundle/gems/influxdb-0.3.5/lib/influxdb/query/core.rb:100:in `execute'
/usr/local/bundle/gems/influxdb-0.3.5/lib/influxdb/query/retention_policy.rb:26:in `alter_retention_policy'
/Rakefile:25:in `block in <top (required)>'
/usr/local/bundle/gems/rake-12.3.1/exe/rake:27:in `<top (required)>'
/usr/local/bin/bundle:23:in `load'
/usr/local/bin/bundle:23:in `<main>'
Tasks: TOP => default => policies
(See full trace by running task with --trace)

Huh? I didn’t set up any retention policies. Searching for the error message, I find https://gitlab.com/gitlab-org/influxdb-management/issues/3. Well, at least it’s a known issue. InfluxDB’s default configuration autogenerates a default retention policy when a database is created. That retention policy keeps data forever, which implies a shard group duration of 168h. Shard durations cannot be changed after the fact, so I deleted that retention policy and created a new one with a 1h duration since that’s what these GitLab scripts try to do.
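Expressed as InfluxQL through the same Python client, the fix looks roughly like this. The policy and database names are assumptions (“autogen” is InfluxDB 1.x’s autogenerated policy name); match them to what the influxdb-management scripts expect:

from influxdb import InfluxDBClient

client = InfluxDBClient(host="influxdb.monitoring.svc", port=8086)

# The autogenerated policy keeps data forever, which implies a 168h
# shard group duration. Shard durations can't be shrunk in place, so
# drop the policy and recreate it with a 1h duration.
client.query('DROP RETENTION POLICY "autogen" ON "gitlab"')
client.query('CREATE RETENTION POLICY "default" ON "gitlab" '
             'DURATION 1h REPLICATION 1 DEFAULT')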

Finally, I can deploy at least a few of these dashboards (after modifying the datasource name) and they just work.  Which ones are actually useful is still an exercise ahead of me.

Unpacking Xilinx 7-Series Bitstreams: Part 3

In Part 2, I detailed the configuration packet format and how the programming operation is conveyed as a sequence of register writes. As we move up to the configuration memory frame layer (see the layer diagram in Part 1), the construction of a Xilinx 7-series device becomes important. A major clue about this relationship comes from the Frame Address Register, a key register in any programming operation.

Frame Address Register (FAR)

Just before the first write of a configuration frame to the Frame Data Register, Input Register (FDRI), a 32-bit value is written to the Frame Address Register (FAR) to indicate where the new frame data should be placed in configuration memory. What do these addresses tell us about the device construction? UG470 tells us these 32-bit addresses are composed of the following fields:

Xilinx 7-Series Frame Address Register

Block type is further described as using only a few specific values:

  • 000 CLB, I/O, CLK
  • 001 Block RAM content
  • 010 CFG_CLB

This looks very much like a geographical addressing scheme similar to the Bus/Device/Function scheme used for PCI Configuration Space. That is, the device is constructed of a hierarchy of component groupings. In the case of PCI, a system may contain 256 buses, each of which may contain up to 32 devices. Further, each device may contain up to 8 functions. Identifying a specific function requires identifying the bus and device that contain it as well; thus, function addresses in PCI are a tuple of (bus, device, function). Balancing the complexity of address decoding logic against address compactness leads to representing each component of the tuple as a binary number with the minimum number of bits needed for its maximum allowed value, then concatenating those numbers into a single binary number padded to a common alignment size (8, 16, 32, or 64 bits).
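As a concrete illustration of that packing (a hypothetical PCI example, not FPGA-specific):

def pci_config_address(bus: int, device: int, function: int) -> int:
    """Pack a PCI (bus, device, function) tuple into one number.

    8 bits for bus (0-255), 5 bits for device (0-31), and 3 bits for
    function (0-7), concatenated into a 16-bit value.
    """
    assert 0 <= bus < 256 and 0 <= device < 32 and 0 <= function < 8
    return (bus << 8) | (device << 3) | function

print(hex(pci_config_address(3, 12, 1)))  # bus 3, device 12, function 1 -> 0x361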

Inferring Device Architecture

What does this tell us about 7-series devices then? A device is constructed of some hierarchy of block types, device halves, rows, columns, and minor frames. The FAR field descriptions in UG470 give us a few more details:

  • Rows are numbered outward from the device center-line in the direction specified by the top/bottom bit
  • Columns are numbered with zero on the left and increasing to the right
  • Minor frames are contained within a column

Looking back at the FAR description, it seems that the fields are ordered such that each component contains all the components to the right of its field in FAR. That matches the traditional meanings of the terms used, except for the relationship between block types and halves. If rows are numbered growing outward from the center-line of the device, that implies there are only two halves in a device, not two per block type. Recall that only three block type values are used. What if, instead of being part of the hierarchy, the block type selects one of multiple data buses going into the hierarchy? That would better match the terms used.

Combining those conclusions with the field bit-widths in FAR, we end up with the following addressing limits (a decoder sketch follows the list):

  • Block Types: 3
  • Halves: 2
  • Rows: up to 32 per half
  • Columns: up to 1024 per row
  • Minor frames: up to 128 per column
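Here’s that decoder sketch, using my reading of the field positions in UG470’s FAR description; verify the exact bit offsets against UG470 before relying on them:

def decode_far(far: int) -> dict:
    """Split a 32-bit Frame Address Register value into its fields.

    Per UG470: [25:23] block type, [22] top/bottom half, [21:17] row,
    [16:7] column, [6:0] minor frame.
    """
    return {
        "block_type": (far >> 23) & 0x7,    # 3 block types used
        "top_bottom": (far >> 22) & 0x1,    # 2 halves
        "row":        (far >> 17) & 0x1f,   # up to 32 rows per half
        "column":     (far >> 7)  & 0x3ff,  # up to 1024 columns per row
        "minor":      far & 0x7f,           # up to 128 minor frames
    }

print(decode_far(0x00000000))  # the starting address written in Part 2's sequence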

Putting all of that together, the device looks something like this:

Next: Verifying Against a Bitstream

In Part 4, I’ll look at the FAR and FDRI writes done by a Vivado-generated bitstream and see how well it matches my inferred device architecture. I’ll use a bitstream debugging feature to figure out the valid frame addresses in a device which results in a few surprises.

Unpacking Xilinx 7-Series Bitstreams: Part 2


In Part 1, I walked through the various file formats generated by Xilinx tools, the BIT file format header, and the physical interface layer of the bitstream protocol stack. In this part, I’ll dive into the gory details of the configuration packet format and how those packets control the overall programming operation.

As I briefly mentioned in Part 1, the physical interface layer transports a stream of packetized register read/write operations that constitute the configuration packet layer. The sync word that begins the packet stream also serves to establish 32-bit alignment within the overall byte stream carried by the physical interface layer. From that point on, all data formats are described in 32-bit, big-endian words.

Note that the physical interface used may impose limitations on the features available at the configuration packet layer. I’ll call out these limitations when describing features that are impacted.

Configuration Packet Format

Xilinx 7-Series Configuration Packet Header

Each configuration packet begins with a one-word header. The contents of the header change according to the header type contained in the top 3 bits. Only types 1 and 2 are officially documented, though type 0 exists in practice as we’ll see later.

Xilinx 7-Series Configuration Packet Type 1 Header

Type 1 packets specify a complete operation to be performed with opcodes being defined as follows:

  • 00 – NOP
  • 01 – Read
  • 10 – Write

For a NOP, the remaining header fields are unused, but the address field matters for type 2 packets as described below. Reads and writes are directed at a specific register specified in the address field. While 14 bits of address space are defined in the header, 7-series devices seem to use only the lower 5 bits. Payload length is the number of data words to be read or written. These data words immediately follow the header, with writes being sent to the device and reads being sent from the device. Note: reads are only available over the SelectMAP and JTAG physical interfaces.

Xilinx 7-Series Configuration Packet Type 2 Header

Type 2 packets are used when the payload length exceeds the 11 bits available in a type 1 packet. Note the lack of an address field. Remember how I mentioned the address field being important for a NOP? The address field of the last type 1 packet is reused as the target of a type 2 packet. Only the address is reused so, in theory, a type 1 read could be followed by a type 2 write. In practice, I’ve only seen type 2 used immediately after a zero-length type 1 write.
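Putting the two layouts together, here’s a header decoder sketch. The bit positions are my reading of the header tables in UG470:

OPCODES = {0b00: "NOP", 0b01: "READ", 0b10: "WRITE"}

def decode_packet_header(word: int) -> dict:
    """Decode one 32-bit configuration packet header word."""
    header_type = (word >> 29) & 0x7
    opcode = OPCODES.get((word >> 27) & 0x3, "RESERVED")
    if header_type == 1:
        return {"type": 1, "op": opcode,
                # 14-bit address field; 7-series uses only the low 5 bits
                "address": (word >> 13) & 0x3fff,
                "word_count": word & 0x7ff}        # 11-bit payload length
    if header_type == 2:
        # No address field: the address of the last type 1 packet is reused
        return {"type": 2, "op": opcode,
                "word_count": word & 0x7ffffff}    # 27-bit payload length
    return {"type": header_type, "op": opcode}     # type 0: undocumented

# A type 1 write of one word to register 4 (Command) encodes as 0x30008001
print(decode_packet_header(0x30008001))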

Configuration Registers

Addresses specified in configuration packets are mapped 1-to-1 to a set of variable-width registers. Most of the registers are a single word wide but FDRI and FDRO are notable exceptions. I have not experimented with what happens if a packet attempts a short write or a write past the end of the register.

These registers provide low-level control over the chip including boot configuration and programming. Many of the available knobs are related to tuning physical interface behavior and which status/debug signals are available on pins. A few of the key registers used during programming are:

  • IDCODE
    Before writing to the configuration memory, a 32-bit device ID code must be written to this register. Reads from the register return the attached device’s ID code.
  • CRC
    When a packet is received by the device, it automatically updates an internal CRC calculation to include the contents of that packet. A write to the CRC register checks that the calculated CRC matches the expected value written to the register. This CRC check only provides integrity checking of the packet stream, not the configuration memory contents, and is not required for programming. If you are modifying a bitstream, CRC writes can simply be removed instead of recalculated.
  • Command
    Most of the programming sequence is implemented as a state machine that is controlled via one-shot actions. Writes to this register arm an action that, depending on the action requested, may be triggered immediately or delayed until some other condition is met.
    Important Note: During autoincremented frame writes (described later), the current command is rewritten during every autoincrement. This has the effect of rearming the action on every frame written.
  • Frame Address Register (FAR)
    Writes to this register set the starting address for the next frame read or write.
  • FDRI
    When a frame is written to FDRI, the frame data is written to the configuration memory address specified by FAR. If the write to FDRI contains more than one frame, FAR is autoincremented at the end of each frame.

For more details on these registers and others I didn’t mention, refer to Table 5-23 in UG470.

Programming Sequence

I’ll only be providing a high-level overview of a programming sequence for a complete write of the configuration memory. Partial reconfiguration uses a slightly different sequence that I’ll document in a separate post. I highly suggest looking at a real bitstream, as there are details such as NOPs that I am omitting that may be important when actually programming a device. (An encoder sketch follows the list.)

  1. Write TIMER: 0x00000000
    Disable the watchdog timer
  2. Write WBSTAR: 0x00000000
    On the next device boot, start with the bitstream at address zero.  This may be different if the bitstream contains a multi-boot configuration.
  3. Write COMMAND: 0x00000000
    Switch to the NULL command.
  4. Write COMMAND: 0x00000007
    Reset the calculated CRC to zero.
  5. Write register 0x13: 0x00000000
    Undocumented register. No idea what this does yet.
  6. Write Configuration Option Register 0: 0x02003fe5
    Set up the timing of various device startup operations, such as which startup cycle to wait in until MMCMs have locked and which clock settings to use during startup.
  7. Write Configuration Option Register 1: 0x00000000
    Writing defaults to various device options such as the page size used to read from BPI and whether continuous configuration memory CRC calculation is enabled.
  8. Write IDCODE: 0x0362c093
    Tell the device that this is a bitstream for a XC7A50T. If the device is an XC7A50T, configuration memory writes will be enabled.
  9. Write COMMAND: 0x00000009
    Activate the clock configuration specified in Configuration Option Register 0. Up to this point, the device was using whatever clock configuration the last loaded bitstream used.
  10. Write MASK: 0x00000401
    Set a bit-wise mask that is applied to subsequent writes to Control 0 and Control 1. This seems unnecessary for programming but is used to toggle certain bits in those registers instead of using precomputed values. It might make more sense in a use case where the exact value of Control 0 or Control 1 is unknown but a bit needs to be flipped.
  11. Write Control 0: 0x00000501
    Due to the previous write to MASK, 0x401 is actually written to this register, which is the default value. This mostly disables fallback boot mode and masks out memory bits in the configuration memory during readback.
  12. Write MASK: 0x00000000
    Clear the write mask for Control 0 and Control 1
  13. Write Control 1: 0x00000000
    Control 1 is officially undocumented. See Part 3 for at least one bit I’ve figured out.
  14. Write FAR: 0x00000000
    Set starting address for frame writes to zero.
  15. Write COMMAND: 0x00000001
    Arm a frame write. The write will occur on the next write to FDRI.
  16. Write FDRI: <547420 words>
    Write desired configuration to configuration memory. Since more than 101 words are written, FAR autoincrementing is being used. 547420 words is 5420 frames. Between each frame, COMMAND will be rewritten with 0x1 which re-arms the next write. Note that the configuration memory space is fragmented and autoincrement moves to the next valid address. As we’ll see in Part 3, this is a rather annoying feature that makes reading bitstream configuration data a bit more challenging.
  17. Write COMMAND: 0x0000000A
    Update the routing and configuration flip-flops with the new values in the configuration memory. At this point, the device configuration has been updated but the device is still in programming mode.
  18. Write COMMAND: 0x00000003
    Tell the device that the last configuration frame has been received. The device re-enables its interconnect.
  19. Write COMMAND: 0x00000005
    Arm the device startup sequence. Documentation claims both a valid CRC check and a DESYNC command are required to trigger the startup. In practice, a bitstream with no CRC checks works just fine.
  20. Write COMMAND: 0x0000000D
    Exit programming mode. After this, the device will ignore data on the configuration interfaces until the sync word is seen again. This also triggers the previously armed device startup sequence.
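To make the encoding concrete, here’s a sketch that emits the first few writes of the sequence above as raw words. The register addresses are my reading of UG470 Table 5-23; this illustrates the packet format, not a complete bitstream generator:

import struct

SYNC_WORD = 0xAA995566
# Register addresses from UG470 Table 5-23 (only the ones used here)
REGS = {"CRC": 0x00, "FAR": 0x01, "FDRI": 0x02, "CMD": 0x04,
        "COR0": 0x09, "IDCODE": 0x0C, "WBSTAR": 0x10, "TIMER": 0x11}

def type1_write(reg: str, words: list) -> list:
    """Build a type 1 write packet: one header word plus the payload."""
    header = (1 << 29) | (0b10 << 27) | (REGS[reg] << 13) | len(words)
    return [header] + words

stream = [SYNC_WORD]
stream += type1_write("TIMER", [0x00000000])   # 1. disable watchdog
stream += type1_write("WBSTAR", [0x00000000])  # 2. boot from address zero
stream += type1_write("CMD", [0x00000000])     # 3. NULL command
stream += type1_write("CMD", [0x00000007])     # 4. reset the calculated CRC

# Bitstreams are big-endian 32-bit words
print(struct.pack(">%dI" % len(stream), *stream).hex())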

Next: Configuration Memory

In Part 3, I’ll cover how configuration memory is addressed and how that gives us some clues about the physical chip structure. I’ll also look at a very curious detail that violates the protocol stack encapsulation.

Unpacking Xilinx 7-Series Bitstreams: Part 1

For the past few months, I’ve been writing Xilinx 7-series bitstream manipulation tools for SymbiFlow. After building a mostly-working implementation in C++, I started to wonder what a generic framework for FPGA development tools would look like. Inspired by LLVM and partly as an excuse to learn Rust, I started a new project, Gaffe, to prototype ideas. With Xilinx 7-series fresh in my mind, I chose to reimplement the bitstream parsing as a first step. While most of the bitstream format is documented in UG470 7 Series FPGA Configuration User Guide, subtle details are omitted that I hope to clarify here.

File Formats Galore

Xilinx 7-series devices can be programmed through multiple interfaces (JTAG, SPI, BPI, SelectMAP) and multiple tools (iMPACT, SPI programmer, SVF player). This has led to multiple file formats being devised for different scenarios:

  • BIT – Binary file containing BIT header followed by raw bitstream
  • RBT – ASCII file with text header followed by raw bitstream written as literal ‘0’ and ‘1’ characters for each bit
  • BIN – Raw bitstream
  • MCS – PROM file format (includes address and checksum info)

Even though a BIN contains all the necessary data for programming a part, BIT is the default format generated by Vivado’s write_bitstream command and is what I’ll focus on.

BIT Header

Thankfully, this header format was documented on FPGA FAQ back in 2001.  It’s mostly a Tag-Length-Value (TLV) format but with a few quirks.  The information provided (design name, build date/time, target part name) is purely informational and ignored by the chip.  The main reason I mention this format is that most other tools (Vivado, openocd) require this header to be present.
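A parser sketch based on the FPGA FAQ description; the initial magic field, the length-prefixed ‘a’ key, and the 4-byte length on ‘e’ are the quirks mentioned above, and they’re worth verifying against your own files:

import struct
import sys

def parse_bit_header(data: bytes) -> dict:
    """Parse a Xilinx BIT header (format per the FPGA FAQ writeup)."""
    pos = 0
    def take(n):
        nonlocal pos
        chunk = data[pos:pos + n]
        pos += n
        return chunk

    magic_len, = struct.unpack(">H", take(2))
    take(magic_len)   # magic bytes, ignored
    take(2)           # 0x0001: quirky length prefix on the 'a' key
    fields = {}
    while pos < len(data):
        key = take(1).decode()
        if key == "e":                        # raw bitstream: 4-byte length
            length, = struct.unpack(">I", take(4))
            fields[key] = take(length)
            break
        length, = struct.unpack(">H", take(2))
        fields[key] = take(length).rstrip(b"\x00").decode()
    return fields

with open(sys.argv[1], "rb") as f:
    info = parse_bit_header(f.read())
# a=design name, b=part, c=date, d=time, e=raw bitstream
print({k: v for k, v in info.items() if k != "e"})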

Layers of encapsulation

Past the BIT header, the raw bitstream is literally a stream of bytes that is interpreted by a 7-series part’s programming logic. Similar to networking protocols, part programming is built out of a protocol stack.

Stack of three layers from bottom to top: physical interface, packets, and frames.
Xilinx 7-Series Bitstream Layers

Starting at the base is the physical interface (JTAG, SPI, etc) used to connect to the part.  The physical interface carries a packetized format that controls the overall programming operation through a series of register reads/writes.  Part of the register set provides indexed access to the top layer of the stack: configuration memory frames.

Physical Interface Layer

As multiple physical interfaces are available, the electrical details depend on the specific interface you choose to use.  The only common piece of the physical interface layer is the detection of a sync word (0xAA995566) that begins the parsing of packets.

Any data received prior to the sync word will not be parsed as a packet but may have other effects.  For example, a few of the physical interfaces allow for multiple parallel bus widths.  The interface hardware looks for a magic sequence, called the bus width detection pattern, to determine the width of the parallel interface.  For details on how this works, see Chapter 5 of UG470.
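In code, establishing alignment is just a byte-wise scan for the sync word; everything after it is parsed as 32-bit big-endian words. A minimal sketch:

SYNC_WORD = bytes.fromhex("AA995566")

def find_sync(stream: bytes) -> int:
    """Return the offset of the first packet word after the sync word.

    Everything before it (bus width detection pattern, padding, etc.)
    is skipped; 32-bit alignment is established here.
    """
    offset = stream.find(SYNC_WORD)
    if offset < 0:
        raise ValueError("no sync word found")
    return offset + 4

data = bytes.fromhex("ffffffffaa99556630008001")
print(find_sync(data))  # -> 8; packet parsing starts at this offset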

Moving up the stack

In Part 2, I’ll be describing the packet format and the overall programming sequence.  Part 3 will focus on configuration memory frame addressing and a few places where this careful encapsulation gets violated.

Ubiquiti EdgeRouter Lite USB surgery

Keeping up with updates on all the devices in my home lab usually isn’t a big deal. Windows, macOS, Android, and iOS mostly take care of themselves. Ubuntu needs the occasional touch of ‘apt upgrade’ but is otherwise painless. My printer hasn’t had new firmware in over a year (there’s definitely a vuln or two that won’t ever be patched). Given the typical monotony, I was caught off guard when the firmware update on my router failed.

A/B firmware updates

Instead of a typical WiFi router, I use a Ubiquiti EdgeRouter Lite to get some fancy features like BGP routing. Being geared toward WISPs and other industrial applications, firmware upgrades are made safer by maintaining two OS images. When a new update is loaded onto the device, it overwrites the inactive OS image (the one not currently running). Once the update is written, checksummed, and ready to go, the device is rebooted and the bootloader loads the new OS image. If the new OS image fails for some reason, the bootloader will fall back to the old OS image that was running previously.

When failures repeat themselves

I SSH’d to the router and ran the update command:

kc8apf@router0:~$ add system image https://dl.ubnt.com/firmwares/edgemax/v1.9.7/ER-e100.v1.9.7+hotfix.4.5024004.tar
Trying to get upgrade file from https://dl.ubnt.com/firmwares/edgemax/v1.9.7/ER-e100.v1.9.7+hotfix.4.5024004.tar
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100 80.8M  100 80.8M    0     0  4740k      0  0:00:17  0:00:17 --:--:-- 5082k
Download suceeded
Checking upgrade image...Done
Preparing to upgrade...Failed to mount partition

Uh, oh. This is about the time that I remember that this particular router failed to boot after a power outage a few months ago. The internal storage had been corrupted. I ran a recovery procedure, got it back online, and promptly forgot about it. Seems like the internal storage is failing. What can I do about it?

Surprisingly serviceable

From the Ubiquiti Community forums, I learned that the internal storage is actually a USB thumb drive and can be replaced. Perfect! I have a pile of extra USB drives that I keep around for OS installers. I grabbed an 8GB drive and promptly realized I had no idea how to load the OS onto the new drive. Sure, I could pull the old drive and copy it, but that would copy any corruption already present. A little more hunting in the forums turned up mkeosimg, a script that prepares a USB drive from an EdgeRouter firmware update.

The README is pretty clear that I need a Linux machine to use it. A quick look at the source confirms that it relies on a handful of tools such as parted and mkfs.ext3. The only Linux machines I have at home are rack-mount servers running containers. Getting access to a USB port on those is a bit of a chore, so I took the “easier” approach of firing up an Ubuntu VM in VirtualBox.

Why so slow?!?

Almost immediately, mkeosimg tells me it is “Creating ER-e100.v1.9.7+hotfix.4.5024004-configured.img” and then sits there for a few minutes. What the heck is it doing? Creating a zero-filled file, of course! While waiting for dd to copy 8GB worth of zeros to a file inside a VM is an exciting way to spend time, I like to live a fast and dangerous life. I replaced the dd call with fallocate, which asks the filesystem to reserve space for the file without writing any data. That is, it creates an 8GB file almost instantly because only filesystem metadata is updated; the data gets filled in as the script writes the image. Now the script runs in seconds. I have an image! I write it to a USB drive. I’m nearing the end!

Opening the case

Yes, there is a USB stick inside this router. I’m not sure I’d call it a “standard” USB stick though.

EdgeRouter-Lite internal USB drive next to a 2.5" hard drive.
2.5″ hard drive for scale.

By removing all the normal casing, the router enclosure can be slightly smaller. The problem: my new USB drive has all the regular casing. A trip to the bench grinder was in order.

Now it just barely fits in the enclosure.

Well, that was fun.

Snakes in a Sysroot

Dear Brother Yocto,
Here’s a weird bug I found on my setup: pythonnative seems to be using
python3.4.3 whereas the recipe specified 2.7.

*What I tested:*
I found that the code in phosphor-settings-manager, which inherits pythonnative,
seems to be running under Python 3, so I added the following check as a
python function:

python do_check_version () {
  import sys
  bb.warn(sys.version)
}
addtask check_version after do_configure before do_compile

WARNING: phosphor-settings-manager-1.0-r1 do_check_version: 3.4.3 (default,
Nov 17 2016, 01:08:31)

This is with openbmc git tip, building MACHINE=zaius

*What the recipe should be using is seemingly 2.7:*
$ cat ../import-layers/yocto-poky/meta/classes/pythonnative.bbclass

inherit python-dir
[...]

$ cat ../import-layers/yocto-poky/meta/classes/python-dir.bbclass

PYTHON_BASEVERSION = "2.7"
PYTHON_ABI = ""
PYTHON_DIR = "python${PYTHON_BASEVERSION}"
PYTHON_PN = "python"
PYTHON_SITEPACKAGES_DIR = "${libdir}/${PYTHON_DIR}/site-packages"

*Why this is bad:*
Well, Python 2.7 and Python 3 have a ton of differences. A bug I discovered
is due to usage of filter(), which returns a list in Python 2 but an
iterator in Python 3.

Before diving into the layers of bitbake recipes to root cause what caused
this, I want to know if you have ideas why this went wrong.

-Snakes in a Sysroot

Dear Snakes in a Sysroot,
As you’ve already discovered, the version of Python used varies depending on context within a Yocto build. Recipes, by default, are built to run on the target machine and distro. For most recipes, that is what we want: we’re building a recipe so it can be included in the rootfs. In some cases, though, a tool needs to run on the machine running bitbake. An obvious example is a compiler that generates binaries for the target machine. Of course, that compiler likely has dependencies on libraries that must be built for the machine that runs the compiler. This leads to three categories of recipes: native, cross, and target. Each of these categories places its output into a different directory (sysroot), both to avoid file collisions and to allow BitBake to select which tools are available to a recipe by altering the PATH environment variable.

native
Binaries and libraries generated by these recipes are executed on the machine running bitbake (host machine). Any binary that generates other binaries (compiler, linker, etc) will also generate binaries for the host machine.

cross
Similar to native, the binaries and libraries generated by these recipes are executed on the host machine. The difference is that any binary that generates other binaries will generate binaries for the target machine.

target
Binaries and libraries generated by these recipes are executed on the target machine only.

For a recipe like Python, it is common to have a native and a target recipe (python and python-native) that use a common include file to specify the version, most of the build process, etc. That’s what you see in yocto-poky/meta/classes/python-dir.bbclass.

So, why isn’t python-native being used to run the python function in your recipe? A key insight is that you are using ‘addtask’ to tell BitBake to run this python function as a task. That means it is being run in the context of BitBake itself, not as a command executed by a recipe’s task. The sysroots described above don’t apply since a new python instance is not being executed. Instead, the function runs inside the existing BitBake python process. This is also how in-recipe python methods can alter BitBake’s variables and tasks (see BitBake User Manual Section 3.5.2; note that the ‘bb’ package is already imported and ‘d’ is available as a global variable). Since bitbake is a script executed directly from the command line, its shebang line tells us it will run under python3.

If you want to run a python script as part of the recipe’s build process (perhaps a tool written in python), you’d execute that script from within a task:

do_check_version() {
  python -c '
import sys
print(sys.version)
'
}
addtask check_version after do_configure before do_compile

Since python is being executed within a task, the version of python in the selected sysroot will be used. The output will not be displayed in BitBake’s console output; instead, it will be written to the task’s log file within the current build directory. If you need to do this, I highly suggest writing the script as a separate file that is included in the package rather than trying to contain it all inside the recipe.

All about those caches

Dear Brother Yocto,
When building an image, I want to include information about the metadata repository that was used by the build.  The os-release recipe seems to be made for this purpose.  I added a .bbappend to my layer with the following:

def run_git(d, cmd):
  try:
    oeroot = d.getVar('COREBASE', True)
    return bb.process.run("git --work-tree %s --git-dir %s/.git %s"
        % (oeroot, oeroot, cmd))[0].strip('\n')
  except:
    pass

python() {
  version_id = run_git(d, 'describe --dirty --long')
  if version_id:
    d.setVar('VERSION_ID', version_id)
    versionList = version_id.split('-')
    version = versionList[0] + "-" + versionList[1]
    d.setVar('VERSION', version)

  build_id = run_git(d, 'describe --abbrev=0')
  if build_id:
    d.setVar('BUILD_ID', build_id)
}

OS_RELEASE_FIELDS_append = " BUILD_ID"

The first build using this .bbappend works fine: /etc/os-release is generated with VERSION_ID, VERSION, and BUILD_ID set to the ‘git describe’ output as intended. Subsequent builds always skip this recipe’s tasks even after a commit is made to the metadata repo. What’s going on?
-Flummoxed by the Cache

Dear Flummoxed by the Cache,

As you’ve already surmised, you’ve run afoul of BitBake/Yocto’s caching systems. Note the plural. I’ve discovered three and I’m far from convinced that I’ve found them all. Understanding how they interact can be maddening and subtle as your example shows.

Parse Cache

Broadly speaking, BitBake operates in two phases: recipe parsing and task execution. In a clean Yocto Poky repository, there are thousands of recipes. Reading these recipes from disk, parsing them, and performing post-processing takes a modest amount of time, so naturally the results (a per-recipe set of tasks, their dependency information, and a bunch of other data) are cached. I haven’t researched the exact invalidation criteria used for this cache, but suffice it to say that modifying the file or any included files is sufficient. If the cache is valid on the next BitBake invocation, the already-parsed data is loaded from the cache and handed off to the execution phase.

Stamps

In the execution phase, each task is run in dependency order, often with much parallelism. Two separate but related mechanisms are used to speed up this phase: stamps and sstate (or shared state) cache. Keep in mind that a single recipe will generate multiple tasks (fetch, unpack, patch, configure, compile, install, package, etc). Each of these tasks assume that they operate on a common work directory (created under tmp/work/….) in dependency order. Stamps are files that record that a specific task has been completed and under what conditions (environment, variables, etc). This allows subsequent BitBake invocations to pick up where previous invocations left off (for example running ‘bitbake -c fetch’ and then ‘bitbake -c unpack’ won’t repeat the fetch).

Shared State cache (sstate)

You’re probably wondering what the sstate cache is for then. When I attempt to build core-image-minimal, one of the tasks is generating the root filesystem. To do so, that task needs the packages produced by all the included recipes. Worst case, everything is built from scratch as happens with a fresh build directory. Obviously that is slow so caching comes into play again. If the stamps discussed previously are available, the tasks can be restarted from the latest, valid stamp.

That helps with repeated, local builds, but Yocto is often used in teams where changes submitted upstream will invalidate a bunch of tasks on the next merge. Since Yocto is almost hermetic, the packages generated by a submitter’s build will usually match the packages I generate locally as long as the recipe and environment are the same. The sstate cache maps a task hash to the output of that task. When a recipe includes a _setscene() suffixed version of a task, the sstate cache is used to retrieve the output instead of rerunning the task. Combined with a shared sstate cache, this allows build results to be shared between users on a team. Read the Shared State Cache section in the Yocto Manual for details on how this works and how to set up a shared cache.

Back to the original problem

So, what is going wrong in the writer’s os-release recipe? If you read the Anonymous Python Function section in the BitBake Manual carefully, anonymous functions are run at parse time. That means the result of the function is saved in the parse cache. Unless the cache is invalidated (usually by modifying the recipe or an imported class), the cached value of BUILD_ID will be used even if the stamps and sstate cache are invalidated. To get BUILD_ID re-evaluated on each run, the parse cache needs to be disabled for this recipe. That’s accomplished by setting BB_DONT_CACHE = "1" in the recipe, as shown below.
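In the .bbappend from the letter, that’s one added line (shown with the existing append for context):

# Force BitBake to re-parse this recipe on every invocation so the
# anonymous python function (and thus BUILD_ID) is re-evaluated.
BB_DONT_CACHE = "1"

OS_RELEASE_FIELDS_append = " BUILD_ID"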

Note that the stamps and sstate cache are still enabled. There are some subtle details around making sure BitBake knows that BUILD_ID is indirectly referenced by the do_compile task so that it gets included in the task hash (see how OS_RELEASE_FIELDS is used in os-release.bb). That ensures the task hash changes whenever the SHA1 of the OEROOT git repo HEAD changes, which means those caches will be invalidated then as well.

Confused by all the caches yet?

-Brother Yocto

On wiring diagrams

After I finished rewiring the Cobra drag car’s trunk, I found myself needing to pull a fuse to do a test and not remembering which fuse was for what. Having all the fuses and relays in one panel was a huge improvement over the original wiring but clearly it was time for labels and a wiring diagram.

Shopping for one-off labels

These labels are going to be affixed near the fuel cell and will likely see some abuse. I want a label that has a strong adhesive, is resistant to scratching, is UV resistant, and tolerates contact with fuel. This is no ordinary sticker. This is a high quality label.

I know Sticker Mule makes custom stickers. Maybe they have something that would work. Bad sign #1: minimum order quantity. I will need precisely one of each label. Having 49 extras is a bit much. A few quick searches show any custom label order is going to have a minimum order. It makes sense: setup takes time. Time to rethink my approach.

From somewhere in the back of my mind, the name Avery jumped forth. Avery makes printable labels in all sorts of shapes and sizes formatted on standard paper sizes so any inkjet or laser printer can be used. I fondly remember churning out hundreds of their mailing labels in the 90’s by running a Word mail merge and using Avery’s provided templates. Maybe I can put a coating over their labels.

Avery UltraDuty GHS Chemical Labels box

No need! Avery makes UltraDuty GHS Chemical Labels designed for labeling chemical containers. They are promoted as great for custom NFPA and HMIS labels. That should work perfectly in a trunk. Oh, look! They even have a template! Wait… it’s in Word? I don’t even own a copy of Word anymore. Hmm, what’s this? They have an online tool instead? Ok. I’m generally against these kinds of tools but I’m getting desperate.  Five minutes later, I close the tab in frustration. Well, I have a label but I’m on my own to lay out my designs.

Troublesome Tools

I am a programmer, not an artist.  I know how to draft with pencil, paper, and ruler, but I’d much rather write code.  SVG seemed like a natural fit for the line art and text I planned to use on these labels.  Not having used SVG before, I read a few how-to articles before diving into the specification.  Laying out the basic shape of the label was pretty easy, but filling it with content became baffling. There were so many options for how to proceed, each with different ramifications.  Two evenings of trying to figure it out was all I could muster.  I deleted my work in progress and took a day off.

As I spent some time on other projects, I realized that wiring diagrams are a form of undirected graph rendered orthographically. GraphViz is the common tool used by programmers far and wide for this.  I quickly wrote up a few lines of DOT that represented some of the major components in my diagram (battery, emergency disconnect, fuse panel) and the +12V and ground wiring between them.  GraphViz, specifically neato, generated a diagram that was technically correct but looked horrible.  Even with ortho layout selected, lines were going underneath boxes and boxes were laid out nonsensically.  As I researched how to pin boxes to specific locations in the diagram and use shapes other than boxes, I discovered these were mostly kludges.  Yet another tool was requiring huge amounts of fiddling to get anything close to reasonable output.  Ugh!

Fine.  I’ll draw it if I have to.

I’m a Mac user.  Part of being a Mac user is discovering The Omni Group‘s fabulous collection of tools.  If I’m going to draw a wiring diagram, I’m going to do it with OmniGraffle, a tool similar in concept to Visio.  In an evening, I drew both a label for the fuse panel and a wiring diagram for the trunk.  By grouping primitive shapes, I could create complex shapes that help with identifying components.  Adding magnet points to those shapes anchors the ends of lines (wires) that track as I move the shapes and line paths around.  For all that I wanted to avoid drawing, the process ended up being straightforward and effective.

Cobra Commander trunk labels made with OmniGraffle

I’ll post pictures of the installed labels when I get a chance.


HDDG22 Talk: ECUs and their sensors

Chris Gammell asked me to give a presentation at his Hardware Developers Didactic Galactic meetup in San Francisco.  I enjoy talking about things I work on, so I didn’t hesitate to say yes.  I’m pretty sure he was expecting me to talk about Google’s Zaius server, an open-hardware POWER9 server design that I brought to a previous meetup.  Giving presentations about my day job takes some review and approvals, even for open designs.  Instead, I offered to talk about engine control, a subject I’ve been spending many nights on recently.

I’ve had to learn about engine tuning and EFI systems in particular to get Cobra Commander back on the track.  Since that knowledge seems to be spread in bits and pieces, I put together what I’ve learned into a presentation focusing on the sensors used and how they fit into the engine control systems.

Both slides and video are available.  Chris did a writeup for SupplyFrame’s blog as well.

MoTeC M48 Teardown

After working out the serial port pinout on my MoTeC M48 (see MoTeC PCI Cable for $20), I was left with one unidentified pin.  As the same DB9 is used with both a PCI cable and an SUU, I suspect this extra pin is how the ECU distinguishes between the two.  My M48 is already running the latest firmware, so figuring out how to emulate an SUU isn’t critical, but not knowing how it works bothers me.

After trying a few things (tie pin to +5V, tie to GND), I decided I needed to open up the case and figure out where pin 12 on the ECU connector goes.

Based on the sharp, 90° bends in the traces, the whole board was definitely autorouted.  Unfortunately, the autorouter decided to route traces underneath chips, which makes tracing them out much harder.  Also, the whole board is covered with conformal coating.  On the upside, a large number of components are used to protect the inputs and outputs from out-of-spec voltages, which makes sense as incorrect wiring is bound to be a common problem in the field.

Zooming in on the main processor, I find a Motorola MC68332ACFC16 (datasheet) which is a nifty 68k variant with peripherals geared toward generating time-synchronized I/O.

Next to the main processor are a pair of Altera MAX 7000 PLDs (EPM7032LC44 datasheet).  I didn’t research enough to determine if these PLDs are used with the main processor’s time processor functions or with its expansion bus.

After a few false starts, I determined that pin 12, our extra serial pin, passes through protection circuitry before ending up at a GPIO on port E of the main processor.  There seems to be no alternate functions on that pin that make sense so next up will be reverse engineering the firmware.  Thankfully, MoTeC provides a full, unencrypted firmware as part of the M48 software on their website.  Time to buy a license for IDA.