Use b4 for kernel contributions

There is a little tool called b4 [1] that has been part of my workflow with the Linux kernel for a while. It was developed to simplify the work of maintainers, but my main use of it has been to fetch patch series from the mailing list and apply them to my local git repository during reviews. I recently noticed that it has gained a lot of handy (though still experimental) features for contributors as well, which I now want to test!

My digging into the toolbox started with a discussion I had with a friend about how I could possibly prefer an email-based workflow for FOSS work over new and fresh web-based tools like GitHub and GitLab. In web-based tools, all you have to do is push the right button, and email is so old-fashioned, isn't it?

First of all, I never hit the right button. Never. Secondly, it disturbs my workflow.

That is also my biggest reason: these "old school" tools fit my workflow perfectly:

  • I use mutt [4] to read and send my emails.
  • I use vim [5] for all text manipulation.
  • I use git [6] to manage my code.
  • I use b4 [1] in the (kernel) review process (and, from now on, in contributions).

These are the tools that I use for almost every email-based FOSS project that I'm involved in. The first three are rather common, but what about b4? There is not too much information about it out there, so I think a little introduction might be good.

/media/b4.png

So, what is b4?

The project started as a tool named get-lore-mbox [2], which later became b4. A fun fact is that the name b4 was chosen for ease of typing, and because B-4 was the precursor to Lore and Data in the Star Trek universe :-)

b4 is a tool to simplify the development workflow for patches distributed via mailing lists. It works with public-inbox archives and aims to be useful for both maintainers and developers.

Examples of what b4 can do for you:

  • Retrieve patch series from a public mailing list (e.g. lore.kernel.org)
  • Compare patch series
  • Apply patch series to your git repository
  • Prepare and send in your work
  • Retrieve code-review trailers

It is a pretty competent tool.

Install b4

First we have to install b4 on our system.

b4 is probably already available in your distribution:

Arch Linux:

$ pacman -Sy b4

Ubuntu:

$ apt-get install b4

Fedora:

$ dnf install b4

Or whatever package manager you use.

It is also possible to install it with pip:

$ python3 -m pip install --user b4

And of course, you can run it directly from the git repository [3]:

$ git clone https://git.kernel.org/pub/scm/utils/b4/b4.git
$ cd b4
$ git submodule update --init
$ pip install --user -r requirements.txt
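
If you run it from the checkout like this, there is (if I remember correctly) a wrapper script in the repository root that you can invoke in place of an installed binary; treat the exact script name as an assumption on my part:

$ ./b4.sh --version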

Review patches workflow

I use b4 shazam to fetch the latest version of a patch series and apply it to my tree. All you need to provide is the Message-ID of the thread, which you find in the email headers of the patch. For instance:

$ b4 shazam 20230820102610.755188-6-marcus.folkesson@gmail.com
Grabbing thread from lore.kernel.org/all/20230820102610.755188-6-marcus.folkesson@gmail.com/t.mbox.gz
Checking for newer revisions
Grabbing search results from lore.kernel.org
  Added from v8: 7 patches
Analyzing 17 messages in the thread
Will use the latest revision: v8
You can pick other revisions using the -vN flag
Checking attestation on all messages, may take a moment...
---
  [PATCH v8 1/6] dt-bindings: iio: adc: mcp3911: add support for the whole MCP39xx family
  [PATCH v8 2/6] iio: adc: mcp3911: make use of dev_err_probe()
  [PATCH v8 3/6] iio: adc: mcp3911: simplify usage of spi->dev
  [PATCH v8 4/6] iio: adc: mcp3911: fix indentation
  [PATCH v8 5/6] iio: adc: mcp3911: avoid ambiguity parameters in macros
  [PATCH v8 6/6] iio: adc: mcp3911: add support for the whole MCP39xx family
  ---
  NOTE: install dkimpy for DKIM signature verification
---
Total patches: 6
---
 Base: using specified base-commit b320441c04c9bea76cbee1196ae55c20288fd7a6
Applying: dt-bindings: iio: adc: mcp3911: add support for the whole MCP39xx family
Applying: iio: adc: mcp3911: make use of dev_err_probe()
Applying: iio: adc: mcp3911: simplify usage of spi->dev
Applying: iio: adc: mcp3911: fix indentation
Applying: iio: adc: mcp3911: avoid ambiguity parameters in macros
Applying: iio: adc: mcp3911: add support for the whole MCP39xx family

Or use b4 prep to create a branch for the series. Note that this will not fetch the latest version, though:

$ b4 prep -n review -F  20230820102610.755188-6-marcus.folkesson@gmail.com
Checking attestation on all messages, may take a moment...
---
  [PATCH v7 1/6] dt-bindings: iio: adc: mcp3911: add support for the whole MCP39xx family
  [PATCH v7 2/6] iio: adc: mcp3911: make use of dev_err_probe()
  [PATCH v7 3/6] iio: adc: mcp3911: simplify usage of spi->dev
  [PATCH v7 4/6] iio: adc: mcp3911: fix indentation
  [PATCH v7 5/6] iio: adc: mcp3911: avoid ambiguity parameters in macros
  [PATCH v7 6/6] iio: adc: mcp3911: add support for the whole MCP39xx family
  ---
  NOTE: install dkimpy for DKIM signature verification
---
Created new branch b4/review
Applying 6 patches
---
Applying: dt-bindings: iio: adc: mcp3911: add support for the whole MCP39xx family
Applying: iio: adc: mcp3911: make use of dev_err_probe()
Applying: iio: adc: mcp3911: simplify usage of spi->dev
Applying: iio: adc: mcp3911: fix indentation
Applying: iio: adc: mcp3911: avoid ambiguity parameters in macros
Applying: iio: adc: mcp3911: add support for the whole MCP39xx family

Once you have the patches applied to your local repository, it is easier to perform a review, as being able to jump around in the codebase gives better context. It also allows you to run scripts for sanity checks and such.

That is pretty much how I use b4 in the review process. b4 has a lot more neat features, such as fetching pull requests or generating thank-you emails when something gets merged/applied, but those are nothing I use for the moment.

Contributor's workflow

As I said, I was unaware that b4 can assist you even in the contributor's workflow. I'm so excited!

The workflow

These steps are more or less copied directly from the documentation [1]; a condensed command sequence follows the list.

  1. Prepare your patch series by using b4 prep and queueing your commits. Use git rebase -i to arrange the commits in the right order and to write good commit messages.
  2. Prepare your cover letter using b4 prep --edit-cover. You should provide a good overview of what your series does and why you think it will improve the current code.
  3. When you are almost ready to send, use b4 prep --auto-to-cc to collect the relevant addresses from your commits. If your project uses a MAINTAINERS file, this will also perform the required query to figure out who should be included on your patch series submission.
  4. Review the list of addresses that were added to the cover letter and, if you know what you're doing, remove any that you think are unnecessary.
  5. Send your series using b4 send. This will automatically reroll your series to the next version and add changelog entries to the cover letter.
  6. Await code review and feedback from maintainers.
  7. Apply any received code-review trailers using b4 trailers -u.
  8. Use git rebase -i to make any changes to the code based on the feedback you receive. Remember to record these changes in the cover letter's changelog.
  9. Unless the series is accepted upstream, GOTO 3.
  10. Clean up obsolete prep-managed branches using b4 prep --cleanup.
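
Condensed into commands, one iteration of the loop looks something like this (the branch name is made up for the example; the numbers refer to the steps above):

$ b4 prep -n my-feature        # 1: create a prep-managed branch
$ git rebase -i                # 1: arrange commits, polish messages
$ b4 prep --edit-cover         # 2: write the cover letter
$ b4 prep --auto-to-cc         # 3: collect To:/Cc: addresses
$ b4 send                      # 5: send the series and reroll the version
$ b4 trailers -u               # 7: apply received code-review trailers
$ b4 prep --cleanup            # 10: drop obsolete branches when done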

Example of usage

A Lego-loving friend of mine pointed out that a reference in some kernel documentation [6] I wrote no longer points to the site it was supposed to. That is something we are going to fix!

Just follow the steps listed above.

Start by preparing the tree with b4 prep -n pxrc -f v6.3:

$ b4 prep -n pxrc -f v6.3
Created new branch b4/pxrc
Created the default cover letter, you can edit with --edit-cover.

Now we have a branch for our work. We base our work on the v6.3 tag.

The next step is to edit the cover letter with b4 prep --edit-cover.

Here is what the first patch (the cover letter) looks like after editing:

$ git show HEAD
commit b97650b087d88d113b11cd1bc367c67c314d77a1 (HEAD -> b4/pxrc)
Author: Marcus Folkesson <marcus.folkesson@gmail.com>
Date:   Fri Aug 25 11:43:52 2023 +0200

    Remove reference to site that is no longer related
    
    Signed-off-by: Marcus Folkesson <marcus.folkesson@gmail.com>
    
    --- b4-submit-tracking ---
    # This section is used internally by b4 prep for tracking purposes.
    {
      "series": {
        "revision": 1,
        "change-id": "20230825-pxrc-8518b297cd21",
        "prefixes": []
      }
    }

Notice the meta information about this series (revision, change-id). The change-id will follow us through all versions of the patch series.

Next, commit the changes as usual with git add and git commit:

$ git add Documentation/input/devices/pxrc.rst
$ git commit --signoff
$ git show
commit 00cf9f943529c06b36e89407aecc18d46f1b028e (HEAD -> b4/pxrc)
Author: Marcus Folkesson <marcus.folkesson@gmail.com>
Date:   Thu Aug 24 15:47:24 2023 +0200

    input: docs: pxrc: remove reference to phoenix-sim
    
    The reference undeniably points to something unrelated nowadays.
    Remove it.
    
    Signed-off-by: Marcus Folkesson <marcus.folkesson@gmail.com>

diff --git a/Documentation/input/devices/pxrc.rst b/Documentation/input/devices/pxrc.rst
index ca11f646bae8..5a86df4ad079 100644
--- a/Documentation/input/devices/pxrc.rst
+++ b/Documentation/input/devices/pxrc.rst
@@ -5,7 +5,7 @@ pxrc - PhoenixRC Flight Controller Adapter
 :Author: Marcus Folkesson <marcus.folkesson@gmail.com>
 
 This driver let you use your own RC controller plugged into the
-adapter that comes with PhoenixRC [1]_ or other compatible adapters.
+adapter that comes with PhoenixRC or other compatible adapters.
 
 The adapter supports 7 analog channels and 1 digital input switch.
 
@@ -41,7 +41,7 @@ Manual Testing
 ==============
 
 To test this driver's functionality you may use `input-event` which is part of
-the `input layer utilities` suite [2]_.
+the `input layer utilities` suite [1]_.
 
 For example::
 
@@ -53,5 +53,4 @@ To print all input events from input `devnr`.
 References
 ==========
 
-.. [1] http://www.phoenix-sim.com/
-.. [2] https://www.kraxel.org/cgit/input/
+.. [1] https://www.kraxel.org/cgit/input/

Verify that the patch looks good with scripts/checkpatch.pl:

$ ./scripts/checkpatch.pl --git HEAD
total: 0 errors, 0 warnings, 22 lines checked

Commit 00cf9f943529 ("input: docs: pxrc: remove reference to phoenix-sim") has no obvious style problems and is ready for submission.

Ok, the patch is ready to be sent out to the mailing list.

Collect all To: and Cc: addresses from scripts/get_maintainer.pl with b4 prep --auto-to-cc:

$ b4 prep --auto-to-cc
Will collect To: addresses using get_maintainer.pl
Will collect Cc: addresses using get_maintainer.pl
Collecting To/Cc addresses
    + To: Dmitry Torokhov <dmitry.torokhov@gmail.com>
    + To: Jonathan Corbet <corbet@lwn.net>
    + Cc: linux-input@vger.kernel.org
    + Cc: linux-doc@vger.kernel.org
    + Cc: linux-kernel@vger.kernel.org
---
You can trim/expand this list with: b4 prep --edit-cover
Invoking git-filter-repo to update the cover letter.
New history written in 0.02 seconds...
Completely finished after 0.18 seconds.

Now we are ready to send the patch to the mailing list by invoking the b4 send command.

b4 send will automatically use the [sendemail] section of your git config to determine which SMTP server to use.

I've configured git to use Gmail as SMTP server; here is the relevant part of my ~/.gitconfig:

[sendemail]
  smtpserver = smtp.gmail.com
  smtpuser = marcus.folkesson@gmail.com
  smtpserverport = 587
  smtpencryption = tls
  smtpssl = true
  chainreplyto = false
  confirm = auto

b4 send will send the patches to the mailing list and prepare for version 2 of the patch series by increasing the version number and creating a new tag.
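
If you want to sanity-check the outgoing emails first, b4 can (at the time of writing) dump the rendered messages to a directory or send the whole series to yourself instead of to the actual recipients:

$ b4 send -o /tmp/presend    # write the messages to a directory, do not send
$ b4 send --reflect          # send everything to yourself only
$ b4 send                    # the real thing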

Other features

Compare between versions

As I only have one version of the patch (and there will probably not be a version 2), I have no use for the cool compare feature. However, b4 lets you compare your versions simply by running:

b4 prep --compare-to v1

Trailers

Going through all mail threads and collecting all trailer tags can be painstaking work. b4 will do this for you and magically put the tags into the right patches.

Unfortunately, I forgot to mention my friend in the original patch, so I sent another mail [7] with the Suggested-by: Mark Olsson <mark@markolsson.se> tag.

Fetch the tag with b4 trailers -S -u (-S because the tag was sent by me and not Mark):

$ b4 trailers -S -u
Calculating patch-ids from commits, this may take a moment...
Checking change-id "20230824-pxrc-doc-1addbaa2250f"
Grabbing search results from lore.kernel.org
---
  input: docs: pxrc: remove reference to phoenix-sim
    + Suggested-by: Mark Olsson <mark@markolsson.se>
---
Invoking git-filter-repo to update trailers.
New history written in 0.03 seconds...
Completely finished after 0.17 seconds.
Trailers updated.

As we can see, the Suggested-by tag is now applied to the patch in the right place:

commit 650264be66f0e589cf67a49f769ddc7d51e076cb (HEAD -> b4/pxrc)
Author: Marcus Folkesson <marcus.folkesson@gmail.com>
Date:   Thu Aug 24 15:47:24 2023 +0200

    input: docs: pxrc: remove reference to phoenix-sim
    
    The reference undeniably points to something unrelated nowadays.
    Remove it.
    
    Suggested-by: Mark Olsson <mark@markolsson.se>
    Signed-off-by: Marcus Folkesson <marcus.folkesson@gmail.com>

Magic.

Further reading

The creator of the tool, Konstantin Ryabitsev, has an excellent YouTube video [8] where he demonstrates the usage of b4.

Linux wireless regulatory domains

I had a case with an embedded system that was supposed to act as a WiFi access point on the 5GHz band. The hardware was capable, and the system managed to act as a client on 5GHz networks, so everything looked good.

However, the system could not create an access point on certain frequencies. Why is that? It is all about regulatory domains!

/media/tux-radio.png

Regulatory domains

Radio regulations apply to all devices that transmit in the radio spectrum. The Linux kernel complies with these regulations by making the regulatory restrictions a direct part of the cfg80211 configuration API that all (new) wireless device drivers use.

Radio regulation has not always been so tightly integrated into the Linux kernel. The integration is a result of addressing vendor concerns [6] about getting Linux-based products certified against all the (geographically dependent) radio regulatory authorities out there.

Before that, wireless drivers used to be proprietary blobs that we loaded into our kernel. Nowadays, more and more vendors have a FOSS-driven development model for such drivers, which is what we strive for.

To build trust with the chip vendors, so that they can consider FOSS drivers a real alternative, we stick to a number of principles.

Principles

There are a few principles [1] that the Linux kernel follows in order to fulfill the requirements on use of the radio spectrum:

  • It should be reasonably impossible for a user to fail to comply with local regulations either unwittingly or by accident.
  • Default configurations should err in favor of more restrictive behavior in order to protect unwitting users.
  • Configurations that have no known compliant usage should not be part of the 'official' kernel tree.
  • Configurations that are compliant only under special circumstances (e.g. with special licenses) should not be part of the 'official' kernel tree. Any exceptions should have their legal requirements clearly marked and those options should never be configured by default.
  • Configurations that disable regulatory enforcement mechanisms should not be part of the 'official' kernel tree.
  • The kernel should rely on userland components to determine regulatory policy. Consequently, the kernel's regulatory enforcement mechanisms should be flexible enough to cover known or reasonably anticipated regulatory policies.
  • It is the moral duty of responsible distribution vendors, software developers, and community members to make every good faith effort to ensure proper compliance with applicable regulations regarding wireless communications.

The overall approach is "better safe than sorry" with respect to radio regulations. In other words, if no local configuration is set up, the system falls back to the more restrictive world regulatory domain.

An example of such behaviour is that the system is not allowed to initiate radio communication on frequencies that are not globally allowed.

Integration

CRDA

(Used pre Linux v4.15.)

CRDA [3], the Central Regulatory Domain Agent, is a userspace agent responsible for reading and interpreting the regulatory.bin file and updating the regulatory domains.

CRDA is intended to be triggered by uevents from the kernel (via udev) upon changes in the regulatory domain, and to set up the new regulations.

The udev rule to do this may look like this:

KERNEL=="regulatory*", ACTION=="change", SUBSYSTEM=="platform", RUN+="/sbin/crda"

Nowadays, as of kernel v4.15 (commit [2], "cfg80211: support loading regulatory database as firmware file"), CRDA is no longer needed; the regulatory database is read by the Linux kernel directly as a firmware file during boot.

wireless-regdb

wireless-regdb [4] is the regulatory database used by Linux. The db.txt file in the repository contains regulatory information for each domain.

The output from this project is regulatory.db, which is loaded by the kernel as firmware. The integrity of regulatory.db is ensured by verifying its built-in RSA signature against a list of public keys in a preconfigured directory.

Although it is possible to build regulatory.db without any RSA signature checking, it is highly recommended not to do so; if the regulatory database is compromised in some way, we could end up with a product that violates the radio regulations.
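
On a system where this is set up correctly, the database and its detached signature are typically found where the kernel firmware loader looks for them:

$ ls /lib/firmware/regulatory.db*
/lib/firmware/regulatory.db
/lib/firmware/regulatory.db.p7s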

wireless-regdb and Yocto

A side note for Yocto users.

The wireless-regdb recipe is part of oe-core [5] and should be included in your image if you intend to use wireless LAN at all. wireless-regdb-static should be used with kernels >= v4.15, while wireless-regdb is intended to be used with CRDA.

In other words, add:

IMAGE_INSTALL:append = " wireless-regdb-static "

to your Yocto image.

Hands on

iw [7] is the nl80211-based tool we use to configure wireless devices in Linux.

Here we will take a look at what the regulations may look like.

World regulatory domain

# iw reg get
global
country 00: DFS-UNSET
        (755 - 928 @ 2), (N/A, 20), (N/A), PASSIVE-SCAN
        (2402 - 2472 @ 40), (N/A, 20), (N/A)
        (2457 - 2482 @ 20), (N/A, 20), (N/A), AUTO-BW, PASSIVE-SCAN
        (2474 - 2494 @ 20), (N/A, 20), (N/A), NO-OFDM, PASSIVE-SCAN
        (5170 - 5250 @ 80), (N/A, 20), (N/A), AUTO-BW, PASSIVE-SCAN
        (5250 - 5330 @ 80), (N/A, 20), (0 ms), DFS, AUTO-BW, PASSIVE-SCAN
        (5490 - 5730 @ 160), (N/A, 20), (0 ms), DFS, PASSIVE-SCAN
        (5735 - 5835 @ 80), (N/A, 20), (N/A), PASSIVE-SCAN
        (57240 - 63720 @ 2160), (N/A, 0), (N/A)

Country 00 is the world regulatory domain. This could be the result of a system that failed to load the regulatory database.

Look at the output from dmesg to verify:

$ dmesg | grep cfg80211
[    3.268852] cfg80211: Loading compiled-in X.509 certificates for regulatory database
[    3.269107] cfg80211: failed to load regulatory.db

As a result, these are the restrictions we have on the 5GHz band:

# iw list
[...]
                Frequencies:
                        * 5040 MHz [8] (disabled)
                        * 5060 MHz [12] (disabled)
                        * 5080 MHz [16] (disabled)
                        * 5170 MHz [34] (disabled)
                        * 5190 MHz [38] (20.0 dBm) (no IR)
                        * 5210 MHz [42] (20.0 dBm) (no IR)
                        * 5230 MHz [46] (20.0 dBm) (no IR)
                        * 5180 MHz [36] (20.0 dBm) (no IR)
                        * 5200 MHz [40] (20.0 dBm) (no IR)
                        * 5220 MHz [44] (20.0 dBm) (no IR)
                        * 5240 MHz [48] (20.0 dBm) (no IR)
                        * 5260 MHz [52] (20.0 dBm) (no IR, radar detection)
                        * 5280 MHz [56] (20.0 dBm) (no IR, radar detection)
                        * 5300 MHz [60] (20.0 dBm) (no IR, radar detection)
                        * 5320 MHz [64] (20.0 dBm) (no IR, radar detection)
                        * 5500 MHz [100] (20.0 dBm) (no IR, radar detection)
                        * 5520 MHz [104] (20.0 dBm) (no IR, radar detection)
                        * 5540 MHz [108] (20.0 dBm) (no IR, radar detection)
                        * 5560 MHz [112] (20.0 dBm) (no IR, radar detection)
                        * 5580 MHz [116] (20.0 dBm) (no IR, radar detection)
                        * 5600 MHz [120] (20.0 dBm) (no IR, radar detection)
                        * 5620 MHz [124] (20.0 dBm) (no IR, radar detection)
                        * 5640 MHz [128] (20.0 dBm) (no IR, radar detection)
                        * 5660 MHz [132] (20.0 dBm) (no IR, radar detection)
                        * 5680 MHz [136] (20.0 dBm) (no IR, radar detection)
                        * 5700 MHz [140] (20.0 dBm) (no IR, radar detection)
                        * 5745 MHz [149] (20.0 dBm) (no IR)
                        * 5765 MHz [153] (20.0 dBm) (no IR)
                        * 5785 MHz [157] (20.0 dBm) (no IR)
                        * 5805 MHz [161] (20.0 dBm) (no IR)
                        * 5825 MHz [165] (20.0 dBm) (no IR)
[...]

We can see that the no-IR flag is set for almost all frequencies on the 5GHz band. Please note that no-IR is not the same as disabled; it simply means that we cannot initiate radiation on those frequencies.

"Initiating radiation" simply covers all modes of operation that require us to initiate the radiation first: think of acting as an access point, IBSS, mesh node or P2P master.

We can still use the frequencies, though; there is no problem connecting to an access point (we are not the one initiating the radiation) on them.

Local regulatory domain

When a proper regulatory database is loaded into the system, we can set up the local regulatory domain instead of the global one.

Set Swedish (SE) as our local regulatory domain:

# iw reg set SE
# iw reg get
global
country SE: DFS-ETSI
        (2400 - 2483 @ 40), (N/A, 20), (N/A)
        (5150 - 5250 @ 80), (N/A, 23), (N/A), NO-OUTDOOR, AUTO-BW
        (5250 - 5350 @ 80), (N/A, 20), (0 ms), NO-OUTDOOR, DFS, AUTO-BW
        (5470 - 5725 @ 160), (N/A, 26), (0 ms), DFS
        (5725 - 5875 @ 80), (N/A, 13), (N/A)
        (5945 - 6425 @ 160), (N/A, 23), (N/A), NO-OUTDOOR
        (57000 - 71000 @ 2160), (N/A, 40), (N/A)

And we are now allowed to use the 5GHz band with other restrictions:

# iw list
[...]
                Frequencies:
                        * 5040 MHz [8] (disabled)
                        * 5060 MHz [12] (disabled)
                        * 5080 MHz [16] (disabled)
                        * 5170 MHz [34] (23.0 dBm)
                        * 5190 MHz [38] (23.0 dBm)
                        * 5210 MHz [42] (23.0 dBm)
                        * 5230 MHz [46] (23.0 dBm)
                        * 5180 MHz [36] (23.0 dBm)
                        * 5200 MHz [40] (23.0 dBm)
                        * 5220 MHz [44] (23.0 dBm)
                        * 5240 MHz [48] (23.0 dBm)
                        * 5260 MHz [52] (20.0 dBm) (no IR, radar detection)
                        * 5280 MHz [56] (20.0 dBm) (no IR, radar detection)
                        * 5300 MHz [60] (20.0 dBm) (no IR, radar detection)
                        * 5320 MHz [64] (20.0 dBm) (no IR, radar detection)
                        * 5500 MHz [100] (26.0 dBm) (no IR, radar detection)
                        * 5520 MHz [104] (26.0 dBm) (no IR, radar detection)
                        * 5540 MHz [108] (26.0 dBm) (no IR, radar detection)
                        * 5560 MHz [112] (26.0 dBm) (no IR, radar detection)
                        * 5580 MHz [116] (26.0 dBm) (no IR, radar detection)
                        * 5600 MHz [120] (26.0 dBm) (no IR, radar detection)
                        * 5620 MHz [124] (26.0 dBm) (no IR, radar detection)
                        * 5640 MHz [128] (26.0 dBm) (no IR, radar detection)
                        * 5660 MHz [132] (26.0 dBm) (no IR, radar detection)
                        * 5680 MHz [136] (26.0 dBm) (no IR, radar detection)
                        * 5700 MHz [140] (26.0 dBm) (no IR, radar detection)
                        * 5745 MHz [149] (13.0 dBm)
                        * 5765 MHz [153] (13.0 dBm)
                        * 5785 MHz [157] (13.0 dBm)
                        * 5805 MHz [161] (13.0 dBm)
                        * 5825 MHz [165] (13.0 dBm)
[...]

Regulatory flags

Some of the flags reported by iw may not be obvious at first glance. Here is an explanation of some of them:

Flag            Meaning
(none)          Can be used without restrictions.
disabled        Disabled.
NO-OUTDOOR      MUST be used indoors only.
DFS             MUST be used with DFS, regardless of indoor or outdoor.
SRD             MUST comply with SRD requirements, regardless of indoor or outdoor.
NO-OUTDOOR/DFS  MUST be used with DFS and indoors only.
NO-OUTDOOR/TPC  MUST be used with TPC and indoors only.
DFS/TPC         MUST be used with DFS and TPC.
DFS/TPC + SRD   MUST be used with DFS and TPC, and comply with SRD requirements.
  • DFS stands for Dynamic Frequency Selection and is a channel allocation scheme used to prevent electromagnetic interference with systems that predate Wi-Fi.
  • TPC stands for Transmit Power Control, which is a mechanism to automatically reduce the transmission output power when other networks are within range.
  • SRD stands for Short-Range Device and covers low-power transmitters typically limited to the range of 24-100 mW ERP.

Add support for MCP39XX in Linux kernel

I've maintained the MCP3911 driver in the Linux kernel for some time and continuously add support for new features [1] upon request from people and companies.

Microchip has several ICs in this series of ADCs that work similarly to the MCP3911. Actually, all of them are register compatible except the MCP3911 itself. The ICs I've extended support for are the MCP3910, MCP3912, MCP3913, MCP3914, MCP3918 and MCP3919.

The main difference between these ICs, from the driver's perspective, is the number of channels, ranging from 1 to 8, and that the register map is not the same for all devices.

/media/mcp39xx.png

Implementation

This is a rather small patch without any fanciness, but it shows how to do this without the macro magic you find in Zephyr [2].

Add compatible strings

The Linux driver infrastructure binds a certain device to a driver by a string (or other unique identifiers, such as VID/PID for USB). When, for example, the compatible property of a device tree node matches a device driver, a device is instantiated and the probe function is called.

As a single device driver may handle multiple similar ICs whose properties differ, we have to differentiate them somehow. This is done by providing device-specific data to each instance of the device. This data is called "driver data" or "private data" and is part of the device lookup table.

E.g. the driver_data field of the struct spi_device_id:

struct spi_device_id {
	char name[SPI_NAME_SIZE];
	kernel_ulong_t driver_data;	/* Data private to the driver */
};

Or the data field of the struct of_device_id:

/*
 * Struct used for matching a device
 */
struct of_device_id {
	char	name[32];
	char	type[32];
	char	compatible[128];
	const void *data;
};

For this driver, the driver data in these ID tables looks as follows:

static const struct of_device_id mcp3911_dt_ids[] = {
-       { .compatible = "microchip,mcp3911" },
+       { .compatible = "microchip,mcp3910", .data = &mcp3911_chip_info[MCP3910] },
+       { .compatible = "microchip,mcp3911", .data = &mcp3911_chip_info[MCP3911] },
+       { .compatible = "microchip,mcp3912", .data = &mcp3911_chip_info[MCP3912] },
+       { .compatible = "microchip,mcp3913", .data = &mcp3911_chip_info[MCP3913] },
+       { .compatible = "microchip,mcp3914", .data = &mcp3911_chip_info[MCP3914] },
+       { .compatible = "microchip,mcp3918", .data = &mcp3911_chip_info[MCP3918] },
+       { .compatible = "microchip,mcp3919", .data = &mcp3911_chip_info[MCP3919] },
    { }
};
MODULE_DEVICE_TABLE(of, mcp3911_dt_ids);

static const struct spi_device_id mcp3911_id[] = {
-       { "mcp3911", 0 },
+       { "mcp3910", (kernel_ulong_t)&mcp3911_chip_info[MCP3910] },
+       { "mcp3911", (kernel_ulong_t)&mcp3911_chip_info[MCP3911] },
+       { "mcp3912", (kernel_ulong_t)&mcp3911_chip_info[MCP3912] },
+       { "mcp3913", (kernel_ulong_t)&mcp3911_chip_info[MCP3913] },
+       { "mcp3914", (kernel_ulong_t)&mcp3911_chip_info[MCP3914] },
+       { "mcp3918", (kernel_ulong_t)&mcp3911_chip_info[MCP3918] },
+       { "mcp3919", (kernel_ulong_t)&mcp3911_chip_info[MCP3919] },
    { }
};

The driver data is then reachable in the probe function via spi_get_device_match_data():

    adc->chip = spi_get_device_match_data(spi);
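
The rest of the probe function can then stay device-agnostic. Here is a hedged sketch of how the chip info might be consumed further down in probe (member names are from the structure shown below; error handling trimmed):

	/* Register whatever channels the matched device actually has */
	indio_dev->channels = adc->chip->channels;
	indio_dev->num_channels = adc->chip->num_channels;

	/* Device-specific configuration hides behind a callback */
	ret = adc->chip->config(adc);
	if (ret)
		return ret;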

Driver data

The driver data is used to distinguish between different devices and provides enough information to make it possible for the driver to handle all differences between the ICs in a common way.

The driver data for these devices looks as follows:

+struct mcp3911_chip_info {
+       const struct iio_chan_spec *channels;
+       unsigned int num_channels;
+
+       int (*config)(struct mcp3911 *adc);
+       int (*get_osr)(struct mcp3911 *adc, int *val);
+       int (*set_osr)(struct mcp3911 *adc, int val);
+       int (*get_offset)(struct mcp3911 *adc, int channel, int *val);
+       int (*set_offset)(struct mcp3911 *adc, int channel, int val);
+       int (*set_scale)(struct mcp3911 *adc, int channel, int val);
+};
+

Description of the structure members:

  • .channels is a pointer to an array of struct iio_chan_spec where all ADC and timestamp channels are specified.
  • .num_channels is the number of channels.
  • .config is a function pointer to configure the device.
  • .get_* and .set_* are function pointers used to get/set certain registers.

A struct mcp3911_chip_info is created for each type of supported IC:

+static const struct mcp3911_chip_info mcp3911_chip_info[] = {
+       [MCP3910] = {
+               .channels = mcp3910_channels,
+               .num_channels = ARRAY_SIZE(mcp3910_channels),
+               .config = mcp3910_config,
+               .get_osr = mcp3910_get_osr,
+               .set_osr = mcp3910_set_osr,
+               .get_offset = mcp3910_get_offset,
+               .set_offset = mcp3910_set_offset,
+               .set_scale = mcp3910_set_scale,
+       },
+       [MCP3911] = {
+               .channels = mcp3911_channels,
+               .num_channels = ARRAY_SIZE(mcp3911_channels),
+               .config = mcp3911_config,
+               .get_osr = mcp3911_get_osr,
+               .set_osr = mcp3911_set_osr,
+               .get_offset = mcp3911_get_offset,
+               .set_offset = mcp3911_set_offset,
+               .set_scale = mcp3911_set_scale,
+       },
+       [MCP3912] = {
+               .channels = mcp3912_channels,
+               .num_channels = ARRAY_SIZE(mcp3912_channels),
+               .config = mcp3910_config,
+               .get_osr = mcp3910_get_osr,
+               .set_osr = mcp3910_set_osr,
+               .get_offset = mcp3910_get_offset,
+               .set_offset = mcp3910_set_offset,
+               .set_scale = mcp3910_set_scale,
+       },
+       [MCP3913] = {
+               .channels = mcp3913_channels,
+               .num_channels = ARRAY_SIZE(mcp3913_channels),
+               .config = mcp3910_config,
+               .get_osr = mcp3910_get_osr,
+               .set_osr = mcp3910_set_osr,
+               .get_offset = mcp3910_get_offset,
+               .set_offset = mcp3910_set_offset,
+               .set_scale = mcp3910_set_scale,
+       },
+       [MCP3914] = {
+               .channels = mcp3914_channels,
+               .num_channels = ARRAY_SIZE(mcp3914_channels),
+               .config = mcp3910_config,
+               .get_osr = mcp3910_get_osr,
+               .set_osr = mcp3910_set_osr,
+               .get_offset = mcp3910_get_offset,
+               .set_offset = mcp3910_set_offset,
+               .set_scale = mcp3910_set_scale,
+       },
+       [MCP3918] = {
+               .channels = mcp3918_channels,
+               .num_channels = ARRAY_SIZE(mcp3918_channels),
+               .config = mcp3910_config,
+               .get_osr = mcp3910_get_osr,
+               .set_osr = mcp3910_set_osr,
+               .get_offset = mcp3910_get_offset,
+               .set_offset = mcp3910_set_offset,
+               .set_scale = mcp3910_set_scale,
+       },
+       [MCP3919] = {
+               .channels = mcp3919_channels,
+               .num_channels = ARRAY_SIZE(mcp3919_channels),
+               .config = mcp3910_config,
+               .get_osr = mcp3910_get_osr,
+               .set_osr = mcp3910_set_osr,
+               .get_offset = mcp3910_get_offset,
+               .set_offset = mcp3910_set_offset,
+               .set_scale = mcp3910_set_scale,
+       },
+};

Thanks to this, all differences between the ICs are kept in one place and the driver code is common for all devices. See the code below for how the oversampling ratio is set; the differences between ICs are handled by the callback function:

        case IIO_CHAN_INFO_OVERSAMPLING_RATIO:
                for (int i = 0; i < ARRAY_SIZE(mcp3911_osr_table); i++) {
                        if (val == mcp3911_osr_table[i]) {
-                               val = FIELD_PREP(MCP3911_CONFIG_OSR, i);
-                               ret = mcp3911_update(adc, MCP3911_REG_CONFIG, MCP3911_CONFIG_OSR,
-                                               val, 2);
+                               ret = adc->chip->set_osr(adc, i);
                                break;
                        }
                }
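
Judging from the removed lines, the MCP3911-specific set_osr callback presumably just wraps the old register update one-to-one; a sketch of what it might look like:

static int mcp3911_set_osr(struct mcp3911 *adc, int val)
{
	val = FIELD_PREP(MCP3911_CONFIG_OSR, val);
	return mcp3911_update(adc, MCP3911_REG_CONFIG, MCP3911_CONFIG_OSR,
			val, 2);
}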

Checkpoint-restore in Linux

I'm working on power saving features for a project based on a Raspberry Pi Zero. Unfortunately, the RPi does not support features such as hibernation to disk or suspend to RAM because of how the processor is constructed (the GPU is actually the main processor). So I was looking for alternatives.

That's when I stumbled upon CRIU ([1], [2]), Checkpoint-Restore In Userspace. (I actually started out reading about PTRACE_SEIZE [4] and ptrace parasite code [3] and found out that CRIU is one of their users.)

/media/CRIU.png

CRIU

CRIU is a project that implements checkpoint/restore functionality by freezing the state of a process and its subtasks. CRIU makes use of ptrace [4] to stop the process by attaching to it with a PTRACE_SEIZE request. It then injects parasite code to dump the process's memory pages into image files, creating a recoverable checkpoint.

Such process information includes memory pages (collected from /proc/$PID/smaps, /proc/$PID/map_files/ and /proc/$PID/pagemap), but also information about open files, credentials, registers, task states and more.

My first concern was that this could not possibly work very well; what about open sockets (especially clients)? It turns out that CRIU already handles most of that stuff. There are only a few scenarios that cannot be dumped [5] yet.

Usage

CRIU has many possible use cases. Some of them are:

  • Container live migration
  • Slow-boot services speed up
  • Seamless kernel upgrade
  • "Save" ability in apps (games), that don't have such
  • Snapshots of apps

My use case for now is just to save a snapshot of an application and power off the CPU module, to later be able to power on and restore it.

PTRACE

For those not familiar with ptrace(2):

The ptrace() system call provides a means by which one process (the "tracer") may observe and control the execution of another process (the "tracee"), and examine and change the tracee's memory and registers. It is primarily used to implement breakpoint debugging and system call tracing.

ptrace is the only interface that the Linux kernel provides to poke around and fetch information from inside another application (think debuggers and/or tracers).

PTRACE_SEIZE was introduced in Linux 3.4:

PTRACE_SEIZE (since Linux 3.4)
       Attach  to  the  process  specified  in  pid,  making  it  a  tracee  of the calling process.  Unlike PTRACE_ATTACH,
       PTRACE_SEIZE does not stop the process.  Group-stops are reported as PTRACE_EVENT_STOP and WSTOPSIG(status)  returns
       the  stop  signal.  Automatically attached children stop with PTRACE_EVENT_STOP and WSTOPSIG(status) returns SIGTRAP
       instead of having SIGSTOP signal delivered  to  them.   execve(2)  does  not  deliver  an  extra  SIGTRAP.   Only  a
       PTRACE_SEIZEd  process can accept PTRACE_INTERRUPT and PTRACE_LISTEN commands.  The "seized" behavior just described
       is inherited by  children  that  are  automatically  attached  using  PTRACE_O_TRACEFORK,  PTRACE_O_TRACEVFORK,  and
       PTRACE_O_TRACECLONE.  addr must be zero.  data contains a bit mask of ptrace options to activate immediately.

       Permission to perform a PTRACE_SEIZE is governed by a ptrace access mode PTRACE_MODE_ATTACH_REALCREDS check; see below.

But it took a while until the checkpoint/restore capability was created for this purpose, see capabilities(7):

CAP_CHECKPOINT_RESTORE (since Linux 5.9)
       •  Update /proc/sys/kernel/ns_last_pid (see pid_namespaces(7));
       •  employ the set_tid feature of clone3(2);
       •  read the contents of the symbolic links in /proc/pid/map_files for other processes.

       This capability was added in Linux  5.9  to  separate  out  checkpoint/restore  functionality  from  the  overloaded
       CAP_SYS_ADMIN capability.

Example

I wrote a simple C application that just counts a variable up each second and prints the value:

    #include <stdio.h>
    #include <unistd.h>
    int main()
    {
        printf("My PID is %i\n", getpid());
        int count = 0;
        while (1) {
            printf("%d\n", count++);
            sleep(1);
        }
    }

Compile the code:

    gcc main.c -o main

Start the application:

    [17:26:03]marcus@goliat:~/tmp/count$ ./main 
    My PID is 2483855
    0
    1
    2
    3
    4
    5
    6

The process is started with process ID 2483855.

We can now dump the process and store its state. We have to add the --shell-job flag to tell CRIU that it was spawned from a shell (and therefore has some file descriptors open to PTYs that need to be restored).

    [17:27:26]marcus@goliat:~/tmp/criu$ sudo criu dump -t 2483855 --shell-job
    Warn  (compel/arch/x86/src/lib/infect.c:356): Will restore 2483855 with interrupted system call

CRIU needs the CAP_SYS_ADMIN or the CAP_CHECKPOINT_RESTORE capability. Set it with:

    setcap cap_checkpoint_restore+eip /usr/bin/criu

The criu dump command generates a bunch of files that store the current state of the application. These include open file descriptors, registers, stack frames, memory maps and more:

    [17:28:00]marcus@goliat:~/tmp/criu$ ls -1
    core-2483855.img
    fdinfo-2.img
    files.img
    fs-2483855.img
    ids-2483855.img
    inventory.img
    mm-2483855.img
    pagemap-2483855.img
    pages-1.img
    pstree.img
    seccomp.img
    stats-dump
    timens-0.img
    tty-info.img

We can now restore the application from where we stopped:

    [17:29:07]marcus@goliat:~/tmp/criu$ sudo criu restore --shell-job
    27
    28
    29
    30

This is cool. But what is even cooler is that you may restore the application on a different host(!).
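
A minimal sketch of such a migration, assuming criu and the same binary exist on the target, and that the hosts are similar enough (same architecture, compatible kernels); the host name and paths are made up:

    $ rsync -a ./ otherhost:tmp/criu/
    $ ssh -t otherhost 'cd tmp/criu && sudo criu restore --shell-job'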

Summary

I do not know if CRIU is applicable to what I want to achieve right now, but it is a cool project that I will probably find a use for in the future, so it is a welcome addition to my toolbox.

meta-readonly-rootfs-overlay

meta-readonly-rootfs-overlay [1] is a meta layer for the Yocto project [2] originally written by Claudius Heine. I took over the maintainership in May 2022 to keep it updated with recent Yocto releases and to keep adding functionality.

I've used it in a couple of industrial products so far, and I think it deserves some extra attention as I find it so useful.

Why does this exists?

Having a read-only root file system is useful for many scenarios:

  • Separating user-specific changes from system configuration, and being able to find the differences
  • Allowing factory reset by deleting the user-specific changes
  • Having a fallback image in case the user-specific changes made the root file system no longer bootable.

Because some data on the root file system changes on first boot or while the system is running, just mounting the complete root file system as read-only breaks many applications. There are different solutions to this problem:

  • Symlinking/bind-mounting files and directories that could potentially change while the system is running onto a writable partition
  • Instead of having a read-only root file system, mounting a writable overlay root file system that uses a read-only file system as its base and writes changed data to another writable partition.

To implement the first solution, the developer needs to analyse which files need to change and then create symlinks for them. When doing a factory reset, the developer needs to overwrite every linked file with the factory configuration to avoid dangling symlinks/binds. While this is more work on the developer side, it might increase security, because only files that are symlinked/bind-mounted can be changed.

This meta layer provides the second solution. Here, no investigation of writable files is needed, and a factory reset can be done by just deleting all files on, or formatting, the writable volume.

How does it work?

The implementation makes use of OverlayFS [3], which is a union mount filesystem that combines multiple underlying mount points into one. OverlayFS uses the terms upper and lower filesystem, where the upper filesystem is applied as an overlay on the lower filesystem.

The resulting merged directory is a combination of the two, where files in the upper filesystem override files in the lower.

/media/meta-readonly-rootfs-overlay.png
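
To get a feel for what the init script does, this is roughly what a plain OverlayFS mount looks like (the paths are made up for the example):

# mkdir -p /data/upper /data/work /merged
# mount -t overlay overlay -o lowerdir=/rofs,upperdir=/data/upper,workdir=/data/work /merged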

Dependencies

This layer only depends on:

URI: git://git.openembedded.org/bitbake
branch: kirkstone

and

URI: git://git.openembedded.org/openembedded-core
layers: meta
branch: kirkstone

Usage

Adding the readonly-rootfs-overlay layer to your build

In order to use this layer, you need to make the build system aware of it.

Assuming the readonly-rootfs-overlay layer exists at the top level of your OpenEmbedded source tree, you can add it to the build system by adding its location to bblayers.conf, along with any other layers needed, e.g.:

BBLAYERS ?= " \
  /path/to/layers/meta \
  /path/to/layers/meta-poky \
  /path/to/layers/meta-yocto-bsp \
  /path/to/layers/meta-readonly-rootfs-overlay \
  "

To add the script to your image, just add:

IMAGE_INSTALL:append = " initscripts-readonly-rootfs-overlay"

to your local.conf or image recipe. Or use core-image-rorootfs-overlay-initramfs as initrd.

Read-only root filesystem

If you use this layer you do not need to set read-only-rootfs in the IMAGE_FEATURES or EXTRA_IMAGE_FEATURES variable.

Kernel command line parameters

These examples are not meant to be complete. They just contain parameters that are used by the init script of this repository. Some additional parameters might be necessary.

Example using initrd

root=/dev/sda1 rootrw=/dev/sda2

This command line starts /sbin/init with the /dev/sda1 partition as the read-only rootfs and the /dev/sda2 partition as the read-write persistent state.

root=/dev/sda1 rootrw=/dev/sda2 init=/bin/sh

The same as before but it now starts /bin/sh instead of /sbin/init.

Example without initrd

root=/dev/sda1 rootrw=/dev/sda2 init=/init

This command line starts /sbin/init with the /dev/sda1 partition as the read-only rootfs and the /dev/sda2 partition as the read-write persistent state. When using this init script without an initrd, init=/init has to be set.

root=/dev/sda1 rootrw=/dev/sda2 init=/init rootinit=/bin/sh

The same as before, but it now starts /bin/sh instead of /sbin/init.

Details

All kernel parameters that are used to configure meta-readonly-rootfs-overlay:

  • root - specifies the read-only root file system device. If this is not specified, the current rootfs is used.
  • rootfstype - if support for the read-only file system is not built into the kernel, you can specify the required module name here. It will also be used in the mount command.
  • rootoptions - specifies the mount options of the read-only file system. Defaults to noatime,nodiratime.
  • rootinit - if the init parameter was used to specify this init script, rootinit can be used to override the default (/sbin/init).
  • rootrw - specifies the read-write file system device. If this is not specified, tmpfs is used.
  • rootrwfstype - if support for the read-write file system is not built into the kernel, you can specify the required module name here. It will also be used in the mount command.
  • rootrwoptions - specifies the mount options of the read-write file system. Defaults to rw,noatime,mode=755.
  • rootrwreset - set to yes if you want to delete all files in the read-write file system prior to building the overlay root file system.

Embedded Open Source Summit 2023

This year, the Embedded Linux Conference was colocated with the Automotive Linux Summit, Embedded IoT Summit, Safety-Critical Software Summit, LF Energy and Zephyr Summit. The event was held in Prague, Czech Republic this time.

It is the second time I'm at a Linux conference in the Czech Republic, and it is clearly my favorite place for such an event; not only for the cheap beer, but also for the architecture and the culture.

I've collected notes from some of the talks. Mostly for my own good, but here they are:

9 Years in the making, the story of Zephyr [1]

Much has happened since the project started with an announcement at an internal event at Intel in 2014. Two years later it went public and was quickly picked up by the Linux Foundation, and now it is listed as one of the top critical open source projects by Google.

Now, in June 2023, it has made 40 releases and has over a million lines of code. What a trip, huh?

The project has made huge progress, but the road has not been straightforward. Many design decisions have been made and changed over time; not only technical decisions, but in all areas. For example, Zephyr was originally BSD licensed. The current license, Apache 2.0, was not the first choice; the license was changed upon requests from other vendors. I think it is good that no single company has full dominance over the project.

Even the name was up for discussion before it landed on Zephyr. One fun thing is that Zephyr has completely taken over all search results; it is hard to find anything that is not related to the Zephyr project, as it masks out all other hits... oopsie.

Some major transitions and transformations made by the project:

  • The build system was initially a bunch of custom-made Makefiles, which then became Kbuild and finally CMake.
  • The kernel itself moved from a nano/micro kernel model to a unified kernel.
  • Even the review system has changed, from Gerrit to GitHub.

The change from the dual kernel model to a unified kernel was made in 2016. The motivation was that the older model suffered from a few drawbacks:

  • Non-intuitive nature of the nano/micro kernel split
  • Double context switch affecting the performance
  • Duplication of object types for nano and micro
  • System initialization in the idle task

Instead, we ended up with something that:

  • Made the nanokernel 'pre-emptible thread' aware
  • Unified fibers and tasks as one type of threads by dropping the Microkernel server
  • Allowed cooperative threads to operate on all types of objects
  • Clarified duplicated object types
  • Created a new, more streamlined API, without any loss of functionality

Many things point to Zephyr having a healthy ecosystem. If we look at the contributions, we can see that the member/community contributions are strictly increasing every year while the commits by Intel are decreasing.

It shows us that the project is evolving and becoming more and more of a self-sustaining open ecosystem.

System device trees [2]

As the current usage of device trees does not scale well, especially when working with multi-core AMP SoCs, we have to come up with alternatives.

One such alternative is the System Device Tree. It is an extension of the DT specification that is developed in the open. To me it sounded uncomfortable at first glance, but the speaker made it clear that the work is done in close cooperation with the DT specification and the Linux device tree maintainers.

The main problem is that there is one instance of everything, available to all CPUs, which is not suitable for AMP architectures where each core could be of a completely different type. The CPU cores are normally instantiated by one CPU node. One thing that System Device Trees contribute is to change that to independent CPU clusters instead.

Also, in a normal setup, many peripherals are attached to the global simple bus and are shared across cores. The new indirect-bus, on the other hand, which is introduced in the System Device Tree, addresses this problem by mapping the bus to a particular CPU cluster, which makes the peripherals visible to a specific set of cores.

System Device Trees will also introduce independent execution domains, of course also mapped to a specific set of CPU clusters. With this, we can specify which peripherals should be accessible from which application.

But how does it work? The suggestion is to let a tool, sysbuild, postprocess the standard DT structure into several standard device trees, one for each execution domain.

Manifests: Project sanity in the ever-changing Zephyr world [3]

Mike Szczys talked about manifest files and why you should use them in your project.

But first, what is a manifest file?

It is a file that manages the project hierarchy by specifying all repositories by URL, which branch/tag/hash to use and the local path for checkout. The manifest file also supports some more advanced features such as:

  • Inheritance
  • Allow/block lists
  • Grouping
  • West support for validation

The Zephyr tree already uses manifest files to manage versions of modules and libraries, and there is no reason not to use the same method in your application. It lets you keep track of which versions of all modules your application requires in a clear way. Besides, as the manifest file is part of your application repository, it also has a commit history, and all changes to the manifest are trackable and hopefully explained in the commit messages.

The inheritance feature in the manifest file is a powerful tool. It lets you import other manifest files and explicitly allow or exclude parts of them. This lets you reduce the size of your project significantly.

West will handle everything for you. It will parse the manifest file, recursively clone all repositories and update them to a certain commit/tag/branch. It is preferable not to use branches (or even tags) in the manifest files, as those may change; use the hash if possible. Generally speaking, this is the preferred way in any such system (Yocto, Buildroot, ...).

The biggest benefit that I see is that you keep all dependencies separate from your application and that those dependencies are locked to known versions. Zephyr itself is treated as a dependency of your application, not the other way around.

It is easy to draw parallels to the Yocto project. My first impression of Yocto was that it is REALLY hard to maintain, pretty much for the same reason we are talking about here - how do I keep track of every layer in a controllable way? The solution for me was to use kas, which does pretty much exactly the same thing - it creates a manifest file with all layers (read: dependencies) that you can version control.

Zbus [4]

Rodrigo Peixoto, the maintainer and author of the ZBus subsystem, had a talk where he gave us an introduction to what it is.

(Rodrigo is a nice guy. If you see him, throw a snowball at him and say hi from me - he will understand).

Zephyr has support for many IPC mechanisms, such as LIFO, FIFO, stack, message queue, mailbox and pipes. All of those work great for one-to-one communication, but that is not always what we need. Even one-to-many can be tricky with the existing mechanisms that Zephyr provides.

ZBus is an internal bus used in Zephyr for many-to-many communication; besides, such an infrastructure covers all cases (1:1, 1:N, N:M) as a bonus.

I like this kind of infrastructure. It reminds me of D-Bus (and kbus...) but in a simpler manner (and that is a good thing). It allows you to have an event-driven architecture in your application and a unified way to make threads talk and share data. Testability is also a selling point for ZBus: you may easily swap a real sensor for stubbed code and the rest of the system will not notice.

The conference

/media/myself-embedded-open-source-summit.jpg

(I got caught in a picture. I don't know which talk it was, but it seems like I enjoyed it.)

Route priorities - metric values

Brief

It is not an uncommon scenario that a Linux system has several network interfaces that are all up and routable. For example, consider a laptop with both Ethernet and WiFi.

But how does the system determine which route to use when trying to reach another host?

I was about to set up a system with both a 4G modem and a WiFi connection. My use case was that when WiFi is available, that interface should be prioritized over 4G. This is achieved by adjusting the route metric values for those interfaces.

/media/route-metric.png

Metric values

The metric value is one of many fields in the routing table and indicates the cost of the route. This becomes useful if multiple routes exist to a given destination and the system has to decide which route to use. With that said, the lower metric value (lower cost) a route has, the higher priority it gets.

It is up to you or your network manager to set proper metric values for your routes. The actual value could be determined based on several different factors, depending on what is important for your setup. E.g.:

  • Hop count - the number of hops (routers) in the path to reach a certain network. This is a common metric.
  • Delay - some interfaces have higher delays than others; compare a 4G modem with a fiber connection.
  • Throughput - the expected throughput of the route.
  • Reliability - if some links are more prone to link failures than others, prefer the other interfaces.

The ip route command will show you all routes that your system currently has; the last number in the output is the metric value:

$ ip route
default via 192.168.20.1 dev enp0s13f0u1u4 proto dhcp src 192.168.20.173 metric 100
default via 192.168.20.1 dev wlp0s20f3 proto dhcp src 192.168.20.197 metric 600

I have two default routes that are both routed via 192.168.20.1.

As you can see, my wlp0s20f3 (wireless) interface has a higher metric value than my enp0s13f0u1u4 (Ethernet) interface, which will cause the system to choose the Ethernet interface over WiFi. In my case, these values are chosen by NetworkManager.

Set metric value

If you want to set specific metric values for your routes, the way to do it differs depending on how your routes are created.

iproute2

The ip command could be handy to manually create or change the metric value for a certain route:

$ ip route replace default via {IP} dev {DEVICE} metric {METRIC}

ifmetric

ifmetric is a tool for setting the metric value for IPv4 routes attached to a given network interface. Compared to the raw ip command above, ifmetric works on interfaces rather than routes.

$ ifmetric INTERFACE [METRIC]

dhcpcd

Metric values can be set in /etc/dhcpcd.conf according to the manual [1]:

metric metric
Metrics are used to prefer an interface over another one, lowest wins.

e.g.:

interface wlan0
metric 200

If no metric value is given, the default metric is calculated as 200 + if_nametoindex(3). An extra 100 will be added for wireless interfaces.
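For example, a wireless interface with interface index 3 would get the default metric 200 + 3 + 100 = 303.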

NetworkManager

Add route-metric=METRIC to the [ipv4] section of your /etc/NetworkManager/system-connections/<connection>.nmconnection file (this corresponds to the ipv4.route-metric property in nmcli).

You could also use the command line tool:

    $ nmcli connection edit tuxnet

    ===| nmcli interactive connection editor | ===

    Editing existing '802-11-wireless' connection: 'tuxnet'

    Type 'help' or '?' for available commands.
    Type 'print' to show all the connection properties.
    Type 'describe [<setting>.<prop>]' for detailed property description.

    You may edit the following settings: connection, 802-11-wireless (wifi), 802-11-wireless-security (wifi-sec), 802-1x, ethtool, match, ipv4, ipv6, tc, proxy
    nmcli> set ipv4.route-metric 600
    nmcli> save
    nmcli> quit

PPPD

PPP is a protocol used for establishing internet links over dial-up modems. These links are usually not the preferred link when the device has other more reliable and/or cheaper connections.

The pppd daemon has a few options, as specified in the manual [2], for creating a default route and setting its metric value:

defaultroute
       Add a default route to the system routing tables, using
       the peer as the gateway, when IPCP negotiation is
       successfully completed.  This entry is removed when the
       PPP connection is broken.  This option is privileged if
       the nodefaultroute option has been specified.

defaultroute-metric
       Define the metric of the defaultroute and only add it if
       there is no other default route with the same metric.
       With the default value of -1, the route is only added if
       there is no default route at all.

replacedefaultroute
       This option is a flag to the defaultroute option. If
       defaultroute is set and this flag is also set, pppd
       replaces an existing default route with the new default
       route.  This option is privileged.

E.g.

replacedefaultroute
defaultroute-metric 900

Summary

It is not that often you actually have to set the metric value yourself. The network manager usually does a great job.

In my system, NetworkManager did not manage the PPP interface, so its metric logic did not apply to that interface. Therefore I had to let pppd create a default route with a fixed metric.

Lund Linux Conference 2023

Lund Linux Conference 2023

The conference

Lund Linux Conference (LLC) [1] is a "half-open" conference located in Lund. It is a conference of high quality and I appreciate that the atmosphere is more familiar than at the larger conferences. I've been at the conference a couple of times before and the quality of the talks this year was as good as usual. (The talks are by the way available on Youtube [3].)

We are growing though. Axis generously assists with premises, but it remains to be seen whether we will have a place next year.

Anyway, I took some notes as usual, and this blog post is nothing more than the notes I took during the talks.

The RISC-V Linux port; past/current/next

Björn Töpel talked about the current status of the RISC-V architecture in the Linux kernel.

For those who don't know - RISC-V is an open and royalty-free Instruction Set Architecture. In practice, this means for example that whenever you want to implement your own CPU core in your FPGA, you are free to do so using the RISC-V ISA. Compare that to ARM, where you are strictly not allowed to even think about it without paying royalties and other fees.

RISC-V is a rather new port; the first proposal was sent out to the mailing list in 2016. That makes it a pretty good target to get involved in if you want to get to know the kernel in depth, as the implementation is still quite small in lines of code, which makes it easier to get an overview.

Björn told us that kernel support for RISC-V has made huge progress in the embedded area, but still lacks some important functionality to be useful on the server side. Parts that are missing are e.g. support for ACPI, UEFI, AP-TEE, hotplug and an advanced interrupt controller.

The architecture gets more support with each kernel release though. Some of the news for RISC-V in Linux v6.4 are:

  • Support for Kernel Address Space Layout Randomization (KASLR)
  • Relocatable kernel
  • HWprobe syscall

Vector support is on its way, but it currently breaks the ABI, so there are a few things left that need to be addressed before we can expect any merge.

One giant leap for security: leveraging capabilities in Linux

Kevin Brodsky talked about self-aware pointers, which I found interesting. That we can use address bits for other purposes than addresses is nothing new. On a 64-bit ARM kernel we often use only 52 bits anyway (4PiB of addressable memory is more than enough for all people (pun intended)).

What Kevin and his team have done is to extend the address to 129 bits to also include metadata for boundaries, capabilities and validity tags. The 129-bit reservation has of course a huge impact on the system as it uses more than double the size compared to a normal 64-bit system, but it also gives us much in return.

These 129 bits are by the way already a compressed version of the 256-bit variant they started with..

Unfortunately, the implementation is for userspace only, which is a little sad because we already have tons of tools to run applications in a protected and constrained environment, but it proves that the concept works and maybe we will see something like this for kernel space in the future.

The implementation requires changes in several parts of the system. The memory allocator and unwind code are most affected, but even the kernel itself and glibc have to be modified. Most applications and libraries are not affected at all though.

There is a working meta-layer for Yocto called Morello that can be used to test it out. It contains a usage guide and even a little tutorial on how to build and run Doom :-)

Supporting zoned storage in Ublk

Andreas Hindborg has been working on support for zoned storage [2] in the ublk driver. Zoned storage is basically about splitting the address space into regions called zones that can only be written sequentially. This leads to higher throughput and increased capacity. It also eliminates the need for a Flash Translation Layer (FTL) in e.g. SSD devices.

ublk makes use of io_uring internally, which by the way is a cool feature. io_uring lets you queue system calls into a ring buffer, which makes it possible to do more work every time you enter kernel space. This has an impact on performance as you do not need to context switch back and forth to userspace between each system call.

It is quite easy to add support for io_uring operations to normal character devices, as struct file_operations now has a uring_cmd callback function that can be populated. This makes it a high-performance alternative to the ioctls we are used to.
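As a rough sketch (the driver name and opcode below are made up, this is not from Andreas' code), wiring up such a handler could look something like this:

#include <linux/fs.h>
#include <linux/module.h>
#include <linux/io_uring.h>

/* hypothetical driver-private command opcode */
#define MYDEV_CMD_DO_WORK	0x01

static int mydev_uring_cmd(struct io_uring_cmd *cmd, unsigned int issue_flags)
{
	switch (cmd->cmd_op) {
	case MYDEV_CMD_DO_WORK:
		/* do what the equivalent ioctl would have done,
		 * but issued through the io_uring ring */
		return 0;
	default:
		return -ENOTTY;
	}
}

static const struct file_operations mydev_fops = {
	.owner		= THIS_MODULE,
	.uring_cmd	= mydev_uring_cmd,
};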

ublk is used to create a block device driver in userspace. It works by redirecting all requests and results to/from the block device to a userspace daemon. The userspace daemon used for this is called ublk-rs, which is entirely written in Rust (of course..). Unfortunately, the source code is not yet available due to legal reasons, but it is on its way.


Rust

Then there were a couple of talks about the most hip programming language right now: Rust.

Linus Walleij gave us a history lecture in programming languages in his talk "Rust: Abstraction and Productivity" and his thoughts about why Rust could be something good for the kernel. Andreas Hindborg continued and showed how he implemented a null_blk driver completely in Rust.

But why should we even consider Rust for the kernel? In fact, the language is guaranteed to have a few properties C does not, and basic Rust support was introduced in Linux v6.1.

We say that Rust is safe, and when we state that, we mean that Rust has:

  • No buffer overflows
  • No use after free
  • No dereferencing null or invalid pointers
  • No double free
  • No pointer aliasing
  • No type errors
  • No data races
  • ... and more

What was new to me is that a Rust application does not even compile if you try any of the above.

This together makes Rust memory safe, type safe and thread safe. Consider that 20-60% of the bug fixes in the kernel are for memory safety bugs. These memory bugs take a lot of productivity away, as it often takes a long time to find and fix them. Maybe Rust is not that bad after all.

Many cool projects are going on in Rust, examples of those are:

  • TLS handshake in the kernel
  • Ethernet-SPI drivers
  • M1&M3 GPU drivers.

The goal of Andreas' null_blk driver is to first write a solid Rust API for the blk-mq implementation and then use it in the null_blk driver to provide a reference implementation for Linux kernel developers to get started with.

Summary

This was far from all the talks, only those that I had taken some meaningful notes from.

Hope to see you there next year!

/media/lund-linuxcon-2018.jpg

Write a device driver for Zephyr - Part 1

Write a device driver for Zephyr - Part 1

This is the first post in this series. See also part2, part3 and part4.

Overview

The first time I came across Zephyr [1] was at Embedded Linux Conference in 2016. Once back from the conference I tried to install it on a Cortex-M EVK board I had on my desk. It did not go smoothly at all. The documentation was not very good back then and I don't think I ever got the system up and running. That's where I left it.

Now, seven years later, I'm going to give it another try. A friend of mine, Benjamin Börjesson, who is an active contributor to the project, has inspired me to test it out once again.

So I took whatever I could find at home that could be used for an evaluation. What I found was:

  • A Raspberry Pi Pico [2] to run Zephyr on
  • A Segger J-Link [3] for programming and debugging
  • A Digital-To-Analogue-Converter IC (ltc1665 [4]) that the Zephyr project did not support

Great! Our goal will be to write a driver for the DAC, test it out and contribute to the Zephyr project.

/media/zephyr-logo.png

Zephyr

First a few words about Zephyr itself. Zephyr is a small Real-Time Operating System (RTOS) which became a hosted collaborative project for the Linux Foundation in 2016.

Zephyr targets small and cheap MCUs with constrained resources rather than the bigger SoCs that usually run Linux. It supports a wide range of architectures and has an extensive suite of kernel services that you can use in your application.

It offers a kernel with a small footprint and a flexible configuration and build system. Every Linux kernel hacker will recognize themselves in the filesystem structure, Kconfig and devicetrees - which felt good to me.

To me, it feels like a more modern and fresh alternative to FreeRTOS [5] which I am quite familiar with already.

Besides, FreeRTOS uses the Hungarian notation [6], and just avoiding that is actually reason enough for me to choose Zephyr over FreeRTOS. I fully agree with the Linux kernel documentation [7]:

Encoding the type of a function into the name (so-called Hungarian notation) is asinine - the compiler knows the types anyway and can check those, and it only confuses the programmer.

Even if I personally prefer the older version (before our Code-of-Conduct) [8]:

Encoding the type of a function into the name (so-called Hungarian notation) is brain damaged - the compiler knows the types anyway and can check those, and it only confuses the programmer. No wonder MicroSoft makes buggy programs.

Hardware setup

No fancy hardware setup. I soldered the LTC1665 chip onto a break-out board and connected everything with jumper cables. The electrical interface for the LTC1665 is SPI.

/media/rpi-ltc1665.jpg

The connection between the Raspberry Pi Pico and the J-Link:

Pin RP Pico        Pin J-Link    Signal
"DEBUG SWCLK"      9             SWCLK
"DEBUG GND"        4             GND
"3V3" Pad 36       1             VTref

The connection between Raspberry Pi Pico and LTC1665:

Pin RP Pico          LTC1665         Signal
"SPI0_RX" Pad 16     DIN Pin 9       SPI_RX
"SPI0_CSN" Pad 17    CS Pin 7        SPI_CS
"SPI0_SCK" Pad 18    SCK Pin 8       SPI_SCK
"SPI0_TX" Pad 19     DOUT Pin 10     SPI_TX

Software setup

Install Zephyr

Zephyr uses west [10] for pretty much everything. West is a meta-tool used for repository management, building, debugging, deploying.. you name it. It has many similarities with bitbake that you will find in Yocto. I'm more of a "do one thing and do it well" guy, so none of these tools (neither west nor bitbake) makes a huge impression on me.

West is written in Python, and as the nature of Python is what it is, you have to create a virtual environment to make sure that your setup will keep working for more than a week. Otherwise you will end up with incompatibilities as soon as you upgrade some of the Python dependencies.

The documentation [9] is actually really good nowadays. Most of these commands are just copy&paste from there.

Create a new virtual environment:

python -m venv ~/zephyrproject/.venv

Activate the virtual environment:

source ~/zephyrproject/.venv/bin/activate

Install west:

pip install west

Get the Zephyr source code:

west init ~/zephyrproject
cd ~/zephyrproject
west update

Export a Zephyr CMake package to allow CMake to automatically load boilerplate code required for building Zephyr applications:

west zephyr-export

The Zephyr project contains a file with additional Python dependencies; install them:

pip install -r ~/zephyrproject/zephyr/scripts/requirements.txt

Install Zephyr SDK

The Zephyr Software Development Kit (SDK) contains toolchains for all architectures that are supported by Zephyr.

Download the latest SDK bundle:

cd ~
wget https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.0/zephyr-sdk-0.16.0_linux-x86_64.tar.xz
wget -O - https://github.com/zephyrproject-rtos/sdk-ng/releases/download/v0.16.0/sha256.sum | shasum --check --ignore-missing

Extract the archive:

tar xvf zephyr-sdk-0.16.0_linux-x86_64.tar.xz

Run the setup script:

cd zephyr-sdk-0.16.0
./setup.sh

Build OpenOCD

The Raspberry Pi Pico has an SWD interface that can be used to program and debug the on-board RP2040 MCU.

This interface can be utilized by OpenOCD. Support for RP2040 is not mainlined though, so we have to go for the Raspberry Pi fork [11].

Clone repository:

git clone https://github.com/raspberrypi/openocd.git
cd openocd

Build:

./bootstrap
./configure
make

And install:

make install

Build sample application

The Raspberry Pi Pico has an LED on board. So blinky, an application that flashes the LED at 1Hz, is a good test to prove that at least something is alive. Build it:

cd ~/zephyrproject/zephyr
west build -b rpi_pico samples/basic/blinky -- -DOPENOCD=/usr/local/bin/openocd -DOPENOCD_DEFAULT_PATH=/usr/local/share/openocd/scripts -DRPI_PICO_DEBUG_ADAPTER=jlink

Note that we set the board (-b) to rpi_pico.

OPENOCD and OPENOCD_DEFAULT_PATH should point to where OpenOCD was installed in the previous step.

Flash the application

To flash our Raspberry Pi Pico, we just run:

west flash

As we have set RPI_PICO_DEBUG_ADAPTER during the build stage, it is cached and can be omitted from the west flash and west debug commands. Otherwise we would have to provide the --runner option. E.g.:

west flash --runner jlink

You don't have to use a J-Link to flash the Raspberry Pi Pico; you can also copy the UF2 file to the target. If you power up the Pico with the BOOTSEL button pressed, it will appear on the host as a mass storage device to which you can simply copy the UF2 file. You lose the possibility to debug with GDB though.
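E.g. (the mount point is an example and will differ between systems; the UF2 file ends up in the build directory):

$ cp build/zephyr/zephyr.uf2 /run/media/$USER/RPI-RP2/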

Debug the application

The most straightforward way is to use west to start a GDB session (--runner is still cached from the build stage):

west debug

I prefer to use the Text User Interface (TUI) as it is easier to follow the code, both in C and assembler. Enter TUI mode by pressing CTRL+X+A or by entering "tui enable" on the command line.

If you do not want to use west, you can start openocd yourself:

openocd -f interface/jlink.cfg -c 'transport select swd' -f target/rp2040.cfg -c "adapter speed 2000" -c 'targets rp2040.core0'

And manually connect with GDB:

gdb-multiarch -tui
(gdb) target extended-remote :3333
(gdb) file ./build/zephyr/zephyr.elf

The result is the same.

/media/zephyr-gdb.png

Summary

Both the hardware and the software environment are now ready for some real work. In part2 we will focus on how to integrate the driver into the Zephyr project.

Write a device driver for Zephyr - Part 2

Write a device driver for Zephyr - Part 2

This is the second post in this series. See also part1, part3 and part4.

Overview

In part1 of this series, we set up the hardware and prepared the software environment. In this part we will focus on pretty much everything except writing the actual driver implementation. We will touch multiple areas in order to fully integrate the driver into the Zephyr project. This includes:

  • Devicetrees
  • The driver
  • KConfig
  • Unit tests

Let's introduce each one of those before we start.

Devicetrees

A Devicetree [2] is a data structure that describes the static hardware configuration in a standard manner. One of the motivations behind devicetree is that it should not be specific to any kernel. In the best of worlds, you should be able to boot a Linux kernel, a BSD kernel or Zephyr (well..) with the same devicetree. I've never heard of a working example IRL though, but the idea is good.

In the same way, you should be able to boot the same kernel on different boards by only swapping the devicetree. In Zephyr, the devicetree is integrated into the binary, so this idea does not fully apply to Zephyr though.

There are two types of files related to device trees in Zephyr:

  • Devicetree sources - the devicetree itself (including dts, interface files and overlays).
  • Devicetree bindings - descriptions of its content, e.g. data types and which properties are required or optional.

Zephyr makes use of both of these types of files during the build process. This allows the build process to do a build-time validation of the devicetree sources against the bindings, and to generate Kconfig macros and a whole bunch of other macros that are used by the application and by Zephyr itself. We will see examples of these macros later on.
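As a first taste, here is a sketch of how application code can consume some of the generated macros (the dac0 node label is hypothetical):

#include <zephyr/device.h>
#include <zephyr/devicetree.h>

/* resolve a node by its (hypothetical) devicetree label */
#define DAC_NODE DT_NODELABEL(dac0)

/* a property value taken straight from the devicetree source */
#define DAC_SPI_FREQ DT_PROP(DAC_NODE, spi_max_frequency)

/* the struct device generated for the node, resolved at build time */
static const struct device *dac_dev = DEVICE_DT_GET(DAC_NODE);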

Here is a simplified picture of the build process with respect to devicetrees:

/media/zephyr-devicetree.png

Driver

All drivers are located in the ./drivers directory. These are C files that contain the actual implementation of the drivers.

KConfig

Like the Linux kernel (and U-Boot, Busybox, Barebox, Buildroot...), Zephyr uses the Kconfig system to select which subsystems, libraries and drivers to include in the build.

Remember when we built the blinky application in part1? We provided -b rpi_pico to the build command to specify the board:

west build -b rpi_pico ....

This will load ./boards/arm/rpi_pico/rpi_pico_defconfig as the default configuration and store it into ./build/zephyr/.config, which is the actual configuration the build system will use.

The .config file contains all configuration options selected by e.g. menuconfig AND the configuration options generated from the devicetree.

Unit tests

Zephyr makes use of Twister [1] for unit tests. By default it will build the majority of all tests on a defined set of boards. All these tests are part of the automatic test procedure for every pull request.

Let's start!

First we have to create a few files and integrate them into the build system. The directory hierarchy is similar to the Linux kernel's; luckily for me, it was quite obvious where to put things.

Driver

Create an empty file for now:

touch drivers/dac/dac_ltc166x.c

The driver will support both the ltc1660 (10-bit, 8 channels) and the ltc1665 (8-bit, 8 channels) DAC. I prefer not to name drivers with an x, as there actually are chips out there with an x in their name, so it could be a little misleading. That is at least something we try to avoid in the Linux kernel.

A better name would be just dac_ltc1660.c, supporting all ICs that are compatible with the ltc1660. However, the Zephyr project has chosen to make use of the x in names to indicate that multiple chips are supported. When in Rome, do as the Romans do.

Add the file to the CMake build system:

diff --git a/drivers/dac/CMakeLists.txt b/drivers/dac/CMakeLists.txt
index b0e86e3bd4..800bc895fd 100644
--- a/drivers/dac/CMakeLists.txt
+++ b/drivers/dac/CMakeLists.txt
@@ -9,6 +9,7 @@ zephyr_library_sources_ifdef(CONFIG_DAC_SAM             dac_sam.c)
 zephyr_library_sources_ifdef(CONFIG_DAC_SAM0           dac_sam0.c)
 zephyr_library_sources_ifdef(CONFIG_DAC_DACX0508       dac_dacx0508.c)
 zephyr_library_sources_ifdef(CONFIG_DAC_DACX3608       dac_dacx3608.c)
+zephyr_library_sources_ifdef(CONFIG_DAC_LTC166X     dac_ltc166x.c)
 zephyr_library_sources_ifdef(CONFIG_DAC_SHELL          dac_shell.c)
 zephyr_library_sources_ifdef(CONFIG_DAC_MCP4725                dac_mcp4725.c)
 zephyr_library_sources_ifdef(CONFIG_DAC_MCP4728                dac_mcp4728.c)

CONFIG_DAC_LTC166X comes from the Kconfig system and can be either 'y' or 'n' depending on whether it is selected or not.

Kconfig

Create two new Kconfig configuration options. One for the driver itself and one for its init priority:

diff --git a/drivers/dac/Kconfig.ltc166x b/drivers/dac/Kconfig.ltc166x
new file mode 100644
index 0000000000..6053bc39bf
--- /dev/null
+++ b/drivers/dac/Kconfig.ltc166x
@@ -0,0 +1,22 @@
+# DAC configuration options
+
+# Copyright (C) 2023 Marcus Folkesson <marcus.folkesson@gmail.com>
+#
+# SPDX-License-Identifier: Apache-2.0
+
+config DAC_LTC166X
+       bool "Linear Technology LTC166X DAC"
+       default y
+       select SPI
+       depends on DT_HAS_LLTC_LTC1660_ENABLED || \
+               DT_HAS_LLTC_LTC1665_ENABLED
+       help
+         Enable the driver for the Linear Technology LTC166X DAC
+
+if DAC_LTC166X
+
+config DAC_LTC166X_INIT_PRIORITY
+       int "Init priority"
+       default 80
+       help
+         Linear Technology LTC166X DAC device driver initialization priority.
+
+endif # DAC_LTC166X

DT_HAS_LLTC_LTC1660_ENABLED and DT_HAS_LLTC_LTC1665_ENABLED are configuration options generated from the selected devicetree. By depending on them, the DAC_LTC166X option will only show up if such a node is specified. I really like this feature.
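If everything is wired up correctly, something like this (illustrative values) should end up in ./build/zephyr/.config when a matching node exists in the devicetree:

CONFIG_DT_HAS_LLTC_LTC1665_ENABLED=y
CONFIG_DAC_LTC166X=y
CONFIG_DAC_LTC166X_INIT_PRIORITY=80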

Also add it to the build structure:

diff --git a/drivers/dac/Kconfig b/drivers/dac/Kconfig
index 7b54572146..77b0db902b 100644
--- a/drivers/dac/Kconfig
+++ b/drivers/dac/Kconfig
@@ -42,6 +42,8 @@ source "drivers/dac/Kconfig.dacx0508"

 source "drivers/dac/Kconfig.dacx3608"

+source "drivers/dac/Kconfig.ltc166x"
+
 source "drivers/dac/Kconfig.mcp4725"

 source "drivers/dac/Kconfig.mcp4728"

Device tree

The bindings for all devices have to be described in YAML format. These bindings are verified at compile time in order to make sure that the devicetree node fulfills all required properties and does not try to invent new ones. This protects us against typos, which is also a really good feature. The Linux kernel does not enforce this at build time...

We have to create such a binding, one for each chip:

diff --git a/dts/bindings/dac/lltc,ltc1660.yaml b/dts/bindings/dac/lltc,ltc1660.yaml
new file mode 100644
index 0000000000..196204236a
--- /dev/null
+++ b/dts/bindings/dac/lltc,ltc1660.yaml
@@ -0,0 +1,8 @@
+# Copyright (C) 2023 Marcus Folkesson <marcus.folkesson@gmail.com>
+# SPDX-License-Identifier: Apache-2.0
+
+include: [dac-controller.yaml, spi-device.yaml]
+
+description: Linear Technology Micropower octal 10-Bit DAC
+
+compatible: "lltc,ltc1660"
diff --git a/dts/bindings/dac/lltc,ltc1665.yaml b/dts/bindings/dac/lltc,ltc1665.yaml
new file mode 100644
index 0000000000..2c789ecc56
--- /dev/null
+++ b/dts/bindings/dac/lltc,ltc1665.yaml
@@ -0,0 +1,8 @@
+# Copyright (C) 2023 Marcus Folkesson <marcus.folkesson@gmail.com>
+# SPDX-License-Identifier: Apache-2.0
+
+include: [dac-controller.yaml, spi-device.yaml]
+
+description: Linear Technology Micropower octal 8-Bit DAC
+
+compatible: "lltc,ltc1665"

dac-controller.yaml and spi-device.yaml are included to inherit some of the required properties (such as spi-max-frequency) for this type of device.

Unit tests

Add the driver to the test framework and allow the test to be executed on the native_posix platform:

diff --git a/tests/drivers/build_all/dac/testcase.yaml b/tests/drivers/build_all/dac/testcase.yaml
index fa2eb5ac7a..1c7fa521d0 100644
--- a/tests/drivers/build_all/dac/testcase.yaml
+++ b/tests/drivers/build_all/dac/testcase.yaml
@@ -5,7 +5,7 @@ tests:
   drivers.dac.build:
     # will cover I2C, SPI based drivers
     platform_allow: native_posix
-    tags: dac_dacx0508 dac_dacx3608 dac_mcp4725 dac_mcp4728
+    tags: dac_dacx0508 dac_dacx3608 dac_mcp4725 dac_mcp4728 dac_ltc1660 dac_ltc1665
     extra_args: "CONFIG_GPIO=y"
   drivers.dac.mcux.build:
     platform_allow: frdm_k22f

Also add nodes in app.overlay to make it possible for the unit tests to instantiate the DAC:

diff --git a/tests/drivers/build_all/dac/app.overlay b/tests/drivers/build_all/dac/app.overlay
index 471bfae6e8..c1e9146974 100644
--- a/tests/drivers/build_all/dac/app.overlay
+++ b/tests/drivers/build_all/dac/app.overlay
@@ -68,6 +68,8 @@

                        /* one entry for every devices at spi.dtsi */
                        cs-gpios = <&test_gpio 0 0>,
+                                  <&test_gpio 0 0>,
+                                  <&test_gpio 0 0>,
                                   <&test_gpio 0 0>,
                                   <&test_gpio 0 0>;

@@ -118,6 +120,20 @@
                                channel6-gain = <0>;
                                channel7-gain = <0>;
                        };
+
+                       test_spi_ltc1660: ltc1660@3 {
+                               compatible = "lltc,ltc1660";
+                               reg = <0x3>;
+                               spi-max-frequency = <0>;
+                               #io-channel-cells = <1>;
+                       };
+
+                       test_spi_ltc1665: ltc1665@4 {
+                               compatible = "lltc,ltc1665";
+                               reg = <0x4>;
+                               spi-max-frequency = <0>;
+                               #io-channel-cells = <1>;
+                       };
                };
        };
 };

Summary

There is some work that needs to be done to integrate a driver into the Zephyr project, and it has to be done for every driver.

In part3 we will start writing the driver code.