
Confessions of a Recovering Proprietary Programmer, Part XVIII: Preventing Involuntary Generosity

I recently learned that all that is required for someone to take out a loan in some random USA citizen's name is that citizen's full name, postal address, email address, date of birth, and social security number. If you are above a certain age, all of these are for all intents and purposes a matter of public record. If you are younger, then your social security number is of course supposed to be secret—and it will be, right up to that data breach that makes it available to all the wrong people.

This sort of thing can of course be a bit annoying to our involuntarily generous USA citizen. Some unknown person out there gets a fancy toy, and our citizen gets some bank's dunning notices. Fortunately, there are quite a few things you can do, although I will not try to reproduce the entirety of the volumes of good advice that are available out there. Especially given that laws, processes, and procedures are all subject to change.

But at present, one important way to prevent this is to put a hold on your credit information with each of Experian, Equifax, and TransUnion. I strongly suggest that you save yourself considerable time and hassle by doing this, which is free of charge for a no-questions-asked one-year hold. Taking this step is especially important if you are among the all too many of us whose finances don't have much slack, as was the case with my family back when my children were small. After all, it is one thing to have to deal with a few hassles in the form of phone calls, email, and paperwork, but it is quite another if you and your loved ones end up missing meals. Thankfully, it never came to that for my family, although one of my children did complain bitterly to a medical professional about the woefully insufficient stores of candy in our house.

Of course, I also have some advice for the vendor, retailer, digital-finance company, and bank that were involved in my case of attempted involuntary generosity:

  1. Put just a little more effort into determining who you are really doing business with.
  2. If the toy contains a computer and connects to the internet, consider the option of directing your dunning notices through the toy rather than to the email and phone of your involuntarily generous USA citizen.
  3. A loan application for a toy that is shipped to a non-residential address should be viewed with great suspicion.
  4. In fact, such a loan application should be viewed with some suspicion even if the addresses match. Porch pirates and all that.
  5. If the toy is of a type that must connect to the internet to do anything useful, you have an easy method of dealing with non-payment, don't you?

I should hasten to add that after only a little discussion, these companies have thus far proven quite cooperative in my particular case, which is why they are thus far going nameless.

Longer term, it is hard to be optimistic, especially given advances in various easy-to-abuse areas of information technology. In the meantime, I respectfully suggest that you learn from my experience and put a hold on your credit information!

Stupid RCU Tricks: Which tests do I run???

The rcutorture test suite has quite a few options, including locktorture, rcuscale, refscale, and scftorture in addition to rcutorture itself. These tests can be run with the assistance of either KASAN or KCSAN. Given that RCU contains kernel modules, there is the occasional need for an allmodconfig build. Testing of kvfree_rcu() is currently a special case of rcuscale. Some care is required to adapt some of the tests to the test system, for example, based on the number of available CPUs. Both rcuscale and refscale have varying numbers of primitives that they test, so how to keep up with the inevitable additions and deletions? How much time should be devoted to each of locktorture, scftorture, and rcutorture, which, in contrast with rcuscale and refscale, do not have natural accuracy-driven durations? And finally, if you do run all of these things, you end up with about 100 gigabytes of test artifacts scattered across more than 50 date-stamped directories in tools/testing/selftests/rcutorture/res.

Back in the old days, I kept mental track of the -rcu tree and ran the tests appropriate to whatever was queued there. This strategy broke down in late 2020 due to family health issues (everyone is now fine, thank you!), resulting in a couple of embarrassing escapes. Some additional automation was clearly required.

This automation took the form of a new torture.sh script. This is not intended to be the main testing mechanism, but instead an overnight touch-test of the full rcutorture suite that is run occasionally, for example, just after accepting a large patch series or just before sending a pull request.

By default, torture.sh runs everything both with and without KASAN, and with a 10-minute “duration base”. The translation from “duration base” into wall-clock time is a bit indirect. The fewer CPUs you have, the more tests you run, and the longer it takes your system to build a kernel, the more wall-clock time that “10 minutes” will turn into. On my 16-hardware-thread laptop, running everything (including the non-default KCSAN runs) turns that 10-minute duration base into about 11 hours. Increasing the duration base by five minutes increases the total wall-clock time by about 100 minutes.

This is therefore not a test to be integrated into a per-commit CI system, however, manually selecting specific tests for the most recent RCU-related commit is far easier than keeping the entire -rcu stack in one's head. And torture.sh assists with this by providing sets of --configs- and --do- parameters.

The --configs- parameters are as follows:

  1. --configs-rcutorture.
  2. --configs-locktorture.
  3. --configs-scftorture.
These arguments are passed to the --configs argument of kvm.sh for the --torture rcu, --torture lock, and --torture scf cases, respectively. By default, --configs CFLIST is passed. You may accumulate a long list via multiple --configs- arguments, or you can just as easily pass a long quoted list of scenarios through a single --configs- argument.
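For example, the two hypothetical invocations below pass the same scenario list (TREE03, TREE04, and TREE07 are choices from rcutorture's stock scenarios). The "run" helper echoes each command line rather than executing it, so this sketch can be inspected outside of a kernel tree:

```shell
#!/bin/sh
# Sketch only: "run" echoes each command line instead of executing it.
run() { echo "$@"; }
bin=tools/testing/selftests/rcutorture/bin

# Scenarios accumulate across multiple --configs- arguments...
cmd1=$(run "$bin/torture.sh" --configs-rcutorture TREE03 \
           --configs-rcutorture "TREE04 TREE07")

# ...or may be passed as one quoted list, to the same effect:
cmd2=$(run "$bin/torture.sh" --configs-rcutorture "TREE03 TREE04 TREE07")

echo "$cmd1"
echo "$cmd2"
```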

The --do- parameters are as follows:

  1. --do-all, which enables everything, including non-default options such as KCSAN.
  2. --do-allmodconfig, which does a single allmodconfig kernel build without running anything, and without either KASAN or KCSAN.
  3. --do-clocksourcewd, which does a short test of the clocksource watchdog, verifying that it can tell the difference between delay-based skew and clock-based skew.
  4. --do-kasan, which enables KASAN on everything except --do-allmodconfig.
  5. --do-kcsan, which enables KCSAN on everything except --do-allmodconfig.
  6. --do-kvfree, which runs a special rcuscale test of the kvfree_rcu() primitive.
  7. --do-locktorture, which enables a set of locktorture runs.
  8. --do-none, which disables everything. Yes, you can give a long series of --do-all and --do-none arguments if you really want to, but the usual approach is to follow --do-none with the lists of tests you want to enable, for example, --do-none --do-clocksourcewd will test only the clocksource watchdog, and do so in but a few minutes.
  9. --do-rcuscale, which enables rcuscale update-side performance tests, adapted to the number of CPUs on your system.
  10. --do-rcutorture, which enables rcutorture stress tests.
  11. --do-refscale, which enables refscale read-side performance tests, adapted to the number of CPUs on your system.
  12. --do-scftorture, which enables scftorture stress tests for smp_call_function() and friends, adapted to the number of CPUs on your system.
Each of these --do- parameters has a corresponding --do-no- parameter, with the exception of --do-all and --do-none, each of which is the other's --do-no- parameter. This allows all-but runs, for example, --do-all --do-no-rcutorture would run everything (even KCSAN), but none of the rcutorture runs.

As of early 2021, KCSAN is still a bit picky about compiler versions, so the --kcsan-kmake-arg allows you to specify arguments to the --kmake-arg argument to kvm.sh. For example, right now, I use --kcsan-kmake-arg "CC=clang-11".

As noted earlier, both rcuscale and refscale can have tests added and removed over time. The torture.sh script deals with this by doing a grep through the rcuscale.c and refscale.c source code, respectively, and running all of the tests that it finds.

The --duration argument specifies the duration base, which, as noted earlier, defaults to 10 minutes. This duration base is apportioned across the kvm.sh script's --duration parameter, with 70% for rcutorture, 10% for locktorture, and 20% for scftorture. So if you specify --duration 20 to torture.sh, the rcutorture kvm.sh runs will specify --duration 14, the locktorture kvm.sh runs will specify --duration 2, and the scftorture kvm.sh runs will specify --duration 4.
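This apportionment can be sketched in shell arithmetic (the 70%/10%/20% split is from the description above; torture.sh's own rounding may differ in detail):

```shell
#!/bin/sh
# Apportion torture.sh's duration base across the three stress tests,
# using the 70%/10%/20% split described above. The integer arithmetic
# here is illustrative; the script's own rounding may differ.
duration=20  # duration base in minutes, as in "--duration 20"
rcutorture_dur=$(( duration * 70 / 100 ))
locktorture_dur=$(( duration * 10 / 100 ))
scftorture_dur=$(( duration * 20 / 100 ))
echo "rcutorture=$rcutorture_dur locktorture=$locktorture_dur scftorture=$scftorture_dur"
```

For a duration base of 20 minutes, this prints rcutorture=14, locktorture=2, and scftorture=4, matching the example above.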

The 100GB full run is addressed at least partially by compressing KASAN vmlinux files, which gains roughly a factor of two overall, courtesy of the 1GB size of each such file. Normally, torture.sh uses all available CPUs to do the compression, but you can restrict it using the --compress-kasan-vmlinux parameter. At the extreme, --compress-kasan-vmlinux 0 will disable compression entirely, which can be an attractive option given that compressing takes about an hour of wall-clock time on my 16-CPU laptop.
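The compression step might look something like the following sketch, which fabricates a stand-in res directory and a small "vmlinux" file so that the pipeline can be demonstrated without a kernel build (torture.sh's actual compression commands may differ):

```shell
#!/bin/sh
# Sketch of parallel vmlinux compression in the spirit of torture.sh.
# The res directory and the 1MB stand-in "vmlinux" are fabricated here.
resdir=$(mktemp -d)
mkdir "$resdir/TREE03"
head -c 1048576 /dev/zero > "$resdir/TREE03/vmlinux"

# Compress every vmlinux found, using all available CPUs:
find "$resdir" -name vmlinux -print0 | xargs -0 -r xz -T "$(nproc)"

compressed=$(ls "$resdir/TREE03")
echo "$compressed"   # xz replaces each vmlinux with vmlinux.xz
rm -rf "$resdir"
```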

Finally, torture.sh places all of its output under a date-stamped directory suffixed with -torture, for example, tools/testing/selftests/rcutorture/res/2021.05.03-20.10.12-torture. This allows bulky torture.sh directories to be more aggressively cleaned up when disks start getting full.

Taking all of this together, torture.sh provides a very useful overnight “acceptance test” for RCU.

Stupid RCU Tricks: A tour through rcutorture

Although Linux-kernel RCU gets most of the attention, without rcutorture, RCU would not be what it is today. To see this, note that the old saying “If it ain't tested, it don't work!” is if anything more valid today than when it was coined. After all, software has not gotten any simpler, workloads have not become less demanding, and systems have not grown smaller, except in terms of physical size. That said, the decrease in size has been truly impressive. Back when Jack and I invented RCU, the hardware contained in my laptop would have filled no fewer than fifteen standard racks, and that ignores the hardware that simply was not available back then, and also ignores the reliability issues that would have resulted from such an imposing agglomeration of hardware.

It is rcutorture's job to make sure that Linux-kernel RCU actually works, and so it is worthwhile getting to know rcutorture a bit better. The following blog posts cover design of, use of, and experience with this test suite:

  1. Stupid RCU Tricks: So you want to torture RCU? (use)
  2. Stupid RCU Tricks: So rcutorture is Not Aggressive Enough For You? (use)
  3. Stupid RCU Tricks: Failure Probability and CPU Count (use)
  4. Stupid RCU Tricks: Enlisting the Aid of a Debugger (use)
  5. Stupid RCU Tricks: Torturing RCU Fundamentally, Part I (design)
  6. Stupid RCU Tricks: Torturing RCU Fundamentally, Part II (design)
  7. Stupid RCU Tricks: Torturing RCU Fundamentally, Part III (design)
  8. Stupid RCU Tricks: Torturing RCU Fundamentally, Parts IV and V (design)
  9. Stupid RCU Tricks: So rcutorture is Still Not Aggressive Enough For You? (use)
  10. Stupid RCU Tricks: rcutorture fails to find an RCU bug (experience)
  11. Stupid RCU Tricks: The design of rcutorture (design)
  12. Stupid RCU Tricks: Which tests do I run??? (use)

And here are a few older posts covering rcutorture:

  1. Hunting Heisenbugs (experience, 2009)
  2. Hunting More Heisenbugs (experience, 2009)
  3. Stupid RCU Tricks: RCU Priority Inversion (design, 2010)
  4. And it used to be so simple... (design, 2011)
  5. Stupid RCU Tricks: Bug Found by Refactored Tests (design, experience, and use, 2014)
  6. Stupid RCU Tricks: rcutorture Catches an RCU Bug (experience, 2014)
  7. Stupid RCU Tricks: rcutorture Accidentally Catches an RCU Bug (experience, 2017)
Ah, but what about formal verification? But of course! Please see this series, and especially this post.

I hope that this series is helpful, and I further hope that it will inspire more aggressive torturing of other software!

Stupid RCU Tricks: The design of rcutorture

This installment of the rcutorture series takes a high-level look at its design. At the highest level, rcutorture is a stress test with a few unit-test components thrown in for good measure. It also includes scripts to handle both single-system and distributed testing. All of this code is of course paying homage to the many moods of Mr. Murphy.

The Many Moods of Mr. Murphy

As I have progressed through my career, I seem to have progressively miffed Mr. Murphy.

I completed my first professional (but pro bono) project in the mid-1970s. It had one user. Any million-year bugs it might have contained took the full million years to appear. This meant that Murphy was actually a pretty nice guy. Sure, whatever could happen would. Eventually. Maybe in geologic time.

In the 1980s, I completed a number of contract-programming projects that might have had installed bases of as many as 100 units. A million-year bug could be expected to appear about once per 10,000 years. In the 1990s, I worked on Sequent's DYNIX/ptx proprietary-UNIX operating system, which had an installed base of perhaps 6,000 systems. A million-year bug could be expected to appear not quite once per two centuries.

Shortly after the year 2000, I started working on the Linux kernel. There are at best rough estimates of the Linux kernel's installed base, and as of 2017, there were an estimated 20 billion systems of one sort or another running the Linux kernel, including smartphones, automobiles, household appliances, and much more. A million-year bug could be expected to appear more than once per hour across this huge installed base. In other words, over a period of about 40 years, Murphy has transitioned from being a pretty nice guy to being a total jerk!

Worse yet, should the Linux kernel capture even a modest fraction of the Internet-of-things market, a million-year bug could be expected to appear every few minutes across the installed base. Which might well result in Murphy becoming nothing less than a homicidal maniac.

Fortunately, there are some validation strategies that might help keep Murphy on the straight and narrow.

If You Cannot Beat Him, Join Him!

Given that everything that can happen eventually will, the task at hand is to try to make it happen in the comparative comfort and safety of the lab. This means aiding and abetting Mr. Murphy, at least within the lab environment. And this is the whole point of rcutorture, whose tricks include the following:

  1. Temporal fuzzing.
  2. Exercising race conditions.
  3. Anticipating abuse.
Of course, none of these tricks are new, but it does not hurt to review them.

Temporal Fuzzing

But why not go for the full effect and apply straight-up fuzzing? The answer to this question may be found in RCU's core API:
void rcu_read_lock(void);
void rcu_read_unlock(void);
void synchronize_rcu(void);
void call_rcu(struct rcu_head *head, rcu_callback_t func);
For the first three functions, there is nothing to fuzz, unless you are trying to test your compiler. For the last function, fuzzing of pointers—and most especially pointers to functions—is reserved for the truly brave and for those wishing to test their kernel's exception handling.

But it does make sense to fuzz the timing of calls to these functions, and that is exactly what rcutorture does. RCU readers and updaters are invoked at random times, with readers and updaters cooperating to detect any too-short grace periods, memory misordering, and so on. Much of the fuzzing is randomly generated at run time, but there are also module parameters that insert delays in specific locations. This strategy is straightforward, but can also be powerful, for example, careful choice of delays and other configuration settings decreased the mean time between failure (MTBF) of a memorable heisenbug from hundreds of hours to less than five hours. This had the beneficial effect of de-heisening this bug.

Exercising Race Conditions

Many of the most troublesome bugs involve rare operations, and one way to join forces with Murphy is to make rare operations less rare during validation. And rcutorture takes this approach often, including for the following operations:

  1. CPU hotplug.
  2. Transitions to and from idle, including transitions to and from the whole system being idle.
  3. Long RCU readers.
  4. Readers from interrupt handlers.
  5. Complex readers, for example, those overlapping with irq-disable regions.
  6. Delayed grace periods, for example, allowing a CPU to go offline and come back online during grace-period initialization.
  7. Racing call_rcu() invocations against rcu_barrier().
  8. Periodic forced migrations to other CPUs.
  9. Substantial testing of less-popular grace-period mechanisms.
  10. Processes running on the hypervisor to preempt code running in rcutorture guest OSes.
  11. Process exit.
  12. “Near misses” where the RCU grace-period guarantee is almost violated.
  13. Moving CPUs to and from rcu_nocbs callback-offloaded mode.
This exercising of race conditions might be reminiscent of the Netflix Chaos Monkey.

Anticipating Abuse

There are things that RCU users are not supposed to do. Just as users of the fork() system call are not supposed to code up forkbombs, RCU users are not supposed to code up endless blasts of call_rcu() invocations (see Documentation/RCU/checklist.rst item 8). Nevertheless, rcutorture does engage in (carefully limited forms of) call_rcu() abuse in order to find stress-related RCU bugs. This abuse is enabled by default and may be controlled by the rcutorture.fwd_progress module parameter and friends.

In addition, rcutorture inserts the occasional long-term delay in preemptible RCU readers and exercises code paths that must avoid deadlocks involving the scheduler and RCU.

Meta-Murphy, AKA Test the Test

Of course, one danger of joining Murphy is that things can go wrong in test code just as easily as they can go wrong in the code under test.

For this reason, rcutorture provides the rcutorture.object_debug module parameter that verifies that the code checking for double call_rcu() invocations is working properly. In addition, the rcutorture.stall_cpu module parameter and friends may be used to force RCU CPU stall warning messages of various types.

The rcutorture tests of more fundamental RCU properties may be enabled by using the rcutorture.torture_type module parameter. For example, rcutorture.torture_type=busted selects a broken RCU implementation, which may also be selected using the BUSTED scenario. Either way, rcutorture had jolly well better complain about too-short grace periods. In addition, rcutorture.torture_type=busted_srcud forces rcutorture to run compound readers against SRCU, which does not support this notion. In this case also, rcutorture had better complain about too-short grace periods for these compound readers. The rcutorture.leakpointer module parameter tests the CONFIG_RCU_STRICT_GRACE_PERIOD Kconfig option's ability to detect pointers leaked from RCU read-side critical sections. Finally, the rcutorture tests of RCU priority boosting can themselves be tested by using the BUSTED-BOOST scenario, which must then complain about priority-boosting failures.

Additional unscheduled tests of rcutorture testing are of course provided by bugs in RCU itself. Perhaps these are rare examples of Murphy working against himself, but they normally do not feel that way at the time!

Enlisting Darwin

Those who are willing to consider the possibility that natural selection applies to non-living objects might do well to consider validation such as that provided by rcutorture to be a selection function. Now, some developers might object to the thought that their carefully created changes are random mutations, but the sad fact is that long experience has often supported that view.

With this in mind, a good validation suite will select against bugs, resulting in robust software, right?

Wrong.

You see, bugs are a form of software. An undesirable form, perhaps, but a form nevertheless. Bugs will therefore adapt to any fixed validation suite and accumulate in your software, degrading its robustness. This means that any bugs located by end users must also be considered bugs against the validation suite, which after all failed to find those bugs. Modifying the validation suite to successfully find those bugs is therefore important, as are independent efforts to make the validation suite more capable. The hope is that modifying the test suite will make it more difficult for bugs to adapt to it.

But even that is insufficient. Blindly adding tests and test cases will eventually bloat your test suite to the point where it is no longer feasible to run all of it. It is therefore also necessary to review test cases and work out how to make them find bugs faster with less hardware, whether by merging tests, running more tests concurrently, or by more vigorously enlisting Mr. Murphy's assistance. It might also be necessary to eliminate test cases that are no longer relevant, for example, now that RCU no longer has a synchronize_rcu_bh(), there is no point in testing it.

In short, the price of robust software is eternal test development.

Stupid RCU Tricks: rcutorture fails to find an RCU bug

I recently took a close look at rcutorture's console output and noticed the following string: rtbf: 0 rtb: 0. The good news is that there were no rcutorture priority-boosting failures (rtbf: 0). The bad news is that this was only because there was no priority-boosting testing (rtb: 0). And as we all know, if it isn't tested, it doesn't work, so this implied bugs in RCU priority boosting itself.

What is RCU Priority Boosting?

If you are running a kernel built with CONFIG_PREEMPT=y, RCU read-side critical sections can be preempted by higher-priority tasks, regardless of whether these tasks are executing kernel or userspace code. If there are enough higher-priority tasks, and especially if someone has foolishly disabled realtime throttling, these RCU read-side critical sections might remain preempted for a good long time. And as long as they remain preempted, RCU grace periods cannot complete. And if RCU grace periods cannot complete, your system has an OOM in its future.

This is where RCU priority boosting comes in, at least in kernels built with CONFIG_RCU_BOOST=y. If a given grace period is blocked only by preempted RCU read-side critical sections, and that grace period is at least 500 milliseconds old (this timeout can be adjusted using the RCU_BOOST_DELAY Kconfig option), then RCU starts boosting the priority of these RCU readers to the level specified by the rcutree.kthread_prio kernel boot parameter, which defaults to FIFO priority 2. RCU does this using one rcub kthread per rcu_node structure. Given a default Kconfig, this works out to one rcub kthread per 16 CPUs.

Why did rcutorture Fail to Test RCU Priority Boosting?

As with many things in life, this happened one step at a time:

  1. A bug I was chasing a few years back reproduced much more quickly if I enabled CPU hotplug on the TREE03 rcutorture scenario.
  2. And in addition, x86 no longer supports configurations where CPUs cannot be hotplugged (mumble mumble security mumble mumble), which means that the rcutorture scripting is always going to test CPU hotplug.
  3. TREE03 was the one scenario that tested RCU priority boosting.
  4. But RCU priority-boost testing assumed that CPU hotplug was disabled. So much so that it would disable itself if CPU-hotplug testing was enabled. Which it now always was.
  5. So RCU priority boosting has gone completely untested for quite a few years.
  6. Quite a few more years back, I learned that firmware sometimes lies about the number of CPUs. I learned this from bug reports noting that RCU was sometimes creating way more kthreads than made any sense on small systems.
  7. So the spawning of kthreads that are per-CPU or per-group-of-CPUs is done at CPU-online time. Which ensures that systems get the right number of RCU kthreads even in the presence of lying firmware. In the case of the RCU boost kthreads, the code verifies that the rcu_node structure in question has at least one online CPU before spawning the corresponding kthread.
  8. Except that it is now quite possible for the incoming CPU to not be fully online at the time that rcutree_online_cpu() executes, in part due to RCU being much more careful about CPU hotplug. This means that the RCU boost kthread will be spawned when the second CPU corresponding to a given rcu_node structure comes online.
  9. Which means that rcu_node structures that have only one CPU never have an RCU boost kthread, and in turn that RCU readers preempted on such CPUs will never be boosted. This problematic situation is unusual, requiring 17, 33, 49, 65, ... CPUs on the system, assuming default RCU kconfig options. But it can be made to happen, especially when using the rcutorture scripting. (--kconfig "CONFIG_NR_CPUS=17" ...)
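The problematic CPU counts in that last step follow from the default geometry of 16 CPUs per leaf rcu_node structure: any CPU count of the form 16*n + 1 leaves the final rcu_node with exactly one CPU. A quick sketch:

```shell
#!/bin/sh
# With the default of 16 CPUs per leaf rcu_node structure, CPU counts
# of the form 16*n + 1 leave the final rcu_node with exactly one CPU.
leaf=16
for cpus in 16 17 32 33 49; do
    last=$(( cpus % leaf ))
    if [ "$last" -eq 0 ]; then last=$leaf; fi
    echo "$cpus CPUs: last rcu_node has $last CPU(s)"
done
```

Only the 17-, 33-, and 49-CPU counts in this loop produce a single-CPU rcu_node, and thus (before the fix) an rcu_node structure lacking its RCU boost kthread.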

The fix is to refactor the creation of rcub kthreads so that a CPU coming online is assumed to eventually make it online, which means that one online CPU suffices to spawn an rcub kthread.

Additional Testing Challenges

The rcu_torture_boost() function required additional rework because CPUs can fail to pass through a quiescent state for some seconds from time to time, and there is nothing that RCU priority boosting can do about this. There are now checks for this condition, and rcutorture refrains from reporting an error in such cases.

Worse yet, this testing proceeds by disabling the aforementioned realtime throttling, then running a FIFO realtime priority 1 kthread on each CPU. This sort of abuse is a great way to break your kernel, yet nothing less abusive will reliably and efficiently test RCU priority boosting. It just so happens that many of RCU's kthreads will do just fine because in this configuration they run at FIFO realtime priority 2. Unfortunately, timers often run in a ksoftirqd kthread, which runs at a non-realtime priority. This means that although RCU's grace-period kthread runs just fine, if it tries to sleep for (say) three milliseconds, it won't awaken until RCU priority boosting testing has completed, which is a great way to force this testing to fail.

Therefore, rcutorture now takes the rude and crude approach of checking to see if it is built into the kernel (as opposed to running as a kernel module), and if so, it forces all of the ksoftirqd kthreads to run at FIFO realtime priority 2. (Needless to say, don't try this at home.)

The usual way to asynchronously determine when a grace period has ended is to post an RCU callback using call_rcu(). Except that in realtime configurations, RCU callbacks are often offloaded to rcuo kthreads. It is the system administrator's responsibility to decide where to run these, and, failing that, the Linux-kernel scheduler's responsibility. Neither of which should be expected to do the right thing in the presence of a full set of CPU-bound unthrottled real-time-priority boost-test kthreads.

Fortunately, RCU now has polling APIs for managing grace periods. The start_poll_synchronize_rcu() function starts a new grace period if needed and returns a “cookie” that can be passed to poll_state_synchronize_rcu(), which will return true if the needed grace period has completed. These functions do not rely on RCU callbacks, and thus will function correctly even if the rcuo kthreads are inauspiciously scheduled, or even if these kthreads are not scheduled at all. Thus, rcutorture's test of RCU priority boosting now uses these two functions.

With all of this in place, RCU priority boosting lives again!

But untested software does not work, and that includes the tests themselves. Thus, a new BUSTED-BOOST scenario tests RCU priority boosting on a kernel built with CONFIG_RCU_BOOST=y, which does not do RCU priority boosting. This scenario fails within a few tens of seconds, so the test being tested might actually be working!

Stupid RCU Tricks: So rcutorture is Still Not Aggressive Enough For You?

An earlier post discussed ways of making rcutorture more aggressive, but even with these techniques, rcutorture's level of aggression is limited by build time on the one hand and the confines of a single system on the other. This post describes some recent ways around those limitations.

Play It Again, Sam!

A full rcutorture run will do about 20 kernel builds, which can take some tens of minutes or, on slower systems, well over an hour. This can be extremely annoying when you simply want to re-run the last test in order to obtain better failure statistics or to get more test time on a recent bug fix.

The traditional rcutorture way of avoiding rebuilds is to optionally edit the qemu-cmd files for each scenario to be re-run, then manually invoke sh on each resulting file. The editing step allows you to avoid overwriting the previous run's console output, but may be omitted if you don't care about that console output or if you have already saved it off somewhere. This works, but is painstaking and error-prone.

This is where the new kvm-again.sh script comes in. Its first argument is the path to the directory for the old run, for example, on my laptop, tools/testing/selftests/rcutorture/res/2021.03.31-10.52.56. This can be a relative pathname as in this example, but use of absolute pathnames can make your life easier when reviewing output from prior kvm-again.sh runs. By default, the new run will have the same duration as the old run, but the --duration argument may be used to specify the new run's duration. Also by default, kvm-again.sh will generate the new run's directory based on the current date and time (suffixed with -again), but the --rundir argument may be used to specify some other location. Finally, and again by default, hard links are used to “copy” the needed data from the old run directory (such as the Linux kernel), but the --link argument can be used to specify soft links or explicit copy operations. The full set of scenarios generates some 20 kernels, each of which is somewhat larger than it would have been in the past. You may therefore need to exercise some caution when using --link copy, especially if you are doing repeated kvm-again.sh runs.
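A re-run might therefore look something like the following sketch, where the "run" helper echoes the command line rather than executing it (the old-run directory is the example from this post, and "--link soft" is an assumed spelling based on the --link description above):

```shell
#!/bin/sh
# Sketch only: "run" echoes the command line instead of executing it.
# The old-run directory is the example from the text; "--link soft" is
# an assumed argument spelling.
run() { echo "$@"; }
bin=tools/testing/selftests/rcutorture/bin
oldrun=tools/testing/selftests/rcutorture/res/2021.03.31-10.52.56

# Re-run the old scenarios for 20 minutes, "copying" via soft links:
cmd=$(run "$bin/kvm-again.sh" "$oldrun" --duration 20 --link soft)
echo "$cmd"
```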

The re-run file in the new run directory gives the pathname of the old run directory. Although you can give a run directory produced by a prior kvm-again.sh invocation to a later kvm-again.sh invocation, best practice is to continue specifying the original run directory. If nothing else, following this best practice avoids ever-growing qemu-cmd files.

Of course, the shorter the runs, the greater an advantage kvm-again.sh provides. In the extreme case, it can be amazingly helpful when testing for rare boot-time failures.

Strength in Numbers

It seems likely that there are quite a few more people with access to eight 16-CPU systems than there are people with access to a single 128-CPU system. You can of course run kvm.sh on each of eight 16-CPU systems, but working out which scenarios to run on each of those systems can be time-consuming and error-prone. And this is why the new kvm-remote.sh script exists.

Build or Buy?

This script can be invoked in two different modes. In both cases, the first argument is a quoted list of system names, as in names that the ssh command understands. Specifying localhost or any of its synonyms might work, but is currently an option only for the brave. Should this prove useful, it will be taken care of in a later version of this script.

The first form builds all needed kernels on the system on which the kvm-remote.sh script is run. In this case, the second and subsequent arguments can be anything accepted by the kvm.sh script.

In the second form, the second and subsequent arguments must be suitable for the kvm-again.sh script, that is, the second argument must specify the path to an old run directory and the third and subsequent arguments can be --duration, --rundir, and --link.
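As a concrete sketch, and with purely hypothetical system names, the two forms might look like this (the kvm.sh flags shown are illustrative, and the old-run path is the laptop example from earlier):

```shell
# First form: build the kernels locally, then pass the remaining
# arguments straight through to kvm.sh.
kvm-remote.sh "sys1 sys2 sys3" --cpus 16 --duration 10

# Second form: reuse the kernels from a completed run, passing
# kvm-again.sh-style arguments.
kvm-remote.sh "sys1 sys2 sys3" \
        tools/testing/selftests/rcutorture/res/2021.03.31-10.52.56 \
        --duration 5
```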

In both forms, once the kernels are available, a tarball of all scenarios is downloaded to all of the systems. Each such download is run sequentially, which means that downloading can take significant time, especially if low-bandwidth network links are involved. Once all systems have had the tarball downloaded and expanded, batches of scenarios are parceled out among the systems specified by the first argument. If there are more batches than there are systems, once a system completes its current batch, it will be given another batch.
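The parceling scheme can be sketched with a toy shell loop. This is an illustration of the idea only, not code from kvm-remote.sh: the system names are hypothetical, the real script hands out batches over ssh rather than with echo, and this sketch idealizes all batches as taking equal time, which reduces first-system-free to simple round robin:

```shell
# Toy simulation of parceling: each "system" takes the next unclaimed
# batch; with equal batch durations this degenerates to round robin.
systems="sys1 sys2 sys3"
nbatches=7
batch=1
while [ "$batch" -le "$nbatches" ]
do
        for s in $systems
        do
                if [ "$batch" -le "$nbatches" ]
                then
                        echo "batch $batch -> $s"
                        batch=$((batch + 1))
                fi
        done
done
```

With seven batches and three systems, sys1 ends up with three batches and the other two systems with two each, just as a real run with more batches than systems keeps all systems busy until the work runs out.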

Once all batches have completed, the results from each system are uploaded back to the system running the kvm-remote.sh script, where the usual end-of-run error-checking and analysis is carried out.

This script assumes that all systems have the same number of CPUs. Addressing this limitation is future work. In the meantime, one workaround is to do multiple --buildonly runs of kvm.sh, one for each type of system. Then multiple runs of the second form of the kvm-remote.sh script can safely be run concurrently on the same build system. Because all the pre-built kernels for each type of system are safely collected up in the corresponding old-run directory, the multiple invocations of kvm-remote.sh will not interfere with each other.

Why ssh?

The kvm-remote.sh script uses ssh to do all downloading, control, and uploading operations. This might seem to be a poor choice in this age of Kubernetes and friends, but the fact remains that ssh is widely available, easy to configure, and reasonably robust. In contrast, there is a wide variety of Kubernetes-like systems, and they can be configured in a wide variety of ways. It would be impossible to choose just one of these systems, and it would be quite difficult to accommodate all of the configurations, versions, and variants of even one of them.

However, please note that kvm-remote.sh assumes that all of the systems have been set up properly. This means that low-level virtualization support must be in place, and it also means that running an ssh command to any of the specified systems must complete without the need for any human interaction. For example, if ssh foo date does not open a connection to system foo, run the date command, and print the result without any need to type any sort of password or passphrase, then system foo is not yet set up properly.
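Assuming OpenSSH, one way to check this precondition ahead of time is to forbid interactive prompts outright: with BatchMode enabled, ssh fails rather than asking for a password, so a failure means the host is not yet set up for unattended access. The function below is a hypothetical pre-flight helper, not part of kvm-remote.sh:

```shell
# Fail rather than prompt: BatchMode=yes forbids interactive
# authentication, and ConnectTimeout bounds the wait for dead hosts.
check_host() {
        if ssh -o BatchMode=yes -o ConnectTimeout=5 "$1" date >/dev/null 2>&1
        then
                echo "$1: ready for kvm-remote.sh"
        else
                echo "$1: needs setup"
        fi
}
```

Running check_host over the whole quoted list of system names before kicking off a long run can save the annoyance of discovering a misconfigured host hours later.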

Similarly, kvm-remote.sh does not take any actions that might be necessary to reserve system foo for your exclusive use, nor does it do anything to release this system upon completion of the test. Thus, these system-configuration, reservation, and release operations are jobs for which you may wish to enlist the help of Kubernetes or of similar frameworks. For example, I use (admittedly crude) scripts that interact with Facebook's internal environment to reserve and configure the desired number and type of systems, invoke kvm-remote.sh once everything is set up, and then release those systems.

What Might The Future Hold?

Although the kvm-remote.sh approach of using ssh works reasonably well on a few tens of systems, if someone wanted to run rcutorture on thousands of systems, something else would likely be required. On the other hand, there are not that many sites where one would reasonably devote anywhere near that many systems to rcutorture. There might be downloading improvements at some point, most likely in the form of allowing a script to be provided to allow kvm-remote.sh to use some site-specific optimized multi-system download utility. Both kvm-again.sh and kvm-remote.sh might someday need a way to specify that only a subset of a prior run's scenarios be re-run, for example, to chase down a bug that occurred in only a few of those scenarios.

And as mentioned earlier, perhaps a future version of kvm-remote.sh will gracefully handle remote systems with varying numbers of CPUs, or will run actual tests on the system running the kvm-remote.sh script itself.

But if things go as they usually do, a fair fraction of the future changes will come as complete surprises.

Parallel Programming: Second Edition

The second edition of “Is Parallel Programming Hard, And, If So, What Can You Do About It?” is now available. I have no plans to create a dead-tree version, but I have no objection to others doing so, whether individually or in groups.

Big-animal changes over the First Edition include:


  1. A full rewrite of the memory-barriers section, which is now its own chapter. This new chapter includes discussion of the Linux-kernel memory model, courtesy of Akira Yokosawa, who kindly pulled in the LWN article.
  2. A number of new tools have been added to the formal-verification chapter.
  3. A new section on SMP real-time programming.
  4. The “Tools of the Trade” chapter has been dragged kicking and screaming into the 2020s, courtesy of Akira Yokosawa, Junchang Wang, and Slavomir Kaslev.
  5. Hyperlinking between quizzes and answers, courtesy of Paolo Bonzini and Akira Yokosawa.
  6. Improved formatting and build system, courtesy of Akira Yokosawa.
  7. Bibliographic facelift, courtesy of Stamatis Karnouskos and Akira Yokosawa.
  8. Grammatical fixes from a great many people, but especially from translators SeongJae Park and Motohiro Kanda.
  9. Several new cartoons.
  10. Performance results from a system with hundreds of CPUs, courtesy of my employer, Facebook.
  11. Substantial updates pretty much everywhere else. (Yes, this might be the first time in a long time that I read through the entire book. Why do you ask?)

Contributors include Akira Yokosawa; SeongJae Park; Junchang Wang; Borislav Petkov; Stamatis Karnouskos; Palik, Imre; Paolo Bonzini; Praveen Kumar; Tobias Klauser; Andreea-Cristina Bernat; Balbir Singh; Bill Pemberton; Boqun Feng; Emilio G. Cota; Namhyung Kim; Andrew Donnellan; Dominik Dingel; Igor Dzreyev; Pierre Kuo; Yubin Ruan; Chris Rorvick; Dave; Mike Rapoport; Nicholas Krause; Patrick Marlier; Patrick Yingxi Pan; Slavomir Kaslev; Zhang, Kai; and Zygmunt Bazyli Krynicki. On behalf of all who read this book, I thank you all for all you did to help make this second edition a reality!

Parallel Programming: December 2020 Update

This release of Is Parallel Programming Hard, And, If So, What Can You Do About It? features numerous improvements:

 


  1. LaTeX and build-system upgrades (including helpful error checking and reporting), formatting improvements (including much nicer display of hyperlinks and of Quick Quizzes, polishing of numerous figures and tables, plus easier builds for A4 paper), refreshing of numerous broken URLs, an improved “make help” command (see below), improved FAQ-BUILD material, and a prototype index, all courtesy of Akira Yokosawa.
  2. A lengthy Quick Quiz on the relationship of half-barriers, compilers, CPUs, and locking primitives, courtesy of Patrick Yingxi Pan.
  3. Updated performance results throughout the book, courtesy of a large x86 system kindly provided by Facebook.
  4. Compiler tricks, RCU semantics, and other material from the Linux-kernel memory model added to the memory-ordering and tools-of-the-trade chapters.
  5. Improved discussion of non-blocking-synchronization algorithms.
  6. Many new citations, cross-references, fixes, and touchups throughout the book.

A number of issues were spotted by Motohiro Kanda in the course of his translation of this book to Japanese, and Borislav Petkov, Igor Dzreyev, and Junchang Wang also provided much-appreciated fixes.

The output of the aforementioned make help is as follows:
Official targets (Latin Modern Typewriter for monospace font):
  Full,              Abbr.
  perfbook.pdf,      2c:   (default) 2-column layout
  perfbook-1c.pdf,   1c:   1-column layout

Set env variable PERFBOOK_PAPER to change paper size:
   PERFBOOK_PAPER=A4: a4paper
   PERFBOOK_PAPER=HB: hard cover book
   other (default):   letterpaper

"make help-full" will show the full list of available targets.

The following excerpt of the make help-full command's output might be of interest to those who find Quick Quizzes distracting:
Experimental targets:
  Full,              Abbr.
  perfbook-qq.pdf,   qq:   framed Quick Quizzes
  perfbook-nq.pdf,   nq:   no inline Quick Quizzes (chapterwise Answers)

Thus, the make nq command creates a perfbook-nq.pdf with Quick Quizzes and their answers grouped at the end of each chapter, in the usual textbook style, while still providing PDF navigation from each Quick Quiz to the relevant portion of that chapter.

Finally, this release also happens to be the first release candidate for the long-awaited Second Edition, which should be available shortly.

Stupid RCU Tricks: Torturing RCU Fundamentally, Parts IV and V

Continuing further into the Linux-kernel Documentation/RCU/Design/Requirements/Requirements.rst file uncovers RCU's final two fundamental guarantees:

 

  1. The common-case RCU primitives are unconditional, and
  2. RCU users can perform a guaranteed read-to-write upgrade.

The first guarantee is trivially verified by inspection of the RCU API. The return types of rcu_read_lock(), rcu_read_unlock(), synchronize_rcu(), call_rcu(), and rcu_assign_pointer() are all void. These API members therefore have no way to indicate failure. Even primitives like rcu_dereference(), which do have non-void return types, will succeed any time a load of their pointer argument would succeed. That is, if you do rcu_dereference(*foop), where foop is a NULL pointer, then yes, you will get a segmentation fault. But this segmentation fault will be unconditional, as advertised!

The second guarantee is a consequence of the first four guarantees, and must be tested not within RCU itself, but rather within the code using RCU to carry out the read-to-write upgrade.

Thus for these last two fundamental guarantees there is no code in rcutorture. But maybe even rcutorture deserves a break from time to time! ;-)

Stupid RCU Tricks: Torturing RCU Fundamentally, Part III

Even more reading of the Linux-kernel Documentation/RCU/Design/Requirements/Requirements.rst file encounters RCU's memory-barrier guarantees. These guarantees are a bit ornate, but roughly speaking guarantee that RCU read-side critical sections lapping over one end of a given grace period are fully ordered with anything past the other end of that same grace period. RCU's overall approach towards this guarantee is shown in the Linux-kernel Documentation/RCU/Design/Memory-Ordering/Tree-RCU-Memory-Ordering.rst file, so one approach would be to argue that these guarantees are proven by a combination of this documentation along with periodic code inspection. Although this approach works well for some properties, the periodic code inspections require great attention to detail spanning a large quantity of intricate code. As such, these inspections are all too vulnerable to human error.

Another approach is formal verification, and in fact RCU's guarantees have been formally verified. Unfortunately, these formal-verification efforts, groundbreaking though they are, must be considered to be one-off tours de force. In contrast, RCU needs regular regression testing.

This leaves rcutorture, which has the advantage of being tireless and reasonably thorough, especially when compared to human beings. Except that rcutorture does not currently test RCU's memory-barrier guarantees.

Or at least it did not until today.

A new commit (which has since been accepted into Linux kernel v5.11) enlists the existing RCU readers. Each reader frequently increments a free-running counter, which can then be used to check memory ordering: If the counter appears to have counted backwards, something is broken. Each reader samples and records a randomly selected reader's counter, and assigns some other randomly selected reader to check for backwardsness. A flag is set at the end of each grace period, and once this flag is set, that other reader takes another sample of that same counter and compares them.

The test strategy for this particular fundamental property of RCU is more complex and likely less effective than the strategies for the properties described earlier, but life is like that sometimes.