
Stupid RCU Tricks: Failure Probability and CPU Count

So rcutorture found a bug, whether in RCU or elsewhere, and it is now time to reproduce that bug, whether to make good use of git bisect or to verify an alleged fix. One problem is that, rcutorture being what it is, that bug is likely a race condition and it likely takes longer than you would like to reproduce. Assuming that it reproduces at all.

How to make it reproduce faster? Or at all, as the case may be?

One approach is to tweak the Kconfig options and maybe even the code to make the failure more probable. Another is to find a “near miss” that is related to and more probable than the actual failure.

But given that we are trying to make a race condition happen more frequently, it is only natural to try tweaking the number of CPUs. After all, one would hope that increasing the number of CPUs would increase the probability of hitting the race condition. So the straightforward answer is to use all available CPUs.

But how to use them? Run a single rcutorture scenario covering all the CPUs, give or take the limitations imposed by qemu and KVM? Or run many instances of that same scenario, with each instance using a small fraction of the available CPUs?

As is so often the case, the answer is: “It depends!”

If the race condition happens randomly between any pair of CPUs, then bigger is better. To see this, consider the following old-school ASCII-art comparison:

+---------------------+
|        N * M        |
+---+---+---+-----+---+
| N | N | N | ... | N |
+---+---+---+-----+---+

If there are n CPUs that can participate in the race condition, then at any given time there are n(n-1)/2 possible races. The upper row has N*M CPUs, and thus N*M*(N*M-1)/2 possible races. The lower row has M sets of N CPUs, and thus M*N*(N-1)/2, which is smaller by a factor of (N*M-1)/(N-1), that is, by more than a factor of M. For this type of race condition, you should therefore run a small number of scenarios with each using as many CPUs as possible, and preferably only one scenario that uses all of the CPUs. For example, to make the TREE03 scenario run on 64 CPUs, edit the tools/testing/selftests/rcutorture/configs/rcu/TREE03 file so as to set CONFIG_NR_CPUS=64.
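
For instance, a minimal sketch of the all-CPUs approach on a 64-CPU system might look as follows, assuming that the TREE03 file still contains a CONFIG_NR_CPUS= line to edit:

sed -i 's/^CONFIG_NR_CPUS=.*/CONFIG_NR_CPUS=64/' \
    tools/testing/selftests/rcutorture/configs/rcu/TREE03
tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 64 --trust-make \
    --configs "TREE03"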

But there is no guarantee that the race condition will be such that all CPUs participate with equal probability. For example, suppose that the bug was due to a race between RCU's grace-period kthread (named either rcu_preempt or rcu_sched, depending on your Kconfig options) and its expedited grace period, which at any given time will be running on at most one workqueue kthread.

In this case, no matter how many CPUs are available to a given rcutorture scenario, at most two of them can participate in this race. It is therefore best to run as many two-CPU rcutorture scenarios as possible, give or take the memory footprint of that many guest OSes (one per rcutorture scenario). For example, to make 32 TREE03 scenarios run on 64 CPUs, edit the tools/testing/selftests/rcutorture/configs/rcu/TREE03 file so as to set CONFIG_NR_CPUS=2, and remember to pass either the --allcpus or the --cpus 64 argument to kvm.sh.
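
Again as a rough sketch, this time on that same 64-CPU system but with tiny guests:

sed -i 's/^CONFIG_NR_CPUS=.*/CONFIG_NR_CPUS=2/' \
    tools/testing/selftests/rcutorture/configs/rcu/TREE03
tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 64 --trust-make \
    --configs "32*TREE03"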

What happens in real life?

For a race condition that rcutorture uncovered during the v5.8 merge window, running one large rcutorture instance instead of 14 smaller ones (very) roughly doubled the probability of locating the race condition.

In other words, real life is completely capable of lying somewhere between the two theoretical extremes outlined above.

Stupid RCU Tricks: So rcutorture is Not Aggressive Enough For You?

So you read the previous post, but simply running rcutorture did not completely vent your frustration. What can you do?

One thing you can do is to tweak a number of rcutorture settings to adjust the manner and type of torture that your testing inflicts.

RCU CPU Stall Warnings

If you are not averse to a quick act of vandalism, then you might wish to induce an RCU CPU stall warning. The --bootargs argument can be used for this, for example as follows:

tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 3 --trust-make \
    --bootargs "rcutorture.stall_cpu=22 rcutorture.fwd_progress=0"

The rcutorture.stall_cpu=22 says to stall a CPU for 22 seconds, that is, one second longer than the default RCU CPU stall timeout in mainline. If you are instead using a distribution kernel, you might need to specify 61 seconds (as in “rcutorture.stall_cpu=61”) in order to allow for the typical 60-second RCU CPU stall timeout. The rcutorture.fwd_progress=0 has no effect except to suppress a warning message (with stack trace included free of charge) that questions the wisdom of running both RCU-callback forward-progress tests and RCU CPU stall tests at the same time. In fact, the code not only emits the warning message, it also automatically suppresses the forward-progress tests. If you prefer living dangerously and don't mind the occasional out-of-memory (OOM) lockup accompanying your RCU CPU stall warnings, feel free to edit kernel/rcu/rcutorture.c to remove this automatic suppression.

If you are running on a large system that takes more than ten seconds to boot, you might need to increase the RCU CPU stall holdoff interval. For example, adding rcutorture.stall_cpu_holdoff=120 to the --bootargs list would wait for two minutes before stalling a CPU instead of the default holdoff of 10 seconds. If simply spinning a CPU with preemption disabled does not fully vent your ire, you could undertake a more profound act of vandalism by adding rcutorture.stall_cpu_irqsoff=1 so as to cause interrupts to be disabled on the spinning CPU.
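
Putting the pieces from this section together, one illustrative (and untested, so adjust to taste) invocation combining these parameters might be:

tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 3 --trust-make \
    --bootargs "rcutorture.stall_cpu=22 rcutorture.stall_cpu_holdoff=120 rcutorture.stall_cpu_irqsoff=1 rcutorture.fwd_progress=0"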

Some flavors of RCU such as SRCU permit general blocking within their read-side critical sections, and you can exercise this capability by adding rcutorture.stall_cpu_block=1 to the --bootargs list. Better yet, you can use this kernel-boot parameter to torture flavors of RCU that forbid blocking within read-side critical sections, which allows you to see how they complain about such mistreatment.

The vanilla flavor of RCU has a grace-period kthread, and stalling this kthread is another good way to torture RCU. Simply add rcutorture.stall_gp_kthread=22 to the --bootargs list, which delays the grace-period kthread for 22 seconds. Doing this will normally elicit strident protests from mainline kernels.
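
For example, reusing the earlier command line as a template (again illustrative only):

tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --duration 3 --trust-make \
    --bootargs "rcutorture.stall_gp_kthread=22"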

Finally, you could starve rcutorture of CPU time by running a large number of rcutorture instances concurrently (each in its own Linux-kernel source tree), thereby overcommitting the CPUs.

But maybe you would prefer to deprive RCU of memory. If so, read on!

Running rcutorture Out of Memory

By default, each rcutorture guest OS is allotted 512MB of memory. But perhaps you would like to have it make do with only 128MB:

tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --trust-make --memory 128M

You could go further by making the RCU need-resched testing more aggressive, for example, by increasing the duration of this testing from the default three-quarters of the RCU CPU stall timeout to (say) seven eighths:

tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --trust-make --memory 128M \
    --bootargs "rcutorture.fwd_progress_div=8"

More to the point, you might make the RCU callback-flooding tests more aggressive, for example by adjusting the values of the MAX_FWD_CB_JIFFIES, MIN_FWD_CB_LAUNDERS, or MIN_FWD_CBS_LAUNDERED macros and rebuilding the kernel. Alternatively, you could use kill -STOP on one of the vCPUs in the middle of an rcutorture run. Either way, if you break it, you buy it!

Or perhaps you would rather attempt to drown rcutorture in memory, for example by forcing a full 16GB onto each guest OS:

tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --trust-make --memory 16G

Another productive torture method involves unusual combinations of Kconfig options, a topic taken up by the next section.

Confused Kconfig Options

The Kconfig options for a given rcutorture scenario are specified by the corresponding file in the tools/testing/selftests/rcutorture/configs/rcu directory. For example, the Kconfig options for the infamous TREE03 scenario may be found in tools/testing/selftests/rcutorture/configs/rcu/TREE03.

But why not just use the --kconfig argument and be happy, as described previously?

One reason is that there are a few Kconfig options that the rcutorture scripting refers to early in the process, before the --kconfig parameter's additions have been processed. For example, changing CONFIG_NR_CPUS should be done in the scenario's file rather than via the --kconfig parameter. Another reason is to avoid having to supply the same --kconfig argument for each of many repeated rcutorture runs. But perhaps most important, if you want some scenarios to be built with one Kconfig option and others built with some other Kconfig option, modifying each scenario's file avoids the need for multiple rcutorture runs.

For example, you could edit the tools/testing/selftests/rcutorture/configs/rcu/TREE03 file to change the CONFIG_NR_CPUS=16 to instead read CONFIG_NR_CPUS=4, and then run the following on a 12-CPU system:

tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --trust-make --configs "3*TREE03"

This would run three concurrent copies of TREE03, but with each guest OS restricted to only 4 CPUs.

Finally, if a given Kconfig option applies to all rcutorture runs and you are tired of repeatedly entering --kconfig arguments, you can instead add that option to the tools/testing/selftests/rcutorture/configs/rcu/CFcommon file.
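
For example, a quick shell sketch, using a couple of lockdep-related Kconfig options purely as placeholders:

cat >> tools/testing/selftests/rcutorture/configs/rcu/CFcommon <<EOF
CONFIG_DEBUG_LOCK_ALLOC=y
CONFIG_PROVE_LOCKING=y
EOF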

But sometimes Kconfig options just aren't enough. And that is why we have kernel boot parameters, the subject of the next section.

Boisterous Boot Parameters

We have supplied kernel boot parameters using the --bootargs parameter, but sometimes ordering considerations or sheer laziness motivate greater permanence. Either way, the scenario's .boot file may be brought to bear. For example, the TREE03 scenario's file is located here: tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot.

As of the v5.7 Linux kernel, this file contains the following:

rcutorture.onoff_interval=200 rcutorture.onoff_holdoff=30
rcutree.gp_preinit_delay=12
rcutree.gp_init_delay=3
rcutree.gp_cleanup_delay=3
rcutree.kthread_prio=2
threadirqs

For example, the probability of RCU's grace period processing overlapping with CPU-hotplug operations may be adjusted by decreasing the value of the rcutorture.onoff_interval from its default of 200 milliseconds or by adjusting the various grace-period delays specified by the rcutree.gp_preinit_delay, rcutree.gp_init_delay, and rcutree.gp_cleanup_delay parameters. In fact, chasing bugs involving races between RCU grace periods and CPU-hotplug operations often involves tuning these four parameters to maximize race probability, thus decreasing the required rcutorture run durations.
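
As a purely illustrative sketch, one way to make TREE03's CPU-hotplug operations twice as frequent is to edit its .boot file directly:

# Halve the CPU-hotplug interval for TREE03 (value in milliseconds, per the above).
sed -i 's/rcutorture.onoff_interval=200/rcutorture.onoff_interval=100/' \
    tools/testing/selftests/rcutorture/configs/rcu/TREE03.boot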

The possibilities for the .boot file contents are limited only by the extent of Documentation/admin-guide/kernel-parameters.txt. And actually not even by that, given the all-too-real possibility of undocumented kernel boot parameters.

You can also create your own rcutorture scenarios by creating a new set of files in the tools/testing/selftests/rcutorture/configs/rcu directory. You can make a new scenario run by default (or in response to the CFLIST string passed to the --configs parameter) by adding its name to the tools/testing/selftests/rcutorture/configs/rcu/CFLIST file. For example, you could create a MYSCENARIO file containing Kconfig options and (optionally) a MYSCENARIO.boot file containing kernel boot parameters in the tools/testing/selftests/rcutorture/configs/rcu directory, and make them run by default by adding a line reading MYSCENARIO to the tools/testing/selftests/rcutorture/configs/rcu/CFLIST file.
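
A minimal sketch of that procedure, with entirely made-up contents (the copy of TREE09 is simply a convenient small starting point, and the boot parameter shown is one discussed later in this series):

cd tools/testing/selftests/rcutorture/configs/rcu
cp TREE09 MYSCENARIO                     # start from an existing small scenario
echo "rcutorture.gp_exp=1" > MYSCENARIO.boot
echo "MYSCENARIO" >> CFLIST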

Summary

This post discussed enhancing rcutorture through use of stall warnings, memory limitations, Kconfig options, and kernel boot parameters. The special case of adjusting CONFIG_NR_CPUS deserves more attention, and that is the topic of the next post.

Stupid RCU Tricks: So you want to torture RCU?

Let's face it, using synchronization primitives such as RCU can be frustrating. And it is only natural to wish to get back, somehow, at the source of such frustration. In short, it is quite understandable to want to torture RCU. (And other synchronization primitives as well, but you have to start somewhere!) Another benefit of torturing RCU is that doing so sometimes uncovers bugs in other parts of the kernel. You see, RCU is not always willing to suffer alone.

One long-standing RCU-torture approach is to use modprobe and rmmod to install and remove the rcutorture module, as described in the torture-test documentation. However, this approach requires considerable manual work to check for errors.

On the other hand, this approach avoids any concern about the underlying architecture or virtualization technology. This means that use of modprobe and rmmod is the method of choice if you wish to torture RCU on (say) SPARC or when running on Hyper-V (this last according to people actually doing this). This method is also necessary when you want to torture RCU on a very specific kernel configuration or when you need to torture RCU on bare metal.

But for those of us running mainline kernels on x86 systems supporting virtualization, the approach described in the remainder of this document will usually be more convenient.

Running rcutorture in a Guest OS

If you have an x86 system (or, with luck, an ARMv8 or PowerPC system) set up to run qemu and KVM, you can instead use the rcutorture scripting, which automates running rcutorture over a full set of configurations, as well as automating analysis of the build products and console output. Running this can be as simple as:

tools/testing/selftests/rcutorture/bin/kvm.sh

As of v5.8-rc1, this will build and run each of nineteen combinations of Kconfig options, with each run taking 30 minutes for a total of 9.5 hours, not including the time required to build the kernel, boot the guest OS, and analyze the test results. Given that a number of the scenarios use only a single CPU, this approach can be quite wasteful, especially on the well-endowed systems of the year 2020.

This waste can be avoided by using the --cpus argument, for example, for the 12-hardware-thread laptop on which I am typing this, you could do the following:

tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 12

This command would run up to 12 CPUs worth of rcutorture scenarios concurrently, so that the nineteen combinations would be run in eight batches. Because TREE03 and TREE07 each want 16 CPUs, rcutorture will complain in its run summary as follows:

 --- Mon Jun 15 10:23:02 PDT 2020 Test summary:
Results directory: /home/git/linux/tools/testing/selftests/rcutorture/res/2020.06.15-10.23.02
tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 12 --duration 5 --trust-make
RUDE01 ------- 2102 GPs (7.00667/s) [tasks-rude: g0 f0x0 ]
SRCU-N ------- 42229 GPs (140.763/s) [srcu: g549860 f0x0 ]
SRCU-P ------- 11887 GPs (39.6233/s) [srcud: g110444 f0x0 ]
SRCU-t ------- 59641 GPs (198.803/s) [srcu: g1 f0x0 ]
SRCU-u ------- 59209 GPs (197.363/s) [srcud: g1 f0x0 ]
TASKS01 ------- 1029 GPs (3.43/s) [tasks: g0 f0x0 ]
TASKS02 ------- 1043 GPs (3.47667/s) [tasks: g0 f0x0 ]
TASKS03 ------- 1019 GPs (3.39667/s) [tasks: g0 f0x0 ]
TINY01 ------- 43373 GPs (144.577/s) [rcu: g0 f0x0 ] n_max_cbs: 34463
TINY02 ------- 46519 GPs (155.063/s) [rcu: g0 f0x0 ] n_max_cbs: 2197
TRACE01 ------- 756 GPs (2.52/s) [tasks-tracing: g0 f0x0 ]
TRACE02 ------- 559 GPs (1.86333/s) [tasks-tracing: g0 f0x0 ]
TREE01 ------- 8930 GPs (29.7667/s) [rcu: g64765 f0x0 ]
TREE02 ------- 17514 GPs (58.38/s) [rcu: g138645 f0x0 ] n_max_cbs: 18010
TREE03 ------- 15920 GPs (53.0667/s) [rcu: g159973 f0x0 ] n_max_cbs: 1025308
CPU count limited from 16 to 12
TREE04 ------- 10821 GPs (36.07/s) [rcu: g70293 f0x0 ] n_max_cbs: 81293
TREE05 ------- 16942 GPs (56.4733/s) [rcu: g123745 f0x0 ] n_max_cbs: 99796
TREE07 ------- 8248 GPs (27.4933/s) [rcu: g52933 f0x0 ] n_max_cbs: 183589
CPU count limited from 16 to 12
TREE09 ------- 39903 GPs (133.01/s) [rcu: g717745 f0x0 ] n_max_cbs: 83002

However, other than these two complaints, this is what the summary of an uneventful rcutorture run looks like.

Whatever is the meaning of all those numbers in the summary???

The console output for each run and much else besides may be found in the /home/git/linux/tools/testing/selftests/rcutorture/res/2020.06.15-10.23.02 directory called out above.

The more CPUs you have, the fewer batches are required:

CPUs  Batches
   1       19
   2       16
   4       13
   8       10
  16        6
  32        3
  64        2
 128        1


If you specify more CPUs than your system actually has, kvm.sh will ignore your fantasies in favor of your system's reality.

Specifying Specific Scenarios

Sometimes it is useful to take one's ire out on a specific type of RCU, for example, SRCU. You can use the --configs argument to select specific scenarios:

tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 12 \
    --configs "SRCU-N SRCU-P SRCU-t SRCU-u"

This runs in two batches, but the second batch uses only two CPUs, which is again wasteful. Given that SRCU-P requires eight CPUs, SRCU-N four CPUs, and SRCU-t and SRCU-u one each, it would cost nothing to run two instances of each of these scenarios other than SRCU-N as follows:

tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 12 \
    --configs "SRCU-N 2*SRCU-P 2*SRCU-t 2*SRCU-u"

This same notation can be used to run multiple copies of the entire list of scenarios. For example (again, in v5.7), a system with 384 CPUs can use --configs 4*CFLIST to run four copies of the full set of scenarios as follows:

tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 384 --configs "4*CFLIST"

Mixing and matching is permissible, for example:

tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 384 --configs "3*CFLIST 12*TREE02"

A kvm.sh script that is to run on a wide variety of systems can benefit from --allcpus (expected to appear in v5.9), which acts like --cpus N, where N is the number of CPUs on the current system:

tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --configs "3*CFLIST 12*TREE02"

Build time can dominate when running a large number of short-duration runs, for example, when chasing down a low-probability non-deterministic boot-time failure. Use of --trust-make can be very helpful in this case:

tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 384 --duration 2 \
    --configs "1000*TINY01" --trust-make

Without --trust-make, rcutorture will play it safe by forcing your source tree to a known state between each build. In addition to --trust-make, there are a number of tools such as ccache that can also greatly reduce build times.

Locating Test Failures

Although the ability to automatically run many tens of scenarios can be very convenient, it can also cause significant eyestrain staring through a long “summary” checking for test failures. Therefore, if there are failures, this is noted at the end of the summary, for example, as shown in the following abbreviated output from a --configs "28*TREE03" run:

TREE03.8 ------- 1195094 GPs (55.3284/s) [rcu: g11475633 f0x0 ] n_max_cbs: 1449125
TREE03.9 ------- 1202936 GPs (55.6915/s) [rcu: g11572377 f0x0 ] n_max_cbs: 1514561
3 runs with runtime errors.

Of course, picking the three errors out of the 28 runs can also cause eyestrain, so there is yet another useful little script:

tools/testing/selftests/rcutorture/bin/kvm-find-errors.sh \
    /home/git/linux/tools/testing/selftests/rcutorture/res/2020.06.15-10.23.02

This will run your editor on the make output for each build error and on the console output for each runtime failure, greatly reducing eyestrain. Users of vi can also edit a summary of the runtime errors from each failing run as follows:

vi /home/git/linux/tools/testing/selftests/rcutorture/res/2020.06.15-10.23.02/*/console.log.diags

Enlisting Torture Assistance

If rcutorture produces a failure-free run, that is a failure on the part of rcutorture. After all, there are bugs in there somewhere, and rcutorture failed to find them!

One approach is to increase the duration, for example, to 12 hours (also known as 720 minutes):

tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 12 --duration 720

Another approach is to enlist the help of other in-kernel torture features, for example, lockdep. The --kconfig parameter to kvm.sh can be used to this end:

tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 12 --configs "TREE03" \
    --kconfig "CONFIG_DEBUG_LOCK_ALLOC=y CONFIG_PROVE_LOCKING=y"

The aid of the kernel address sanitizer (KASAN) can be enlisted using the --kasan argument:

tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 12 --kasan

The kernel concurrency sanitizer (KCSAN) can also be brought to bear, but proper use of KCSAN requires some thought (see part 1 and part 2 of the LWN “Concurrency bugs should fear the big bad data-race detector” article) and also version 11 or later of Clang/LLVM (and a patch for GCC has been accepted). Once you have all of that in place, the --kcsan argument invokes KCSAN and also generates a summary as described in part 1 of the aforementioned LWN article. Note again that only very recent compiler versions (such as Clang-11) support KCSAN, so a --kmake "CC=clang-11" or similar argument might also be necessary.
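
Putting those pieces together, a representative (but unverified) KCSAN invocation might look like this:

tools/testing/selftests/rcutorture/bin/kvm.sh --cpus 12 --configs "TREE03" \
    --kcsan --kmake "CC=clang-11"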

Selective Torturing

Sometimes enlisting debugging aid is the best approach, but other times greater selectivity is the best way forward.

Sometimes simply building a kernel is torture enough, especially when building with unusual Kconfig options (see the discussion of --kconfig above). In this case, specifying the --buildonly argument will build the kernels, but refrain from running them. This approach can also be useful for running multiple copies of the resulting binaries on multiple systems: You can use the --buildonly argument to build the kernels and qemu-cmd scripts, and then run these files on the other systems, given suitable adjustments to the qemu-cmd scripts.

Other times it is useful to torture some specific portion of RCU. For example, one wishing to vent their ire solely on expedited grace periods could add --bootargs "rcutorture.gp_exp=1" to the kvm.sh command line. This argument causes rcutorture to run a stress test using only expedited RCU grace periods, which can be helpful when attempting to work out whether a too-short RCU grace period is due to a bug in the normal or the expedited grace-period code. Similarly, the callback-flooding aspects of rcutorture stress testing can be disabled using --bootargs "rcutorture.fwd_progress=0". It is possible to specify both in one run using --bootargs "rcutorture.gp_exp=1 rcutorture.fwd_progress=0".

Enlisting Debugging Assistance

Still other times, it is helpful to enable event tracing. For example, if the rcu_barrier() event traces are of interest, use --bootargs "trace_event=rcu:rcu_barrier". The trace buffer will be dumped automatically upon specific rcutorture failures. If the failure mode is instead a non-rcutorture-specific oops, use this: --bootargs "trace_event=rcu:rcu_barrier ftrace_dump_on_oops". If it is also necessary to dump the trace buffers on warnings, a (heavy handed) way to achieve this is to use --bootargs "trace_event=rcu:rcu_barrier ftrace_dump_on_oops panic_on_warn".

If you have many tens of rcutorture instances that all decide to flush their trace buffers at about the same time, the combined flushing operations can take considerable time, especially if the underlying system features rotating rust. If only the most recent activity is of interest, specifying a small trace buffer can help: --bootargs "trace_event=rcu:rcu_barrier ftrace_dump_on_oops panic_on_warn trace_buf_size=3k".

If only the oopsing/warning CPU's traces are relevant, the orig_cpu modifier can be helpful: --bootargs "trace_event=rcu:rcu_barrier ftrace_dump_on_oops=orig_cpu panic_on_warn trace_buf_size=3k".

More information on tracing can be found in Documentation/trace, and more on kernel boot parameters in general may be found in kernel-parameters.txt. Given the multi-thousand-line heft of this latter, there is clearly great scope for tweaking your torturing of RCU!

Why Stop at Torturing RCU?

After all, locking can sometimes be almost as annoying as RCU. And it is possible to torture locking:

tools/testing/selftests/rcutorture/bin/kvm.sh --allcpus --torture lock

This locktorture stress test does not get as much love and attention as does rcutorture, but it is at least a start.

There are also a couple of RCU performance tests and an upcoming smp_call_function*() stress test that use this same torture-test infrastructure. Please note that the details of the summary output vary from test to test.

In short, you can do some serious torturing of RCU, and much else besides! So show them no mercy!!! :-)

The Old Man and His Smartphone, 2020 Spring Break Episode

Complete draining of my smartphone's battery was commonplace while working from home. After all, given laptops and browsers, to say nothing of full-sized keyboards, I rarely used it. So I started doing my daily web browsing on my smartphone at breakfast, thus forcing a daily battery-level check.

This approach has been working, except that it is quite painful to print out articles my wife might be interested in. My current approach is to email the URL to myself, which is a surprisingly ornate process:

  1. Copy the URL.
  2. Start an email.
  3. Click on the triple dot at the upper right-hand side of the keyboard.
  4. Select the text-box icon at the right.
  5. Select “paste” from the resulting menu, then hit “send”.
  6. Read email on a laptop, open the URL, and print it.

The addition of a control key to the virtual keyboard might be useful to those of us otherwise wondering “How on earth do I type control-V???” Or I could take the time required to figure out how to print directly from my smartphone. But I would not recommend holding your breath waiting.

What with COVID-19 and the associated lockdowns, I have not used my smartphone's location services much, helpful though they were in the pre-COVID-19 days. For example, prior to a business trip to Prague, my wife let me know that she wanted additional copies of a particular local craft item that I had brought back on a prior trip almost ten years ago. Unfortunately, I could not remember the name of the shop, nor were the usual search engines any help at all.

Fortunately, some passers-by mentioned Wenceslas Square, which triggered a vague memory. So I used my smartphone to go to Wenceslas Square, and from there used the old-school approach of wandering randomly. Suddenly, I knew where I was, and sure enough, when I turned to my right, there was the shop! And the craft item was even in the same place within the shop that it had been on my earlier visit!

Of course, the minute I completed my purchase, my smartphone and laptops were full of advertisements for that craft item, including listing any number of additional shops offering it for sale. Therefore, although it is quite clear that the “A” in “AI” stands for “artificial”, I am forced to dispute the usual interpretation of the “I”.

My smartphone also took the liberty of autocomposing its first-ever reply to an email, quite likely because I failed to power it off before laying it down on its screen on a not-quite-flat surface. The resulting email was heavy on the letter “b” and contained lots of emo and angst, perhaps because the word “bad” occurred quite frequently. This draft also included an instance of the name “Bob Dylan”. I will leave any discussion of the morals and immorals of this particular AI choice to the great man's many fans and detractors.

I can only be thankful that the phone left its composition in draft mode, as opposed to actually sending it. In fact, I was so shocked by the possibility that it could well have sent it that I immediately deleted it. Of course, now I wish that I had kept it so I could show it off. As they say, haste makes waste!

However, I did find the following prior effort in my “Drafts” folder. This effort is nowhere near as entertaining as the one I so hastily deleted, but it does give some of the flavor of my smartphone's approach to email autocomposition:
But there is no doubt about the way the bldg will do it in this smartphone a while now that the company is still in its position as the world's most profitable competitor to its android smartphone and its android phone in its own right and will continue its search to make its way through its mobile app market and its customers will have to pay attention for their products to the web and other apps for their customers by clicking the button and using a new app BBC to help you get your phone back in your browser and your browser based phone number and the number one you can click to see you in your browser or the other apps that are compatible or the app you use for your browser or a computer and both have or Google and you will have a lot more to say than the one that is not the only way you could not be in a good mood to get the most of your life and the rest you are in for the next two days and the rest is not a bad for you are you in a good place and the best thing you could be doing to help your family and your friends will have a sense that they can help them get their jobs done in a way that's what you are going through with your work in a good place to work and make them work better and better for their job than you can in a long term way and you are a better parent and you are not going through the process and the process is going through a good job of thinking that you're not a teacher and a teacher who believes that the best thing to be is that your browser will have the number and access of the app you can get to the web and the app is available to users for a while to be sure you can use the internet for a while you are still in a position where I have a few more questions to ask you about being able and the app you have on your computer will have to do not use it as an app you have for a

And so I have one small request. Could those of you wishing for digital assistants please consider the option of being more careful what you wish for?

My smartphone also came in handy during a power outage: The cell towers apparently had backup generators, and my smartphone's battery, though low, was not completely drained. I posted noting my situation and battery state online, which in turn prompted a proud Tesla owner to call attention to the several hundred kilowatt-hours of electrical energy stored in his driveway. Unfortunately for me, his driveway was located the better part of a thousand miles away. However, it did remind me of the single kilowatt hour stored in my conventional automobile's lead-acid battery. But fortunately, the power outage lasted only a few hours, so my smartphone's much smaller battery was sufficient to the cause.

As you would expect, I checked my smartphone's specifications when I first received it and learned that it has eight CPUs, which is not unusual for today's smartphones.

But it only recently occurred to me that the early 1990s DYNIX/ptx system on which I developed RCU had only four CPUs.

Go figure!!!

Confessions of a Recovering Proprietary Programmer, Part XVII

One of the gatherings I attended last year featured a young man asking if anyone felt comfortable doing git rebase “without adult supervision”, as he put it. He seemed as surprised to see anyone answer in the affirmative as I was to see only a very few people so answer. This seems to me to be a suboptimal state of affairs, and thus this post describes how you, too, can learn to become comfortable doing git rebase “without adult supervision”.

Use gitk to See What You Are Doing

The first trick is to be able to see what you are doing while you are doing it. This is nothing particularly obscure or new, and is in fact why screen editors are preferred over line-oriented editors (remember ed?). And gitk displays your commits and shows how they are connected, including any branching and merging. The current commit is evident (yellow rather than blue circle) as is the current branch, if any (branch name in bold font). As with screen editors, this display helps avoid inevitable errors stemming from you and git disagreeing on the state of the repository. Such disagreements were especially common when I was first learning git. Given that git always prevailed in these sorts of disagreements, I heartily recommend using gitk even when you are restricting yourself to the less advanced git commands.

Note that gitk opens a new window, which may not work in all environments. In such cases, the --graph --pretty=oneline arguments to the git log command will give you a static ASCII-art approximation of the gitk display. As such, this approach is similar to using a line-oriented editor, but printing out the local lines every so often. In other words, it is better than nothing, but not as good as might be hoped for.

Fortunately, one of my colleagues pointed me at tig, which provides a dynamic ASCII-art display of the selected commits. This is again not as good as gitk, but it is probably as good as it gets in a text-only environment.

These tools do have their limits, and other techniques are required if you are actively rearranging more than a few hundred commits. If you are in that situation, you should look into the workflows used by high-level maintainers or by the -stable maintainer, who commonly wrangle many hundreds or even thousands of commits. Extreme numbers of commits will of course require significant automation, and many large-scale maintainers do in fact support their workflows with elaborate scripting.

Doing advanced git work without being able to see what you are doing is about as much a recipe for success as chopping wood in the dark. So do yourself a favor and use tools that allow you to see what you are doing!

Make Sure You Can Get Back To Where You Started

A common git rebase horror story involves a mistake made while rebasing, but with the git garbage collector erasing the starting point, so that there is no going back. As the old saying goes, “to err is human”, so such stories are all too plausible. But it is dead simple to give this horror story a happy ending: Simply create a branch at your starting point before doing git rebase:

git branch starting-point   # record the pre-rebase state
git rebase -i --onto destination-commit base-commit rebase-branch
# The rebased commits are broken, perhaps misresolved conflicts?
git checkout starting-point # or maybe: git checkout -B rebase-branch starting-point


Alternatively, if you are using git in a distributed environment, you can push all your changes to the master repository before trying the unfamiliar command. Then if things go wrong, you can simply destroy your copy, re-clone the repository, and start over.

Whichever approach you choose, the benefit of ensuring that you can return to your starting point is the ability to repeat the git rebase as many times as needed to arrive at the desired result. Sort of like playing a video game, when you think about it.

Practice on an Experimental Repository

On-the-job training can be a wonderful thing, but sometimes it is better to create an experimental repository for the sole purpose of practicing your git commands. But sometimes, you need a repository with lots of commits to provide a realistic environment for your practice session. In that case, it might be worthwhile to clone another copy of your working repository and do your practicing there. After all, you can always remove the repository after you have finished practicing.

And there are some commands that have such far-reaching effects that I always do a dry run on a sacrificial repository before trying them in real life. The poster boy for such a command is git filter-branch, which has impressive power for both good and evil.
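
For example, a throwaway practice session might look something like the following, in which the paths and the filter-branch arguments are of course placeholders:

git clone ~/linux /tmp/scratch-linux      # sacrificial copy of the real repository
cd /tmp/scratch-linux
git filter-branch --msg-filter 'cat' HEAD~10..HEAD   # identity filter: practice the mechanics, change nothing
cd - && rm -rf /tmp/scratch-linux         # discard the experiment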

 

In summary, to use advanced git commands without adult supervision, first make sure that you can see what you are doing, then make sure that you can get back to where you started, and finally, practice makes perfect!

The Old Man and His Macbook

I received a MacBook at the same time I received the smartphone. This was not my first encounter with a Mac; in fact, I long ago had the privilege of trying out a Lisa. I occasionally made use of the original Macintosh (perhaps most notably to prepare a resume when applying for a job at Sequent), and even briefly owned an iMac, purchased to run some educational software for my children. But that iMac was my last close contact with the Macintosh line, some 20 years before the MacBook: Since then, I have used Windows and, more recently, Linux.

So how does the MacBook compare? Let's start with some positives:

  • Small and light package, especially when compared to my rcutorture-capable ThinkPad. On the other hand, the MacBook would not be particularly useful for running rcutorture.
  • Much of the familiar UNIX userspace is right at my fingertips.
  • The GUI remembers which windows were on the external display, and restores them when plugged back into that display.
  • Automatically powers off when not in use, but resumes where you left off.
  • Most (maybe all) applications resume where they left off after rebooting for an upgrade, which was an extremely pleasant surprise.
  • Wireless works seamlessly.


There are of course some annoyances:

  • My typing speed and accuracy took a serious hit. Upon closer inspection, this turned out to be due to the keyboard being smaller than standard. I have no idea why this “interesting” design choice was made, given that there appears to be ample room for full-sized keys. Where possible, I connect a full-sized keyboard, thus restoring full-speed typing.
  • I detest trackpads, but that is the only built-in mouse available, which defeats my usual strategy of disabling them. As with the keyboard, where possible I connect a full-sized mouse. In pleasing contrast to the earlier Macs, this MacBook understands that a mouse can have more than one button.
  • I found myself detesting the MacBook trackpad even more than usual, in part because brushing up against it can result in obnoxious pop-up windows offering to sell me songs and other products related to RCU. I disabled this advertising “feature” only to find that it was now putting up obnoxious pop-up windows offering to look up RCU-related words in the dictionary. In both cases, these pop-up windows grab focus, which makes them especially unfriendly to touch-typists. Again, the solution is to attach a full-sized keyboard and standard mouse. Perhaps my next trip will motivate me to disable this misfeature, but who knows what other misfeature lies hidden behind it?
  • Connectivity. You want to connect to video? A memory stick? Ethernet? You will need a special adapter.
  • Command key instead of control key for cut-and-paste. Nor can I reasonably remap the keys, at least not if I want to continue using control-C to interrupt unruly UNIX-style applications. On the other hand, I freely admit that Linux's rather anarchic approach to paste buffers is at best an acquired taste.
  • The control key appears only on the left-hand side of the keyboard, which is also unfriendly to touch-typists.
  • Multiple workspaces are a bit spooky. They sometimes change order, or maybe I am accidentally hitting some key combination that moves them. Thankfully, it is very easy to move them where you want them: Control-uparrow, then drag and drop with the mouse.
  • I tried porting perfbook, but TexLive took forever to install. I ran out of patience long before it ran out of whatever it was downloading.


Overall impression? It is yet another laptop, with its own advantages, quirks, odd corners, and downsides. I can see how people who grew up on the MacBook and who use nothing else could grow to love it passionately. But switching back and forth between MacBook and Linux is a bit jarring, though of course MacBook and Linux have much more in common than did the five different systems I switched back and forth between in the late 1970s.

My current plan is to stick with it for a year (nine months left!), and decide where to go from there. I might continue sticking with it, or I might try moving to Linux. We will see!

Other weighty matters

I used to be one of those disgusting people who could eat whatever he wanted, whenever he wanted, and as much as he wanted—and not gain weight.

In fact, towards the end of my teen years, I often grew very tired of eating. You see, what with all my running and growing, in order to maintain weight I had to eat until I felt nauseous. I would feel overstuffed for about 30 minutes and then I would feel fine for about two hours. Then I would be hungry again. In retrospect, perhaps I should have adopted hobbit-like eating habits, but then again, six meals a day does not mesh well with school and workplace schedules, to say nothing of with family traditions.

Once I stopped growing in my early 20s, I was able to eat more normally. Nevertheless, I rarely felt full. In fact, on one of those rare occasions when I did profess a feeling of fullness, my friends not only demanded that I give it to them in writing, but also that I sign and date the resulting document. This document was rendered somewhat less than fully official due to its being written on a whiteboard.

And even by age 40, eating what most would consider to be a normal diet caused my weight to drop dramatically and abruptly.

However, my metabolism continued to slow down, and my body's ability to tolerate vigorous exercise waned as well. But these changes took place slowly, and so the number on the scale crept up almost imperceptibly.

But so what if I am carrying a little extra weight? Why should I worry?

Because I have a goal: Should I reach age 80, I would very much like to walk under my own power. And it doesn't take great powers of observation to conclude that carrying extra weight is not consistent with that goal. Therefore, I must pay close attention to the scale.

But life flowed quickly, so I continued failing to pay attention to the scale, at least until a visit to an airport in Florida. After passing through one of the full-body scanners, I was called out for a full-body search. A young man patted me down quite thoroughly, but wasn't able to find whatever it was that he was looking for. He called in a more experienced colleague, who quickly determined that what had appeared to be an explosive device under my shirt was instead an embarrassingly thick layer of body fat. And yes, I did take entirely too much satisfaction from the fact that he chose to dress down his less-experienced colleague, but I could no longer deny that I was a good 25-30 pounds overweight. And in the poor guy's defense, the energy content of that portion of my body fat really did rival that of a small bomb. And, more to the point, the sheer mass of that fat was in no way consistent with my goal to be able to walk under my own power at age 80.

So let that be a lesson to you. If you refuse to take the hint from your bathroom scale, you might well find yourself instead taking it from the United States of America's Transportation Security Administration.

Accepting the fact that I was overweight was one thing. Actually doing something about it was quite another. You see, my body had become a card-carrying member of House Stark, complete with their slogan: “Winter is coming.” And my body is wise in the ways of winter. It knows not only that winter is coming, but also that food will be hard to come by, especially given my slowing reflexes and decreasing agility. Now, my body has never actually seen such a winter, but countless generations of natural selection have hammered a deep and abiding anticipation of such winters into my very DNA. Furthermore, my body knows exactly one way to deal with such a winter, and that is to eat well while the eating is good.

However, I have thus far had the privilege of living in a time and place where the eating is always good and where winter never comes, at least not the fearsome winters that my body is fanatically motivated to prepare for.

This line of thought reminded me of a piece written long ago by the late Isaac Asimov, in which he suggested that we should stop eating before we feel full. (Shortly after writing this, an acquaintance is said to have pointed out that Asimov could stand to lose some weight, and Asimov is said to have reacted by re-reading his own writing and then successfully implementing its recommendation.) The fact that I now weighed in at more than 210 pounds provided additional motivation.

With much effort, I was able to lose more than ten pounds, but then my weight crept back up again. I was able to keep my weight to about 205, and there it remained for some time.

At least, there it remained until I lost more than ten pounds due to illness. I figured that since I had paid the price of the illness, I owed it to myself to take full advantage of the resulting weight loss. Over a period of some months, I managed to get down to 190 pounds, which was a great improvement over 210, but significantly heavier than my 180-pound target weight.

But my weight remained stubbornly fixed at about 190 for some months.

Then I remembered the control systems class I took decades ago and realized that my body and I comprised a control system designed to maintain my weight at 190. You see, my body wanted a good fifty pounds of fat to give me a good chance of surviving the food-free winter that it knew was coming. So, yes, I wanted my weight to be 180. But only when the scale read 190 or more would I panic and take drastic action, such as fasting for a day, inspired by several colleagues' lifestyle fasts. Below 190, I would eat normally, that is, I would completely give in to my body's insistence that I gain weight.

As usual, the solution was simple but difficult to implement. I “simply” slowly decreased my panic point from 190 downwards, one pound at a time.

One of the ways that my body convinces me to overeat is through feelings of anxiety. “If I miss this meal, bad things will happen!!!” However, it is more difficult for my body to convince me that missing a meal would be a disaster if I have recently fasted. Therefore, fasting turned out to be an important component of my weight-loss regimen. A fast might mean just skipping breakfast, it might mean skipping both breakfast and lunch, or it might be a 24-hour fast. But note that a 24-hour fast skips first dinner, then breakfast, and finally lunch. Skipping breakfast, lunch, and then dinner results in more than 30 hours of fasting, which seems a bit excessive.

Of course, my body is also skilled at exploiting any opportunity for impulse eating, and I must confess that I do not yet consistently beat it at this game.

Exercise continues to be important, but it also introduces some complications. You see, exercise is inherently damaging to muscles. The strengthening effects of exercise are not due to the exercise itself, but rather to the body's efforts to repair the damage and then some. Therefore, in the 24 hours or so after exercise, my muscles suffer significant inflammation due to this damage, which results in a pound or two of added water weight (but note that everyone's body is different, so your mileage may vary). My body is not stupid, and so it quickly figured out that one of the consequences of a heavy workout was reduced rations the next day. It therefore produced all sorts of reasons why a heavy workout would be a bad idea, and with a significant rate of success.

So I allow myself an extra pound the day after a heavy workout. This way my body enjoys the exercise and gets to indulge the following day. Win-win! ;-)

There are also some foods that result in added water weight, with corned beef, ham, and bacon being prominent among them. The amount of water weight seems to vary based on I know not what, but sometimes ranges up to three pounds. I have not yet worked out exactly what to do about this, but one strategy might be to eat these types of food only on the day of a heavy workout. Another strategy would be to avoid them completely, but that is crazy talk, especially in the case of bacon.

So after two years, I have gotten down to 180, and stayed there for several months. What does the future hold?

Sadly, it is hard to say. In my case it appears that something like 90% of the effort required to lose weight is required to keep that weight off. So if you really do want to know what the future holds, all I can say is “Ask me in the future.”

But the difficulty of keeping weight off should come as no surprise.

After all, my body is still acutely aware that winter is coming!

Parallel Programming: December 2019 Update

There is a new release of Is Parallel Programming Hard, And, If So, What Can You Do About It?.

This release features a number of formatting and build-system improvements by the indefatigable Akira Yokosawa. On the formatting side, we have listings automatically generated from source code, clever references, selective PDF hyperlink highlighting, and a final settling of the old after-period one-space/two-space debate by mandating newline instead. On the build side, we improved checks for incompatible packages, SyncTeX database file generation (instigated by Balbir Singh), better identification of PDFs, build notes for recent Fedora releases, fixes for some multiple-figure page issues, improved font handling, and a2ping workarounds for the ever-troublesome Ghostscript. In addition, the .bib file format was dragged kicking and screaming out of the 1980s, prompted by Stamatis Karnouskos. The new format is said to be more compatible with modern .bib-file tooling.

On the content side, the “Hardware and its Habits”, “Tools of the Trade”, “Locking”, “Deferred Processing”, “Data Structures”, and “Formal Verification” chapters received some much needed attention, the latter by Akira, who also updated the “Shared-Variable Shenanigans” section based on a recent LWN article. SeongJae Park, Stamatis, and Zhang Kai fixed a large quantity of typos and addressed numerous other issues. There is now a full complement of top-level section epigraphs, and there are a few scalability results up to 420 CPUs, courtesy of a system provided by my new employer.

On the code side, there have been a number of bug fixes and updates from ACCESS_ONCE() to READ_ONCE() or WRITE_ONCE(), with significant contributions from Akira, Junchang Wang, and Slavomir Kaslev.

A full list of the changes since the previous release may be found here, and as always, git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git will be updated in real time.

Exit Libris, Take Two

Still the same number of bookshelves, and although I do have a smartphone, I have not yet succumbed to the ereader habit. So some books must go!


  • Books about science and computing, in some cases rather loosely speaking:

    • “The Rocks Don't Lie”, David R. Montgomery. Great story of how geologists spent a great many years rediscovering the second century's received wisdom that the Book of Genesis should not be given a literal interpretation, specifically the part regarding Noah's Flood. Only to spend the rest of their lives resisting J. Harlen Bretz's work on the catastrophic floods that shaped the Columbia Gorge. The book also covers a number of other suspected catastrophic floods, showing how science sometimes catches up with folklore. Well worth a read, but discarded in favor of a biography focusing on J. Harlen Bretz. Which is around here somewhere...
    • “This Book Warps Space and Time”, Norman Sperling. Nice collection of science-related humor. Of course, they say that every book warps space and time.
    • “Scarcity: The True Cost of not Having Enough”, Sendhil Mullainathan and Eldar Shafir. Not a bad book for its genre, for example, covering more than mere money. Interesting proposals, but less validation of the proposals than one might hope. (Yes, I do write and validate software. Why do you ask?)
    • “the smartest kids in the world, and how they got that way”, amanda ripley [sic]. Classic case of generalizing from too little data taken over too short a time. But kudos to a book about education with a punctuation-free all-lowercase front cover, I suppose...
    • “Thinking, Fast and Slow”, Daniel Kahneman. Classic book, well worth reading, but it takes up a lot of space on a shelf.
    • “The Information, A Theory, A History, A Flood”, James Gleick. Ditto.
    • “The Human-Computer Interaction Handbook”, Julie A. Jacko and Andrew Sears. This is the textbook from the last university class I took back in 2004. I have kept almost all of my textbooks, but this one is quite large, is a collection of independent papers (most of which are not exactly timeless), and way outside my field.
    • “The Two-Mile Time Machine”, Richard B. Alley. Account of the learnings from ice cores collected in Greenland, whose two-mile-thick ice sheets give the book its name.
    • “Dirt: The Erosion of Civilizations”, David R. Montgomery. If you didn't grow up in a farming community, read this book so you can learn that dirt does in fact matter a great deal.
    • “Advanced Topics in Broadband ATM Networks”, Ender Ayanoglu and Malathi Veeraraghanavan. Yes, Asynchronous Transfer Mode networks were going to take over the entirety of the computing world, and anyone who said otherwise just wasn't with it. (Ender looked too old to have been named after the protagonist of “Ender's Game” so your guess is as good as mine.)
    • “Recent Advances in the Algorithmic Analysis of Queues”, David M. Lucantoni. I had been hoping to apply this to my mid-90s analysis work, but no joy. On the other hand, if I remember correctly, this was the session in which an academic reproached me for understanding the material despite being from industry rather than academia, a situation that she felt was totally reprehensible and not to be tolerated. Philistine that I am, I still feel no shame. ;-)
    • “The Principia”, Isaac Newton. A great man, but there are more accessible sources of this information. Besides, the copy I have is not the original text, but rather an English translation.

  • Related to my recent change of employer:

    • “Roget's Thesaurus in Dictionary Form”, C.O. Sylvester Mawson. Duplicate, and largely obsoleted by the world wide web.
    • “Webster's New World Dictionary of the American Language (College Edition)”. Ditto. This one is only a year older than I am, in contrast with the thesaurus, which is more than 20 years older than I am.
    • “Guide to LaTeX, Fourth Edition”, Helmut Kopka and Patrick W. Daly. Ditto, though much younger.
    • “Pattern Languages of Program Design, Book 2”, Edited by John M. Vlissides, James O. Coplien, and Norman L. Kerth. Ditto.
    • “Pattern-Oriented Software Architecture Volume 2: Patterns for Concurrent and Networked Objects”, Douglas Schmidt, Michael Stal, Hans Rohnert, and Frank Buschmann. Ditto.
    • “Strengths Finder 2.0”, Tom Rath. Ditto.
    • Books on IBM: “IBM Redux”, Doug Garr; “Saving Big Blue”, Robert Slater; “Who's Afraid of Big Blue”, Regis McKenna; “After the Merger”, Max M. Habeck, Fritz Kroeger, and Michael R. Traem. Worth a read, but not quite of as much interest as they were previously. But I am keeping Louis Gerstner's classic “Who Says Elephants Can't Dance?”

  • Self-help books, in some cases very loosely speaking:

    • “Getting to Yes: Negotiating Agreement Without Giving In”, by Roger Fisher and William Ury. A classic, but I somehow ended up with two of them, and both at home.
    • “How to Make People Think You're Normal”, Ben Goode.
    • “Geezerhood: What to expect from life now that you're as old as dirt”, Ben Goode.
    • “So You Think You Can ’Geezer’: Instructions for becoming the old coot you have always dreamed of”, Ben Goode.
    • “The Challenger Customer”, Brent Adamson, Matthew Dixon, Pat Spenner, and Nick Toman. Good insights on how tough customers can help you get to the next level and how to work with them, but numerous alternative sources.
    • “The Innovator's Solution”, Clayton M. Christensen and Michael E. Raynor. Not bad, but keeping “The Innovator's Dilemma” instead.

  • Recent USA military writings:

    • “Back in Action”, Captain David Rozelle
    • “Imperial Grunts”, Robert D. Kaplan
    • “Shadow War”, Richard Miniter
    • “Imperial Hubris”, Anonymous
    • “American Heroes”, Oliver North

    A good set of widely ranging opinions, but I am keeping David Kilcullen's series. Kilcullen was actually there (as were Rozelle and, to some extent, Kaplan) and has much more experience and a broader perspective than the above five. Yes, Anonymous is unknown, but that book was published in 2004, as compared to Kilcullen's series, which spans the Bush and Obama administrations. You get to decide whether Kilcullen's being Australian is a plus or a minus. Choose wisely! ;-)

  • Brain teasers:

    • “The Riddle of Scheherazade and Other Amazing Puzzles”, Raymond Smullyan
    • “Lateral Thinking: Creativity Step by Step”, Edward de Bono
    • “The Great IQ Challenge”, Philip J. Carter and Ken A. Russell

  • Social commentary:

    • “A Darwinian Left: Politics, Evolution, and Cooperation”, Peter Singer
    • “Rigged”, Ben Mezrich
    • “Braiding Sweetgrass: Indigenous Wisdom, Scientific Knowledge, and the Teaching of Plants”, Robin Wall Kimmerer
    • “Injustice”, J. Christian Adams
    • “The Intimidation Game”, Kimberley Strassel


The Old Man and His Smartphone, 2019 Holiday Season Episode

I used my smartphone as a camera very early on, but the need to log in made it less than attractive for snapshots. Except that I saw some of my colleagues whip out their smartphones and immediately take photos. They kindly let me in on the secret: Double-clicking the power button puts the phone directly into camera mode. This resulted in a substantial uptick in my smartphone-as-camera usage. And the camera is astonishingly good by decade-old digital-camera standards, to say nothing of old-school 35mm film standards.

I also learned how to make the camera refrain from mirror-imaging selfies, but this proved hard to get right. The selfie looks wrong when immediately viewed if it is not mirror imaged! I eventually positioned myself to include some text in the selfie in order to reliably verify proper orientation.

Those who know me will be amused to hear that I printed a map the other day, just from force of habit. But in the event, I forgot to bring not only both the map and the smartphone, but also the presents that I was supposed to be transporting. In pleasant contrast to a memorable prior year, I remembered the presents before crossing the Columbia, which was (sort of) in time to return home to fetch them. I didn't bother with either the map or the smartphone, but reached my destination nevertheless. Cautionary tales notwithstanding, sometimes you just have to trust the old neural net's direction-finding capabilities. (Or at least that is what I keep telling myself!)

I also joined the non-exclusive group who uses a smartphone to photograph whiteboards prior to erasing them. I still have not succumbed to the food-photography habit, though. Taking a selfie with the non-selfie lens through a mirror is possible, but surprisingly challenging.

I have done a bit of ride-sharing, and the location-sharing features are quite helpful when meeting someone—no need to agree on a unique landmark, only to find the hard way that said landmark is not all that unique!

The smartphone is surprisingly useful for browsing the web while on the go, with any annoyances over the small format heavily outweighed by the ability to start and stop browsing very quickly. But I could not help but feel a pang of jealousy while watching a better equipped smartphone user type using swiping motions rather than a finger-at-a-time approach. Of course, I could not help but try it. Imagine my delight to learn that the swiping-motion approach was not some add-on extra, but instead standard! Swiping typing is not a replacement for a full-sized keyboard, but it is a huge improvement over finger-at-a-time typing, to say nothing of my old multi-press flip phone.

Recent foreign travel required careful prioritization and scheduling of my sole international power adapter among the three devices needing it. But my new USB-A-to-USB-C adapter allows me to charge my smartphone from my heavy-duty rcutorture-capable ThinkPad, albeit significantly more slowly than via AC adapter, and even more slowly when the laptop is powered off. Especially when I am actively using the smartphone. To my surprise, I can also charge my MacBook from my ThinkPad using this same adapter—but only when the MacBook is powered off. If the MacBook is running, all this does is extend the MacBook's battery life. Which admittedly might still be quite useful.

All in all, it looks like I can get by with just the one international AC adapter. This is a good thing, especially considering how bulky those things are!

My smartphone's notifications are still a bit annoying, though I have gotten it a bit better trained to only bother me when it is important. And yes, I have missed a few important notifications!!! When using my laptop, which also receives all these notifications, my de facto strategy has been to completely ignore the smartphone. Which more than once has had the unintended consequence of completely draining my smartphone's battery. The first time this happened was quite disconcerting because it appeared that I had bricked my new smartphone. Thankfully, a quick web search turned up the unintuitive trick of simultaneously depressing the volume-down and power buttons for ten seconds.

But if things go as they usually do, this two-button salute will soon become all too natural!