The refereed talk deadline for Linux Plumbers Conference is only a few weeks off, September 1, 2016 at 11:59PM CET. So there is still some time to get your proposals in, but time is growing short.

Note that this year's Plumbers is co-located with Linux Kernel Summit rather than LinuxCon, so the refereed track is all Plumbers this year. We are therefore looking forward to seeing your all-Plumbers refereed-track submission!

As you might have noticed, earlybird registration has closed, but normal-rate registration will be opening up on August 27th—however, accepted refereed speaking proposals will receive a free pass.

The conference itself is in Santa Fe, New Mexico on November 1-4, 2016. Looking forward to seeing you there!
Although trusted platform modules (TPMs) have been the subject of some controversy over the years, it is quite likely that they have important roles to play in preventing firmware-based attacks, protecting user keys, and so on. However, some work is required to enable TPMs to successfully play these roles, including getting TPM support into bootloaders, securely distributing known-good hashes, and providing robust and repeatable handling of upgrades.


In short, given the ever-more-hostile environments that our systems must operate in, it seems quite likely that much help will be needed, including from TPMs. For more details, see the TPMs Microconference wiki page.

We hope to see you there!
It might well be that wireless networking recently made the transition from a ubiquitous networking technology to the dominant networking technology, at least from the viewpoint of end-user devices. Part of this trend is the use of wireless in automobiles, and this workshop will look at Wireless Access in Vehicular Environments (WAVE), also known as IEEE 802.11p. In addition, the bufferbloat effort is starting to focus on the more difficult wireless environment, and to that end, this workshop will discuss FQ-CoDel integration, testing, and development. As usual, the workshop will encompass the full 802.11 stack, not just the kernel portions, and therefore wpa_supplicant will also be on the agenda.

Please join us for a timely and important discussion!

Parallel Programming: July 2016 Update

This release of Is Parallel Programming Hard, And, If So, What Can You Do About It? has some really nice updates:

  1. PDFs now have hyperlinks from each quick quiz to its answer, and each answer has hyperlinks to its quick quiz, courtesy of Paolo Bonzini with refinements by Akira Yokosawa.
  2. People building from the git archive will see some excellent improvements in the build system, which now does a much better job of determining whether or not a rebuild is necessary, and is also much better at displaying the relevant LaTeX errors in case of build failure. These changes were courtesy of Akira Yokosawa, who also greatly improved formatting by introducing a number of recent LaTeX features and capabilities.
  3. The deferred-processing chapter now has a running example that more clearly shows the performance tradeoffs. The reference-counting section within this chapter now avoids forward references, and the information about combining reference counting with other techniques now appears in the later putting-it-all-together chapter.
  4. The code samples were updated to eliminate a number of compiler warnings.
  5. SeongJae Park's translation efforts resulted in a number of fixes to the English version's spelling and grammar, and Akira made substantial contributions in this area as well.
  6. Andreea-Cristina Bernat, Andrew Donnellan, Balbir Singh, Dave Willmer, Dominik Dingel, Emilio G. Cota, and Namhyung Kim also provided many much-appreciated contributions.
As always, git://git.kernel.org/pub/scm/linux/kernel/git/paulmck/perfbook.git will be updated in real time.

Stupid RCU Tricks: An Early-1970s Example

Up to now, I have been calling out Kung and Lehman's classic 1980 “Concurrent Manipulation of Binary Search Trees” as the oldest mention of something vaguely resembling RCU.

However, while looking into the history of reference counting, I found Weizenbaum's positively antique 1963 “Symmetric List Processor (SLIP)”, which describes a list-processing library, written in FORTRAN, no less. One of its features was storage reclamation based on reference counting, in which newly reclaimed list items are added to the end of SLIP's list of available space.

And, on page 413 of his 1973 “The Art of Computer Programming: Fundamental Algorithms”, none other than Donald Knuth points out that this add-at-end implementation means that “incorrect programs run correctly for awhile”. Here, “incorrect programs” is presumably referring to readers traversing SLIP lists while failing to increment the needed SLIP reference counters.

So it is that the earliest known mention of RCU-based algorithms dismisses them all as buggy. And rightfully so, given that SLIP has no notion of anything like a grace period. Thus, Kung and Lehman remain the first known implementers of something vaguely resembling RCU—but no longer the first mention!
Suppose a Linux-kernel task registers a pair of RCU callbacks, as follows:

call_rcu(&p->rcu, myfunc);
smp_mb();
call_rcu(&q->rcu, myfunc);

Given that these two callbacks are guaranteed to be registered in order,
are they also guaranteed to be invoked in order?
Android continues to find interesting new applications and problems to solve, both within and outside the mobile arena. Mainlining continues to be an area of focus, as do a number of areas of core Android functionality, including the kernel. Other topics include efficient operation on big.LITTLE systems, support for Hikey in AOSP (and multi-device support in general), and the upcoming migration to Clang for Android builds.

Android continues to be a very exciting and dynamic project, with the above topics merely scratching the surface. For more details, see the Android/Mobile Microconference wiki page.
After taking a break in 2015, Tracing is back at Plumbers this year! Tracing is heavily used throughout the Linux ecosystem, and provides an essential method for extracting information about the underlying code that is running on the system. Although tracing is simple in concept, effective usage and implementation can be quite involved.

Topics proposed for this year's event include new features in the BPF compiler collection, perf, and ftrace; visualization frameworks; large-scale tracing and distributed debugging; always-on analytics and monitoring; do-it-yourself tracing tools; and, last but not least, a kernel-tracing wishlist.

We hope to see you there!
This year will feature a four-fold deeper dive into checkpoint-restore technology, thanks to participation by people from a number of additional related projects! These are the OpenMPI message-passing library, Berkeley Lab Checkpoint/Restart (BLCR), and Distributed MultiThreaded CheckPointing (DMTCP) (not to be confused with TCP/IP), in addition to the Checkpoint/Restore in Userspace group that has participated in prior years.

Docker integration remains a hot topic, as is post-copy live migration, as well as testing/validation. As you might guess from the inclusion of people from BLCR and OpenMPI, checkpoint-restore for distributed workloads (rather than just single systems) is an area of interest.

Please join us for a timely and important discussion!
Testing, fuzzing, and other diagnostics have made the Linux ecosystem much more robust than in the past, but there are still embarrassing bugs. Furthermore, million-year bugs will be happening many times per day across Linux's huge installed base, so there is clearly need for even more aggressive validation.

The Testing and Fuzzing Microconference aims to significantly increase the aggression level of Linux-kernel validation, with discussions on tools and test suites including kselftest, syzkaller, trinity, mutation testing, and the 0day Test Robot. The effectiveness of these tools will be attested to by any of their victims, but we must further raise our game as the installed base of Linux continues to increase.

One additional way of raising the level of testing aggression is to document the various ABIs in machine-readable format, thus lowering the barrier to entry for new projects. Who knows? Perhaps Linux testing will be driven by artificial-intelligence techniques!

Join us for an important and spirited discussion!