From: Konstantin Belousov <kostikbel_at_gmail.com>
Subject: Re: Waiting for bufdaemon
Date: Mon, 8 Mar 2021 02:03:13 +0200

> On Sun, Mar 07, 2021 at 02:25:45PM -0600, Kyle Evans wrote:
>> I'm going to be pedantic here, but note that this isn't a tunable. You
>> cannot, AFAIK, influence the timecounter chosen with kenv; it has to
>> be set post-boot via sysctl, and if you're really unlucky it could be
>> that bufdaemon wedges before you manage to change it (though it seems
>> unlikely).
>>
>> I've had some success with setting it to ACPI-fast in sysctl.conf here.
> Right, I forgot that timecounter.hardware is not tunable, I thought that
> it was fixed.
>
> https://reviews.freebsd.org/D29122

I applied the patch from D29122 to 705d06b289e of main, did `make kernel`, and rebooted. Then I ran a test that consists of the following 6 steps.

1. Edit /boot/loader.conf.local
2. shutdown -r now
3. Log in as root
4. cd /usr/src
5. make -j 4 -s buildworld
6. shutdown -r now

Though I didn't intend it at first, I repeated this test 3 times, changing how /boot/loader.conf.local was edited at step 1, and I got results that were surprising and confusing, at least to me.

In the first trial I changed /boot/loader.conf.local so that it includes 'kern.timecounter.hardware=i8254'. The bufdaemon timeout didn't happen at step 6, which means adding the line works as a workaround for the problem.

Next I replaced 'kern.timecounter.hardware=i8254' with 'kern.timecounter.hardware=ACPI-fast' and ran the test again. As I wrote in my previous mail, 'ACPI-fast' is the initial value of kern.timecounter.hardware on the system in question, so I tried it for comparison, expecting the problem to happen. But the result was unexpected: the bufdaemon timeout didn't happen at step 6 in this case either.

So I tried yet another case: I removed 'kern.timecounter.hardware=ACPI-fast' from /boot/loader.conf.local and ran the test once more. In this case kern.timecounter.hardware isn't touched at all, so the result should have been the same as in the 2nd case. But it was different: the bufdaemon timeout happened at step 6.

The results surprised and confused me on the following 2 points.

(a) Setting the initial value (2nd case) should be the same as not touching it at all (3rd case). But they actually caused different results.
(b) The 2nd case should have caused the same result as a kernel without the patch. But it actually caused a different one.

I found the reason when I executed `sysctl kern.timecounter` and checked the output. If I do it with the unpatched kernel, I get the following result.
----------------------------------------------------------------------
yasu_at_rolling-vm-freebsd1[1005]% sysctl kern.timecounter
kern.timecounter.tsc_shift: 1
kern.timecounter.smp_tsc_adjust: 0
kern.timecounter.smp_tsc: 0
kern.timecounter.invariant_tsc: 1
kern.timecounter.fast_gettime: 1
kern.timecounter.tick: 1
kern.timecounter.choice: ACPI-fast(900) i8254(0) TSC-low(-100) dummy(-1000000)
kern.timecounter.hardware: ACPI-fast
kern.timecounter.alloweddeviation: 5
kern.timecounter.timehands_count: 2
kern.timecounter.stepwarnings: 0
kern.timecounter.tc.ACPI-fast.quality: 900
kern.timecounter.tc.ACPI-fast.frequency: 3579545
kern.timecounter.tc.ACPI-fast.counter: 1311282421
kern.timecounter.tc.ACPI-fast.mask: 4294967295
kern.timecounter.tc.i8254.quality: 0
kern.timecounter.tc.i8254.frequency: 1193182
kern.timecounter.tc.i8254.counter: 9931
kern.timecounter.tc.i8254.mask: 65535
kern.timecounter.tc.TSC-low.quality: -100
kern.timecounter.tc.TSC-low.frequency: 1500035122
kern.timecounter.tc.TSC-low.counter: 4026452692
kern.timecounter.tc.TSC-low.mask: 4294967295
yasu_at_rolling-vm-freebsd1[1006]%
----------------------------------------------------------------------

But if I do it with the patched kernel, I get a slightly different one.

----------------------------------------------------------------------
yasu_at_rolling-vm-freebsd1[1001]% sysctl kern.timecounter
kern.timecounter.tsc_shift: 1
kern.timecounter.smp_tsc_adjust: 0
kern.timecounter.smp_tsc: 1
kern.timecounter.invariant_tsc: 1
kern.timecounter.fast_gettime: 1
kern.timecounter.tick: 1
kern.timecounter.choice: ACPI-fast(900) i8254(0) TSC-low(1000) dummy(-1000000)
kern.timecounter.hardware: TSC-low
kern.timecounter.alloweddeviation: 5
kern.timecounter.timehands_count: 2
kern.timecounter.stepwarnings: 0
kern.timecounter.tc.ACPI-fast.quality: 900
kern.timecounter.tc.ACPI-fast.frequency: 3579545
kern.timecounter.tc.ACPI-fast.counter: 401131388
kern.timecounter.tc.ACPI-fast.mask: 4294967295
kern.timecounter.tc.i8254.quality: 0
kern.timecounter.tc.i8254.frequency: 1193182
kern.timecounter.tc.i8254.counter: 9975
kern.timecounter.tc.i8254.mask: 65535
kern.timecounter.tc.TSC-low.quality: 1000
kern.timecounter.tc.TSC-low.frequency: 1500035491
kern.timecounter.tc.TSC-low.counter: 581590125
kern.timecounter.tc.TSC-low.mask: 4294967295
yasu_at_rolling-vm-freebsd1[1002]%
----------------------------------------------------------------------

Comparing the two outputs, there are two notable differences.

The first is the value of kern.timecounter.hardware. With the unpatched kernel it is 'ACPI-fast', while with the patched kernel it is 'TSC-low'. This means the initial value is changed by applying the patch, and it explains why the results of the 2nd and 3rd cases are different.

The second is the value of kern.timecounter.tc.ACPI-fast.counter. With the unpatched kernel it is '1311282421' and with the patched one it is '401131388'. I don't know what this value means, but I guess the difference is the reason that the result of the 2nd case is different from that of the unpatched kernel.

So now I know why the unexpected results (a) and (b) happen. Furthermore, I confirmed that setting kern.timecounter.hardware to either 'i8254' or 'ACPI-fast' works as a workaround for the problem. That is good news anyway.
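For the record, here is a sketch of the two places where the timecounter can be pinned, based on this thread. Either setting is enough on its own; the values shown are just the ones tested or suggested here, and the loader.conf.local line only takes effect with the D29122 patch applied.

----------------------------------------------------------------------
# /etc/sysctl.conf: applied after boot via sysctl(8); works even with
# an unpatched kernel (what Kyle suggested, using ACPI-fast)
kern.timecounter.hardware=ACPI-fast

# /boot/loader.conf.local: read by the loader at boot; honored only
# once D29122 makes kern.timecounter.hardware a loader tunable
kern.timecounter.hardware=i8254
----------------------------------------------------------------------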
But still one question remains. Why have the values of kern.timecounter.hardware and kern.timecounter.tc.ACPI-fast.counter changed by applying the patch? My understanding is that the patch only makes kern.timecounter.hardware a loader tunable that can be configured with loader.conf.

Is it an unexpected side effect? Or is everything an expected result?

---
Yasuhiro Kimura