Re: PostgreSQL performance on FreeBSD

From: Adrian Chadd <adrian@freebsd.org>
Date: Fri, 3 Jun 2016 11:27:21 -0700
On 3 June 2016 at 10:55, Konstantin Belousov <kostikbel@gmail.com> wrote:
> On Fri, Jun 03, 2016 at 11:29:13AM -0600, Alan Somers wrote:
>> On Fri, Jun 3, 2016 at 11:26 AM, Konstantin Belousov
>> <kostikbel@gmail.com> wrote:
>> > On Fri, Jun 03, 2016 at 09:29:16AM -0600, Alan Somers wrote:
>> >> I notice that, with the exception of the VM_PHYSSEG_MAX change, these
>> >> patches never made it into head or ports.  Are they unsuitable for low
>> >> core-count machines, or is there some other reason not to commit them?
>> >> If not, what would it take to get these into 11.0 or 11.1?
>> >
>> > The fast page fault handler was redesigned and committed in r269728
>> > and r270011 (with several follow-ups).
> Instead of the lock-less buffer queue iterators, Jeff changed the buffer
> allocator to use UMA; see r289279.  Another improvement to the buffer
> cache was committed as r267255.
>> >
> What was not committed is the aggressive pre-population of the phys
> objects' mem queue, and a knob to further split NUMA domains into
> smaller domains.  The latter change has bitrotted.
>> >
> In fact, I think that with that load, what you would see right now on
> HEAD is contention on vm_page_queue_free_mtx.  There are plans to
> handle it.
>>
>> Thanks for the update.  Is it still recommended to enable the
>> multithreaded pagedaemon?
>
> A single-threaded pagedaemon cannot maintain good system state even on
> non-NUMA systems if the machine has a lot of memory.  This was the
> motivation for the NUMA domain split patch.  So yes, to get better
> performance you should enable the VM_NUMA_ALLOC option.
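
For anyone following along at home: on 11.x that's a kernel config
option, not a sysctl. A minimal sketch of the relevant config lines;
MAXMEMDOM=8 is just a placeholder you'd size to your hardware, and the
exact spelling is worth double-checking against numa(4) on your build:

    options MAXMEMDOM=8     # upper bound on memory domains the VM tracks
    options VM_NUMA_ALLOC   # enable per-domain page allocation
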
>
> Unfortunately, there were some code changes of quite low quality which
> caused NUMA-enabled systems to randomly fail with a NULL pointer
> dereference in the vm page allocation path.  Supposedly that was fixed,
> but you should verify it yourself.  One result of those changes was
> that nobody used or tested NUMA-enabled systems under any significant
> load for quite a long time.

The iterator bug was fixed, so with NUMA enabled it still behaves like
it did circa, what, freebsd-9? If you'd like that older behavior, you
can flip the global policy back to round-robin only, at which point the
whole thing is a glorified, runtime-configurable no-op.
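
Concretely, that flip is one knob. A sketch against the 11.x numa(4)
bits; the sysctl and numactl spellings below are from memory, so verify
them locally before trusting me:

    # make the system-wide allocation policy plain round-robin again
    sysctl vm.default_policy=rr

    # or do it per-process and leave the global default alone
    numactl --mempolicy=rr ./some_workload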

The difference now is that you can tickle imbalances if too many
processes demand pages from one specific domain instead of going
round-robin, because the underlying tracking mechanisms still assume a
single global pool and a single global way of cleaning things.
Something like the sketch below is enough to see it.
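
    # hypothetical repro: pin memory-hungry workers to domain 0 only
    # (policy/flag names per 11.x numactl(8), from memory -- check locally)
    numactl --mempolicy=fixed-domain --memdomain=0 ./worker &
    numactl --mempolicy=fixed-domain --memdomain=0 ./worker &

Domain 0's free lists drain while the other domains sit fat and happy,
but the pagedaemon's targets are still computed against the one global
pool, so nothing starts cleaning until the global numbers look bad.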

That and the other NUMA stuff is something to address in -12.


-adrian
Received on Fri Jun 03 2016 - 16:27:23 UTC
