Re: PostgreSQL performance on FreeBSD

From: Alan Somers <asomers_at_freebsd.org>
Date: Fri, 3 Jun 2016 09:29:16 -0600
On Thu, Aug 14, 2014 at 12:19 PM, Alan Cox <alc_at_rice.edu> wrote:
> On 08/14/2014 10:47, John Baldwin wrote:
>> On Wednesday, August 13, 2014 1:00:22 pm Alan Cox wrote:
>>> On Tue, Aug 12, 2014 at 1:09 PM, John Baldwin <jhb_at_freebsd.org> wrote:
>>>
>>>> On Wednesday, July 16, 2014 1:52:45 pm Adrian Chadd wrote:
>>>>> Hi!
>>>>>
>>>>>
>>>>> On 16 July 2014 06:29, Konstantin Belousov <kostikbel_at_gmail.com> wrote:
>>>>>> On Fri, Jun 27, 2014 at 03:56:13PM +0300, Konstantin Belousov wrote:
>>>>>>> Hi,
>>>>>>> I did some measurements and hacks to see about the performance and
>>>>>>> scalability of PostgreSQL 9.3 on FreeBSD, sponsored by The FreeBSD
>>>>>>> Foundation.
>>>>>>>
>>>>>>> The results are described in https://kib.kiev.ua/kib/pgsql_perf.pdf.
>>>>>>> The uncommitted patches, referenced in the article, are available as
>>>>>>> https://kib.kiev.ua/kib/pig1.patch.txt
>>>>>>> https://kib.kiev.ua/kib/patch-2
>>>>>> A followup to the original paper.
>>>>>>
>>>>>> Most importantly, I identified the cause for the drop on the graph
>>>>>> after the 30 clients, which appeared to be the debugging version
>>>>>> of malloc(3) in libc.
>>>>>>
>>>>>> Also there are some updates on the patches.
>>>>>>
>>>>>> New version of the paper is available at
>>>>>> https://www.kib.kiev.ua/kib/pgsql_perf_v2.0.pdf
>>>>>> The changes are marked as 'update for version 2.0'.
>>>>> Would you mind trying a default (non-PRODUCTION) build, but with junk
>>>>> filling turned off?
>>>>>
>>>>> adrian_at_adrian-hackbox:~ % ls -l /etc/malloc.conf
>>>>> lrwxr-xr-x  1 root  wheel  10 Jun 24 04:37 /etc/malloc.conf -> junk:false
>>>>>
>>>>> That fixes almost all of the malloc debug performance issues that I
>>>>> see without having to recompile.
>>>>>
>>>>> I'd like to know if you see any after that.
>>>> OTOH, I have actually seen junk filling _improve_ performance in
>>>> certain cases, as it forces promotion of allocated pages to
>>>> superpages since all pages are dirtied.  (I have a local hack that
>>>> adds a new malloc option to explicitly memset() new pages allocated
>>>> via mmap(), which gives the same benefit without the junking
>>>> overhead on each malloc() / free(), but it does increase physical
>>>> RAM usage.)
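>>>>
>>>> A minimal, portable sketch of the effect described here: dirtying
>>>> every base page of an anonymous mapping in one pass.  On FreeBSD,
>>>> once all 4KB pages backing a 2MB reservation are dirty, the pmap
>>>> layer can promote the mapping to a superpage; junk filling gets
>>>> this as a side effect of writing to every allocated byte.  This is
>>>> an illustration of the mechanism, not the actual patch:
>>>>
>>>> ```c
>>>> #include <stdio.h>
>>>> #include <string.h>
>>>> #include <sys/mman.h>
>>>>
>>>> #define SUPERPAGE_SIZE (2 * 1024 * 1024)  /* common superpage size */
>>>>
>>>> int main(void) {
>>>>     void *p = mmap(NULL, SUPERPAGE_SIZE, PROT_READ | PROT_WRITE,
>>>>                    MAP_ANON | MAP_PRIVATE, -1, 0);
>>>>     if (p == MAP_FAILED) {
>>>>         perror("mmap");
>>>>         return 1;
>>>>     }
>>>>     /* Touch (dirty) all 512 4KB pages up front; after this the
>>>>      * whole reservation is resident and modified, making it a
>>>>      * promotion candidate without waiting for the application to
>>>>      * dirty the pages one allocation at a time. */
>>>>     memset(p, 0, SUPERPAGE_SIZE);
>>>>     printf("dirtied %d bytes\n", SUPERPAGE_SIZE);
>>>>     munmap(p, SUPERPAGE_SIZE);
>>>>     return 0;
>>>> }
>>>> ```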
>>>>
>>>>
>>> John,
>>>
>>> A couple small steps have been taken toward eliminating the need for this
>>> hack: the addition of the "page size index" field to struct vm_page and the
>>> addition of a similarly named parameter to pmap_enter().  However, at the
>>> moment, the only tangible effect is in the automatic prefaulting by
>>> mmap(2).  Instead of establishing 96 4KB page mappings, the automatic
>>> prefaulting establishes 96 page mappings whose size is determined by the
>>> size of the physical pages that it finds in the vm object.  So, the
>>> prefaulting overhead remains constant, but the coverage provided by the
>>> automatic prefaulting will vary with the underlying page size.
>> Yes, I think what we might actually want is what I mentioned in person at
>> BSDCan: some sort of flag to mmap() that malloc() could use to assume that any
>> reservations are fully used when they are reserved.  This would avoid the need
>> to wait for all pages to be dirtied before promotion provides a superpage
>> mapping and would avoid demotions while still allowing the kernel to gracefully
>> fall back to regular pages if a reservation can't be made.
>>
>
> I agree.
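>
> As a sketch of how an allocator might use such a flag: the name
> MAP_EAGER_SUPER below is invented for illustration -- no such flag
> exists, and FreeBSD's real MAP_ALIGNED_SUPER only requests superpage
> alignment, not the "treat the reservation as fully used" semantics
> proposed above.  The point is that the allocator opts in at mmap()
> time, so no dirtying pass is needed before promotion and the kernel
> can still fall back to 4KB pages if no reservation is available:
>
> ```c
> #include <stddef.h>
> #include <sys/mman.h>
>
> /* Hypothetical flag; defined to 0 so the sketch is a no-op on
>  * systems without it. */
> #ifndef MAP_EAGER_SUPER
> #define MAP_EAGER_SUPER 0
> #endif
>
> /* How a malloc implementation might request a chunk whose whole
>  * reservation is assumed used, making it immediately eligible for a
>  * superpage mapping. */
> static void *chunk_alloc(size_t size) {
>     void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
>                    MAP_ANON | MAP_PRIVATE | MAP_EAGER_SUPER, -1, 0);
>     return (p == MAP_FAILED) ? NULL : p;
> }
> ```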

I notice that, with the exception of the VM_PHYSSEG_MAX change, these
patches never made it into head or ports.  Are they unsuitable for low
core-count machines, or is there some other reason not to commit them?
If not, what would it take to get these into 11.0 or 11.1?

-Alan
Received on Fri Jun 03 2016 - 13:29:18 UTC
