Re: Possible bug in malloc-code

From: Bruce Evans <bde@zeta.org.au>
Date: Sat, 29 May 2004 01:07:15 +1000 (EST)

On Fri, 28 May 2004, Willem Jan Withagen wrote:

> ...
> Didn't really work:
>
> The process itself:
> Alloc:  n =  335544320, ADR = 0x00000000485D7000
> Alloc:  n =  402653184, ADR = 0x000000005C5D7000
> Alloc:  n =  469762048, ADR = 0x00000000745D7000
> Alloc:  n =  536870912, ADR = 0xFFFFFFFF905D7000
> Free:   n =  536870912, ADR = 0xFFFFFFFF905D7000
> rMemoryDrv in free(): error: junk pointer, too high to make sense
>
> On the console:
> panic: ffs_write: uio->uio_resid < 0
> at line 602 in file /home2/src/sys/ufs/ffs/ffs_vnops.c
> cpuid = 1;
> Stack backtrace:
> backtrace() at backtrace+0x17
> __panic() at __panic+0x1e4
> ffs_write() at ffs_write+0x162
> vn_rdwr() at vn_rdwr+0x164
> vn_rdwr_inchunks() at vn_rdwr_inchunks+0x80
> elf64_coredump() at elf64_coredump+0x113
> coredump() at coredump+0x586
> sigexit() at sigexit+0x72
> postsig() at postsig+0x1be
> ast() at ast+0x417
> Xfast_syscall() at Xfast_syscall+0xdd
> --- syscall (0), rip = 0x20067c8ec, rsp = 0x7fffffffe878, rbp = 0x2006df6c0 ---
>
> So what next....
> It is VERY reproducible, so with guidance on what to look at,
> I'm more than willing to up my skills and get to the bottom of this.

If a section is larger than INT_MAX, then overflow seems to occur here
in __elfN_coredump():

% 		for (i = 0; i < seginfo.count; i++) {
% 			error = vn_rdwr_inchunks(UIO_WRITE, vp,
% 			    (caddr_t)php->p_vaddr, php->p_filesz, offset,
  			                           ^^^^^^^^^^^^^
% 			    UIO_USERSPACE, IO_DIRECT, cred, NOCRED, NULL, td);
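
The underlined argument is where things go wrong.  A minimal userland
program shows the effect of the same silent conversion (standalone
and illustrative only; the variable names are mine, not the kernel's):

    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
    	uint64_t filesz = (uint64_t)1 << 31;	/* 2^31, just past INT_MAX */
    	int len = filesz;	/* same silent conversion as the marked argument */

    	/*
    	 * With 32-bit 2's complement ints this prints len = -2147483648
    	 * (strictly, the out-of-range conversion is implementation-defined).
    	 */
    	printf("filesz = %ju, len = %d\n", (uintmax_t)filesz, len);
    	return (len < 0);
    }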

php->p_filesz has type u_int64_t on 64-bit machines, but here it gets
silently converted to int, so it overflows if the size is larger than
INT_MAX.  (Overflow may occur even on 32-bit machines, but it's harder
to fit a section larger than INT_MAX on a 32-bit machine.)  If ints
are 32-bit 2's complement and the section size is between 2^31 and
2^32-1 inclusive, then the above passes vn_rdwr() a negative length.
The negative length apparently gets as far as ffs_write() before
causing a panic.
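
One possible shape of a fix (an untested sketch only, assuming
vn_rdwr_inchunks() keeps its int length parameter; error handling and
the surrounding per-segment loop are simplified from the code quoted
above) is to split oversized segments at the caller so the length
argument can never go negative:

    	u_int64_t resid = php->p_filesz;
    	caddr_t base = (caddr_t)php->p_vaddr;

    	while (error == 0 && resid > 0) {
    		/* Cap each call at INT_MAX so the int length cannot wrap. */
    		int len = resid > INT_MAX ? INT_MAX : (int)resid;

    		error = vn_rdwr_inchunks(UIO_WRITE, vp, base, len,
    		    offset, UIO_USERSPACE, IO_DIRECT, cred, NOCRED,
    		    NULL, td);
    		base += len;
    		offset += len;
    		resid -= len;
    	}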

It's a longstanding bug that ssize_t is 64 bits and SSIZE_MAX is
2^63-1 on 64-bit machines, but writes from userland are limited to
INT_MAX (normally 2^31-1), so 64-bit applications would have a hard
time writing huge amounts of data.  Core dumps apparently have the same
problem writing large sections.  A text section with size 2GB would
be huge, but a data section with size 2GB is just large.
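
A userland workaround for that write limit is to cap each write(2) at
INT_MAX and loop.  A minimal sketch (the helper name is hypothetical,
not something from this thread):

    #include <limits.h>
    #include <sys/types.h>
    #include <unistd.h>

    /*
     * Write nbytes from buf to fd, issuing at most INT_MAX bytes per
     * write(2) so the kernel-side int length never goes negative.
     */
    static ssize_t
    write_all(int fd, const char *buf, size_t nbytes)
    {
    	size_t done = 0;

    	while (done < nbytes) {
    		size_t chunk = nbytes - done;
    		ssize_t n;

    		if (chunk > INT_MAX)
    			chunk = INT_MAX;
    		n = write(fd, buf + done, chunk);
    		if (n < 0)
    			return (-1);	/* caller inspects errno */
    		done += (size_t)n;
    	}
    	return ((ssize_t)done);
    }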

The traceback should show the args, but that seems to be broken on
amd64.

Bruce