> If a section is larger than INT_MAX, then overflow seems to occur here
> in __elfN_coredump():
>
> %     for (i = 0; i < seginfo.count; i++) {
> %         error = vn_rdwr_inchunks(UIO_WRITE, vp,
> %             (caddr_t)php->p_vaddr, php->p_filesz, offset,
>                                      ^^^^^^^^^^^^^
> %             UIO_USERSPACE, IO_DIRECT, cred, NOCRED, NULL, td);
>
> php->p_filesz has type u_int64_t on 64-bit machines, but here it gets
> silently converted to int, so it overflows if the size is larger than
> INT_MAX.  (Overflow may occur even on 32-bit machines, but it's harder
> to fit a section larger than INT_MAX on a 32-bit machine.)  If ints
> are 32-bit 2's complement and the section size is between 2^31 and
> 2^32-1 inclusive, then the above passes vn_rdwr() a negative length.
> The negative length apparently gets as far as ffs_write() before
> causing a panic.
>
> It's a longstanding bug that ssize_t is 64 bits and SSIZE_MAX is
> 2^63-1 on 64-bit machines, but writes from userland are limited to
> INT_MAX (normally 2^31-1), so 64-bit applications would have a hard
> time writing huge amounts.  Core dumps apparently have the same
> problem writing large sections.  A text section with size 2GB would
> be huge, but a data section with size 2GB is just large.
>
> The traceback should show the args, but that seems to be broken for
> amd64s.

Thanx for the explanation.

It seems that these types of overflows will occur in more than just
this location.  IMHO my problem starts with the malloc routines, and
the panic is just a consequence of that.  And dumping seems to suffer
as an extra bonus.

My tools do not allocate such huge amounts of data, so I can probably
live with this bug.  And in the test set I'll just limit the amount of
space allocated.

Given that this is kernel territory I'm not into, I'll go on fixing
the tools, which are also riddled with hidden 32->64 conversions, with
equal problems.

--WjW
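
A minimal standalone sketch of the truncation described above, for
anyone who wants to see it happen.  This is an illustration only, not
the kernel code and not whatever fix eventually went into the tree; it
assumes 32-bit 2's-complement int, as the quoted mail does.

    #include <limits.h>
    #include <stdint.h>
    #include <stdio.h>

    int
    main(void)
    {
            /* A p_filesz just past INT_MAX, as a 64-bit ELF header carries it. */
            uint64_t p_filesz = (uint64_t)INT_MAX + 2;      /* 2^31 + 1 */

            /*
             * The quoted call takes its length argument as an int, so
             * the 64-bit value is silently truncated, just as in the
             * __elfN_coredump() excerpt.  For sizes in [2^31, 2^32-1]
             * the result is negative on 2's-complement machines.
             */
            int len = (int)p_filesz;

            printf("p_filesz = %ju -> len = %d\n", (uintmax_t)p_filesz, len);

            /*
             * One plausible way to avoid the truncation (a sketch, not
             * a claim about the actual repair): clamp each write to
             * INT_MAX and loop until the whole section is written.
             */
            uint64_t resid = p_filesz;
            while (resid > 0) {
                    int chunk = resid > INT_MAX ? INT_MAX : (int)resid;
                    /* ...issue one bounded write of `chunk' bytes here... */
                    resid -= (uint64_t)chunk;
            }
            return (0);
    }

This prints "p_filesz = 2147483649 -> len = -2147483647": the negative
length that the mail says reaches ffs_write().  Clamping each piece to
INT_MAX is the usual workaround wherever an I/O path takes int lengths.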