Due to some good sleuthing by avg@, there is a patch that might fix the recent reports of data corruption on current. It would explain the reports where a file that was read back contained gaps of missing bytes.

The problem is with the BUS_DMA_KEEP_PG_OFFSET changes to bus_dma. When a bounce page was used by USB2, the changes to bus_dma would actually alter the starting virtual and physical addresses of the bounce page, and when the bounce page was no longer needed it was left in this bogus state. If another device later used the same bounce page for DMA, it would use the wrong offset and address. The real trouble comes when that second device does a full page of I/O: the DMA from the device then spills over into the next page, which could in theory be in use by another DMA request. It can also break alignment assumptions, since the stale PG_OFFSET may not be aligned and the bus_dma code assumes that bounce pages in the !PG_OFFSET case are page aligned.

The quick fix is to always restore the bounce page to its normal state when a KEEP_PG_OFFSET DMA request is finished. I'd actually prefer to never touch the page's starting addresses at all, but I believe those changes would be more invasive.
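To make the failure mode concrete, here is a tiny userland sketch (my own illustration, not code from the tree; the struct, addresses, and PAGE_SIZE below only mimic the bounce page bookkeeping) of how a stale page offset makes a later full-page transfer run past the end of the bounce page:

/*
 * Illustrative sketch only -- not busdma code.  It models a single
 * bounce page's bus address and shows how a leftover KEEP_PG_OFFSET
 * offset shifts a later full-page transfer past the page boundary.
 */
#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE	4096UL
#define PAGE_MASK	(PAGE_SIZE - 1)

struct bounce_page {
	uintmax_t busaddr;		/* bus address handed to the device */
};

int
main(void)
{
	struct bounce_page bpage = { .busaddr = 0x100000 };	/* page aligned */

	/* First consumer: a KEEP_PG_OFFSET request at offset 0x200. */
	bpage.busaddr &= ~PAGE_MASK;
	bpage.busaddr |= 0x200;
	/* The page goes back on the free list without clearing the offset. */

	/*
	 * Second consumer: a full-page, non-KEEP_PG_OFFSET request.  It
	 * assumes the bounce page is still page aligned and uses
	 * bpage.busaddr as-is.
	 */
	uintmax_t start = bpage.busaddr;
	uintmax_t end = start + PAGE_SIZE;

	printf("DMA covers %#jx - %#jx\n", start, end);
	printf("%#jx bytes spill into the next physical page\n",
	    end & PAGE_MASK);
	return (0);
}

With a 4 KB page and a stale 0x200 offset, the transfer runs 0x200 bytes into whatever happens to sit in the next page, which is the sort of overrun described above.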
http://www.FreeBSD.org/~jhb/patches/dma_sg.patch

Index: amd64/amd64/busdma_machdep.c
===================================================================
--- amd64/amd64/busdma_machdep.c	(revision 191101)
+++ amd64/amd64/busdma_machdep.c	(working copy)
@@ -1138,8 +1138,6 @@
 	if (dmat->flags & BUS_DMA_KEEP_PG_OFFSET) {
 		/* page offset needs to be preserved */
-		bpage->vaddr &= ~PAGE_MASK;
-		bpage->busaddr &= ~PAGE_MASK;
 		bpage->vaddr |= vaddr & PAGE_MASK;
 		bpage->busaddr |= vaddr & PAGE_MASK;
 	}
@@ -1158,6 +1156,10 @@
 	bz = dmat->bounce_zone;
 	bpage->datavaddr = 0;
 	bpage->datacount = 0;
+	if (dmat->flags & BUS_DMA_KEEP_PG_OFFSET) {
+		bpage->vaddr &= ~PAGE_MASK;
+		bpage->busaddr &= ~PAGE_MASK;
+	}
 	mtx_lock(&bounce_lock);
 	STAILQ_INSERT_HEAD(&bz->bounce_page_list, bpage, links);
Index: arm/arm/busdma_machdep.c
===================================================================
--- arm/arm/busdma_machdep.c	(revision 191101)
+++ arm/arm/busdma_machdep.c	(working copy)
@@ -1428,8 +1428,6 @@
 	if (dmat->flags & BUS_DMA_KEEP_PG_OFFSET) {
 		/* page offset needs to be preserved */
-		bpage->vaddr &= ~PAGE_MASK;
-		bpage->busaddr &= ~PAGE_MASK;
 		bpage->vaddr |= vaddr & PAGE_MASK;
 		bpage->busaddr |= vaddr & PAGE_MASK;
 	}
@@ -1448,6 +1446,10 @@
 	bz = dmat->bounce_zone;
 	bpage->datavaddr = 0;
 	bpage->datacount = 0;
+	if (dmat->flags & BUS_DMA_KEEP_PG_OFFSET) {
+		bpage->vaddr &= ~PAGE_MASK;
+		bpage->busaddr &= ~PAGE_MASK;
+	}
 	mtx_lock(&bounce_lock);
 	STAILQ_INSERT_HEAD(&bz->bounce_page_list, bpage, links);
Index: i386/i386/busdma_machdep.c
===================================================================
--- i386/i386/busdma_machdep.c	(revision 191101)
+++ i386/i386/busdma_machdep.c	(working copy)
@@ -1156,8 +1156,6 @@
 	if (dmat->flags & BUS_DMA_KEEP_PG_OFFSET) {
 		/* page offset needs to be preserved */
-		bpage->vaddr &= ~PAGE_MASK;
-		bpage->busaddr &= ~PAGE_MASK;
 		bpage->vaddr |= vaddr & PAGE_MASK;
 		bpage->busaddr |= vaddr & PAGE_MASK;
 	}
@@ -1176,6 +1174,10 @@
 	bz = dmat->bounce_zone;
 	bpage->datavaddr = 0;
 	bpage->datacount = 0;
+	if (dmat->flags & BUS_DMA_KEEP_PG_OFFSET) {
+		bpage->vaddr &= ~PAGE_MASK;
+		bpage->busaddr &= ~PAGE_MASK;
+	}
 	mtx_lock(&bounce_lock);
 	STAILQ_INSERT_HEAD(&bz->bounce_page_list, bpage, links);
Index: ia64/ia64/busdma_machdep.c
===================================================================
--- ia64/ia64/busdma_machdep.c	(revision 191101)
+++ ia64/ia64/busdma_machdep.c	(working copy)
@@ -941,8 +941,6 @@
 	if (dmat->flags & BUS_DMA_KEEP_PG_OFFSET) {
 		/* page offset needs to be preserved */
-		bpage->vaddr &= ~PAGE_MASK;
-		bpage->busaddr &= ~PAGE_MASK;
 		bpage->vaddr |= vaddr & PAGE_MASK;
 		bpage->busaddr |= vaddr & PAGE_MASK;
 	}
@@ -959,6 +957,10 @@
 	bpage->datavaddr = 0;
 	bpage->datacount = 0;
+	if (dmat->flags & BUS_DMA_KEEP_PG_OFFSET) {
+		bpage->vaddr &= ~PAGE_MASK;
+		bpage->busaddr &= ~PAGE_MASK;
+	}
 	mtx_lock(&bounce_lock);
 	STAILQ_INSERT_HEAD(&bounce_page_list, bpage, links);

-- 
John Baldwin