Message ID: 20210212223952.1172568-3-anthony.l.nguyen@intel.com
State: New
Series: 40GbE Intel Wired LAN Driver Updates 2021-02-12
On Fri, Feb 12, 2021 at 2:46 PM Tony Nguyen <anthony.l.nguyen@intel.com> wrote:
>
> From: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
>
> i40e_cleanup_headers has a statement about check against skb being
> linear or not which is not relevant anymore, so let's remove it.
>
> Same case for i40e_can_reuse_rx_page, it references things that are not
> present there anymore.
>
> Reviewed-by: Björn Töpel <bjorn.topel@intel.com>
> Signed-off-by: Maciej Fijalkowski <maciej.fijalkowski@intel.com>
> Tested-by: Tony Brelinski <tonyx.brelinski@intel.com>
> Signed-off-by: Tony Nguyen <anthony.l.nguyen@intel.com>
> ---
>  drivers/net/ethernet/intel/i40e/i40e_txrx.c | 33 ++++-----------------
>  1 file changed, 6 insertions(+), 27 deletions(-)
>
> diff --git a/drivers/net/ethernet/intel/i40e/i40e_txrx.c b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> index 3d24c6032616..5f6aa13e85ca 100644
> --- a/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> +++ b/drivers/net/ethernet/intel/i40e/i40e_txrx.c
> @@ -1963,9 +1963,6 @@ void i40e_process_skb_fields(struct i40e_ring *rx_ring,
>   * @skb: pointer to current skb being fixed
>   * @rx_desc: pointer to the EOP Rx descriptor
>   *
> - * Also address the case where we are pulling data in on pages only
> - * and as such no data is present in the skb header.
> - *
>   * In addition if skb is not at least 60 bytes we need to pad it so that
>   * it is large enough to qualify as a valid Ethernet frame.
>   *
> @@ -1998,33 +1995,15 @@ static bool i40e_cleanup_headers(struct i40e_ring *rx_ring, struct sk_buff *skb,
>  }
>
>  /**
> - * i40e_can_reuse_rx_page - Determine if this page can be reused by
> - * the adapter for another receive
> - *
> + * i40e_can_reuse_rx_page - Determine if page can be reused for another Rx
>   * @rx_buffer: buffer containing the page
>   * @rx_buffer_pgcnt: buffer page refcount pre xdp_do_redirect() call
>   *
> - * If page is reusable, rx_buffer->page_offset is adjusted to point to
> - * an unused region in the page.
> - *
> - * For small pages, @truesize will be a constant value, half the size
> - * of the memory at page. We'll attempt to alternate between high and
> - * low halves of the page, with one half ready for use by the hardware
> - * and the other half being consumed by the stack. We use the page
> - * ref count to determine whether the stack has finished consuming the
> - * portion of this page that was passed up with a previous packet. If
> - * the page ref count is >1, we'll assume the "other" half page is
> - * still busy, and this page cannot be reused.
> - *
> - * For larger pages, @truesize will be the actual space used by the
> - * received packet (adjusted upward to an even multiple of the cache
> - * line size). This will advance through the page by the amount
> - * actually consumed by the received packets while there is still
> - * space for a buffer. Each region of larger pages will be used at
> - * most once, after which the page will not be reused.
> - *
> - * In either case, if the page is reusable its refcount is increased.
> - **/
> + * If page is reusable, we have a green light for calling i40e_reuse_rx_page,
> + * which will assign the current buffer to the buffer that next_to_alloc is
> + * pointing to; otherwise, the DMA mapping needs to be destroyed and
> + * page freed
> + */
>  static bool i40e_can_reuse_rx_page(struct i40e_rx_buffer *rx_buffer,
>  				   int rx_buffer_pgcnt)
>  {

So this lost all of the context for why or how the function works. You should probably call out that for 4K pages it is using a simple page count where if the count hits 2 we have to return false, and if the page is bigger than 4K we have to check the remaining unused buffer to determine if we will fail or not.