Message ID: 20210112110647.627783-1-a.darwish@linutronix.de
Series: scsi: libsas: Remove in_interrupt() check
On Tue, Jan 12, 2021 at 11:53:50AM +0000, John Garry wrote:
> On 12/01/2021 11:06, Ahmed S. Darwish wrote:
> > Hi,
> >
> > Changelog v2
> > ------------
...
>
> I'll give this a spin today and help review also then.
>
> There's 18 patches here - it would be very convenient if they were on a
> public branch :)
>

Konstantin's "b4" is your friend:

  https://people.kernel.org/monsieuricon/introducing-b4-and-patch-attestation

It boils down to:

  $ pip install b4
  $ b4 am -v2 20210112110647.627783-1-a.darwish@linutronix.de

Kind regards,

--
Ahmed S. Darwish
Linutronix GmbH
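(Assuming b4's default behaviour: "b4 am" fetches the whole thread from
lore.kernel.org and writes a git-am-ready mbox file into the current
directory, so the series can then be applied with something like:

  $ git am ./*.mbx

The exact output file name is derived from the series subject.)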
On 12/01/2021 11:06, Ahmed S. Darwish wrote:
>		sas_phy = container_of(port->phy_list.next, struct asd_sas_phy,
>					port_phy_el);
> -		sas_notify_port_event_gfp(sas_phy,
> +		sas_notify_port_event(sas_phy,
>				PORTE_BROADCAST_RCVD, GFP_KERNEL);

nit: I think that this now fits on a single line, without exceeding 80
characters.

Thanks,
John
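For reference, the collapsed call suggested in the nit above would look
something like this (a sketch; the exact indentation depends on the
surrounding function):

  sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD, GFP_KERNEL);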
- intel-linux-scu@intel.com

On 12/01/2021 13:19, Ahmed S. Darwish wrote:
> On Tue, Jan 12, 2021 at 11:53:50AM +0000, John Garry wrote:
>> On 12/01/2021 11:06, Ahmed S. Darwish wrote:
>>> Hi,
>>>
>>> Changelog v2
>>> ------------
> ...
>>
>> I'll give this a spin today and help review also then.
>>

I boot-tested on my machines which have hisi_sas v2 and v3 hw, and it's
ok. I will ask some guys to test a bit more.

And generally the changes look ok. But I just have a slight concern that
we don't pass the gfp_flags all the way from the origin caller.

So we have some really long callchains, for example:

host.c: sci_controller_error_handler(): atomic, irq handler (*)
OR host.c: sci_controller_completion_handler(), atomic, tasklet (*)
 -> sci_controller_process_completions()
  -> sci_controller_unsolicited_frame()
   -> phy.c: sci_phy_frame_handler()
    -> sci_change_state(SCI_PHY_SUB_AWAIT_SAS_POWER)
     -> sci_phy_starting_await_sas_power_substate_enter()
      -> host.c: sci_controller_power_control_queue_insert()
       -> phy.c: sci_phy_consume_power_handler()
        -> sci_change_state(SCI_PHY_SUB_FINAL)
    -> sci_change_state(SCI_PHY_SUB_FINAL)
  -> sci_controller_event_completion()
   -> phy.c: sci_phy_event_handler()
    -> sci_phy_start_sata_link_training()
     -> sci_change_state(SCI_PHY_SUB_AWAIT_SATA_POWER)
      -> sci_phy_starting_await_sata_power_substate_enter()
       -> host.c: sci_controller_power_control_queue_insert()
        -> phy.c: sci_phy_consume_power_handler()
         -> sci_change_state(SCI_PHY_SUB_FINAL)

So if someone rearranges the code later, adds new callchains, etc., it
could be missed that the context may have changed from what we assume at
the bottom. But then passing the flags everywhere is cumbersome, and all
the libsas users see little or no significant changes anyway, apart from
a couple.

Thanks,
John
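To make the trade-off concrete, here is a minimal sketch of the two
approaches being weighed (the example function names are hypothetical,
not taken from the isci code; sas_notify_port_event() and its gfp_t
parameter are the API introduced by the series):

  /*
   * Approach A: thread gfp_t through every caller, so the origin of
   * the call chain decides the allocation context.
   */
  static void example_frame_handler(struct asd_sas_phy *sas_phy,
                                    gfp_t gfp_flags)
  {
          /* This layer just forwards whatever the top of the chain chose. */
          sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD, gfp_flags);
  }

  /*
   * Approach B (what the series does): the notification site picks the
   * flag matching the context it is known to run in today.
   */
  static void example_irq_bottom(struct asd_sas_phy *sas_phy)
  {
          /* Reached only from hardirq/tasklet context => must not sleep. */
          sas_notify_port_event(sas_phy, PORTE_BROADCAST_RCVD, GFP_ATOMIC);
  }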
On 12/01/2021 17:33, Ahmed S. Darwish wrote:
> On Tue, Jan 12, 2021 at 04:00:57PM +0000, John Garry wrote:
> ...
>> I boot-tested on my machines which have hisi_sas v2 and v3 hw, and it's ok.
>> I will ask some guys to test a bit more.
>>
> Thanks a lot!
>
>> And generally the changes look ok. But I just have a slight concern that we
>> don't pass the gfp_flags all the way from the origin caller.
>>
>> So we have some really long callchains, for example:
>>
>> host.c: sci_controller_error_handler(): atomic, irq handler (*)
>> OR host.c: sci_controller_completion_handler(), atomic, tasklet (*)
>>  -> sci_controller_process_completions()
>>   -> sci_controller_unsolicited_frame()
>>    -> phy.c: sci_phy_frame_handler()
>>     -> sci_change_state(SCI_PHY_SUB_AWAIT_SAS_POWER)
>>      -> sci_phy_starting_await_sas_power_substate_enter()
>>       -> host.c: sci_controller_power_control_queue_insert()
>>        -> phy.c: sci_phy_consume_power_handler()
>>         -> sci_change_state(SCI_PHY_SUB_FINAL)
>>     -> sci_change_state(SCI_PHY_SUB_FINAL)
>>   -> sci_controller_event_completion()
>>    -> phy.c: sci_phy_event_handler()
>>     -> sci_phy_start_sata_link_training()
>>      -> sci_change_state(SCI_PHY_SUB_AWAIT_SATA_POWER)
>>       -> sci_phy_starting_await_sata_power_substate_enter()
>>        -> host.c: sci_controller_power_control_queue_insert()
>>         -> phy.c: sci_phy_consume_power_handler()
>>          -> sci_change_state(SCI_PHY_SUB_FINAL)
>>
>> So if someone rearranges the code later, adds new callchains, etc., it could
>> be missed that the context may have changed from what we assume at the
>> bottom. But then passing the flags everywhere is cumbersome, and all the
>> libsas users see little or no significant changes anyway, apart from a
>> couple.
>>
> The deep call chains like the one you've quoted are all within the isci
> Intel driver (patches #5 => #7), due to the *massive* state transitions
> that driver has. But as the commit logs of these three patches show,
> almost all of such transitions happened under atomic context anyway and
> GFP_ATOMIC was thus used.
>
> The GFP_KERNEL call-chains were all very simple: a workqueue, functions
> already calling msleep() or wait_event_timeout() two or three lines
> nearby, and so on.
>
> All the other libsas clients (that is, except isci) also had normal call
> chains that were IMHO easy to follow.

To me, the series looks fine. Well, the end result - I didn't go through
it patch by patch. So:

Reviewed-by: John Garry <john.garry@huawei.com>

I'm still hoping some guys are testing a bit for me, but I'll let you
know if any problem arises.

As an aside, your analysis showed some quite poor usage of spinlocks in
some drivers, specifically grabbing a lock and then calling into a depth
of 3 or 4 functions.

Thanks,
John
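As an illustration of the "simple GFP_KERNEL call-chain" pattern
described above (a hypothetical example, not code from the patches;
struct example_hba and its fields are made up):

  static void example_phy_work(struct work_struct *work)
  {
          struct example_hba *hba =
                  container_of(work, struct example_hba, phy_work);

          /* Workqueue context: this function already sleeps... */
          msleep(100);

          /* ...so a blocking GFP_KERNEL notification nearby is clearly safe. */
          sas_notify_port_event(hba->sas_phy, PORTE_BROADCAST_RCVD,
                                GFP_KERNEL);
  }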
On Thu, Jan 14, 2021 at 09:51:35AM +0000, John Garry wrote:
...
>
> To me, the series looks fine. Well, the end result - I didn't go
> through it patch by patch. So:
>
> Reviewed-by: John Garry <john.garry@huawei.com>
>

Thanks!

Shall I add your r-b tag to the whole series then, or only to the ones
which directly touch libsas (#3, #12, #16, and #19)?

>
> As an aside, your analysis showed some quite poor usage of spinlocks
> in some drivers, specifically grabbing a lock and then calling into a
> depth of 3 or 4 functions.
>

Correct.

Kind regards,

--
Ahmed S. Darwish
On 15/01/2021 16:27, Ahmed S. Darwish wrote:
> Thanks!
>
> Shall I add your r-b tag to the whole series then, or only to the ones
> which directly touch libsas (#3, #12, #16, and #19)?

The whole series, if you like. But there was a nit about fitting some
code on a single line still, and I think Christoph also had some issue
on that related topic.

>
>> As an aside, your analysis showed some quite poor usage of spinlocks
>> in some drivers, specifically grabbing a lock and then calling into a
>> depth of 3 or 4 functions.
>>
> Correct.

BTW, the testing report looked all good.

Thanks,
John
On Fri, Jan 15, 2021 at 04:29:51PM +0000, John Garry wrote:
> On 15/01/2021 16:27, Ahmed S. Darwish wrote:
> > Thanks!
> >
> > Shall I add your r-b tag to the whole series then, or only to the
> > ones which directly touch libsas (#3, #12, #16, and #19)?
>
> The whole series, if you like. But there was a nit about fitting some
> code on a single line still, and I think Christoph also had some issue
> on that related topic.
>

Nice. Then I'll send a v3 fixing these 80-column issues -- including in
the intermediate patches.

> >
> > > As an aside, your analysis showed some quite poor usage of
> > > spinlocks in some drivers, specifically grabbing a lock and then
> > > calling into a depth of 3 or 4 functions.
> > >
> > Correct.
>
> BTW, the testing report looked all good.
>

Oh, that's good to hear :)

Have a nice weekend,

--
Ahmed S. Darwish
Linutronix GmbH