Message ID: 20210610163959.71634-1-andrealmeid@collabora.com
Series: lib: Convert UUID runtime test to KUnit
On Thu, Jun 10, 2021 at 01:39:58PM -0300, André Almeida wrote:
> Hi,
>
> This patch converts existing UUID runtime test to use KUnit framework.
>
> Below, there's a comparison between the old output format and the new
> one. Keep in mind that even if KUnit seems very verbose, this is the
> corner case where _every_ test has failed.

Btw, do we have test coverage statistics?

I mean since we reduced 18 test cases to 12, do we still have the same /
better test coverage?
Hi Andy,

On 11/06/21 06:55, Andy Shevchenko wrote:
> On Thu, Jun 10, 2021 at 01:39:58PM -0300, André Almeida wrote:
>> Hi,
>>
>> This patch converts existing UUID runtime test to use KUnit framework.
>>
>> Below, there's a comparison between the old output format and the new
>> one. Keep in mind that even if KUnit seems very verbose, this is the
>> corner case where _every_ test has failed.
>
> Btw, do we have test coverage statistics?
>
> I mean since we reduced 18 test cases to 12, do we still have the same /
> better test coverage?
>

I don't think we have automated statistics, but I can assure you that the
coverage is exactly the same. We now test two correlated functions with the
same input in a single test case, instead of having a separate case for each
one; that's why the number of cases is reduced.

For example, instead of:

	total_tests++;
	if (guid_parse(data->uuid, &le))
		...

	total_tests++;
	if (!guid_equal(&data->le, &le))
		...

we now have:

	KUNIT_ASSERT_EQ(test, guid_parse(data->uuid, &le), 0);
	KUNIT_EXPECT_TRUE(test, guid_equal(&data->le, &le));

That will count as a single test.