[bpf-next,v3,00/13] bpf/tests: Extend JIT test suite coverage

Message ID 20210909143303.811171-1-johan.almbladh@anyfinetworks.com

Message

Johan Almbladh Sept. 9, 2021, 2:32 p.m. UTC
This patch set adds a number of new tests to the test_bpf.ko test suite.
The tests are intended to verify the correctness of eBPF JITs.

Changes since v2:
* Fixed tail call test case to handle the case where a called function is
  outside the 32-bit range of the BPF immediate field. Such calls are now
  omitted in this test; a sketch of the range check follows below. (13/13)
* Fixed typo in commit message. (7/13)
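
For context, the omission in 13/13 comes down to whether the call target
survives the round trip through the 32-bit imm field, which holds the
offset from __bpf_call_base. A minimal sketch of such a check, using
kernel names (BPF_EMIT_CALL, BPF_JMP_A, __bpf_call_base) but illustrative
rather than the verbatim patch:

	/* BPF_EMIT_CALL() encodes the target as a 32-bit offset from
	 * __bpf_call_base in insn->imm. If adding the offset back does
	 * not reproduce the original address, the target is out of
	 * 32-bit range, so replace the call with a NOP jump.
	 */
	*insn = BPF_EMIT_CALL(addr);
	if ((long)__bpf_call_base + insn->imm != addr)
		*insn = BPF_JMP_A(0);	/* Skip: NOP */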

Link: https://lore.kernel.org/bpf/20210907222339.4130924-1-johan.almbladh@anyfinetworks.com/
Link: https://lore.kernel.org/bpf/20210902185229.1840281-1-johan.almbladh@anyfinetworks.com/

Johan Almbladh (13):
  bpf/tests: Allow different number of runs per test case
  bpf/tests: Reduce memory footprint of test suite
  bpf/tests: Add exhaustive tests of ALU shift values
  bpf/tests: Add exhaustive tests of ALU operand magnitudes
  bpf/tests: Add exhaustive tests of JMP operand magnitudes
  bpf/tests: Add staggered JMP and JMP32 tests
  bpf/tests: Add exhaustive test of LD_IMM64 immediate magnitudes
  bpf/tests: Add test case flag for verifier zero-extension
  bpf/tests: Add JMP tests with small offsets
  bpf/tests: Add JMP tests with degenerate conditional
  bpf/tests: Expand branch conversion JIT test
  bpf/tests: Add more BPF_END byte order conversion tests
  bpf/tests: Add tail call limit test with external function call

 lib/test_bpf.c | 3348 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 3306 insertions(+), 42 deletions(-)

Comments

Daniel Borkmann Sept. 10, 2021, 7:47 p.m. UTC | #1
On 9/9/21 4:33 PM, Johan Almbladh wrote:
> This patch adds a tail call limit test where the program also emits
> a BPF_CALL to an external function prior to the tail call, mainly
> testing that JITed programs preserve their internal register state, for
> example the tail call count, across such external calls.
> 
> Signed-off-by: Johan Almbladh <johan.almbladh@anyfinetworks.com>
> ---
>   lib/test_bpf.c | 83 ++++++++++++++++++++++++++++++++++++++++++++++++--
>   1 file changed, 80 insertions(+), 3 deletions(-)
> 
> diff --git a/lib/test_bpf.c b/lib/test_bpf.c
> index 7475abfd2186..152193b4080f 100644
> --- a/lib/test_bpf.c
> +++ b/lib/test_bpf.c
> @@ -12202,6 +12202,30 @@ struct tail_call_test {
>   		     offset, TAIL_CALL_MARKER),	       \
>   	BPF_JMP_IMM(BPF_TAIL_CALL, 0, 0, 0)
>   
> +/*
> + * A test function to be called from a BPF program, clobbering a lot of
> + * CPU registers in the process. A JITed BPF program calling this function
> + * must save and restore any caller-saved registers it uses for internal
> + * state, for example the current tail call count.
> + */
> +BPF_CALL_1(bpf_test_func, u64, arg)
> +{
> +	char buf[64];
> +	long a = 0;
> +	long b = 1;
> +	long c = 2;
> +	long d = 3;
> +	long e = 4;
> +	long f = 5;
> +	long g = 6;
> +	long h = 7;
> +
> +	return snprintf(buf, sizeof(buf),
> +			"%ld %lu %lx %ld %lu %lx %ld %lu %x",
> +			a, b, c, d, e, f, g, h, (int)arg);
> +}
> +#define BPF_FUNC_test_func __BPF_FUNC_MAX_ID
> +
>   /*
>    * Tail call tests. Each test case may call any other test in the table,
>    * including itself, specified as a relative index offset from the calling
> @@ -12259,6 +12283,25 @@ static struct tail_call_test tail_call_tests[] = {
>   		},
>   		.result = MAX_TAIL_CALL_CNT + 1,
>   	},
> +	{
> +		"Tail call count preserved across function calls",
> +		.insns = {
> +			BPF_ALU64_IMM(BPF_ADD, R1, 1),
> +			BPF_STX_MEM(BPF_DW, R10, R1, -8),
> +			BPF_CALL_REL(BPF_FUNC_get_numa_node_id),
> +			BPF_CALL_REL(BPF_FUNC_ktime_get_ns),
> +			BPF_CALL_REL(BPF_FUNC_ktime_get_boot_ns),
> +			BPF_CALL_REL(BPF_FUNC_ktime_get_coarse_ns),
> +			BPF_CALL_REL(BPF_FUNC_jiffies64),
> +			BPF_CALL_REL(BPF_FUNC_test_func),
> +			BPF_LDX_MEM(BPF_DW, R1, R10, -8),
> +			BPF_ALU32_REG(BPF_MOV, R0, R1),
> +			TAIL_CALL(0),
> +			BPF_EXIT_INSN(),

From discussion with Johan, there'll be a v4 respin, since the assumption of
R0 being valid before the exit insn does not hold true when going through
the verifier. Fixing it confirmed the 33 limit for the x86 JIT as well, so
interpreter and JIT are both aligned at 33.
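
One way to satisfy the verifier, assuming the usual clobbering of R0-R5
across helper calls, would be to set R0 on the fall-through path after the
tail call rather than before it. An illustrative sketch only, not
necessarily the shape of the v4 fix:

	/* The verifier treats R0 as scratched by the (failed) tail
	 * call, so reload the count from the stack afterwards, right
	 * before the exit, instead of relying on R0 surviving.
	 */
	TAIL_CALL(0),
	BPF_LDX_MEM(BPF_DW, R0, R10, -8),
	BPF_EXIT_INSN(),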

> +		},
> +		.stack_depth = 8,
> +		.result = MAX_TAIL_CALL_CNT + 1,
> +	},
>   	{
>   		"Tail call error path, NULL target",
>   		.insns = {
> @@ -12333,17 +12376,19 @@ static __init int prepare_tail_call_tests(struct bpf_array **pprogs)
>   		/* Relocate runtime tail call offsets and addresses */
>   		for (i = 0; i < len; i++) {
>   			struct bpf_insn *insn = &fp->insnsi[i];
> -
> -			if (insn->imm != TAIL_CALL_MARKER)
> -				continue;
> +			long addr = 0;
>   
>   			switch (insn->code) {
>   			case BPF_LD | BPF_DW | BPF_IMM: