Series: tcg patch queue

- [PULL,v2,00/60] tcg patch queue
- [PULL,v2,01/60] qemu/int128: Add int128_{not,xor}
- [PULL,v2,02/60] host-utils: move checks out of divu128/divs128
- [PULL,v2,03/60] host-utils: move udiv_qrnnd() to host-utils
- [PULL,v2,04/60] host-utils: add 128-bit quotient support to divu128/divs128
- [PULL,v2,05/60] host-utils: add unit tests for divu128/divs128
- [PULL,v2,06/60] tcg/optimize: Rename "mask" to "z_mask"
- [PULL,v2,07/60] tcg/optimize: Split out OptContext
- [PULL,v2,08/60] tcg/optimize: Remove do_default label
- [PULL,v2,09/60] tcg/optimize: Change tcg_opt_gen_{mov, movi} interface
- [PULL,v2,10/60] tcg/optimize: Move prev_mb into OptContext
- [PULL,v2,11/60] tcg/optimize: Split out init_arguments
- [PULL,v2,12/60] tcg/optimize: Split out copy_propagate
- [PULL,v2,13/60] tcg/optimize: Split out fold_call
- [PULL,v2,14/60] tcg/optimize: Drop nb_oargs, nb_iargs locals
- [PULL,v2,15/60] tcg/optimize: Change fail return for do_constant_folding_cond*
- [PULL,v2,16/60] tcg/optimize: Return true from tcg_opt_gen_{mov, movi}
- [PULL,v2,17/60] tcg/optimize: Split out finish_folding
- [PULL,v2,18/60] tcg/optimize: Use a boolean to avoid a mass of continues
- [PULL,v2,19/60] tcg/optimize: Split out fold_mb, fold_qemu_{ld,st}
- [PULL,v2,20/60] tcg/optimize: Split out fold_const{1,2}
- [PULL,v2,21/60] tcg/optimize: Split out fold_setcond2
- [PULL,v2,22/60] tcg/optimize: Split out fold_brcond2
- [PULL,v2,23/60] tcg/optimize: Split out fold_brcond
- [PULL,v2,24/60] tcg/optimize: Split out fold_setcond
- [PULL,v2,25/60] tcg/optimize: Split out fold_mulu2_i32
- [PULL,v2,26/60] tcg/optimize: Split out fold_addsub2_i32
- [PULL,v2,27/60] tcg/optimize: Split out fold_movcond
- [PULL,v2,28/60] tcg/optimize: Split out fold_extract2
- [PULL,v2,29/60] tcg/optimize: Split out fold_extract, fold_sextract
- [PULL,v2,30/60] tcg/optimize: Split out fold_deposit
- [PULL,v2,31/60] tcg/optimize: Split out fold_count_zeros
- [PULL,v2,32/60] tcg/optimize: Split out fold_bswap
- [PULL,v2,33/60] tcg/optimize: Split out fold_dup, fold_dup2
- [PULL,v2,34/60] tcg/optimize: Split out fold_mov
- [PULL,v2,35/60] tcg/optimize: Split out fold_xx_to_i
- [PULL,v2,36/60] tcg/optimize: Split out fold_xx_to_x
- [PULL,v2,37/60] tcg/optimize: Split out fold_xi_to_i
- [PULL,v2,38/60] tcg/optimize: Add type to OptContext
- [PULL,v2,39/60] tcg/optimize: Split out fold_to_not
- [PULL,v2,40/60] tcg/optimize: Split out fold_sub_to_neg
- [PULL,v2,41/60] tcg/optimize: Split out fold_xi_to_x
- [PULL,v2,42/60] tcg/optimize: Split out fold_ix_to_i
- [PULL,v2,43/60] tcg/optimize: Split out fold_masks
- [PULL,v2,44/60] tcg/optimize: Expand fold_mulu2_i32 to all 4-arg multiplies
- [PULL,v2,45/60] tcg/optimize: Expand fold_addsub2_i32 to 64-bit ops
- [PULL,v2,46/60] tcg/optimize: Sink commutative operand swapping into fold functions
- [PULL,v2,47/60] tcg: Extend call args using the correct opcodes
- [PULL,v2,48/60] tcg/optimize: Stop forcing z_mask to "garbage" for 32-bit values
- [PULL,v2,49/60] tcg/optimize: Use fold_xx_to_i for orc
- [PULL,v2,50/60] tcg/optimize: Use fold_xi_to_x for mul
- [PULL,v2,51/60] tcg/optimize: Use fold_xi_to_x for div
- [PULL,v2,52/60] tcg/optimize: Use fold_xx_to_i for rem
- [PULL,v2,53/60] tcg/optimize: Optimize sign extensions
- [PULL,v2,54/60] tcg/optimize: Propagate sign info for logical operations
- [PULL,v2,55/60] tcg/optimize: Propagate sign info for setcond
- [PULL,v2,56/60] tcg/optimize: Propagate sign info for bit counting
- [PULL,v2,57/60] tcg/optimize: Propagate sign info for shifting
- [PULL,v2,58/60] softmmu: fix watchpoint processing in icount mode
- [PULL,v2,59/60] softmmu: remove useless condition in watchpoint check
- [PULL,v2,60/60] softmmu: fix for "after access" watchpoints