Message ID | 87lgluittz.fsf@linaro.org |
---|---|
State | New |
Series | Make more use of opt_mode |
On Mon, Sep 4, 2017 at 1:31 PM, Richard Sandiford
<richard.sandiford@linaro.org> wrote:
> This patch converts more places that could use int_mode_for_size instead
> of mode_for_size.  This is in preparation for an upcoming patch that
> makes mode_for_size itself return an opt_mode.
>
> require () seems like the right choice in expand_builtin_powi
> because we have got past the point of backing out.  We go on to do:
>
>   op1 = expand_expr (arg1, NULL_RTX, mode2, EXPAND_NORMAL);
>   if (GET_MODE (op1) != mode2)
>     op1 = convert_to_mode (mode2, op1, 0);
>
> which would be invalid for (and have failed for) BLKmode.
>
> In get_builtin_sync_mode and expand_ifn_atomic_compare_exchange,
> the possible bitsizes are {8, 16, 32, 64, 128}, all of which give
> target-independent integer modes (up to TImode).  The comment above
> the call in get_builtin_sync_mode makes clear that an integer mode
> must be found.
>
> We can use require () in expand_builtin_atomic_clear and
> expand_builtin_atomic_test_and_set because there's always an integer
> mode for the boolean type.  The same goes for the POINTER_SIZE request
> in layout_type.  Similarly we can use require () in combine_instructions
> and gen_lowpart_common because there's always an integer mode for
> HOST_BITS_PER_WIDE_INT (DImode when BITS_PER_UNIT == 8), and
> HOST_BITS_PER_DOUBLE_INT (TImode).
>
> The calls in aarch64_function_value, arm_function_value,
> aapcs_allocate_return_reg and mips_function_value_1 are handling
> cases in which a big-endian target passes or returns values at
> the most significant end of a register.  In each case the ABI
> constrains the size to a small amount and does not handle
> non-power-of-2 sizes wider than a word.
>
> The calls in c6x_expand_movmem, i386.c:emit_memset,
> lm32_block_move_inline, microblaze_block_move_straight and
> mips_block_move_straight are dealing with expansions of
> block memory operations using register-wise operations,
> and those registers must have non-BLK mode.
>
> The reason for using require () in ix86_expand_sse_cmp,
> mips_expand_ins_as_unaligned_store, spu.c:adjust_operand and
> spu_emit_branch_or_set is that we go on to emit non-call
> instructions that use registers of that mode, which wouldn't
> be valid for BLKmode.

Ok.

Richard.

> 2017-09-04  Richard Sandiford  <richard.sandiford@linaro.org>
>
> gcc/
> 	* builtins.c (expand_builtin_powi): Use int_mode_for_size.
> 	(get_builtin_sync_mode): Likewise.
> 	(expand_ifn_atomic_compare_exchange): Likewise.
> 	(expand_builtin_atomic_clear): Likewise.
> 	(expand_builtin_atomic_test_and_set): Likewise.
> 	(fold_builtin_atomic_always_lock_free): Likewise.
> 	* calls.c (compute_argument_addresses): Likewise.
> 	(emit_library_call_value_1): Likewise.
> 	(store_one_arg): Likewise.
> 	* combine.c (combine_instructions): Likewise.
> 	* config/aarch64/aarch64.c (aarch64_function_value): Likewise.
> 	* config/arm/arm.c (arm_function_value): Likewise.
> 	(aapcs_allocate_return_reg): Likewise.
> 	* config/c6x/c6x.c (c6x_expand_movmem): Likewise.
> 	* config/i386/i386.c (construct_container): Likewise.
> 	(ix86_gimplify_va_arg): Likewise.
> 	(ix86_expand_sse_cmp): Likewise.
> 	(emit_memmov): Likewise.
> 	(emit_memset): Likewise.
> 	(expand_small_movmem_or_setmem): Likewise.
> 	(ix86_expand_pextr): Likewise.
> 	(ix86_expand_pinsr): Likewise.
> 	* config/lm32/lm32.c (lm32_block_move_inline): Likewise.
> 	* config/microblaze/microblaze.c (microblaze_block_move_straight):
> 	Likewise.
> 	* config/mips/mips.c (mips_function_value_1): Likewise.
> 	(mips_block_move_straight): Likewise.
> 	(mips_expand_ins_as_unaligned_store): Likewise.
> 	* config/powerpcspe/powerpcspe.c
> 	(rs6000_darwin64_record_arg_advance_flush): Likewise.
> 	(rs6000_darwin64_record_arg_flush): Likewise.
> 	* config/rs6000/rs6000.c
> 	(rs6000_darwin64_record_arg_advance_flush): Likewise.
> 	(rs6000_darwin64_record_arg_flush): Likewise.
> 	* config/sparc/sparc.c (sparc_function_arg_1): Likewise.
> 	(sparc_function_value_1): Likewise.
> 	* config/spu/spu.c (adjust_operand): Likewise.
> 	(spu_emit_branch_or_set): Likewise.
> 	(arith_immediate_p): Likewise.
> 	* emit-rtl.c (gen_lowpart_common): Likewise.
> 	* expr.c (expand_expr_real_1): Likewise.
> 	* function.c (assign_parm_setup_block): Likewise.
> 	* gimple-ssa-store-merging.c (encode_tree_to_bitpos): Likewise.
> 	* reload1.c (alter_reg): Likewise.
> 	* stor-layout.c (mode_for_vector): Likewise.
> 	(layout_type): Likewise.
>
> gcc/ada/
> 	* gcc-interface/utils2.c (build_load_modify_store):
> 	Use int_mode_for_size.
>
> Index: gcc/builtins.c
> ===================================================================
> --- gcc/builtins.c	2017-09-04 08:30:09.328308115 +0100
> +++ gcc/builtins.c	2017-09-04 12:18:44.865115639 +0100
> @@ -2755,7 +2755,7 @@ expand_builtin_powi (tree exp, rtx targe
>    /* Emit a libcall to libgcc.  */
>
>    /* Mode of the 2nd argument must match that of an int.  */
> -  mode2 = mode_for_size (INT_TYPE_SIZE, MODE_INT, 0);
> +  mode2 = int_mode_for_size (INT_TYPE_SIZE, 0).require ();
>
>    if (target == NULL_RTX)
>      target = gen_reg_rtx (mode);
> @@ -5477,7 +5477,7 @@ get_builtin_sync_mode (int fcode_diff)
>  {
>    /* The size is not negotiable, so ask not to get BLKmode in return
>       if the target indicates that a smaller size would be better.  */
> -  return mode_for_size (BITS_PER_UNIT << fcode_diff, MODE_INT, 0);
> +  return int_mode_for_size (BITS_PER_UNIT << fcode_diff, 0).require ();
>  }
>
>  /* Expand the memory expression LOC and return the appropriate memory operand
> @@ -5858,7 +5858,7 @@ expand_ifn_atomic_compare_exchange (gcal
>  {
>    int size = tree_to_shwi (gimple_call_arg (call, 3)) & 255;
>    gcc_assert (size == 1 || size == 2 || size == 4 || size == 8 || size == 16);
> -  machine_mode mode = mode_for_size (BITS_PER_UNIT * size, MODE_INT, 0);
> +  machine_mode mode = int_mode_for_size (BITS_PER_UNIT * size, 0).require ();
>    rtx expect, desired, mem, oldval, boolret;
>    enum memmodel success, failure;
>    tree lhs;
> @@ -6154,7 +6154,7 @@ expand_builtin_atomic_clear (tree exp)
>    rtx mem, ret;
>    enum memmodel model;
>
> -  mode = mode_for_size (BOOL_TYPE_SIZE, MODE_INT, 0);
> +  mode = int_mode_for_size (BOOL_TYPE_SIZE, 0).require ();
>    mem = get_builtin_sync_mem (CALL_EXPR_ARG (exp, 0), mode);
>    model = get_memmodel (CALL_EXPR_ARG (exp, 1));
>
> @@ -6189,7 +6189,7 @@ expand_builtin_atomic_test_and_set (tree
>    enum memmodel model;
>    machine_mode mode;
>
> -  mode = mode_for_size (BOOL_TYPE_SIZE, MODE_INT, 0);
> +  mode = int_mode_for_size (BOOL_TYPE_SIZE, 0).require ();
>    mem = get_builtin_sync_mem (CALL_EXPR_ARG (exp, 0), mode);
>    model = get_memmodel (CALL_EXPR_ARG (exp, 1));
>
> @@ -6210,8 +6210,11 @@ fold_builtin_atomic_always_lock_free (tr
>    if (TREE_CODE (arg0) != INTEGER_CST)
>      return NULL_TREE;
>
> +  /* We need a corresponding integer mode for the access to be lock-free.  */
>    size = INTVAL (expand_normal (arg0)) * BITS_PER_UNIT;
> -  mode = mode_for_size (size, MODE_INT, 0);
> +  if (!int_mode_for_size (size, 0).exists (&mode))
> +    return boolean_false_node;
> +
>    mode_align = GET_MODE_ALIGNMENT (mode);
>
>    if (TREE_CODE (arg1) == INTEGER_CST)
> Index: gcc/calls.c
> ===================================================================
> --- gcc/calls.c	2017-09-04 11:50:24.542663572 +0100
> +++ gcc/calls.c	2017-09-04 12:18:44.866121179 +0100
> @@ -2209,8 +2209,8 @@ compute_argument_addresses (struct arg_d
>  	      /* Only part of the parameter is being passed on the stack.
>  		 Generate a simple memory reference of the correct size.  */
>  	      units_on_stack = args[i].locate.size.constant;
> -	      partial_mode = mode_for_size (units_on_stack * BITS_PER_UNIT,
> -					    MODE_INT, 1);
> +	      unsigned int bits_on_stack = units_on_stack * BITS_PER_UNIT;
> +	      partial_mode = int_mode_for_size (bits_on_stack, 1).else_blk ();
>  	      args[i].stack = gen_rtx_MEM (partial_mode, addr);
>  	      set_mem_size (args[i].stack, units_on_stack);
>  	    }
> @@ -4818,7 +4818,7 @@ emit_library_call_value_1 (int retval, r
>  	      unsigned int size
>  		= argvec[argnum].locate.size.constant * BITS_PER_UNIT;
>  	      machine_mode save_mode
> -		= mode_for_size (size, MODE_INT, 1);
> +		= int_mode_for_size (size, 1).else_blk ();
>  	      rtx adr
>  		= plus_constant (Pmode, argblock,
>  				 argvec[argnum].locate.offset.constant);
> @@ -5271,7 +5271,8 @@ store_one_arg (struct arg_data *arg, rtx
>  	    {
>  	      /* We need to make a save area.  */
>  	      unsigned int size = arg->locate.size.constant * BITS_PER_UNIT;
> -	      machine_mode save_mode = mode_for_size (size, MODE_INT, 1);
> +	      machine_mode save_mode
> +		= int_mode_for_size (size, 1).else_blk ();
>  	      rtx adr = memory_address (save_mode, XEXP (arg->stack_slot, 0));
>  	      rtx stack_area = gen_rtx_MEM (save_mode, adr);
>
> Index: gcc/combine.c
> ===================================================================
> --- gcc/combine.c	2017-09-04 11:50:08.502225206 +0100
> +++ gcc/combine.c	2017-09-04 12:18:44.871148881 +0100
> @@ -370,7 +370,7 @@ alloc_insn_link (rtx_insn *insn, unsigne
>  /* Mode used to compute significance in reg_stat[].nonzero_bits.  It is the
>     largest integer mode that can fit in HOST_BITS_PER_WIDE_INT.  */
>
> -static machine_mode nonzero_bits_mode;
> +static scalar_int_mode nonzero_bits_mode;
>
>  /* Nonzero when reg_stat[].nonzero_bits and reg_stat[].sign_bit_copies can
>     be safely used.  It is zero while computing them and after combine has
> @@ -1157,7 +1157,7 @@ combine_instructions (rtx_insn *f, unsig
>    uid_insn_cost = XCNEWVEC (int, max_uid_known + 1);
>    gcc_obstack_init (&insn_link_obstack);
>
> -  nonzero_bits_mode = mode_for_size (HOST_BITS_PER_WIDE_INT, MODE_INT, 0);
> +  nonzero_bits_mode = int_mode_for_size (HOST_BITS_PER_WIDE_INT, 0).require ();
>
>    /* Don't use reg_stat[].nonzero_bits when computing it.  This can cause
>       problems when, for example, we have j <<= 1 in a loop.  */
> Index: gcc/config/aarch64/aarch64.c
> ===================================================================
> --- gcc/config/aarch64/aarch64.c	2017-09-04 11:50:24.544464351 +0100
> +++ gcc/config/aarch64/aarch64.c	2017-09-04 12:18:44.874165502 +0100
> @@ -2235,7 +2235,7 @@ aarch64_function_value (const_tree type,
>        if (size % UNITS_PER_WORD != 0)
>  	{
>  	  size += UNITS_PER_WORD - size % UNITS_PER_WORD;
> -	  mode = mode_for_size (size * BITS_PER_UNIT, MODE_INT, 0);
> +	  mode = int_mode_for_size (size * BITS_PER_UNIT, 0).require ();
>  	}
>      }
>
> Index: gcc/config/arm/arm.c
> ===================================================================
> --- gcc/config/arm/arm.c	2017-09-04 11:50:24.546265130 +0100
> +++ gcc/config/arm/arm.c	2017-09-04 12:18:44.886231985 +0100
> @@ -5358,7 +5358,7 @@ arm_function_value(const_tree type, cons
>        if (size % UNITS_PER_WORD != 0)
>  	{
>  	  size += UNITS_PER_WORD - size % UNITS_PER_WORD;
> -	  mode = mode_for_size (size * BITS_PER_UNIT, MODE_INT, 0);
> +	  mode = int_mode_for_size (size * BITS_PER_UNIT, 0).require ();
>  	}
>      }
>
> @@ -6315,7 +6315,7 @@ aapcs_allocate_return_reg (machine_mode
>        if (size % UNITS_PER_WORD != 0)
>  	{
>  	  size += UNITS_PER_WORD - size % UNITS_PER_WORD;
> -	  mode = mode_for_size (size * BITS_PER_UNIT, MODE_INT, 0);
> +	  mode = int_mode_for_size (size * BITS_PER_UNIT, 0).require ();
>  	}
>      }
>
> Index: gcc/config/c6x/c6x.c
> ===================================================================
> --- gcc/config/c6x/c6x.c	2017-09-04 11:50:08.509428322 +0100
> +++ gcc/config/c6x/c6x.c	2017-09-04 12:18:44.887237526 +0100
> @@ -1758,8 +1758,8 @@ c6x_expand_movmem (rtx dst, rtx src, rtx
>    if (dst_size > src_size)
>      dst_size = src_size;
>
> -  srcmode = mode_for_size (src_size * BITS_PER_UNIT, MODE_INT, 0);
> -  dstmode = mode_for_size (dst_size * BITS_PER_UNIT, MODE_INT, 0);
> +  srcmode = int_mode_for_size (src_size * BITS_PER_UNIT, 0).require ();
> +  dstmode = int_mode_for_size (dst_size * BITS_PER_UNIT, 0).require ();
>    if (src_size >= 4)
>      reg_lowpart = reg = gen_reg_rtx (srcmode);
>    else
> Index: gcc/config/i386/i386.c
> ===================================================================
> --- gcc/config/i386/i386.c	2017-09-04 11:50:08.515731048 +0100
> +++ gcc/config/i386/i386.c	2017-09-04 12:18:44.903326171 +0100
> @@ -9892,16 +9892,17 @@ construct_container (machine_mode mode,
>  	case X86_64_INTEGERSI_CLASS:
>  	  /* Merge TImodes on aligned occasions here too.  */
>  	  if (i * 8 + 8 > bytes)
> -	    tmpmode
> -	      = mode_for_size ((bytes - i * 8) * BITS_PER_UNIT, MODE_INT, 0);
> +	    {
> +	      unsigned int tmpbits = (bytes - i * 8) * BITS_PER_UNIT;
> +	      if (!int_mode_for_size (tmpbits, 0).exists (&tmpmode))
> +		/* We've requested 24 bytes we
> +		   don't have mode for.  Use DImode.  */
> +		tmpmode = DImode;
> +	    }
>  	  else if (regclass[i] == X86_64_INTEGERSI_CLASS)
>  	    tmpmode = SImode;
>  	  else
>  	    tmpmode = DImode;
> -	  /* We've requested 24 bytes we
> -	     don't have mode for.  Use DImode.  */
> -	  if (tmpmode == BLKmode)
> -	    tmpmode = DImode;
>  	  exp [nexps++]
>  	    = gen_rtx_EXPR_LIST (VOIDmode,
>  				 gen_rtx_REG (tmpmode, *intreg),
> @@ -11880,8 +11881,8 @@ ix86_gimplify_va_arg (tree valist, tree
>  	  if (prev_size + cur_size > size)
>  	    {
>  	      cur_size = size - prev_size;
> -	      mode = mode_for_size (cur_size * BITS_PER_UNIT, MODE_INT, 1);
> -	      if (mode == BLKmode)
> +	      unsigned int nbits = cur_size * BITS_PER_UNIT;
> +	      if (!int_mode_for_size (nbits, 1).exists (&mode))
>  		mode = QImode;
>  	    }
>  	  piece_type = lang_hooks.types.type_for_mode (mode, 1);
> @@ -24807,9 +24808,8 @@ ix86_expand_sse_cmp (rtx dest, enum rtx_
>
>    if (GET_MODE_SIZE (cmp_ops_mode) == 64)
>      {
> -      cmp_mode = mode_for_size (GET_MODE_NUNITS (cmp_ops_mode), MODE_INT, 0);
> -      gcc_assert (cmp_mode != BLKmode);
> -
> +      unsigned int nbits = GET_MODE_NUNITS (cmp_ops_mode);
> +      cmp_mode = int_mode_for_size (nbits, 0).require ();
>        maskcmp = true;
>      }
>    else
> @@ -27408,13 +27408,11 @@ emit_memmov (rtx destmem, rtx *srcmem, r
>       Start with the biggest power of 2 less than SIZE_TO_MOVE and half
>       it until move of such size is supported.  */
>    piece_size = 1 << floor_log2 (size_to_move);
> -  move_mode = mode_for_size (piece_size * BITS_PER_UNIT, MODE_INT, 0);
> -  code = optab_handler (mov_optab, move_mode);
> -  while (code == CODE_FOR_nothing && piece_size > 1)
> +  while (!int_mode_for_size (piece_size * BITS_PER_UNIT, 0).exists (&move_mode)
> +	 || (code = optab_handler (mov_optab, move_mode)) == CODE_FOR_nothing)
>      {
> +      gcc_assert (piece_size > 1);
>        piece_size >>= 1;
> -      move_mode = mode_for_size (piece_size * BITS_PER_UNIT, MODE_INT, 0);
> -      code = optab_handler (mov_optab, move_mode);
>      }
>
>    /* Find the corresponding vector mode with the same size as MOVE_MODE.
> @@ -27597,7 +27595,8 @@ emit_memset (rtx destmem, rtx destptr, r
>    move_mode = QImode;
>    if (size_to_move < GET_MODE_SIZE (move_mode))
>      {
> -      move_mode = mode_for_size (size_to_move * BITS_PER_UNIT, MODE_INT, 0);
> +      unsigned int move_bits = size_to_move * BITS_PER_UNIT;
> +      move_mode = int_mode_for_size (move_bits, 0).require ();
>        promoted_val = gen_lowpart (move_mode, promoted_val);
>      }
>    piece_size = GET_MODE_SIZE (move_mode);
> @@ -27792,7 +27791,7 @@ expand_small_movmem_or_setmem (rtx destm
>  			       rtx done_label, bool issetmem)
>  {
>    rtx_code_label *label = ix86_expand_aligntest (count, size, false);
> -  machine_mode mode = mode_for_size (size * BITS_PER_UNIT, MODE_INT, 1);
> +  machine_mode mode = int_mode_for_size (size * BITS_PER_UNIT, 1).else_blk ();
>    rtx modesize;
>    int n;
>
> @@ -50453,7 +50452,8 @@ ix86_expand_pextr (rtx *operands)
>    machine_mode srcmode, dstmode;
>    rtx d, pat;
>
> -  dstmode = mode_for_size (size, MODE_INT, 0);
> +  if (!int_mode_for_size (size, 0).exists (&dstmode))
> +    return false;
>
>    switch (dstmode)
>      {
> @@ -50549,7 +50549,8 @@ ix86_expand_pinsr (rtx *operands)
>    rtx (*pinsr)(rtx, rtx, rtx, rtx);
>    rtx d;
>
> -  srcmode = mode_for_size (size, MODE_INT, 0);
> +  if (!int_mode_for_size (size, 0).exists (&srcmode))
> +    return false;
>
>    switch (srcmode)
>      {
> Index: gcc/config/lm32/lm32.c
> ===================================================================
> --- gcc/config/lm32/lm32.c	2017-09-04 11:50:08.517531827 +0100
> +++ gcc/config/lm32/lm32.c	2017-09-04 12:18:44.903326171 +0100
> @@ -836,7 +836,7 @@ lm32_block_move_inline (rtx dest, rtx sr
>  	  break;
>  	}
>
> -      mode = mode_for_size (bits, MODE_INT, 0);
> +      mode = int_mode_for_size (bits, 0).require ();
>        delta = bits / BITS_PER_UNIT;
>
>        /* Allocate a buffer for the temporary registers.  */
> Index: gcc/config/microblaze/microblaze.c
> ===================================================================
> --- gcc/config/microblaze/microblaze.c	2017-09-04 11:50:08.520232996 +0100
> +++ gcc/config/microblaze/microblaze.c	2017-09-04 12:18:44.904331711 +0100
> @@ -1087,7 +1087,7 @@ microblaze_block_move_straight (rtx dest
>    rtx *regs;
>
>    bits = BITS_PER_WORD;
> -  mode = mode_for_size (bits, MODE_INT, 0);
> +  mode = int_mode_for_size (bits, 0).require ();
>    delta = bits / BITS_PER_UNIT;
>
>    /* Allocate a buffer for the temporary registers.  */
> Index: gcc/config/mips/mips.c
> ===================================================================
> --- gcc/config/mips/mips.c	2017-09-04 11:50:24.550767077 +0100
> +++ gcc/config/mips/mips.c	2017-09-04 12:18:44.906342792 +0100
> @@ -6384,7 +6384,7 @@ mips_function_value_1 (const_tree valtyp
>        if (size % UNITS_PER_WORD != 0)
>  	{
>  	  size += UNITS_PER_WORD - size % UNITS_PER_WORD;
> -	  mode = mode_for_size (size * BITS_PER_UNIT, MODE_INT, 0);
> +	  mode = int_mode_for_size (size * BITS_PER_UNIT, 0).require ();
>  	}
>      }
>
> @@ -7992,7 +7992,7 @@ mips_block_move_straight (rtx dest, rtx
>        bits = BITS_PER_WORD;
>      }
>
> -  mode = mode_for_size (bits, MODE_INT, 0);
> +  mode = int_mode_for_size (bits, 0).require ();
>    delta = bits / BITS_PER_UNIT;
>
>    /* Allocate a buffer for the temporary registers.  */
> @@ -8397,7 +8397,7 @@ mips_expand_ins_as_unaligned_store (rtx
>    if (!mips_get_unaligned_mem (dest, width, bitpos, &left, &right))
>      return false;
>
> -  mode = mode_for_size (width, MODE_INT, 0);
> +  mode = int_mode_for_size (width, 0).require ();
>    src = gen_lowpart (mode, src);
>    if (mode == DImode)
>      {
> Index: gcc/config/powerpcspe/powerpcspe.c
> ===================================================================
> --- gcc/config/powerpcspe/powerpcspe.c	2017-09-04 11:50:24.557069804 +0100
> +++ gcc/config/powerpcspe/powerpcspe.c	2017-09-04 12:18:44.919414816 +0100
> @@ -12222,7 +12222,6 @@ rs6000_darwin64_record_arg_advance_flush
>  {
>    unsigned int startbit, endbit;
>    int intregs, intoffset;
> -  machine_mode mode;
>
>    /* Handle the situations where a float is taking up the first half
>       of the GPR, and the other half is empty (typically due to
> @@ -12246,9 +12245,8 @@ rs6000_darwin64_record_arg_advance_flush
>
>    if (intoffset % BITS_PER_WORD != 0)
>      {
> -      mode = mode_for_size (BITS_PER_WORD - intoffset % BITS_PER_WORD,
> -			    MODE_INT, 0);
> -      if (mode == BLKmode)
> +      unsigned int bits = BITS_PER_WORD - intoffset % BITS_PER_WORD;
> +      if (!int_mode_for_size (bits, 0).exists ())
>  	{
>  	  /* We couldn't find an appropriate mode, which happens,
>  	     e.g., in packed structs when there are 3 bytes to load.
> @@ -12714,9 +12712,8 @@ rs6000_darwin64_record_arg_flush (CUMULA
>
>    if (intoffset % BITS_PER_WORD != 0)
>      {
> -      mode = mode_for_size (BITS_PER_WORD - intoffset % BITS_PER_WORD,
> -			    MODE_INT, 0);
> -      if (mode == BLKmode)
> +      unsigned int bits = BITS_PER_WORD - intoffset % BITS_PER_WORD;
> +      if (!int_mode_for_size (bits, 0).exists (&mode))
>  	{
>  	  /* We couldn't find an appropriate mode, which happens,
>  	     e.g., in packed structs when there are 3 bytes to load.
> Index: gcc/config/rs6000/rs6000.c
> ===================================================================
> --- gcc/config/rs6000/rs6000.c	2017-09-04 11:50:24.560671361 +0100
> +++ gcc/config/rs6000/rs6000.c	2017-09-04 12:18:44.929470219 +0100
> @@ -11654,7 +11654,6 @@ rs6000_darwin64_record_arg_advance_flush
>  {
>    unsigned int startbit, endbit;
>    int intregs, intoffset;
> -  machine_mode mode;
>
>    /* Handle the situations where a float is taking up the first half
>       of the GPR, and the other half is empty (typically due to
> @@ -11678,9 +11677,8 @@ rs6000_darwin64_record_arg_advance_flush
>
>    if (intoffset % BITS_PER_WORD != 0)
>      {
> -      mode = mode_for_size (BITS_PER_WORD - intoffset % BITS_PER_WORD,
> -			    MODE_INT, 0);
> -      if (mode == BLKmode)
> +      unsigned int bits = BITS_PER_WORD - intoffset % BITS_PER_WORD;
> +      if (!int_mode_for_size (bits, 0).exists ())
>  	{
>  	  /* We couldn't find an appropriate mode, which happens,
>  	     e.g., in packed structs when there are 3 bytes to load.
> @@ -12049,9 +12047,8 @@ rs6000_darwin64_record_arg_flush (CUMULA
>
>    if (intoffset % BITS_PER_WORD != 0)
>      {
> -      mode = mode_for_size (BITS_PER_WORD - intoffset % BITS_PER_WORD,
> -			    MODE_INT, 0);
> -      if (mode == BLKmode)
> +      unsigned int bits = BITS_PER_WORD - intoffset % BITS_PER_WORD;
> +      if (!int_mode_for_size (bits, 0).exists (&mode))
>  	{
>  	  /* We couldn't find an appropriate mode, which happens,
>  	     e.g., in packed structs when there are 3 bytes to load.
> Index: gcc/config/sparc/sparc.c
> ===================================================================
> --- gcc/config/sparc/sparc.c	2017-09-04 11:50:24.562472140 +0100
> +++ gcc/config/sparc/sparc.c	2017-09-04 12:18:44.932486840 +0100
> @@ -7123,7 +7123,7 @@ sparc_function_arg_1 (cumulative_args_t
>        HOST_WIDE_INT size = int_size_in_bytes (type);
>        gcc_assert (size <= 16);
>
> -      mode = mode_for_size (size * BITS_PER_UNIT, MODE_INT, 0);
> +      mode = int_mode_for_size (size * BITS_PER_UNIT, 0).else_blk ();
>      }
>
>    return gen_rtx_REG (mode, regno);
> @@ -7499,7 +7499,7 @@ sparc_function_value_1 (const_tree type,
>        HOST_WIDE_INT size = int_size_in_bytes (type);
>        gcc_assert (size <= 32);
>
> -      mode = mode_for_size (size * BITS_PER_UNIT, MODE_INT, 0);
> +      mode = int_mode_for_size (size * BITS_PER_UNIT, 0).else_blk ();
>
>        /* ??? We probably should have made the same ABI change in
>  	 3.4.0 as the one we made for unions.  The latter was
> Index: gcc/config/spu/spu.c
> ===================================================================
> --- gcc/config/spu/spu.c	2017-09-04 12:18:41.572976650 +0100
> +++ gcc/config/spu/spu.c	2017-09-04 12:18:44.934497920 +0100
> @@ -368,7 +368,7 @@ adjust_operand (rtx op, HOST_WIDE_INT *
>        op_size = 32;
>      }
>    /* If it is not a MODE_INT (and/or it is smaller than SI) add a SUBREG.  */
> -  mode = mode_for_size (op_size, MODE_INT, 0);
> +  mode = int_mode_for_size (op_size, 0).require ();
>    if (mode != GET_MODE (op))
>      op = gen_rtx_SUBREG (mode, op, 0);
>    return op;
> @@ -935,7 +935,7 @@ spu_emit_branch_or_set (int is_set, rtx
>    rtx target = operands[0];
>    int compare_size = GET_MODE_BITSIZE (comp_mode);
>    int target_size = GET_MODE_BITSIZE (GET_MODE (target));
> -  machine_mode mode = mode_for_size (target_size, MODE_INT, 0);
> +  machine_mode mode = int_mode_for_size (target_size, 0).require ();
>    rtx select_mask;
>    rtx op_t = operands[2];
>    rtx op_f = operands[3];
> Index: gcc/emit-rtl.c
> ===================================================================
> --- gcc/emit-rtl.c	2017-09-04 11:48:27.399539901 +0100
> +++ gcc/emit-rtl.c	2017-09-04 12:18:44.935503461 +0100
> @@ -1430,9 +1430,9 @@ gen_lowpart_common (machine_mode mode, r
>    innermode = GET_MODE (x);
>    if (CONST_INT_P (x)
>        && msize * BITS_PER_UNIT <= HOST_BITS_PER_WIDE_INT)
> -    innermode = mode_for_size (HOST_BITS_PER_WIDE_INT, MODE_INT, 0);
> +    innermode = int_mode_for_size (HOST_BITS_PER_WIDE_INT, 0).require ();
>    else if (innermode == VOIDmode)
> -    innermode = mode_for_size (HOST_BITS_PER_DOUBLE_INT, MODE_INT, 0);
> +    innermode = int_mode_for_size (HOST_BITS_PER_DOUBLE_INT, 0).require ();
>
>    xsize = GET_MODE_SIZE (innermode);
>
> Index: gcc/expr.c
> ===================================================================
> --- gcc/expr.c	2017-09-04 11:50:24.566974088 +0100
> +++ gcc/expr.c	2017-09-04 12:18:44.938520082 +0100
> @@ -10680,7 +10680,7 @@ expand_expr_real_1 (tree exp, rtx target
>  	  && ! (target != 0 && MEM_P (op0)
>  		&& MEM_P (target)
>  		&& bitpos % BITS_PER_UNIT == 0))
> -	ext_mode = mode_for_size (bitsize, MODE_INT, 1);
> +	ext_mode = int_mode_for_size (bitsize, 1).else_blk ();
>
>        if (ext_mode == BLKmode)
>  	{
> Index: gcc/function.c
> ===================================================================
> --- gcc/function.c	2017-09-04 11:50:24.567874477 +0100
> +++ gcc/function.c	2017-09-04 12:18:44.940531162 +0100
> @@ -2978,8 +2978,8 @@ assign_parm_setup_block (struct assign_p
>  	 that mode's store operation.  */
>        else if (size <= UNITS_PER_WORD)
>  	{
> -	  machine_mode mode
> -	    = mode_for_size (size * BITS_PER_UNIT, MODE_INT, 0);
> +	  unsigned int bits = size * BITS_PER_UNIT;
> +	  machine_mode mode = int_mode_for_size (bits, 0).else_blk ();
>
>  	  if (mode != BLKmode
>  #ifdef BLOCK_REG_PADDING
> Index: gcc/gimple-ssa-store-merging.c
> ===================================================================
> --- gcc/gimple-ssa-store-merging.c	2017-07-27 10:37:56.776048721 +0100
> +++ gcc/gimple-ssa-store-merging.c	2017-09-04 12:18:44.941536703 +0100
> @@ -354,7 +354,7 @@ encode_tree_to_bitpos (tree expr, unsign
>    tree tmp_int = expr;
>    bool sub_byte_op_p = ((bitlen % BITS_PER_UNIT)
>  			|| (bitpos % BITS_PER_UNIT)
> -			|| mode_for_size (bitlen, MODE_INT, 0) == BLKmode);
> +			|| !int_mode_for_size (bitlen, 0).exists ());
>
>    if (!sub_byte_op_p)
>      return (native_encode_expr (tmp_int, ptr + first_byte, total_bytes, 0)
> Index: gcc/reload1.c
> ===================================================================
> --- gcc/reload1.c	2017-09-04 11:49:42.942500722 +0100
> +++ gcc/reload1.c	2017-09-04 12:18:44.943547783 +0100
> @@ -2189,11 +2189,12 @@ alter_reg (int i, int from_reg, bool don
>  	{
>  	  adjust = inherent_size - total_size;
>  	  if (adjust)
> -	    stack_slot
> -	      = adjust_address_nv (x, mode_for_size (total_size
> -						     * BITS_PER_UNIT,
> -						     MODE_INT, 1),
> -				   adjust);
> +	    {
> +	      unsigned int total_bits = total_size * BITS_PER_UNIT;
> +	      machine_mode mem_mode
> +		= int_mode_for_size (total_bits, 1).else_blk ();
> +	      stack_slot = adjust_address_nv (x, mem_mode, adjust);
> +	    }
>  	}
>
>        if (! dont_share_p && ira_conflicts_p)
> @@ -2240,11 +2241,12 @@ alter_reg (int i, int from_reg, bool don
>  	{
>  	  adjust = GET_MODE_SIZE (mode) - total_size;
>  	  if (adjust)
> -	    stack_slot
> -	      = adjust_address_nv (x, mode_for_size (total_size
> -						     * BITS_PER_UNIT,
> -						     MODE_INT, 1),
> -				   adjust);
> +	    {
> +	      unsigned int total_bits = total_size * BITS_PER_UNIT;
> +	      machine_mode mem_mode
> +		= int_mode_for_size (total_bits, 1).else_blk ();
> +	      stack_slot = adjust_address_nv (x, mem_mode, adjust);
> +	    }
>  	}
>
>        spill_stack_slot[from_reg] = stack_slot;
> Index: gcc/stor-layout.c
> ===================================================================
> --- gcc/stor-layout.c	2017-08-30 12:20:41.643620906 +0100
> +++ gcc/stor-layout.c	2017-09-04 12:18:44.944553324 +0100
> @@ -506,8 +506,10 @@ mode_for_vector (scalar_mode innermode,
>    /* For integers, try mapping it to a same-sized scalar mode.  */
>    if (mode == VOIDmode
>        && GET_MODE_CLASS (innermode) == MODE_INT)
> -    mode = mode_for_size (nunits * GET_MODE_BITSIZE (innermode),
> -			  MODE_INT, 0);
> +    {
> +      unsigned int nbits = nunits * GET_MODE_BITSIZE (innermode);
> +      mode = int_mode_for_size (nbits, 0).else_blk ();
> +    }
>
>    if (mode == VOIDmode
>        || (GET_MODE_CLASS (mode) == MODE_INT
> @@ -2295,7 +2297,7 @@ layout_type (tree type)
>        TYPE_SIZE_UNIT (type) = size_int (POINTER_SIZE_UNITS);
>        /* A pointer might be MODE_PARTIAL_INT, but ptrdiff_t must be
>  	 integral, which may be an __intN.  */
> -      SET_TYPE_MODE (type, mode_for_size (POINTER_SIZE, MODE_INT, 0));
> +      SET_TYPE_MODE (type, int_mode_for_size (POINTER_SIZE, 0).require ());
>        TYPE_PRECISION (type) = POINTER_SIZE;
>        break;
>
> @@ -2304,7 +2306,8 @@ layout_type (tree type)
>        /* It's hard to see what the mode and size of a function ought to
>  	 be, but we do know the alignment is FUNCTION_BOUNDARY, so
>  	 make it consistent with that.  */
> -      SET_TYPE_MODE (type, mode_for_size (FUNCTION_BOUNDARY, MODE_INT, 0));
> +      SET_TYPE_MODE (type,
> +		     int_mode_for_size (FUNCTION_BOUNDARY, 0).else_blk ());
>        TYPE_SIZE (type) = bitsize_int (FUNCTION_BOUNDARY);
>        TYPE_SIZE_UNIT (type) = size_int (FUNCTION_BOUNDARY / BITS_PER_UNIT);
>        break;
> Index: gcc/ada/gcc-interface/utils2.c
> ===================================================================
> --- gcc/ada/gcc-interface/utils2.c	2017-05-31 10:02:29.736972972 +0100
> +++ gcc/ada/gcc-interface/utils2.c	2017-09-04 12:18:44.864110098 +0100
> @@ -800,7 +800,8 @@ build_load_modify_store (tree dest, tree
>      {
>        unsigned int size = tree_to_uhwi (TYPE_SIZE (type));
>        type = copy_type (type);
> -      SET_TYPE_MODE (type, mode_for_size (size, MODE_INT, 0));
> +      machine_mode mode = int_mode_for_size (size, 0).else_blk ();
> +      SET_TYPE_MODE (type, mode);
>      }
>
>    /* Create the temporary by inserting a SAVE_EXPR.  */
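As an aside for readers who have not yet met the opt_mode interface the patch switches to: the following is a minimal, standalone sketch of the three calling patterns used above — require () when a mode must exist, exists (&m) when the caller handles the missing-mode case, and else_blk () when falling back to BLKmode preserves the old mode_for_size behaviour. The opt_int_mode class, the machine_mode_sketch enum and int_mode_for_size_sketch are illustrative stand-ins written for this note, not GCC's actual declarations.

```cpp
// Standalone model of an opt_mode-style optional; names are illustrative only.
#include <cassert>
#include <cstdio>

enum machine_mode_sketch { BLKmode, QImode, HImode, SImode, DImode, TImode };

class opt_int_mode
{
  machine_mode_sketch m_mode;
  bool m_set;
public:
  opt_int_mode () : m_mode (BLKmode), m_set (false) {}
  opt_int_mode (machine_mode_sketch m) : m_mode (m), m_set (true) {}

  // exists (): did the lookup find a mode?  Optionally copy it out.
  bool exists () const { return m_set; }
  bool exists (machine_mode_sketch *out) const
  {
    if (m_set)
      *out = m_mode;
    return m_set;
  }

  // require (): the caller asserts that a mode must exist.
  machine_mode_sketch require () const
  {
    assert (m_set);
    return m_mode;
  }

  // else_blk (): fall back to BLKmode, mirroring the old mode_for_size.
  machine_mode_sketch else_blk () const { return m_set ? m_mode : BLKmode; }
};

// Toy stand-in for int_mode_for_size: in this sketch an integer mode exists
// only for power-of-two sizes from 8 to 128 bits.
static opt_int_mode
int_mode_for_size_sketch (unsigned int bits)
{
  switch (bits)
    {
    case 8:   return QImode;
    case 16:  return HImode;
    case 32:  return SImode;
    case 64:  return DImode;
    case 128: return TImode;
    default:  return opt_int_mode ();
    }
}

int
main ()
{
  // Pattern 1: the size is known to map to an integer mode, so failure
  // would be a bug (the require () cases in the patch).
  machine_mode_sketch m1 = int_mode_for_size_sketch (32).require ();

  // Pattern 2: handle the "no such mode" case explicitly, e.g. 24 bits.
  machine_mode_sketch m2;
  if (!int_mode_for_size_sketch (24).exists (&m2))
    m2 = DImode;

  // Pattern 3: keep the old behaviour of returning BLKmode on failure.
  machine_mode_sketch m3 = int_mode_for_size_sketch (24).else_blk ();

  printf ("%d %d %d\n", (int) m1, (int) m2, (int) m3);
  return 0;
}
```

Under these assumptions, m1 is SImode, m2 falls back to DImode because no 24-bit integer mode exists, and m3 is BLKmode; that is the same split between "must succeed", "caller handles failure" and "BLKmode fallback" that the patch encodes at each converted call site.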