From patchwork Fri Oct 21 09:18:31 2016
X-Patchwork-Submitter: Eric Botcazou
X-Patchwork-Id: 78617
From: Eric Botcazou
To: gcc-patches@gcc.gnu.org
Cc: jakub@redhat.com
Subject: [SPARC] Add support for overflow arithmetic
Date: Fri, 21 Oct 2016 11:18:31 +0200
Message-ID: <2518121.rYAYzHP3Zx@polaris>

SPARC can expose the CC register before reload, so the implementation is
direct, as on x86 and ARM.  As on ARM, there is no support for multiplication
at all, but there is support for 64-bit operations in 32-bit mode; the latter
would give rise to complex patterns for signed arithmetic involving TImode in
32-bit mode, which is not (supposed to be) supported, so, unlike the ARM
implementation, this one resorts to UNSPECs for signed arithmetic and avoids
TImode.

The code is optimal for 32-bit and 64-bit operations in 32-bit mode, but only
for 64-bit operations in 64-bit mode, although SPARC supports separate 32-bit
and 64-bit condition codes in 64-bit mode; that's because the port defines the
WORD_REGISTER_OPERATIONS macro and expand_arith_overflow has:

  /* For sub-word operations, if target doesn't have them, start with
     precres widening right away, otherwise do it only if the most simple
     cases can't be used.  */
  if (WORD_REGISTER_OPERATIONS
      && orig_precres == precres
      && precres < BITS_PER_WORD)
    ;

(A small example of such a sub-word case is appended after the patch.)

Jakub, any idea of an elegant way to address this issue?

Tested on SPARC/Solaris, applied on the mainline.


2016-10-21  Eric Botcazou

	* config/sparc/sparc-modes.def (CCV): New.
	(CCXV): Likewise.
	* config/sparc/predicates.md (v_comparison_operator): New.
	(icc_comparison_operator): Add support for CCV/CCXV.
	(xcc_comparison_operator): Likewise.
	* config/sparc/sparc.c (output_cbranch): Likewise.
	(sparc_print_operand): Likewise.
	* config/sparc/sparc.md (UNSPEC_{ADD,SUB,NEG}V): New constants.
	(uaddvdi4): New expander.
	(addvdi4): Likewise.
	(uaddvdi4_sp32): New instruction.
	(addvdi4_sp32): Likewise.
	(uaddvsi4): New expander.
	(addvsi4): Likewise.
	(cmp_ccc_plus_sltu_set): New instruction.
	(cmp_ccv_plus): Likewise.
	(cmp_ccxv_plus): Likewise.
	(cmp_ccv_plus_set): Likewise.
	(cmp_ccxv_plus_set): Likewise.
	(cmp_ccv_plus_sltu_set): Likewise.
	(usubvdi4): New expander.
	(subvdi4): Likewise.
	(usubvdi4_sp32): New instruction.
	(subvdi4_sp32): Likewise.
	(usubvsi4): New expander.
	(subvsi4): Likewise.
	(cmp_ccc_minus_sltu_set): New instruction.
	(cmp_ccv_minus): Likewise.
	(cmp_ccxv_minus): Likewise.
	(cmp_ccv_minus_set): Likewise.
	(cmp_ccxv_minus_set): Likewise.
	(cmp_ccv_minus_sltu_set): Likewise.
	(unegvdi3): New expander.
	(negvdi3): Likewise.
	(unegvdi3_sp32): New instruction.
	(negvdi3_sp32): Likewise.
	(unegvsi3): New expander.
	(negvsi3): Likewise.
	(cmp_ccc_neg_sltu_set): New instruction.
	(cmp_ccv_neg): Likewise.
	(cmp_ccxv_neg): Likewise.
	(cmp_ccv_neg_set): Likewise.
	(cmp_ccxv_neg_set): Likewise.
	(cmp_ccv_neg_sltu_set): Likewise.

testsuite/
	* gcc.target/sparc/overflow-1.c: New test.
	* gcc.target/sparc/overflow-2.c: Likewise.
	* gcc.target/sparc/overflow-3.c: Likewise.

-- 
Eric Botcazou

Index: config/sparc/predicates.md
===================================================================
--- config/sparc/predicates.md	(revision 241326)
+++ config/sparc/predicates.md	(working copy)
@@ -420,6 +420,10 @@ (define_predicate "nz_comparison_operato
 (define_predicate "c_comparison_operator"
   (match_code "ltu,geu"))
 
+;; Return true if OP is a valid comparison operator for CCVmode.
+(define_predicate "v_comparison_operator"
+  (match_code "eq,ne"))
+
 ;; Return true if OP is an integer comparison operator.  This allows
 ;; the use of MATCH_OPERATOR to recognize all the branch insns.
(define_predicate "icc_comparison_operator" @@ -436,6 +440,9 @@ (define_predicate "icc_comparison_operat case CCCmode: case CCXCmode: return c_comparison_operator (op, mode); + case CCVmode: + case CCXVmode: + return v_comparison_operator (op, mode); default: return false; } Index: config/sparc/sparc-modes.def =================================================================== --- config/sparc/sparc-modes.def (revision 241326) +++ config/sparc/sparc-modes.def (working copy) @@ -34,6 +34,10 @@ FLOAT_MODE (TF, 16, ieee_quad_format); they explicitly set the C flag (unsigned overflow). Only the unsigned <,>= operators can be used in conjunction with it. + We also have a CCVmode which is used by the arithmetic instructions when + they explicitly set the V flag (signed overflow). Only the =,!= operators + can be used in conjunction with it. + We also have two modes to indicate that the relevant condition code is in the floating-point condition code register. One for comparisons which will generate an exception if the result is unordered (CCFPEmode) and @@ -46,6 +50,8 @@ CC_MODE (CCNZ); CC_MODE (CCXNZ); CC_MODE (CCC); CC_MODE (CCXC); +CC_MODE (CCV); +CC_MODE (CCXV); CC_MODE (CCFP); CC_MODE (CCFPE); Index: config/sparc/sparc.c =================================================================== --- config/sparc/sparc.c (revision 241326) +++ config/sparc/sparc.c (working copy) @@ -2784,8 +2784,9 @@ select_cc_mode (enum rtx_code op, rtx x, gcc_unreachable (); } } - else if (GET_CODE (x) == PLUS || GET_CODE (x) == MINUS - || GET_CODE (x) == NEG || GET_CODE (x) == ASHIFT) + else if ((GET_CODE (x) == PLUS || GET_CODE (x) == MINUS + || GET_CODE (x) == NEG || GET_CODE (x) == ASHIFT) + && y == const0_rtx) { if (TARGET_ARCH64 && GET_MODE (x) == DImode) return CCXNZmode; @@ -2803,6 +2804,18 @@ select_cc_mode (enum rtx_code op, rtx x, return CCCmode; } + /* This is for the [u]addvdi4_sp32 and [u]subvdi4_sp32 patterns. 
*/ + if (!TARGET_ARCH64 && GET_MODE (x) == DImode) + { + if (GET_CODE (y) == UNSPEC + && (XINT (y, 1) == UNSPEC_ADDV + || XINT (y, 1) == UNSPEC_SUBV + || XINT (y, 1) == UNSPEC_NEGV)) + return CCVmode; + else + return CCCmode; + } + if (TARGET_ARCH64 && GET_MODE (x) == DImode) return CCXmode; else @@ -7724,10 +7737,16 @@ output_cbranch (rtx op, rtx dest, int la switch (code) { case NE: - branch = "bne"; + if (mode == CCVmode || mode == CCXVmode) + branch = "bvs"; + else + branch = "bne"; break; case EQ: - branch = "be"; + if (mode == CCVmode || mode == CCXVmode) + branch = "bvc"; + else + branch = "be"; break; case GE: if (mode == CCNZmode || mode == CCXNZmode) @@ -7794,6 +7813,7 @@ output_cbranch (rtx op, rtx dest, int la case CCmode: case CCNZmode: case CCCmode: + case CCVmode: labelno = "%%icc, "; if (v8) labelno = ""; @@ -7801,6 +7821,7 @@ output_cbranch (rtx op, rtx dest, int la case CCXmode: case CCXNZmode: case CCXCmode: + case CCXVmode: labelno = "%%xcc, "; gcc_assert (!v8); break; @@ -8804,11 +8825,13 @@ sparc_print_operand (FILE *file, rtx x, case CCmode: case CCNZmode: case CCCmode: + case CCVmode: s = "%icc"; break; case CCXmode: case CCXNZmode: case CCXCmode: + case CCXVmode: s = "%xcc"; break; default: @@ -8883,10 +8906,16 @@ sparc_print_operand (FILE *file, rtx x, switch (GET_CODE (x)) { case NE: - s = "ne"; + if (mode == CCVmode || mode == CCXVmode) + s = "vs"; + else + s = "ne"; break; case EQ: - s = "e"; + if (mode == CCVmode || mode == CCXVmode) + s = "vc"; + else + s = "e"; break; case GE: if (mode == CCNZmode || mode == CCXNZmode) Index: config/sparc/sparc.md =================================================================== --- config/sparc/sparc.md (revision 241326) +++ config/sparc/sparc.md (working copy) @@ -92,6 +92,10 @@ (define_c_enum "unspec" [ UNSPEC_MUL8 UNSPEC_MUL8SU UNSPEC_MULDSU + + UNSPEC_ADDV + UNSPEC_SUBV + UNSPEC_NEGV ]) (define_c_enum "unspecv" [ @@ -3714,6 +3718,51 @@ (define_expand "adddi3" } }) +(define_expand "uaddvdi4" + [(parallel [(set (reg:CCXC CC_REG) + (compare:CCXC (plus:DI (match_operand:DI 1 "register_operand") + (match_operand:DI 2 "arith_add_operand")) + (match_dup 1))) + (set (match_operand:DI 0 "register_operand") + (plus:DI (match_dup 1) (match_dup 2)))]) + (set (pc) (if_then_else (ltu (reg:CCXC CC_REG) (const_int 0)) + (label_ref (match_operand 3)) + (pc)))] + "" +{ + if (!TARGET_64BIT) + { + emit_insn (gen_uaddvdi4_sp32 (operands[0], operands[1], operands[2])); + rtx x = gen_rtx_LTU (VOIDmode, gen_rtx_REG (CCCmode, SPARC_ICC_REG), + const0_rtx); + emit_jump_insn (gen_cbranchcc4 (x, XEXP (x, 0), XEXP (x, 1), operands[3])); + DONE; + } +}) + +(define_expand "addvdi4" + [(parallel [(set (reg:CCXV CC_REG) + (compare:CCXV (plus:DI (match_operand:DI 1 "register_operand") + (match_operand:DI 2 "arith_add_operand")) + (unspec:DI [(match_dup 1) (match_dup 2)] + UNSPEC_ADDV))) + (set (match_operand:DI 0 "register_operand") + (plus:DI (match_dup 1) (match_dup 2)))]) + (set (pc) (if_then_else (ne (reg:CCXV CC_REG) (const_int 0)) + (label_ref (match_operand 3)) + (pc)))] + "" +{ + if (!TARGET_64BIT) + { + emit_insn (gen_addvdi4_sp32 (operands[0], operands[1], operands[2])); + rtx x = gen_rtx_NE (VOIDmode, gen_rtx_REG (CCVmode, SPARC_ICC_REG), + const0_rtx); + emit_jump_insn (gen_cbranchcc4 (x, XEXP (x, 0), XEXP (x, 1), operands[3])); + DONE; + } +}) + (define_insn_and_split "adddi3_sp32" [(set (match_operand:DI 0 "register_operand" "=&r") (plus:DI (match_operand:DI 1 "register_operand" "%r") @@ -3740,6 +3789,80 @@ (define_insn_and_split 
"adddi3_sp32" } [(set_attr "length" "2")]) +(define_insn_and_split "uaddvdi4_sp32" + [(set (reg:CCC CC_REG) + (compare:CCC (plus:DI (match_operand:DI 1 "register_operand" "%r") + (match_operand:DI 2 "arith_double_operand" "rHI")) + (match_dup 1))) + (set (match_operand:DI 0 "register_operand" "=&r") + (plus:DI (match_dup 1) (match_dup 2)))] + "!TARGET_ARCH64" + "#" + "&& reload_completed" + [(parallel [(set (reg:CCC CC_REG) + (compare:CCC (plus:SI (match_dup 4) (match_dup 5)) + (match_dup 4))) + (set (match_dup 3) + (plus:SI (match_dup 4) (match_dup 5)))]) + (parallel [(set (reg:CCC CC_REG) + (compare:CCC (zero_extend:DI + (plus:SI (plus:SI (match_dup 7) (match_dup 8)) + (ltu:SI (reg:CCC CC_REG) + (const_int 0)))) + (plus:DI (plus:DI (zero_extend:DI (match_dup 7)) + (zero_extend:DI (match_dup 8))) + (ltu:DI (reg:CCC CC_REG) + (const_int 0))))) + (set (match_dup 6) + (plus:SI (plus:SI (match_dup 7) (match_dup 8)) + (ltu:SI (reg:CCC CC_REG) + (const_int 0))))])] +{ + operands[3] = gen_lowpart (SImode, operands[0]); + operands[4] = gen_lowpart (SImode, operands[1]); + operands[5] = gen_lowpart (SImode, operands[2]); + operands[6] = gen_highpart (SImode, operands[0]); + operands[7] = gen_highpart_mode (SImode, DImode, operands[1]); + operands[8] = gen_highpart_mode (SImode, DImode, operands[2]); +} + [(set_attr "length" "2")]) + +(define_insn_and_split "addvdi4_sp32" + [(set (reg:CCV CC_REG) + (compare:CCV (plus:DI (match_operand:DI 1 "register_operand" "%r") + (match_operand:DI 2 "arith_double_operand" "rHI")) + (unspec:DI [(match_dup 1) (match_dup 2)] UNSPEC_ADDV))) + (set (match_operand:DI 0 "register_operand" "=&r") + (plus:DI (match_dup 1) (match_dup 2)))] + "!TARGET_ARCH64" + "#" + "&& reload_completed" + [(parallel [(set (reg:CCC CC_REG) + (compare:CCC (plus:SI (match_dup 4) (match_dup 5)) + (match_dup 4))) + (set (match_dup 3) + (plus:SI (match_dup 4) (match_dup 5)))]) + (parallel [(set (reg:CCV CC_REG) + (compare:CCV (plus:SI (plus:SI (match_dup 7) (match_dup 8)) + (ltu:SI (reg:CCC CC_REG) + (const_int 0))) + (unspec:SI [(plus:SI (match_dup 7) (match_dup 8)) + (ltu:SI (reg:CCC CC_REG) + (const_int 0))] + UNSPEC_ADDV))) + (set (match_dup 6) + (plus:SI (plus:SI (match_dup 7) (match_dup 8)) + (ltu:SI (reg:CCC CC_REG) (const_int 0))))])] +{ + operands[3] = gen_lowpart (SImode, operands[0]); + operands[4] = gen_lowpart (SImode, operands[1]); + operands[5] = gen_lowpart (SImode, operands[2]); + operands[6] = gen_highpart (SImode, operands[0]); + operands[7] = gen_highpart_mode (SImode, DImode, operands[1]); + operands[8] = gen_highpart_mode (SImode, DImode, operands[2]); +} + [(set_attr "length" "2")]) + (define_insn_and_split "*addx_extend_sp32" [(set (match_operand:DI 0 "register_operand" "=r") (zero_extend:DI (plus:SI (plus:SI @@ -3797,6 +3920,31 @@ (define_insn "addsi3" [(set_attr "type" "*,*") (set_attr "fptype" "*,*")]) +(define_expand "uaddvsi4" + [(parallel [(set (reg:CCC CC_REG) + (compare:CCC (plus:SI (match_operand:SI 1 "register_operand") + (match_operand:SI 2 "arith_operand")) + (match_dup 1))) + (set (match_operand:SI 0 "register_operand") + (plus:SI (match_dup 1) (match_dup 2)))]) + (set (pc) (if_then_else (ltu (reg:CCC CC_REG) (const_int 0)) + (label_ref (match_operand 3)) + (pc)))] + "") + +(define_expand "addvsi4" + [(parallel [(set (reg:CCV CC_REG) + (compare:CCV (plus:SI (match_operand:SI 1 "register_operand") + (match_operand:SI 2 "arith_operand")) + (unspec:SI [(match_dup 1) (match_dup 2)] + UNSPEC_ADDV))) + (set (match_operand:SI 0 "register_operand") + (plus:SI 
(match_dup 1) (match_dup 2)))]) + (set (pc) (if_then_else (ne (reg:CCV CC_REG) (const_int 0)) + (label_ref (match_operand 3)) + (pc)))] + "") + (define_insn "*cmp_ccnz_plus" [(set (reg:CCNZ CC_REG) (compare:CCNZ (plus:SI (match_operand:SI 0 "register_operand" "%r") @@ -3877,6 +4025,79 @@ (define_insn "*cmp_ccxc_plus_set" "addcc\t%1, %2, %0" [(set_attr "type" "compare")]) +(define_insn "*cmp_ccc_plus_sltu_set" + [(set (reg:CCC CC_REG) + (compare:CCC (zero_extend:DI + (plus:SI + (plus:SI (match_operand:SI 1 "register_operand" "%r") + (match_operand:SI 2 "arith_operand" "rI")) + (ltu:SI (reg:CCC CC_REG) (const_int 0)))) + (plus:DI (plus:DI (zero_extend:DI (match_dup 1)) + (zero_extend:DI (match_dup 2))) + (ltu:DI (reg:CCC CC_REG) (const_int 0))))) + (set (match_operand:SI 0 "register_operand" "=r") + (plus:SI (plus:SI (match_dup 1) (match_dup 2)) + (ltu:SI (reg:CCC CC_REG) (const_int 0))))] + "" + "addxcc\t%1, %2, %0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccv_plus" + [(set (reg:CCV CC_REG) + (compare:CCV (plus:SI (match_operand:SI 0 "register_operand" "%r") + (match_operand:SI 1 "arith_operand" "rI")) + (unspec:SI [(match_dup 0) (match_dup 1)] UNSPEC_ADDV)))] + "" + "addcc\t%0, %1, %%g0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccxv_plus" + [(set (reg:CCXV CC_REG) + (compare:CCXV (plus:DI (match_operand:DI 0 "register_operand" "%r") + (match_operand:DI 1 "arith_operand" "rI")) + (unspec:DI [(match_dup 0) (match_dup 1)] UNSPEC_ADDV)))] + "TARGET_ARCH64" + "addcc\t%0, %1, %%g0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccv_plus_set" + [(set (reg:CCV CC_REG) + (compare:CCV (plus:SI (match_operand:SI 1 "register_operand" "%r") + (match_operand:SI 2 "arith_operand" "rI")) + (unspec:SI [(match_dup 1) (match_dup 2)] UNSPEC_ADDV))) + (set (match_operand:SI 0 "register_operand" "=r") + (plus:SI (match_dup 1) (match_dup 2)))] + "" + "addcc\t%1, %2, %0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccxv_plus_set" + [(set (reg:CCXV CC_REG) + (compare:CCXV (plus:DI (match_operand:DI 1 "register_operand" "%r") + (match_operand:DI 2 "arith_operand" "rI")) + (unspec:DI [(match_dup 1) (match_dup 2)] UNSPEC_ADDV))) + (set (match_operand:DI 0 "register_operand" "=r") + (plus:DI (match_dup 1) (match_dup 2)))] + "TARGET_ARCH64" + "addcc\t%1, %2, %0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccv_plus_sltu_set" + [(set (reg:CCV CC_REG) + (compare:CCV (plus:SI (plus:SI (match_operand:SI 1 "register_operand" "%r") + (match_operand:SI 2 "arith_operand" "rI")) + (ltu:SI (reg:CCC CC_REG) (const_int 0))) + (unspec:SI [(plus:SI (match_dup 1) (match_dup 2)) + (ltu:SI (reg:CCC CC_REG) (const_int 0))] + UNSPEC_ADDV))) + (set (match_operand:SI 0 "register_operand" "=r") + (plus:SI (plus:SI (match_dup 1) (match_dup 2)) + (ltu:SI (reg:CCC CC_REG) (const_int 0))))] + "" + "addxcc\t%1, %2, %0" + [(set_attr "type" "compare")]) + + (define_expand "subdi3" [(set (match_operand:DI 0 "register_operand" "") (minus:DI (match_operand:DI 1 "register_operand" "") @@ -3890,6 +4111,56 @@ (define_expand "subdi3" } }) +(define_expand "usubvdi4" + [(parallel [(set (reg:CCX CC_REG) + (compare:CCX (match_operand:DI 1 "register_or_zero_operand") + (match_operand:DI 2 "arith_add_operand"))) + (set (match_operand:DI 0 "register_operand") + (minus:DI (match_dup 1) (match_dup 2)))]) + (set (pc) (if_then_else (ltu (reg:CCX CC_REG) (const_int 0)) + (label_ref (match_operand 3)) + (pc)))] + "" +{ + if (operands[1] == const0_rtx) + { + emit_insn (gen_unegvdi3 (operands[0], operands[2], 
operands[3])); + DONE; + } + + if (!TARGET_64BIT) + { + emit_insn (gen_usubvdi4_sp32 (operands[0], operands[1], operands[2])); + rtx x = gen_rtx_LTU (VOIDmode, gen_rtx_REG (CCCmode, SPARC_ICC_REG), + const0_rtx); + emit_jump_insn (gen_cbranchcc4 (x, XEXP (x, 0), XEXP (x, 1), operands[3])); + DONE; + } +}) + +(define_expand "subvdi4" + [(parallel [(set (reg:CCXV CC_REG) + (compare:CCXV (minus:DI (match_operand:DI 1 "register_operand") + (match_operand:DI 2 "arith_add_operand")) + (unspec:DI [(match_dup 1) (match_dup 2)] + UNSPEC_SUBV))) + (set (match_operand:DI 0 "register_operand") + (minus:DI (match_dup 1) (match_dup 2)))]) + (set (pc) (if_then_else (ne (reg:CCXV CC_REG) (const_int 0)) + (label_ref (match_operand 3)) + (pc)))] + "" +{ + if (!TARGET_64BIT) + { + emit_insn (gen_subvdi4_sp32 (operands[0], operands[1], operands[2])); + rtx x = gen_rtx_NE (VOIDmode, gen_rtx_REG (CCVmode, SPARC_ICC_REG), + const0_rtx); + emit_jump_insn (gen_cbranchcc4 (x, XEXP (x, 0), XEXP (x, 1), operands[3])); + DONE; + } +}) + (define_insn_and_split "subdi3_sp32" [(set (match_operand:DI 0 "register_operand" "=&r") (minus:DI (match_operand:DI 1 "register_operand" "r") @@ -3915,6 +4186,80 @@ (define_insn_and_split "subdi3_sp32" } [(set_attr "length" "2")]) +(define_insn_and_split "usubvdi4_sp32" + [(set (reg:CCC CC_REG) + (compare:CCC (match_operand:DI 1 "register_operand" "r") + (match_operand:DI 2 "arith_double_operand" "rHI"))) + (set (match_operand:DI 0 "register_operand" "=&r") + (minus:DI (match_dup 1) (match_dup 2)))] + "!TARGET_ARCH64" + "#" + "&& reload_completed" + [(parallel [(set (reg:CC CC_REG) + (compare:CC (match_dup 4) (match_dup 5))) + (set (match_dup 3) + (minus:SI (match_dup 4) (match_dup 5)))]) + (parallel [(set (reg:CCC CC_REG) + (compare:CCC (zero_extend:DI + (minus:SI (minus:SI (match_dup 7) + (ltu:SI (reg:CC CC_REG) + (const_int 0))) + (match_dup 8))) + (minus:DI + (minus:DI (zero_extend:DI (match_dup 7)) + (ltu:DI (reg:CC CC_REG) + (const_int 0))) + (zero_extend:DI (match_dup 8))))) + (set (match_dup 6) + (minus:SI (minus:SI (match_dup 7) + (ltu:SI (reg:CC CC_REG) + (const_int 0))) + (match_dup 8)))])] +{ + operands[3] = gen_lowpart (SImode, operands[0]); + operands[4] = gen_lowpart (SImode, operands[1]); + operands[5] = gen_lowpart (SImode, operands[2]); + operands[6] = gen_highpart (SImode, operands[0]); + operands[7] = gen_highpart_mode (SImode, DImode, operands[1]); + operands[8] = gen_highpart_mode (SImode, DImode, operands[2]); +} + [(set_attr "length" "2")]) + +(define_insn_and_split "subvdi4_sp32" + [(set (reg:CCV CC_REG) + (compare:CCV (minus:DI (match_operand:DI 1 "register_operand" "%r") + (match_operand:DI 2 "arith_double_operand" "rHI")) + (unspec:DI [(match_dup 1) (match_dup 2)] UNSPEC_SUBV))) + (set (match_operand:DI 0 "register_operand" "=&r") + (minus:DI (match_dup 1) (match_dup 2)))] + "!TARGET_ARCH64" + "#" + "&& reload_completed" + [(parallel [(set (reg:CC CC_REG) + (compare:CC (match_dup 4) (match_dup 5))) + (set (match_dup 3) + (minus:SI (match_dup 4) (match_dup 5)))]) + (parallel [(set (reg:CCV CC_REG) + (compare:CCV (minus:SI (minus:SI (match_dup 7) (match_dup 8)) + (ltu:SI (reg:CC CC_REG) + (const_int 0))) + (unspec:SI [(minus:SI (match_dup 7) (match_dup 8)) + (ltu:SI (reg:CC CC_REG) + (const_int 0))] + UNSPEC_SUBV))) + (set (match_dup 6) + (minus:SI (minus:SI (match_dup 7) (match_dup 8)) + (ltu:SI (reg:CC CC_REG) (const_int 0))))])] +{ + operands[3] = gen_lowpart (SImode, operands[0]); + operands[4] = gen_lowpart (SImode, operands[1]); + operands[5] = 
gen_lowpart (SImode, operands[2]); + operands[6] = gen_highpart (SImode, operands[0]); + operands[7] = gen_highpart_mode (SImode, DImode, operands[1]); + operands[8] = gen_highpart_mode (SImode, DImode, operands[2]); +} + [(set_attr "length" "2")]) + (define_insn_and_split "*subx_extend_sp32" [(set (match_operand:DI 0 "register_operand" "=r") (zero_extend:DI (minus:SI (minus:SI @@ -3971,6 +4316,37 @@ (define_insn "subsi3" [(set_attr "type" "*,*") (set_attr "fptype" "*,*")]) +(define_expand "usubvsi4" + [(parallel [(set (reg:CC CC_REG) + (compare:CC (match_operand:SI 1 "register_or_zero_operand") + (match_operand:SI 2 "arith_operand"))) + (set (match_operand:SI 0 "register_operand") + (minus:SI (match_dup 1) (match_dup 2)))]) + (set (pc) (if_then_else (ltu (reg:CC CC_REG) (const_int 0)) + (label_ref (match_operand 3)) + (pc)))] + "" +{ + if (operands[1] == const0_rtx) + { + emit_insn (gen_unegvsi3 (operands[0], operands[2], operands[3])); + DONE; + } +}) + +(define_expand "subvsi4" + [(parallel [(set (reg:CCV CC_REG) + (compare:CCV (minus:SI (match_operand:SI 1 "register_operand") + (match_operand:SI 2 "arith_operand")) + (unspec:SI [(match_dup 1) (match_dup 2)] + UNSPEC_SUBV))) + (set (match_operand:SI 0 "register_operand") + (minus:SI (match_dup 1) (match_dup 2)))]) + (set (pc) (if_then_else (ne (reg:CCV CC_REG) (const_int 0)) + (label_ref (match_operand 3)) + (pc)))] + "") + (define_insn "*cmp_ccnz_minus" [(set (reg:CCNZ CC_REG) (compare:CCNZ (minus:SI (match_operand:SI 0 "register_or_zero_operand" "rJ") @@ -4031,6 +4407,82 @@ (define_insn "*cmpdi_set" "subcc\t%r1, %2, %0" [(set_attr "type" "compare")]) +(define_insn "*cmp_ccc_minus_sltu_set" + [(set (reg:CCC CC_REG) + (compare:CCC (zero_extend:DI + (minus:SI + (minus:SI + (match_operand:SI 1 "register_or_zero_operand" "rJ") + (ltu:SI (reg:CC CC_REG) (const_int 0))) + (match_operand:SI 2 "arith_operand" "rI"))) + (minus:DI + (minus:DI + (zero_extend:DI (match_dup 1)) + (ltu:DI (reg:CC CC_REG) (const_int 0))) + (zero_extend:DI (match_dup 2))))) + (set (match_operand:SI 0 "register_operand" "=r") + (minus:SI (minus:SI (match_dup 1) + (ltu:SI (reg:CC CC_REG) (const_int 0))) + (match_dup 2)))] + "" + "subxcc\t%r1, %2, %0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccv_minus" + [(set (reg:CCV CC_REG) + (compare:CCV (minus:SI (match_operand:SI 0 "register_or_zero_operand" "rJ") + (match_operand:SI 1 "arith_operand" "rI")) + (unspec:SI [(match_dup 0) (match_dup 1)] UNSPEC_SUBV)))] + "" + "subcc\t%r0, %1, %%g0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccxv_minus" + [(set (reg:CCXV CC_REG) + (compare:CCXV (minus:DI (match_operand:DI 0 "register_or_zero_operand" "rJ") + (match_operand:DI 1 "arith_operand" "rI")) + (unspec:DI [(match_dup 0) (match_dup 1)] UNSPEC_SUBV)))] + "TARGET_ARCH64" + "subcc\t%r0, %1, %%g0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccv_minus_set" + [(set (reg:CCV CC_REG) + (compare:CCV (minus:SI (match_operand:SI 1 "register_or_zero_operand" "rJ") + (match_operand:SI 2 "arith_operand" "rI")) + (unspec:SI [(match_dup 1) (match_dup 2)] UNSPEC_SUBV))) + (set (match_operand:SI 0 "register_operand" "=r") + (minus:SI (match_dup 1) (match_dup 2)))] + "" + "subcc\t%r1, %2, %0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccxv_minus_set" + [(set (reg:CCXV CC_REG) + (compare:CCXV (minus:DI (match_operand:DI 1 "register_or_zero_operand" "rJ") + (match_operand:DI 2 "arith_operand" "rI")) + (unspec:DI [(match_dup 1) (match_dup 2)] UNSPEC_SUBV))) + (set (match_operand:DI 0 "register_operand" 
"=r") + (minus:DI (match_dup 1) (match_dup 2)))] + "TARGET_ARCH64" + "subcc\t%r1, %2, %0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccv_minus_sltu_set" + [(set (reg:CCV CC_REG) + (compare:CCV (minus:SI (minus:SI (match_operand:SI 1 "register_or_zero_operand" "rJ") + (match_operand:SI 2 "arith_operand" "rI")) + (ltu:SI (reg:CC CC_REG) (const_int 0))) + (unspec:SI [(minus:SI (match_dup 1) (match_dup 2)) + (ltu:SI (reg:CC CC_REG) (const_int 0))] + UNSPEC_SUBV))) + (set (match_operand:SI 0 "register_operand" "=r") + (minus:SI (minus:SI (match_dup 1) (match_dup 2)) + (ltu:SI (reg:CC CC_REG) (const_int 0))))] + "" + "subxcc\t%1, %2, %0" + [(set_attr "type" "compare")]) + ;; Integer multiply/divide instructions. @@ -5127,6 +5579,50 @@ (define_expand "negdi2" } }) +(define_expand "unegvdi3" + [(parallel [(set (reg:CCXC CC_REG) + (compare:CCXC (not:DI (match_operand:DI 1 "register_operand" "")) + (const_int -1))) + (set (match_operand:DI 0 "register_operand" "") + (neg:DI (match_dup 1)))]) + (set (pc) + (if_then_else (ltu (reg:CCXC CC_REG) (const_int 0)) + (label_ref (match_operand 2 "")) + (pc)))] + "" +{ + if (!TARGET_64BIT) + { + emit_insn (gen_unegvdi3_sp32 (operands[0], operands[1])); + rtx x = gen_rtx_LTU (VOIDmode, gen_rtx_REG (CCCmode, SPARC_ICC_REG), + const0_rtx); + emit_jump_insn (gen_cbranchcc4 (x, XEXP (x, 0), XEXP (x, 1), operands[2])); + DONE; + } +}) + +(define_expand "negvdi3" + [(parallel [(set (reg:CCXV CC_REG) + (compare:CCXV (neg:DI (match_operand:DI 1 "register_operand" "")) + (unspec:DI [(match_dup 1)] UNSPEC_NEGV))) + (set (match_operand:DI 0 "register_operand" "") + (neg:DI (match_dup 1)))]) + (set (pc) + (if_then_else (ne (reg:CCXV CC_REG) (const_int 0)) + (label_ref (match_operand 2 "")) + (pc)))] + "" +{ + if (!TARGET_64BIT) + { + emit_insn (gen_negvdi3_sp32 (operands[0], operands[1])); + rtx x = gen_rtx_NE (VOIDmode, gen_rtx_REG (CCVmode, SPARC_ICC_REG), + const0_rtx); + emit_jump_insn (gen_cbranchcc4 (x, XEXP (x, 0), XEXP (x, 1), operands[2])); + DONE; + } +}) + (define_insn_and_split "negdi2_sp32" [(set (match_operand:DI 0 "register_operand" "=&r") (neg:DI (match_operand:DI 1 "register_operand" "r"))) @@ -5145,6 +5641,64 @@ (define_insn_and_split "negdi2_sp32" operands[5] = gen_lowpart (SImode, operands[1]);" [(set_attr "length" "2")]) +(define_insn_and_split "unegvdi3_sp32" + [(set (reg:CCC CC_REG) + (compare:CCC (not:DI (match_operand:DI 1 "register_operand" "r")) + (const_int -1))) + (set (match_operand:DI 0 "register_operand" "=&r") + (neg:DI (match_dup 1)))] + "!TARGET_ARCH64" + "#" + "&& reload_completed" + [(parallel [(set (reg:CCC CC_REG) + (compare:CCC (not:SI (match_dup 5)) (const_int -1))) + (set (match_dup 4) (neg:SI (match_dup 5)))]) + (parallel [(set (reg:CCC CC_REG) + (compare:CCC (zero_extend:DI + (neg:SI (plus:SI (match_dup 3) + (ltu:SI (reg:CCC CC_REG) + (const_int 0))))) + (neg:DI (plus:DI (zero_extend:DI (match_dup 3)) + (ltu:DI (reg:CCC CC_REG) + (const_int 0)))))) + (set (match_dup 2) (neg:SI (plus:SI (match_dup 3) + (ltu:SI (reg:CCC CC_REG) + (const_int 0)))))])] + "operands[2] = gen_highpart (SImode, operands[0]); + operands[3] = gen_highpart (SImode, operands[1]); + operands[4] = gen_lowpart (SImode, operands[0]); + operands[5] = gen_lowpart (SImode, operands[1]);" + [(set_attr "length" "2")]) + +(define_insn_and_split "negvdi3_sp32" + [(set (reg:CCV CC_REG) + (compare:CCV (neg:DI (match_operand:DI 1 "register_operand" "r")) + (unspec:DI [(match_dup 1)] UNSPEC_NEGV))) + (set (match_operand:DI 0 "register_operand" "=&r") + (neg:DI 
(match_dup 1)))] + "!TARGET_ARCH64" + "#" + "&& reload_completed" + [(parallel [(set (reg:CCC CC_REG) + (compare:CCC (not:SI (match_dup 5)) (const_int -1))) + (set (match_dup 4) (neg:SI (match_dup 5)))]) + (parallel [(set (reg:CCV CC_REG) + (compare:CCV (neg:SI (plus:SI (match_dup 3) + (ltu:SI (reg:CCC CC_REG) + (const_int 0)))) + (unspec:SI [(plus:SI (match_dup 3) + (ltu:SI (reg:CCC CC_REG) + (const_int 0)))] + UNSPEC_NEGV))) + (set (match_dup 2) (neg:SI (plus:SI (match_dup 3) + (ltu:SI (reg:CCC CC_REG) + (const_int 0)))))])] + "operands[2] = gen_highpart (SImode, operands[0]); + operands[3] = gen_highpart (SImode, operands[1]); + operands[4] = gen_lowpart (SImode, operands[0]); + operands[5] = gen_lowpart (SImode, operands[1]);" + [(set_attr "length" "2")]) + (define_insn "*negdi2_sp64" [(set (match_operand:DI 0 "register_operand" "=r") (neg:DI (match_operand:DI 1 "register_operand" "r")))] @@ -5157,6 +5711,30 @@ (define_insn "negsi2" "" "sub\t%%g0, %1, %0") +(define_expand "unegvsi3" + [(parallel [(set (reg:CCC CC_REG) + (compare:CCC (not:SI (match_operand:SI 1 "arith_operand" "")) + (const_int -1))) + (set (match_operand:SI 0 "register_operand" "") + (neg:SI (match_dup 1)))]) + (set (pc) + (if_then_else (ltu (reg:CCC CC_REG) (const_int 0)) + (label_ref (match_operand 2 "")) + (pc)))] + "") + +(define_expand "negvsi3" + [(parallel [(set (reg:CCV CC_REG) + (compare:CCV (neg:SI (match_operand:SI 1 "arith_operand" "")) + (unspec:SI [(match_dup 1)] UNSPEC_NEGV))) + (set (match_operand:SI 0 "register_operand" "") + (neg:SI (match_dup 1)))]) + (set (pc) + (if_then_else (ne (reg:CCV CC_REG) (const_int 0)) + (label_ref (match_operand 2 "")) + (pc)))] +"") + (define_insn "*cmp_ccnz_neg" [(set (reg:CCNZ CC_REG) (compare:CCNZ (neg:SI (match_operand:SI 0 "arith_operand" "rI")) @@ -5213,6 +5791,73 @@ (define_insn "*cmp_ccxc_neg_set" "subcc\t%%g0, %1, %0" [(set_attr "type" "compare")]) +(define_insn "*cmp_ccc_neg_sltu_set" + [(set (reg:CCC CC_REG) + (compare:CCC (zero_extend:DI + (neg:SI (plus:SI (match_operand:SI 1 "arith_operand" "rI") + (ltu:SI (reg:CCC CC_REG) + (const_int 0))))) + (neg:DI (plus:DI (zero_extend:DI (match_dup 1)) + (ltu:DI (reg:CCC CC_REG) + (const_int 0)))))) + (set (match_operand:SI 0 "register_operand" "=r") + (neg:SI (plus:SI (match_dup 1) + (ltu:SI (reg:CCC CC_REG) (const_int 0)))))] + "" + "subxcc\t%%g0, %1, %0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccv_neg" + [(set (reg:CCV CC_REG) + (compare:CCV (neg:SI (match_operand:SI 0 "arith_operand" "rI")) + (unspec:SI [(match_dup 0)] UNSPEC_NEGV)))] + "" + "subcc\t%%g0, %0, %%g0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccxv_neg" + [(set (reg:CCXV CC_REG) + (compare:CCXV (neg:DI (match_operand:DI 0 "arith_operand" "rI")) + (unspec:DI [(match_dup 0)] UNSPEC_NEGV)))] + "TARGET_ARCH64" + "subcc\t%%g0, %0, %%g0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccv_neg_set" + [(set (reg:CCV CC_REG) + (compare:CCV (neg:SI (match_operand:SI 1 "arith_operand" "rI")) + (unspec:SI [(match_dup 1)] UNSPEC_NEGV))) + (set (match_operand:SI 0 "register_operand" "=r") + (neg:SI (match_dup 1)))] + "" + "subcc\t%%g0, %1, %0" + [(set_attr "type" "compare")]) + +(define_insn "*cmp_ccxv_neg_set" + [(set (reg:CCXV CC_REG) + (compare:CCXV (neg:DI (match_operand:DI 1 "arith_operand" "rI")) + (unspec:DI [(match_dup 1)] UNSPEC_NEGV))) + (set (match_operand:DI 0 "register_operand" "=r") + (neg:DI (match_dup 1)))] + "TARGET_ARCH64" + "subcc\t%%g0, %1, %0" + [(set_attr "type" "compare")]) + +(define_insn 
"*cmp_ccv_neg_sltu_set" + [(set (reg:CCV CC_REG) + (compare:CCV (neg:SI (plus:SI (match_operand:SI 1 "arith_operand" "rI") + (ltu:SI (reg:CCC CC_REG) (const_int 0)))) + (unspec:SI [(plus:SI (match_dup 1) + (ltu:SI (reg:CCC CC_REG) + (const_int 0)))] + UNSPEC_NEGV))) + (set (match_operand:SI 0 "register_operand" "=r") + (neg:SI (plus:SI (match_dup 1) + (ltu:SI (reg:CCC CC_REG) (const_int 0)))))] + "" + "subxcc\t%%g0, %1, %0" + [(set_attr "type" "compare")]) + (define_insn "one_cmpldi2" [(set (match_operand:DI 0 "register_operand" "=r") Index: testsuite/gcc.target/sparc/overflow-1.c =================================================================== --- testsuite/gcc.target/sparc/overflow-1.c (revision 0) +++ testsuite/gcc.target/sparc/overflow-1.c (working copy) @@ -0,0 +1,43 @@ +/* { dg-do compile } */ +/* { dg-options "-O -mcpu=v8" } */ +/* { dg-require-effective-target ilp32 } */ + +#include +#include + +bool my_uadd_overflow (uint32_t a, uint32_t b, uint32_t *res) +{ + return __builtin_add_overflow (a, b, res); +} + +bool my_usub_overflow (uint32_t a, uint32_t b, uint32_t *res) +{ + return __builtin_sub_overflow (a, b, res); +} + +bool my_uneg_overflow (uint32_t a, uint32_t *res) +{ + return __builtin_sub_overflow (0, a, res); +} + +bool my_add_overflow (int32_t a, int32_t b, int32_t *res) +{ + return __builtin_add_overflow (a, b, res); +} + +bool my_sub_overflow (int32_t a, int32_t b, int32_t *res) +{ + return __builtin_sub_overflow (a, b, res); +} + +bool my_neg_overflow (int32_t a, int32_t *res) +{ + return __builtin_sub_overflow (0, a, res); +} + +/* { dg-final { scan-assembler-times "addcc\t%" 2 } } */ +/* { dg-final { scan-assembler-times "subcc\t%" 4 } } */ +/* { dg-final { scan-assembler-times "addx\t%" 3 } } */ +/* { dg-final { scan-assembler-times "bvs" 3 } } */ +/* { dg-final { scan-assembler-not "cmp\t%" } } */ +/* { dg-final { scan-assembler-not "save\t%" } } */ Index: testsuite/gcc.target/sparc/overflow-2.c =================================================================== --- testsuite/gcc.target/sparc/overflow-2.c (revision 0) +++ testsuite/gcc.target/sparc/overflow-2.c (working copy) @@ -0,0 +1,46 @@ +/* { dg-do compile } */ +/* { dg-options "-O -mcpu=v8" } */ +/* { dg-require-effective-target ilp32 } */ + +#include +#include + +bool my_uadd_overflow (uint64_t a, uint64_t b, uint64_t *res) +{ + return __builtin_add_overflow (a, b, res); +} + +bool my_usub_overflow (uint64_t a, uint64_t b, uint64_t *res) +{ + return __builtin_sub_overflow (a, b, res); +} + +bool my_uneg_overflow (uint64_t a, uint64_t *res) +{ + return __builtin_sub_overflow (0, a, res); +} + +bool my_add_overflow (int64_t a, int64_t b, int64_t *res) +{ + return __builtin_add_overflow (a, b, res); +} + +bool my_sub_overflow (int64_t a, int64_t b, int64_t *res) +{ + return __builtin_sub_overflow (a, b, res); +} + +bool my_neg_overflow (int64_t a, int64_t *res) +{ + return __builtin_sub_overflow (0, a, res); +} + +/* { dg-final { scan-assembler-times "addcc\t%" 2 } } */ +/* { dg-final { scan-assembler-times "addxcc\t%" 2 } } */ +/* { dg-final { scan-assembler-times "subcc\t%" 4 } } */ +/* { dg-final { scan-assembler-times "subxcc\t%" 4 } } */ +/* { dg-final { scan-assembler-times "addx\t%" 2 } } */ +/* { dg-final { scan-assembler-times "blu" 1 } } */ +/* { dg-final { scan-assembler-times "bvs" 3 } } */ +/* { dg-final { scan-assembler-not "cmp\t%" } } */ +/* { dg-final { scan-assembler-not "save\t%" } } */ Index: testsuite/gcc.target/sparc/overflow-3.c 
===================================================================
--- testsuite/gcc.target/sparc/overflow-3.c	(revision 0)
+++ testsuite/gcc.target/sparc/overflow-3.c	(working copy)
@@ -0,0 +1,44 @@
+/* { dg-do compile } */
+/* { dg-options "-O" } */
+/* { dg-require-effective-target lp64 } */
+
+#include <stdbool.h>
+#include <stdint.h>
+
+bool my_uadd_overflow (uint64_t a, uint64_t b, uint64_t *res)
+{
+  return __builtin_add_overflow (a, b, res);
+}
+
+bool my_usub_overflow (uint64_t a, uint64_t b, uint64_t *res)
+{
+  return __builtin_sub_overflow (a, b, res);
+}
+
+bool my_uneg_overflow (uint64_t a, uint64_t *res)
+{
+  return __builtin_sub_overflow (0, a, res);
+}
+
+bool my_add_overflow (int64_t a, int64_t b, int64_t *res)
+{
+  return __builtin_add_overflow (a, b, res);
+}
+
+bool my_sub_overflow (int64_t a, int64_t b, int64_t *res)
+{
+  return __builtin_sub_overflow (a, b, res);
+}
+
+bool my_neg_overflow (int64_t a, int64_t *res)
+{
+  return __builtin_sub_overflow (0, a, res);
+}
+
+/* { dg-final { scan-assembler-times "addcc\t%" 2 } } */
+/* { dg-final { scan-assembler-times "subcc\t%" 4 } } */
+/* { dg-final { scan-assembler-times "movlu\t%" 1 } } */
+/* { dg-final { scan-assembler-times "blu" 2 } } */
+/* { dg-final { scan-assembler-times "bvs" 3 } } */
+/* { dg-final { scan-assembler-not "cmp\t%" } } */
+/* { dg-final { scan-assembler-not "save\t%" } } */
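
For reference only (not part of the patch or its testsuite): the sub-word
issue discussed above can be seen with a 32-bit operation in 64-bit mode.
Because the port defines WORD_REGISTER_OPERATIONS and SImode is then a
sub-word mode, expand_arith_overflow takes the widening path right away
instead of using the new SImode CCV patterns, so a function along these
lines, built with -m64, is not expected to come out as a single addcc
followed by bvs.  The function name is made up for illustration.

/* Hypothetical illustration, not part of the patch.  In 64-bit mode the
   32-bit addition is widened by expand_arith_overflow before the new
   overflow patterns can match, which is the sub-optimality described in
   the message above.  */
#include <stdbool.h>
#include <stdint.h>

bool my_add_overflow_32 (int32_t a, int32_t b, int32_t *res)
{
  return __builtin_add_overflow (a, b, res);
}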