From patchwork Mon Nov 21 20:50:04 2016
X-Patchwork-Submitter: Steve Ellcey
X-Patchwork-Id: 83298
Delivered-To: patch@linaro.org
Message-ID: <1479761404.14643.10.camel@caviumnetworks.com>
Subject: Re: [PATCH] Partial ILP32 support for aarch64
From: Steve Ellcey
To: Joseph Myers
Date: Mon, 21 Nov 2016 12:50:04 -0800
References: <1479515990.908.96.camel@caviumnetworks.com>
Mailing-List: contact libc-alpha-help@sourceware.org; run by ezmlm
Delivered-To: mailing list libc-alpha@sourceware.org
Here is an updated version with the indentation fixed.

Steve Ellcey
sellcey@caviumnetworks.com

2016-11-21  Andrew Pinski
	    Yury Norov
	    Steve Ellcey

	* sysdeps/aarch64/crti.S: Add include of sysdep.h.
	(call_weak_fn): Use PTR_REG to get correct reg name in ILP32.
	* sysdeps/aarch64/dl-irel.h: Add include of sysdep.h.
	(elf_irela): Use AARCH64_R macro to get correct relocation in ILP32.
	* sysdeps/aarch64/dl-machine.h: Add include of sysdep.h.
	(elf_machine_load_address, RTLD_START, RTLD_START_1,
	elf_machine_type_class, ELF_MACHINE_JMP_SLOT, elf_machine_rela,
	elf_machine_lazy_rel): Add ifdefs for ILP32 support.
	* sysdeps/aarch64/dl-tlsdesc.S (_dl_tlsdesc_return,
	_dl_tlsdesc_return_lazy, _dl_tlsdesc_dynamic,
	_dl_tlsdesc_resolve_hold): Extend pointers in ILP32, use PTR_REG
	to get correct reg name for ILP32.
	* sysdeps/aarch64/dl-trampoline.S (ip0l): New macro.
	(RELA_SIZE): New macro.
	(_dl_runtime_resolve, _dl_runtime_profile): Use new macros and
	PTR_REG to support ILP32.
	* sysdeps/aarch64/jmpbuf-unwind.h (_JMPBUF_CFA_UNWINDS_ADJ): Add
	cast for ILP32 mode.
	* sysdeps/aarch64/memcmp.S (memcmp): Extend arg pointers for
	ILP32 mode.
	* sysdeps/aarch64/memcpy.S (memmove, memcpy): Ditto.
	* sysdeps/aarch64/memset.S (__memset): Ditto.
	* sysdeps/aarch64/strchr.S (strchr): Ditto.
	* sysdeps/aarch64/strchrnul.S (__strchrnul): Ditto.
	* sysdeps/aarch64/strcmp.S (strcmp): Ditto.
	* sysdeps/aarch64/strcpy.S (strcpy): Ditto.
	* sysdeps/aarch64/strlen.S (__strlen): Ditto.
	* sysdeps/aarch64/strncmp.S (strncmp): Ditto.
	* sysdeps/aarch64/strnlen.S (strnlen): Ditto.
	* sysdeps/aarch64/strrchr.S (strrchr): Ditto.
	* sysdeps/unix/sysv/linux/aarch64/clone.S: Ditto.
	* sysdeps/unix/sysv/linux/aarch64/setcontext.S (__setcontext):
	Ditto.
	* sysdeps/unix/sysv/linux/aarch64/swapcontext.S (__swapcontext):
	Ditto.
	* sysdeps/aarch64/__longjmp.S (__longjmp): Extend pointers in
	ILP32, change PTR_MANGLE call to use register numbers instead of
	names.
	* sysdeps/unix/sysv/linux/aarch64/getcontext.S (__getcontext):
	Ditto.
	* sysdeps/aarch64/setjmp.S (__sigsetjmp): Extend arg pointers for
	ILP32 mode, change PTR_MANGLE calls to use register numbers.
	* sysdeps/aarch64/start.S (_start): Ditto.
	* sysdeps/aarch64/nptl/bits/pthreadtypes.h
	(__PTHREAD_RWLOCK_INT_FLAGS_SHARED): New define.
	* sysdeps/aarch64/nptl/bits/semaphore.h (__SIZEOF_SEM_T): Change
	define.
	* sysdeps/aarch64/sysdep.h (AARCH64_R, PTR_REG, PTR_LOG_SIZE,
	DELOUSE, PTR_SIZE): New macros.
	(LDST_PCREL, LDST_GLOBAL): Update to use PTR_REG.
	* sysdeps/unix/sysv/linux/aarch64/bits/fcntl.h (O_LARGEFILE):
	Set when in ILP32 mode.
	(F_GETLK64, F_SETLK64, F_SETLKW64): Only set in LP64 mode.
	* sysdeps/unix/sysv/linux/aarch64/dl-cache.h (DL_CACHE_DEFAULT_ID):
	Set elf flags for ILP32.
	(add_system_dir): Set ILP32 library directories.
	* sysdeps/unix/sysv/linux/aarch64/init-first.c
	(_libc_vdso_platform_setup): Set minimum kernel version for ILP32.
	* sysdeps/unix/sysv/linux/aarch64/ldconfig.h
	(SYSDEP_KNOWN_INTERPRETER_NAMES): Add ILP32 names.
	* sysdeps/unix/sysv/linux/aarch64/sigcontextinfo.h (GET_PC,
	SET_PC): New macros.
	* sysdeps/unix/sysv/linux/aarch64/sysdep.h: Handle ILP32 pointers.

diff --git a/sysdeps/aarch64/__longjmp.S b/sysdeps/aarch64/__longjmp.S
index 65116be..4d411fe 100644
--- a/sysdeps/aarch64/__longjmp.S
+++ b/sysdeps/aarch64/__longjmp.S
@@ -46,6 +46,8 @@ ENTRY (__longjmp)
 	cfi_offset(d14, JB_D14<<3)
 	cfi_offset(d15, JB_D15<<3)
 
+	DELOUSE (0)
+
 	ldp	x19, x20, [x0, #JB_X19<<3]
 	ldp	x21, x22, [x0, #JB_X21<<3]
 	ldp	x23, x24, [x0, #JB_X23<<3]
@@ -53,7 +55,7 @@ ENTRY (__longjmp)
 	ldp	x27, x28, [x0, #JB_X27<<3]
 #ifdef PTR_DEMANGLE
 	ldp	x29, x4, [x0, #JB_X29<<3]
-	PTR_DEMANGLE (x30, x4, x3, x2)
+	PTR_DEMANGLE (30, 4, 3, 2)
 #else
 	ldp	x29, x30, [x0, #JB_X29<<3]
 #endif
@@ -98,7 +100,7 @@ ENTRY (__longjmp)
 	cfi_same_value(d15)
 #ifdef PTR_DEMANGLE
 	ldr	x4, [x0, #JB_SP<<3]
-	PTR_DEMANGLE (x5, x4, x3, x2)
+	PTR_DEMANGLE (5, 4, 3, 2)
 #else
 	ldr	x5, [x0, #JB_SP<<3]
 #endif
diff --git a/sysdeps/aarch64/crti.S b/sysdeps/aarch64/crti.S
index 53ccb42..5c42fd5 100644
--- a/sysdeps/aarch64/crti.S
+++ b/sysdeps/aarch64/crti.S
@@ -39,6 +39,7 @@
    they can be called as functions.  The symbols _init and _fini are
    magic and cause the linker to emit DT_INIT and DT_FINI.
 */
+#include <sysdep.h>
 #include
 
 #ifndef PREINIT_FUNCTION
@@ -60,7 +61,7 @@
 	.type	call_weak_fn, %function
 call_weak_fn:
 	adrp	x0, :got:PREINIT_FUNCTION
-	ldr	x0, [x0, #:got_lo12:PREINIT_FUNCTION]
+	ldr	PTR_REG (0), [x0, #:got_lo12:PREINIT_FUNCTION]
 	cbz	x0, 1f
 	b	PREINIT_FUNCTION
 1:
diff --git a/sysdeps/aarch64/dl-irel.h b/sysdeps/aarch64/dl-irel.h
index 63a8e50..2effca4 100644
--- a/sysdeps/aarch64/dl-irel.h
+++ b/sysdeps/aarch64/dl-irel.h
@@ -23,6 +23,7 @@
 #include
 #include
 #include
+#include <sysdep.h>
 
 #define ELF_MACHINE_IRELA	1
 
@@ -40,7 +41,7 @@ elf_irela (const ElfW(Rela) *reloc)
   ElfW(Addr) *const reloc_addr = (void *) reloc->r_offset;
   const unsigned long int r_type = ELFW(R_TYPE) (reloc->r_info);
 
-  if (__glibc_likely (r_type == R_AARCH64_IRELATIVE))
+  if (__glibc_likely (r_type == AARCH64_R (IRELATIVE)))
     {
       ElfW(Addr) value = elf_ifunc_invoke (reloc->r_addend);
       *reloc_addr = value;
diff --git a/sysdeps/aarch64/dl-machine.h b/sysdeps/aarch64/dl-machine.h
index 282805e..b5ea7a8 100644
--- a/sysdeps/aarch64/dl-machine.h
+++ b/sysdeps/aarch64/dl-machine.h
@@ -21,6 +21,7 @@
 
 #define ELF_MACHINE_NAME "aarch64"
 
+#include <sysdep.h>
 #include
 #include
 #include
@@ -53,19 +54,33 @@ elf_machine_load_address (void)
      by constructing a non GOT reference to the symbol, the dynamic
      address of the symbol we compute using adrp/add to compute the
      symbol's address relative to the PC.
-     This depends on 32bit relocations being resolved at link time
-     and that the static address fits in the 32bits.  */
+     This depends on 32/16bit relocations being resolved at link time
+     and that the static address fits in the 32/16 bits.
*/
 
   ElfW(Addr) static_addr;
   ElfW(Addr) dynamic_addr;
 
   asm ("					\n"
 "	adrp	%1, _dl_start;			\n"
+#ifdef __LP64__
 "	add	%1, %1, #:lo12:_dl_start	\n"
+#else
+"	add	%w1, %w1, #:lo12:_dl_start	\n"
+#endif
 "	ldr	%w0, 1f				\n"
 "	b	2f				\n"
 "1:						\n"
+#ifdef __LP64__
 "	.word	_dl_start			\n"
+#else
+# ifdef __AARCH64EB__
+"	.short	0				\n"
+# endif
+"	.short	_dl_start			\n"
+# ifndef __AARCH64EB__
+"	.short	0				\n"
+# endif
+#endif
 "2:						\n"
     : "=r" (static_addr),  "=r" (dynamic_addr));
   return dynamic_addr - static_addr;
@@ -125,80 +140,86 @@ elf_machine_runtime_setup (struct link_map *l, int lazy, int profile)
 
 /* Initial entry point for the dynamic linker.  The C function
    _dl_start is the real entry point, its return value is the user
    program's entry point */
+#ifdef __LP64__
+# define RTLD_START RTLD_START_1 ("x", "3", "sp")
+#else
+# define RTLD_START RTLD_START_1 ("w", "2", "wsp")
+#endif
 
-#define RTLD_START asm ("\
-.text								\n\
-.globl _start							\n\
-.type _start, %function						\n\
-.globl _dl_start_user						\n\
-.type _dl_start_user, %function					\n\
-_start:								\n\
-	mov	x0, sp						\n\
-	bl	_dl_start					\n\
-	// returns user entry point in x0			\n\
-	mov	x21, x0						\n\
-_dl_start_user:							\n\
-	// get the original arg count				\n\
-	ldr	x1, [sp]					\n\
-	// get the argv address					\n\
-	add	x2, sp, #8					\n\
-	// get _dl_skip_args to see if we were			\n\
-	// invoked as an executable				\n\
-	adrp	x4, _dl_skip_args				\n\
-	ldr	w4, [x4, #:lo12:_dl_skip_args]			\n\
-	// do we need to adjust argc/argv			\n\
-	cmp	w4, 0						\n\
-	beq	.L_done_stack_adjust				\n\
-	// subtract _dl_skip_args from original arg count	\n\
-	sub	x1, x1, x4					\n\
-	// store adjusted argc back to stack			\n\
-	str	x1, [sp]					\n\
-	// find the first unskipped argument			\n\
-	mov	x3, x2						\n\
-	add	x4, x2, x4, lsl #3				\n\
-	// shuffle argv down					\n\
-1:	ldr	x5, [x4], #8					\n\
-	str	x5, [x3], #8					\n\
-	cmp	x5, #0						\n\
-	bne	1b						\n\
-	// shuffle envp down					\n\
-1:	ldr	x5, [x4], #8					\n\
-	str	x5, [x3], #8					\n\
-	cmp	x5, #0						\n\
-	bne	1b						\n\
-	// shuffle auxv down					\n\
-1:	ldp	x0, x5, [x4, #16]!
							\n\
-	stp	x0, x5, [x3], #16				\n\
-	cmp	x0, #0						\n\
-	bne	1b						\n\
-	// Update _dl_argv					\n\
-	adrp	x3, _dl_argv					\n\
-	str	x2, [x3, #:lo12:_dl_argv]			\n\
-.L_done_stack_adjust:						\n\
-	// compute envp						\n\
-	add	x3, x2, x1, lsl #3				\n\
-	add	x3, x3, #8					\n\
-	adrp	x16, _rtld_local				\n\
-	add	x16, x16, #:lo12:_rtld_local			\n\
-	ldr	x0, [x16]					\n\
-	bl	_dl_init					\n\
-	// load the finalizer function				\n\
-	adrp	x0, _dl_fini					\n\
-	add	x0, x0, #:lo12:_dl_fini				\n\
-	// jump to the user_s entry point			\n\
-	br	x21						\n\
+
+#define RTLD_START_1(PTR, PTR_SIZE_LOG, PTR_SP) asm ("\
+.text								\n\
+.globl _start							\n\
+.type _start, %function						\n\
+.globl _dl_start_user						\n\
+.type _dl_start_user, %function					\n\
+_start:								\n\
+	mov	" PTR "0, " PTR_SP "				\n\
+	bl	_dl_start					\n\
+	// returns user entry point in x0			\n\
+	mov	x21, x0						\n\
+_dl_start_user:							\n\
+	// get the original arg count				\n\
+	ldr	" PTR "1, [sp]					\n\
+	// get the argv address					\n\
+	add	" PTR "2, " PTR_SP ", #(1<<" PTR_SIZE_LOG ")	\n\
+	// get _dl_skip_args to see if we were			\n\
+	// invoked as an executable				\n\
+	adrp	x4, _dl_skip_args				\n\
+	ldr	w4, [x4, #:lo12:_dl_skip_args]			\n\
+	// do we need to adjust argc/argv			\n\
+	cmp	w4, 0						\n\
+	beq	.L_done_stack_adjust				\n\
+	// subtract _dl_skip_args from original arg count	\n\
+	sub	" PTR "1, " PTR "1, " PTR "4			\n\
+	// store adjusted argc back to stack			\n\
+	str	" PTR "1, [sp]					\n\
+	// find the first unskipped argument			\n\
+	mov	" PTR "3, " PTR "2				\n\
+	add	" PTR "4, " PTR "2, " PTR "4, lsl #" PTR_SIZE_LOG "	\n\
+	// shuffle argv down					\n\
+1:	ldr	" PTR "5, [x4], #(1<<" PTR_SIZE_LOG ")		\n\
+	str	" PTR "5, [x3], #(1<<" PTR_SIZE_LOG ")		\n\
+	cmp	" PTR "5, #0					\n\
+	bne	1b						\n\
+	// shuffle envp down					\n\
+1:	ldr	" PTR "5, [x4], #(1<<" PTR_SIZE_LOG ")		\n\
+	str	" PTR "5, [x3], #(1<<" PTR_SIZE_LOG ")		\n\
+	cmp	" PTR "5, #0					\n\
+	bne	1b						\n\
+	// shuffle auxv down					\n\
+1:	ldp	" PTR "0, " PTR "5, [x4, #(2<<" PTR_SIZE_LOG ")]!
	\n\
+	stp	" PTR "0, " PTR "5, [x3], #(2<<" PTR_SIZE_LOG ")	\n\
+	cmp	" PTR "0, #0					\n\
+	bne	1b						\n\
+	// Update _dl_argv					\n\
+	adrp	x3, _dl_argv					\n\
+	str	" PTR "2, [x3, #:lo12:_dl_argv]			\n\
+.L_done_stack_adjust:						\n\
+	// compute envp						\n\
+	add	" PTR "3, " PTR "2, " PTR "1, lsl #" PTR_SIZE_LOG "	\n\
+	add	" PTR "3, " PTR "3, #(1<<" PTR_SIZE_LOG ")	\n\
+	adrp	x16, _rtld_local				\n\
+	add	" PTR "16, " PTR "16, #:lo12:_rtld_local	\n\
+	ldr	" PTR "0, [x16]					\n\
+	bl	_dl_init					\n\
+	// load the finalizer function				\n\
+	adrp	x0, _dl_fini					\n\
+	add	" PTR "0, " PTR "0, #:lo12:_dl_fini		\n\
+	// jump to the user_s entry point			\n\
+	br	x21						\n\
 ");
 
 #define elf_machine_type_class(type) \
-  ((((type) == R_AARCH64_JUMP_SLOT ||				\
-     (type) == R_AARCH64_TLS_DTPMOD ||				\
-     (type) == R_AARCH64_TLS_DTPREL ||				\
-     (type) == R_AARCH64_TLS_TPREL ||				\
-     (type) == R_AARCH64_TLSDESC) * ELF_RTYPE_CLASS_PLT)	\
-   | (((type) == R_AARCH64_COPY) * ELF_RTYPE_CLASS_COPY)	\
-   | (((type) == R_AARCH64_GLOB_DAT) * ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA))
+  ((((type) == AARCH64_R (JUMP_SLOT)				\
+     || (type) == AARCH64_R (TLS_DTPMOD)			\
+     || (type) == AARCH64_R (TLS_DTPREL)			\
+     || (type) == AARCH64_R (TLS_TPREL)				\
+     || (type) == AARCH64_R (TLSDESC)) * ELF_RTYPE_CLASS_PLT)	\
+   | (((type) == AARCH64_R (COPY)) * ELF_RTYPE_CLASS_COPY)	\
+   | (((type) == AARCH64_R (GLOB_DAT)) * ELF_RTYPE_CLASS_EXTERN_PROTECTED_DATA))
 
-#define ELF_MACHINE_JMP_SLOT	R_AARCH64_JUMP_SLOT
+#define ELF_MACHINE_JMP_SLOT	AARCH64_R (JUMP_SLOT)
 
 /* AArch64 uses RELA not REL */
 #define ELF_MACHINE_NO_REL 1
@@ -237,9 +258,9 @@ elf_machine_rela (struct link_map *map, const ElfW(Rela) *reloc,
 		  void *const reloc_addr_arg, int skip_ifunc)
 {
   ElfW(Addr) *const reloc_addr = reloc_addr_arg;
-  const unsigned int r_type = ELF64_R_TYPE (reloc->r_info);
+  const unsigned int r_type = ELFW (R_TYPE) (reloc->r_info);
 
-  if (__builtin_expect (r_type == R_AARCH64_RELATIVE, 0))
+  if (__builtin_expect (r_type == AARCH64_R (RELATIVE), 0))
     *reloc_addr = map->l_addr +
 reloc->r_addend;
   else if (__builtin_expect (r_type == R_AARCH64_NONE, 0))
     return;
@@ -257,7 +278,7 @@ elf_machine_rela (struct link_map *map, const ElfW(Rela) *reloc,
 
       switch (r_type)
 	{
-	case R_AARCH64_COPY:
+	case AARCH64_R (COPY):
 	  if (sym == NULL)
 	      break;
@@ -275,15 +296,17 @@ elf_machine_rela (struct link_map *map, const ElfW(Rela) *reloc,
 		  MIN (sym->st_size, refsym->st_size));
 	  break;
 
-	case R_AARCH64_RELATIVE:
-	case R_AARCH64_GLOB_DAT:
-	case R_AARCH64_JUMP_SLOT:
-	case R_AARCH64_ABS32:
-	case R_AARCH64_ABS64:
+	case AARCH64_R (RELATIVE):
+	case AARCH64_R (GLOB_DAT):
+	case AARCH64_R (JUMP_SLOT):
+	case AARCH64_R (ABS32):
+#ifdef __LP64__
+	case AARCH64_R (ABS64):
+#endif
 	  *reloc_addr = value + reloc->r_addend;
 	  break;
 
-	case R_AARCH64_TLSDESC:
+	case AARCH64_R (TLSDESC):
 	  {
 	    struct tlsdesc volatile *td =
 	      (struct tlsdesc volatile *)reloc_addr;
@@ -318,7 +341,7 @@ elf_machine_rela (struct link_map *map, const ElfW(Rela) *reloc,
 	    break;
 	  }
 
-	case R_AARCH64_TLS_DTPMOD:
+	case AARCH64_R (TLS_DTPMOD):
 #ifdef RTLD_BOOTSTRAP
 	  *reloc_addr = 1;
 #else
@@ -329,12 +352,12 @@ elf_machine_rela (struct link_map *map, const ElfW(Rela) *reloc,
 #endif
 	  break;
 
-	case R_AARCH64_TLS_DTPREL:
+	case AARCH64_R (TLS_DTPREL):
 	  if (sym)
 	    *reloc_addr = sym->st_value + reloc->r_addend;
 	  break;
 
-	case R_AARCH64_TLS_TPREL:
+	case AARCH64_R (TLS_TPREL):
 	  if (sym)
 	    {
 	      CHECK_STATIC_TLS (map, sym_map);
@@ -343,7 +366,7 @@ elf_machine_rela (struct link_map *map, const ElfW(Rela) *reloc,
 	    }
 	  break;
 
-	case R_AARCH64_IRELATIVE:
+	case AARCH64_R (IRELATIVE):
 	  value = map->l_addr + reloc->r_addend;
 	  value = elf_ifunc_invoke (value);
 	  *reloc_addr = value;
@@ -374,16 +397,16 @@ elf_machine_lazy_rel (struct link_map *map,
 		      int skip_ifunc)
 {
   ElfW(Addr) *const reloc_addr = (void *) (l_addr + reloc->r_offset);
-  const unsigned int r_type = ELF64_R_TYPE (reloc->r_info);
+  const unsigned int r_type = ELFW (R_TYPE) (reloc->r_info);
 
   /* Check for unexpected PLT reloc type.
 */
-  if (__builtin_expect (r_type == R_AARCH64_JUMP_SLOT, 1))
+  if (__builtin_expect (r_type == AARCH64_R (JUMP_SLOT), 1))
     {
       if (__builtin_expect (map->l_mach.plt, 0) == 0)
 	*reloc_addr += l_addr;
       else
 	*reloc_addr = map->l_mach.plt;
     }
-  else if (__builtin_expect (r_type == R_AARCH64_TLSDESC, 1))
+  else if (__builtin_expect (r_type == AARCH64_R (TLSDESC), 1))
     {
       struct tlsdesc volatile *td =
 	(struct tlsdesc volatile *)reloc_addr;
@@ -392,7 +415,7 @@ elf_machine_lazy_rel (struct link_map *map,
       td->entry = (void*)(D_PTR (map, l_info[ADDRIDX (DT_TLSDESC_PLT)])
 			  + map->l_addr);
     }
-  else if (__glibc_unlikely (r_type == R_AARCH64_IRELATIVE))
+  else if (__glibc_unlikely (r_type == AARCH64_R (IRELATIVE)))
     {
       ElfW(Addr) value = map->l_addr + reloc->r_addend;
       if (__glibc_likely (!skip_ifunc))
diff --git a/sysdeps/aarch64/dl-tlsdesc.S b/sysdeps/aarch64/dl-tlsdesc.S
index 05be370..42fa943 100644
--- a/sysdeps/aarch64/dl-tlsdesc.S
+++ b/sysdeps/aarch64/dl-tlsdesc.S
@@ -74,7 +74,8 @@
 	cfi_startproc
 	.align 2
 _dl_tlsdesc_return:
-	ldr	x0, [x0, #8]
+	DELOUSE (0)
+	ldr	PTR_REG (0), [x0, #PTR_SIZE]
 	RET
 	cfi_endproc
 	.size	_dl_tlsdesc_return, .-_dl_tlsdesc_return
@@ -95,9 +96,10 @@ _dl_tlsdesc_return_lazy:
      so it reads the same value (this function is the final value of
      td->entry) and thus it synchronizes with the release store to
      td->entry in _dl_tlsdesc_resolve_rela_fixup ensuring that the load
-     from [x0,#8] here happens after the initialization of td->arg.  */
+     from [x0,#PTR_SIZE] here happens after the initialization of td->arg.  */
-	ldar	xzr, [x0]
-	ldr	x0, [x0, #8]
+	DELOUSE (0)
+	ldar	PTR_REG (zr), [x0]
+	ldr	PTR_REG (0), [x0, #PTR_SIZE]
 	RET
 	cfi_endproc
 	.size	_dl_tlsdesc_return_lazy, .-_dl_tlsdesc_return_lazy
@@ -125,10 +127,11 @@ _dl_tlsdesc_undefweak:
      td->entry) and thus it synchronizes with the release store to
     td->entry in _dl_tlsdesc_resolve_rela_fixup ensuring that the load
     from [x0,#8] here happens after the initialization of td->arg.
*/ - ldar xzr, [x0] - ldr x0, [x0, #8] + DELOUSE (0) + ldar PTR_REG (zr), [x0] + ldr PTR_REG (0), [x0, #PTR_SIZE] mrs x1, tpidr_el0 - sub x0, x0, x1 + sub PTR_REG (0), PTR_REG (0), PTR_REG (1) ldr x1, [sp], #16 cfi_adjust_cfa_offset (-16) RET @@ -174,6 +177,7 @@ _dl_tlsdesc_dynamic: stp x29, x30, [sp,#-(32+16*NSAVEXREGPAIRS)]! cfi_adjust_cfa_offset (32+16*NSAVEXREGPAIRS) mov x29, sp + DELOUSE (0) /* Save just enough registers to support fast path, if we fall into slow path we will save additional registers. */ @@ -187,22 +191,22 @@ _dl_tlsdesc_dynamic: so it reads the same value (this function is the final value of td->entry) and thus it synchronizes with the release store to td->entry in _dl_tlsdesc_resolve_rela_fixup ensuring that the load - from [x0,#8] here happens after the initialization of td->arg. */ - ldar xzr, [x0] - ldr x1, [x0,#8] - ldr x0, [x4] - ldr x3, [x1,#16] - ldr x2, [x0] - cmp x3, x2 + from [x0,#PTR_SIZE] here happens after the initialization of td->arg. */ + ldar PTR_REG (zr), [x0] + ldr PTR_REG (1), [x0,#PTR_SIZE] + ldr PTR_REG (0), [x4] + ldr PTR_REG (3), [x1,#(PTR_SIZE * 2)] + ldr PTR_REG (2), [x0] + cmp PTR_REG (3), PTR_REG (2) b.hi 2f - ldr x2, [x1] - add x0, x0, x2, lsl #4 - ldr x0, [x0] + ldr PTR_REG (2), [x1] + add PTR_REG (0), PTR_REG (0), PTR_REG (2), lsl #(PTR_LOG_SIZE + 1) + ldr PTR_REG (0), [x0] cmn x0, #0x1 b.eq 2f - ldr x1, [x1,#8] - add x0, x0, x1 - sub x0, x0, x4 + ldr PTR_REG (1), [x1,#(PTR_SIZE * 2)] + add PTR_REG (0), PTR_REG (0), PTR_REG (1) + sub PTR_REG (0), PTR_REG (0), PTR_REG (4) 1: ldp x1, x2, [sp, #32+16*0] ldp x3, x4, [sp, #32+16*1] @@ -233,7 +237,7 @@ _dl_tlsdesc_dynamic: bl __tls_get_addr mrs x1, tpidr_el0 - sub x0, x0, x1 + sub PTR_REG (0), PTR_REG (0), PTR_REG (1) RESTORE_Q_REGISTERS @@ -279,13 +283,15 @@ _dl_tlsdesc_resolve_rela: SAVE_Q_REGISTERS - ldr x1, [x3, #8] + DELOUSE (3) + ldr PTR_REG (1), [x3, #PTR_SIZE] bl _dl_tlsdesc_resolve_rela_fixup RESTORE_Q_REGISTERS ldr x0, [sp, #32+16*8] - ldr x1, [x0] + 
+	DELOUSE (0)
+	ldr	PTR_REG (1), [x0]
 	blr	x1
 
 	ldp	x1, x4, [sp, #32+16*0]
@@ -346,7 +352,8 @@ _dl_tlsdesc_resolve_hold:
 	RESTORE_Q_REGISTERS
 
 	ldr	x0, [sp, #32+16*9]
-	ldr	x1, [x0]
+	DELOUSE (0)
+	ldr	PTR_REG (1), [x0]
 	blr	x1
 
 	ldp	x1, x2, [sp, #32+16*0]
diff --git a/sysdeps/aarch64/dl-trampoline.S b/sysdeps/aarch64/dl-trampoline.S
index 947a515..63ef6f7 100644
--- a/sysdeps/aarch64/dl-trampoline.S
+++ b/sysdeps/aarch64/dl-trampoline.S
@@ -22,9 +22,13 @@
 #include "dl-link.h"
 
 #define ip0 x16
+#define ip0l PTR_REG (16)
 #define ip1 x17
 #define lr  x30
 
+/* RELA relocations are 3 pointers */
+#define RELA_SIZE (PTR_SIZE * 3)
+
 	.text
 	.globl _dl_runtime_resolve
 	.type _dl_runtime_resolve, #function
@@ -79,7 +83,7 @@ _dl_runtime_resolve:
 	cfi_rel_offset (q1, 80+7*16)
 
 	/* Get pointer to linker struct.  */
-	ldr	x0, [ip0, #-8]
+	ldr	PTR_REG (0), [ip0, #-PTR_SIZE]
 
 	/* Prepare to call _dl_fixup().  */
 	ldr	x1, [sp, 80+8*16]	/* Recover &PLTGOT[n] */
@@ -87,7 +91,7 @@ _dl_runtime_resolve:
 	sub	x1, x1, ip0
 	add	x1, x1, x1, lsl #1
 	lsl	x1, x1, #3
-	sub	x1, x1, #192
+	sub	x1, x1, #(RELA_SIZE<<3)
 	lsr	x1, x1, #3
 
 	/* Call fixup routine.  */
@@ -191,7 +195,7 @@ _dl_runtime_profile:
 	stp	x0, x1, [x29, #OFFSET_RG + DL_OFFSET_RG_SP]
 
 	/* Get pointer to linker struct.  */
-	ldr	x0, [ip0, #-8]
+	ldr	PTR_REG (0), [ip0, #-PTR_SIZE]
 
 	/* Prepare to call _dl_profile_fixup().  */
 	ldr	x1, [x29, OFFSET_PLTGOTN]	/* Recover &PLTGOT[n] */
@@ -199,7 +203,7 @@ _dl_runtime_profile:
 	sub	x1, x1, ip0
 	add	x1, x1, x1, lsl #1
 	lsl	x1, x1, #3
-	sub	x1, x1, #192
+	sub	x1, x1, #(RELA_SIZE<<3)
 	lsr	x1, x1, #3
 
 	stp	x0, x1, [x29, #OFFSET_SAVED_CALL_X0]
@@ -210,8 +214,8 @@ _dl_runtime_profile:
 	add	x4, x29, #OFFSET_FS	/* address of framesize */
 	bl	_dl_profile_fixup
 
-	ldr	ip0, [x29, #OFFSET_FS]	/* framesize == 0 */
-	cmp	ip0, #0
+	ldr	ip0l, [x29, #OFFSET_FS]	/* framesize == 0 */
+	cmp	ip0l, #0
 	bge	1f
 	cfi_remember_state
@@ -243,7 +247,7 @@ _dl_runtime_profile:
 
 1:
 	/* The new frame size is in ip0.
  */
-	sub	x1, x29, ip0
+	sub	PTR_REG (1), PTR_REG (29), ip0l
 	and	sp, x1, #0xfffffffffffffff0
 	str	x0, [x29, #OFFSET_T1]
diff --git a/sysdeps/aarch64/jmpbuf-unwind.h b/sysdeps/aarch64/jmpbuf-unwind.h
index 3e0a37d..11ace17 100644
--- a/sysdeps/aarch64/jmpbuf-unwind.h
+++ b/sysdeps/aarch64/jmpbuf-unwind.h
@@ -27,7 +27,7 @@
   ((void *) (address) < (void *) demangle (jmpbuf[JB_SP]))
 
 #define _JMPBUF_CFA_UNWINDS_ADJ(jmpbuf, context, adj) \
-  _JMPBUF_UNWINDS_ADJ (jmpbuf, (void *) _Unwind_GetCFA (context), adj)
+  _JMPBUF_UNWINDS_ADJ (jmpbuf, (void *) (size_t) _Unwind_GetCFA (context), adj)
 
 #define _JMPBUF_UNWINDS_ADJ(_jmpbuf, _address, _adj) \
   ((uintptr_t) (_address) - (_adj) < _jmpbuf_sp (_jmpbuf) - (_adj))
diff --git a/sysdeps/aarch64/memcmp.S b/sysdeps/aarch64/memcmp.S
index ae2d997..8b87e9b 100644
--- a/sysdeps/aarch64/memcmp.S
+++ b/sysdeps/aarch64/memcmp.S
@@ -47,6 +47,9 @@
 #define mask	x13
 
 ENTRY_ALIGN (memcmp, 6)
+	DELOUSE (0)
+	DELOUSE (1)
+	DELOUSE (2)
 	cbz	limit, L(ret0)
 	eor	tmp1, src1, src2
 	tst	tmp1, #7
diff --git a/sysdeps/aarch64/memcpy.S b/sysdeps/aarch64/memcpy.S
index de73f0f..b269316 100644
--- a/sysdeps/aarch64/memcpy.S
+++ b/sysdeps/aarch64/memcpy.S
@@ -61,6 +61,10 @@
 
 ENTRY_ALIGN (memmove, 6)
 
+	DELOUSE (0)
+	DELOUSE (1)
+	DELOUSE (2)
+
 	sub	tmp1, dstin, src
 	cmp	count, 96
 	ccmp	tmp1, count, 2, hi
@@ -71,6 +75,10 @@ END (memmove)
 libc_hidden_builtin_def (memmove)
 ENTRY (memcpy)
 
+	DELOUSE (0)
+	DELOUSE (1)
+	DELOUSE (2)
+
 	prfm	PLDL1KEEP, [src]
 	add	srcend, src, count
 	add	dstend, dstin, count
diff --git a/sysdeps/aarch64/memset.S b/sysdeps/aarch64/memset.S
index 4d222c5..7bad29a 100644
--- a/sysdeps/aarch64/memset.S
+++ b/sysdeps/aarch64/memset.S
@@ -39,6 +39,9 @@
 
 ENTRY_ALIGN (__memset, 6)
 
+	DELOUSE (0)
+	DELOUSE (2)
+
 	dup	v0.16B, valw
 	add	dstend, dstin, count
diff --git a/sysdeps/aarch64/nptl/bits/pthreadtypes.h b/sysdeps/aarch64/nptl/bits/pthreadtypes.h
index c376e64..9dcf8d9 100644
--- a/sysdeps/aarch64/nptl/bits/pthreadtypes.h
+++
diff --git a/sysdeps/aarch64/nptl/bits/pthreadtypes.h b/sysdeps/aarch64/nptl/bits/pthreadtypes.h
index c376e64..9dcf8d9 100644
--- a/sysdeps/aarch64/nptl/bits/pthreadtypes.h
+++ b/sysdeps/aarch64/nptl/bits/pthreadtypes.h
@@ -32,6 +32,8 @@
 #define __SIZEOF_PTHREAD_BARRIER_T	32
 #define __SIZEOF_PTHREAD_BARRIERATTR_T	8

+#define __PTHREAD_RWLOCK_INT_FLAGS_SHARED	1
+
 /* Thread identifiers.  The structure of the attribute type is not
    exposed on purpose.  */

diff --git a/sysdeps/aarch64/nptl/bits/semaphore.h b/sysdeps/aarch64/nptl/bits/semaphore.h
index 3cc5b37..05333ea 100644
--- a/sysdeps/aarch64/nptl/bits/semaphore.h
+++ b/sysdeps/aarch64/nptl/bits/semaphore.h
@@ -21,7 +21,11 @@
 #endif

-#define __SIZEOF_SEM_T	32
+#ifdef __ILP32__
+# define __SIZEOF_SEM_T	16
+#else
+# define __SIZEOF_SEM_T	32
+#endif

 /* Value returned if `sem_open' failed.  */

diff --git a/sysdeps/aarch64/setjmp.S b/sysdeps/aarch64/setjmp.S
index 22f4368..e03b3b5 100644
--- a/sysdeps/aarch64/setjmp.S
+++ b/sysdeps/aarch64/setjmp.S
@@ -33,6 +33,7 @@ END (_setjmp)
 libc_hidden_def (_setjmp)

 ENTRY (__sigsetjmp)
+	DELOUSE (0)
 1:
 	stp	x19, x20, [x0, #JB_X19<<3]
@@ -42,7 +43,7 @@ ENTRY (__sigsetjmp)
 	stp	x27, x28, [x0, #JB_X27<<3]

 #ifdef PTR_MANGLE
-	PTR_MANGLE (x4, x30, x3, x2)
+	PTR_MANGLE (4, 30, 3, 2)
 	stp	x29, x4, [x0, #JB_X29<<3]
 #else
 	stp	x29, x30, [x0, #JB_X29<<3]
@@ -57,7 +58,7 @@ ENTRY (__sigsetjmp)
 	stp	d14, d15, [x0, #JB_D14<<3]
 #ifdef PTR_MANGLE
 	mov	x4, sp
-	PTR_MANGLE (x5, x4, x3, x2)
+	PTR_MANGLE (5, 4, 3, 2)
 	str	x5, [x0, #JB_SP<<3]
 #else
 	mov	x2, sp

diff --git a/sysdeps/aarch64/start.S b/sysdeps/aarch64/start.S
index efe2474..9198c57 100644
--- a/sysdeps/aarch64/start.S
+++ b/sysdeps/aarch64/start.S
@@ -16,6 +16,8 @@
    License along with the GNU C Library.  If not, see
    <http://www.gnu.org/licenses/>.  */

+#include <sysdep.h>
+
 /* This is the canonical entry point, usually the first thing in the text
    segment.

@@ -25,7 +27,7 @@
    At this entry point, most registers' values are unspecified, except:

-   x0		Contains a function pointer to be registered with `atexit'.
+   x0/w0	Contains a function pointer to be registered with `atexit'.

    This is how the dynamic linker arranges to have DT_FINI functions
    called for shared libraries that have been loaded before this
    code runs.

@@ -52,26 +54,26 @@ _start:
 	mov	x5, x0

 	/* Load argc and a pointer to argv */
-	ldr	x1, [sp, #0]
-	add	x2, sp, #8
+	ldr	PTR_REG (1), [sp, #0]
+	add	x2, sp, #PTR_SIZE

 	/* Setup stack limit in argument register */
 	mov	x6, sp

 #ifdef SHARED
 	adrp	x0, :got:main
-	ldr	x0, [x0, #:got_lo12:main]
+	ldr	PTR_REG (0), [x0, #:got_lo12:main]

 	adrp	x3, :got:__libc_csu_init
-	ldr	x3, [x3, #:got_lo12:__libc_csu_init]
+	ldr	PTR_REG (3), [x3, #:got_lo12:__libc_csu_init]

 	adrp	x4, :got:__libc_csu_fini
-	ldr	x4, [x4, #:got_lo12:__libc_csu_fini]
+	ldr	PTR_REG (4), [x4, #:got_lo12:__libc_csu_fini]
 #else
 	/* Set up the other arguments in registers */
-	ldr	x0, =main
-	ldr	x3, =__libc_csu_init
-	ldr	x4, =__libc_csu_fini
+	ldr	PTR_REG (0), =main
+	ldr	PTR_REG (3), =__libc_csu_init
+	ldr	PTR_REG (4), =__libc_csu_fini
 #endif

 	/* __libc_start_main (main, argc, argv, init, fini, rtld_fini,

diff --git a/sysdeps/aarch64/strchr.S b/sysdeps/aarch64/strchr.S
index 5e3aecf..c66fea3 100644
--- a/sysdeps/aarch64/strchr.S
+++ b/sysdeps/aarch64/strchr.S
@@ -62,6 +62,7 @@
 /* Locals and temporaries.  */

 ENTRY (strchr)
+	DELOUSE (0)
 	mov	wtmp2, #0x0401
 	movk	wtmp2, #0x4010, lsl #16
 	dup	vrepchr.16b, chrin

diff --git a/sysdeps/aarch64/strchrnul.S b/sysdeps/aarch64/strchrnul.S
index a624c8d..c2cc47e 100644
--- a/sysdeps/aarch64/strchrnul.S
+++ b/sysdeps/aarch64/strchrnul.S
@@ -60,6 +60,7 @@
    identify exactly which byte is causing the termination.  */

 ENTRY (__strchrnul)
+	DELOUSE (0)
 	/* Magic constant 0x40100401 to allow us to identify which lane
	   matches the termination condition.  */
 	mov	wtmp2, #0x0401
diff --git a/sysdeps/aarch64/strcmp.S b/sysdeps/aarch64/strcmp.S
index ba0ccb4..49e528b 100644
--- a/sysdeps/aarch64/strcmp.S
+++ b/sysdeps/aarch64/strcmp.S
@@ -49,6 +49,8 @@

 /* Start of performance-critical section  -- one 64B cache line.  */
 ENTRY_ALIGN(strcmp, 6)
+	DELOUSE (0)
+	DELOUSE (1)
 	eor	tmp1, src1, src2
 	mov	zeroones, #REP8_01
 	tst	tmp1, #7

diff --git a/sysdeps/aarch64/strcpy.S b/sysdeps/aarch64/strcpy.S
index 0694199..45809e8 100644
--- a/sysdeps/aarch64/strcpy.S
+++ b/sysdeps/aarch64/strcpy.S
@@ -91,6 +91,8 @@
 #define MIN_PAGE_SIZE (1 << MIN_PAGE_P2)

 ENTRY_ALIGN (STRCPY, 6)
+	DELOUSE (0)
+	DELOUSE (1)
 	/* For moderately short strings, the fastest way to do the copy is to
	   calculate the length of the string in the same way as strlen, then
	   essentially do a memcpy of the result.  This avoids the need for

diff --git a/sysdeps/aarch64/strlen.S b/sysdeps/aarch64/strlen.S
index a07834b..5fb653a 100644
--- a/sysdeps/aarch64/strlen.S
+++ b/sysdeps/aarch64/strlen.S
@@ -85,6 +85,8 @@
	   boundary.  */

 ENTRY_ALIGN (__strlen, 6)
+	DELOUSE (0)
+	DELOUSE (1)
 	and	tmp1, srcin, MIN_PAGE_SIZE - 1
 	mov	zeroones, REP8_01
 	cmp	tmp1, MIN_PAGE_SIZE - 16

diff --git a/sysdeps/aarch64/strncmp.S b/sysdeps/aarch64/strncmp.S
index f6a17fd..02de93c 100644
--- a/sysdeps/aarch64/strncmp.S
+++ b/sysdeps/aarch64/strncmp.S
@@ -51,6 +51,9 @@
 #define endloop	x15

 ENTRY_ALIGN_AND_PAD (strncmp, 6, 7)
+	DELOUSE (0)
+	DELOUSE (1)
+	DELOUSE (2)
 	cbz	limit, L(ret0)
 	eor	tmp1, src1, src2
 	mov	zeroones, #REP8_01

diff --git a/sysdeps/aarch64/strnlen.S b/sysdeps/aarch64/strnlen.S
index 4cce45f..af765f1 100644
--- a/sysdeps/aarch64/strnlen.S
+++ b/sysdeps/aarch64/strnlen.S
@@ -50,6 +50,9 @@
 #define REP8_80 0x8080808080808080

 ENTRY_ALIGN_AND_PAD (__strnlen, 6, 9)
+	DELOUSE (0)
+	DELOUSE (1)
+	DELOUSE (2)
 	cbz	limit, L(hit_limit)
 	mov	zeroones, #REP8_01
 	bic	src, srcin, #15

diff --git a/sysdeps/aarch64/strrchr.S b/sysdeps/aarch64/strrchr.S
index 44c1917..ea37968 100644
--- a/sysdeps/aarch64/strrchr.S
+++ b/sysdeps/aarch64/strrchr.S
@@ -68,6 +68,7 @@
    identify exactly which byte is causing the termination, and why.  */

 ENTRY(strrchr)
+	DELOUSE (0)
 	cbz	x1, L(null_search)
 	/* Magic constant 0x40100401 to allow us to identify which lane
	   matches the requested byte.  Magic constant 0x80200802 used

diff --git a/sysdeps/aarch64/sysdep.h b/sysdeps/aarch64/sysdep.h
index e045759..0a7dccb 100644
--- a/sysdeps/aarch64/sysdep.h
+++ b/sysdeps/aarch64/sysdep.h
@@ -16,8 +16,25 @@
    License along with the GNU C Library; if not, see
    <http://www.gnu.org/licenses/>.  */

+#ifndef _AARCH64_SYSDEP_H
+#define _AARCH64_SYSDEP_H
+
 #include <sysdeps/generic/sysdep.h>

+#ifdef __LP64__
+# define AARCH64_R(NAME)	R_AARCH64_ ## NAME
+# define PTR_REG(n)		x##n
+# define PTR_LOG_SIZE		3
+# define DELOUSE(n)
+#else
+# define AARCH64_R(NAME)	R_AARCH64_P32_ ## NAME
+# define PTR_REG(n)		w##n
+# define PTR_LOG_SIZE		2
+# define DELOUSE(n)		mov	w##n, w##n
+#endif
+
+#define PTR_SIZE	(1<<PTR_LOG_SIZE)
+

diff --git a/sysdeps/unix/sysv/linux/aarch64/dl-cache.h b/sysdeps/unix/sysv/linux/aarch64/dl-cache.h
--- a/sysdeps/unix/sysv/linux/aarch64/dl-cache.h
+++ b/sysdeps/unix/sysv/linux/aarch64/dl-cache.h
 #include <ldconfig.h>

-#define _DL_CACHE_DEFAULT_ID	(FLAG_AARCH64_LIB64 | FLAG_ELF_LIBC6)
+#ifdef __LP64__
+# define _DL_CACHE_DEFAULT_ID	(FLAG_AARCH64_LIB64 | FLAG_ELF_LIBC6)
+#else
+# define _DL_CACHE_DEFAULT_ID	(FLAG_AARCH64_LIB32 | FLAG_ELF_LIBC6)
+#endif

 #define _dl_cache_check_flags(flags) \
   ((flags) == _DL_CACHE_DEFAULT_ID)
@@ -27,18 +31,25 @@
   do								\
     {								\
       size_t len = strlen (dir);				\
-      char path[len + 3];					\
+      char path[len + 6];					\
       memcpy (path, dir, len + 1);				\
       if (len >= 6 && ! memcmp (path + len - 6, "/lib64", 6))	\
	{							\
	  len -= 2;						\
	  path[len] = '\0';					\
	}							\
+      if (len >= 9 && ! memcmp (path + len - 9, "/libilp32", 9))\
+	{							\
+	  len -= 5;						\
+	  path[len] = '\0';					\
+	}							\
       add_dir (path);						\
       if (len >= 4 && ! memcmp (path + len - 4, "/lib", 4))	\
	{							\
	  memcpy (path + len, "64", 3);				\
	  add_dir (path);					\
+	  memcpy (path + len, "ilp32", 6);			\
+	  add_dir (path);					\
	}							\
     }								\
   while (0)
diff --git a/sysdeps/unix/sysv/linux/aarch64/getcontext.S b/sysdeps/unix/sysv/linux/aarch64/getcontext.S
index c2dd5b8..f6bf24f 100644
--- a/sysdeps/unix/sysv/linux/aarch64/getcontext.S
+++ b/sysdeps/unix/sysv/linux/aarch64/getcontext.S
@@ -30,6 +30,7 @@
 	.text

 ENTRY(__getcontext)
+	DELOUSE (0)
 	/* The saved context will return to the getcontext() call point
	   with a return value of 0 */
 	str	xzr, [x0, oX0 + 0 * SZREG]
@@ -90,7 +91,7 @@ ENTRY(__getcontext)
 	/* Grab the signal mask */
 	/* rt_sigprocmask (SIG_BLOCK, NULL, &ucp->uc_sigmask, _NSIG8) */
-	add	x2, x0, #UCONTEXT_SIGMASK
+	add	PTR_REG (2), PTR_REG (0), #UCONTEXT_SIGMASK
 	mov	x0, SIG_BLOCK
 	mov	x1, 0
 	mov	x3, _NSIG8

diff --git a/sysdeps/unix/sysv/linux/aarch64/init-first.c b/sysdeps/unix/sysv/linux/aarch64/init-first.c
index f7224a2..f7bfc4d 100644
--- a/sysdeps/unix/sysv/linux/aarch64/init-first.c
+++ b/sysdeps/unix/sysv/linux/aarch64/init-first.c
@@ -27,17 +27,21 @@ int (*VDSO_SYMBOL(clock_getres)) (clockid_t, struct timespec *);
 static inline void
 _libc_vdso_platform_setup (void)
 {
-  PREPARE_VERSION (linux2639, "LINUX_2.6.39", 123718537);
+#ifdef __LP64__
+  PREPARE_VERSION (linux_version, "LINUX_2.6.39", 123718537);
+#else
+  PREPARE_VERSION (linux_version, "LINUX_4.9", 61765625);
+#endif

-  void *p = _dl_vdso_vsym ("__kernel_gettimeofday", &linux2639);
+  void *p = _dl_vdso_vsym ("__kernel_gettimeofday", &linux_version);
   PTR_MANGLE (p);
   VDSO_SYMBOL(gettimeofday) = p;

-  p = _dl_vdso_vsym ("__kernel_clock_gettime", &linux2639);
+  p = _dl_vdso_vsym ("__kernel_clock_gettime", &linux_version);
   PTR_MANGLE (p);
   VDSO_SYMBOL(clock_gettime) = p;

-  p = _dl_vdso_vsym ("__kernel_clock_getres", &linux2639);
+  p = _dl_vdso_vsym ("__kernel_clock_getres", &linux_version);
   PTR_MANGLE (p);
   VDSO_SYMBOL(clock_getres) = p;
 }
diff --git a/sysdeps/unix/sysv/linux/aarch64/ldconfig.h b/sysdeps/unix/sysv/linux/aarch64/ldconfig.h
index ee91ef8..ac84194 100644
--- a/sysdeps/unix/sysv/linux/aarch64/ldconfig.h
+++ b/sysdeps/unix/sysv/linux/aarch64/ldconfig.h
@@ -21,6 +21,8 @@
 #define SYSDEP_KNOWN_INTERPRETER_NAMES \
   { "/lib/ld-linux-aarch64.so.1", FLAG_ELF_LIBC6 },	\
   { "/lib/ld-linux-aarch64_be.so.1", FLAG_ELF_LIBC6 },	\
+  { "/lib/ld-linux-aarch64_ilp32.so.1", FLAG_ELF_LIBC6 }, \
+  { "/lib/ld-linux-aarch64_be_ilp32.so.1", FLAG_ELF_LIBC6 }, \
   { "/lib/ld-linux.so.3", FLAG_ELF_LIBC6 },		\
   { "/lib/ld-linux-armhf.so.3", FLAG_ELF_LIBC6 },
 #define SYSDEP_KNOWN_LIBRARY_NAMES \

diff --git a/sysdeps/unix/sysv/linux/aarch64/setcontext.S b/sysdeps/unix/sysv/linux/aarch64/setcontext.S
index d17f8c8..c2bca26 100644
--- a/sysdeps/unix/sysv/linux/aarch64/setcontext.S
+++ b/sysdeps/unix/sysv/linux/aarch64/setcontext.S
@@ -34,6 +34,7 @@
 	.text

 ENTRY (__setcontext)
+	DELOUSE (0)
 	/* Save a copy of UCP.  */
 	mov	x9, x0

diff --git a/sysdeps/unix/sysv/linux/aarch64/sigcontextinfo.h b/sysdeps/unix/sysv/linux/aarch64/sigcontextinfo.h
index a579501..ee54222 100644
--- a/sysdeps/unix/sysv/linux/aarch64/sigcontextinfo.h
+++ b/sysdeps/unix/sysv/linux/aarch64/sigcontextinfo.h
@@ -19,7 +19,7 @@
 #include

 #define SIGCONTEXT siginfo_t *_si, struct ucontext *
-#define GET_PC(ctx) ((void *) (ctx)->uc_mcontext.pc)
+#define GET_PC(ctx) ((void *) (size_t) (ctx)->uc_mcontext.pc)

 /* There is no reliable way to get the sigcontext unless we use a
    three-argument signal handler.  */

diff --git a/sysdeps/unix/sysv/linux/aarch64/swapcontext.S b/sysdeps/unix/sysv/linux/aarch64/swapcontext.S
index c1a16f3..8e2cadd 100644
--- a/sysdeps/unix/sysv/linux/aarch64/swapcontext.S
+++ b/sysdeps/unix/sysv/linux/aarch64/swapcontext.S
@@ -27,6 +27,7 @@
 	.text

 ENTRY(__swapcontext)
+	DELOUSE (0)
 	/* Set the value returned when swapcontext() returns in this
	   context.  */
 	str	xzr, [x0, oX0 + 0 * SZREG]

diff --git a/sysdeps/unix/sysv/linux/aarch64/sysdep.h b/sysdeps/unix/sysv/linux/aarch64/sysdep.h
index a397e50..1ffabc2 100644
--- a/sysdeps/unix/sysv/linux/aarch64/sysdep.h
+++ b/sysdeps/unix/sysv/linux/aarch64/sysdep.h
@@ -250,12 +250,14 @@
      (!defined SHARED && (IS_IN (libc)		\
			   || IS_IN (libpthread))))
 # ifdef __ASSEMBLER__
+/* Note, dst, src, guard, and tmp are all register numbers rather than
+   register names so they will work with both ILP32 and LP64.  */
 # define PTR_MANGLE(dst, src, guard, tmp)				\
   LDST_PCREL (ldr, guard, tmp, C_SYMBOL_NAME(__pointer_chk_guard_local)); \
   PTR_MANGLE2 (dst, src, guard)
 /* Use PTR_MANGLE2 for efficiency if guard is already loaded.  */
 # define PTR_MANGLE2(dst, src, guard)\
-  eor dst, src, guard
+  eor x##dst, x##src, x##guard
 # define PTR_DEMANGLE(dst, src, guard, tmp)\
   PTR_MANGLE (dst, src, guard, tmp)
 # define PTR_DEMANGLE2(dst, src, guard)\
@@ -268,12 +270,14 @@ extern uintptr_t __pointer_chk_guard_local attribute_relro attribute_hidden;
 # endif
 #else
 # ifdef __ASSEMBLER__
+/* Note, dst, src, guard, and tmp are all register numbers rather than
+   register names so they will work with both ILP32 and LP64.  */
 # define PTR_MANGLE(dst, src, guard, tmp)				\
   LDST_GLOBAL (ldr, guard, tmp, C_SYMBOL_NAME(__pointer_chk_guard)); \
   PTR_MANGLE2 (dst, src, guard)
 /* Use PTR_MANGLE2 for efficiency if guard is already loaded.  */
 # define PTR_MANGLE2(dst, src, guard)\
-  eor dst, src, guard
+  eor x##dst, x##src, x##guard
 # define PTR_DEMANGLE(dst, src, guard, tmp)\
   PTR_MANGLE (dst, src, guard, tmp)
 # define PTR_DEMANGLE2(dst, src, guard)\