From patchwork Fri Jun 30 14:10:55 2017
X-Patchwork-Submitter: Petri Savolainen
X-Patchwork-Id: 106714
From: Petri Savolainen
To: lng-odp@lists.linaro.org
Date: Fri, 30 Jun 2017 17:10:55 +0300
Message-ID: <20170630141056.11272-4-petri.savolainen@linaro.org>
X-Mailer: git-send-email 2.13.0
In-Reply-To: <20170630141056.11272-1-petri.savolainen@linaro.org>
References: <20170630141056.11272-1-petri.savolainen@linaro.org>
Subject: [lng-odp] [API-NEXT PATCH 3/4] linux-gen: sched: remove most dependencies to qentry
List-Id: "The OpenDataPlane (ODP) List"

Moved the ordered queue context structure from the queue internal structure to the scheduler. Ordering is a scheduler feature, so all data and code related to ordering should live in the scheduler implementation. This removes most dependencies on qentry from the scheduler. The remaining dependencies come from the queue interface definition, which is not changed in this patch.
Signed-off-by: Petri Savolainen
---
 .../linux-generic/include/odp_queue_internal.h |   7 --
 platform/linux-generic/odp_queue.c             |  40 +-----
 platform/linux-generic/odp_schedule.c          | 135 ++++++++++++++-------
 platform/linux-generic/odp_schedule_iquery.c   | 113 +++++++++++------
 4 files changed, 166 insertions(+), 129 deletions(-)

-- 
2.13.0

diff --git a/platform/linux-generic/include/odp_queue_internal.h b/platform/linux-generic/include/odp_queue_internal.h
index d79abd23..032dde88 100644
--- a/platform/linux-generic/include/odp_queue_internal.h
+++ b/platform/linux-generic/include/odp_queue_internal.h
@@ -42,13 +42,6 @@ struct queue_entry_s {
 	odp_buffer_hdr_t *tail;
 	int status;
 
-	struct {
-		odp_atomic_u64_t ctx; /**< Current ordered context id */
-		odp_atomic_u64_t next_ctx; /**< Next unallocated context id */
-		/** Array of ordered locks */
-		odp_atomic_u64_t lock[CONFIG_QUEUE_MAX_ORD_LOCKS];
-	} ordered ODP_ALIGNED_CACHE;
-
 	queue_enq_fn_t enqueue ODP_ALIGNED_CACHE;
 	queue_deq_fn_t dequeue;
 	queue_enq_multi_fn_t enqueue_multi;
diff --git a/platform/linux-generic/odp_queue.c b/platform/linux-generic/odp_queue.c
index 2db95fc6..d907779b 100644
--- a/platform/linux-generic/odp_queue.c
+++ b/platform/linux-generic/odp_queue.c
@@ -57,16 +57,6 @@ static inline odp_queue_t queue_from_id(uint32_t queue_id)
 	return _odp_cast_scalar(odp_queue_t, queue_id + 1);
 }
 
-static inline int queue_is_atomic(queue_entry_t *qe)
-{
-	return qe->s.param.sched.sync == ODP_SCHED_SYNC_ATOMIC;
-}
-
-static inline int queue_is_ordered(queue_entry_t *qe)
-{
-	return qe->s.param.sched.sync == ODP_SCHED_SYNC_ORDERED;
-}
-
 queue_entry_t *get_qentry(uint32_t queue_id)
 {
 	return &queue_tbl->queue[queue_id];
@@ -278,13 +268,6 @@ static int queue_destroy(odp_queue_t handle)
 		ODP_ERR("queue \"%s\" not empty\n", queue->s.name);
 		return -1;
 	}
-	if (queue_is_ordered(queue) &&
-	    odp_atomic_load_u64(&queue->s.ordered.ctx) !=
-	    odp_atomic_load_u64(&queue->s.ordered.next_ctx)) {
-		UNLOCK(&queue->s.lock);
-		ODP_ERR("queue \"%s\" reorder incomplete\n", queue->s.name);
-		return -1;
-	}
 
 	switch (queue->s.status) {
 	case QUEUE_STATUS_READY:
@@ -610,20 +593,9 @@ static int queue_init(queue_entry_t *queue, const char *name,
 	if (queue->s.param.sched.lock_count > sched_fn->max_ordered_locks())
 		return -1;
 
-	if (param->type == ODP_QUEUE_TYPE_SCHED) {
+	if (param->type == ODP_QUEUE_TYPE_SCHED)
 		queue->s.param.deq_mode = ODP_QUEUE_OP_DISABLED;
 
-		if (param->sched.sync == ODP_SCHED_SYNC_ORDERED) {
-			unsigned i;
-
-			odp_atomic_init_u64(&queue->s.ordered.ctx, 0);
-			odp_atomic_init_u64(&queue->s.ordered.next_ctx, 0);
-
-			for (i = 0; i < queue->s.param.sched.lock_count; i++)
-				odp_atomic_init_u64(&queue->s.ordered.lock[i],
-						    0);
-		}
-	}
 	queue->s.type = queue->s.param.type;
 
 	queue->s.enqueue = queue_int_enq;
@@ -719,16 +691,6 @@ int sched_cb_queue_grp(uint32_t queue_index)
 	return qe->s.param.sched.group;
 }
 
-int sched_cb_queue_is_ordered(uint32_t queue_index)
-{
-	return queue_is_ordered(get_qentry(queue_index));
-}
-
-int sched_cb_queue_is_atomic(uint32_t queue_index)
-{
-	return queue_is_atomic(get_qentry(queue_index));
-}
-
 odp_queue_t sched_cb_queue_handle(uint32_t queue_index)
 {
 	return queue_from_id(queue_index);
diff --git a/platform/linux-generic/odp_schedule.c b/platform/linux-generic/odp_schedule.c
index 53670a71..8af27673 100644
--- a/platform/linux-generic/odp_schedule.c
+++ b/platform/linux-generic/odp_schedule.c
@@ -65,8 +65,11 @@ ODP_STATIC_ASSERT((ODP_SCHED_PRIO_NORMAL > 0) &&
 /* Maximum number of pktio poll commands */
 #define NUM_PKTIO_CMD (MAX_PKTIN * NUM_PKTIO)
 
+/* Not a valid index */
+#define NULL_INDEX ((uint32_t)-1)
+
 /* Not a valid poll command */
-#define PKTIO_CMD_INVALID ((uint32_t)-1)
+#define PKTIO_CMD_INVALID NULL_INDEX
 
 /* Pktio command is free */
 #define PKTIO_CMD_FREE PKTIO_CMD_INVALID
@@ -90,7 +93,7 @@ ODP_STATIC_ASSERT((ODP_SCHED_PRIO_NORMAL > 0) &&
 #define PRIO_QUEUE_MASK (PRIO_QUEUE_RING_SIZE - 1)
 
 /* Priority queue empty, not a valid queue index. */
-#define PRIO_QUEUE_EMPTY ((uint32_t)-1)
+#define PRIO_QUEUE_EMPTY NULL_INDEX
 
 /* For best performance, the number of queues should be a power of two. */
 ODP_STATIC_ASSERT(CHECK_IS_POWER2(ODP_CONFIG_QUEUES),
@@ -127,7 +130,7 @@ ODP_STATIC_ASSERT((8 * sizeof(pri_mask_t)) >= QUEUES_PER_PRIO,
 /* Storage for stashed enqueue operation arguments */
 typedef struct {
 	odp_buffer_hdr_t *buf_hdr[QUEUE_MULTI_MAX];
-	queue_entry_t *queue;
+	uint32_t queue_index;
 	int num;
 } ordered_stash_t;
@@ -152,7 +155,8 @@ typedef struct {
 	odp_queue_t queue;
 	odp_event_t ev_stash[MAX_DEQ];
 	struct {
-		queue_entry_t *src_queue; /**< Source queue entry */
+		/* Source queue index */
+		uint32_t src_queue;
 		uint64_t ctx; /**< Ordered context id */
 		int stash_num; /**< Number of stashed enqueue operations */
 		uint8_t in_order; /**< Order status */
@@ -197,6 +201,19 @@ typedef struct {
 	uint32_t cmd_index;
 } pktio_cmd_t;
 
+/* Order context of a queue */
+typedef struct {
+	/* Current ordered context id */
+	odp_atomic_u64_t ctx ODP_ALIGNED_CACHE;
+
+	/* Next unallocated context id */
+	odp_atomic_u64_t next_ctx;
+
+	/* Array of ordered locks */
+	odp_atomic_u64_t lock[CONFIG_QUEUE_MAX_ORD_LOCKS];
+
+} order_context_t ODP_ALIGNED_CACHE;
+
 typedef struct {
 	pri_mask_t pri_mask[NUM_PRIO];
 	odp_spinlock_t mask_lock;
@@ -230,6 +247,8 @@ typedef struct {
 		int grp;
 		int prio;
 		int queue_per_prio;
+		int sync;
+		unsigned order_lock_count;
 	} queue[ODP_CONFIG_QUEUES];
 
 	struct {
@@ -237,6 +256,8 @@ typedef struct {
 		int num_cmd;
 	} pktio[NUM_PKTIO];
 
+	order_context_t order[ODP_CONFIG_QUEUES];
+
 } sched_global_t;
 
 /* Global scheduler context */
@@ -259,6 +280,7 @@ static void sched_local_init(void)
 	sched_local.thr = odp_thread_id();
 	sched_local.queue = ODP_QUEUE_INVALID;
 	sched_local.queue_index = PRIO_QUEUE_EMPTY;
+	sched_local.ordered.src_queue = NULL_INDEX;
 
 	id = sched_local.thr & (QUEUES_PER_PRIO - 1);
@@ -488,16 +510,35 @@ static void pri_clr_queue(uint32_t queue_index, int prio)
 static int schedule_init_queue(uint32_t queue_index,
 			       const odp_schedule_param_t *sched_param)
 {
+	int i;
 	int prio = sched_param->prio;
 
 	pri_set_queue(queue_index, prio);
 	sched->queue[queue_index].grp = sched_param->group;
 	sched->queue[queue_index].prio = prio;
 	sched->queue[queue_index].queue_per_prio = queue_per_prio(queue_index);
+	sched->queue[queue_index].sync = sched_param->sync;
+	sched->queue[queue_index].order_lock_count = sched_param->lock_count;
+
+	odp_atomic_init_u64(&sched->order[queue_index].ctx, 0);
+	odp_atomic_init_u64(&sched->order[queue_index].next_ctx, 0);
+
+	for (i = 0; i < CONFIG_QUEUE_MAX_ORD_LOCKS; i++)
+		odp_atomic_init_u64(&sched->order[queue_index].lock[i], 0);
 
 	return 0;
 }
 
+static inline int queue_is_atomic(uint32_t queue_index)
+{
+	return sched->queue[queue_index].sync == ODP_SCHED_SYNC_ATOMIC;
+}
+
+static inline int queue_is_ordered(uint32_t queue_index)
+{
+	return sched->queue[queue_index].sync == ODP_SCHED_SYNC_ORDERED;
+}
+
 static void schedule_destroy_queue(uint32_t queue_index)
 {
 	int prio = sched->queue[queue_index].prio;
@@ -506,6 +547,11 @@ static void schedule_destroy_queue(uint32_t queue_index)
 	sched->queue[queue_index].grp = 0;
 	sched->queue[queue_index].prio = 0;
 	sched->queue[queue_index].queue_per_prio = 0;
+
+	if (queue_is_ordered(queue_index) &&
+	    odp_atomic_load_u64(&sched->order[queue_index].ctx) !=
+	    odp_atomic_load_u64(&sched->order[queue_index].next_ctx))
+		ODP_ERR("queue reorder incomplete\n");
 }
 
 static int poll_cmd_queue_idx(int pktio_index, int pktin_idx)
@@ -606,20 +652,20 @@ static void schedule_release_atomic(void)
 	}
 }
 
-static inline int ordered_own_turn(queue_entry_t *queue)
+static inline int ordered_own_turn(uint32_t queue_index)
 {
 	uint64_t ctx;
 
-	ctx = odp_atomic_load_acq_u64(&queue->s.ordered.ctx);
+	ctx = odp_atomic_load_acq_u64(&sched->order[queue_index].ctx);
 
 	return ctx == sched_local.ordered.ctx;
 }
 
-static inline void wait_for_order(queue_entry_t *queue)
+static inline void wait_for_order(uint32_t queue_index)
 {
 	/* Busy loop to synchronize ordered processing */
 	while (1) {
-		if (ordered_own_turn(queue))
+		if (ordered_own_turn(queue_index))
 			break;
 		odp_cpu_pause();
 	}
@@ -635,52 +681,54 @@ static inline void ordered_stash_release(void)
 	int i;
 
 	for (i = 0; i < sched_local.ordered.stash_num; i++) {
-		queue_entry_t *queue;
+		queue_entry_t *queue_entry;
+		uint32_t queue_index;
 		odp_buffer_hdr_t **buf_hdr;
 		int num;
 
-		queue = sched_local.ordered.stash[i].queue;
+		queue_index = sched_local.ordered.stash[i].queue_index;
+		queue_entry = get_qentry(queue_index);
 		buf_hdr = sched_local.ordered.stash[i].buf_hdr;
 		num = sched_local.ordered.stash[i].num;
 
-		queue_fn->enq_multi(qentry_to_int(queue), buf_hdr, num);
+		queue_fn->enq_multi(qentry_to_int(queue_entry), buf_hdr, num);
 	}
 	sched_local.ordered.stash_num = 0;
 }
 
 static inline void release_ordered(void)
 {
+	uint32_t qi;
 	unsigned i;
-	queue_entry_t *queue;
 
-	queue = sched_local.ordered.src_queue;
+	qi = sched_local.ordered.src_queue;
 
-	wait_for_order(queue);
+	wait_for_order(qi);
 
 	/* Release all ordered locks */
-	for (i = 0; i < queue->s.param.sched.lock_count; i++) {
+	for (i = 0; i < sched->queue[qi].order_lock_count; i++) {
 		if (!sched_local.ordered.lock_called.u8[i])
-			odp_atomic_store_rel_u64(&queue->s.ordered.lock[i],
+			odp_atomic_store_rel_u64(&sched->order[qi].lock[i],
 						 sched_local.ordered.ctx + 1);
 	}
 
 	sched_local.ordered.lock_called.all = 0;
-	sched_local.ordered.src_queue = NULL;
+	sched_local.ordered.src_queue = NULL_INDEX;
 	sched_local.ordered.in_order = 0;
 
 	ordered_stash_release();
 
 	/* Next thread can continue processing */
-	odp_atomic_add_rel_u64(&queue->s.ordered.ctx, 1);
+	odp_atomic_add_rel_u64(&sched->order[qi].ctx, 1);
 }
 
 static void schedule_release_ordered(void)
 {
-	queue_entry_t *queue;
+	uint32_t queue_index;
 
-	queue = sched_local.ordered.src_queue;
+	queue_index = sched_local.ordered.src_queue;
 
-	if (odp_unlikely(!queue || sched_local.num))
+	if (odp_unlikely((queue_index == NULL_INDEX) || sched_local.num))
 		return;
 
 	release_ordered();
@@ -688,7 +736,7 @@ static void schedule_release_ordered(void)
 
 static inline void schedule_release_context(void)
 {
-	if (sched_local.ordered.src_queue != NULL)
+	if (sched_local.ordered.src_queue != NULL_INDEX)
 		release_ordered();
 	else
 		schedule_release_atomic();
@@ -715,9 +763,9 @@ static int schedule_ord_enq_multi(queue_t q_int, void *buf_hdr[],
 	int i;
 	uint32_t stash_num = sched_local.ordered.stash_num;
 	queue_entry_t *dst_queue = qentry_from_int(q_int);
-	queue_entry_t *src_queue = sched_local.ordered.src_queue;
+	uint32_t src_queue = sched_local.ordered.src_queue;
 
-	if (!sched_local.ordered.src_queue || sched_local.ordered.in_order)
+	if ((src_queue == NULL_INDEX) || sched_local.ordered.in_order)
 		return 0;
 
 	if (ordered_own_turn(src_queue)) {
@@ -740,7 +788,7 @@ static int schedule_ord_enq_multi(queue_t q_int, void *buf_hdr[],
 		return 0;
 	}
 
-	sched_local.ordered.stash[stash_num].queue = dst_queue;
+	sched_local.ordered.stash[stash_num].queue_index = dst_queue->s.index;
 	sched_local.ordered.stash[stash_num].num = num;
 	for (i = 0; i < num; i++)
 		sched_local.ordered.stash[stash_num].buf_hdr[i] = buf_hdr[i];
@@ -803,7 +851,7 @@ static inline int do_schedule_grp(odp_queue_t *out_queue, odp_event_t out_ev[],
 				     prio > ODP_SCHED_PRIO_DEFAULT))
 				max_deq = MAX_DEQ / 2;
 
-			ordered = sched_cb_queue_is_ordered(qi);
+			ordered = queue_is_ordered(qi);
 
 			/* Do not cache ordered events locally to improve
 			 * parallelism. Ordered context can only be released
@@ -835,21 +883,18 @@ static inline int do_schedule_grp(odp_queue_t *out_queue, odp_event_t out_ev[],
 
 			if (ordered) {
 				uint64_t ctx;
-				queue_entry_t *queue;
 				odp_atomic_u64_t *next_ctx;
 
-				queue = get_qentry(qi);
-				next_ctx = &queue->s.ordered.next_ctx;
-
+				next_ctx = &sched->order[qi].next_ctx;
 				ctx = odp_atomic_fetch_inc_u64(next_ctx);
 
 				sched_local.ordered.ctx = ctx;
-				sched_local.ordered.src_queue = queue;
+				sched_local.ordered.src_queue = qi;
 
 				/* Continue scheduling ordered queues */
 				ring_enq(ring, PRIO_QUEUE_MASK, qi);
 
-			} else if (sched_cb_queue_is_atomic(qi)) {
+			} else if (queue_is_atomic(qi)) {
 				/* Hold queue during atomic access */
 				sched_local.queue_index = qi;
 			} else {
@@ -1041,14 +1086,14 @@ static int schedule_multi(odp_queue_t *out_queue, uint64_t wait,
 
 static inline void order_lock(void)
 {
-	queue_entry_t *queue;
+	uint32_t queue_index;
 
-	queue = sched_local.ordered.src_queue;
+	queue_index = sched_local.ordered.src_queue;
 
-	if (!queue)
+	if (queue_index == NULL_INDEX)
 		return;
 
-	wait_for_order(queue);
+	wait_for_order(queue_index);
 }
 
 static void order_unlock(void)
@@ -1058,14 +1103,15 @@ static void order_unlock(void)
 static void schedule_order_lock(unsigned lock_index)
 {
 	odp_atomic_u64_t *ord_lock;
-	queue_entry_t *queue;
+	uint32_t queue_index;
 
-	queue = sched_local.ordered.src_queue;
+	queue_index = sched_local.ordered.src_queue;
 
-	ODP_ASSERT(queue && lock_index <= queue->s.param.sched.lock_count &&
+	ODP_ASSERT(queue_index != NULL_INDEX &&
+		   lock_index <= sched->queue[queue_index].order_lock_count &&
 		   !sched_local.ordered.lock_called.u8[lock_index]);
 
-	ord_lock = &queue->s.ordered.lock[lock_index];
+	ord_lock = &sched->order[queue_index].lock[lock_index];
 
 	/* Busy loop to synchronize ordered processing */
 	while (1) {
@@ -1084,13 +1130,14 @@ static void schedule_order_lock(unsigned lock_index)
 static void schedule_order_unlock(unsigned lock_index)
 {
 	odp_atomic_u64_t *ord_lock;
-	queue_entry_t *queue;
+	uint32_t queue_index;
 
-	queue = sched_local.ordered.src_queue;
+	queue_index = sched_local.ordered.src_queue;
 
-	ODP_ASSERT(queue && lock_index <= queue->s.param.sched.lock_count);
+	ODP_ASSERT(queue_index != NULL_INDEX &&
+		   lock_index <= sched->queue[queue_index].order_lock_count);
 
-	ord_lock = &queue->s.ordered.lock[lock_index];
+	ord_lock = &sched->order[queue_index].lock[lock_index];
 
 	ODP_ASSERT(sched_local.ordered.ctx == odp_atomic_load_u64(ord_lock));
diff --git a/platform/linux-generic/odp_schedule_iquery.c b/platform/linux-generic/odp_schedule_iquery.c
index 8d8dcc29..f315a4f0 100644
--- a/platform/linux-generic/odp_schedule_iquery.c
+++ b/platform/linux-generic/odp_schedule_iquery.c
@@ -71,6 +71,8 @@ typedef struct {
 /* Maximum number of pktio poll commands */
 #define NUM_PKTIO_CMD (MAX_PKTIN * NUM_PKTIO)
 
+/* Not a valid index */
+#define NULL_INDEX ((uint32_t)-1)
 /* Pktio command is free */
 #define PKTIO_CMD_FREE ((uint32_t)-1)
@@ -117,6 +119,19 @@ typedef struct {
 /* Forward declaration */
 typedef struct sched_thread_local sched_thread_local_t;
 
+/* Order context of a queue */
+typedef struct {
+	/* Current ordered context id */
+	odp_atomic_u64_t ctx ODP_ALIGNED_CACHE;
+
+	/* Next unallocated context id */
+	odp_atomic_u64_t next_ctx;
+
+	/* Array of ordered locks */
+	odp_atomic_u64_t lock[CONFIG_QUEUE_MAX_ORD_LOCKS];
+
+} order_context_t ODP_ALIGNED_CACHE;
+
 typedef struct {
 	odp_shm_t selfie;
@@ -139,6 +154,8 @@ typedef struct {
 
 	/* Quick reference to per thread context */
 	sched_thread_local_t *threads[ODP_THREAD_COUNT_MAX];
+
+	order_context_t order[ODP_CONFIG_QUEUES];
 } sched_global_t;
 
 /* Per thread events cache */
@@ -154,7 +171,7 @@ typedef struct {
 /* Storage for stashed enqueue operation arguments */
 typedef struct {
 	odp_buffer_hdr_t *buf_hdr[QUEUE_MULTI_MAX];
-	queue_entry_t *queue;
+	uint32_t queue_index;
 	int num;
 } ordered_stash_t;
@@ -195,7 +212,8 @@ struct sched_thread_local {
 	sparse_bitmap_iterator_t iterators[NUM_SCHED_PRIO];
 
 	struct {
-		queue_entry_t *src_queue; /**< Source queue entry */
+		/* Source queue index */
+		uint32_t src_queue;
 		uint64_t ctx; /**< Ordered context id */
 		int stash_num; /**< Number of stashed enqueue operations */
 		uint8_t in_order; /**< Order status */
@@ -314,6 +332,7 @@ static void sched_thread_local_reset(void)
 	thread_local.thread = odp_thread_id();
 	thread_local.cache.queue = ODP_QUEUE_INVALID;
+	thread_local.ordered.src_queue = NULL_INDEX;
 
 	odp_rwlock_init(&thread_local.lock);
@@ -395,7 +414,7 @@ static int schedule_term_local(void)
 static int init_sched_queue(uint32_t queue_index,
 			    const odp_schedule_param_t *sched_param)
 {
-	int prio, group, thread;
+	int prio, group, thread, i;
 	sched_prio_t *P;
 	sched_group_t *G;
 	sched_thread_local_t *local;
@@ -428,6 +447,12 @@ static int init_sched_queue(uint32_t queue_index,
 	memcpy(&sched->queues[queue_index], sched_param,
 	       sizeof(odp_schedule_param_t));
 
+	odp_atomic_init_u64(&sched->order[queue_index].ctx, 0);
+	odp_atomic_init_u64(&sched->order[queue_index].next_ctx, 0);
+
+	for (i = 0; i < CONFIG_QUEUE_MAX_ORD_LOCKS; i++)
+		odp_atomic_init_u64(&sched->order[queue_index].lock[i], 0);
+
 	/* Update all threads in this schedule group to
 	 * start check this queue index upon scheduling. */
@@ -502,6 +527,11 @@ static void destroy_sched_queue(uint32_t queue_index)
 	__destroy_sched_queue(G, queue_index);
 	odp_rwlock_write_unlock(&G->lock);
+
+	if (sched->queues[queue_index].sync == ODP_SCHED_SYNC_ORDERED &&
+	    odp_atomic_load_u64(&sched->order[queue_index].ctx) !=
+	    odp_atomic_load_u64(&sched->order[queue_index].next_ctx))
+		ODP_ERR("queue reorder incomplete\n");
 }
 
 static int pktio_cmd_queue_hash(int pktio, int pktin)
@@ -1070,20 +1100,20 @@ static void schedule_release_atomic(void)
 	}
 }
 
-static inline int ordered_own_turn(queue_entry_t *queue)
+static inline int ordered_own_turn(uint32_t queue_index)
 {
 	uint64_t ctx;
 
-	ctx = odp_atomic_load_acq_u64(&queue->s.ordered.ctx);
+	ctx = odp_atomic_load_acq_u64(&sched->order[queue_index].ctx);
 
 	return ctx == thread_local.ordered.ctx;
 }
 
-static inline void wait_for_order(queue_entry_t *queue)
+static inline void wait_for_order(uint32_t queue_index)
 {
 	/* Busy loop to synchronize ordered processing */
 	while (1) {
-		if (ordered_own_turn(queue))
+		if (ordered_own_turn(queue_index))
 			break;
 		odp_cpu_pause();
 	}
@@ -1099,52 +1129,55 @@ static inline void ordered_stash_release(void)
 	int i;
 
 	for (i = 0; i < thread_local.ordered.stash_num; i++) {
-		queue_entry_t *queue;
+		queue_entry_t *queue_entry;
+		uint32_t queue_index;
 		odp_buffer_hdr_t **buf_hdr;
 		int num;
 
-		queue = thread_local.ordered.stash[i].queue;
+		queue_index = thread_local.ordered.stash[i].queue_index;
+		queue_entry = get_qentry(queue_index);
 		buf_hdr = thread_local.ordered.stash[i].buf_hdr;
 		num = thread_local.ordered.stash[i].num;
 
-		queue_fn->enq_multi(qentry_to_int(queue), buf_hdr, num);
+		queue_fn->enq_multi(qentry_to_int(queue_entry), buf_hdr, num);
 	}
 	thread_local.ordered.stash_num = 0;
 }
 
 static inline void release_ordered(void)
 {
+	uint32_t qi;
 	unsigned i;
-	queue_entry_t *queue;
 
-	queue = thread_local.ordered.src_queue;
+	qi = thread_local.ordered.src_queue;
 
-	wait_for_order(queue);
+	wait_for_order(qi);
 
 	/* Release all ordered locks */
-	for (i = 0; i < 
queue->s.param.sched.lock_count; i++) { + for (i = 0; i < sched->queues[qi].lock_count; i++) { if (!thread_local.ordered.lock_called.u8[i]) - odp_atomic_store_rel_u64(&queue->s.ordered.lock[i], + odp_atomic_store_rel_u64(&sched->order[qi].lock[i], thread_local.ordered.ctx + 1); } thread_local.ordered.lock_called.all = 0; - thread_local.ordered.src_queue = NULL; + thread_local.ordered.src_queue = NULL_INDEX; thread_local.ordered.in_order = 0; ordered_stash_release(); /* Next thread can continue processing */ - odp_atomic_add_rel_u64(&queue->s.ordered.ctx, 1); + odp_atomic_add_rel_u64(&sched->order[qi].ctx, 1); } static void schedule_release_ordered(void) { - queue_entry_t *queue; + uint32_t queue_index; - queue = thread_local.ordered.src_queue; + queue_index = thread_local.ordered.src_queue; - if (odp_unlikely(!queue || thread_local.cache.count)) + if (odp_unlikely((queue_index == NULL_INDEX) || + thread_local.cache.count)) return; release_ordered(); @@ -1152,7 +1185,7 @@ static void schedule_release_ordered(void) static inline void schedule_release_context(void) { - if (thread_local.ordered.src_queue != NULL) + if (thread_local.ordered.src_queue != NULL_INDEX) release_ordered(); else schedule_release_atomic(); @@ -1164,9 +1197,9 @@ static int schedule_ord_enq_multi(queue_t q_int, void *buf_hdr[], int i; uint32_t stash_num = thread_local.ordered.stash_num; queue_entry_t *dst_queue = qentry_from_int(q_int); - queue_entry_t *src_queue = thread_local.ordered.src_queue; + uint32_t src_queue = thread_local.ordered.src_queue; - if (!thread_local.ordered.src_queue || thread_local.ordered.in_order) + if ((src_queue == NULL_INDEX) || thread_local.ordered.in_order) return 0; if (ordered_own_turn(src_queue)) { @@ -1189,7 +1222,7 @@ static int schedule_ord_enq_multi(queue_t q_int, void *buf_hdr[], return 0; } - thread_local.ordered.stash[stash_num].queue = dst_queue; + thread_local.ordered.stash[stash_num].queue_index = dst_queue->s.index; 
thread_local.ordered.stash[stash_num].num = num; for (i = 0; i < num; i++) thread_local.ordered.stash[stash_num].buf_hdr[i] = buf_hdr[i]; @@ -1202,14 +1235,14 @@ static int schedule_ord_enq_multi(queue_t q_int, void *buf_hdr[], static void order_lock(void) { - queue_entry_t *queue; + uint32_t queue_index; - queue = thread_local.ordered.src_queue; + queue_index = thread_local.ordered.src_queue; - if (!queue) + if (queue_index == NULL_INDEX) return; - wait_for_order(queue); + wait_for_order(queue_index); } static void order_unlock(void) @@ -1219,14 +1252,15 @@ static void order_unlock(void) static void schedule_order_lock(unsigned lock_index) { odp_atomic_u64_t *ord_lock; - queue_entry_t *queue; + uint32_t queue_index; - queue = thread_local.ordered.src_queue; + queue_index = thread_local.ordered.src_queue; - ODP_ASSERT(queue && lock_index <= queue->s.param.sched.lock_count && + ODP_ASSERT(queue_index != NULL_INDEX && + lock_index <= sched->queues[queue_index].lock_count && !thread_local.ordered.lock_called.u8[lock_index]); - ord_lock = &queue->s.ordered.lock[lock_index]; + ord_lock = &sched->order[queue_index].lock[lock_index]; /* Busy loop to synchronize ordered processing */ while (1) { @@ -1245,13 +1279,14 @@ static void schedule_order_lock(unsigned lock_index) static void schedule_order_unlock(unsigned lock_index) { odp_atomic_u64_t *ord_lock; - queue_entry_t *queue; + uint32_t queue_index; - queue = thread_local.ordered.src_queue; + queue_index = thread_local.ordered.src_queue; - ODP_ASSERT(queue && lock_index <= queue->s.param.sched.lock_count); + ODP_ASSERT(queue_index != NULL_INDEX && + lock_index <= sched->queues[queue_index].lock_count); - ord_lock = &queue->s.ordered.lock[lock_index]; + ord_lock = &sched->order[queue_index].lock[lock_index]; ODP_ASSERT(thread_local.ordered.ctx == odp_atomic_load_u64(ord_lock)); @@ -1275,7 +1310,7 @@ static inline bool is_ordered_queue(unsigned int queue_index) static void schedule_save_context(uint32_t queue_index, void 
*ptr) { - queue_entry_t *queue = ptr; + (void)ptr; if (is_atomic_queue(queue_index)) { thread_local.atomic = &sched->availables[queue_index]; @@ -1283,11 +1318,11 @@ static void schedule_save_context(uint32_t queue_index, void *ptr) uint64_t ctx; odp_atomic_u64_t *next_ctx; - next_ctx = &queue->s.ordered.next_ctx; + next_ctx = &sched->order[queue_index].next_ctx; ctx = odp_atomic_fetch_inc_u64(next_ctx); thread_local.ordered.ctx = ctx; - thread_local.ordered.src_queue = queue; + thread_local.ordered.src_queue = queue_index; } }