From patchwork Fri Mar 2 01:58:10 2012
X-Patchwork-Submitter: Spring Zhang
X-Patchwork-Id: 7042
To: Linaro Patch Tracker
From: noreply@launchpad.net
Subject: [Branch ~linaro-validation/lava-test/trunk] Rev 121: add doc how to write new cases
Message-Id: <20120302015810.15599.94702.launchpad@ackee.canonical.com>
Date: Fri, 02 Mar 2012 01:58:10 -0000

Merge authors:
  Spring Zhang (qzhang)
Related merge proposals:
  https://code.launchpad.net/~qzhang/lava-test/doc-add-case/+merge/95304
  proposed by: Spring Zhang (qzhang)
  review: Approve - Zygmunt Krynicki (zkrynicki)
------------------------------------------------------------
revno: 121 [merge]
committer: Spring Zhang
branch nick: lava-test-doc-add-case
timestamp: Fri 2012-03-02 09:56:22 +0800
message:
  add doc how to write new cases
modified:
  doc/usage.rst
--
lp:lava-test
https://code.launchpad.net/~linaro-validation/lava-test/trunk
=== modified file 'doc/usage.rst'
--- doc/usage.rst	2011-09-12 09:19:10 +0000
+++ doc/usage.rst	2012-03-01 03:04:30 +0000
@@ -135,18 +135,163 @@
 tests need to follow Linaro development work flow, get reviewed and
 finally merged. Depending on your situation this may be undesired.
 
-.. todo::
-
-   Describe how tests are discovered, loaded and used. It would be
-   nice to have a tutorial that walks the user through wrapping a
-   simple pass/fail test.
+There is a wonderful guide, `How to Write Test Definitions`_, on the
+Linaro wiki:
+
+Test definitions are simply a way of telling LAVA-Test how to install a test,
+run it, and interpret the results. Test definitions are written in a
+simplified Python format, and can be as simple as a few lines. More advanced
+test definitions can be written by deriving from the base classes.
+
+Defining a simple test
+++++++++++++++++++++++
+
+**Example 1** The simplest possible example might look something like this::
+
+    from lava_test.core.installers import TestInstaller
+    from lava_test.core.runners import TestRunner
+    from lava_test.core.tests import Test
+
+    RUNSTEPS = ['echo "It works!"']
+    runme = TestRunner(RUNSTEPS)
+    testobj = Test(test_id="example1", runner=runme)
+
+In this example, we simply give LAVA-Test a list of commands to run in a
+shell, provided by RUNSTEPS. We pass RUNSTEPS to create a TestRunner
+instance. That runner is then passed to create a Test instance called
+'testobj'. If you were to save this under the test_definitions directory as
+'example1.py', then run './lava-test run example1' from the bin directory,
+you would have a test result for it under your results directory, with
+output saying "It works!"
+
+**Example 2** Usually, you will want to do more than interact with things
+already on the system; a test suite typically needs to be installed before
+it can be run. For this example, let's say you have a test suite you can
+download from http://www.linaro.org/linarotest-0.1.tgz. NOTE: This file does
+not actually exist, but is used only for example purposes::
+
+    from lava_test.core.installers import TestInstaller
+    from lava_test.core.parsers import TestParser
+    from lava_test.core.runners import TestRunner
+    from lava_test.core.tests import Test
+
+    INSTALLSTEPS = ['tar -xzf linarotest-0.1.tgz',
+                    'cd linarotest-0.1',
+                    'make install']
+    RUNSTEPS = ['cd linarotest-0.1/bin',
+                './runall']
+    installit = TestInstaller(INSTALLSTEPS,
+                              url='http://www.linaro.org/linarotest-0.1.tgz',
+                              md5='a9cb8a348e0d8b0a8247083d37392c89f')
+    runit = TestRunner(RUNSTEPS)
+    testobj = Test(test_id="LinaroTest", version="0.1",
+                   installer=installit, runner=runit)
+
+Before running the test in this example, an extra installation step will take
+place. Since we provided a url and md5, the file specified by the url will
+first be downloaded and its md5sum checked. An md5 is recommended for
+checking the integrity of the download, but if it is not provided the check
+is simply skipped rather than failing. Next the steps specified in
+INSTALLSTEPS will be executed, and finally the steps in RUNSTEPS will be
+executed.
+
+**Example 3** A slight variation on example 2 might be a case where you want
+to install a test that is already in the archive. Rather than specifying a
+url to download the test from, you can simply do something like this instead
+(a fuller sketch follows below)::
+
+    ...
+    DEPS = ['linarotest']
+    installit = TestInstaller(deps=DEPS)
+    ...
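+Putting the pieces together, a complete definition for this case might look
+like the following sketch. This is only illustrative: the 'linarotest'
+package and its 'runall' command are hypothetical, as above::
+
+    from lava_test.core.installers import TestInstaller
+    from lava_test.core.runners import TestRunner
+    from lava_test.core.tests import Test
+
+    # Install the (fictitious) packaged test suite from the archive
+    DEPS = ['linarotest']
+    # Hypothetical command shipped by the package
+    RUNSTEPS = ['runall']
+
+    installit = TestInstaller(deps=DEPS)
+    runit = TestRunner(RUNSTEPS)
+    testobj = Test(test_id="linarotest", version="0.1",
+                   installer=installit, runner=runit)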
+
+This is also how dependencies can be specified if you have, for instance,
+libraries that need to be installed before attempting to compile a test you
+want to run. Those dependencies will be installed before attempting to run
+any steps. In this example though, there is no need to specify a url, and no
+need to specify steps to build from a tarball. All that is needed is to
+specify a dependency, which will take care of installing the test. Again,
+this is a fictitious example.
+
+Adding Results Parsing
+++++++++++++++++++++++
+
+Because every test has its own way of displaying results, there is no common,
+enforced way of interpreting the results from any given test. That means that
+every test definition also has to define a parser so that LAVA-Test can
+understand how to pick out the most useful bits of information from the
+output. What we've tried to do is make this as simple as possible for the
+most common cases, while providing the tools necessary to handle more
+complex output.
+
+To start off, there are some fields you are always going to want to either
+pull from the results or define. For all tests:
+
+* test_case_id - a field that uniquely identifies the test case. It can
+  contain letters, numbers, underscores, dashes, or periods. Any illegal
+  characters will automatically be dropped by the TestParser base class
+  before parsing the results, and spaces will be converted to underscores.
+  If you wish to change this behaviour, make sure that you either handle
+  fixing the test_case_id in your parser, or override the
+  TestParser.fixids() method.
+* result - simply the result of the test. This applies to both qualitative
+  and quantitative tests, and the meaning is specific to the test itself.
+  The valid values for result are: "pass", "fail", "skip", or "unknown".
+
+For performance tests, you will also want the following two fields:
+
+* measurement - the "score" or resulting measurement from the benchmark.
+* units - a string defining the units represented by the measurement, in
+  some way that will be meaningful to someone looking at the results later.
+
+For results parsing, it's probably easiest to look at some examples. Several
+tests have already been defined in the lava-test test_definitions directory
+that serve as useful examples. Here are some snippets to start off though:
+
+**Stream example**
+
+The stream test performs several operations to measure memory bandwidth. The
+relevant portion of the output looks something like this::
+
+    Function      Rate (MB/s)   Avg time     Min time     Max time
+    Copy:           3573.4219       0.0090       0.0090       0.0094
+    Scale:          3519.1727       0.0092       0.0091       0.0095
+    Add:            4351.7842       0.0112       0.0110       0.0113
+    Triad:          4429.2382       0.0113       0.0108       0.0125
+
+So we have 4 test_case_ids here: Copy, Scale, Add, and Triad. For the result,
+we will just use pass for everything. Optionally though, if there were some
+threshold under which we knew the result should be considered a fail, we
+could detect that and report a failure in that case. The number we really
+care about in the results is the rate, which has units of MB/s.
+
+First we need a pattern to match the lines and yield the test_case_id and
+the measurement::
+
+    PATTERN = "^(?P<test_case_id>\w+):\W+(?P<measurement>\d+\.\d+)"
+
+Passing this pattern when initializing a TestParser object will help it find
+the test_case_id and measurement on the lines that have them, while not
+matching any other lines.
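+As a quick sanity check, the pattern can be exercised directly with Python's
+re module, independent of lava-test. The sample line below is taken from the
+stream output above; this is just an illustrative sketch::
+
+    import re
+
+    PATTERN = "^(?P<test_case_id>\w+):\W+(?P<measurement>\d+\.\d+)"
+    line = "Copy:           3573.4219       0.0090       0.0090       0.0094"
+
+    match = re.match(PATTERN, line)
+    if match:
+        # match.groupdict() == {'test_case_id': 'Copy',
+        #                       'measurement': '3573.4219'}
+        print(match.groupdict())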
+We also want to append the pass result and the units to each test result
+found. There's a helper for that when creating the TestParser object, called
+appendall, which lets you give it a dict of values to append to all test
+results found at parse time. The full line to create the parser would be::
+
+    streamparser = lava_test.core.parsers.TestParser(PATTERN,
+        appendall={'units':'MB/s', 'result':'pass'})
+
+For the complete code, see the stream test definition in LAVA-Test.
+
+**LTP**
+
+Another useful case to look at is LTP, because it is a qualitative test with
+several possible result codes. The TestParser also supports being created
+with a fixupdict: a dict mapping a test's own result strings to the valid
+lava-test result strings. For instance, LTP emits result strings such as
+"TPASS", "TFAIL", "TCONF", and "TBROK", which can be translated into values
+the dashboard accepts::
+
+    FIXUPS = {"TBROK":"fail",
+              "TCONF":"skip",
+              "TFAIL":"fail",
+              "TINFO":"unknown",
+              "TPASS":"pass",
+              "TWARN":"unknown"}
+
+Now when creating the TestParser object, we can pass fixupdict=FIXUPS so
+that it knows how to properly translate the result strings.
+
+The full LTP test definition actually derives its own TestParser class to
+deal with additional peculiarities of LTP output. This is sometimes
+necessary, but as common features are identified that would make it possible
+to eliminate or simplify cases like this, they should be merged into the
+lava-test libraries.
 
 Maintaining out-of-tree tests
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
 
 For some kinds of tests (proprietary, non-generic, in rapid development, fused
 with application code) contributing their definition to upstream LAVA Test
-project would be impractical.
+project would be impractical. In such cases the test maintainer can still
+leverage LAVA to actually run and process the test without being entangled
+in the review process or going through