
...

Product name and version:

Tool name, version (proto or stable version)

Organisers of the test:

Xxxxx Name, Name

...

Date of the test:

...

dd.mm.yyyy

Date of the report:

Xxx,xxx dd.mm.yyyy

Editor:

Xxx Name

...

Contact name(s):

...

Name and e-mail

...

Executive summary

Provide a brief, high-level overview of the test (including its purpose)

Name the product:
Purpose/objectives of the test:
Method 1:
Number and type of participants:
Tasks (if tasks are used):
Method 2:

Results in main points
e.g. a bullet list (this is needed so that the main results can be read without going through the full report; this is seen as important since the reports serve different purposes and sometimes a fast overview is all that is needed)

Table of contents

Title Page
Introduction
Full Product Description
Test Objectives
Method
Participants
Context of Product Use in the Test
Test Facility
Experimental Design
Results

...

Introduction

Full Product Description

...

  • State the objectives for the test and any areas of specific interest
    • Functions and components with which the user directly and indirectly interacted
    • Reason for focusing on a product subset

Method

Participants

  • The total number of participants tested
  • Segmentation of user groups tested, if more than one user group was tested
  • Key characteristics and capabilities of the user group (this information may already have been gathered through the background (pre-test) questionnaires; in that case it can simply be referenced here, e.g. by linking to the description of the questionnaire results)
  • How participants were selected; whether they had the essential characteristics
  • Differences between the participant sample and the user population

...

  • Any known differences between the evaluated context and the expected context of use

Tasks

  • Describe the task scenarios for testing
    • Explain why these tasks were selected
    • Describe the source of these tasks
    • Include any task data/information given to the participants
    • Completion or performance criteria established for each task

Test Facility
Describe the setting and type of space in which the evaluation was conducted
Detail any relevant features or circumstances that could affect the results (e.g. a server breakdown that disrupted the test for a while and created unnecessary tension, or unforeseen noise that disturbed the test)

...

  • Instructions given to the participants
  • Task instruction summary
  • Usability Metrics (if used)
    • Metrics for effectiveness
    • Metrics for efficiency
    • Metrics for satisfaction, etc.
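
As a hedged illustration of how such metrics can be reported numerically, the minimal Python sketch below derives a completion rate (effectiveness), mean time on task (efficiency), and a mean questionnaire rating (satisfaction). All variable names and values are invented for illustration and are not prescribed by this template.

    from statistics import mean

    # Hypothetical per-participant results for one task:
    # (completed successfully?, time on task in seconds)
    attempts = [(True, 95.0), (True, 120.5), (False, 240.0), (True, 88.2)]
    # Hypothetical post-test satisfaction ratings on a 1-5 scale
    ratings = [4, 5, 3, 4]

    # Effectiveness: share of attempts completed successfully
    completion_rate = sum(done for done, _ in attempts) / len(attempts)
    # Efficiency: mean time on task over successful attempts only
    mean_time = mean(t for done, t in attempts if done)
    # Satisfaction: mean questionnaire rating
    mean_rating = mean(ratings)

    print(f"Effectiveness (completion rate): {completion_rate:.0%}")
    print(f"Efficiency (mean time on task): {mean_time:.1f} s")
    print(f"Satisfaction (mean rating): {mean_rating:.1f} / 5")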

Results

  • Data Analysis
    • Quantitative data analysis
    • Qualitative data analysis
  • Presentation of the Results
    • From quantitative data analysis
    • From qualitative data analysis (descriptive and clarifying presentation of the results)
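
As a hedged sketch of the quantitative side, the Python fragment below summarises hypothetical task times with a mean and an approximate 95% confidence interval (normal approximation; a t-value would be more accurate for small samples). The data values are assumptions for illustration only.

    from math import sqrt
    from statistics import mean, stdev

    # Hypothetical times on task (seconds), one per participant
    task_times = [95.0, 120.5, 101.3, 88.2, 110.7]

    n = len(task_times)
    m = mean(task_times)
    # Approximate 95% confidence interval (normal approximation)
    half_width = 1.96 * stdev(task_times) / sqrt(n)

    print(f"Mean time on task: {m:.1f} s "
          f"(95% CI: {m - half_width:.1f} to {m + half_width:.1f} s)")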

Reliability and Validity

Reliability is the question of whether one would get the same result if the test were to be repeated.
This is hard to achieve in usability tests, but the significance of the findings can be reasoned about.
(Example from an expert evaluation: This review was made by one reviewer in order to give quick feedback to the development team. To get more reliable results it would have been desirable to use three, or at least two, reviewers, as different reviewers often notice different things. We do feel, however, that for the purpose of this report, and the goal of quick feedback, one reviewer has given enough feedback to enhance the usability of the system.)

...