...
- Operational definitions of measures (e.g., how is it decided that a task is completed): a task is considered completed when the questionnaire has been filled in.
- Policies and procedures for interaction between tester(s) and subjects (e.g., is the test conductor allowed to answer questions from the user, provide help, etc.): we only handed out the questionnaire and waited for the answers (no help was provided in answering the questions).
- State use of non-disclosure agreements, form completion, warm-ups, pre-task training, and debriefing
- None of the above were used.
- Specific steps followed to execute the test sessions and record data
- The questionnaire was handed out and collected after the participants had finished.
- Number and roles of people who interacted with the participants during the test session
- Four of us.
- Specify if other individuals were present in the test environment
- No
- State whether participants were paid
- No
...
- Instructions given to the participants
- Perform the tasks provided in the questionnaire and answer the questions.
- Task instruction summary
- The tasks consisted of answering the given questions.
- Usability Metrics (if used)
- Efficiency, memorability, errors, satisfaction
- Metrics for effectiveness
- Metrics for efficiency
- Measured according to the participants' answers.
- Metrics for satisfaction, etc.
- Measured according to the participants' answers.
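The metrics listed above (efficiency, memorability, errors, satisfaction) can be computed from the test-session records. The following Python sketch is for illustration only; the record fields and the 1–5 satisfaction scale are assumptions, not the actual questionnaire items:

```python
# Minimal sketch of computing usability metrics from test-session records.
# The field names and the 1-5 satisfaction scale are assumptions for
# illustration; they are not taken from the actual questionnaire.

records = [
    {"completed": True,  "time_s": 95,  "errors": 1, "satisfaction": 4},
    {"completed": True,  "time_s": 120, "errors": 0, "satisfaction": 3},
    {"completed": False, "time_s": 180, "errors": 3, "satisfaction": 2},
]

# Effectiveness: share of participants who completed the task.
effectiveness = sum(r["completed"] for r in records) / len(records)

# Efficiency: mean completion time over successful attempts only.
done = [r for r in records if r["completed"]]
mean_time = sum(r["time_s"] for r in done) / len(done)

# Errors: mean number of errors per participant.
mean_errors = sum(r["errors"] for r in records) / len(records)

# Satisfaction: mean rating on the assumed 1-5 scale.
mean_satisfaction = sum(r["satisfaction"] for r in records) / len(records)

print(effectiveness, mean_time, mean_errors, mean_satisfaction)
```

With questionnaire-only data such as ours, the time and error fields would have to come from self-reported answers rather than observation.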
Results
- Data Analysis
- Quantitative data analysis
- The result analysis is attached as an XLS file.
- Qualitative data analysis
- The analysis is based on the answers given by the respondents and should reflect their actual impressions.
- Presentation of the Results
- From quantitative data analysis
- Based on the respondents' answers, we found that the users are not very satisfied: the system is not logically designed, and its language is somewhat clumsy.
- From qualitative data analysis (descriptive and clarifying presentation of the results)
- From the users' point of view, the answers given are generally valuable and well accepted, implying that a major task remains to be put in place for the school.
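The quantitative analysis described above (summarising questionnaire ratings, as in the attached XLS file) can be sketched in Python. The CSV rows and question names below are made up for illustration; the real data lives in the attached spreadsheet:

```python
import csv
import io
import statistics

# Hypothetical CSV export of the questionnaire answers; the actual data
# is in the attached XLS file, so these rows are made-up examples.
raw = """respondent,Q1,Q2,Q3
r1,4,2,3
r2,3,2,4
r3,5,1,3
"""

rows = list(csv.DictReader(io.StringIO(raw)))
questions = [k for k in rows[0] if k != "respondent"]

# Mean rating per question: the basic quantitative summary used to judge
# satisfaction and perceived logic of the system.
means = {q: statistics.mean(int(r[q]) for r in rows) for q in questions}
print(means)
```

Per-question means like these are what support conclusions such as low satisfaction: a question with a consistently low mean points to a specific problem area.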
Reliability and Validity
Reliability is the question of whether one would get the same result if the test were to be repeated.
This is hard to achieve in usability tests, but one can reason about how significant the findings are.
(Example from expert evaluation: This review was made by one reviewer in order to give quick feedback to the development team. To get more reliable results it would have been desirable to use three, or at least two, reviewers, as it is often the case that different reviewers look at different things. We do feel, however, that for the purpose of this report, and the essence of quick feedback, one reviewer has given enough feedback to enhance the usability of the system.)
- We do feel that, for the purpose of this report and the essence of quick feedback, the reviewers have given us enough feedback to enhance the usability of the system.
...
Validity is the question of whether the usability test measured what it was intended to measure, i.e., whether it provided answers to the right questions. Typical validity problems include using the wrong users or giving them the wrong tasks.
(Example from expert evaluation: "The reviewer is an experienced usability professional that has evaluated systems for many years. We therefore feel that the method used as well as the tasks used give an appropriate view of how ordinary users would behave in the system.")
- The users involved in the testing were mostly third-year Metropolia students, so we are confident that they have good experience with the system and that the results obtained from them are valid and acceptable.
Summary Appendices
- Custom Questionnaires (if used; e.g., in an expert evaluation there are no participants)
- Participant General Instructions
- Participant Task Instructions, if tasks were used in the test
- We provided a questionnaire to be filled in.