Usability Engineering (UE) has recently emerged as a popular and pragmatic discipline, driven in part by the scarcity of resources. UE is concerned broadly with human-computer interaction and specifically with designing human-computer interfaces that have high usability, or user-friendliness. Usability deals with the interaction between two end-points: one serves the interface, while the other consumes the information presented through it. In this paper we add an original concept of quantification to the existing model, via an algorithm designed by the corresponding author, to calculate a usability (U) index. To accomplish this, numerical and/or cognitive data must be collected to supply the input parameters for the quantitative usability risk index. A qualitative-input version of the assessment is also presented. This article therefore offers not only a metric model but also a prototype numerical index, new in this field.
Usability Engineering is regarded as one of the four pillars of Trustworthy Computing, the others being Reliability, Security, and Privacy [1]. It is a relatively new discipline: it was not even offered in computer science curricula ten or fifteen years ago, yet it is now central to proper software development. In this research paper, we adopt a model of usability that takes into account the task / user / user-interface variables, inspired by Shackel, Nielsen, and Eason [2]. Another strong source on Usability Engineering is Nielsen [3], who offers examples of measuring usability (such as ticket vending machines in Grand Central Station used by regular commuters, stamp vending machines in post offices, or personal computers running a given piece of software), as well as usability testing and usability assessment methods beyond testing. Usability only becomes an issue when it is absent from an interface: in large part, an interface is usable when the user can accomplish their task smoothly, without hindrance or frustration. Assessing usability quantitatively is the goal of this paper. To do so, a Usability Meter, based on a series of questions designed to assess the user's perception of an interface's usability, will be utilized. Based on the user's responses, a usability risk index will be calculated.
Inspired by Shackel, Nielsen, and Eason [Ibid], three vulnerabilities are assessed: Task, User, and System Interface. Within each vulnerability category, questions pertain to specific threats and countermeasures. Within the Task vulnerability, users are asked questions regarding the threats of Infrequency, Rigidity, and Unfavorable Situational Constraints, and their countermeasures. Within the User vulnerability, users are asked questions regarding the threats of Lack of Knowledge, Lack of Motivation, and Lack of Choice, and their countermeasures. Within the System Interface vulnerability, users are asked questions regarding the threats of Difficulty of Learning, Difficulty of Application, Lack of Flexibility, Dissatisfaction, and Task Mismatch, and their countermeasures. See Figure 1 below for the usability diagram detailing vulnerabilities and threats [4]. The user's responses are then used to generate a quantitative usability risk index.
Figure 1: Usability Tree Diagram
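The vulnerability/threat tree of Figure 1 can be encoded directly as a nested mapping. The structure below is a minimal sketch of that encoding; the dictionary layout and field names are our own illustration, not part of the Usability Meter's implementation.

```python
# Encoding of the usability tree in Figure 1: each vulnerability maps
# to its list of threats; each threat has paired threat/countermeasure
# questions (see Appendix A).
USABILITY_TREE = {
    "Task": ["Infrequency", "Rigidity", "Unfavorable Situational Constraints"],
    "User": ["Lack of Knowledge", "Lack of Motivation", "Lack of Choice"],
    "System Interface": ["Difficulty of Learning", "Difficulty of Application",
                         "Lack of Flexibility", "Dissatisfaction",
                         "Task Mismatch"],
}

for vulnerability, threats in USABILITY_TREE.items():
    print(f"{vulnerability}: {len(threats)} threats")
```

A survey engine can walk this structure to present the threat and countermeasure questions for each leaf in turn.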
Questions are designed to elicit the user's responses regarding the perceived risk from particular threats, and the countermeasures the user may employ to counteract those threats. For example, within the System Interface vulnerability, the questions on Difficulty of Application include both threat and countermeasure questions. Threat questions include [Ibid]:
• Are system interface commands difficult to find?
• Does the task require an excessive number of steps to be accomplished?
• Is the system interface poorly laid out?
• Are system interface commands unclear or unintuitive?
• Do tasks require a long time to perform?
While countermeasure questions would include:
• Are system interface commands easy to find?
• Does the task require only a reasonable number of steps to be accomplished?
• Is the system interface well designed and laid out?
• Are system interface commands clear and intuitive?
• Do tasks require only a relatively short time to perform?
For a full listing of the vulnerability/threat assessment and remediation (countermeasure) questions employed in the Usability Meter, see Appendix A at the end of this paper.
RISK CALCULATION AND MITIGATION
Essentially, users respond yes or no to these questions and supply numerical values on a scale of 10. These responses are used to calculate the residual risk. Using a game-theoretical mathematical approach, the calculated risk index is then used to optimize, i.e., lower, the risk to a desired level. Further, mitigation advice is generated to show the user in which areas the risk can be reduced to the optimized or desired level, such as from 62% to 30% in the screenshot. See Figure 2 and Figure 3 below for screenshots of the Usability Meter results table displaying threat, countermeasure, and residual risk indices; optimization options; and risk mitigation advice. The survey was taken by anonymous respondents in 2009 on the VISTA OS, before it was withdrawn from the market in 2010. Further, Figure 4 and Figure 5 display the qualitative approach.
Figure 2: Usability Meter Results Table by an Anonymous Survey Taker 1 (Quantitative)
Figure 3: Usability Meter Results Table by an Anonymous Survey Taker 2 (Quantitative)
Figure 4: Usability Meter Results Table by an Anonymous Survey Taker 3 (Qualitative)
Figure 5: Usability Meter Results Table by an Anonymous Survey Taker 4 (Qualitative)
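The residual-risk calculation described above can be sketched as follows. The Usability Meter's exact formula is not reproduced here; this is an illustrative model in which each threat contributes its likelihood multiplied by the fraction of countermeasure capacity still missing, with scores on the 0-to-10 scale the survey uses. The sample responses are hypothetical, not survey data from the figures.

```python
# Illustrative residual-risk model (not the Usability Meter's exact
# formula): each threat contributes threat-likelihood times the
# countermeasure capacity still missing.
def residual_risk(responses):
    """responses: list of (threat_score, cm_score), each on a 0-10 scale.
    Returns the average residual risk as a percentage."""
    residuals = [(t / 10.0) * (1.0 - cm / 10.0) for t, cm in responses]
    return 100.0 * sum(residuals) / len(residuals)

# Hypothetical example: strong threats with weak countermeasures.
sample = [(8, 4), (6, 2), (9, 5)]
print(f"Residual risk: {residual_risk(sample):.1f}%")
```

Raising any countermeasure score toward 10 drives that threat's residual contribution toward zero, which is what the mitigation advice in Figures 2 and 3 asks the user to do.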
DISCUSSIONS AND CONCLUSION
The Usability Meter results for Anonymous Survey Taker 1, using the quantitative approach, yielded a 62.5% risk; to optimize it down to 30%, several actions had to be taken, as follows: i) enhance the countermeasure (CM) capacity against the threat of "Rigidity" within the "Task" vulnerability from 45% to 100%, an improvement of 55%; ii) enhance the CM capacity against the threat of "Lack of Knowledge" within the "User" vulnerability from 40% to 100%, an improvement of 60%; iii) enhance the CM capacity against the threat of "Lack of Choice" within the "User" vulnerability from 20% to 100%, an improvement of 80%; and iv) enhance the CM capacity against the threat of "Difficulty of Learning" within the "System Interface" vulnerability from 40% to 64%, an improvement of 24%. A total cost of $2,607 out of a total asset of $8,000 had to be allocated. We can recursively continue to mitigate the present risk of 30% (down from the initial 62.5%) to lower target values, such as 20%, if sufficient budget remains for further improvement. That is, Survey Taker 1 will implement the countermeasures clarified above by purchasing the recovery services needed to mitigate her usability risk from 62.5% to a low of 30%. She will do so simply by referring to the CM questions as cited, and converting the negative (No) responses to positive (Yes) responses by taking remedial countermeasures. While doing so, she will optimize her costs by following the optimal allocation plan suggested by the Usability Meter's game-theoretical solution, as explained in Figure 2. Another survey result, taken by Survey Taker 2, is presented in Figure 3, where an initial 41.3% risk is reduced to 20% by remedial courses of action similar to those explained for Figure 2.
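The budget-constrained selection of countermeasure improvements can be sketched as below. Note the hedge: the Usability Meter itself uses a game-theoretical optimization; this sketch substitutes a simple greedy heuristic (best risk reduction per dollar) purely to illustrate the idea of allocating a limited budget across CM enhancements. The option names, reductions, and costs are hypothetical and do not reproduce the paper's actual allocation.

```python
# Greedy sketch of cost-constrained mitigation planning. This is a
# simplification: the Usability Meter's real optimizer is
# game-theoretical, not greedy.
def plan_mitigation(improvements, budget):
    """improvements: list of (name, risk_reduction_pct, cost).
    Greedily pick the best risk reduction per dollar within the budget."""
    plan, spent, reduced = [], 0.0, 0.0
    for name, reduction, cost in sorted(
            improvements, key=lambda x: x[1] / x[2], reverse=True):
        if spent + cost <= budget:
            plan.append(name)
            spent += cost
            reduced += reduction
    return plan, spent, reduced

# Hypothetical costs for the four CM enhancements discussed above.
options = [("Rigidity CM", 5.5, 800),
           ("Lack of Knowledge CM", 6.0, 700),
           ("Lack of Choice CM", 8.0, 600),
           ("Difficulty of Learning CM", 2.4, 500)]
plan, spent, reduced = plan_mitigation(options, budget=2607)
print(plan, spent, reduced)
```

Running the recursion the paper describes amounts to calling such a planner again on the remaining budget with a lower target risk.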
Additionally, the two other survey takers did not have input data to this extent and preferred to supply High, Medium, Low, or Very Low attributes, resulting in Figures 4 and 5 with risk values of 55% and 45%, respectively. The qualitative versions depicted in Figures 4 and 5 do not lend themselves to game-theoretic optimization; therefore, no optimization process was available for the qualitative approach, unlike for Figures 2 and 3.
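A qualitative-input assessment can be sketched by mapping the High/Medium/Low/Very Low attributes onto numeric values before applying the same residual logic. The midpoint values below are our own illustrative choice; the paper does not specify the scale the Usability Meter uses.

```python
# Hypothetical mapping of qualitative ratings to numeric midpoints.
QUAL_SCALE = {"High": 0.875, "Medium": 0.625, "Low": 0.375, "Very Low": 0.125}

def qualitative_risk(threat_ratings, cm_ratings):
    """Average residual of threat presence times missing countermeasure,
    as a percentage, from paired qualitative ratings."""
    pairs = zip(threat_ratings, cm_ratings)
    residuals = [QUAL_SCALE[t] * (1 - QUAL_SCALE[c]) for t, c in pairs]
    return 100 * sum(residuals) / len(residuals)

print(f"{qualitative_risk(['High', 'Medium'], ['Low', 'Very Low']):.1f}%")
```

Because the inputs are coarse ordinal bins rather than continuous scores, the result is only a preliminary index, which matches the paper's point that the qualitative path offers no optimization step.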
The Usability Meter breaks new ground in that it provides a quantitative assessment of risk to the user, along with recommendations to mitigate that risk. As such, it will be a highly useful tool for both the casual user and IT professionals involved in usability engineering. Future work may involve incorporating new questions so as to better refine user responses and the subsequent calculation of risk and mitigation recommendations. As one of the four pillars of Trustworthy Computing, advances in Usability Engineering will greatly enhance human-computer interaction and, more specifically, make human-computer interfaces more usable, or user-friendly. The proposed Usability Meter tool and its refinements will aid those advances before commercially operating IT companies release versions of their products to be usability-tested by mainstream users and their customer base. Recall Microsoft VISTA, which may have been secure and private but was not sufficiently usable, and therefore not trustworthy enough. The proposed Usability Meter tells us, using a metric-based approach, how sufficient or insufficient a new operating system such as the VISTA OS might have been. Compare this approach with taking simple averages over a few surveys, which yields none of the remediation guidelines offered by the quantitative version of the Usability Meter. However, if numerical data or intensive XML surveys do not apply to a survey taker, a quick qualitative approach, as in Figure 4 and Figure 5, will generate an initial result and a preliminary opinion, which can then be improved using the quantitative version.
Mehmet Sahinoglu, Scott Morton, and Erman Samelo are from the Informatics Institute, AUM, Montgomery, whilst Sukanta Ganguly is from Texas Tech University, Lubbock.
Appendix A: Sample Assessment Questions (XML format) for USABILITY Meter