Following our previous blog post, Technical Testing of Mobile Applications for Cadastral Surveys, we continue our look back at our earlier work evaluating mobile tools and apps aimed at cadastral surveys and land and property mapping. In this blog post, we explain the tool we developed for user experience testing of mobile applications.
The user experience testing touched on evaluating how users experienced the selected mobile applications during fieldwork. Indicators such as accessibility, flow, functionality, information architecture, consistency, and satisfaction were developed during the user experience workshop in Nairobi and then tested in the field in Taita Hills.
Below is a breakdown of the tool for testing the user experience criteria and rationale:
1. Accessibility: Is it clear where to go in the application to achieve different tasks? Is it obvious which buttons to press and which not to press?
The application should be accessible through clear language and functionality so that the user can intuitively accomplish relevant tasks without confusion.
2. Flow: Is the path from start to finish clear?
The process of accomplishing a task should be as simple as possible from start to finish. The user should know which step of the process they are at, what the next step is, and how to backtrack if necessary.
3. Functionality: Does the application load quickly? Are there any dead links or ‘glitches’?
Functionality touches mainly on the technical features of mobile phones and their interaction with the application. In developing countries, the majority of people own medium- to low-end smartphones, so the application should be built in a way that these devices can run it reliably.
4. Information Architecture: Does the application have good navigation? Are the icons understandable? Is labeling consistent across the application?
The application should have clear and simple navigation, good visual cues, and well-considered iconography that aids the user through different tasks.
5. Consistency: Is the application consistent throughout in order for the user to perform additional tasks without problems?
The application should follow a common convention so that new tasks require only a short learning curve. Mixing multiple conventions can be disorienting and frustrating for a user with limited technical capacity.
6. Satisfaction: Is it satisfying to use the application? Did the application sufficiently fulfill the intended task?
Was the work accomplished with minimum user experience friction and in a satisfactory manner?
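For readers who want to record field ratings systematically, the six criteria above could be encoded as a simple rubric. This is only a minimal sketch: the criterion names and guiding questions come from this post, while the 1–5 rating scale and the `score_session` helper are illustrative assumptions, not part of the original tool.

```python
# Hypothetical encoding of the UX testing rubric described above.
# Criterion names follow the blog post; the 1-5 scale is an assumption.
CRITERIA = {
    "accessibility": "Is it clear where to go in the application to achieve different tasks?",
    "flow": "Is the path from start to finish clear?",
    "functionality": "Does the application load quickly, without dead links or glitches?",
    "information_architecture": "Is navigation clear and are icons and labels understandable?",
    "consistency": "Is the application consistent throughout?",
    "satisfaction": "Did the application sufficiently fulfill the intended task?",
}

def score_session(ratings):
    """Average one tester's 1-5 ratings across all six criteria.

    Raises ValueError if any criterion is missing or a rating is out of range.
    """
    missing = set(CRITERIA) - set(ratings)
    if missing:
        raise ValueError(f"missing ratings for: {sorted(missing)}")
    for name, value in ratings.items():
        if not 1 <= value <= 5:
            raise ValueError(f"{name}: rating {value} outside the 1-5 scale")
    return sum(ratings[name] for name in CRITERIA) / len(CRITERIA)
```

A tester would fill in one rating per criterion after each field session, and the averages could then be compared across the applications under evaluation.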
These are the user-experience specifications we developed and tested in the field. The next blog post will look at the field testing itself.