
Journal of Physics: Conference Series

PAPER • OPEN ACCESS

To cite this article: Denis Pimenov et al 2021 J. Phys.: Conf. Ser. 2134 012020

View the article online for updates and enhancements.



Data-Driven Approaches to User Interface Design: A Case Study
Denis Pimenov, Alexander Solovyov, Nursultan Askarbekuly, Manuel
Mazzara
Innopolis University, Universitetskaya 1, Innopolis, Tatarstan, 420500, Russia
E-mail: {d.pimenov, a.solovyov, m.mazzara}@innopolis.ru, n.askarbekuly@innopolis.university

Abstract. The most common reason for software product failure is misunderstanding user
needs. Analysing and validating user needs before developing a product can help prevent
such failures. This paper investigates several data-driven techniques for user research and
product design through prototyping, customer validation, and usability testing. The authors
implemented a case study software product using the proposed techniques and analyse how
the application of UX/UI research techniques affected the development process. The case study
results indicate that preliminary UX/UI research before development reduces the cost of
product changes. Moreover, the paper proposes a set of metrics for testing the effectiveness of
UX/UI design.

1. Introduction
Inability to understand and meet user needs is one of the most common reasons for software
product failure [6]. To target the problem, companies conduct user research to learn user demands
and solve users' problems through the product. An important related aspect is user experience
(UX). According to [11], products with a poor user experience have 41% higher abandonment
rates. For those products, it means that users are likely to stop using the application after a
trial, and thus to stop bringing money to the company.
User Experience and User Interface (UX/UI) is a field that studies and develops the structure
and details of the interactive parts of a software application that are visible to the end-user.
It covers both the visual parts, such as design, element layout, colours and icons, and the user's
experience provided by the overall logic of the application and the actions the user must
perform to complete a task. A thoughtful UX/UI design results in higher conversion rates and
purchase probability, greater customer loyalty, a higher recommendation rate and less
abandonment [31].
This paper implements UX/UI techniques in a case study and analyses their effectiveness
through a set of metrics. The following section reviews previous research
on UX/UI techniques and data-driven approaches to product development. Section 3 presents the
methodology of how these UX/UI design practices were implemented in the case
study project and proposes a system of metrics to analyse the effectiveness of the approaches.
Finally, the paper concludes with the results and a discussion of the effectiveness of the UX/UI
techniques.
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution
of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Published under licence by IOP Publishing Ltd.

2. Related work
This section reviews popular UX/UI product design techniques. It also covers related data-driven
approaches to UX/UI design, the data types commonly used in them, and some limitations of
the data-driven approaches. UX/UI allows companies to gain a competitive advantage. However,
producing an effective product interface and logic requires a deep understanding of user
needs and natural behaviour. Data-driven UX/UI design approaches provide methodologies
for understanding user demand by collecting and processing data generated or provided by
users. The following section presents the methodology of how the described approaches were
implemented and analysed in the case study.

2.1. UX Research and understanding user demand


UX design requires a defined set of approaches and methodologies to be efficient and profitable.
As proposed by [11], UX/UI primarily employs a user-oriented approach, fundamentally aiming
at interacting and empathising with users. The main goal of this activity is to understand
the demands and wishes of the application's target audience so that a product team can provide
them with the needed functionality and high usability.

To address this goal, the UX research team should listen to users via surveys, feedback
forms, social media activity analysis or direct interviews [3]. Moreover, users' behaviour within
the application should also be observed in a natural environment via collected data about their
actions and context. That means learning about a user's profile, the tasks they expect to
perform in the application, and how much time it takes them to finish those tasks.

After understanding the preliminary requirements from the user's side, requirements should
also be gathered in terms of business goals and constraints [2]. The initial design should allow
for future changes due to a continuously changing environment and users' needs.

Even though feedback can come from end-users, experts, research and other credible
sources, the designer should still test the new design to confirm its attributes and its influence
on the application.
According to [18], there are two ways to collect data about users’ experience of dealing with
the product:
(i) Active - a development team collects subjective user feedback via interviews, surveys or
research, involving user interaction.
(ii) Passive - data is received automatically through software tools. It helps to collect
user-product interaction events and various user properties.
Moreover, various techniques can be used to perform testing in direct or indirect
communication with the customer, such as:
(i) Click-Stream Analysis
(ii) Usability Testing
(iii) A/B Testing

2.1.1. Click-stream Analysis Clickstream analysis is a technique for collecting and analysing
the paths and the behaviour that consumers produce in the application [7]. In other words, the
observer can automatically collect and analyse all events that the consumer produces. There are
several possible types of events that can be collected (Figure 1):
(i) Clicks: any clickable buttons can be recorded
(ii) Inputs: forms can collect users' inputs
(iii) API calls: any calls to an external API can be tracked together with the parameters sent
to the server
(iv) Errors: misclicks, mistakes in filling in inputs, and system errors are also logged for analysis

Figure 1: Possible events in the click-stream analysis
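
To make the event types concrete, the following is a minimal sketch (not the authors' actual implementation; all names and fields are assumptions) of how such events could be captured as structured records:

```python
# Sketch: structured records for the four click-stream event types above.
from dataclasses import dataclass, field
from enum import Enum
from time import time
from typing import Optional


class EventType(Enum):
    CLICK = "click"        # any clickable button
    INPUT = "input"        # form input filled by the user
    API_CALL = "api_call"  # call to an external API, with its parameters
    ERROR = "error"        # misclick, invalid input or system error


@dataclass
class ClickstreamEvent:
    user_id: str
    event_type: EventType
    screen: str                     # where in the app the event happened
    payload: Optional[dict] = None  # e.g. API parameters or input values
    timestamp: float = field(default_factory=time)


# Example: a button tap on a hypothetical "AddPerson" screen.
event = ClickstreamEvent("u42", EventType.CLICK, "AddPerson", {"button": "save"})
```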

Clickstream analysis can be applied across several applications or websites
[10], [13], [24] to understand how users search for information and
choose the website that fits them best. However, we mostly focus on analysis within
a single application [23], [20], [22], [21]. The analysis of a single application can help identify the
target audience, understand the most common patterns of behaviour within the application,
and reveal the inconveniences users meet while using the application.
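
As an illustration of such within-application analysis, here is a small sketch (with invented session data) that extracts the most common navigation paths and the screens where users drop off:

```python
# Sketch: mining common behaviour patterns from per-session screen paths.
from collections import Counter

# Hypothetical sessions: the ordered list of screens each user visited.
sessions = [
    ["Home", "AddPerson", "Home", "Stats"],
    ["Home", "AddPerson", "Home"],
    ["Home", "Learn"],
    ["Home", "AddPerson", "Home", "Stats"],
]

# The most frequent full paths reveal dominant usage patterns.
path_counts = Counter(tuple(s) for s in sessions)
print(path_counts.most_common(2))

# The last screen of each session shows where users abandon the app.
drop_offs = Counter(s[-1] for s in sessions)
print(drop_offs)
```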

2.1.2. Usability Testing Usability testing is a technique for evaluating the UX/UI of an
application in direct cooperation with customers. During usability testing, users are asked
to complete typical tasks simulating real usage of the application. In some cases, users
can instead be asked to use the application freely, without specific tasks. The observer records the
time required for completing tasks or actions, the completion rate and the errors that users make
during the test [4], [26].
The main goal of usability testing is to evaluate the UX/UI with objective metrics obtained
from customers. It helps to find typical errors or complex actions that users encounter in their
natural behaviour.
There is a commonly used procedure for performing usability testing, documented by [9],
[14], [15], [27], [17] and [1]; a sketch of the metrics computed in step (v) follows the list:

(i) Define the goal and the objectives of the test
(ii) Recruit participants
(iii) Select tasks for the test
(iv) Create task scenarios and their descriptions
(v) Choose metrics for evaluation and tools for recording data
(vi) Prepare test materials and environment
(vii) Design the test protocol for the tester
(viii) Design the satisfaction questionnaires and data analysis procedures
(ix) Present the results
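
As a sketch of step (v), the following computes the metrics named above (time on task, completion rate, error count) from hypothetical session records:

```python
# Sketch: aggregating usability-test measurements for one task.
from statistics import mean

sessions = [  # one record per participant (invented data)
    {"task": "add_contact", "seconds": 34.0, "completed": True,  "errors": 1},
    {"task": "add_contact", "seconds": 51.5, "completed": True,  "errors": 2},
    {"task": "add_contact", "seconds": 90.0, "completed": False, "errors": 4},
]

completion_rate = mean(s["completed"] for s in sessions)
avg_time = mean(s["seconds"] for s in sessions if s["completed"])
avg_errors = mean(s["errors"] for s in sessions)
print(f"completion {completion_rate:.0%}, time {avg_time:.1f} s, errors {avg_errors:.1f}")
```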


2.1.3. A/B Testing A/B testing is a practice of comparing different implementations of a
feature to analyse which one is more efficient in terms of objective metrics and users' opinions.
Two (or more) implementations are delivered to users independently, so that each user
receives only one version of the product without any knowledge of the others. After a trial
usage, the tester collects the necessary data about execution time, events, user behaviour and
other data from the experiment. In the end, the user fills in a feedback form to describe their
satisfaction with the given version of the product [30].

Figure 2: Scheme of the A/B testing

As Fig. 2, proposed by [30], shows, A/B testing is an iterative methodology. In each round
there is a winning prototype, which can then be compared with a new prototype in the next
iteration.
As research states, A/B testing is commonly used in the industry [16]. However, the
method has drawbacks: it is costly to run A/B tests manually, and such tests are not easy
to manage [8]. An automated approach can be applied to tackle these drawbacks; a sketch of
the evaluation step follows.
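
A minimal sketch of that evaluation step, assuming task-completion times have been collected for two variants (the numbers are invented, and scipy is used for Welch's t-test):

```python
# Sketch: comparing two A/B variants on task-completion time.
from statistics import mean
from scipy.stats import ttest_ind

times_a = [21.4, 19.8, 25.1, 22.7, 20.3, 18.9]  # seconds, variant A
times_b = [17.2, 18.5, 16.9, 19.4, 17.8, 16.3]  # seconds, variant B

# Welch's t-test: does the difference in means look real?
stat, p_value = ttest_ind(times_a, times_b, equal_var=False)
print(f"A: {mean(times_a):.1f} s, B: {mean(times_b):.1f} s, p = {p_value:.3f}")
# A small p-value suggests the faster variant wins this round; the winner
# is then compared against the next prototype in the following iteration.
```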

2.2. Data-Driven Approaches


A data-driven UX/UI design approach is any technique for collecting and processing data to
formulate user demand precisely. It helps to exclude the influence of subjective opinion when
creating a feature list for software. Data collected directly from users can describe their demand
objectively.

2.2.1. Why do we need data-driven approaches? Companies usually rely on the opinions of
senior developers in decisions about the future feature list [5]. These opinions are usually based on
previous experience. On the other hand, customers' needs are constantly changing over time [12].
Hence, previous experience becomes irrelevant, and companies might misunderstand their
users. For that reason, they need a data-driven approach to collect objective information in
order to produce a relevant feature list.

2.2.2. Types of Data Research states that the amount of data generated by users while using
an application is increasing exponentially [28]. In this reality, it must be clearly defined
what kind of data might be collected and processed, and how to interpret it to match the goal
described in Section 2.2.
There are three ways to classify the data [19]:
(i) By the way of interaction with users
(ii) By the classification of metrics
(iii) By the source of data
Data classified by way of interaction with users can be divided into two groups [18]:
(i) Active - data generated in direct communication with users via surveys, feedback forms and
interviews
(ii) Passive - data collected automatically from users’ actions like mouse clicks, eye-focus,
keyboard actions and logs
The classification by metrics divides the collected data into the following groups [25] (a sketch
of computing several of these metrics follows the list):
(i) PULSE metrics: mostly statistical data, such as the number of pages viewed, income
information, activities and others
(ii) HEART metrics:
(a) Happiness - users' satisfaction with the aesthetics and usability of the app
(b) Engagement - the frequency or duration of usage of the app
(c) Adoption - statistics about new unique users in a given period
(d) Retention - statistics about users who are still using the app after a given period
(e) Task success - how well the app solves the tasks that users demand, calculated using
the time to perform the task, the application's efficiency and the error rate.
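
The following sketch (with an assumed log format, not the paper's actual schema) shows how the engagement, adoption and retention metrics could be derived from per-user session dates:

```python
# Sketch: deriving three HEART metrics from per-user session dates.
from datetime import date

# user_id -> dates on which that user opened the application (invented)
sessions = {
    "u1": [date(2021, 9, 1), date(2021, 9, 3), date(2021, 9, 20)],
    "u2": [date(2021, 9, 2)],
    "u3": [date(2021, 9, 15), date(2021, 9, 28)],
}
period_start = date(2021, 9, 1)
retention_cutoff = date(2021, 9, 14)

# Engagement: average number of sessions per user in the period.
engagement = sum(len(d) for d in sessions.values()) / len(sessions)
# Adoption: users whose first session falls inside the period.
adoption = sum(min(d) >= period_start for d in sessions.values())
# Retention: users still active after the cutoff date.
retention = sum(max(d) >= retention_cutoff for d in sessions.values())
print(engagement, adoption, retention)  # 2.0 3 2
```
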
Another commonly used classification is by the source the data comes from. Data
from different sources have different attributes and represent different information, which is why
this classification is of interest. There are several possible sources, either internal
to the app or external, from third-party software [29]:
(i) Logs, which reflect the different states of the application and the transitions between them
triggered by user actions. Logs can represent mouse clicks, keyboard input, task completions,
errors, timestamps and action durations, scrolling events and other possible events happening
in the software.
(ii) Visitor metrics, provided by third-party software that collects and presents various
metrics. External tools like Google Analytics can provide information about traffic
inside the application, user activity and other events.
(iii) Visual metrics, which are metrics visible and understandable to the user. They include
heat maps, eye-movement tracking and video recordings of the user's session.

2.2.3. Challenges in the Data-Driven UX/UI Design Approach Nowadays, IT companies collect
enormous amounts of data about their users' behaviour and profiles, and this amount is growing
exponentially [28].

The amount of data poses a serious drawback to the approach: IT companies must store
and process big data, which is a real problem for relatively small companies without resources
such as supercomputers and data centres, except for companies with few users who are unlikely
to overload their infrastructure [19].

Another drawback is privacy policies regarding the collection of private data. In an interview
study [19] conducted with experts from the industry, 2 out of 4 experts mentioned the GDPR as
the main problem preventing them from using this approach in their projects.


3. Methodology
This section describes the process and methods used during the research, including a
description of the evaluation criteria and the methods used in the development. The
research aims to investigate and evaluate practices for communicating with users in order to
provide solutions that satisfy their needs; users' problems must first be understood and then
solved to achieve their satisfaction.

3.1. Research question and evaluation criteria


The research question is how to define objective metrics and requirements for UX/UI design,
which is a subjective field. The first step in answering this question is to define the goal of the
software. Since the software is a product and has users, it should meet users' requirements and
demands. To achieve this, software developers should communicate closely with customers to
understand their demands.
The research is conducted in the following way:
(i) Product MVP development
(ii) Evaluation of MVP UX/UI using Usability Testing, Interviews and Customers’ Feedback
(iii) Analytics integration in the application
(iv) Prototyping of new features
(v) Collecting the users’ feedback about prototypes
The process timeline is also shown in Figure 3.

Figure 3: Research process timeline

For evaluating prototypes, we used the following research technique:

(i) The first step is usability testing: users get the application or its prototype for free use in a
one-on-one meeting with a tester. Users try the application in a natural environment
and comment on the things they notice about their experience.
(ii) The second step is an interview with the tester: the user leaves comments on the positive
and negative points noticed during the test session.
(iii) The last step is customers' feedback: the tester asks the user to fill in a form that asks
them to grade aspects of their experience from 1 to 7 (a small aggregation sketch follows
Figure 4).

This feedback collection technique is also shown as a timeline in Figure 4.

Figure 4: Usability testing pipeline
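
As a sketch of the aggregation step mentioned above, the 1-to-7 grades from the feedback forms can simply be averaged per question (the answers below are invented):

```python
# Sketch: averaging questionnaire grades per question.
from statistics import mean

answers = {
    "How easy was it to find out what the application is about?": [6, 5, 6, 5, 6],
    "How relevant is the application for you?": [5, 4, 6, 5, 5],
}
for question, grades in answers.items():
    print(f"{mean(grades):.1f}  {question}")
```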

3.2. Case-Study
The platform for this paper is a mobile application, "My People", that helps its users keep
in touch with friends, family, and relatives. To achieve this goal, several features were
implemented:

(i) Add and prioritise contacts: the user can add a person, define their category and choose
how often to contact that person
(ii) Notifications: when it is time to contact a person, the application sends a push notification
with a reminder to call or message the person
(iii) Report: the user can report that they contacted a person, so that the application counts
interactions and provides statistics about previous interactions
(iv) Content: there is a section with various content about the psychology of communication and
interaction. The content aims to introduce an educational process inside the application.

Figure 5: Screenshots of MVP application

The development process was built in the following way:

(i) Customer development: first, several interviews with potential users were conducted to
understand the relevance of the suggested application in the real world. The interviews were
designed to understand the potential demand for the application, the potential audience, and
use-cases.
(ii) Design prototypes: the user interface and functionality were designed before implementation
to decrease the cost of changes at the initial stage.
(iii) Implementation: MVPs of the iOS and Android applications were developed and pushed to
the respective marketplaces.


3.2.1. Customer Development The first step in any development is customer development. It
is costly to develop a product without understanding the demand among people. To understand
it, we organised several one-on-one interviews with the target audience. The goal of the activity
was to validate that the application is needed. Preliminary interviews showed interest among users,
so the research was deemed successful.

3.2.2. Design prototypes After receiving feedback about demand and possible use-cases of the
application, we established a minimal set of required features and designed a prototype. For
the design we used Figma, because it contains the necessary set of tools specifically for UI design.

3.2.3. Implementation A Minimum Viable Product was developed for both the iOS and Android
platforms. The development process was organised iteratively, with one-week sprints and weekly
meetings to validate progress, discuss and plan further work. Tasks were distributed among
developers using a Trello board. This approach partially implements an agile framework,
which means that changes in the design are not costly and can be discussed in time. A
young product must be developed in an agile way because there are likely to be changes during
the design and development process.

3.3. Outcome
As a result of this part of the research, we have a working application on different platforms
and a design prototype. This is the starting point of the research, because further activities were
conducted based on this application.

3.4. From application to prototypes


While developing the MVP application, we realised that development takes an enormous
amount of time. Given the time limitations of the research, we used design prototypes for further
UX/UI development, since they are cheaper to develop. They provide the ability to create
several versions of the product for A/B testing. Also, the resulting application prototype can
include a larger number of new features, so that the comparison with the MVP can be contrasting.

4. Implementation and Design


The Implementation and Design section shows the exact steps that we undertook during the research.
It also describes the tools used to implement the application and to test upgrades with customers.

4.1. Customer Development


The first step before the development of the application is customer development. At this stage,
we formulated the idea of the application and conducted interviews with five potential users to
ask their opinion.

The interviews showed that the idea of the application was interesting to them. Moreover, the
interviewees helped formulate the main features: regularity of contacts, notifications, and streak
statistics that motivate people to contact friends regularly to keep a streak going.

With the information collected from the interviews, the first simple prototype was developed.
It had the basic functionality: tasks for contacting people were part of the application, with
different task and habit control types.

After the development, the prototype was published for usability testing among several users.
It was unmoderated usability testing in a natural environment. During the usability testing,
we received feedback that the application was overloaded. Hence, we decided to redesign the
application and fork the functionality of contacting people into a separate application.


4.2. UI Design
When the application concept was ready and the analysis of the required features was complete,
we started to create the UI design. Creating a UI prototype reduces the cost of making changes
to the interface during development, as developers can analyse the current interface and test it
with customers without spending resources on development.
For the UI design prototype, we used Figma. There were several iterations of prototyping:

(i) Abstract prototype: shows the preliminary layouts of the application and the features
present in it, and represents the business logic
(ii) Final interface: shows the exact view of the application, including colours, buttons, layouts
and other visual elements of the interface

After the design was developed, revised and updated, the implementation stage started.

4.3. Application Development


The application development phase consisted of developing the application on the iOS and
Android platforms by separate developers in parallel, with an iterative approach and weekly
meetings for synchronisation and discussion. This approach helped to make changes during the
process without a global redesign of the application and without high time costs.

We chose a minimal set of features for the product to minimise the time needed to deliver it to
customers. Further increments were planned in the feature roadmap, so as to start a stream of
users into the application and continue upgrading it.

After the MVP was released, basic event tracking was integrated to further
understand and analyse users' behaviour inside the application. The event tracking captures
a set of actions that users perform during a session in the application, from which we
can learn common usage patterns and derive metrics that reveal the weaknesses of the
application.
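
The paper does not name the analytics tool used, so the following is only a generic sketch of what such event tracking could look like: events are posted as JSON to a hypothetical collector endpoint.

```python
# Sketch: a minimal event-tracking client (endpoint is hypothetical).
import json
import time
import urllib.request

ANALYTICS_URL = "https://analytics.example.com/events"  # assumed endpoint

def track(user_id: str, name: str, **properties) -> None:
    """Send one user-interaction event to the analytics collector."""
    body = json.dumps({"user": user_id, "event": name,
                       "ts": time.time(), "props": properties}).encode()
    req = urllib.request.Request(ANALYTICS_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)  # fire-and-forget in this sketch

# Example: record that a user reported contacting a person.
# track("u42", "report_contact", screen="Home")
```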

4.4. MVP Usability Testing


When the MVP was published, the next step was to evaluate users' opinions about it. For this,
we chose a target audience of 5 people, who were asked to start using the application without
any additional description. The idea was to investigate how intuitive the interface is; in other
words, how easy it would be for users to learn how to use the application. During the session,
they gave their comments to the moderator. After the session, users were asked to fill in a form
with several questions that required grades ranging from one to seven as answers. Those
questions were targeted at learning how easy it was to understand the goal of the application, how
easy it was to understand how to use it, how relevant the application is for people, and how they
liked the application overall. There was also a section for additional comments.

4.5. Design Prototypes


The insight from the development process is that it is costly to produce a working application
just for testing UX/UI. The better choice is to create design prototypes and validate them with
users. This approach allowed us to test several features with users without direct development.
Moreover, it allowed us to upgrade the prototypes according to users' feedback.

For prototype-based testing, we used a combination of the Figma and Maze tools. We used
Figma to create the interfaces and simulate their behaviour with a prototype. The created Figma
prototypes were then uploaded to Maze. Maze allows creating tasks for interviewees to follow a
path through the application prototype. For example, we could ask users to create a contact using
the uploaded prototype. Moreover, Maze supports simple questions, so we could conduct
an interview after the tests.


4.6. On-boarding Usability Testing


From the MVP usability testing, we learned that users struggle to identify the application's goal
and to understand how to use it. Hence, we decided to introduce on-boarding screens to describe the
application's main features before the first start. At this stage, the prototype was developed
with Figma and then tested with Maze.
To evaluate the efficiency of the on-boarding, we prepared a usability test. The first task for
users was to pass the tutorial. After that, users performed simple tasks simulating the basic
functionality of the application. In the interview, users were asked to describe the application's
purpose in an open form. The goal of this activity was to identify how well users understand the
application after passing the starting tutorial.

4.7. A/B Testing: Bottom menu redundancy


Another insight from the MVP usability testing was that the buttons are not obvious and that the
bottom menu is redundant for the application. To understand the problem in more detail, we
analysed the usage of the buttons in the bottom menu and asked users' opinions about them. We
concluded that the Learn section button is rarely used and can be hidden in the header or in a
hidden menu. Also, we found that the Add Person button should not be in the bottom menu,
because it is not navigation between screens but an action that the user performs, so
this button should be more prominent and located distinctively.

Using these insights, we created two versions of the prototype with new layouts. The prototypes
are presented in the Results section. For these two versions, we recruited two distinct groups
of people. These groups had roughly equal backgrounds and knowledge. They were asked to
complete several tasks using the tested buttons: one was to add a new contact to the
list, and the second was to read an article from the Learn section.

To evaluate the efficiency of the versions and to compare them objectively, we chose the following
criteria:
(i) Average time to perform the task
(ii) Average rating given by users in the interview after the test
(iii) Additional comments and opinions in the open form

5. Results
During the research, several artefacts were produced by the development team. These artefacts
include several prototypes, a mobile application on two different platforms, and usability tests.
Twenty-five people were involved as interviewees to help define weaknesses in the existing
product and in the planned features to be developed.

The Results section contains the results of the usability tests and user
interviews. It does not include the results of the development process. The first reason is
that they were presented throughout the paper with screenshots and exhaustive
descriptions. The second reason is that the paper's topic is working with the audience to
understand their demands, while application development is a supporting process for achieving the
primary target.

The research consists of usability testing of the MVP product, verification of several new
features at the design stage, and testing of the prototype that includes those newly validated features.

5.1. MVP Usability Testing and Interviews


The first MVP usability testing consisted of unmoderated, natural use of the application by users
who tried it for the first time. Their task was to learn how to use the application, understand the
application's goal, and try all available features. They commented on their actions and feelings
during the tasks.


The following comments and observations came from the usability testing:
(i) Users did not know how to delete a person from the list of contacts
(ii) Users filled in their own name and contact details in the form for adding contacts. That
means they did not correctly understand the purpose of the application and of this screen:
they thought it was a registration form
(iii) The bottom menu is redundant; users misunderstand the meaning of its buttons and
expect those buttons in other areas
After using the application, users completed the quiz. They were asked to answer questions
with grades ranging from 1 to 7, where 1 is a bad grade and 7 is excellent. The questions and
average results are given in Table 1.

Question                                                                       Grade
How easy was it to find out what the application is about?                      5.6
How relevant is the application for you?                                        5.0
After starting the application, how easy was it to find out what to do next?    5.2
What is your overall feeling about the application; how do you like it?         5.4

Table 1: Results of the interview about the MVP

5.2. On-boarding Usability Testing


From the usability testing, we learned that users could not quickly understand the application's
purpose and how to use it. This was derived from the click-stream analysis: 40% of users filled in
the form for adding contacts with their own credentials instead of adding their friends' contacts.
They realised the idea of the application only after several actions, which took around 1 minute
on average.

To tackle this issue, we created a prototype of the on-boarding screens that
would appear at the first launch of the application. After designing the prototype, we started
usability testing of the hypothesis that these screens help users understand how to use the
application.

The prototype was designed in Figma. Afterwards, it was uploaded to the Maze platform
to perform UX research with users. The test was simple: pass the on-boarding screens,
perform basic tasks with the application, and describe the application. The hypothesis was that
users would be able to describe the application's purpose after passing the on-boarding screens.

The results and insights of that usability testing are:
(i) 75% of users answered correctly about the goal of the application
(ii) 25% of users skipped the on-boarding and were not able to answer correctly

Even though passing the on-boarding was a straightforward task for the interviewees, 25%
were tempted to skip it from the first screen. To correct this, we decided to make the "Skip"
button smaller and less attractive; that is the insight we collected from the usability testing.
This change reduced the rate of skipping the tutorial to 15%.

On the other hand, all users who passed the tutorial were able to answer the question about
the application's purpose. Hence, the on-boarding feature was deemed successful, and we
decided to promote it to the development stage.


5.3. Bottom Menu A/B Testing


Another insight from the MVP usability testing was the redundant bottom menu. For this issue,
we designed two versions of the application prototype with different button positions. Those
prototypes are presented in Figure 6.

Figure 6: Screenshots of prototypes A and B

To decide which design would be accepted for development, we performed A/B testing.
Two batches of interviewees were chosen; there were six users in each batch, and they did not
intersect in any way during the interviews. Moreover, users did not see the alternative version of
the prototype. Our goal was to determine users' subjective opinion about the redesign and to compare
objective metrics such as the time for completing the tasks.

The interview contained several tasks that required the usage of the observed buttons, questions
with grades to evaluate subjective opinion, and a free-text question for leaving comments
on the prototype. Also, the Maze platform supports click-stream analysis and
provides time metrics for each task, so we were able to analyse objective metrics.
The metrics and average results for both prototypes are presented and compared in Table 2.

Metric                                                 Prototype A   Prototype B
How easy was it to complete the tasks? (grade, 1-7)        6.3           5.7
Time to complete tasks                                    21.2          17.8

Table 2: Results of the A/B testing

Additional comments stated that it was not obvious how to find the "Content" section in
prototype A, while there were misunderstandings with prototype B. These misunderstandings
were caused by tool limitations rather than by design issues, which also distorted the average
grade from users.

With all these results, the second prototype was selected as the leading one, because tasks
took less time with it and the prototype itself did not cause the lower grades.

5.4. Final Prototype Usability Testing


Once the new features had been investigated and designed, they were included in the complete
prototype of the application, which implements all the features the application has. New usability
testing started with this prototype. It was meant to be identical to the usability testing of the MVP,
but it differed slightly because the MVP is an actual product while the prototype is not. However,
we copied the questions from the initial testing. The only difference between the experiments is
that we prepared tasks for the second one, so it was scripted, whereas the initial testing was
natural use.
The results of the testing are presented in Table 3.

Question                                                                   Grade
How easy was it to find out what the application is about?                  5.4
How relevant is the application for you?                                    3.6
What is your overall feeling about the application; how do you like it?     4.9

Table 3: Results of the interview about the final prototype

6. Conclusion
During the research, we implemented three data-driven approaches, namely usability testing,
A/B testing, and click-stream analysis. Furthermore, we applied several processes at different
stages of the application development, specifically the implementation of an MVP version of the
product and prototyping. With the results we received from this work, we can identify
different approaches to data-driven UX/UI design and map them to the most appropriate
stage of the product.

Development of the MVP product enabled us to publish the product to the market, find
real users for the application, and integrate event analytics to analyse user behaviour. On
the other hand, the development and release of the MVP was a time-consuming and expensive
process: the case study required 3-4 months to produce the first version. Moreover, it requires
technical knowledge in the field of application development.

Designing prototypes is a cheaper approach to creating a UX/UI and testing it with users. In the
case project, it required 1-2 weeks to create a prototype. After that, the team was able to
experiment and test different versions of the prototype to improve the initial UX/UI of the
product.

To summarise, the recommendation for beginner development teams is not to use the
MVP-implementation approach at the early stage, when the product design and potential value are
still being formed and validated. Instead, prototyping should be used to establish the initial product
design and to validate the product's value against user needs. Moreover, the prototyping approach
can be used at both early and later stages. Early-stage projects can save time and resources when
testing the product value hypothesis, and it is less expensive to implement changes in the initial
design. For advanced projects, the prototyping approach can be used to design new features:
the team can first create a prototype of the application with the new feature, test it with
users, collect their feedback, iterate on the design, and then proceed to implementation.

References
[1] William Albert and Thomas Tullis. Measuring the user experience: collecting, analyzing, and presenting
usability metrics. Newnes, 2013.
[2] Nursultan Askarbekuly, Andrey Sadovykh, and Manuel Mazzara. Combining two modelling approaches:
GQM and KAOS in an open source project. Open Source Systems, IFIP AICT 582, pages 106–119, 2020.
[3] Nursultan Askarbekuly, Alexandr Solovyov, Elena Lukyanchikova, Denis Pimenov, and Manuel Mazzara.
Building an educational product: Constructive alignment and requirements engineering. pages 358–365,
2021.
[4] J.M. Christian Bastien. Usability testing: a review of some methodological and technical aspects of the
method. International Journal of Medical Informatics, 2010.


[5] Jan Bosch. Building products as innovation experiment systems. In International Conference of Software
Business, pages 27–39. Springer, 2012.
[6] Alan Brown, Jerry Fishenden, and Mark Thompson. Digitizing government. New York, Palgrave Macmillan,
2014.
[7] Randolph E. Bucklin, James M. Lattin, Asim Ansari, Sunil Gupta, David Bell, Eloise Coupey, John D. C.
Little, Carl Mela, Alan Montgomery, and Joel Steckel. Choice and the internet: From clickstream to
research stream. Marketing Letters, pages 245–258, 2002.
[8] Thomas Crook, Brian Frasca, Ron Kohavi, and Roger Longbotham. Seven pitfalls to avoid when running
controlled experiments on the web. In Proceedings of the 15th ACM SIGKDD international conference on
Knowledge discovery and data mining, pages 1105–1114, 2009.
[9] Joseph S. Dumas. User-based evaluations. The Human-Computer Interaction Handbook, pages 1093–1117,
2002.
[10] Avi Goldfarb. Analyzing website choice using clickstream data. The Economics of the Internet and E-
commerce, 2002.
[11] Mike Gualtieri. Best practices in user experience (ux) design. Design Compelling User Experiences to Wow
your Customers, pages 1–17, 2009.
[12] Ragnhild Halvorsrud, Knut Kvale, and Asbjørn Følstad. Improving service quality through customer
journey analysis. Journal of Service Theory and Practice, 2016.
[13] Eric J. Johnson, Wendy W. Moe, Peter Fader, Steven Bellman, and Johannes Lohse. On the depth and
dynamics of world wide web shopping behavior. Upper Saddle River, 2004.
[14] Joseph S. Dumas and Janice C. Redish. A practical guide to usability testing. Intellect Books, 1999.
[15] John Karat. User-centered software evaluation methodologies. The Human-Computer Interaction Handbook,
pages 689–704, 1997.
[16] Ron Kohavi, Randal M Henne, and Dan Sommerfield. Practical guide to controlled experiments on the
web: listen to your customers not to the hippo. In Proceedings of the 13th ACM SIGKDD international
conference on Knowledge discovery and data mining, pages 959–967, 2007.
[17] James Lewis. Usability testing. Handbook of human factors and ergonomics, 2006.
[18] Walid Maalej, Maleknaz Nayebi, Timo Johann, and Guenther Ruhe. Toward data-driven requirements
engineering. IEEE Software, 33(1):48–54, 2015.
[19] Tea Mijač, Mario Jadrić, and Maja Ćukušić. In search of a framework for user-oriented data-driven
development of information systems. Economic and Business Review, 21(3):439–465, 2019.
[20] Wendy W. Moe. Buying, searching, or browsing: differentiating between online shoppers using in-store
navigational clickstream. Journal of Consumer Psychology, 2003.
[21] Wendy W. Moe and Peter Fader. Uncovering patterns in cybershopping. California Management Review,
2001.
[22] Wendy W. Moe and Peter Fader. Capturing evolving visit behavior in clickstream data. Journal of Interactive
Marketing, 2004.
[23] Alan L. Montgomery, Shibo Li, Kannan Srinivasan, and John C. Liechty. Modeling online browsing and path
analysis using clickstream data. Marketing Science, pages 579–595, 2004.
[24] Young-Hoon Park and Peter S. Fader. Modeling browsing behavior at multiple websites. Marketing Science,
pages 280–303, 2004.
[25] Kerry Rodden, Hilary Hutchinson, and Xin Fu. Measuring the user experience on a large scale: User-
centered metrics for web applications. SIGCHI Conference on Human Factors in Computing Systems,
pages 2395–2398, 2010.
[26] Stephanie Rosenbaum, Janice Anne Rohn, and Judee Humburg. A toolkit for strategic usability: results
from workshops, panels, and surveys. In Proceedings of the SIGCHI conference on Human factors in
computing systems, pages 337–344, 2000.
[27] J. Rubin and D. Chisnell. How to plan, design and conduct effective tests. Handbook of Usability Testing,
1994.
[28] Jeffrey Spiess, Yves T’Joens, Raluca Dragnea, Peter Spencer, and Laurent Philippart. Using big data to
improve customer experience and business performance. Bell Labs Technical Journal, pages 3–17, 2014.
[29] Richard Berntsson Svensson, Robert Feldt, and Richard Torkar. The unfulfilled potential of data-
driven decision making in agile software development. In International Conference on Agile Software
Development, pages 69–85. Springer, 2019.
[30] Giordano Tamburrelli and Alessandro Margara. Towards automated a/b testing. Lecture Notes in Computer
Science, pages 184–198, 2014.
[31] Bruce D. Temkin. Customer experience boosts revenue. Forrester Research Document, page 12, 2009.
