Abstract. The most common reason for software product failure is misunderstanding user
needs. Analysing and validating user needs before developing a product can prevent
such failures. This paper investigates several data-driven techniques for user research and
product design through prototyping, customer validation, and usability testing. The authors
implemented a case study software product using the proposed techniques and analyse how
the application of UX/UI research techniques affected the development process. The case study
results indicate that preliminary UX/UI research before development reduces the cost of
product changes. Moreover, the paper proposes a set of metrics for testing the effectiveness of
UX/UI design.
1. Introduction
Inability to understand and meet user needs is one of the most common reasons for software
product failure [6]. To target the problem, companies conduct user research to learn user demands
and solve their problems through the product. An important related aspect is user experience
(UX). According to [11], products with a poor user experience have 41% higher abandonment
rates. For those products, it means that users are likely to stop using the application after a
trial, which also means that users are likely to stop bringing money to the company.
User Experience and User Interface (UX/UI) is a field that studies and develops the structure
and details of the interactive parts of a software application that are visible to the end-user.
It covers both the visual parts, such as design, element layout, colours and icons, and the user's
experience provided by the overall logic of the application and the actions that the user must
perform to complete a task. A thoughtful UX/UI design results in higher conversion rates and
purchase probability, customer loyalty, and also leads to a higher recommendation rate and less
abandonment [31].
This paper implements UX/UI techniques in a case study and analyses their effectiveness
through a set of metrics. The following section reviews previous research on UX/UI techniques
and data-driven approaches to product development. Section 3 presents the methodology of how
these UX/UI design practices were implemented in the case study project and proposes a system
of metrics to analyse the effectiveness of the approaches. Finally, the paper concludes with the
results and a discussion of the effectiveness of the UX/UI techniques.
Content from this work may be used under the terms of the Creative Commons Attribution 3.0 licence. Any further distribution
of this work must maintain attribution to the author(s) and the title of the work, journal citation and DOI.
Published under licence by IOP Publishing Ltd
ITTCS 2021 IOP Publishing
Journal of Physics: Conference Series 2134 (2021) 012020 doi:10.1088/1742-6596/2134/1/012020
2. Related work
This section reviews popular UX/UI product design techniques. It also covers related data-driven
approaches to UX/UI design, the data types commonly used in them, and some of the limitations
of data-driven approaches. UX/UI allows companies to gain a competitive advantage. However,
producing an effective product interface and logic requires a deep understanding of user
needs and natural behaviour. Data-driven UX/UI design approaches provide methodologies
for understanding users' demands by collecting and processing data generated or provided by
users. Section 3 presents the methodology of how the described approaches were
implemented and analysed in the case study.
2.1.1. Click-stream Analysis Clickstream analysis is a technique for collecting and analysing
the path and the behaviour that consumers produce in the application [7]. In other words, the
observer can automatically collect and analyse all events that the consumer produces. Several
types of events can be collected (Figure 1):
(i) Clicks: any clickable buttons can be recorded
(ii) Inputs: forms can collect users’ inputs
(iii) API calls: any calls to the external API can be tracked with the parameters sent to the
server
(iv) Errors: misclicks, mistakes in filling inputs or system errors are also logged for analysis
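As an illustration, the four event types above could be captured with a small logger. The sketch below is a minimal Python illustration only; the event names, fields and screen identifiers are our own assumptions, not part of the case study's implementation:

```python
import json
import time
from dataclasses import dataclass, field, asdict

@dataclass
class ClickstreamEvent:
    kind: str          # "click" | "input" | "api_call" | "error"
    target: str        # UI element or endpoint involved
    payload: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

class ClickstreamLogger:
    """Collects the four event types described above for later analysis."""
    def __init__(self):
        self.events = []

    def record(self, kind, target, **payload):
        self.events.append(ClickstreamEvent(kind, target, payload))

    def export(self):
        # Serialise the session so an analyst can replay the user's path.
        return json.dumps([asdict(e) for e in self.events])

# Hypothetical session: a user taps a button, fills a form, triggers an
# API call, and hits a validation error.
log = ClickstreamLogger()
log.record("click", "add_contact_button")
log.record("input", "contact_name_field", value="Alice")
log.record("api_call", "/contacts", method="POST")
log.record("error", "contact_form", message="phone number missing")
print(len(log.events))  # 4 events captured
```

A real implementation would ship such events to an analytics backend rather than keep them in memory; the in-process list is only for illustration.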
Clickstream analysis can be applied across several applications or websites
[10], [13], [24] to understand how users search for information and
choose the website that fits them best. However, we mostly focus on the analysis within
a single application [23], [20], [22], [21]. The analysis of a single application can help identify the
target audience, understand the most common patterns of behaviour within the application,
and reveal the inconveniences that users encounter while using it.
2.1.2. Usability Testing Usability testing is a technique for evaluating the UX/UI of an
application in direct cooperation with customers. During usability testing, users are asked
to complete typical tasks simulating the real usage of the application. In other cases, users
can be asked to use the application freely, without specific tasks. The observer records the time
required for completing tasks or actions, the completion rate, and the errors that users make
during the test [4], [26].
The main goal of usability testing is to evaluate the UX/UI with objective metrics collected
from customers. It helps to find typical errors or complex actions that users encounter in their
natural behaviour.
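The quantities recorded during such sessions can be aggregated into standard usability measures. The following sketch is a hypothetical Python illustration of computing completion rate, mean time on task and mean error count; the per-task record format and the sample values are assumptions for the example, not data from the case study:

```python
from statistics import mean

# Hypothetical records from a moderated session: one entry per (user, task).
sessions = [
    {"task": "add_contact",    "completed": True,  "seconds": 34.0, "errors": 0},
    {"task": "add_contact",    "completed": True,  "seconds": 41.5, "errors": 1},
    {"task": "add_contact",    "completed": False, "seconds": 90.0, "errors": 3},
    {"task": "delete_contact", "completed": True,  "seconds": 25.0, "errors": 0},
]

def usability_metrics(records):
    """Completion rate, mean time on task and mean error count,
    in the spirit of the measures described in [4], [26]."""
    return {
        "completion_rate": sum(r["completed"] for r in records) / len(records),
        "mean_time_s": mean(r["seconds"] for r in records),
        "mean_errors": mean(r["errors"] for r in records),
    }

print(usability_metrics(sessions))
# completion_rate 0.75, mean_time_s 47.625, mean_errors 1.0
```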
There is a commonly used procedure for performing usability testing, documented by [9],
[14], [15], [27], [17] and [1].
2.1.3. A/B Testing A/B testing is a practice of comparing different implementations of a
feature to analyse which one is more efficient in terms of objective metrics and users' opinions.
Two (or more) implementations are delivered to users independently, so that each user
receives only one version of the product without any knowledge of the other. After a trial
usage, the tester collects the necessary data about execution time, events, user behaviour and
other data from the experiment. In the end, the user completes a feedback form describing their
satisfaction with the given version of the product [30].
As Fig. 2, proposed by [30], shows, A/B testing is an iterative methodology. Each round
produces a winning prototype, which can be compared with an all-new prototype in the next
iteration.
As research states, A/B testing is commonly used in the industry [16]. However, some
drawbacks should be mentioned in the description of the method: it is costly to run A/B
tests manually, and they are not easy to manage [8]. An automated approach can be applied to
tackle these drawbacks.
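The requirement that each user receives exactly one variant, consistently across sessions, can be met with deterministic bucketing. The sketch below is a hypothetical Python illustration of hash-based assignment, one common approach to automating this step; the function and experiment names are our own:

```python
import hashlib

def assign_variant(user_id, experiment, variants=("A", "B")):
    """Deterministically bucket a user so they always see the same
    variant and never learn about the alternative."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user always lands in the same bucket across sessions.
assert assign_variant("user-42", "redesign") == assign_variant("user-42", "redesign")

# Over many users, the hash spreads assignments roughly evenly.
buckets = [assign_variant(f"user-{i}", "redesign") for i in range(1000)]
print(buckets.count("A"), buckets.count("B"))  # roughly a 50/50 split
```

Because assignment depends only on the user id and the experiment name, no per-user state needs to be stored, which is one way to reduce the management cost mentioned above.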
2.2.1. Why do we need data-driven approaches? Companies usually rely on the opinions of
senior developers for decisions about the future feature list [5]. Their opinions are usually based
on previous experience. On the other hand, customers' needs constantly change over time [12].
Hence, previous experience becomes irrelevant, and companies might misunderstand their
users. For that reason, they need a data-driven approach to collect objective information in
order to produce a relevant feature list.
2.2.2. Types of Data Research states that the amount of data generated by users while they are
using the application is increasing exponentially [28]. In this reality, it must be clearly defined
what kind of data might be collected and processed and how to interpret it to match the goal
described in Section 2.2.
Data can be classified in three ways [19]:
(i) By the way of interaction with users
(ii) By the classification of metrics
(iii) By the source of data
Data classified by way of interaction with users can be divided into two groups [18]:
(i) Active - data generated in direct communication with users via surveys, feedback forms and
interviews
(ii) Passive - data collected automatically from users’ actions like mouse clicks, eye-focus,
keyboard actions and logs
The classification by metrics divides the collected data into the following groups [25]:
(i) PULSE metrics: more statistical data such as the number of pages viewed, income
information, activities and others
(ii) HEART metrics:
(a) Happiness - users' satisfaction with the aesthetic part and usability of the app
(b) Engagement - frequency or duration of the usage of the app
(c) Adoption - statistics about unique new users in a given period
(d) Retention - statistics about users who are still using the app after a given period
(e) Task success - how well the app solves the tasks that the user demands, calculated
using the time to perform the task, the application's efficiency and the error
rate.
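As a sketch of how the behavioural HEART metrics could be derived from activity logs, the hypothetical Python example below computes adoption, retention and engagement for one reporting period; the log format, user ids and dates are invented for illustration:

```python
from datetime import date

# Hypothetical per-user activity log: user id -> set of active days.
activity = {
    "u1": {date(2021, 9, 1), date(2021, 9, 2), date(2021, 9, 20)},
    "u2": {date(2021, 9, 1)},
    "u3": {date(2021, 9, 18), date(2021, 9, 19)},
}
signup = {"u1": date(2021, 9, 1), "u2": date(2021, 9, 1), "u3": date(2021, 9, 18)}

period_start, period_end = date(2021, 9, 15), date(2021, 9, 30)

# Adoption: users whose first activity falls inside the period.
adoption = sum(period_start <= signup[u] <= period_end for u in signup)

# Retention: users who signed up earlier and are still active in the period.
retention = sum(
    signup[u] < period_start and any(period_start <= d <= period_end for d in activity[u])
    for u in activity
)

# Engagement: mean number of active days per user within the period.
engagement = sum(
    sum(period_start <= d <= period_end for d in activity[u]) for u in activity
) / len(activity)

print(adoption, retention, engagement)  # 1 new user, 1 retained user, 1.0 days
```

Happiness and task success, by contrast, need survey answers and task-level measurements and cannot be read off a passive activity log alone.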
Another commonly used classification for data is the source the data comes from. Data
from different sources have different attributes and represent different information, which is why
we are interested in this kind of classification. There are several possible sources, either internal
to the app or external from third-party software [29]:
(i) Logs, which reflect different states of the application and their transitions triggered by the
user's actions. Logs can represent mouse clicks, keyboard input, task completion, errors,
timestamps and action durations, scrolling events and other possible events happening in
the software.
(ii) Visitor metrics, collected by third-party software that gathers and presents various
metrics. External tools like Google Metrics can provide information about traffic
inside the application, the user's activity and other events.
(iii) Visual metrics, which are metrics visible and understandable to the user. They include
heat maps, eye-movement tracking and video recordings of the user's session.
3. Methodology
This section describes the process and methods used during the research. That includes a
description of evaluation criteria for the research and methods used in the development. The
research aims to investigate and evaluate practices to communicate with users to provide
solutions that will satisfy their needs. It is necessary to understand and then solve their problems
to achieve users’ satisfaction.
(i) The first step is Usability Testing: users get the application or its prototype for free
exploration in a one-on-one meeting with a tester. Users try to use the application in a natural
environment and comment on things they notice about their experience.
(ii) The second step is an interview with the tester: the user leaves comments on positive and
negative points that they noticed during the test session.
(iii) The last step is Customers' Feedback: the tester asks the user to fill in a form with questions
asking them to evaluate aspects of their experience with a grade from 1 to 7.
3.2. Case-Study
The platform for this case study is a mobile application, "My People", that helps its users keep
in touch with friends, family, and relatives. To achieve this goal, several features were
implemented:
(i) Add and prioritize contacts: the user can add a person, define their category and choose
how often they want to contact the person.
(ii) Notifications: when it is time to contact a person, the application sends a push notification
with a reminder to call or message the person
(iii) Report: the user can report that they contacted a person, so that the application counts
interactions and provides statistics about previous interactions.
(iv) Content: there is a section with various content about the psychology of communication and
interaction. The content aims to introduce an educational process inside the application.
The development process consisted of the following steps:
(i) Customer development: first, several interviews with potential users were conducted to
understand the relevance of the suggested application in the real world. Interviews were
designed to understand the potential demand for the application, the potential audience, and
use-cases.
(ii) Design prototypes: the user interface and functionality were designed before implementation
to decrease the cost of changes at the initial stage.
(iii) Implementation: MVPs of the iOS and Android applications were developed and pushed to
the respective marketplaces.
3.2.1. Customer Development The first step in any development is customer development. It
is costly to develop a product without understanding the demand among people. To understand
it, we organized several one-on-one interviews with the target audience. The goal of the activity
was to validate that the application is needed. Preliminary interviews showed interest among
users, so the research was deemed successful.
3.2.2. Design prototypes After receiving feedback about demand and possible use-cases of the
application, we established a minimal set of required features and designed a prototype. For
design, we used Figma because it contains the necessary set of tools specifically for UI design.
3.2.3. Implementation A Minimum Viable Product (MVP) was developed for both the iOS and
Android platforms. The development process was organized iteratively, with one-week sprints
and weekly meetings to validate progress, discuss and plan further work. Tasks were distributed
among developers using a Trello board. This approach partially implements an agile framework,
which means that changes in the design are not costly and can be discussed in time. A
young product benefits from an agile approach because there may be changes during the design
and development process.
3.3. Outcome
As a result of this part of the research, we have a working application on different platforms
and a design prototype. This is the starting point of the research, because further activities
were conducted based on this application.
4.2. UI Design
When the application concept was ready and the analysis of required features was complete, we
started to create a UI design. Creating a UI prototype reduces the cost of making changes
to the interface during development, since developers can analyse the current interface and test
it with customers without spending resources on development.
For the UI design prototype, we used Figma. There were several iterations of prototyping:
(i) Abstract prototype: shows the preliminary layouts of the application and the features
present in it, and represents the business logic
(ii) Final interface: shows the exact view of the application, including colours, buttons, layouts
and other visual elements of the interface
After the design was developed, revised and updated, the implementation stage started.
5. Results
During the research, several artefacts were produced by the development team. These artefacts
include several prototypes, a mobile application on two different platforms, and usability tests.
Twenty-five people were involved as interviewees to help define weaknesses in the existing
product and in the planned features to be developed.
The Results section contains the results of the usability tests and user
interviews. It does not include the results of the development process. The first reason is
that they were presented throughout the paper several times with screenshots and exhaustive
descriptions. The second reason is that the paper is about working with the audience to
understand their demands, while application development is a supporting process for achieving
the primary target.
The research consists of the usability testing of the MVP product, verification of several new
features at the design stage, and testing of the prototype including those new validated features.
The following comments and observations came from the usability testing:
(i) Users did not know how to delete a person from the list of contacts
(ii) Users filled in their own name and contacts in the form for adding contacts, which means
they did not correctly understand the purpose of the application and of this screen:
they thought it was a registration form
(iii) The bottom menu is redundant: users misunderstood the meaning of its buttons and
expected those buttons in other areas.
After using the application, users completed a quiz. They were asked to answer questions
with grades ranging from 1 to 7, where 1 is poor and 7 is excellent. The questions and average
results are presented in table 1.

Question | Grade
How easy was it to find out what the application is about? | 5.6
How relevant is the application for you? | 5
After starting the application, how easy was it to find out what to do next to use it? | 5.2
What is your overall feeling of the application, how do you like it? | 5.4
To decide which design would be accepted for development, we performed A/B testing.
Two batches of interviewees were chosen, with six users in each batch, and the batches did not
intersect in any way during the interviews. Moreover, users did not see the alternative version of
the prototype. Our goal was to determine users' subjective opinions about the redesign and
compare objective metrics such as the time for completing the tasks.
The interview contained several tasks requiring the use of the observed buttons, questions
with grades to evaluate subjective opinion, and a free-text question for leaving comments on
the prototype. Also, the Maze platform supports clickstream analysis and provides time metrics
for each task, so we were able to analyse objective metrics.
Metrics and average results are presented in table 2 and compared for both prototypes.

Metric | Prototype A | Prototype B
Question: How easy was it to complete tasks (1-7) | 6.3 | 5.7
Time to complete tasks | 21.2 | 17.8
Additional comments stated that it was not obvious how to find the "Content" section with
prototype A, while there were misunderstandings with prototype B. These misunderstandings
were caused by tool limitations rather than by design issues, which also distorted the average
grade from users.
With all these results, the second prototype was selected as the leading one, because tasks
took less time with it, and the prototype itself did not cause the lower grade.
Usability testing started with this prototype. It was intended to be identical to the usability
testing of the MVP, but it differed slightly, because the MVP is an actual product while the
prototype is not. However, we copied the questions from the initial testing. The only difference
between the experiments is that we prepared tasks for the second one, so it was scripted, while
the initial testing was natural use.
The results of the testing are presented in table 3.
Question | Grade
How easy was it to find out what the application is about? | 5.4
How relevant is the application for you? | 3.6
What is your overall feeling of the application, how do you like it? | 4.9
6. Conclusion
During the research, we implemented three data-driven approaches, namely usability testing,
A/B testing, and clickstream analysis. Furthermore, we applied several processes at different
stages of the application development, specifically the implementation of an MVP version of the
product and prototyping. With the results that we received from this work, we can identify
different approaches to data-driven UX/UI design and map them to the most appropriate
stage of product development.
Development of the MVP product enabled us to publish the product to the market, find
real users for the application, and integrate event analytics to analyse user behaviour. On
the other hand, the development and release of the MVP was a time-consuming and expensive
process. The case study required 3-4 months to produce the first version. Moreover, it requires
technical knowledge in the field of application development.
Designing prototypes is a cheaper approach to create UX/UI and test it with users. In the
case project, it required 1-2 weeks to create a prototype. After that, the team was able to
experiment and test different versions of the prototype to improve the initial UX/UI of the
product.
To summarize, the recommendation for beginner development teams is to not use the MVP-
implementation approach at the early stage when product design and potential value are being
formed and validated. Instead, prototyping should be used to establish the initial product design
and to validate the product’s value against user needs. Moreover, the prototyping approach can
be used both in the early and later stages. Early stage projects can save time and resources to
test the product value hypothesis, and it is less expensive to implement changes in the initial
design. For advanced projects, the prototyping approach can be used to design new features.
The team can first create a prototype of the application with the new feature, test it with the
users, collect their feedback, iterate the design, and then proceed to implementation afterwards.
References
[1] William Albert and Thomas Tullis. Measuring the user experience: collecting, analyzing, and presenting
usability metrics. Newnes, 2013.
[2] Nursultan Askarbekuly, Andrey Sadovykh, and Manuel Mazzara. Combining two modelling approaches:
GQM and KAOS in an open source project. Open Source Systems, 582(106-119):9–17, 2020.
[3] Nursultan Askarbekuly, Alexandr Solovyov, Elena Lukyanchikova, Denis Pimenov, and Manuel Mazzara.
Building an educational product: Constructive alignment and requirements engineering. pages 358–365,
2021.
[4] J.M. Christian Bastien. Usability testing: a review of some methodological and technical aspects of the
method. International Journal of Medical Informatics, 2010.
[5] Jan Bosch. Building products as innovation experiment systems. In International Conference of Software
Business, pages 27–39. Springer, 2012.
[6] Alan Brown, Jerry Fishenden, and Mark Thompson. Digitizing government. New York, Palgrave Macmillan,
2014.
[7] Randolph E. Bucklin, James M. Lattin, Asim Ansari, Sunil Gupta, David Bell, Eloise Coupey, John D. C.
Little, Carl Mela, Alan Montgomery, and Joel Steckel. Choice and the internet: From clickstream to
research stream. Marketing Letters, pages 245–258, 2002.
[8] Thomas Crook, Brian Frasca, Ron Kohavi, and Roger Longbotham. Seven pitfalls to avoid when running
controlled experiments on the web. In Proceedings of the 15th ACM SIGKDD international conference on
Knowledge discovery and data mining, pages 1105–1114, 2009.
[9] Joseph S. Dumas. User-based evaluations. The Human-Computer Interaction Handbook, pages 1093–1117,
2002.
[10] Avi Goldfarb. Analyzing website choice using clickstream data. The Economics of the Internet and E-
commerce, 2002.
[11] Mike Gualtieri. Best practices in user experience (ux) design. Design Compelling User Experiences to Wow
your Customers, pages 1–17, 2009.
[12] Ragnhild Halvorsrud, Knut Kvale, and Asbjørn Følstad. Improving service quality through customer.
Journal of Service Theory and Practice, 2016.
[13] Eric J. Johnson, Wendy W. Moe, Peter Fader, Steven Bellman, and Johannes Lohse. On the depth and
dynamics of world wide web shopping behavior. Upper Saddle River, 2004.
[14] Joseph S. Dumas and Janice C. Redish. A practical guide to usability testing. Intellect Books, 1999.
[15] John Karat. User-centered software evaluation methodologies. The Human-Computer Interaction Handbook,
pages 689–704, 1997.
[16] Ron Kohavi, Randal M Henne, and Dan Sommerfield. Practical guide to controlled experiments on the
web: listen to your customers not to the hippo. In Proceedings of the 13th ACM SIGKDD international
conference on Knowledge discovery and data mining, pages 959–967, 2007.
[17] James Lewis. Usability testing. Handbook of human factors and ergonomics, 2006.
[18] Walid Maalej, Maleknaz Nayebi, Timo Johann, and Guenther Ruhe. Toward data-driven requirements
engineering. IEEE Software, 33(1):48–54, 2015.
[19] Tea Mijać, Mario Jadrić, and Maja Čakušić. In search of a framework for user-oriented data-driven
development of information systems. Economic and Business Review, 21(3):439–465, 2019.
[20] Wendy W. Moe. Buying, searching, or browsing: differentiating between online shoppers using in-store
navigational clickstream. Journal of Consumer Psychology, 2003.
[21] Wendy W. Moe and Peter Fader. Uncovering patterns in cybershopping. California Management Review,
2001.
[22] Wendy W. Moe and Peter Fader. Capturing evolving visit behavior in clickstream data. Journal of Interactive
Marketing, 2004.
[23] Alan L. Montgomery, Shibo Li, Kannan Srinivasan, and John C. Liechty. Modeling online browsing and path
analysis using clickstream data. Marketing Science, pages 579–595, 2004.
[24] Young-Hoon Park and Peter S. Fader. Modeling browsing behavior at multiple websites. Marketing Science,
pages 280–303, 2004.
[25] Kerry Rodden, Hilary Hutchinson, and Xin Fu. Measuring the user experience on a large scale: User-centered
metrics for web applications. SIGCHI Conference on Human Factors in Computing Systems,
pages 2395–2398, 2010.
[26] Stephanie Rosenbaum, Janice Anne Rohn, and Judee Humburg. A toolkit for strategic usability: results
from workshops, panels, and surveys. In Proceedings of the SIGCHI conference on Human factors in
computing systems, pages 337–344, 2000.
[27] J. Rubin and D. Chisnell. How to plan, design and conduct effective tests. Handbook of Usability Testing,
1994.
[28] Jeffrey Spiess, Yves T’Joens, Raluca Dragnea, Peter Spencer, and Laurent Philippart. Using big data to
improve customer experience and business performance. Bell Labs Technical Journal, pages 3–17, 2014.
[29] Richard Berntsson Svensson, Robert Feldt, and Richard Torkar. The unfulfilled potential of data-
driven decision making in agile software development. In International Conference on Agile Software
Development, pages 69–85. Springer, 2019.
[30] Giordano Tamburrelli and Alessandro Margara. Towards automated a/b testing. Lecture Notes in Computer
Science, pages 184–198, 2014.
[31] Bruce D. Temkin. Customer experience boosts revenue. Forrester Research Document, page 12, 2009.