Quality
The essential guide to improving your app’s
performance, stability and testing
2020 Instabug, All Rights Reserved.
About the author
Nezar Mansour
Nezar spends his free time indulging in his favorite desserts or play-
ing games of all sorts from football to PlayStation.
The term quality is often thrown around to describe any sort of product or ser-
vice. In that sense, quality can mean a lot of things. If we’re considering quality
to be how fast or easy-to-use a mobile app is then it gets a bit complicated
because apps come in all shapes and sizes. Some apps attempt to do every-
thing you need, while others focus on a single function and try to do it well.
The way that we define app quality here is the fitness of the app for its in-
tended use. Factors that affect app quality include performance, stability, test-
ing, and usability, and these all come down to the end user experience. Some
performance and stability considerations include: Is it loading fast enough? Is it crashing or freezing too often?
With the abundance of mobile apps available, especially the number of apps
that claim to perform the same function, users are more sensitive to app qual-
ity as a differentiator when deciding which app to use. When users are select-
ing between five apps that all do the same thing, they probably won’t choose
to use the apps that crash, are slow, and are not regularly updated with bug
fixes and performance improvements.
Mobile app quality can be broken down into four equally important areas de-
scribed below. In this book, we’ll focus on the first three: performance, sta-
bility, and testing.
2. Stability is in essence how reliable your app is. If an app is behaving the
way that it should, then it’s considered stable. App stability is commonly associated
with crashes and errors. While it’s almost impossible to avoid crashes
and errors entirely, how often they occur is the measure of your app’s stability,
which in turn affects its quality.
• Crashes: The worst-case scenario of any user session is a crash. A crash
completely ends the user’s session and ruins their experience with your
app. Constant crashes are the biggest indicator of a poor quality app.
• Errors: Errors disrupt the user experience. If users experience too many
errors or too often, they will probably delete your app and download
one of your competitors instead.
• Resources: The most common cause of crashes and errors is resource
constraints. Apps are often developed in a vacuum, but in the wild, an
app runs alongside a host of other apps. If an app consumes too many
resources, be it CPU, memory, or battery, it becomes far more likely to
misbehave or crash.
4. Usability refers to how the user interacts with your mobile application. If
a user finds it difficult to use your app, they are more likely to remove it and
leave a bad review. Usability is a rich topic that warrants its own book, but
when it comes to app quality, here are a few key aspects:
• User-friendliness: How easily your users are able to reach their goals
with your app. If your app is user-friendly, that means it is intuitive,
easy to use, simple, and reliable for your target users. This can
sometimes be hard to quantify and is often relative.
In an ideal world, every mobile team would want to make their app the highest
quality possible. In reality, we are faced with resource allocation constraints,
whether that is time, money, or engineering capacity. Compromising your
app’s quality in favor of something else comes with a lot of consequences.
Here are some of the key reasons why you should avoid releasing a low-qual-
ity app at all costs:
• 49% of users expect apps to start in two seconds or less
• 88% of app users will abandon apps based on bugs and glitches
• 51% would abandon an app completely if they experienced one or more
bugs per day
A study of apps on the Play Store shows that 50% of one-star reviews men-
tioned app stability. Excessive battery usage, slow render times, and crashes
were among the key sources of frustration. Higher quality apps are more dis-
coverable in the Play Store than similar apps of lower quality. Higher quality
apps tend to have more users and fewer uninstalls.
Business impact
Mobile apps are not cheap to develop and maintain. And generating enough
user interest and daily active users (DAU) to increase revenue streams is
a huge challenge. Let’s take a look at what low app quality means for your
business bottom line.
• 65% of app users believe that companies don’t do enough to ensure
better user experience and fail to offer updated versions of apps with
fewer bugs
• 50% of app users are likely to be dissuaded from downloading an app
based on customer reviews that mention bugs and glitches
• 50% of adults under 50 always or almost always read online reviews
Benchmarks to meet
The reasons to avoid having a low quality app are clear, but since every app
is different, how do you know what metrics to measure? When it comes to
stability and performance there are a couple of standard industry bench-
marks that you should always aim to meet.
• 49% of users expect an app to respond in 2 seconds or less
• Based on an analysis of the top 100 apps, 39 apps start in under or
around 2 seconds, and 73 in under 3 seconds. Ideally, you should be
targeting an app launch time of 1.5 to 2 seconds.
Specifically, APM tools help monitor the end-user experience. This includes
metrics like how fast the application opens and how long it takes requests to
execute.
It’s important to have a mechanism for users to give you feedback. First, of-
fering a direct channel where you can collect private comments will prevent
users from posting their complaints on the app stores. Second, you’ll often
discover problems from these bug reports that don’t show up in other places.
Third, a robust bug reporting tool like Instabug will provide you with compre-
hensive data to help you quickly diagnose and fix problems.
But waiting for users to tell you what’s wrong can be very costly, especially
since user feedback can be vague. So while bug reporting is essential, the
best practice is to have an APM tool alongside your bug and crash reporting
tools. An APM tool will alert you about things like slow screen transitions, slow
network calls, and UI hangs that also negatively impact your end users’ expe-
rience. With an APM tool like Instabug’s, you can take a proactive approach
to discover and fix performance issues in your application before these prob-
lems become bug reports.
The best way to use APM is not to work backwards and fix issues only after they arise, but to monitor continuously so you catch regressions before your users feel them.
APM components
According to Gartner Research there are three essential components of APM,
each with their own set of key metrics.
• End-user experience monitoring: tracks how your mobile app is behav-
ing from users’ perspective. This looks into aspects such as load times,
app slowness, or any errors.
• Application discovery, tracing, and diagnostics: assesses how the dif-
ferent components of your app are behaving from a diagnostic per-
spective. Determining whether particular logic is performing as desired
is important to understand how your app is performing overall.
• Application analytics: raw data that you can analyze to spot trends and
draw insights that help inform decision making.
Now that we know the importance of utilizing an APM tool to improve perfor-
mance, it’s time to look at what features you’ll need. To fully cover different
use cases and the three main components of APM, you’ll want an APM tool
that offers this essential set of features.
Instabug’s crash reports offer comprehensive data to help developers fix problems fast
App launches
This is the first gate that your app users face. App launch monitoring shows
you how long users are waiting until the app loads and is responsive. A user’s
first impression is crucial and often sets the tone for the rest of their experi-
ence. If it starts with a long launch time, then you’ve already frustrated your
user before they’ve even got into your app, and they’re more likely to have a
negative overall impression.
You’ll need to monitor your app launches for each of your app versions to
have a complete picture of how your app is performing. You can also set a
target launch time so you’re alerted when a version starts falling behind it.
UI hangs
You need to know if your users are experiencing any UI hangs, or if any part
of your app’s UI is unresponsive. If your app responds slowly, your users will
feel frustrated because they have high expectations. Users have been condi-
tioned to expect quick and accessible experiences, and if they face UI hangs,
it can cause them to abandon your app and hurt their impression of your
brand. UI hangs are assessed by comparing your app’s delay in responding to
a user input to the total amount of time they spend on a certain screen.
Instabug’s APM dashboard shows you everything you need to know about your app’s UI
hangs
Network performance
While it is important to monitor service-side network performance, it doesn’t
represent the whole story. Tracking response times and errors from your us-
ers’ standpoint will also offer key insights about your app’s performance and
quality. Monitoring both will help you determine whether your app’s overall
network performance is poor, average, or good.
With Instabug’s SDK, you can track both client-side and server-side network requests.
Instabug’s APM tool monitors both server-side and client-side network performance
App traces
Another important aspect to measure is how long any client-side logic takes
to complete. An APM tool can automatically fetch app traces and determine
whether they are slow to execute. With Instabug, you can define custom ex-
ecution traces to track, and you can set target thresholds for your traces so
that you can easily spot where performance issues in your code are keeping
you from meeting your KPIs.
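To make the idea of an execution trace concrete, here is a minimal, SDK-agnostic sketch in Java of what a trace measures: the time a piece of client-side logic takes, compared against a target threshold. The class name and threshold are hypothetical; an APM SDK would capture, aggregate, and report this for you.

    public final class SimpleTrace {
        private final String name;
        private final long thresholdMs;
        private final long startNanos;

        private SimpleTrace(String name, long thresholdMs) {
            this.name = name;
            this.thresholdMs = thresholdMs;
            this.startNanos = System.nanoTime();
        }

        // Start timing a named piece of client-side logic.
        public static SimpleTrace start(String name, long thresholdMs) {
            return new SimpleTrace(name, thresholdMs);
        }

        // Stop timing and flag the trace if it missed its target.
        public void end() {
            long elapsedMs = (System.nanoTime() - startNanos) / 1_000_000;
            if (elapsedMs > thresholdMs) {
                // In a real app, report this to your APM tool instead of just logging it.
                System.out.println(name + " took " + elapsedMs + " ms (target " + thresholdMs + " ms)");
            }
        }
    }

    // Usage around any operation you care about, e.g. preparing a local cache:
    // SimpleTrace trace = SimpleTrace.start("load_local_cache", 200);
    // loadLocalCache();
    // trace.end();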
App apdex
With so many granular metrics like network calls, screen hangs, and more, it is hard
to tell how your app is doing overall. The app Apdex score takes several
variables into account and is a numerical representation of your app’s per-
formance. With this quick tell-all score, you’ll have a quantified signal of how
your app is doing. And it makes it much easier to align the team around one
KPI and communicate it to all stakeholders.
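For reference, the industry-standard Apdex formula buckets each measured sample as satisfied, tolerating, or frustrated relative to a target response time T (satisfied at or below T, tolerating at or below 4T), then computes:

    Apdex = (satisfied samples + tolerating samples / 2) / total samples

The result is a value between 0 and 1, where 1 means every sample met the target. The exact thresholds and weighting a given tool applies are its own; this generic formula is shown only to illustrate how many measurements collapse into one score.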
Instabug’s app Apdex score tells you whether your app’s overall user experi-
ence is Unacceptable, Poor, Fair, Good, or Excellent, and you can track it over
time and across versions. For more granular analysis, you can also see indi-
vidual Apdex scores for app launch, specific app traces, and specific network
requests.
More than ever, performance plays a key role in the monetary success of a
mobile app. Here, we’ll take a closer look at the key mobile app performance
metrics that you need to be tracking in order to optimize your app’s perfor-
mance and ultimately improve app quality. These metrics have the biggest
impact on the end-user experience, so it’s important to measure them and
understand what benchmarks to hit in order to meet and exceed users’ ex-
pectations.
App stability
One of the biggest indicators of an unusable app is how often it crashes.
Wait times
An app doesn’t have to be completely unusable for it to be considered of
poor quality and worthy of being uninstalled. A huge part of an app’s perfor-
mance is assessing how long a user has to wait at launch, between screen
transitions, and after performing any requests. Let’s look at the different kinds
of wait times:
• App launch times: Speed is an essential part of app performance. Users
don’t like to wait and they especially don’t like slow apps. How fast your
app launches can set very important first impressions with your users
and goes a long way to retain them. App launch speed is very indica-
tive of the overall quality of your app and tracking it will help assess the
responsiveness of your app. Based on an analysis of the top 100 apps,
39 apps start in under or around 2 seconds, and 73 in under 3 seconds.
Ideally, you should be targeting an app launch time of 1.5 to 2 sec-
onds.
• Network response times: Network calls play a huge role in the speed
and responsiveness of your app. A study shows that to be competitive,
your app should respond to user requests within 1 second.
In addition to having the right tools in your app to monitor performance and
stability and collect user feedback, here are some basic tips for optimizing
your app’s speed and usability.
Network performance
• Offline mode: While this doesn’t affect network performance, it will help
you and your users in the event of network drops. If your users lose network
connection and your app doesn’t have an offline mode, their experience
is completely interrupted, and when they regain connection, your
app has to reload everything again. Offline mode also allows your
users to continue using your app while not connected to the internet
and helps make the transition smoother when they regain network
connection (a minimal caching sketch follows this list).
• General network performance tips:
• Load data as you need it by splitting up assemblies or pre-loading/
pre-fetching data
• Make as few HTTP requests as possible. Although seemingly obvi-
ous, optimizing what each request does makes a huge difference
when you’re operating at a scale of millions of sessions.
• Using a content delivery network (CDN) will help accelerate APIs
and reduce latency for most of your app’s network requests
• Reduce your number of DNS lookups
• Avoid redirects because each one involves a new TCP connection, a TLS
handshake, and a DNS lookup, creating a lot of overhead
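As a concrete illustration of a few of the tips above, here is a minimal sketch in Java using OkHttp, a widely used Android HTTP client (an assumption; your networking stack may differ). It enables an on-disk response cache so repeated requests can be answered locally, and shows how a cached copy can be forced when the device is offline.

    import java.io.File;
    import java.io.IOException;
    import okhttp3.Cache;
    import okhttp3.CacheControl;
    import okhttp3.OkHttpClient;
    import okhttp3.Request;
    import okhttp3.Response;

    public class ApiClient {
        private final OkHttpClient client;

        public ApiClient(File cacheDir) {
            // A 10 MB on-disk cache; cached responses cut down repeat network round trips.
            client = new OkHttpClient.Builder()
                    .cache(new Cache(new File(cacheDir, "http_cache"), 10L * 1024 * 1024))
                    .build();
        }

        // Normal request: served from the cache when the server's cache headers allow it.
        public Response fetch(String url) throws IOException {
            return client.newCall(new Request.Builder().url(url).build()).execute();
        }

        // Offline fallback: force a cached copy, even a stale one, instead of failing outright.
        public Response fetchOffline(String url) throws IOException {
            Request request = new Request.Builder()
                    .url(url)
                    .cacheControl(CacheControl.FORCE_CACHE)
                    .build();
            return client.newCall(request).execute();
        }
    }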
According to a report by Qualitest, 88% of app users will abandon apps based
on bugs and glitches. One of the biggest challenges an app has to over-
come in order to be successful is stability. Making sure that your users have
a smooth and stable experience might not guarantee success, but an unsta-
ble app will definitely guarantee an app’s failure. Next, we’ll discuss mobile
app stability, how to measure it, and how to improve it.
When aiming for high app quality, it’s important to constantly measure and
monitor your app’s stability to make sure it isn’t failing your users.
You could also measure the number of users that experienced a crash. Some-
times, a particular device could be causing a lot of crashed sessions, which
artificially lowers your stability score. The formula to calculate crashes by user
is: (1-(Number of users that experienced a crash / total number of users)) *
100.
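For example, with purely illustrative figures: if 2,000 of your 100,000 users experienced at least one crash, the score would be (1 - (2,000 / 100,000)) * 100 = 98, meaning 98% of your users were crash-free.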
You also want to consider recency. If your app was crashing over a year ago
and hasn’t faced any errors since then, it doesn’t make much sense to look
at absolute numbers. Adding time variables will help paint a more accurate
picture of your app’s current stability.
If most apps are targeting these rates, you won’t want your app to fall below them.
However, your goal should always be to improve against yourself and get the
best stability possible based on your own prior data.
This is why it’s important to have comprehensive crash reports with rich data
that can guide you to make smarter decisions. Instabug’s severity metric, for
example, ranks crashes from least to most severe based on different factors
including affected users. For the complete picture of your app’s quality, you’ll
want to pair Crash Reporting with tools like Application Performance Monitor-
ing so that you can be alerted about factors like long wait times and sluggish
responsiveness and also Bug Reporting so you can collect crucial user feed-
back.
In this chapter we’ll dive into the most common causes of mobile app crashes.
Make sure to cover these basics for a stable, high quality app.
Resource management
It’s expected that developers want to build the best looking apps with hy-
per-fast performance. Sometimes the pursuit of that means severe resource
mismanagement.
One example is using up too much memory as a direct result of running too many memory-intensive operations at once or holding on to objects longer than needed.
The same goes for CPU usage. Managing how processing intensive your app
is in order to avoid consuming too much CPU power is key to keeping your
app from crashing.
Network issues
A perk of the job can sometimes be a curse in disguise. When developing
your mobile app, you’re probably enjoying doing so with the comfort of high-
speed internet and WiFi. The reality of the situation is that most users will not
have such stable connectivity while using your app. A lot of users will be reli-
ant on mobile network data. That means that your users will likely experience
constant connection speed changes as well as frequent drops in connection.
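One hedged sketch of how to make an app more tolerant of flaky connections, again assuming an OkHttp-based stack: set bounded timeouts so slow networks can’t hang the app indefinitely, and let the client retry transparently after brief drops.

    import java.util.concurrent.TimeUnit;
    import okhttp3.OkHttpClient;

    public final class ResilientClientFactory {
        public static OkHttpClient create() {
            return new OkHttpClient.Builder()
                    // Bounded timeouts: a slow mobile network fails fast instead of hanging the flow.
                    .connectTimeout(10, TimeUnit.SECONDS)
                    .readTimeout(20, TimeUnit.SECONDS)
                    .writeTimeout(20, TimeUnit.SECONDS)
                    // Quietly retry after a dropped connection before surfacing an error.
                    .retryOnConnectionFailure(true)
                    .build();
        }
    }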
Error handling
Not every error should spell doom for your mobile app. Unexpected behav-
ior and errors are bound to happen, and sometimes they can be anticipated.
Device differences
Not all devices are created equal -- not just across operating systems, but
even between different generations of the same device. Device specs im-
prove at an exponential pace, and a device from two years ago might not be
able to handle the resources required by your app today. Testing on a wide
range and age of devices and OEMs will go a long way to making sure your
app doesn’t experience unforeseen issues.
Testing
Today’s agile, iterative processes mean that developers are constantly up-
dating their apps with fixes to improve performance or updates that add new
features. Developers are in an ongoing cycle of shipping things and this cycle
comes with a lot of complexity. A single app is touched by multiple external
libraries and APIs as well as capabilities that serve different internal business
needs. With so many inputs, the chances of issues and crashes occurring
increases. Testing every separate component isn’t enough. You need to test
the entire app as a whole to make sure all components are functioning as
expected together.
SIGSEGV
The most common crash users run into on iOS is SIGSEGV. It amounts to
about 50% of all crashes because it is very generic. Broken down, it is a
signal (SIG) sent to interrupt the code when a segmentation violation (SEGV)
occurs. This happens when your app attempts to access memory that has not
been allocated by the program (or memory that has recently been freed).
There are two common causes for the SIGSEGV crash. First, any variable that
has been deallocated then accessed from somewhere else will cause this
crash. Debug this issue by making sure the crash is consistent and trying to reproduce the exact steps that trigger it.
Second, the crash also occurs in a low-memory situation where the device
frees objects to make room for system resources, which can make it difficult
to reproduce. Debug this by using a stack trace to go through the classes and
methods to find out the deallocated object that’s being referenced. A com-
mon cause is a dangling delegate or listener that has been deallocated.
SIGBUS
Similar to SIGSEGV, SIGBUS is also a signal (SIG) that indicates a bus error.
The two are usually mixed up because both involve attempts to access invalid
memory. The main difference is that for SIGSEGV the logical address is invalid,
while for SIGBUS the physical address is invalid. This happens when the
address doesn’t exist at all or has an invalid alignment.
This signal can be sent from most synchronous methods and often can be
found when the code attempts to access a lock.
EXC_CRASH (SIGABRT)
EXC_CRASH is an exception that basically means that the application termi-
nated in an abnormal way. SIGABRT is the most common cause of the EXC_
CRASH exception. It means that there is either an unhandled exception in the
code or that somewhere in the code abort() is being called.
NSInvalidArgumentException
NSInvalidArgumentException is accompanied by the error message: “unrecognized
selector sent to instance”. This is a good indicator of the issue: an
object is being sent a message (a method call) that it does not implement.
This usually means that a method that isn’t recognized by the receiving class
is being called somewhere in your code.
A stack trace can help you pinpoint where the exception is being thrown and
possibly the cause behind it. Start by looking for the unrecognized selector it
names, and make sure that your code isn’t referencing methods the receiving
object doesn’t implement.
java.lang.NullPointerException
This is definitely the most common Android crash. Chances are, if you’ve ever
developed an Android app, you have run into this error. The most common
cause behind NullPointerException is data being referenced after going out of
scope or being garbage collected.
A common scenario is when the app goes to the background and the system
reclaims some memory. This can result in references being lost, so when the
Android onResume() method is called again, accessing them throws a
NullPointerException.
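Here is a minimal Java sketch of that scenario. ProfileActivity, UserSession, and the layout and view IDs are hypothetical; the point is simply that state you don’t own can be gone by the time onResume() runs, so re-acquire or null-check it.

    import android.app.Activity;
    import android.os.Bundle;
    import android.widget.TextView;

    public class ProfileActivity extends Activity {
        private TextView nameLabel;
        private UserSession session; // may be released while the app is in the background

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            setContentView(R.layout.activity_profile);
            nameLabel = (TextView) findViewById(R.id.name_label);
            session = UserSession.getCurrent();
        }

        @Override
        protected void onResume() {
            super.onResume();
            // Calling session.getUserName() here without a check is what throws
            // java.lang.NullPointerException when the reference was lost.
            if (session == null) {
                session = UserSession.getCurrent();
            }
            if (session != null) {
                nameLabel.setText(session.getUserName());
            }
        }
    }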
java.lang.IllegalStateException
This is one of the more common Android crashes that you will likely have
encountered if you’ve dealt with fragments. The IllegalStateException error
can occur from a multitude of sources, but at its core, the cause is usually
mismanagement of Activity states.
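The best-known instance of this crash is committing a fragment transaction after the Activity has already saved its state, for example from an async callback that fires while the app is in the background. A minimal sketch in Java (ResultsActivity and R.id.container are hypothetical):

    import androidx.appcompat.app.AppCompatActivity;
    import androidx.fragment.app.Fragment;

    public class ResultsActivity extends AppCompatActivity {

        // Called from an async callback that may fire after the user has left the screen.
        void showResult(Fragment resultFragment) {
            if (getSupportFragmentManager().isStateSaved()) {
                // A plain commit() here would throw:
                // java.lang.IllegalStateException: Can not perform this action after onSaveInstanceState
                // Either defer the transaction until onResume, or accept possible loss:
                getSupportFragmentManager()
                        .beginTransaction()
                        .replace(R.id.container, resultFragment)
                        .commitAllowingStateLoss();
            } else {
                getSupportFragmentManager()
                        .beginTransaction()
                        .replace(R.id.container, resultFragment)
                        .commit();
            }
        }
    }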
java.lang.IndexOutOfBoundsException
IndexOutOfBoundsException is one of the most common RuntimeExceptions
that you can run into. If you encounter this error, it means that an index of
some sort, usually into an array (but it could be a string or a vector), is out of
range for the collection being accessed, for example a List.
Fixing this issue is relatively simple: it’s always best to check the stack trace
whenever the error occurs. Knowing what caused the crash is a key first step.
After that, it’s a matter of validating the index against the size of the array,
string, or vector before accessing it.
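A minimal Java illustration of the crash and the bounds check that prevents it (the list contents and the requested index are made up for the example):

    import java.util.ArrayList;
    import java.util.List;

    public class IndexExample {
        public static void main(String[] args) {
            List<String> screens = new ArrayList<>();
            screens.add("Home");
            screens.add("Profile");

            int requested = 2; // e.g. an index restored from saved state or a server response

            // screens.get(requested) would throw java.lang.IndexOutOfBoundsException,
            // because the only valid indexes here are 0 and 1.
            String screen = (requested >= 0 && requested < screens.size())
                    ? screens.get(requested)
                    : "Home"; // fall back to a safe default instead of crashing
            System.out.println(screen);
        }
    }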
java.lang.IllegalArgumentException
Probably the most varied and general cause of crashes on the list is the
IllegalArgumentException. Its definition is the simplest of all: an argument
you passed is illegal. That could mean a lot of things, which is why the causes
behind an IllegalArgumentException are so varied. There are, however, a
couple of common ones.
One of the more common causes is trying to access UI elements directly from
a background thread. Another is if you are trying to use a recycled bitmap.
Another common cause is forgetting to add an Activity to the Manifest. This
won’t be caught by the compiler since these are RuntimeExceptions.
To help prevent this crash, focus on making sure castings are always correct.
A lot of errors won’t be caught by the compiler, so you need to be smart
about validating arguments and casts yourself at runtime.
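As a small illustration of catching bad arguments before they crash somewhere harder to trace, here is a hedged Java sketch (RetryPolicy and the config value are hypothetical): validate the value yourself, because the compiler cannot warn you about a RuntimeException.

    public class RetryPolicy {
        private final int maxAttempts;

        public RetryPolicy(int maxAttempts) {
            // Fail fast with a clear message instead of crashing later deeper in the stack.
            if (maxAttempts <= 0) {
                throw new IllegalArgumentException("maxAttempts must be positive, got " + maxAttempts);
            }
            this.maxAttempts = maxAttempts;
        }

        public static void main(String[] args) {
            int configured = readFromConfig(); // e.g. a value from remote config or user input

            // Guard the call site: pass a sane default rather than an illegal argument.
            RetryPolicy policy = new RetryPolicy(configured > 0 ? configured : 3);
            System.out.println("Using " + policy.maxAttempts + " attempts");
        }

        private static int readFromConfig() {
            return -1; // simulate a bad value coming from outside your code
        }
    }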
java.lang.OutOfMemoryError
Last but not least on our list is one of the more frustrating crashes. Android
devices come in all shapes and memory sizes and it can often be the case
that you are dealing with limited resources. OutOfMemoryError occurs when
the OS needs to free up memory for high-priority operations and that is taken
from your app’s heap allocation. Memory management is becoming easier
with newer devices that have a lot more memory. However, not all of your
users will be on these devices.
As for the error itself, one of the biggest causes of memory leaks leading to
OutOfMemoryError is keeping a reference to an object for too long. This can
cause your app to use up a lot more memory than it needs and hit that OS
limit sooner than it should, triggering the crash. One of the more common
culprits is bitmaps, since images can be very large. So be sure to release or
recycle any object you can whenever it is no longer necessary.
You can request more heap memory from the OS by adding to your manifest
file the attribute android:largeHeap="true". However, it’s not advised unless
absolutely necessary since this is still a soft request and could be denied.
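Since oversized bitmaps are such a common culprit, here is a sketch of the standard down-sampling pattern using Android’s BitmapFactory: read the image dimensions first, then decode a version scaled to the size you actually need. The resource ID and target dimensions would come from your own app.

    import android.content.res.Resources;
    import android.graphics.Bitmap;
    import android.graphics.BitmapFactory;

    public final class BitmapLoader {
        // Decode a bitmap scaled down to roughly the requested size instead of
        // loading the full-resolution image into memory.
        public static Bitmap decodeSampled(Resources res, int resId, int reqWidth, int reqHeight) {
            // First pass: read only the image dimensions, no pixel data is allocated.
            BitmapFactory.Options options = new BitmapFactory.Options();
            options.inJustDecodeBounds = true;
            BitmapFactory.decodeResource(res, resId, options);

            // Pick a power-of-two sample size that keeps both dimensions at or above the target.
            int inSampleSize = 1;
            while ((options.outHeight / (inSampleSize * 2)) >= reqHeight
                    && (options.outWidth / (inSampleSize * 2)) >= reqWidth) {
                inSampleSize *= 2;
            }

            // Second pass: decode the actual pixels at the reduced size.
            options.inJustDecodeBounds = false;
            options.inSampleSize = inSampleSize;
            return BitmapFactory.decodeResource(res, resId, options);
        }
    }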
While mobile app testing is absolutely crucial to app quality, it’s a challenging
endeavor and requires a lot of resources and dedication from your team. Ac-
cording to the World Quality Report 2018/19, QA captured more than a quarter
(26%) of IT budgets in 2018. If you ask QA, they will likely tell you that isn’t
nearly enough and they require a lot more resources.
Here are some of the top mobile app testing strategies to help you release
with confidence and make the most out of your QA resources.
The best way to do this is to stop treating QA as its own isolated entity when
the app is ready to be sent to QA. Instead, you should integrate QA into every
process from the beginning so all teams are aligned as to what needs to be
tested and how. This approach will ensure that your QA team knows their test
cases as well as business and functional requirements that need to be met.
Bitbar’s research into the most common mobile devices per country show-
cases just how much geography makes a difference. For example, in the US
market, the top 20 mobile devices, which account for 73.43% of all devices,
consist only of iPhones and Samsung Galaxy phones. This means that if you are plan-
ning on releasing an app to the US market you should prioritize testing on:
• All iPhone variants from the iPhone 6 to the most recent iPhone 11
• All Samsung Galaxy variants from the S7 to the most recent
Looking at India, another huge market for mobile devices, the top 20 devices
tell a different story. They are only 32.16% of the total mobile devices and only
one iPhone is included, in the 19th spot. What dominates the Indian market
is the Chinese Xiaomi brand, not available in the US. If you are planning on
releasing an app to the Indian market you should prioritize testing on:
• Samsung Galaxy variants, especially the J series
• Xiaomi variants, especially the Redmi series
Financial status is another thing to keep in mind about the global audience.
Everyone has access to smartphones now, even in developing countries.
However, cheaper smartphones will likely account for a much larger share of
devices than $800 flagships.
Device testing
One of the toughest tasks facing QA teams is making sure an app is working
across the huge range of devices and OS versions on the market. Emulators
and simulators are often used to cover that range quickly.
This, however, isn’t a perfect solution. Emulators won’t perfectly replicate the
behavior of an actual physical device. And knowing how a device behaves
when not in a test vacuum is also essential since most users will already have
a lot of background processes hogging up memory.
Whenever possible, it’s important that you use emulators in tandem with phys-
ical device testing, especially for popular devices. Setting up a device testing
strategy early in the process to decide how exactly this will unfold can go a
long way to saving testing time.
The QA team needs to make sure that the application is working on every device and OS version it supports.
Battery testing
Battery life is one of the greatest concerns for users. If your app drains the
device battery quickly and noticeably, users will uninstall it. Various apps now-
adays use a lot of battery-intensive processes such as storing and sharing
heavy data, using geo-location, streaming video content, and general memo-
ry consuming processes.
QA testers need to run a lot of battery tests, feature by feature, to know which
parts of the app drain the battery the most. This is a huge concern from both
the business and QA perspective and needs to be planned for accordingly.
Security testing
The more data being sent back and forth with apps the more security be-
comes a primary concern. A study by IBM on the financial impact of data
breaches found that the cost of a data breach on average is more than $2.5
million.
Security breaches have become more common in news headlines and the
stats for mobile app security are abysmal. It’s critical that you plan security
testing early on, check for any data leakages, and make sure web data isn’t being exposed or stored insecurely.
Automated testing
Automated testing is one of the biggest future trends in mobile app testing.
Automated testing allows for test cases to be scripted and reused many times
over across iterations. Automated runs also take a fraction of the time that
manual testing does. This works particularly well for the very tedious, repetitive test cases that
testers have to go through one by one. Automated testing saves a lot of QA
resources and manpower even though the initial investment might be heavy.
Look to integrate automated testing into your mobile app testing to maximize
your output and reduce time to delivery as much as possible. Note that you
should only integrate automated testing where applicable and not replace
manual testing completely. The technology is just not there yet for automated
tests to cover everything needed. Manual testing isn’t going away anytime
soon but an automated helping hand is useful.
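To make “automating the repetitive cases” concrete, here is a minimal UI test sketch using Espresso, Android’s standard instrumentation testing library. LoginActivity and the view IDs are hypothetical placeholders for your own screens.

    import static androidx.test.espresso.Espresso.onView;
    import static androidx.test.espresso.action.ViewActions.click;
    import static androidx.test.espresso.action.ViewActions.typeText;
    import static androidx.test.espresso.assertion.ViewAssertions.matches;
    import static androidx.test.espresso.matcher.ViewMatchers.withId;
    import static androidx.test.espresso.matcher.ViewMatchers.withText;

    import androidx.test.ext.junit.rules.ActivityScenarioRule;
    import androidx.test.ext.junit.runners.AndroidJUnit4;
    import org.junit.Rule;
    import org.junit.Test;
    import org.junit.runner.RunWith;

    @RunWith(AndroidJUnit4.class)
    public class LoginScreenTest {

        @Rule
        public ActivityScenarioRule<LoginActivity> rule =
                new ActivityScenarioRule<>(LoginActivity.class);

        // A tedious manual check (type, tap, verify) that a machine can repeat on every build.
        @Test
        public void login_showsWelcomeMessage() {
            onView(withId(R.id.username)).perform(typeText("demo"));
            onView(withId(R.id.login_button)).perform(click());
            onView(withId(R.id.welcome_label)).check(matches(withText("Welcome, demo!")));
        }
    }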
Figuring out the best ways to test and make sure that users get the best ex-
perience should always be a priority for mobile teams. But how do you mea-
sure how much of your app should be tested? And how do you test more of
your app without delaying your release? This guide is designed to help you
maximize test coverage for your mobile app using some commonly adopted
strategies.
Aiming for 100% test coverage would be ideal, but that isn’t entirely possible.
Testing every possible aspect of your app, and testing on every device and
OS version supported, would set your release back indefinitely. There is no
“correct” percentage of test coverage you’re supposed to hit since every app
is different, but 70% is considered a good number to aim for.
Now that we’ve defined what test coverage is, let’s look at the ways to im-
prove it.
Critical components
As we’ve already mentioned, you’re not going to be able to hit 100% test
coverage of your app. While planning, set a realistic goal and prioritize testing
the components that are most critical to your users and your business.
Code coverage
Code coverage is a measure of how much of your code is executed during
testing, whereas test coverage is a measure of how much of your app’s
feature set is covered with tests. Code coverage is usually seen as the
alternative to test coverage, but they serve different purposes.
Trying to maximize both is again ideal, but resource consuming. It’s important
to work with developers to pinpoint how you can increase code coverage
as much as possible. Increasing your code coverage will in turn strengthen
your test coverage by assuring the quality of the code underneath. The
goal isn’t to use one or the other, but to try to increase code coverage as
much as possible in service of test coverage.
Test automation
Test automation is becoming an indispensable part of testing and QA. There
are plenty of repetitive tests that can be performed with automation and per-
formed quickly as opposed to manual testers repeatedly doing them. This
depends on your test requirements, but utilizing test automation properly will
help you increase test coverage while saving you a lot of time. It’s important
to remember that test automation should not be considered an all-encom-
passing replacement to manual testing, but as an assisting tool to cover more
repetitive and mundane test cases whenever possible.
Manual testing
A focus on increasing test coverage can come at the cost of the
quality of the tests themselves. We can never overstate the importance of getting
your app tested “in the wild”. Apps behave differently when they’re fighting
for RAM with other apps on a weak device, or struggling with spotty network
connectivity, and so it’s important to test your app in a variety of environments.
Depending on your testing and release plan, this can be done with alpha and
beta releases, rigorously testing on real devices, and dogfooding your app
with colleagues. Instrumenting your app with an SDK like Instabug that auto-
matically captures crashes, makes bugs super easy to report, and sends you
a plethora of background data will dramatically accelerate your testing and
debugging processes.
Innovations like 5G and AI are transforming the world of mobile app testing.
However, while those technologies have yet to reach mass adoption, soft-
ware performance engineering is becoming the norm.
For mobile teams big and small, it’s important to know the differences be-
tween performance engineering and performance testing. In this article, we
will take a high level look at both categories and highlight the differences be-
tween each. We’ll also discuss why companies are making the shift and how
to make the best out of both.
Performance tests usually come in the form of scripts that check for bottle-
necks in the application. In the software development lifecycle, there are var-
ious phases: gathering requirements, architecture and design, development,
testing, and release. Software performance testing has input in two of the
phases: requirements and testing. The testing phase itself breaks down into
smaller sections that follow a similar pattern to the overall app development
lifecycle.
Crash reporting
Probably the biggest indicator of your app’s performance is its stability. An
app crashing too often is a surefire way to lose users for good. Since crashes
specifically are fatal errors and completely stop the user from using the app,
they can become a huge issue quickly. It’s important to measure your app’s
stability score and to keep constant track of it. One of the best ways to moni-
tor your app’s health is to have a strong crash reporting tool like Instabug.
Bug reporting
It is impossible to avoid bugs altogether. Similar to crashes, the best way to
handle them is to be able to quickly detect, diagnose, and fix them before
they affect a lot of users. Instabug’s bug reporting tool makes it as easy as
possible for your testers and users to report bugs and as easy as possible for
your developers to understand the problem and fix it. This is crucial for both
internal testing and for live apps.
And while APM and crash reporting are important to app quality, they’re miss-
ing a critical dimension: the voice of your users. With bug reporting, you com-
plete the picture by collecting real user feedback, from which you can learn
tons of valuable insights.
Instabug
For the best results, use Instabug’s APM, Crash Reporting, and Bug Report-
ing together to have a complete picture of your app’s quality, performance,
and stability. Each tool is essential on its own, and when combined, the sum
is even greater than its parts, and you’ll have all the data and information you
need to build and maintain a high quality app.