
Abstract

This study aims to determine how programming language and design patterns impact the performance of a real-time system. Two KNN simulations obtained from Github.com, both designed and developed in Java, were selected in order to evaluate the effectiveness of the Java coding framework. A thorough investigation was undertaken to understand the simulations and restructure them using proper concurrency principles. Performance testing was then carried out using the Java Microbenchmark Harness (JMH) and the JConsole monitoring tool. JMH is used to evaluate the throughput and average execution time of the simulations before and after refactoring.

1. Introduction

Real-time systems are becoming increasingly relevant in many areas, particularly in industrial settings. They are typically used in circumstances where numerous events have to be accepted and processed within a short amount of time. In layman's terms, a real-time system is a time-constrained system built for real-time activities and distinguished by its ability to respond almost instantaneously. Such systems are commonly run on a real-time operating system (RTOS). The goal of an RTOS is to execute many tasks concurrently, or switch between them, while preserving precise timing and excellent dependability. It is used in real-time applications, which demand data processing to be finished within a certain, limited duration. Response time is essential for real-time systems, and to attain respectable timeliness, the buffer delay between input and output must be reduced. If the real-time system is unable to react within the given time constraints, it will cause trouble. This forces the real-time operating system to respond quickly to outside events, sometimes in a split second; the time between a CPU receiving an interrupt request and beginning to execute the interrupt handler is referred to as interrupt latency. Therefore, priority scheduling is included in an RTOS to track the programs' execution and guarantee that the system fulfils any important deadlines. Processes fall into two priority categories: low and high. In an RTOS scheduler, when a high-priority task is ready to execute, it pre-empts, or suspends, any lower-priority task that is already running. This allows the higher-priority task to execute first, while the lower-priority task waits until the higher-priority task has completed.

If the timing criteria are not met in a soft real-time system, the system will tolerate a given amount of delay without failing; in other words, failure to accomplish the tasks within the allotted time will not significantly affect the system. A hard real-time system does not have the same tolerance for missed task deadlines, because missing them has serious consequences and leads to system failure. Both soft and hard systems are frequently used in a variety of applications, including memory management, planetary rovers, aircraft sensors, and more.

In this study, the K-Nearest Neighbours (KNN) algorithm was implemented in two Java simulations to produce appropriate outputs and perform classification based on the supplied data. In the first simulation, the KNN algorithm determines the closest neighbour based on the least distance and the time devoted to classifying the distances in the provided dataset. The second simulation incorporates the algorithms needed to calculate the distance between the destination and source using the Euclidean and Manhattan distances in KNN. The distances to all the training samples are then computed and sorted in ascending order; the samples are categorised, and the majority class among the closest neighbours determines the prediction value. The results of the simulations with varied design patterns will be studied and compared to see how the design pattern affects the real-time system's performance, which will be useful in determining how design patterns impact real-time system performance in general.

This research paper is organised into a few sections. The following section provides a literature review of real-time limitations and considerations, the consequences of programming language and design, and a discussion of performance assessment through benchmarking and testing methods. The methodology section then gives an overview of the simulations, along with the benchmarking techniques and tools used. The results and discussion section outlines a summary of the results, conclusions, and extensive explanations of the findings for both simulations. A conclusion of the comprehensive research study is included at the end of this paper.

2. Literature Review

Zambrano et al. defined a real-time system (RTS) as a system whose behaviour or performance is determined by the passage of time, as the data that enters is continuously processed until results are produced (Zambrano et al., 2017). One of these systems' fundamental requirements is that their reaction time be constrained and predictable, which is advantageous for handling interruptions or events that happen within a predetermined window of time (Hahm et al., 2021). In other words, the idea of real-time systems stresses the need for quick, low-latency service or task execution; when there is a slow response to a request, a system is said to have failed (Siewert & Pratt, 2016). In addition to meeting timing specifications, a real-time system is made to switch between processes quickly and run many applications at once (Williams, 2021). Furthermore, multitasking is essential for a real-time system to handle several programmes running on a computer system at once while also controlling the execution time of each of these programmes' processes (Costa et al., 2019). Therefore, memory management is a crucial component to consider in order to multitask in a real-time system (Shah, 2018). Ilhan et al. claim that systems can switch between main and secondary memory.

Depending on the expected results and application, real-time systems can be categorised into two commonly known classes of constraint: soft and hard real-time (Siewert & Pratt, 2016). Although meeting deadlines is crucial in a soft real-time system, there is some leeway for short delays from missed deadlines; missed deadlines seldom have disastrous repercussions, and the system usually remains operational (Thakur, 2013). Nonetheless, the performance of a soft real-time system deteriorates and becomes less deterministic as response times grow longer. Applications of soft real-time systems include online transaction systems, webpage loading, video streaming, and user interfaces. In contrast, deadlines are strictly enforced in hard real-time systems, which implies that a task must always be fulfilled, with no flexibility for missing a deadline within the specified period (Nower et al., 2015). This is the most limiting situation, as failure to meet the deadlines indicates that the system has entirely failed (Thakur, 2013).
Figure 1: Soft vs hard real-time

Java Real-Time Extension is an object-oriented programming language that also takes advantage of procedural language features. According to Sonar & Lande (2018), Java is a high-level, object-oriented programming language that supports a variety of systems and permits greater programming abstraction in real-time development.

Figure 2: The JVM program execution in Java (Wilston, 2019)

3. Methodology

3.1. Overview of the simulations

Two simulations utilising the Java programming language were selected for this study to determine how system performance relates to design principles in data analytics. These two simulations, obtained from Github.com, are classification tools that use the K-Nearest Neighbours (KNN) method to produce outcomes depending on the specified data. The simulations involve identifying the closest neighbours based on the time it takes to categorise the data and on the distance measurements computed over the dataset.
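
For reference, the sketch below shows the two distance metrics the simulations rely on. It is a minimal illustration with assumed names (Distances, euclidean, manhattan), not code taken from the simulations themselves.

```java
// Minimal sketch of the two KNN distance metrics; assumes equal-length
// feature vectors represented as plain double arrays.
public final class Distances {
    private Distances() {}

    // Euclidean distance: square root of the sum of squared differences.
    public static double euclidean(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            double d = a[i] - b[i];
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    // Manhattan distance: sum of absolute differences per dimension.
    public static double manhattan(double[] a, double[] b) {
        double sum = 0;
        for (int i = 0; i < a.length; i++) {
            sum += Math.abs(a[i] - b[i]);
        }
        return sum;
    }
}
```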

The first stage is benchmarking the original code in various modes, namely application throughput and average execution time. A micro-benchmark suite is run as part of the benchmarking process to examine the initial system performance of the two selected Java simulations. To increase the effectiveness and speed of the real-time system, the simulations are then refactored using concurrency techniques. After suitable concurrency principles have been applied to the simulations' architecture, a second benchmark is conducted to evaluate the differences and improvements over the original simulations. Finally, the outcomes and conclusions of the comparative analysis are presented.

3.2. Implementation of Method

These simulations are built with the Java programming language and Maven, using the Apache NetBeans IDE 12.6 and JDK 17.0.1. Apache Maven is a Java build and project-management tool based on the Project Object Model (POM). A POM file containing a few dependencies is therefore added to the project in order to use Maven to convert the simulations into real-time Java applications. The POM files containing the dependency declarations for the corresponding simulations of this research are shown in the image below.

Figure 3: POM files with dependencies in both simulations

3.2.1 Runnable Interface Implementation


For this refactoring, both simulation algorithms are restructured into concurrent simulations that can be run in real-time using the Runnable and Callable interfaces. The Runnable interface (java.lang.Runnable) should be implemented by any class whose instances are intended to be executed by a thread; when a thread runs such an instance, the class's run() method is invoked (Kurular, 2021). To use it, the programmer must first create a Runnable implementer and override the run() method, which takes no parameters and has a return type of void. Typically, the run() method of the independently running thread is invoked whenever a member of a class that implements Runnable launches a thread (Bhatnagar, 2019). The Runnable interface is the most straightforward way to implement threads in concurrent programming.

Figure 4: Implementation of the Runnable interface in Simulation 1

Figure 5: Implementation of the Runnable interface in Simulation 2
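
Since the refactoring follows the standard Runnable pattern, a minimal sketch is given below. The class name KnnRunnableTask and the inlined distance computation are illustrative assumptions, not the simulations' actual source.

```java
import java.util.List;

// Sketch of a Runnable worker that classifies one query point against a
// training set. A Runnable cannot return a value, so the result would have
// to be written to a shared structure or, as here, simply reported.
public class KnnRunnableTask implements Runnable {
    private final double[] query;
    private final List<double[]> trainingSet;

    public KnnRunnableTask(double[] query, List<double[]> trainingSet) {
        this.query = query;
        this.trainingSet = trainingSet;
    }

    @Override
    public void run() {
        double best = Double.MAX_VALUE;
        for (double[] sample : trainingSet) {
            double sum = 0;
            for (int i = 0; i < query.length; i++) {
                double d = query[i] - sample[i];
                sum += d * d;
            }
            best = Math.min(best, Math.sqrt(sum)); // keep nearest distance
        }
        System.out.println("Nearest distance: " + best);
    }
}

// Usage: new Thread(new KnnRunnableTask(query, trainingSet)).start();
```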

3.2.2 Callable Interface Implementation

The Callable interface is then implemented. The Callable interface produces threads that are more intricate and sophisticated than those created by the Runnable interface: unlike Runnable, Callable allows a thread to return a result after it has finished. This is because it makes use of generics, enabling a Callable to return any object type (Edureka, 2021). The Executor Framework offers a submit() method that allows Callable implementations to be executed over a number of threads. Based on the availability of threads in the pool, the Executor Framework distributes work (runnable targets) to those threads; when all threads are active, the job is queued. After finishing a task, a thread returns to the pool as a ready thread to accept new tasks (Edureka, 2021). In short, Callable is comparable to Runnable, except that Callable can return an object from the task's result. The example below illustrates this pattern.
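
The example is a minimal sketch under assumed names (KnnCallableTask, with a placeholder workload), not the simulations' actual code:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of a Callable KNN task: the nearest-neighbour distance is
// returned to the caller via a Future, which Runnable cannot do.
public class KnnCallableTask implements Callable<Double> {
    private final double[] query;
    private final List<double[]> trainingSet;

    public KnnCallableTask(double[] query, List<double[]> trainingSet) {
        this.query = query;
        this.trainingSet = trainingSet;
    }

    @Override
    public Double call() {
        double best = Double.MAX_VALUE;
        for (double[] sample : trainingSet) {
            double sum = 0;
            for (int i = 0; i < query.length; i++) {
                double d = query[i] - sample[i];
                sum += d * d;
            }
            best = Math.min(best, Math.sqrt(sum));
        }
        return best; // the task result, retrievable through Future.get()
    }

    public static void main(String[] args) throws Exception {
        // The Executor Framework distributes submitted tasks over the pool.
        ExecutorService pool = Executors.newFixedThreadPool(4);
        Future<Double> result = pool.submit(new KnnCallableTask(
                new double[]{1.0, 2.0}, List.of(new double[]{3.0, 4.0})));
        System.out.println("Nearest distance: " + result.get());
        pool.shutdown();
    }
}
```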

3.2.3 Thread Management


Thread management is essential in Java multithreading programming to ensure that the necessary activities are carried out in a synchronised way, without threads interfering with one another. Several synchronisation tools are available to handle the threads in a real-time system, including CountDownLatch, Semaphore, and CyclicBarrier. They are frequently used in applications that have several threads and require waiting for each thread to reach a shared execution point (Baeldung, 2021a). Both CountDownLatch and Semaphore are used in this study. A CountDownLatch is typically initialised with a count and used to start a series of threads and let them run their course, with each thread calling countDown() as it completes. Threads that call await() block until the count reaches zero, at which point they all resume at the same time.
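
A minimal sketch of this pattern, with placeholder worker bodies rather than the simulations' actual tasks, might look like this:

```java
import java.util.concurrent.CountDownLatch;

// Sketch of the CountDownLatch pattern: the main thread blocks on await()
// until every worker has called countDown().
public class LatchExample {
    public static void main(String[] args) throws InterruptedException {
        int workers = 3;
        CountDownLatch done = new CountDownLatch(workers);

        for (int i = 0; i < workers; i++) {
            int id = i;
            new Thread(() -> {
                System.out.println("Worker " + id + " finished");
                done.countDown(); // decrement the latch on completion
            }).start();
        }

        done.await(); // block until the count reaches zero
        System.out.println("All workers finished");
    }
}
```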

In contrast, a Semaphore is used to limit the number of concurrent threads that are using a resource. A semaphore tracks a set of permits: each acquire() call waits for a permit to become available before taking it, and each call to release() adds a permit back, potentially unblocking a blocked acquirer. The count on a semaphore therefore rises and falls as different threads call acquire() and release() (Baeldung, 2021b).
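
Again as a minimal sketch with placeholder workloads, the acquire/release cycle described above might look like this:

```java
import java.util.concurrent.Semaphore;

// Sketch of the Semaphore pattern: at most two threads hold a permit
// (i.e. use the shared resource) at any one time.
public class SemaphoreExample {
    public static void main(String[] args) {
        Semaphore permits = new Semaphore(2);

        for (int i = 0; i < 5; i++) {
            int id = i;
            new Thread(() -> {
                try {
                    permits.acquire(); // wait until a permit is available
                    System.out.println("Thread " + id + " using the resource");
                    Thread.sleep(100); // simulated work
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    permits.release(); // return the permit, unblocking a waiter
                }
            }).start();
        }
    }
}
```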

3.3. Performance Analysis

The Java Microbenchmark Harness (JMH) is used in the performance analysis as a micro-benchmarking tool to gauge the simulations' performance. The performance metrics for this study are average execution time, throughput, and graphical monitoring of CPU and heap memory utilisation using Java profiling tools. Throughput analysis is essential to determine a real-time system's capacity to carry out or finish a certain number of operations in a given amount of time; a poor throughput score usually signifies excessive delay as a result of insufficient memory or poor memory management. Average time analysis, as its name suggests, gauges how long a benchmark method typically takes to complete one execution. Because execution times can vary, the average time analysis gives the developer a rough idea of how long the benchmark method typically takes to complete. For thread management, Simulation 1 uses Semaphore and Simulation 2 uses CountDownLatch. Regarding the JMH micro-benchmarking settings, there are 5 warm-up iterations and 10 real measurement iterations for each simulation. The warm-up iterations are essential to guarantee that the JVM is in a stable condition and to prevent underperformance. For better measurement, the time unit is set to microseconds and the fork value is set to 1. The benchmark annotation code used to gauge the performance of the research's simulations is shown in the figures below.
Figure 6: Benchmark annotation for Simulation 1 & 2.

Figure 7: Benchmark execution for Simulation 1 & 2.
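
As a rough sketch, the settings described above (5 warm-up iterations, 10 measurement iterations, one fork, microsecond units) map onto JMH annotations roughly as follows; the benchmark body is a placeholder, not the simulations' actual KNN code. Building this requires the jmh-core and jmh-generator-annprocess dependencies in the POM.

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.*;

// Sketch of JMH benchmark annotations matching the settings above.
@BenchmarkMode({Mode.Throughput, Mode.AverageTime})
@OutputTimeUnit(TimeUnit.MICROSECONDS)
@Warmup(iterations = 5)
@Measurement(iterations = 10)
@Fork(1)
@State(Scope.Benchmark)
public class KnnBenchmark {

    @Benchmark
    public double knn() {
        // Placeholder workload; the real benchmark would invoke the
        // simulation's KNN classification here.
        double sum = 0;
        for (int i = 1; i <= 100; i++) {
            sum += Math.sqrt(i);
        }
        return sum;
    }

    // Launches the JMH runner over all annotated benchmarks.
    public static void main(String[] args) throws Exception {
        org.openjdk.jmh.Main.main(args);
    }
}
```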

JConsole is also used as an additional monitoring tool to examine how many resources the Java applications, and the JVM itself, use. With this graphical monitoring tool, the researcher can keep a careful eye on the simulations' runtime behaviour, including CPU and heap memory utilisation, garbage collection activity, and more. An example visualisation output from the JConsole overview is shown in the picture below.

4. Results and Discussion

This section evaluates the real-time system performance of the two separate simulations and discusses their design patterns using the JMH micro-benchmarking tool and JConsole. The JMH benchmark tool is used to examine the performance of the two chosen simulations, each of which used a different design pattern: the original simulation structure and the refactored simulation with both a Runnable interface and a Callable interface. Throughput and average execution time are examined for these simulations. The figures and tables below present the outcomes.
Figure 8: JMH Benchmark – Simulation 1

Figure 9: JMH Benchmark – Simulation 1

Table 1: JMH Benchmarking result for Simulation 1

Table 1 above shows the Throughput and Average Time benchmark results for Simulation 1. The researcher completed the benchmark for this simulation using microseconds (µs). To provide more accurate and objective findings, the warm-up iteration count was set to 5, followed by 10 real benchmark iterations, for every benchmark in Simulation 1. Based on the benchmarking results, the Callable interface outperformed both the Runnable interface and the original code in the refactored simulation in terms of throughput and average execution time. Specifically, Callable achieved the highest rate of 4.114 operations per microsecond with an error of 0.037; in other words, 4.114 operations could be finished in a single microsecond. Runnable, on the other hand, achieved a throughput rate of 4.060 ops/µs with an error of 0.032. This demonstrates that the Callable interface performs better in Simulation 1, with the Runnable interface lagging behind by a very small margin. The lowest throughput rate, 3.910 ops/µs with a 0.081 margin of error, was recorded by the initial simulation, which had no refactored classes.
JMH Benchmark – Simulation 1

Mode: Throughput (ops/µs)

Benchmark      Cnt   Score   Error
KNN            10    3.910   ± 0.081
KNN_Runnable   10    4.060   ± 0.032
KNN_Callable   10    4.114   ± 0.037

Mode: Average Time (µs/op)

Benchmark      Cnt   Score   Error
KNN            10    0.315   ± 0.035
KNN_Runnable   10    0.318   ± 0.031
KNN_Callable   10    0.273   ± 0.017

According to the JMH benchmark results, the Callable interface recorded the lowest average time at 0.273 µs per operation. The original simulation averaged 0.315 µs per operation, closely followed by the Runnable interface at 0.318 µs per operation. The Callable interface beat the other two in the average time performance study for Simulation 1 because it could complete jobs with the fastest response time. Based on this comparison, the researcher concluded that the upgraded simulation employing concurrency concepts was able to operate at a reduced latency level. Figures 8 and 9 illustrate, respectively, the throughput and average time performance results from the Apache NetBeans IDE.
Figure 10: JMH Benchmark – Simulation 2

Figure 11: JMH Benchmark – Simulation 2


Table 2: JMH Benchmarking result for Simulation 2

JMH Benchmark – Simulation 2

Mode: Throughput (ops/µs)

Benchmark      Cnt   Score   Error
KNN            10    0.244   ± 0.019
KNN_Runnable   10    0.251   ± 0.004
KNN_Callable   10    0.249   ± 0.006

Mode: Average Time (µs/op)

Benchmark      Cnt   Score   Error
KNN            10    6.023   ± 0.218
KNN_Runnable   10    5.910   ± 0.221
KNN_Callable   10    4.403   ± 0.409

Table 2 above shows the JMH benchmark results after the researcher ran Simulation 2 with 5 warm-up iterations and 10 real iterations. Starting with the throughput results, Runnable clearly had the highest score at 0.251 operations per microsecond with a 0.004 error, which was superior to the original simulation code's throughput rate of 0.244 ops/µs. Put differently, the Runnable interface completed 0.251 operations every microsecond. The Callable interface scored 0.249 ops/µs with a 0.006 error. As a result, the researcher concluded that Runnable and Callable both outperformed the original simulation, with the Runnable interface showing a slightly higher throughput rate. In the average time study, the Callable interface recorded the lowest (best) result at 4.403 µs per operation, with a 0.409 margin of error. The Runnable interface, at 5.910 µs per operation, was likewise faster than the original simulation code, whose average execution time was 6.023 µs per operation. Therefore, the researcher concluded that the upgraded simulation employing the Callable interface performed fastest in terms of response time in the average time performance study of Simulation 2.
5. Conclusion

The performance of the two selected simulations, each of which employed a distinct design pattern, was examined using the JMH benchmark tool. These design patterns were the initial simulation structure and the refactored simulation with a Runnable interface and a Callable interface, and throughput and average execution time were analysed for each; the results were presented in the figures and tables above. Based on the findings of the research, both testing tools have highlighted the design pattern's importance. To find the appropriate design pattern for any real-time system, it is proposed that developers build several prototypes of the system. The Runnable and Callable interfaces did not differ significantly in terms of performance or resource use; the concurrent design patterns were largely responsible for the differences observed, because they had a significant influence on the simulations' overall memory management and CPU use, which cannot be fully explained by the types of concurrency principles utilised alone. Future research should be conducted to ascertain the additional effects of design patterns and to examine other aspects of enhancing real-time systems in greater detail.

References

Dahiya, P. (2019, August 14). Real Time Systems - GeeksforGeeks. GeeksforGeeks. https://www.geeksforgeeks.org/real-timesystems/

Davis, R. I., Cucu-Grosjean, L., Bertogna, M., & Burns, A. (2016). A review of priority assignment in real-time systems. Journal of Systems Architecture, 65, 64–82. https://doi.org/10.1016/j.sysarc.2016.04.002

Edureka. (2021, October 25). How to Implement Callable Interface in Java. Edureka. https://www.edureka.co/blog/callableinterface-injava/

Kurular, Ö. (2021, August 11). Java - Runnable vs Callable. Javarevisited. https://medium.com/javarevisited/javarunnable-vs-callable-786aa706775d

Laaber, C., Würsten, S., Gall, H. C., & Leitner, P. (2020). Dynamically reconfiguring software microbenchmarks: Reducing execution time without sacrificing result quality. Proceedings of the 28th ACM Joint Meeting on European Software Engineering Conference and Symposium on the Foundations of Software Engineering.

Thakur, D. (2013, January 23). What is the Real Time System? Difference between Hard and Soft Real-Time Systems. Computer Notes.
