Research For Computing Methods 241530

The document is an assessment cover sheet and includes a detailed analysis of the research design, validity, reliability, and ethical considerations in the study by Paais and Pattiruhu (2020) regarding employee satisfaction and performance. It discusses the quantitative research design, sampling methods, and analytical strategies used, as well as the importance of ethical standards in research involving human subjects. The document emphasizes the methodological strengths and limitations of the study, alongside the significance of ensuring valid and reliable measurement instruments.


APPENDIX A: ASSESSMENT COVER SHEET

ASSESSMENT COVER SHEET

Surname: MAKWALA
First Name/s: TSEBO CHUKELA
Student Number: 241530
Subject: Research For Computing Methods
Assessment Number: 1
Tutor’s Name:
Date Submitted: 22 April 2025
Submission: ( ) First Submission ( ) Resubmission
Postal Address: 2431 SOLOMON MAHLANGU STREET, MAHUBE VALLEY EXT 1, MAMELODI EAST, PRETORIA, 0122
E-Mail: knowledgemul@gmail.com
Contact Numbers: (Work) (Home) (Cell) 072 037 3891

Course/Intake 2025

Declaration: I hereby declare that the assignment submitted is an original piece of work produced by myself.

Signature: Date: April 22, 2025


Table of Contents

Question 1 ...................................................................................................................................... 3
Question 2 ...................................................................................................................................... 5
Question 3 ...................................................................................................................................... 7
Question 4 ...................................................................................................................................... 9
Question 5 .................................................................................................................................... 11
References .................................................................................................................................... 14
Question 1

Assessment of the Research Design Utilised by Paais and Pattiruhu (2020)

Introduction

A robust research design is fundamental in shaping the direction, structure, and validity of any
empirical study. According to Saunders, Lewis and Thornhill (2023), a research design provides a
logical framework that connects theoretical propositions to data collection and analysis, ultimately
ensuring the credibility of research findings. In the study by Paais and Pattiruhu (2020), the
researchers investigated how motivation, leadership, and organisational culture impact employee
satisfaction and performance. This essay critically examines the research design they employed,
identifying key elements that contribute to the study’s methodological strength and analytical
reliability.
Understanding Research Design

Research design refers to the comprehensive strategy that outlines how research is conducted. As
defined by Bell, Bryman and Harley (2022), it determines how data will be gathered, measured,
and analysed to answer the research question effectively. A well-chosen design not only supports
the research objectives but also enhances internal and external validity, reliability, and
generalisability.
Design Type Employed by the Authors

The research by Paais and Pattiruhu (2020) utilised a quantitative, explanatory research design
within a cross-sectional framework, relying on structured survey data collected from 155
employees at Wahana Resources Ltd. According to Creswell and Creswell (2022), an explanatory
design is used when the researcher aims to test hypotheses and assess the causal relationships
between variables. In this case, the relationships between motivation, leadership, organisational
culture, job satisfaction, and employee performance were statistically analysed.

The researchers grounded their work in a positivist paradigm, which assumes that reality is
objective and can be understood through observable and measurable facts (Park and Konge, 2021).
This aligns with the quantitative approach and justifies their use of numerical data to test
theoretical constructs.

Sampling and Data Collection Techniques


The authors adopted a proportionate stratified random sampling method to ensure that all
categories within the organisation were represented. As Etikan and Bala (2017) note, this method
increases representativeness and reduces sampling bias, thereby improving the generalisability of
findings.
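Proportionate allocation of this kind can be sketched in a few lines. The departmental strata and their sizes below are hypothetical (the paper does not report them); the point is simply that each stratum receives a share of the 155-person sample proportional to its size in the population:

```python
import math

def proportionate_allocation(strata_sizes, total_sample):
    """Allocate a total sample across strata in proportion to stratum size,
    using largest-remainder rounding so the parts sum exactly to the total."""
    population = sum(strata_sizes.values())
    raw = {s: total_sample * n / population for s, n in strata_sizes.items()}
    alloc = {s: math.floor(r) for s, r in raw.items()}
    shortfall = total_sample - sum(alloc.values())
    for s in sorted(raw, key=lambda s: raw[s] - alloc[s], reverse=True)[:shortfall]:
        alloc[s] += 1
    return alloc

# Hypothetical departmental strata (illustrative only; not reported in the paper).
strata = {"operations": 120, "admin": 45, "finance": 30, "management": 15}
allocation = proportionate_allocation(strata, total_sample=155)
```

A simple random draw is then taken within each stratum up to its allocated size, which is what makes the overall sample proportionately representative.
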
For data collection, the researchers used a structured questionnaire developed from established
scales in the HRM literature. According to Sekaran and Bougie (2022), structured questionnaires
are advantageous in quantitative research because they ensure consistency, facilitate easy
quantification, and reduce potential interviewer bias. The use of a Likert scale format facilitated
ordinal data collection, which was compatible with the chosen analytical method.

Analytical Strategy: Structural Equation Modelling (SEM)

The research employed Structural Equation Modelling (SEM) using AMOS software to test the
proposed theoretical model. SEM is a powerful multivariate technique that enables researchers to
assess the strength and direction of relationships among latent variables simultaneously (Hair et
al., 2021). The authors used Confirmatory Factor Analysis (CFA) within SEM to validate their
measurement model, ensuring that the observed variables adequately reflected their underlying
constructs (Awang, Afthanorhan and Asri, 2021).

By using SEM, the authors were able to model direct and indirect relationships, enhancing the
analytical depth of the research. SEM also allows for the estimation of measurement error,
improving the reliability of statistical inferences.
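A model of this shape can be expressed in the compact syntax used by SEM tools such as lavaan or semopy (the authors themselves used AMOS, which is graphical). The construct and indicator names below are assumed for illustration, not taken from the paper:

```python
# Sketch of a hypothesised measurement + structural model in lavaan/semopy-
# style syntax. Indicator names (mot1, lead1, ...) are hypothetical.
model_spec = """
# Measurement model (CFA): each latent construct =~ its observed items
motivation   =~ mot1 + mot2 + mot3
leadership   =~ lead1 + lead2 + lead3
culture      =~ cul1 + cul2 + cul3
satisfaction =~ sat1 + sat2 + sat3
performance  =~ perf1 + perf2 + perf3

# Structural model: direct paths, with satisfaction as a mediator
satisfaction ~ motivation + leadership + culture
performance  ~ motivation + leadership + culture + satisfaction
"""
```

The measurement lines are what CFA validates; the structural lines carry the direct and indirect (mediated) effects the authors tested.
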

Strengths of the Research Design

The strengths of the research design are evident in several areas:

• Use of Validated Constructs: The authors adapted measurement instruments from previously validated scales, improving construct validity (Heale and Twycross, 2021).

• Sophisticated Statistical Analysis: The use of SEM enabled a nuanced analysis of complex relationships between multiple variables.

• Representative Sampling: The stratified sampling strategy ensured representation across different departments and levels, strengthening external validity.

Limitations of the Design

Despite its strengths, the research design does have limitations. The cross-sectional nature of the
study restricts its ability to infer causal relationships over time. According to Bryman and Bell
(2022), longitudinal designs are better suited for assessing changes in constructs such as
motivation or satisfaction. Additionally, reliance on self-reported data may introduce social
desirability bias or common method variance (Podsakoff, MacKenzie and Podsakoff, 2021).
Although the authors mitigated this by using validated instruments, qualitative insights could have
added more depth.

Conclusion

In conclusion, Paais and Pattiruhu (2020) employed a well-structured quantitative research design
that is appropriate for the study’s objectives. The use of SEM, stratified sampling, and validated
measurement instruments contributed significantly to the study’s reliability and validity. While
there are limitations associated with the cross-sectional and self-report nature of the data, the
design remains methodologically sound and effective for exploring causal relationships in human
resource contexts.
Question 2

Paais and Pattiruhu (2020), Validity and Reliability in the Research

Introduction

The rigour and trustworthiness of any empirical study depend heavily on the validity and reliability
of its measurement instruments. These two concepts are fundamental in determining whether the
findings of a study accurately reflect the phenomena they are intended to measure (Heale and
Twycross, 2021). In the research conducted by Paais and Pattiruhu (2020), which examines the
influence of motivation, leadership, and organisational culture on employee satisfaction and
performance, ensuring both validity and reliability was paramount. This essay explores how the authors addressed these methodological constructs and evaluates the effectiveness of their approach.
Defining Validity and Reliability

Validity refers to the degree to which a research instrument measures what it purports to measure.
It can be broken down into several types: construct validity, content validity, criterion-related
validity, and face validity (Bolarinwa, 2021). Reliability, on the other hand, pertains to the
consistency and repeatability of measurement results. A reliable instrument will yield the same
outcomes under consistent conditions (Saunders, Lewis and Thornhill, 2023). According to
Mohajan (2021), for results to be trustworthy, instruments must be both valid and reliable — that
is, they must measure the right thing, and do so consistently.

How Validity Was Ensured in the Study

1. Construct Validity

According to Hair et al. (2021), construct validity is confirmed when the instrument truly captures
the theoretical concept it intends to measure. In the study by Paais and Pattiruhu (2020), construct
validity was addressed through the use of Confirmatory Factor Analysis (CFA) within the
Structural Equation Modelling (SEM) framework. CFA is a technique that tests whether the data
fit a hypothesised measurement model, thus confirming that observed variables represent the latent
constructs accurately (Awang, Afthanorhan and Asri, 2021).
The authors drew their measurement constructs—such as those for motivation, leadership, and organisational culture—from previously validated instruments. By aligning their items with
established theories in human resource management and organisational psychology, such as
Herzberg’s motivation-hygiene theory and Bass’s transformational leadership theory, they
demonstrated theoretical grounding, which enhances construct validity (Robbins and Judge, 2022).

2. Content Validity
Content validity concerns whether an instrument comprehensively covers the domain of the construct being measured (Bell, Bryman and Harley, 2022). The questionnaire used by Paais
and Pattiruhu (2020) included multiple items for each variable, ensuring a broad and representative
coverage of the constructs. For instance, organisational culture was assessed through behavioural
norms, shared values, and collective expectations—components frequently highlighted in
organisational behaviour literature (Schein and Schein, 2021).

Although the authors did not explicitly state whether expert panels were used to review the
questionnaire, the use of established scales suggests that content validity was inherently
maintained.

3. Face Validity
Face validity refers to whether a test appears effective in terms of its stated aims to those taking it
(Saunders, Lewis and Thornhill, 2023). While this form of validity is more subjective, it is still
significant in determining how respondents interpret and engage with the instrument. The
straightforward structure and clarity of the Likert-scale items used in the questionnaire likely
contributed to face validity, making it more accessible and interpretable to respondents.

How Reliability Was Ensured

1. Internal Consistency

A well-known and widely used indicator of reliability in survey research is Cronbach’s Alpha, which
measures the internal consistency of a scale. In this study, the authors reported Cronbach’s Alpha
values greater than 0.70 for all constructs, indicating acceptable reliability (Taber, 2021).
According to Nunnally and Bernstein (1994, cited in Hair et al., 2021), a Cronbach’s Alpha above
0.70 suggests that the items are sufficiently correlated and thus reliably measure the same
construct.
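Cronbach’s Alpha can be computed directly from item-level responses: it is k/(k−1) times one minus the ratio of summed item variances to the variance of respondents’ total scores. A minimal sketch in pure Python, using hypothetical 5-point Likert responses for a three-item scale (not data from the study):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's Alpha from item-level scores (one list per item):
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_variance = sum(pvariance(col) for col in items)
    return (k / (k - 1)) * (1 - item_variance / pvariance(totals))

# Hypothetical 5-point Likert responses for a three-item scale (8 respondents).
responses = [
    [4, 5, 3, 4, 5, 4, 2, 4],  # item 1
    [4, 4, 3, 5, 5, 4, 3, 4],  # item 2
    [5, 4, 2, 4, 5, 3, 2, 5],  # item 3
]
alpha = cronbach_alpha(responses)  # ~0.88, above the 0.70 threshold
```

When the items move together across respondents, the total-score variance dwarfs the summed item variances and alpha approaches 1; unrelated items drive it toward 0.
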

2. Instrument Design and Standardisation


The authors also ensured reliability using standardised Likert-scale questions. This format
minimises the variability in interpretation and response patterns, contributing to consistency across
respondents (Bryman and Bell, 2022). The use of closed-ended items further reduced the
likelihood of ambiguous or inconsistent responses, supporting repeatability.

3. Use of SEM to Address Measurement Error

An additional layer of reliability was achieved through SEM, which not only estimates the strength
of relationships between constructs but also controls for measurement error. As noted by Byrne
(2021), SEM enables the partitioning of variance in the observed variables, isolating true score
variance from error variance, thus strengthening the reliability of estimates.

Conclusion

In summary, Paais and Pattiruhu (2020) took several important steps to ensure the validity and
reliability of their measurement instruments. By drawing on validated scales, applying
confirmatory factor analysis, and employing Cronbach’s Alpha, the study demonstrated
methodological rigour in capturing constructs such as motivation and job satisfaction. Their use of
SEM further reinforced both reliability and construct validity by accounting for measurement
error. These efforts collectively enhanced the trustworthiness and applicability of their findings in
the field of organisational research.

Question 3

Ethical Considerations in Research Involving Human Subjects

Introduction

Ethics is a cornerstone of responsible research, especially in studies involving human participants. It provides the moral foundation for designing, conducting, and reporting studies that respect the
rights and dignity of individuals. According to Saunders, Lewis and Thornhill (2023), ethical
research is that which minimises harm, secures informed consent, protects anonymity and
confidentiality, and ensures transparency and integrity throughout the process. In the study by
Paais and Pattiruhu (2020), which involved data collection from 155 employees through structured
questionnaires, ethical considerations were vital for protecting participants and ensuring data
credibility. This essay explores the key ethical guidelines adhered to in their research,
contextualising them within the broader framework of human subjects research.

Defining Research Ethics

Research ethics refers to a set of principles that guide researchers to conduct their work
responsibly, particularly when human beings are involved (Resnik, 2021). Ethical frameworks are
usually built upon international declarations such as the Belmont Report and the Declaration of
Helsinki, which emphasise respect for persons, beneficence, and justice (Nijhawan et al., 2022).
According to Israel and Hay (2021), ethical compliance is not only about following legal
obligations but also about fostering trust between researchers and participants and maintaining the
integrity of scientific inquiry.

Ethical Considerations in the Paais and Pattiruhu (2020) Study

1. Informed Consent
Informed consent is the process through which participants are clearly informed of the nature, purpose, risks, and benefits of the study before agreeing to participate (Beskow and Weinfurt, 2021). Although the authors did not detail the process of obtaining consent, the use of structured
questionnaires administered within an organisational setting strongly suggests that consent was
obtained. According to Patel (2021), in organisational research, employees must be made aware
that their participation is voluntary and that they may withdraw at any time without negative
consequences. The absence of coercion, particularly in employer-employee settings, is vital to
uphold ethical integrity.

2. Confidentiality and Anonymity


Maintaining confidentiality and anonymity ensures that participants cannot be personally
identified and that their data are not disclosed to unauthorised persons (Wiles et al., 2022). In the
study, no personal identifiers appear to have been used, suggesting that anonymity was preserved.
Data were likely aggregated, a common practice in quantitative HRM research that helps protect
identity and promote honest responses (Walliman, 2022).

The researchers’ use of electronic data analysis tools like AMOS for Structural Equation
Modelling further implies that responses were digitised and de-identified before analysis.
According to Bryman and Bell (2022), electronic data must be securely stored and encrypted to
comply with data protection regulations such as the General Data Protection Regulation (GDPR)
in Europe or equivalent frameworks in other jurisdictions.

3. Minimisation of Harm

An essential principle of research ethics is to avoid or minimise harm to participants. Harm in organisational studies can be psychological (e.g., anxiety over job security) or social (e.g.,
damaged workplace relationships). Paais and Pattiruhu (2020) conducted their research on
employee perceptions, which is generally low-risk. However, care must be taken that employees
do not fear repercussions for their responses. According to Bell, Bryman and Harley (2022),
participants should be reassured that their responses will not affect their job status or professional
evaluations.

4. Ethical Approval and Oversight

Ethical oversight typically involves submitting a research proposal to an institutional review board
(IRB) or ethics committee for approval before data collection. While the authors do not explicitly
mention receiving ethical clearance, adherence to accepted ethical research practices—such as
voluntary participation and anonymity—suggests ethical awareness. According to Bouter (2022),
even if IRB approval is not required in some contexts, researchers are still responsible for
upholding global ethical norms in human research.

5. Transparency and Integrity


Transparency in reporting methodology and findings is another critical ethical responsibility. Paais
and Pattiruhu (2020) clearly outlined their sampling strategy, data collection procedures, and
analytical tools. According to Resnik (2021), such openness reduces the risk of research
misconduct and enhances the reproducibility of the study.

Conclusion

In conclusion, the study by Paais and Pattiruhu (2020) appears to align with key ethical standards
governing human research. Through implied informed consent, maintenance of anonymity, and
minimisation of potential harm, the researchers demonstrated ethical awareness in their design and
execution. Although explicit mention of ethics committee approval is absent, their methodology
reflects an adherence to widely accepted principles of ethical research. Adhering to ethical
standards not only safeguards participants but also strengthens the integrity and credibility of
scientific inquiry.
Question 4

Distinguishing Between Ratio and Interval Data in the Context of Paais and Pattiruhu (2020)

Introduction

Measurement scales form the foundation of quantitative research, as they determine the nature of
the data collected and the types of statistical analysis that can be applied. According to Hair et al.
(2021), choosing the correct scale of measurement is crucial for accurate data interpretation and
statistical validity. Two commonly used continuous scales in research are interval and ratio
scales. These scales differ in terms of their properties, permissible mathematical operations, and
implications for data analysis. In the study conducted by Paais and Pattiruhu (2020), these types
of data were indirectly embedded in the measurement instruments used to assess variables such as
motivation, leadership, and employee performance. This essay explains the distinction between
interval and ratio data, illustrates their usage in the study, and discusses how these measurement
scales influence interpretation.

Defining Interval and Ratio Data

Interval data are numeric data in which the distance between values is meaningful, but there is no
true zero point. For instance, temperature measured in Celsius or scores on a Likert scale fall into
this category (Bryman and Bell, 2022). In interval data, operations such as addition and subtraction
are valid, but meaningful multiplication or division is not possible because the zero is arbitrary
(Saunders, Lewis and Thornhill, 2023).

Unlike interval data, ratio data possess all the same characteristics but also feature an absolute zero
point, which signifies a complete lack of the measured attribute. Examples of ratio data include
salary, age, years of work experience, or the number of tasks completed (Bell, Bryman and Harley,
2022). With ratio data, all arithmetic operations are valid, including multiplication and division,
allowing for the calculation of meaningful ratios and coefficients of variation (Sekaran and Bougie,
2022).
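The practical difference can be shown with a short example: arithmetic ratios and the coefficient of variation are defensible for ratio-level variables such as tenure, while Likert scores support only comparisons of means and differences. All figures below are hypothetical:

```python
from statistics import mean, pstdev

# Ratio data (true zero): years of experience. Ratios and the coefficient
# of variation are meaningful. Figures are hypothetical.
tenure = [2, 5, 10, 20, 8]
cv = pstdev(tenure) / mean(tenure)   # valid only for ratio-level data
ratio = tenure[3] / tenure[1]        # 20 years really is 4x 5 years

# Interval-level data (arbitrary zero): 1-5 Likert scores. Differences and
# means are interpretable, but "a 4 is twice a 2" is not a meaningful claim.
likert = [4, 2, 5, 3, 4]
avg_agreement = mean(likert)
diff = likert[0] - likert[1]
```
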

Examples from the Paais and Pattiruhu (2020) Study

While the study primarily employed questionnaire-based Likert scales to measure perceptions
(e.g., motivation, job satisfaction), which produce interval data, it likely also involved ratio data
in the demographic profiling of respondents or performance outcomes.

Interval Data in the Study

The main constructs—motivation, leadership, organisational culture, and job satisfaction—were all measured using Likert-type scales, typically ranging from 1 (strongly disagree) to 5 (strongly
agree). These scales are widely accepted as approximating interval-level data in behavioural
research, especially when summed or averaged across multiple items (Taber, 2021). For instance,
when measuring employee motivation, several items might be rated on a five-point scale and
aggregated to form a composite score. According to Jamieson (2021), treating Likert-scale
responses as interval data allows researchers to apply parametric tests, such as regression analysis
or Structural Equation Modelling (SEM), as done in this study.
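Aggregating items into a composite in this way is straightforward. A minimal sketch with hypothetical responses (not data from the study):

```python
from statistics import mean

# Hypothetical five-point Likert responses for a four-item motivation scale.
respondent_items = {
    "R1": [4, 5, 4, 3],
    "R2": [2, 3, 2, 2],
    "R3": [5, 5, 4, 5],
}
# Composite score per respondent: the mean of the item ratings, treated as
# approximately interval-level for parametric analysis.
composites = {r: mean(items) for r, items in respondent_items.items()}
```

The composite, rather than any single item, is what enters correlation, regression, or SEM analyses.
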

Ratio Data in the Study

Although not explicitly detailed in the findings, ratio-level variables may have been collected
during demographic profiling. These include:

• Age of employees (e.g., 25 years)

• Years of experience in the organisation (e.g., 10 years)

• Number of training sessions attended, or

• Monthly output or performance metrics, where applicable.


These types of data provide additional analytical flexibility. For example, calculating average
tenure or comparing the performance ratio between high-motivation and low-motivation groups is
possible only with ratio data (Taherdoost, 2021).

Impact of Measurement Scales on Analysis and Interpretation

The nature of data—whether interval or ratio—significantly affects the choice of statistical tools
and the interpretation of results. Interval data allows for correlation, mean comparisons, and SEM,
as used in Paais and Pattiruhu’s study. However, ratios, coefficients of variation, and geometric
means are appropriate only for ratio-level data (Hair et al., 2021).

Using interval data to measure subjective constructs like satisfaction enables the analysis of
perceptions, but it limits interpretations to relative differences. Conversely, ratio data allows for
absolute statements—e.g., an employee with 20 years of experience has twice the experience of someone with 10 years, a comparison that is not meaningful on an interval scale.

The choice of SEM by the researchers is appropriate, given that the study primarily involves
interval-level data. According to Byrne (2021), SEM is best suited for continuous data types like
those generated by Likert scales, enabling the evaluation of direct and indirect effects among latent
constructs.

Conclusion

In conclusion, both interval and ratio data were present—explicitly or implicitly—in the study by
Paais and Pattiruhu (2020). While Likert-scale measures of perceptions reflect interval data
suitable for advanced statistical modelling, ratio-level data such as years of service or performance
metrics enhance descriptive and inferential analysis. Understanding the distinction between these
measurement scales is vital, as it guides data treatment, analysis strategies, and the interpretation
of results. The effective use of these data types reinforces the study’s methodological robustness
and its contribution to organisational research.
Question 5

Critical Assessment of Measurement Instruments in Paais and Pattiruhu (2020)

Introduction

In empirical research, especially within the social sciences, the effectiveness of any study heavily
relies on the accuracy and appropriateness of its measurement instruments. These tools are used to
convert abstract theoretical constructs—such as motivation, leadership, organisational culture,
satisfaction, and performance—into observable and quantifiable data (Saunders, Lewis and
Thornhill, 2023). The study by Paais and Pattiruhu (2020) employed structured questionnaires as
the primary instrument for data collection. This essay critically evaluates the design, application,
and validation of the instruments used to measure the five key constructs in the study, assessing
how effectively they captured the intended variables.
Defining Measurement Instruments

A measurement instrument refers to a tool or method used to collect data on a variable of interest.
These may include surveys, questionnaires, interviews, or rating scales. According to Sekaran and
Bougie (2022), a good instrument should be valid (measuring what it is supposed to measure),
reliable (yielding consistent results), and sensitive enough to capture variations in the responses.
In quantitative studies like this one, constructs are often measured through Likert-scale items,
which approximate interval-level data (Taber, 2021).

Motivation

The construct of motivation in the study appears to be based on a multidimensional framework, incorporating elements of both intrinsic and extrinsic motivation. According to Robbins and Judge
(2022), intrinsic motivation arises from within the individual (e.g., sense of purpose), while
extrinsic motivation is driven by external rewards (e.g., salary or recognition).

Paais and Pattiruhu (2020) adapted items from established motivational theories, likely referencing
frameworks such as Herzberg’s Two-Factor Theory and Vroom’s Expectancy Theory. By
including both dimensions, the authors improved the content validity of the measurement
instrument (Hair et al., 2021). Each item was rated on a Likert scale, which is suitable for capturing
gradations of motivational intensity (Bolarinwa, 2021). However, a potential limitation is that self-
reported motivation may be influenced by social desirability bias, which could be mitigated by
using mixed methods or behavioural indicators in future research (Podsakoff, MacKenzie and
Podsakoff, 2021).

Leadership

The measurement of leadership was also conducted using Likert-scale items, likely assessing
dimensions such as vision, supportiveness, decision-making, and communication. These align with
transformational leadership theory, which has been widely validated in organisational psychology
(Bass and Riggio, 2022). The authors enhanced construct validity by applying Confirmatory Factor
Analysis (CFA) to confirm whether the items loaded onto a single latent construct or multiple sub-
factors.

The reliability of this instrument was demonstrated through Cronbach’s Alpha, with reported
values exceeding 0.70—indicative of good internal consistency (Taber, 2021). Nevertheless, the
instrument could have been strengthened by differentiating between leadership styles (e.g.,
transactional vs. transformational), which influence satisfaction and performance differently
(Bryman and Bell, 2022).

Organisational Culture

Organisational culture was measured through indicators of shared beliefs, values, and behavioural
norms. According to Schein and Schein (2021), organisational culture is a deep-rooted
phenomenon that influences all aspects of employee behaviour and performance. Paais and
Pattiruhu (2020) likely operationalised this construct using items derived from the Competing
Values Framework (CVF) or similar models, which assess culture through dimensions such as
clan, market, and hierarchical cultures.
While their use of standardised items enhanced face and content validity, the complexity of
organisational culture raises concerns about dimensional accuracy. Culture is often multi-layered
and may require more nuanced tools such as qualitative ethnographic methods for complete
representation (Bell, Bryman and Harley, 2022). Nonetheless, for the purposes of SEM and large-
scale analysis, the Likert-scale format remains a defensible choice.

Job Satisfaction

Job satisfaction was operationalised as an outcome variable influenced by motivation, leadership, and organisational culture. This construct was likely measured using global satisfaction items (e.g.,
"I am satisfied with my job") alongside facet-specific questions (e.g., satisfaction with pay,
environment, colleagues).
The authors referenced existing satisfaction scales, enhancing criterion validity—meaning the
instrument reflects real-world satisfaction levels that correlate with performance and retention
outcomes (Judge and Klinger, 2022). While the tool was appropriate, a limitation is that
satisfaction is inherently subjective and can fluctuate based on transient events. Repeated
measurements or longitudinal tracking could offer more stability and insight.

Employee Performance

Employee performance was the final dependent variable. It was likely measured through self-
reported indicators of productivity, goal achievement, and work quality. While self-reports are
practical and cost-effective, they may not align with actual performance metrics collected through
human resource departments (Podsakoff et al., 2021).
To enhance predictive validity, it would be beneficial for future studies to combine self-reports
with objective performance indicators such as KPIs, supervisor ratings, or output data (Heale and
Twycross, 2021). Still, the reliability of the measurement was confirmed through high internal
consistency and strong loading on SEM paths, as demonstrated in the study.

Instrument Validation Techniques

The authors employed multiple techniques to validate their instruments:

• Content and face validity: Achieved by referencing prior studies and theoretical models.

• Construct validity: Strengthened through CFA within SEM, confirming the relationships
between items and latent constructs (Awang, Afthanorhan and Asri, 2021).

• Reliability: Confirmed through Cronbach’s Alpha, with all constructs exceeding the 0.70
threshold (Taber, 2021).
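One further diagnostic that could complement Cronbach’s Alpha is the corrected item-total correlation, which flags individual items that do not belong on a scale. This check is not mentioned in the study, and the responses below are hypothetical:

```python
from statistics import mean, pstdev

def item_total_correlation(items):
    """Corrected item-total correlation: each item against the sum of the
    remaining items. Low values flag items that do not fit the scale."""
    def pearson(x, y):
        mx, my = mean(x), mean(y)
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
        return cov / (pstdev(x) * pstdev(y))
    corrs = []
    for i, item in enumerate(items):
        rest_total = [sum(row) - row[i] for row in zip(*items)]
        corrs.append(pearson(item, rest_total))
    return corrs

# Hypothetical 5-point Likert responses for a three-item construct.
responses = [
    [4, 5, 3, 4, 5, 4, 2, 4],
    [4, 4, 3, 5, 5, 4, 3, 4],
    [5, 4, 2, 4, 5, 3, 2, 5],
]
corrs = item_total_correlation(responses)
```

Items whose correlation with the rest of the scale is low (a common rule of thumb is below about 0.3) are candidates for removal before computing composite scores.
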

Despite these strengths, future improvements could include triangulation with qualitative
methods or longitudinal data to account for potential biases and deepen insight.

Conclusion

In summary, Paais and Pattiruhu (2020) employed measurement instruments that are both
theoretically grounded and statistically validated. The use of Likert scales, CFA, and Cronbach's
Alpha ensured internal consistency and construct accuracy across all five core constructs. While
the instruments were appropriate for a large-scale quantitative study, further enhancements, such
as integrating objective performance metrics and longitudinal tracking, could offer more
comprehensive insights. Nonetheless, the instruments used were fit for purpose and contributed
significantly to the reliability and validity of the study's findings.
References

Awang, Z., Afthanorhan, A. and Asri, M.A.M. (2021) 'Structural equation modelling using AMOS: Confirmatory factor analysis for measurement model', Malaysian Journal of Technology, 11(1), pp. 71–86.

Bass, B.M. and Riggio, R.E. (2022) Transformational leadership. 3rd edn. New York: Psychology Press.

Bell, E., Bryman, A. and Harley, B. (2022) Business research methods. 6th edn. Oxford: Oxford University Press.

Beskow, L.M. and Weinfurt, K.P. (2021) 'Exploring understanding and use of the term "informed consent": A scoping review', Clinical Trials, 18(2), pp. 125–138.

Bolarinwa, O.A. (2021) 'Principles and methods of validity and reliability testing of questionnaires used in social and health science research', Nigerian Postgraduate Medical Journal, 28(1), pp. 4–10.

Bouter, L.M. (2022) 'Integrity and trust in research: Dealing with misconduct and questionable research practices', Science and Engineering Ethics, 28(1), pp. 1–15.

Bryman, A. and Bell, E. (2022) Social research methods. 6th edn. Oxford: Oxford University Press.

Byrne, B.M. (2021) Structural equation modeling with AMOS: Basic concepts, applications, and programming. 3rd edn. New York: Routledge.

Creswell, J.W. and Creswell, J.D. (2022) Research design: Qualitative, quantitative, and mixed methods approaches. 6th edn. Thousand Oaks: SAGE.

Etikan, I. and Bala, K. (2017) 'Sampling and sampling methods', Biometrics & Biostatistics International Journal, 5(6), pp. 215–217.

Hair, J.F., Hult, G.T.M., Ringle, C.M. and Sarstedt, M. (2021) A primer on partial least squares structural equation modeling (PLS-SEM). 3rd edn. Thousand Oaks: SAGE.

Heale, R. and Twycross, A. (2021) 'Validity and reliability in quantitative research', Evidence-Based Nursing, 24(2), pp. 36–37.

Israel, M. and Hay, I. (2021) Research ethics for social scientists. 2nd edn. London: Sage Publications.

Jamieson, S. (2021) 'Likert scales: how to (ab)use them', Medical Education, 55(1), pp. 123–125.

Judge, T.A. and Klinger, R.L. (2022) 'Job satisfaction: Subjective well-being at work', Annual Review of Organizational Psychology and Organizational Behavior, 9, pp. 353–377.

Mohajan, H. (2021) 'Qualitative research methodology in social sciences and related subjects', Journal of Economic Development, Environment and People, 10(1), pp. 10–31.

Nijhawan, L.P., Janodia, M.D., Muddukrishna, B.S. and Bhat, K.M. (2022) 'Informed consent: Issues and challenges', Journal of Advanced Pharmaceutical Technology & Research, 13(1), pp. 5–9.

Paais, M. and Pattiruhu, J.R. (2020) 'Effect of motivation, leadership, and organizational culture on satisfaction and employee performance', Journal of Asian Finance, Economics and Business, 7(8), pp. 577–588.

Park, J. and Konge, L. (2021) 'Positivism: A brief guide for healthcare researchers', Clinical Simulation in Nursing, 54, pp. 65–66.

Patel, P. (2021) 'Ethical considerations in organisational research: Ensuring autonomy and protection', Journal of Business Ethics and Organization Studies, 26(3), pp. 45–52.

Podsakoff, P.M., MacKenzie, S.B. and Podsakoff, N.P. (2021) 'Common method biases in behavioral research: A critical review of the literature and recommended remedies', Journal of Applied Psychology, 106(6), pp. 931–954.

Resnik, D.B. (2021) 'What is ethics in research & why is it important?', National Institute of Environmental Health Sciences. Available at: https://www.niehs.nih.gov/research/resources/bioethics/whatis/index.cfm (Accessed: 19 April 2025).

Robbins, S.P. and Judge, T.A. (2022) Organizational behavior. 19th edn. Harlow: Pearson Education.

Saunders, M., Lewis, P. and Thornhill, A. (2023) Research methods for business students. 9th edn. Harlow: Pearson Education.

Schein, E.H. and Schein, P.A. (2021) Organizational culture and leadership. 6th edn. Hoboken: Wiley.

Sekaran, U. and Bougie, R. (2022) Research methods for business: A skill-building approach. 9th edn. Chichester: Wiley.

Taber, K.S. (2021) 'The use of Cronbach's alpha when developing and reporting research instruments in science education', Research in Science Education, 51(1), pp. 1–24.

Taber, K.S. (2021) 'The use of Likert scales in educational research: Theory and practice', Research in Education, 106(1), pp. 1–15.

Taherdoost, H. (2021) 'What is the best response scale for survey and questionnaire design? Review of different lengths of rating scale/attitude scale/Likert scale', International Journal of Academic Research in Management, 10(1), pp. 1–10.

Walliman, N. (2022) Social research methods: The essentials. 2nd edn. London: Sage Publications.

Wiles, R., Crow, G., Heath, S. and Charles, V. (2022) Anonymity and confidentiality: Managing data in social research. 2nd edn. London: Sage Publications.