
Technology audit model by Mohammad et al.

Mohammad et al. (2010) have proposed a model for assessing technological capability in
R&D centers whose main activity is developing technologies: ‘‘Regarding the unique
specifications of R&D organizations in the progress of a firm, industry or country, applying
an appropriate model to assess their technological capability is essential. These models should
concentrate on factors such as employees, ideas and their implementation as well as
organizational culture and its impact on organization’s function. However, most of the models
of technology capability assessment don’t pay enough attention to these factors. These models
are complicated; their implementation is time-consuming and needs a lot of analysis. The
final outcome of some of these models is a general and compound index that specifies the
organization’s current condition and the gap between current and ideal situation without
representing its reasons. So it seems necessary to present a comprehensive model specialized
for R&D centers to comply their needs. This model should be simple and results easily and
rapidly’’ (ibid., 4-5).
In the proposed model for technology capability assessment in R&D centers, capability is
assessed at both the macro and the micro level.
The indicators used at the macro level ‘‘evaluate issues that are common between all
innovative organizations. These indicators are evaluated in the whole organization and
include:
(i) The position of innovation in the organization,
(ii) Knowledge management and importance of knowledge acquisition,
(iii) The position of innovation in developing strategies,
(iv) Learning,
(v) Team working,
(vi) Training’’ (ibid., 5).
Assessing these indicators using a descriptive questionnaire can provide an analysis of the
‘‘innovation culture in the organization’’.
At the micro level, the technological capability of an R&D organization is assessed. This is
based on the separate evaluation of each of an R&D centre’s main activities, which are divided
into 4 main groups; based on these groups, 4 types of capabilities can be defined for assessment:
(i) Capability of internal development of technologies,
(ii) Capability of technology development via cooperative R&D,
(iii) Capability of performing basic research,
(iv) Capability of providing consultation services to industry.
Mohammad et al. (ibid., 5) point out that an R&D centre’s main activities can be defined
differently and different types of capabilities customized accordingly.
Mohammad et al. (ibid., 6) have identified numerous indicators as appropriate for
technology capability assessment in R&D centers. These indicators are divided into 6 groups:
(i) Human resource indicators,
(ii) Equipment indicators,
(iii) Knowledge management and communication indicators,
(iv) Management indicators,
(v) Marketing and sales indicators,
(vi) Achievements indicators.
‘‘In order to assess technology capability of R&D centers, each technological area should be
evaluated separately’’ (ibid., 6). The model is then implemented by applying a scoring table:
each of the 4 types of capabilities is scored against each indicator. Scores range from 1
(= very weak) to 5 (= very good). The scoring table then shows the gap between the current
situation and the ideal one. Next, a weighting table is used to assign relative weights to the
indicators. Based on the scoring and weighting tables, the final score of each type of
capability is calculated. A scale for these final scores is then defined, and finally, based on
the calculated scores and the scale, technological capabilities in a specific technological area
can be identified: ‘‘Therefore the status of each technological area in several types of capabilities
will be specified. By analyzing these three tables, we can find out which reasons cause
capabilities to be weak, mediocre or good’’ (ibid., 8).
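The scoring-and-weighting step described above amounts to a weighted sum per capability type. A minimal sketch follows; the indicator groups echo the list given by Mohammad et al., but the weights and raw scores are purely hypothetical illustrations, since the model does not state how weights are assigned.

```python
# Sketch of the scoring-and-weighting calculation described in the text.
# Indicator groups follow Mohammad et al. (2010); the weights and scores
# are hypothetical, since the model does not state how they are derived.

indicators = [
    "human resources",
    "equipment",
    "knowledge management and communication",
    "management",
    "marketing and sales",
    "achievements",
]

# Relative weight per indicator group (illustrative values, summing to 1).
weights = {
    "human resources": 0.25,
    "equipment": 0.15,
    "knowledge management and communication": 0.20,
    "management": 0.15,
    "marketing and sales": 0.10,
    "achievements": 0.15,
}

# Scores from 1 (= very weak) to 5 (= very good) for one capability type,
# e.g. capability of internal development of technologies.
scores = {
    "human resources": 4,
    "equipment": 3,
    "knowledge management and communication": 2,
    "management": 5,
    "marketing and sales": 3,
    "achievements": 4,
}

# Final score of this capability type: weighted sum over all indicators.
final_score = sum(weights[i] * scores[i] for i in indicators)
print(round(final_score, 2))
```

In the model, the same weighted sum would be computed once per capability type, yielding one final score for each of the 4 capabilities in a given technological area.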
Comment: This model applies to R&D centers whose main activity is developing
technologies (where a great deal of inventiveness, project work and R&D management is
needed) and not to other organizations, so it is not a generally applicable technological
capability model. It is suggested that technological capability models (for R&D centers)
should concentrate on factors that have some impact on an organization, but not on the
essential elements of technological capability as such. From this it follows that everything,
essential and nonessential, that has some impact or influence on an organization should be
included in the model, so that an arbitrary selection of some factors and exclusion of others
is avoided.
It is also suggested that the model should follow practical considerations of assessment
(simplicity, rapidity, ease) rather than theoretical considerations of what the essential
elements of technological capability are and how to assess them correctly.
However, if everything that has some impact or influence (minor or major) on an
organization is to be included in the model, the model can become very complex, which
would contradict the practical considerations of assessment (simplicity, rapidity, ease) that
the model is supposed to follow.
Assessment of technology capability in R&D centers at the macro level is not a direct
technological capability assessment; it is, more generally, an assessment of invention,
knowledge management, learning and team work. The indicators at this level are very
abstract, since they assess what is common to all innovative organizations. But this general
and abstract level of the model contradicts the presupposition that the model should be
designed specifically for R&D centers and not for other (innovative) organizations. So the
question arises why this macro level of assessment is included in a model of technology
capability assessment at all, if it does not directly assess technological capability.
It is also not clear what kind of questions should follow from these abstract indicators, or
what the principle of deriving questions from the macro level should be. Assessment at the
macro level of the model does not provide a technology capability assessment, but rather an
analysis of inventive culture in an organization.
Technological capability of an R&D organization is assessed only at the micro level. But it is
somewhat inconsistent to propose a technology capability assessment model in which
technology capability is assessed in only one part while in the other(s) it is not. The model
also does not explain how the macro level is connected to the micro level, or how the results
of the macro level relate to the results of the micro level.
It is not clear whether the micro level presupposes that every R&D centre is involved in all
4 of the main groups of activities presented above, or whether an R&D centre can be
involved in only some of them. For example, an R&D centre may carry out only applied
research and internal technology development (capability 1), while it does not develop
technology via cooperative R&D, does not perform basic research and does not provide
consultation services to industry. From the above description of the model it also follows
that research organizations that carry out only basic research cannot be assessed by the
suggested model, since the model presupposes R&D centers that develop technology.
At the micro level of the model (technology capability assessment in R&D centers), numerous
indicators are identified as appropriate for assessing technological capability. However,
none of these indicators is directly related to technology (or to technological capability). The
content of these numerous indicators cannot provide a direct technological capability
assessment, but only a much more general (e.g. managerial, educational, financial,
equipment, communication, marketing, sales) assessment of an organization’s capability.
This is because the model is based on the notion of a factor that has some influence or
impact on an organization (and its technology capability).
The principle of scoring indicators in each technological area (the 4 main groups of R&D
activities above) is based on a quantitative determination (from 1 to 5) of qualitative
determinations (from very weak to very good). The indicators of the model are rated
according to 5 qualitative determinations: from very weak (1) to very good (5). The model
does not specify how this rating process should be performed; the rating may be based on
subjective evaluation or opinions, or on some objective criteria. In the model, it is proposed
that the numbers assigned to the 5 qualitative determinations can be multiplied by the
weights for the indicators (the model does not explain the principle of this weight
assignment or the criteria for it) and then added up, so that an overall score can be
calculated by multiplying and adding up all the individual scores for the indicators. But such
reasoning might be mistaken in that:
(i) Qualitative determinations cannot be multiplied or added up, because multiplication
and addition presuppose the same quality (measure) – ‘‘good’’ cannot be summed
or multiplied with ‘‘very weak’’. Moreover, weak and good do not belong to the same
scale, because the contrast to weak is strong, while the contrast to good is bad.
(ii) The numbers from 1 to 5 are not quantitative determinations of the indicators
but indexing numbers for qualitative determinations – 1 does not
quantitatively determine an indicator in the model but only stands for the qualitative
determination ‘‘very weak’’; therefore, we could just as well select the numbers 44, 45,
46, 47, 48 for the above qualitative determinations and nothing would change.
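Point (ii) can be illustrated numerically: when the weights sum to 1, relabelling the five grades from 1–5 to 44–48 (adding 43 to every score) shifts every capability’s weighted total by the same constant, so the differences and rankings between capabilities are unchanged. The weights and grades below are invented purely for illustration.

```python
# Illustration of point (ii): the numbers 1..5 act as index labels.
# Relabelling them as 44..48 shifts every weighted total by a constant
# (weights sum to 1), leaving differences between capabilities unchanged.
# Weights and grades are hypothetical.

weights = [0.4, 0.3, 0.3]  # weights for three hypothetical indicators

def weighted_total(grades):
    """Weighted sum of indicator grades for one capability type."""
    return sum(w * g for w, g in zip(weights, grades))

# Grades for two capability types under the original 1..5 labelling.
cap_a = [2, 4, 3]
cap_b = [5, 1, 3]

# The same qualitative judgements under a 44..48 labelling (each +43).
cap_a_alt = [g + 43 for g in cap_a]
cap_b_alt = [g + 43 for g in cap_b]

# Absolute totals change, but the difference between capabilities does not.
diff_original = weighted_total(cap_a) - weighted_total(cap_b)
diff_relabelled = weighted_total(cap_a_alt) - weighted_total(cap_b_alt)
print(round(diff_original, 2), round(diff_relabelled, 2))
```

The identical differences show that the chosen numerals carry no quantitative content of their own; they merely index the ordered qualitative grades.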
The model aims at a quantitative technology capability assessment but operates with
qualitative determinations, and in that it may be inconsistent. Its aim is to produce a number
expressing how well an R&D centre is doing with respect to technology capability. But the
question is not only whether such a number is based on correct reasoning, but also what the
practical value of such a rating process is for individual R&D centers. If a complex and
demanding technology capability assessment of a large R&D centre results in only one
number (e.g. 350, whereby the model does not specify whether individual scores for
technological areas can also be weighted and summed into one overall score), then what
basis for the practical activity of an R&D centre does such a number provide, being the result
of an attempt to evaluate an R&D centre quantitatively? What concrete practical measures
can follow from one number (an overall score)? Mohammad et al. (2010, 8) explain that ‘‘by
analyzing these three tables, we can find out which reasons cause capabilities to be weak,
mediocre or good’’. But these reasons refer only to the indicators of capabilities as such, and
not to why these indicators are weak, mediocre or good. An indicator of something is not a
reason for something. From the scoring tables it only follows in which indicators an R&D
centre is weak or good; it does not follow why the centre is weak or good in each particular
indicator, i.e. what the causes of being good or weak are. And because the analysis of these
causes and reasons is absent from the model, concrete practical measures for improving
technological capability cannot follow from one number (an overall score), or from several
of them.
A review of the above three technology audit models shows that there are some
insufficiencies and critical elements in these models. All three models were designed for
application in specific organizations (technology-intensive companies, R&D centers that
develop technology) and not for general application or for theoretical modeling. However, in
none of the above three models could we find what the principle of technology auditing itself
actually is, or how any of the above models is derived from such a principle. We consider the
question of the technology auditing principle a very important one in the theory of MoT,
because the successful development of all specific models designed for auditing individual
organizations depends on it. So before we try to suggest how to overcome the
insufficiencies and critical elements of the above models, we think it is worth first trying to
answer the question of what the essence of technology auditing actually is in the
