
Technology audit model by Rush et al.

Rush et al. (2007, 227-230) presented a technology assessment model based
on the ‘‘attempt to link knowledge about key abilities in technological innovation to states of
development of technological capability that enable a firm to choose and use technology to
create strategic competitive advantage. We have identified nine principal components as
being fundamental to the model. These are:
(i) Initial awareness of the need to change and willingness to begin looking inside and
outside the firm for possible triggers for change.
(ii) Searching out triggers for change – picking up demand signals from the market or
within the firm about the changes needed or picking up signals about potential
opportunities raised by new technological developments.
(iii) Building of core competencies – recognition of requirements for technology through a
systematic and regular audit of its current competencies and a comparison of those that
it needs to develop or acquire in order to become or remain competitive.
(iv) Development from these of a technology strategy – some clear idea of where to
change and why.
(v) The exploration and assessment of the range of technological options available –
making comparisons between all the options available that can be achieved through
some form of benchmarking, feasibility studies, etc. – and selection of the most
appropriate option based upon the comparison.
(vi) Acquisition of the technology.
(vii) Implementation, absorption and operation of the technology within the firm.
(viii) Learning forms an important part of the building of technological competencies and
involves reflecting upon and reviewing technology projects and processes within the
firm, in order to learn from both successes and failures.
(ix) Exploiting external linkages and incentives.’’
Rush et al. (ibid., 228) point out that, using this nine-component framework, a series of
questions can be generated to ‘‘ask firms to help assess their technological capability’’. These
questions are accompanied by corresponding guidance notes. The questions can then allow
identification of the behaviors and routines that contribute to or are necessary for the
development of a firm's technological capabilities.
The above 9 components of the model are incorporated into the technology assessment tool
(questionnaire). ‘‘The audit tool was originally developed to carry out in-depth case studies,
postal questionnaires and rapid face-to-face interview audits’’ (ibid., 228). The aim of such
an audit tool is to assign a score to a company in each of the dimensions of
technological capability.
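Rush et al. do not reproduce the full questionnaire, but the scoring structure they describe – a score in each of the nine dimensions of capability, expressed on a scale corresponding to the four capability levels – can be sketched roughly as follows. This is only an illustrative outline in Python: the dimension names paraphrase the nine principal components and the example scores are invented; none of it is taken from the actual audit tool.

```python
# Illustrative sketch only (not the actual audit tool of Rush et al., 2007):
# nine capability dimensions, each given a score on a four-level scale
# corresponding to the four archetypes. Dimension names are paraphrases.

DIMENSIONS = [
    "awareness", "search", "core competence building", "technology strategy",
    "assessment and selection of options", "acquisition",
    "implementation and absorption", "learning", "external linkages",
]

ARCHETYPES = {1: "unaware/passive", 2: "reactive", 3: "strategic", 4: "creative"}

def audit_profile(scores: dict) -> dict:
    """Map each dimension's 1-4 score to the corresponding archetype label."""
    if set(scores) != set(DIMENSIONS):
        raise ValueError("every one of the nine dimensions needs a score")
    return {dim: ARCHETYPES[scores[dim]] for dim in DIMENSIONS}

# Invented example scores for a single firm:
example_scores = {dim: 2 for dim in DIMENSIONS}
example_scores["learning"] = 3
print(audit_profile(example_scores))
```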
The model and the tool presuppose four different possible states of technological capability for
companies that compete in a market economy. Rush et al. (ibid., 224-227) suggest that ‘‘the
development of technological capability can be seen as a set of ‘punctuated equilibrium’
states. As firms move into more complex environments, they need a richer set of capabilities
to deal effectively with the threats and opportunities that confront them. We discuss this
model in terms of four archetypes that characterize these states:
(i) unaware or passive,
(ii) reactive,
(iii) strategic,
(iv) creative.’’
Comment: Just like Garcia-Arreola’s 1996 model, this model also applies only to companies
that compete in a market economy and whose competition is based on technology
advancement. The model does not apply to other organizations, and so this is not a general
technological capability model. The model presupposes four technological capability
archetypes that are actually a classification of companies (whose competition is based on
technology) with respect to their technological capability, from the technologically least capable
(‘‘passive’’) to the technologically most capable (‘‘creative’’). The classification into these four
archetypes is based on a prior identification of mainly technology-based success factors in a
competitive market economy. These four archetypes and their descriptions include not only
technology-specific but also a few non-technological elements, which makes the archetypes not
purely technology (or technological capability) based but, to some extent, more generally
success-factor based.
The reasoning in this classification and modeling may be circular. The model is supposed to be
based on the ‘‘attempt to link knowledge about key abilities in technological innovation (actually
technology-based success factors in a competitive market economy) to states of development
of technological capability (four archetypes) that enable a firm to choose and use technology
to create strategic competitive advantage.’’ But these key abilities in technological innovation
are actually the above 9 principal components of the model. So the key abilities in technological
innovation (the technology-based success factors in a competitive market economy) are the 9
principal components of the model; these same factors are the basis for the classification that
the model presupposes; and the model (the 9 principal components) is based on an attempt to
link the key abilities in technological innovation (actually the 9 principal components) to the
four archetypes, which are themselves based on technology-based success factors in a
competitive market economy (actually the 9 principal components of the model).
The aim of the model is to determine, by using the above 9 principal components, to which of
the four archetypes an assessed company belongs and also, based on the descriptions of the
principal components, to propose what the company should do with respect to technological
capability if it wants to become more successful in competition (identification of strengths and
weaknesses). ‘‘Identifying archetypes that characterize each of four
‘punctuated equilibrium’ states in the development of technological capabilities, however,
remains an academic exercise of only limited value to policy actors. A means of accurately
locating firms within the framework is still required in order that their strengths and
weaknesses can be identified and appropriate policies and organizational development
strategies are applied’’ (Rush et al., 2007, 227).
Of the 9 principal components, 3 (namely Acquisition of the technology; Implementation,
absorption and operation of the technology; and Exploiting external linkages and incentives)
include some practical instructions about what companies need to do or should do. The model is
thus not purely theoretical; since it also includes some practical elements, it is a combined
theoretical-practical model.
Rush et al. (ibid., 228) say that these 9 principal components ‘‘can map on to a simple model
of technological change over time that involves several stages based upon the four archetypes
described in the previous section. Although, as presented, such a model may appear to be a
linear process, we recognize that there are numerous interactions and feedback loops between
different components.’’ However, it is not explained how these 9 principal components can
map on to a simple model, or what the necessary elements of such a model would be.
Rush et al. (2007) present only a highly simplified, summary version of the full technology
audit tool. ‘‘It can be used for an initial ‘filtering’ of firms and does provide a good indication
of the range of questions covered by the in-depth tool’’ (ibid., 228).
It is not explained how the questions in the audit tool are derived from the model (the 9
principal components); an explanation of the principle behind this ‘‘incorporation’’ of the 9
components into the questionnaire is lacking.
Questions in the audit tool ‘‘call for a subjective assessment of the nine dimensions of
capability /…/ according to the scale in the table (which corresponds to the four levels of
capability)’’ (ibid., 228). So this audit tool can hardly result in an objective assessment of a
company’s technological capability: ‘‘Although scores are assigned that allow for the
positioning of the firm, it is recognized that such scores still represent a subjective process
and some of the capabilities being assessed are, to some degree, intangible – which is why the
explanatory answers and adherence to the guidelines provided are important for retaining
confidence in the tool’s reliability’’ (ibid., 230).
Rush et al. (ibid., 230) point out that ‘‘the short version of the tool not only provides a simple
mechanism for rapidly auditing the capability of individual firms but also a way of
benchmarking the strengths and weaknesses of individual firms against the ‘best-practice’
model defined by creative-type firms. The aim is not to develop precise quantitative
measurements but to rapidly generate a picture of how well the firm performs overall, and key
areas of strength and weakness across the nine dimensions.’’ It follows from this that the
technology assessment nevertheless includes an objective criterion against which companies
are assessed, namely the model defined by creative-type firms. So although the assessment
tool is based on subjective evaluation, it relies on an objective criterion, which may be
inconsistent. The technology assessment is also not a precise quantitative measurement but a
qualitative assessment of how well a company is doing in relation to the technological
capability needed for success in market competition: ‘‘Explanatory answers
to the audit questions are written up to provide a detailed, qualitative assessment for each
firm’’ (ibid., 230).
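The benchmarking idea quoted above – comparing an individual firm's profile with the ‘best-practice’ profile defined by creative-type firms – can be illustrated with a minimal, self-contained sketch. The dimension names, scores and gap threshold below are hypothetical choices made for illustration, not parameters of the original tool.

```python
# Hypothetical illustration of benchmarking a firm's nine-dimension profile
# against a "creative-type" reference profile (level 4 on every dimension).
# The per-dimension gaps give a qualitative picture of strengths and
# weaknesses rather than a single aggregate number.

DIMENSIONS = [
    "awareness", "search", "core competence building", "technology strategy",
    "assessment and selection of options", "acquisition",
    "implementation and absorption", "learning", "external linkages",
]
CREATIVE_REFERENCE = {dim: 4 for dim in DIMENSIONS}

def strengths_and_weaknesses(scores, gap_threshold=2):
    """List dimensions at the reference level and those far below it."""
    gaps = {dim: CREATIVE_REFERENCE[dim] - scores[dim] for dim in DIMENSIONS}
    strengths = [dim for dim, gap in gaps.items() if gap == 0]
    weaknesses = [dim for dim, gap in gaps.items() if gap >= gap_threshold]
    return strengths, weaknesses

firm = dict.fromkeys(DIMENSIONS, 3)            # invented scores for one firm
firm["learning"], firm["acquisition"] = 4, 1
print(strengths_and_weaknesses(firm))          # (['learning'], ['acquisition'])
```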
Nevertheless, scores (quantitative measurements) are used in the audit tool. It is proposed that
the numbers assigned to the above four archetypes (which are all qualitative determinations)
can be added up, so that an overall score is calculated from the individual scores. But such
reasoning might be mistaken in that:
(i) Qualitative determinations cannot be added up, because addition presupposes the same
quality (measure) – ‘‘unaware’’ cannot be summed up with ‘‘creative’’.
(ii) The numbers 1 to 4 above are not quantitative determinations of the archetypes but
index numbers for qualitative determinations – 1 does not quantitatively
determine an archetype but only stands for a qualitative determination ‘‘unaware’’ –
therefore, we could just as well select numbers 44, 45, 46, 47 for the above qualitative
determinations (archetypes) and nothing would change.
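Point (ii) can be made concrete with a small numerical illustration (the nine judgements below are invented): relabeling the very same qualitative judgements with different index numbers changes the sum and the average, which shows that such totals reflect the arbitrary choice of labels rather than any quantity inherent in the archetypes.

```python
# Illustration of point (ii): the archetype indices are labels, not quantities,
# so any sum or average over them depends on the arbitrary choice of labels
# rather than only on the underlying qualitative judgements.

judgements = ["unaware", "reactive", "reactive", "strategic", "reactive",
              "strategic", "reactive", "strategic", "creative"]  # nine dimensions

labels_a = {"unaware": 1, "reactive": 2, "strategic": 3, "creative": 4}
labels_b = {"unaware": 44, "reactive": 45, "strategic": 46, "creative": 47}

total_a = sum(labels_a[j] for j in judgements)   # 22
total_b = sum(labels_b[j] for j in judgements)   # 409
print(total_a / 9, total_b / 9)  # ~2.44 vs ~45.44 for the very same judgements
```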
Although the model aims at a qualitative technology assessment, it operates with
‘‘quantitative’’ determinations, and in that it may be inconsistent. The question is not only
whether such quantitative rating is based on correct reasoning, but also what practical value
such a rating process has for individual companies. If a complex and demanding technology
assessment of a large company results in only one number (e.g. 3.4568), what basis for the
company's practical activity does such a number provide? What concrete practical measures
can follow from one number (an overall score), or from several of them?
Ultimately, the technology audit tool proposed by Rush et al. (2007) is intended to ‘‘provide a
means of assisting policy makers in tailoring support according to the level of capability of
the firm’’ (ibid., 234). However, this tool is only one of many that are required for such an
aim (a successful policy).
