
Vision Systems Design®: Vision and Automation Solutions for Engineers and Integrators Worldwide

SPECIAL REPORT
Machine vision:
The past, the present
and the future
Which particular individuals, companies and organizations, technologies, products and applications have most significantly affected the adoption of machine vision and image processing systems? What can history tell us about the developments that have occurred in the past 50 years, and how will new technologies be leveraged to automate every aspect of the manual tasks now relegated to human operators? Editor in Chief Andy Wilson looks back at the past and at how, in the future, autonomous vision-based machines will be deployed in applications as diverse as food harvesting, medical imaging and autonomous vehicles.

Inside: Keystones of machine vision systems design (page 2); Machine vision: A look into the future (page 21)

REPRINTED WITH REVISIONS TO FORMAT FROM VISION SYSTEMS DESIGN. COPYRIGHT 2015 BY PENNWELL CORPORATION
ORIGINALLY PUBLISHED SEPTEMBER 2013

Keystones of machine vision systems design

Today’s off-the-shelf machine vision components have emerged from research in numerous fields including optics, mathematics, physics and computer design.

Andrew Wilson, Editor

ASK ANYONE WHO HAS EVER designed, purchased, built, installed or operated a machine vision system what they believe to be some of the most significant developments in the field and the answers will be extremely diverse. Indeed, this was just the case when, for this, our 200th Anniversary issue of Vision Systems Design, we polled many of our readers with just such a question.

Which particular individuals, companies and organizations, types of technologies, products and applications did they consider to have most significantly affected the adoption of machine vision and image processing systems?

After reviewing the answers to these questions, it became immediately apparent that the age of our audience played an important part in how their answers were formulated. Here, perhaps, their misconception (although understandable) was that machine vision and image processing were relatively new, dating back just half a century. In his book “Understanding and Applying Machine Vision,” however, Nello Zuech points out that the concepts of machine vision are evident as far back as the 1930s, when Electronic Sorting Machines (then located in New Jersey) offered food sorters based on specific filters and photomultiplier detectors.

While it is true that machine vision systems have only been deployed for less than
a century, some of the most significant inventions and discoveries that led to the
development of such systems date back far longer. To thoroughly chronicle this, one
could begin by highlighting the development of early Egyptian optical lens systems
dating back to 700 BC, the introduction of punched paper cards in 1801 by Joseph
Marie Jacquard that allowed a loom to weave intricate patterns automatically or
Maxwell’s 1873 unified theory of electricity and magnetism.

To encapsulate the history of optics, physics, chemistry, electronics, computer and mechanical design into a single article would, of course, be a monumental task. So, rather than take this approach, this article will examine how the discoveries and inventions of early pioneers have shaped more recent developments such as solid-state cameras, machine vision algorithms, LED lighting and computer-based vision systems.

Along the way, it will highlight some of the people, companies and organizations
that have made such products a reality. This article will, of necessity, be a rather more
personal and opinionated piece and, as such, I welcome any additions that you feel
have been omitted.

How our readers voted

Vision Systems Design’s marketing department received hundreds of responses to our questionnaire asking which companies, technologies and individuals have made the most important contributions to the field of machine vision. Not surprisingly, many of the companies our readers deemed to have made the greatest impact have existed for twenty years or more (Figure 1).


FIGURE 1: When asked which companies had made the greatest impact on machine vision over the past twenty years or more, 30% of our readers chose Cognex. (Pie chart: Cognex 30%, Matrox 13%, Teledyne DALSA 10%, Keyence 9%, Sony 7%, Point Grey 5%, National Instruments 4.5%, FLIR 4.5%, MVTec 4.5%, Edmund Optics 4%, Truesense Imaging 4%, Basler 4%.)

Of these, Cognex (Natick, MA; www.cognex.com) was mentioned more than any other company, probably due to its relatively long history, established product line and large installed customer base. Formed in 1981 by Dr. Robert J. Shillman, Marilyn Matz and William Silver, the company produces a range of hardware and software products including VisionPro software and its DataMan series of ID readers.

When asked what technologies and products have made the most impact on machine vision, readers’ answers were rather more diverse (Figure 2). Interestingly, the emergence of CMOS image sensors, smart cameras and LED lighting, all relatively new developments in the history of machine vision, were recognized as some of the most important innovations.

Capturing images
Although descriptions of the pin-hole camera date back to as early as the 5th century BC, it was not until about 1800 that Thomas Wedgwood, the son of a famous English potter, attempted to capture images using paper or white leather treated with silver nitrate. Following this, Louis Daguerre and others demonstrated that a silver-plated copper plate exposed under iodine vapor would produce a coating of light-sensitive silver iodide on the surface, with the resultant fixed plate producing a replica of the scene.

FIGURE 2: Relatively new developments in CMOS imagers, smart cameras and LED lighting were deemed to be the most important technological and product innovations. (Pie chart: CMOS imagers 24%, smart cameras 19%, LED lighting 19%, IR cameras 12%, high-speed interfaces 12%, vision software 3.5%, GigE Vision 3.5%, Bayer filter 3.5%.)

These developments were followed in the mid-19th century by others, notably Henry Fox Talbot in England, who showed that paper impregnated with silver chloride could be used to capture images. While this work would lead to the development of a multi-billion dollar photographic industry, it is interesting that, during the same period, others were studying methods of capturing images electronically.

In 1857, Heinrich Geissler, a German physicist, developed a gas discharge tube filled with rarefied gases that would glow when a current was applied to the two metal electrodes at each end. Modifying this invention, Sir William Crookes discovered that streams of electrons could be projected towards the end of such a tube using a cathode-anode structure common in cathode ray tubes (CRTs).

In 1926, Alan Archibald Campbell-Swinton attempted to capture an image from such a tube by projecting an image onto a selenium-coated metal plate scanned by the CRT beam. Such experiments were commercialized by Philo Taylor Farnsworth, who demonstrated a working version of such a video camera tube, known as an image dissector, in 1927.

These developments were followed by the introduction of the image Orthicon and Vidicon by RCA in 1939 and the 1950s respectively, and by Philips’ Plumbicon, Hitachi’s Saticon and Sony’s Trinicon, all of which use similar principles. These camera tubes, developed originally for television applications, were the first to find their way into cameras developed for machine vision applications.

Needless to say, being tube-based, such cameras were hardly purpose-built for rugged, high-EMI environments. This was to change when, in 1969, Willard Boyle and George E. Smith, working at AT&T Bell Labs, showed how charge could be shifted along the surface of a semiconductor in what was known as a “charge bubble device”. Although both were later awarded the Nobel Prize for the invention of the CCD concept, it was an English physicist, Michael Tompsett, a former researcher at the English Electric Valve Company (now e2V; Chelmsford, England; www.e2v.com), who, in 1971 while working at Bell Labs, showed how the CCD could be used as an imaging device.

Three years later, the late Bryce Bayer, while working for Kodak, showed how, by applying a checkerboard filter of red, green and blue to the pixels of an area CCD array, color images could be captured using a single device.
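As a rough illustration of how a Bayer mosaic is turned back into color, the sketch below demosaics a synthetic RGGB raw frame with OpenCV; the frame size, mosaic pattern and random pixel data are assumptions for the example, not details from Bayer’s patent.

```python
import numpy as np
import cv2

# Synthetic stand-in for a raw frame from a sensor with an RGGB Bayer
# mosaic; a real camera would deliver this as its raw output.
raw = (np.random.rand(480, 640) * 255).astype(np.uint8)

# Demosaicing interpolates the two missing color samples at each pixel
# to reconstruct a full three-channel image.
rgb = cv2.cvtColor(raw, cv2.COLOR_BayerRG2RGB)
print(rgb.shape)  # (480, 640, 3)
```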

While the CCD transfers collected charge from each pixel during readout and erases the image, scientists at General Electric in 1972 developed an X-Y array of addressable photosensitive elements known as a charge injection device (CID). Unlike the CCD, the charge collected is retained in each pixel after the image is read and is only cleared when charge is “injected” into the substrate. Using this technology, the blooming and smearing artifacts associated with CCDs are eliminated. Cameras based around this technology were originally offered by CIDTEC, now part of Thermo Fisher Scientific (Waltham, MA; www.thermoscientific.com).

While the term active pixel image sensor or CMOS image sensor was not to emerge
for two decades, previous work on such devices dates back as far as 1967 when
Dr. Gene Weckler described such a device in his paper “Operation of pn junction
photodetectors in a photon flux integrating mode,” IEEE J. Solid-State Circuits
(http://bit.ly/12SnC7O).

Despite this, CMOS imagers were not to become widely adopted for the next thirty years, due in part to the variability of the CMOS manufacturing process. Today, however, many manufacturers of active pixel image sensors widely tout the performance of such devices as comparable to that of CCDs.

Building such devices is expensive, however, and even with the emergence of “fabless” developers, just a handful of vendors currently offer CCD and CMOS imagers. Of these, perhaps the best known are Aptina (San Jose, CA; www.aptina.com), CMOSIS (Antwerp, Belgium; http://cmosis.com), Sony Electronics (Park Ridge, NJ; www.sony.com) and Truesense Imaging (Rochester, NY; www.truesenseimaging.com), all of whom offer a variety of devices in multiple configurations.

While the list of imager vendors may be small, the emergence of such devices has spawned literally hundreds of camera companies worldwide. While many target low-cost applications such as webcams, others such as Basler (Ahrensburg, Germany; www.baslerweb.com), Imperx (Boca Raton, FL; www.imperx.com) and JAI (San Jose, CA; www.jai.com) are firmly focused on the machine vision and image processing markets, often incorporating on-board FPGAs into their products.

Lighting and illumination


Although Thomas Edison is widely credited with the invention of the first practical electric light, it was Alessandro Volta, the inventor of the forerunner of today’s storage battery, who noticed that when wires were connected to the terminals of such devices, they would glow.

In 1812, using a large voltaic battery, Sir Humphry Davy demonstrated that an arc discharge would occur, and in 1860 Michael Faraday, an early associate of Davy’s, demonstrated a lamp exhausted of air that used two carbon electrodes to produce light.

Building on these discoveries, Edison formed the Edison Electric Light Company in 1878 and demonstrated his version of an incandescent lamp just one year later. To extend the life of such incandescent lamps, Alexander Just and Franjo Hannaman developed and patented an electric bulb with a tungsten filament in 1904, while showing that lamps filled with an inert gas produce a higher luminosity than vacuum-based tubes.

Just as the invention of the incandescent lamp predates Edison, so too does the halogen lamp. As far back as 1882, chlorine was used to stop the blackening of the lamp and slow the thinning of its filament. However, it was not until Elmer Fridrich and Emmitt Wiley, working for General Electric in Nela Park, Ohio, patented a practical version of the halogen lamp in 1955 that such illumination devices became commercially viable.


Like the invention of the incandescent lamp, the origins of the fluorescent lamp date back to the mid-19th century when, in 1857, Heinrich Geissler, a German physicist, developed a gas discharge tube filled with rarefied gases that would glow when a current was applied to the two metal electrodes at each end. As well as leading to the invention of commercial fluorescent lamps, this discovery would form the basis of tube-based image capture devices in the 20th century (see “Capturing images”).

In 1896, Daniel Moore, building on Geissler’s discovery, developed a fluorescent lamp that used nitrogen gas and founded his own companies to market it. After these companies were purchased by General Electric, Moore went on to develop a miniature neon lamp.

While incandescent and fluorescent lamps became widely popular in the 20th century, it would be research in electroluminescence that would form the basis of solid-state LED lighting. Although electroluminescence was discovered by Henry Round, working at Marconi Labs, in 1907, it was Oleg Losev who, in the mid-1920s, observed light emission from zinc oxide and silicon carbide crystal rectifier diodes when a current was passed through them (see “The life and times of the LED,” http://bit.ly/o7axVN).

The numerous papers published by Mr. Losev constitute the discovery of what is now known as the LED. Like many other such discoveries, it would be years before these ideas could be commercialized. Indeed, it was not until 1962 that Dr. Nick Holonyak, working at General Electric and experimenting with GaAsP, produced the world’s first practical red LED. One decade later, Dr. M. George Craford, a former graduate student of Dr. Holonyak, invented the first yellow LED. Blue and phosphor-based white LEDs followed.

For the machine vision industry, the development of such low-cost, long-life and rugged light sources has led to the formation of numerous lighting companies, including Advanced illumination (Rochester, VT; www.advancedillumination.com), CCS America (Burlington, MA; www.ccsamerica.com), ProPhotonix (Salem, NH; www.prophotonix.com) and Spectrum Illumination (Montague, MI; www.spectrumillumination.com), that all offer LED lighting products in many different configurations.

Interface standards
The evolution of machine vision owes as much to the television and broadcast industry as it does to the development of digital computers. As programmable vacuum-tube-based computers were emerging in the early 1940s, engineers working on the National Television System Committee (NTSC) were formulating the first monochrome analog NTSC standard.

Adopted in 1941, this was modified in 1953, in what would become the RS-170a standard, to incorporate color while remaining compatible with the monochrome standard. Today, RS-170 is still being used in numerous digital CCD and CMOS-based cameras and frame grabber boards, allowing 525-line images to be captured and transferred at 30 frames/s.

Just as open computer bus architectures led to the development of both analog and digital camera interface boards, television standards committees followed an alternative path, introducing high-definition serial digital interfaces such as SDI and HD-SDI. Although primarily developed for broadcast equipment, these standards are also supported by computer interface boards, allowing HDTV images to be transferred to host computers.

To allow these computers to be networked together, Ethernet, originally developed at Xerox PARC in the mid-1970s, was formally standardized in 1985 and has become the de facto standard for local area networks. At the same time, serial buses such as FireWire (IEEE 1394), under development since 1986 by Apple Computer, were widely adopted in the mid-1990s by many machine vision camera companies.

Like FireWire, the Universal Serial Bus (USB), introduced in a similar time frame by a consortium of companies including Intel and Compaq, was also to become widely adopted by both machine vision camera companies and manufacturers of interface boards.

When first introduced, however, these standards could not support the higher bandwidths of machine vision cameras and were, by their very nature, non-deterministic. Because no high-speed point-to-point interface formally existed, the Automated Imaging Association (Ann Arbor, MI) formed the Camera Link committee in the late 1990s. Led by companies such as Basler and JAI, the well-known Camera Link standard was introduced in October 2000 (http://bit.ly/1cgEdKH).

For some, however, even the 680 MByte/s data transfer rate was not sufficient to support the data rates demanded by (at the time) high-performance machine vision cameras, and it was Basler and others that, in 2004, by reassigning certain pins in the Camera Link specification, managed to attain an 850 MByte/s transfer rate.
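To put these interface figures in perspective, a quick back-of-the-envelope calculation shows how link bandwidth caps frame rate. The 680 and 850 MByte/s Camera Link figures come from the text above; the sensor resolution and the USB 3.0 effective rate cited later are assumptions used only for illustration.

```python
# Maximum frame rate supported by a given interface bandwidth,
# assuming an 8-bit monochrome sensor (2048 x 1088 is an arbitrary
# example resolution, not a figure from the article).
width, height, bytes_per_pixel = 2048, 1088, 1
frame_bytes = width * height * bytes_per_pixel

interfaces = {
    "Camera Link (full)": 680e6,      # bytes/s
    "Camera Link (extended)": 850e6,  # bytes/s
    "USB 3.0 (effective)": 400e6,     # bytes/s
}

for name, rate in interfaces.items():
    print(f"{name}: ~{rate / frame_bytes:.0f} frames/s")
```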

Just as this point-to-point protocol was taking hold, other technologies such as Gigabit Ethernet were emerging to challenge the distance limitations of the Camera Link protocol. In 2006, for example, Pleora Technologies (Kanata, Ontario, Canada; www.pleora.com) pioneered the introduction of the GigE Vision standard which, although primarily based on the Gigabit Ethernet standard, incorporated numerous additions, such as how camera data could be more effectively streamed, how systems developers could control and configure devices and, perhaps most importantly, the GenICam generic programming interface (http://bit.ly/13TDlba) for different types of machine vision cameras.


At the same time, the limitations of the Camera Link interface were posing problems for systems integrators, because even the extended 680 MByte/s interface required multiple connectors and still could not support emerging higher-speed CMOS cameras. So it was that in 2008, a consortium of companies led by Active Silicon (Iver, United Kingdom; www.activesilicon.com), Adimec (Eindhoven, The Netherlands; www.adimec.com) and EqcoLogic (Brussels, Belgium; www.eqcologic.com) introduced the CoaXPress interface standard that, as its name implies, is a high-speed serial communications interface, one that allows 6.25 Gbit/s to be transferred over a single coax cable. To increase this speed further, multiple channels can be used.

Under development at the same time, and primarily led by Teledyne DALSA (Waterloo, Ontario, Canada; www.teledynedalsa.com), the Camera Link HS (CLHS) standard, supposedly the successor to the Camera Link standard, offers scalable bandwidths from 300 MBytes/s to 16 GBytes/s. At the time of writing, however, more companies have endorsed the CoaXPress standard than CLHS.

While high-speed standards such as CXP and CLHS support high-performance cameras, the emergence of the USB 3.0 standard in 2008 offered systems integrators a way to attain a maximum throughput of 400 MBytes/s, over ten times faster than USB 2.0 (see “USB 3 Vision: Extending camera to computer interfaces,” p. 43, this issue).

Like the original Gigabit Ethernet, the USB 3.0 standard was not well suited to machine vision applications. Even so, Point Grey (Richmond, BC, Canada; www.ptgrey.com) was the first to introduce a camera for the interface, as early as 2009 (http://bit.ly/15gILiO). So it was that in January this year, the USB Vision Technical Committee of the Automated Imaging Association announced the introduction of the USB 3.0 Vision standard, which builds on many of the advances of the GigE Vision standard, including device discovery, device control, event handling and streaming data mechanisms, and which will be supported by Point Grey and numerous others.


Computers and software


Although low-cost digital computers were not to emerge until the 1980s, research into the field of digital image processing dates back to the 1950s, with pioneers such as Dr. Robert Nathan of JPL, who in 1959 helped develop imaging equipment to map the moon. In 1961, analog image data from Ranger spacecraft was first converted to digital data using a video film converter and digitally processed by what NASA refers to as a “small” NCR 102D computer (http://1.usa.gov/162UGgI). In fact, it filled a room.

It was during the 1960s that many of the algorithms used in today’s machine vision systems were developed. The pioneering work by Georges Matheron and Jean Serra on mathematical morphology in the 1960s led to the foundation of the Center of Mathematical Morphology at the École des Mines in Paris, France. Originally dealing with binary images, this work was later extended to grey-scale images and to the well-known gradient, top-hat and watershed operators that are used in such software packages as Amerinex Applied Imaging’s (Monroe Twp, NJ; www.amerineximaging.com) Aphelion.
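For readers who want to see these operators in practice, the short sketch below applies the gradient, top-hat and watershed operators to a grey-scale image. The file name and kernel size are placeholders, and OpenCV is used here simply as a convenient open-source stand-in for commercial packages such as Aphelion.

```python
import cv2
import numpy as np

img = cv2.imread("sample.png", cv2.IMREAD_GRAYSCALE)  # placeholder image
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))

# Morphological gradient: dilation minus erosion, outlining object edges.
gradient = cv2.morphologyEx(img, cv2.MORPH_GRADIENT, kernel)

# Top-hat: image minus its opening, isolating small bright features.
tophat = cv2.morphologyEx(img, cv2.MORPH_TOPHAT, kernel)

# Watershed: segments touching objects, seeded with labelled foreground blobs.
_, binary = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
_, markers = cv2.connectedComponents(binary)
segmented = cv2.watershed(cv2.cvtColor(img, cv2.COLOR_GRAY2BGR),
                          markers.astype(np.int32))
```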

In 1969, Professor Azriel Rosenfeld described many of the most commonly used algorithms of today in his book “Picture Processing by Computer” (Academic Press, 1969), and two years later, William K. Pratt and Harry C. Andrews founded the USC Signal and Image Processing Institute (SIPI), one of the first research organizations in the world dedicated to image processing.

During the 1970s, researchers at SIPI developed the basic theory of image processing and how it could be applied to image de-blurring, image coding and feature extraction. Indeed, early work on transform coding at SIPI now forms the basis of the JPEG and MPEG standards.

Tired of the standard test images used to compare the results of such algorithms, Dr. Alexander Sawchuk, now Leonard Silverman Chair Professor at the USC Viterbi School of Engineering (http://bit.ly/13vrXOB) and then an assistant professor of electrical engineering at SIPI, digitized what was to become one of the most popular test images of the late 20th century.

Using a Muirhead wirephoto scanner, Dr. Sawchuk digitized the top third of the November 1972 Playboy centerfold of Lena Soderberg. Notable for its detail, color, and high- and low-frequency regions, the image became very popular in numerous research papers published in the 1970s. So popular, in fact, that in May 1997, Miss Soderberg was invited to attend the 50th Anniversary IS&T conference in Boston (http://bit.ly/1bfOTD). Needless to say, in today’s rather more “politically correct” world, the image seems to have fallen out of favor!

Because of the lack of processing power offered by Von Neumann architectures, a number of companies introduced specialized image processing hardware in the mid-1980s. By incorporating proprietary and often parallel processing concepts, these machines were at once powerful and expensive.

Stand-alone systems by companies such as Vicom and Pixar were at the same
time being challenged by modular hardware from companies such as Datacube,
the developer of the first Q-bus frame grabber for Digital Equipment Corp
(DEC) computers.


With the advent of PCs in the 1980s, board-level frame grabbers, processors and display controllers for the open-architecture ISA bus began to emerge, and with them software-callable libraries for image processing.

Today, with the emergence of the PC’s PCI Express bus, off-the-shelf frame grabbers can be used to transfer images to the host PC at very high data rates using a number of different interfaces (see “Interface standards”). At the same time, the introduction of software packages from companies such as Microscan (Renton, WA; www.microscan.com), Matrox (Dorval, Quebec, Canada; www.matrox.com), MVTec (Munich, Germany; www.mvtec.com), Teledyne DALSA (Waterloo, Ontario, Canada; www.teledynedalsa.com) and Stemmer Imaging (Puchheim, Germany; www.stemmer-imaging.de) makes it increasingly easy to configure even the most sophisticated image processing and machine vision systems.

Ten individuals our readers chose as pioneers in machine vision


Dr. Andrew Blake is a Microsoft Distinguished Scientist and the Laboratory
Director of Microsoft Research Cambridge, England. He joined Microsoft in
1999 as a Senior Researcher to found the Computer Vision group. In 2008 he
became a Deputy Managing Director at the lab, before assuming his current
position in 2010. In 2011, he and colleagues at Microsoft Research received the
Royal Academy of Engineering MacRobert Award for their machine learning
contribution to Microsoft Kinect human motion-capture. For more information,
go to: http://bit.ly/69Le5Z

Dr. William K. Pratt holds a Ph.D. in electrical engineering from the University of Southern California and has written numerous papers and books in the fields of communications, signal and image processing. He is perhaps best known for his book “Digital Image Processing” (Wiley; http://amzn.to/15sGYUd) and his founding of Vicom Systems in 1981. After joining Sun Microsystems in 1988, Dr. Pratt participated in the Programmers Imaging Kernel Specification (PIKS) application programming interface, which was commercialized by PixelSoft (Los Altos, CA; http://pixelsoft.com) in 1993.

Will machine vision replace human beings?


John Salls, President, Vision ICS (Woodbury, MN; www.vision-ics.com)

Technology by its nature makes human beings more efficient. Automobiles


replaced stable hands and horse trainers. Power tools make carpenters more
efficient, yet nobody would suggest that we should eliminate nail guns or circular
saws because they “replace
people”. Automation of any kind
is the same, making people more
efficient and providing them with
goods that they can afford.
You could hire a professional baker to make a batch of waffles, freeze them, put them in a bag, put them in a box, and sell them. But it would cost $10-$20 a box, not the $3-$4 that we pay, and still have the company that makes them make a profit. To make them in that quantity would take an army of bakers, not a half dozen semi-skilled laborers running a production line.

From Fritz Lang’s Metropolis (1927)
If you happen to be a baker you might argue that automation replaces your task. However, as a society we are all better off, as we have increased purchasing power and can afford to purchase products such as waffles, clothing, shoes, computers, homes, furniture, and automobiles.
If my computer had to be built without automation, it would be impossible for anybody without a NASA budget to afford. If you are that baker, open a high-end bakery making special-occasion cakes, get a job in a restaurant, or change careers.
At the turn of the last century, my ancestors were heavily invested in stables. The automobile destroyed their business. They could have complained about it, and it still would not have diminished the effect of the introduction of the automobile or made their business survive. Instead we adapted, went into different businesses, survived, and thrived.
I live in a larger home than my ancestors did, have two trucks, multiple computers, a kitchen and garage full of gadgets, affordable clothing and plenty of food in my fridge, and live well largely thanks to automation. I think most of us can say that automation makes our lives better. Even our poor and unemployed live better as a result of automation. Food is fresher and safer. Goods are more abundant and cost less.


Mr. William Silver has been at the forefront of development in the machine vision industry for over 30 years. In 1981 he, along with fellow MIT graduate student Marilyn Matz, joined Dr. Robert J. Shillman, a former lecturer in human visual perception at MIT, to co-found a start-up company called Cognex (Natick, MA; www.cognex.com). He was principal developer of the company’s PatMax pattern matching technology and its normalized correlation search tool.

Mr. Bryce Bayer (1929-2012) will long be remembered for the filter that bears his name. After obtaining an engineering physics degree in 1951, Mr. Bayer worked as a research scientist at Eastman Kodak until his retirement in 1986. U.S. patent 3,971,065, awarded to Mr. Bayer in 1976 and titled simply “Color imaging array,” is one of the most important innovations in image processing of the last 50 years. Mr. Bayer was awarded the Royal Photographic Society’s Progress Medal in 2009 and the first Camera Origination and Imaging Medal from the SMPTE in 2012.

Mr. James Janesick is a distinguished scientist and author of numerous technical papers on CCD and CMOS devices, as well as several books on CCD devices including “Scientific Charge-Coupled Devices” (SPIE Press; http://bit.ly/1304e9I). Working at the Jet Propulsion Laboratory for over 20 years, Mr. Janesick developed many scientific ground- and flight-based imaging systems. He received NASA medals for Exceptional Engineering Achievement in 1982 and 1992, received the SPIE Educator Award in 2004 and was named SPIE/IS&T Imaging Scientist of the Year in 2007. In 2008 he was awarded the Electronic Imaging Scientist of the Year Award at the Electronic Imaging 2008 Symposium in recognition of his innovative work with electronic CCD and CMOS sensors.

Dr. Gene Weckler received a Doctor of Engineering degree from Stanford University and, in 1967, published the seminal “Operation of pn junction photodetectors in a photon flux integrating mode,” IEEE J. Solid-State Circuits (http://bit.ly/12SnC7O). In 1971 he co-founded Reticon to commercialize the technology and, after a twenty-year career there, co-founded Rad-icon in 1997 to commercialize the use of CMOS-based solid-state image sensors for X-ray imaging. Rad-icon was acquired by Teledyne DALSA (Waterloo, Ontario, Canada; www.teledynedalsa.com) in 2008. This year, he was awarded the Exceptional Lifetime Achievement Award by the International Image Sensor Society (http://bit.ly/123wpd1) for significant contributions to the advancement of solid-state image sensors.

Mr. Stanley Karandanis (1934-2007) was Director of Engineering at Monolithic Memories (MMI) when John Birkner and H.T. Chua invented the programmable array logic (PAL) device. Teaming with J. Stewart Dunn in 1979, Mr. Karandanis co-founded Datacube (http://bit.ly/qgzpl5) to manufacture the first commercially available single-board frame grabber for Intel’s now-obsolete Multibus. After the (still existing) VMEbus was introduced by Motorola, Datacube developed a series of modular and expandable boards and processing modules known as MaxVideo and MaxModules. In recognition of his outstanding contributions to the machine vision industry, Mr. Karandanis received the Automated Imaging Achievement Award from the Automated Imaging Association (Ann Arbor, MI; www.visiononline.org) in 1999.


Rafael C. Gonzalez received a Ph.D. in electrical engineering from the University of Florida, Gainesville in 1970 and subsequently became a Professor at the University of Tennessee, Knoxville, where in 1984 he founded the University’s Image & Pattern Analysis Laboratory and the Robotics & Computer Vision Laboratory. In 1982, he founded Perceptics Corporation, a manufacturer of computer vision systems that was acquired by Westinghouse in 1989. Dr. Gonzalez is the author of four textbooks in the fields of pattern recognition, image processing and robotics, including “Digital Image Processing” (Addison-Wesley Educational Publishers), which he co-authored with Dr. Paul Wintz. In 1988, Dr. Gonzalez was awarded the Albert Rose National Award for Excellence in Commercial Image Processing.

Dr. Azriel Rosenfeld (1931-2004) is widely regarded as one of the leading researchers in the field of computer image analysis. With a doctorate in mathematics from Columbia University in 1957, Dr. Rosenfeld joined the University of Maryland faculty, where he became Director of the Center for Automation Research. During his career, he published over 30 books, making fundamental and pioneering contributions to nearly every area of the field of image processing and computer vision. Among his numerous awards is the IEEE’s Distinguished Service Award for Lifetime Achievement in Computer Vision and Pattern Recognition.

Shep Siegel (left) with Stanley Karandanis (image courtesy Shep Siegel).

Dr. Gary Bradski is a Senior Scientist at Willow Garage (Menlo Park, CA;
www.willowgarage.com) and is a Consulting Professor in the Computer Sciences
Department of Stanford University. With 13 issued patents, Dr. Bradski is responsible for the Open Source Computer Vision Library (OpenCV), an open
source computer vision and machine learning software library built to provide a
common infrastructure for computer vision applications in research, government
and commercial applications. Dr. Bradski also organized the vision team for
Stanley, the Stanford robot that won the DARPA Grand Challenge and founded
the Stanford Artificial Intelligence Robot (STAIR) project under the leadership of
Professor Andrew Ng.


ORIGINALLY PUBLISHED DECEMBER 2013

Machine vision: A look into the future

To produce their products more efficiently, manufacturers will automate many of the functions now delegated to human operators

Andrew Wilson, Editor

IMAGINE A WORLD where the laborious and repetitive tasks once performed by man are taken over by autonomous machines. A world in which crops are automatically harvested, sorted, processed, inspected, packed and delivered without human intervention. While this concept may appear to be the realm of science fiction, many manufacturers already employ automated inspection systems that incorporate OEM components such as lighting, cameras, frame grabbers, robots and machine vision software to increase the efficiency and quality of their products.

While effective, these systems represent only a small fraction of a machine vision market that will expand to encompass every aspect of life, from harvesting crops and minerals to delivering the products manufactured from them directly to consumers’ doors. In a future that demands increased efficiency, machine vision will be used in systems that automate entire production processes, effectively eliminating the manpower that this now entails and the errors it introduces.

Vision-guided robotic systems will play an increasingly important role in such systems as they become capable of tasks such as automated farming, animal husbandry, and crop monitoring and analysis. Autonomous vehicles incorporating vision, IR and ultrasound imagers will then be capable of delivering these products to be further processed. Automated sorting machines and meat-cutting systems will then prepare the product for further processing, after which many of the machines in use today will further process and package these products for delivery, once again by autonomous vehicles. In this future, customers at supermarkets will find no checkout personnel, as they will have been replaced by automated vision systems that scan, weigh and check every product.

Unconstrained environments
Today’s machine vision systems are often deployed in constrained environments where lighting can be carefully controlled. However, designing vision-guided robotic systems for agricultural tasks such as planting, tending and harvesting crops requires that such systems operate in unconstrained environments, where lighting and weather conditions may vary dramatically. Doing so requires systems that incorporate a number of different imaging techniques, depending on the application to be performed.

Currently the subject of many research projects, these systems divide agricultural tasks into automated systems that plant, tend and harvest. Since each task is application-dependent, so too are the types of sensors used in each system. For crop picking, for example, it is necessary to determine the color of each fruit to judge its ripeness. For harvesting crops such as wheat, it may only be necessary to guide an autonomous system across the terrain.

In such systems, machine vision systems that combine stereo vision, LIDAR, INS and GPS with machine vision software will be used for path planning, mapping and classification of crops. Such systems are currently under development at many universities and research institutes around the world (see “Machine Vision in Agricultural Robotics – A short overview,” http://bit.ly/19MHn9M, by Emil Segerblad and Björn Delight of Mälardalen University (Västerås, Sweden; www.mdh.se)).


In many of these systems, 3D vision plays a key role. In the development of a path planning system, for example, John Reid of Deere & Company (Moline, IL, USA; www.deere.com) has used a 22 cm baseline stereo camera from Tyzx (Menlo Park, CA, USA; www.tyzx.com), mounted on the front of a Gator utility vehicle from Deere with automatic steering capabilities, to calculate the position of potential obstacles and determine a path for the vehicle’s on-board steering controller (http://bit.ly/1bRFQAX).
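A minimal sketch of the underlying computation, assuming a calibrated, rectified stereo pair: depth follows from disparity as Z = fB/d. The 22 cm baseline matches the Tyzx head described above, but the focal length, image files and block-matching settings are illustrative assumptions, not details of the Deere system.

```python
import numpy as np
import cv2

left = cv2.imread("left.png", cv2.IMREAD_GRAYSCALE)    # placeholder images
right = cv2.imread("right.png", cv2.IMREAD_GRAYSCALE)

# Block matching yields disparity in 1/16-pixel fixed point.
stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)
disparity = stereo.compute(left, right).astype(np.float32) / 16.0

f_px = 700.0        # focal length in pixels (assumed)
baseline_m = 0.22   # 22 cm baseline, as on the Tyzx head

# Depth in metres; entries where disparity <= 0 are invalid.
with np.errstate(divide="ignore", invalid="ignore"):
    depth_m = f_px * baseline_m / disparity
```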

Robotic guidance
While 3D systems can be used to aid the guidance of autonomous vehicles, 3D mapping can also aid crop spraying, perform yield analysis and detect crop damage or disease. To accomplish such mapping tasks, Michael Nielsen and his colleagues at the Danish Technological Institute (Odense, Denmark; www.dti.dk) used a Trimble GPS, a laser rangefinder from SICK (Minneapolis, MN, USA; www.sick.com), a stereo camera and a tilt sensor from VectorNav Technologies (Richardson, TX, USA; www.vectornav.com), mounted on a utility vehicle equipped with a halogen flood light and a custom-made xenon strobe. After scanning rows of peach trees, 3D reconstruction of an orchard was performed using tilt-sensor-corrected GPS positions interpolated through encoder counts (http://bit.ly/1fxp36W).

While 3D mapping can determine path trajectories and map fields and orchards, it can also be used to classify fruits and plants. Indeed, this is the aim of a system developed by Ulrich Weiss and Peter Biber at Robert Bosch (Schwieberdingen, Germany; www.bosch.de). To demonstrate that it is possible to distinguish multiple plants using a low-resolution 3D laser sensor, an FX6 sensor from Nippon Signal (Tokyo, Japan; www.signal.co.jp) was used to measure the distance and reflectance intensity of the plants using an infrared pulsed laser with a precision of 1 cm. After 3D reconstruction, supervised learning techniques were used to identify the plants (http://bit.ly/19ALVMM).
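As a hedged sketch of what such a supervised-learning step might look like, the snippet below trains a classifier on simple per-plant features such as height and laser reflectance. The features, labels and classifier choice are assumptions for illustration only, not details of the Bosch system.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Synthetic training data: one row per plant, columns are
# [height_m, mean_reflectance]; labels 0 = crop, 1 = weed (illustrative).
rng = np.random.default_rng(0)
X = rng.random((200, 2))
y = rng.integers(0, 2, 200)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict([[0.3, 0.7]]))  # classify a new plant
```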


Automated harvesting
Once identified, robots and vision systems will be employed to automatically harvest such crops. Such a system for harvesting strawberries, for example, has been developed by Guo Feng and his colleagues at Shanghai Jiao Tong University (Shanghai, China; www.sie.sjtu.edu.cn). In the design of the system, a two-camera imaging system was mounted onto a harvesting robot designed for strawberry picking. A 640 x 480 DXC-151A from Sony (Park Ridge, NJ, USA; www.sony.com) mounted on the top of the robot frame captures images of 8-10 strawberries. Another camera, a 640 x 480 EC-202 II from Elmo (Plainview, NY, USA; www.elmousa.com), was installed on the end effector of the robot to image one or two strawberries.

While the Sony camera localizes the fruit, the Elmo camera captures images of strawberries at a higher resolution. Images from both of these cameras were then captured by an FDM-PCI MULTI frame grabber from Photron (San Diego, CA, USA; www.photron.com) interfaced to a PC. A ring-shaped fluorescent lamp installed around the local camera provided the stable lighting required for fruit location (http://bit.ly/16mWhmq).

Although many such robotic systems are still in the early stages of development, companies such as Wall-Ye (Mâcon, France; http://wall-ye.com) have demonstrated the practical use of such systems. The company’s solar-powered vine pruning robot, Wall-Ye, incorporates four cameras and a GPS system to perform this task (Figure 1). A video of the robot in action can be viewed at http://bit.ly/QDFy1q.

FIGURE 1: Wall-Ye is a solar-powered vine pruning robot that incorporates four cameras and a GPS system.


Crop grading
Machine vision systems are also playing an increasingly important role in the grading of crops once harvested. In many systems, this requires the use of multispectral image analysis. Indeed, this is the approach taken by Olivier Kleynen and his colleagues at the Universitaire des Sciences Agronomiques de Gembloux (Gembloux, Belgium; www.gembloux.ulg.ac.be) in a system to detect defects on harvested apples.

To accomplish this, a MultiSpec Agro-Imager from Optical Insights (Santa Fe, NM, USA; www.optical-insights.com) that incorporated four interference band-pass filters from Melles Griot (Rochester, NY, USA; www.cvimellesgriot.com) was coupled to a CV-M4CL 1280 x 1024 pixel monochrome digital camera from JAI (San Jose, CA, USA; www.jai.com). In operation, the MultiSpec Agro-Imager projects onto a single CCD sensor four images of the same object, each corresponding to a different spectral band. Equipped with a Cinegon lens from Schneider Optics (Hauppauge, NY, USA; www.schneideroptics.com), camera images were then acquired using a Grablink Value Camera Link frame grabber from Euresys (Angleur, Belgium; www.euresys.com). These multispectral images were then used to determine the quality of the fruit (http://bit.ly/19nGkdG).
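A minimal sketch of how such band images might be combined, assuming two of the four spectral bands are available as arrays: bruised tissue typically reflects less in the near infrared, so a simple band ratio and threshold can flag candidate defect pixels. The band choice, threshold and synthetic data below are assumptions for illustration, not parameters from the Gembloux system.

```python
import numpy as np

# Placeholders for two of the four band images projected onto the sensor.
band_vis = np.random.rand(512, 512).astype(np.float32)
band_nir = np.random.rand(512, 512).astype(np.float32)

# Ratio of NIR to visible reflectance; low values suggest bruised tissue.
ratio = band_nir / np.clip(band_vis, 1e-6, None)
defect_mask = ratio < 0.8  # threshold chosen for illustration only
print("candidate defect pixels:", int(defect_mask.sum()))
```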

Other methods, such as the use of SWIR cameras, have also been employed for similar purposes. Renfu Lu of Michigan State University (East Lansing, MI, USA; www.msu.edu), for example, has shown how an InGaAs area array camera from UTC Aerospace Systems (Princeton, NJ, USA; www.sensorsinc.com) covering the spectral range from 900-1700 nm, mounted to an imaging spectrograph from Specim (Oulu, Finland; www.specim.fi), can detect bruises on apples (http://bit.ly/zF8jlN).

While crop-picking robots have yet to be fully realized, those used for grading and sorting fruit are no longer the realm of scientific research papers. Indeed, numerous systems now grade and sort products including potatoes, dates, carrots and oranges. While many of these systems use visible-light-based products for these tasks, others incorporate multispectral image analysis.

Last year, for example, Com-N-Sense (Kfar Monash, Israel; www.com-n-sense.com) teamed with Lugo Engineering (Talme Yaffe, Israel; www.lugo-engineering.co.il) to develop a system capable of automatic sorting of dates (http://bit.ly/KN2ITf). Using diffuse dome lights from Metaphase (Bristol, PA, USA; www.metaphase-tech.com) and Prosilica 1290C GigE Vision color cameras from Allied Vision Technologies (Stadtroda, Germany; www.alliedvisiontec.com), the system is capable of sorting dates at speeds as fast as 1400 dates/min.

Multispectral image analysis is also being used to perform sorting tasks. At Insort (Feldbach, Austria; www.insort.at), for example, a multispectral camera from EVK DI Kerschhaggl (Raaba, Austria; www.evk.co.at) has been used in a system to sort potatoes, while Odenberg (Dublin, Ireland; www.odenberg.com) has developed a system that can sort fruit and vegetables using an NIR spectrometer (http://bit.ly/zqqr38).

Packing and wrapping


After grading and sorting, such products must be packed and wrapped for shipping.
Often this requires that the crops are manually packed and wrapped by human
operators. Alternatively, they can be transferred to other automated systems that
can perform this task. Once packed, these goods may be finally inspected by vision-
based systems. In either case, numerous steps are required before such products can
be shipped to their final destination.

In an effort to automate the entire sorting, packaging and checking process, the European Union has announced a project known as PicknPack (www.picknpack.eu) that aims to unite the entire production chain (Figure 2). Initiated last November, the system will consist of a sensing system that assesses the quality of products before or after packaging, a vision-controlled robotic handling system that picks and separates the product from a harvest bin or transport system and places it in the right position in a package, and an adaptive packaging system that can accommodate various types of packaging. When complete, human intervention will be reduced to a minimum.

FIGURE 2: To unite the entire sorting, packaging and checking process, the European Union has announced a project known as PicknPack.

Unmanned vehicles
After packing, of course, such goods must be transported to their final destination. Today, manned vehicles are used to perform this task. In the future, however, such tasks will be relegated to autonomous vehicles. At present, says Professor William Covington of the University of Washington School of Law (Seattle, WA, USA; www.law.washington.edu), fully independent vehicles that operate without instructions from a server based on updated map data remain in the research prototype stage.

By 2020, however, Volvo expects accident-free cars and “road trains” guided by a lead vehicle to become available. Other automobile manufacturers such as GM, Audi, Nissan and BMW all expect fully autonomous, driverless cars to become available in this time frame (http://bit.ly/1ibdWxv). These will be equipped with sensors such as radar, lidar, cameras, IR and GPS systems to perform this task.


Automated retail
While today’s supermarkets rely heavily on traditional bar-code readers to price individual objects, future check-out systems will employ sophisticated scanning, weighing and pattern recognition to relieve human operators of such tasks.

Already, Toshiba-TEC (Tokyo, Japan; www.toshibatec.co.jp) has developed a supermarket scanner that uses pattern recognition to recognize objects without the use of bar-codes (http://bit.ly/18KCRXp). Others, such as Wincor Nixdorf (Paderborn, Germany; www.wincor-nixdorf.com), have developed fully automated systems; the company’s Scan Portal is claimed to be the world’s first practicable fully automatic scanning system (Figure 3). A video of the scanner at work can be found at http://bit.ly/rSG5HB.

Whether it be crop grading, picking, sorting, packaging or shipping, automated robotic systems will impact the production of every product made by man, effectively increasing the efficiency and quality of products and services along the way. Although developments currently underway can make this future a reality, some of the technologies required still need to be perfected for this vision to emerge.

FIGURE 3: Wincor Nixdorf has developed a fully automated system known as the Scan
Portal that the company claims is the world’s first practicable fully automatic scanning
system.


Companies mentioned

Allied Vision Technologies, Stadtroda, Germany; www.alliedvisiontec.com
Com-N-Sense, Kfar Monash, Israel; www.com-n-sense.com
Danish Technological Institute, Odense, Denmark; www.dti.dk
Deere & Company, Moline, IL, USA; www.deere.com
Elmo, Plainview, NY, USA; www.elmousa.com
Euresys, Angleur, Belgium; www.euresys.com
EVK DI Kerschhaggl, Raaba, Austria; www.evk.co.at
Insort, Feldbach, Austria; www.insort.at
JAI, San Jose, CA, USA; www.jai.com
Lugo Engineering, Talme Yaffe, Israel; www.lugo-engineering.co.il
Mälardalen University, Västerås, Sweden; www.mdh.se
Melles Griot, Rochester, NY, USA; www.cvimellesgriot.com
Metaphase, Bristol, PA, USA; www.metaphase-tech.com
Michigan State University, East Lansing, MI, USA; www.msu.edu
Nippon Signal, Tokyo, Japan; www.signal.co.jp
Odenberg, Dublin, Ireland; www.odenberg.com
Optical Insights, Santa Fe, NM, USA; www.optical-insights.com
Photron, San Diego, CA, USA; www.photron.com
Robert Bosch, Schwieberdingen, Germany; www.bosch.de
Schneider Optics, Hauppauge, NY, USA; www.schneideroptics.com
Shanghai Jiao Tong University, Shanghai, China; www.sie.sjtu.edu.cn
SICK, Minneapolis, MN, USA; www.sick.com
Sony, Park Ridge, NJ, USA; www.sony.com
Specim, Oulu, Finland; www.specim.fi
Toshiba-TEC, Tokyo, Japan; www.toshibatec.co.jp
Tyzx, Menlo Park, CA, USA; www.tyzx.com
Universitaire des Sciences Agronomiques de Gembloux, Gembloux, Belgium; www.gembloux.ulg.ac.be
University of Washington School of Law, Seattle, WA, USA; www.law.washington.edu
UTC Aerospace Systems, Princeton, NJ, USA; www.sensorsinc.com
VectorNav Technologies, Richardson, TX, USA; www.vectornav.com
Wall-Ye, Mâcon, France; http://wall-ye.com
Wincor Nixdorf, Paderborn, Germany; www.wincor-nixdorf.com
