
Integrating vision and motion systems can reduce costs, increase efficiency, and improve
quality.
Advanced integration methods can greatly improve the performance of tomorrow's smart
machines.
NI offers a wide selection of choices to meet your combined vision and motion needs.
Distributed processing allows easy scalability and a variety of performance options.
Centralized processing provides a small footprint and improves determinism and
reduces latency between tasks.
LabVIEW reduces complexity by giving you a single development environment.

I/O connectivity
Centralized processing
Wide range of targets

Control Applications with Vision in the Control Loop


Visual servoing
Optics alignment
Sorting machines
There are a number of benefits to close integration between vision and motion
components within a smart machine. Some of the key benefits include reduced costs,
increased efficiency, and improved quality. The most advanced methods of integration, like
visual servo control, allow you to achieve these benefits while using low-cost motion
hardware to further drive your costs down. National Instruments has a wide selection of
products to meet both your vision and motion needs, including smart cameras, GigE cameras,
and a unique embedded monitoring and control platform called CompactRIO. The flexibility
of CompactRIO and LabVIEW allows you to implement your smart machine in many different
ways. A distributed processing architecture can give you excellent scalability and the widest
selection of choices, while a centralized processing architecture allows you to perform some
of the most advanced motion and vision integration methods by taking advantage of
improved determinism between tasks and faster loop rates. But no matter what hardware
you choose, you will have a single LabVIEW development environment for both vision
and motion tasks, which can greatly reduce design challenges and improve productivity.
Thank you for your time and attention today. I hope this webcast was both informative
and useful.

NI's motion portfolio consists of:


NI SoftMotion, a feature-rich motion API and motor control IP set, built on the
LabVIEW RIO architecture, a COTS hardware platform that combines an RT processor, a user-programmable
FPGA, and modular I/O, paired with
industry-leading drive technology for stepper and brushless servo motors.
So why are the RIO architecture and NI SoftMotion so well suited to high-performance machine
builders?
In NI SoftMotion, motion tasks are disaggregated, so you can largely choose where to
run a particular task to meet the needs of the application. Furthermore, each task or block
is open; that is, you can modify its functionality down to a very low level. Finally, NI
SoftMotion was built to be modular, so that specific tasks can be modified and
customized without significantly impacting other blocks in the system. This lets users
custom-design exactly where they want to and abstract the rest. The modular approach of
NI SoftMotion and the LabVIEW RIO architecture also means that you can mix and match
elements of the NI platform to come up with the hardware system that meets your needs in
terms of axis counts, processing power, integration with other I/O subsystems, and level of
customizability.
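
The disaggregated, open, modular idea described above can be sketched in code. This is a hedged illustration only: the class and method names (TrajectoryGenerator, PidLoop, MotionStack) are invented for this sketch and are not NI SoftMotion API names.

```python
# Illustrative sketch: each motion task (trajectory generation, control loop)
# is a separate block behind a small interface, so one block can be swapped
# or customized without touching the others.

class TrajectoryGenerator:
    """Generates position setpoints; replaceable with a custom profile."""
    def next_setpoint(self, t):
        return min(t * 0.5, 10.0)  # simple ramp capped at 10.0

class PidLoop:
    """Proportional-only control loop; replaceable with a custom controller."""
    def __init__(self, kp):
        self.kp = kp
    def command(self, setpoint, feedback):
        return self.kp * (setpoint - feedback)

class MotionStack:
    """Composes independently replaceable blocks into one axis controller."""
    def __init__(self, traj, ctrl):
        self.traj, self.ctrl = traj, ctrl
    def step(self, t, feedback):
        return self.ctrl.command(self.traj.next_setpoint(t), feedback)

axis = MotionStack(TrajectoryGenerator(), PidLoop(kp=2.0))
u = axis.step(t=4.0, feedback=1.5)  # setpoint 2.0, error 0.5, command 1.0
```

Swapping in a different trajectory or control block changes only the object passed to MotionStack, which is the spirit of the modular mix-and-match described above.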

Let's return one more time to our vision-guided motion block diagram and see how
centralized processing affects the system. *Click for Animation* Now all tasks are
performed by a single CompactRIO system. All of the complex triggering and
communication across the Ethernet link has been removed, and so has the potential
performance bottleneck. Now we can simply communicate coordinates between different
loops running on the processor. And because both vision and motion loops run side
by side on the same processor and within LabVIEW Real-Time, their communication is
completely deterministic. *Click for Animation* So where we once had an unknown
amount of latency between vision and motion components, we now have a defined period,
usually under 1 millisecond. This lets us do more complex vision and motion integrations
where vision is brought directly into the motion control loop.
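
The coordinate passing between co-located loops can be sketched as follows. This is a Python stand-in, not LabVIEW: a bounded single-element queue plays the role of an RT FIFO between the vision and motion loops, and the loop names are illustrative.

```python
import queue
import threading

# Single-element FIFO standing in for an RT FIFO between two loops
# running side by side on the same processor.
coord_fifo = queue.Queue(maxsize=1)

def vision_loop(n_frames):
    """Produces one (x, y) target coordinate per processed frame."""
    for i in range(n_frames):
        coord_fifo.put((i * 0.1, i * 0.2))  # blocks until motion consumes

def motion_loop(results, n_frames):
    """Consumes coordinates and would command the axes with them."""
    for _ in range(n_frames):
        results.append(coord_fifo.get())  # blocks until a coordinate arrives

results = []
producer = threading.Thread(target=vision_loop, args=(5,))
consumer = threading.Thread(target=motion_loop, args=(results, 5))
consumer.start()
producer.start()
producer.join()
consumer.join()
```

On a desktop OS this queue has no timing guarantee; the point of LabVIEW Real-Time, as stated above, is that the equivalent transfer completes within a bounded, deterministic period.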

Side-by-side look at the CVS-1457RT and the CVS-1459RT



We will learn more about all the points mentioned here in the coming slides.
Color and unsigned grayscale.
For effective communication between the VIs, the control path has to be handled with
great care; for this purpose we make use of the four-wire handshake protocol.
Because FPGAs are capable of true parallelization, we can run IP in parallel. If we wish to merge
results after parallelization, then we have to merge the streams with latency balancing. If an
image stream is to be given to IPs running in parallel, we branch the image stream.
Minimal time and resources should be used to communicate between the host
and target.
If the image size is known a priori, then one can design IPs involving kernel buffering for
minimal area utilization.
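
The four-wire handshake mentioned above can be illustrated with a small cycle-by-cycle simulation. This is a generic valid/ready sketch, not NI's exact terminal names: data moves on a cycle only when the producer asserts valid and the consumer asserts ready.

```python
# Cycle-accurate toy model of a valid/ready handshake between two IP blocks.

def simulate_handshake(data, ready_pattern):
    """Transfer one item per cycle only when valid and ready are both high."""
    transferred = []
    idx = 0
    for cycle, ready in enumerate(ready_pattern):
        valid = idx < len(data)       # producer still has data to offer
        if valid and ready:           # both high: a transfer happens this cycle
            transferred.append((cycle, data[idx]))
            idx += 1
        # if the consumer stalls (ready False), the producer holds its data
    return transferred

# The consumer stalls on cycle 1; no data is lost or duplicated.
moves = simulate_handshake([10, 20, 30], [True, False, True, True, True])
# moves == [(0, 10), (2, 20), (3, 30)]
```

The same backpressure mechanism is why streams merged after parallel branches must be latency-balanced: a stall on one branch must not reorder data relative to the other.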


In the host implementation you will be familiar with a single VI whose calls can be
configured for different data types. On the FPGA we have taken a different approach, where we
have a VI for every data type being supported. The reason behind this is to have an optimal
FPGA implementation for each specific data type.
In Operators, we support arithmetic and logical operations on an image with another
image or with a constant.
Another flavor of Poly VI can be found in morphology-related operations, where we have a
specific function followed by a kernel size and then a data type.
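
The one-VI-per-data-type approach can be sketched as a dispatch table of type-specialized routines. This is a hedged illustration: the function names (add_u8, add_i16, image_add) are invented here and are not NI API names, and the saturating behavior is an assumption chosen to show why each type wants its own implementation.

```python
import numpy as np

def add_u8(a, b):
    """Saturating add for unsigned 8-bit images (clamps at 255, no wraparound)."""
    return np.minimum(a.astype(np.uint16) + b, 255).astype(np.uint8)

def add_i16(a, b):
    """Saturating add for signed 16-bit images."""
    return np.clip(a.astype(np.int32) + b, -32768, 32767).astype(np.int16)

# One specialized routine per supported pixel type, selected by dtype,
# mirroring one Poly VI instance per data type on the FPGA.
DISPATCH = {np.dtype(np.uint8): add_u8, np.dtype(np.int16): add_i16}

def image_add(a, b):
    return DISPATCH[a.dtype](a, b)

img = np.array([250, 100], dtype=np.uint8)
out = image_add(img, np.array([10, 10], dtype=np.uint8))
# out == [255, 110]  (first pixel saturates; result stays uint8)
```

Each entry can use the narrowest intermediate width its type needs, which is the resource-optimality argument made above for per-type FPGA implementations.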


On the FPGA we have Transfer VIs, which implement the DMA FIFO. The four-wire
handshake protocol is implemented for the Transfer VIs as well. You also have the
flexibility of choosing the DMA FIFO used by the Transfer VIs.
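
A minimal model of the DMA FIFO behind a Transfer VI might look like the following. This is an illustrative sketch only: the class name, depth, and chunk size are made up, not NI defaults, and real DMA transfers move data between FPGA and host memory rather than within one process.

```python
from collections import deque

class DmaFifo:
    """Toy fixed-depth FIFO: FPGA side writes, host side drains in chunks."""
    def __init__(self, depth):
        self.depth = depth
        self.buf = deque()

    def write(self, value):
        """FPGA-side write; returns False (backpressure) when the FIFO is full."""
        if len(self.buf) >= self.depth:
            return False
        self.buf.append(value)
        return True

    def read(self, count):
        """Host-side read of up to `count` elements."""
        return [self.buf.popleft() for _ in range(min(count, len(self.buf)))]

fifo = DmaFifo(depth=4)
accepted = [fifo.write(v) for v in range(6)]  # last two writes see backpressure
host_data = fifo.read(8)                      # host drains what is available
# accepted == [True, True, True, True, False, False]; host_data == [0, 1, 2, 3]
```

The backpressure on a full FIFO is what the four-wire handshake propagates upstream, so the image pipeline stalls cleanly instead of dropping pixels.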


Optimization in the processor-based approach will involve vectorization and managing code
execution across multiple cores. Also explain latency and delay here.
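
The vectorization part of that optimization can be shown with a small example, sketched here in Python/NumPy rather than LabVIEW: a per-pixel threshold written as an element-by-element loop versus a single vectorized operation. The threshold value and image size are arbitrary.

```python
import numpy as np

def threshold_loop(img, t):
    """Per-pixel loop: one scalar compare and store at a time."""
    out = np.empty_like(img)
    for i in range(img.size):
        out.flat[i] = 255 if img.flat[i] > t else 0
    return out

def threshold_vectorized(img, t):
    """Vectorized form: the whole array is processed in one operation."""
    return np.where(img > t, 255, 0).astype(img.dtype)

img = np.arange(100, dtype=np.uint8).reshape(10, 10)
result = threshold_vectorized(img, 50)
# Both forms produce identical output; the vectorized one lets the
# runtime use SIMD-style whole-array operations.
assert np.array_equal(threshold_loop(img, 50), result)
```

Spreading independent image regions across multiple cores is the complementary optimization; vectorization speeds up each core's share, while parallelization adds cores, and both add pipeline latency that must be accounted for in the control loop.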
