
IOWA STATE UNIVERSITY

ELECTRICAL AND COMPUTER ENGINEERING

CPRE 488 - Embedded System Design


Final Project Report

Members:
Steven Frana, Isaac Vrba,
Clayton Kramper, Jared Cox
Introduction
Given a picture, we want to generate signals such that an oscilloscope will
display a vector image.

This builds on the idea that older CRT-style oscilloscopes can produce images by
“drawing” geometric shapes with the electron beam. This differs from raster scanning
in that there are no pixels and much of the display is never illuminated (akin to a
high-speed Etch A Sketch).

An oscilloscope can operate in X-Y mode, where two inputs dictate where the beam
lands on the display. These inputs are two AC signals generated by our FPGA.
The signals can be generated using a DAC on a PMOD port, or by splitting the left and
right channels coming out of an AUX port. The latter may require amplifying the signal.

One limitation of vector graphics is how much can be drawn. Assuming the
electron beam outputs continuously, we are limited to one continuous line. How long this
line can be (and therefore how much detail we can show) is limited by how fast the
signals can change (whether by the DAC or the hardware) and by persistence of vision.
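
For a rough sense of scale: assuming a hypothetical 96 kHz DAC update rate and about
1/20 of a second of persistence, the beam can visit only roughly 96,000 / 20 ≈ 4,800
points per refresh, which bounds how detailed the drawing can be.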

In order to convert an image to a vector, we must use edge detection so that we are only
drawing lines, a “sketch” of the image. This can be done in software or hardware.

Implementation
Our approach to this problem was to build edge detection software using
OpenCV. Once this was done, we needed to convert the image into an audio file; we
chose the .wav format. On the hardware side, we needed to create a hardware design
that could load a .wav file into a DMA buffer and feed it to a DAC.

Hardware Design
Our initial hardware design took shape after we found a DAC for the project
that used the I2S protocol. Once this was determined, we needed to figure out how to
use it. Our first step was to produce some sort of generated audio signal, using a
FIFO and an I2S transmitter. After the initial implementation, we concluded that the
data transfer rate between the FIFO and the transmitter was not going to work for what
we needed: the transmitter read data faster than the FIFO could produce it. Our next
step was to implement a DMA; replacing the FIFO with the DMA solved that issue. Once
we had the FPGA hardware implemented, the DAC became our focus, and we needed to
understand how to configure it correctly. The DAC we used had three main clock
parameters: the master clock (MCLK), the bit clock (BCLK), and the sample rate
(LRCLK). We used the datasheet to compute these numbers and, in the end, produce
appropriate data.
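
For illustration, the following sketch shows how these three clocks typically relate for
an I2S DAC. The 256x master clock multiplier and 32-bit stereo slots are common
datasheet defaults assumed here for the example, not necessarily the exact values for
our part:

    // Sketch: deriving the I2S clock frequencies from a chosen sample rate.
    // The 256x MCLK multiplier and 32-bit channel slots are assumed defaults;
    // the real multipliers come from the DAC datasheet.
    #include <iostream>

    int main() {
        const unsigned fs   = 48000;        // sample rate (LRCLK), Hz
        const unsigned bclk = fs * 2 * 32;  // bit clock: 2 channels x 32-bit slots = 3.072 MHz
        const unsigned mclk = 256 * fs;     // master clock = 12.288 MHz

        std::cout << "LRCLK = " << fs   << " Hz\n"
                  << "BCLK  = " << bclk << " Hz\n"
                  << "MCLK  = " << mclk << " Hz\n";
    }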
[Figure 1]
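
As a rough illustration of the processor-side setup for the design in Figure 1, the
sketch below starts one memory-to-stream transfer using the standard Xilinx bare-metal
XAxiDma driver; the device ID, buffer size, and .wav loading are placeholders rather
than our exact project code:

    // Sketch: pushing an audio sample buffer to the I2S transmitter over AXI DMA.
    // Assumes the Xilinx bare-metal xaxidma driver; IDs and sizes are illustrative.
    #include "xaxidma.h"
    #include "xil_cache.h"
    #include "xparameters.h"

    #define SAMPLE_COUNT 4096

    static XAxiDma AxiDma;
    static u32 AudioBuf[SAMPLE_COUNT];  // filled from the .wav sample data

    int StartAudioDma()
    {
        XAxiDma_Config *Cfg = XAxiDma_LookupConfig(XPAR_AXIDMA_0_DEVICE_ID);
        if (Cfg == NULL || XAxiDma_CfgInitialize(&AxiDma, Cfg) != XST_SUCCESS)
            return XST_FAILURE;

        // Flush the buffer out of the CPU cache so the DMA reads valid samples,
        // then start the memory-to-stream (MM2S) transfer toward the transmitter.
        Xil_DCacheFlushRange((UINTPTR)AudioBuf, sizeof(AudioBuf));
        return XAxiDma_SimpleTransfer(&AxiDma, (UINTPTR)AudioBuf,
                                      sizeof(AudioBuf), XAXIDMA_DMA_TO_DEVICE);
    }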
Software Design
The software implementation, converting an input image into a .wav file with the
original shape embedded within it, went through several different approaches.
Reasonably traversing the series of filters that were required was realistically only
feasible with the OpenCV libraries. Steve and Jared were responsible for interfacing
with the DAC over the I2C and I2S protocols; Clay and Isaac were responsible for
producing a .wav file that would reproduce the image.

Our team first thought that, to have an ideally optimized system, we needed to create
or find Vivado IP cores that we could include in a modified MP2 camera hardware
pipeline. We could then feed it a continuous stream from the camera and drive the CRT
oscilloscope in real time with the generated .wav.

To do that, we downloaded the OpenCV library (a task that took more time than
should ever have been necessary) and planned to either program all the different filters
separately, hooking them up and testing as we went, or to make one massive file. Although
hypothetically plausible, we were never able to successfully integrate the OpenCV libraries
into Vitis HLS so that it could use their function set. Several hours were spent
scraping the internet for anything that would help the program find their
location. We even asked a handful of people, and although there were some good attempts,
full recognition of the libraries was never achieved.

This caused us to step back, regain our sanity, and look over the whole project
again to see how we could compensate for this setback. We were able to prove that we
could interact with the OpenCV libraries using VS Code, and once we got that to
work, we moved quite fluidly through the different stages that needed to be reached.
Given an input image, the following effects need to be applied to it:

Input Image (.png) -> Greyscale -> Sobel -> Canny -> Vector Image -> Inverse
Short-Time Fourier Transform -> Low-Pass -> .wav
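
A condensed sketch of the front half of that chain is shown below, assuming OpenCV 4;
the file names and thresholds are illustrative, and the contour extraction stands in
for the “Vector Image” step:

    // Sketch: greyscale -> edge detection -> "vectorization" with OpenCV 4.
    // File names and thresholds are illustrative placeholders.
    #include <opencv2/opencv.hpp>
    #include <vector>

    int main()
    {
        cv::Mat input = cv::imread("input.png");
        cv::Mat grey, edges;

        cv::cvtColor(input, grey, cv::COLOR_BGR2GRAY);      // greyscale
        cv::GaussianBlur(grey, grey, cv::Size(5, 5), 1.5);  // smooth before edge detection
        cv::Canny(grey, edges, 50, 150);                    // Sobel gradients + hysteresis

        // "Vectorize": recover the edge pixels as ordered point lists (contours),
        // which later become the X/Y sample pairs of the stereo audio signal.
        std::vector<std::vector<cv::Point>> contours;
        cv::findContours(edges, contours, cv::RETR_LIST, cv::CHAIN_APPROX_SIMPLE);

        cv::imwrite("edges.png", edges);
        return 0;
    }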
[Figure 2: Original Image - input.png]

[Figure 3: Sobel Edge Detection Filter Applied]


[Figure 4: Canny Filter and Vectorized image visual output]

In the end, we were able to run our code and produce a .wav file. However,
for any given input it would always produce a similar shape on the CRT oscilloscope:
multiple concentric ovals nested within one another, rotated 45 degrees to the right.
Debugging this was a massive black box, since this conversion was more or less the key
component we needed OpenCV for, and integrating the complicated math required in the
time remaining was very unrealistic.

After doing some research, we found some other possible solutions to achieve our .wav
file goal, at least partially. We learned that SVG files (Scalable Vector Graphics
images) can be converted to .wav and reproduced fairly accurately. We found a program
called ‘Rabiscoscopio’ that can take in an SVG image and convert it to .wav, as long as
the image is a single continuous stroke. How this one-of-a-kind program works was a
black box that we could not find any information on. At that point we decided to change
the OpenCV core to output an SVG image, since in theory we could then make a successful
conversion from SVG to .wav. Unfortunately, by this point we had spent so much time on
research and trying multiple routes, and our working period was expiring, so we were
forced to end our software implementation there.
[Figure 5: Batman logo .wav that was generated by Rabiscoscopio]

The final version of the code we ended on includes the iostream and OpenCV libraries,
along with sndfile and fftw3, to help attempt the vector image to .wav file translation.
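
As an example of that final translation stage, here is a minimal sketch that uses
libsndfile to write interleaved X/Y samples as a stereo .wav; the sample rate, output
name, and 16-bit PCM format are assumptions for illustration:

    // Sketch: writing interleaved X (left) / Y (right) samples to a stereo .wav
    // with libsndfile. The sample rate and PCM format are illustrative choices.
    #include <sndfile.h>
    #include <vector>

    bool writeWav(const std::vector<float>& xy)  // interleaved, scaled to [-1, 1]
    {
        SF_INFO info = {};
        info.samplerate = 48000;
        info.channels   = 2;  // left channel = X deflection, right = Y
        info.format     = SF_FORMAT_WAV | SF_FORMAT_PCM_16;

        SNDFILE* f = sf_open("output.wav", SFM_WRITE, &info);
        if (f == nullptr)
            return false;

        sf_write_float(f, xy.data(), static_cast<sf_count_t>(xy.size()));
        sf_close(f);
        return true;
    }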

Results
Our results, although not quite what we expected, were satisfactory in proving the concept.
● Hardware results
○ Hardware implemented on FPGA
○ DMA buffer stored and transmitted both software-generated data and
.wav audio data
○ Properly configured I2S transmitter IP
○ Gained an understanding of I2S and configured the DAC to output
appropriate data
○ Output a sine wave from hardware
● Software results
○ Conquered OpenCV
○ Got edge detection working for PNG images
○ Got Vectorization working
○ Used the vector image to generate a sound wave file via an inverse Fast
Fourier Transform; sadly, this did not create a proper wave file
○ Able to display the Umbrella company logo and a Batman SVG image

Conclusion
This project was something our group found very interesting, as it was
something we were all fairly new to. We had some slight understanding of how image
filters work and of designing custom hardware in Vivado due to previous labs, but
many new things were learned during this venture. One was how to install a library,
properly set up its includes, and use OpenCV with its own header files, with lots of
make and cmake along the way. These seem like relatively easy tasks, but getting a new
program working when no one has experience with it can be a huge learning curve. We
also learned more about the tool we have been using all semester: Vitis HLS, which
converts C/C++ code into VHDL to make IP cores. On the hardware side, with the
implementation of the DAC that uses the I2S protocol, we learned the importance of
clocking and, most importantly, how another communications protocol works. Even though
we didn't quite get the result we expected, we enjoyed taking the time to learn
something new and see results.
