
Design and Realization of Image Processing System Based on Embedded Platform

Yan Liping, Song Kai

Software School, East China Jiaotong University, Nanchang 330013, China
School of Information Engineering, East China Jiaotong University, Nanchang 330013, China

ABSTRACT: Aiming at the problem that traditional image processing equipment is bulky, power-hungry, difficult to move and costly, this paper puts forward a new idea of developing an image processing system on an embedded platform, designed and implemented with the S3C2410 as the core processor, ARM Linux as the operating system platform and MiniGui as the graphical user interface. The hardware architecture is introduced, and the design, realization and operation of the software system are described in detail. The testing results indicate that the embedded image processing system runs stably with good real-time performance despite the relatively scarce hardware and software resources of the embedded platform, and can display clear digital image processing results.
KEYWORDS: Embedded; S3C2410; Image Processing; MiniGui
With the development of information science, with computers and computer technology at its core, image processing is becoming more and more important in areas such as communication, management, medicine, seismology, meteorology, aerospace and education. However, traditional image processing techniques rely on large quantities of electronic computing devices, which results in enormous maintenance, transportation and other expenditures. The embedded platform is an excellent alternative, for it is small, low-cost and low in both power consumption and maintenance. Therefore, developing an image processing system on an embedded platform can reduce production and maintenance costs, improve reliability and controllability, and has high market value.
A. Hardware Chip Selection
Image processing is computation-intensive and places certain demands on processor speed. The S3C2410 chip, based on the ARM920T core, adopts the Harvard architecture with a five-stage pipeline, provides a performance of 1.1 MIPS/MHz, and is a hard macro-cell with high performance and low power consumption. The S3C2410 has a complex internal structure and provides many scalable functional modules: the ARM920T kernel, independent 16 KB instruction and 16 KB data caches, a virtual memory management unit (MMU), an LCD controller, a NAND Flash boot loader, a system management unit, 3-channel UART, 4-channel DMA, 4-channel timers with PWM function, I/O ports, an RTC (real-time clock), an 8-channel 10-bit ADC, a touch screen controller, IIC and IIS digital audio bus interfaces, USB host and USB device controllers, an SD/MMC card controller, 2-channel SPI, a PLL (phase-locked loop), etc. From the above description it can be seen that the S3C2410 is suitable as the core processor of an embedded image processing system: its speed is adequate for basic image processing and its cost is low.
B. Software Platform Building
Using the S3C2410 chip as the processor, the embedded image processing system should be universal, speedy and hardware-controllable. Based on these considerations, the software system is built from three parts: the establishment of ARM Linux, the transplantation of the graphical user interface (MiniGui) and the design and coding of the image processing software. Because the establishment of ARM Linux follows the general process for building embedded Linux, this paper does not describe it further and mainly introduces the design and realization of the graphical user interface and the embedded image processing software.
The hardware platform adopts the S3C2410 as the core and mainly includes three parts: the storage system, system interfaces and the user interface (LCD display and keyboard), as shown in Fig.1.

Figure 1. Hardware Platform Architecture
2010 International Forum on Information Technology and Applications
978-0-7695-4115-0/10 $26.00 © 2010 IEEE
DOI 10.1109/IFITA.2010.168

The S3C2410 treats the external reset signal as an exception: when the system is reset, the program counter is set to 0 and execution starts by jumping to address 0x00000000. This space corresponds to Bank0, which is connected to a 2 MB NOR Flash. Stored in the NOR Flash is the BootLoader, which is responsible for configuring the processor's structure and work modes and for automatically detecting whether all the hardware is working properly. After the system is initialized and self-tested, the BootLoader copies the zImage in the 16 MB NAND Flash, which is the image file of the software system, to address 0xc0008000, the start address of the 64 MB SDRAM. Then the BootLoader sets the program counter to 0xc0008000 and the system starts running.
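The copy-and-jump step above can be sketched in C. This is an illustrative host-side simulation, not the board's actual BootLoader code: flat arrays stand in for the NAND Flash and the SDRAM, and ZIMAGE_SIZE is an assumed, much smaller size so the sketch stays self-contained.

```c
#include <stdint.h>
#include <string.h>

#define ZIMAGE_SIZE 64u  /* assumption; a real zImage is megabytes */

static uint8_t nand_flash[ZIMAGE_SIZE]; /* stands in for the zImage in NAND Flash */
static uint8_t sdram[ZIMAGE_SIZE];      /* stands in for SDRAM at 0xc0008000 */

typedef void (*kernel_entry_t)(void);

void bootloader_load_kernel(void)
{
    /* Copy the image file of the software system into SDRAM. */
    memcpy(sdram, nand_flash, ZIMAGE_SIZE);

    /* On real hardware the BootLoader would now set the program counter
     * to the SDRAM start address, i.e. something like
     *     ((kernel_entry_t)0xc0008000)();
     * which is of course not executed in this host-side sketch. */
}
```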
The main problem to be solved in this system is how to realize image processing on the embedded platform. Because fewer programming languages, and hence fewer APIs, are available in the embedded environment, the design of general APIs (for example, reading BMP files) and of the GUI is described in detail in the latter part of this paper. In order to improve the working efficiency and the portability of the software system, frequently used interfaces and image processing algorithms are encapsulated into APIs that can be invoked easily, so that the universality of the system is greatly improved.
A. Design of Function Modules
The software system is composed of seven modules, each of which contains related image processing algorithms encapsulated into APIs that can be invoked easily. The seven modules are as follows: image geometric transformation, edge detection and contour tracing, histogram modification, penumbral reconciliation without jitter, image color transformation, image smoothing and sharpening, and erosion, dilation and thinning. The main hierarchical structure is shown in Fig.2. Each module contains several image processing algorithms. For example, the image geometric transformation module covers eight image processing APIs: image rotation, vertical mirror, horizontal mirror, image translation, true color image zoom, non-true color image zoom, true color image transpose and non-true color image transpose. Fig.3 shows the composition of the image geometric transformation module; the other six modules are similar and are not introduced further.
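As one concrete example of an API in this module, a horizontal mirror can be sketched as follows. The function name is illustrative (not taken from the paper's Graphic.h); the color[240][320] layout follows the GetTruePixel signature described later.

```c
#include <stdint.h>

typedef uint32_t U32; /* matches the U32 type used by the paper's APIs */

/* Horizontal mirror of a true color image held in a color[240][320]
 * matrix: swap each row's pixels end-to-end up to mapwidth. */
void HorMirror(U32 color[240][320], int mapwidth, int mapheight)
{
    for (int y = 0; y < mapheight; y++)
        for (int x = 0; x < mapwidth / 2; x++) {
            U32 tmp = color[y][x];
            color[y][x] = color[y][mapwidth - 1 - x];
            color[y][mapwidth - 1 - x] = tmp;
        }
}
```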

Figure 2. Main Hierarchical Structure of the Software System

Figure 3. Composing of Image Geometric Transformation Module
B. Realization and Running of the Software System
The realization of the image processing software system includes several processes such as reading the image file, development of the GUI and encapsulation of common APIs.
1) Reading BMP Files into Memory
In the embedded system there are no ready-made library functions for handling BMP files, so reading a BMP file into memory is a fundamental prerequisite for the transplantation of the image processing algorithms. The system handles two kinds of images, true color images and 256-color images (including 256-level gray images), and accordingly two functions for reading BMP files are designed, stored in Graphic.h and Graphic.c for the subsequent development. The two functions are listed as follows:
• GetTruePixel(char bmpname[], U32 color[240][320], int *mapwidth, int *mapheight);
• Get256Pixel(char bmpname[], U8 colorbuf[240][320], U32 pale[256], int *mapwidth, int *mapheight);
The first function reads a true color image named bmpname[] into the matrix color[][] and obtains the width and height of the image. The second function reads a 256-color image named bmpname[] into the matrix colorbuf[][], at the same time reads the corresponding color palette data into the array pale[], and also obtains the width and height of the image.
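Both functions must begin by parsing the BMP header to recover the image dimensions. The paper's Graphic.c is not available, so the following is a simplified, assumption-based sketch of reading width and height from the standard BITMAPINFOHEADER layout; the function and helper names are illustrative.

```c
#include <stdint.h>

/* Read a 32-bit little-endian integer from a byte buffer. */
static int32_t read_le32(const uint8_t *p)
{
    return (int32_t)(p[0] | (p[1] << 8) |
                     ((uint32_t)p[2] << 16) | ((uint32_t)p[3] << 24));
}

/* Parse the fixed BMP header: "BM" magic at offset 0, biWidth at
 * byte offset 18 and biHeight at offset 22. Returns 0 on success. */
int ParseBmpHeader(const uint8_t *buf, int *mapwidth, int *mapheight)
{
    if (buf[0] != 'B' || buf[1] != 'M')
        return -1;                    /* not a BMP file */
    *mapwidth  = read_le32(buf + 18); /* biWidth  */
    *mapheight = read_le32(buf + 22); /* biHeight */
    return 0;
}
```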
2) Algorithm Transplantation
All image processing algorithms involved in this paper are ready-made and their mathematical transformation processes are fixed. What needs to be done is to re-implement these algorithms in the C programming language and to encapsulate them into APIs so that they can be invoked easily.
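As an illustration of such a transplantation, the sketch below wraps one textbook algorithm, histogram equalization of a 256-level gray image, as a C API in the style the paper describes. The function name is an assumption, not one of the paper's actual APIs.

```c
#include <stdint.h>

/* Histogram equalization for a 256-level gray image stored as one
 * byte per pixel. The mathematical transformation is the standard
 * cumulative-histogram remapping; only the C encapsulation is new. */
void HistEqualize(uint8_t *img, int width, int height)
{
    long hist[256] = {0};
    uint8_t map[256];
    long total = (long)width * height, cdf = 0;

    for (long i = 0; i < total; i++)   /* 1. gray-level histogram  */
        hist[img[i]]++;
    for (int v = 0; v < 256; v++) {    /* 2. cumulative mapping    */
        cdf += hist[v];
        map[v] = (uint8_t)((cdf * 255) / total);
    }
    for (long i = 0; i < total; i++)   /* 3. apply point transform */
        img[i] = map[img[i]];
}
```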
3) Transplantation of MiniGui
A good user interface is necessary for full-featured software, and embedded systems are no exception. Compared with other embedded GUIs, MiniGui has a faster message response mechanism and is more suitable for real-time systems; it is therefore selected as the graphic interface platform for developing the image processing software system. The transplantation process of MiniGui is introduced as follows.
a) Building Linux Cross-compiler Environment
The process of building a Linux cross-compiler environment is general and is not discussed further in this paper.
b) Cross-compiling MiniGui
Download the source packages of MiniGui, including libminigui-1.6.9.tar.gz, minigui-res-1.6.9.tar.gz and mde-1.6.9.tar.gz, which are respectively the MiniGui library source code, the MiniGui resources and the integrated demos.
First, compile and install the MiniGui library. Extract libminigui-1.6.9.tar.gz, enter the corresponding directory and run the ./configure script, for example "CC=arm-linux-gcc ./configure --build=i686-pc-linux-gnu --host=arm-linux --disable-lite". If the ./configure script runs successfully, a Makefile will be generated. Run make and make install to compile and install the library.
Second, compile and install the MiniGui resources. Extract minigui-res-1.6.9.tar.gz and enter the corresponding directory. Before running make install, modify the file configure.linux in that directory: revise the value of the prefix option to $(TOPDIR)/usr/local/arm-linux/arm-linux, then run make install.
Finally, compile and install the MiniGui demos. Extract mde-1.6.9.tar.gz and enter the corresponding directory. Modify the header check in configure.in to AC_CHECK_HEADER(include/minigui/common.h, have_libminigui=yes, foo=bar) so that configure finds the cross-compiled MiniGui headers. Run make to finish compiling the demos.
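The three steps above can be recapped as a single build script. The arm-linux-gcc toolchain name and the configure options are assumptions reconstructed from the text; adjust them to the actual cross-compiler environment before use.

```shell
# 1. MiniGui library: configure for cross-compilation, then build and install
tar xzf libminigui-1.6.9.tar.gz && cd libminigui-1.6.9
CC=arm-linux-gcc ./configure --build=i686-pc-linux-gnu --host=arm-linux --disable-lite
make && make install
cd ..

# 2. MiniGui resources: revise the prefix option in configure.linux first
tar xzf minigui-res-1.6.9.tar.gz && cd minigui-res-1.6.9
make install
cd ..

# 3. Demos: revise AC_CHECK_HEADER in configure.in first
tar xzf mde-1.6.9.tar.gz && cd mde-1.6.9
make
```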
c) Copying MiniGui Resources to Development Board
Enter the directory /usr/local/arm-linux/arm-linux. The resources to be copied are in the subdirectories etc and lib. Run arm-linux-strip on the libraries and then use tar to package the required resources. Finally, write these resources into the ramdisk file system.
d) Onboard Environment Configuration of Linux
First create the device file of the FrameBuffer driver by entering the /dev directory and running mknod fb0 c 29 0. Then run mknod tty0 c 5 0 and mknod mouse c 10 1 to create the terminal device file and the mouse driver device file. Modify the configuration file MiniGUI.cfg in the directory /usr/local/etc by setting the defaultmode of fbcon to the appropriate display mode. At this point, the transplantation of MiniGui is complete. Continuing to add MiniGui library functions and various resources and writing application programs will make the graphic interface of the development board more attractive and complete.
4) Program Development Based on MiniGui
The API of MiniGui is similar to that of Win32, so the creation of a window is also similar to that in Windows programs. The entry function of the program is named MiniGUIMain(), which is responsible for creating the main window. In the entry function a MAINWINCREATE structure is initialized and the function CreateMainWindow() is invoked to create the main window. Then the program enters a message loop. When it exits, the function MainWindowThreadCleanup() is invoked to destroy the message queue of the main window. The function DestroyMainWindow(hWnd) can be invoked to destroy the main window but not its message queue. Generally speaking, a main window program destroys the main window after receiving the message MSG_CLOSE and invokes the function PostQuitMessage() to terminate the message loop.
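The lifecycle just described can be sketched as a minimal MiniGui 1.6-style program. It builds only against the MiniGui headers on the target; the window caption, size and style flags are illustrative choices, not taken from the paper.

```c
#include <minigui/common.h>
#include <minigui/minigui.h>
#include <minigui/gdi.h>
#include <minigui/window.h>

static int MainWinProc(HWND hWnd, int message, WPARAM wParam, LPARAM lParam)
{
    switch (message) {
    case MSG_CLOSE:
        DestroyMainWindow(hWnd); /* destroys the window, not its queue */
        PostQuitMessage(hWnd);   /* terminates the message loop        */
        return 0;
    }
    return DefaultMainWinProc(hWnd, message, wParam, lParam);
}

int MiniGUIMain(int argc, const char* argv[])
{
    MSG Msg;
    HWND hMainWnd;
    MAINWINCREATE CreateInfo;

    /* Initialize the MAINWINCREATE structure for the main window. */
    CreateInfo.dwStyle = WS_VISIBLE | WS_BORDER | WS_CAPTION;
    CreateInfo.dwExStyle = WS_EX_NONE;
    CreateInfo.spCaption = "Image Processing System";
    CreateInfo.hMenu = 0;
    CreateInfo.hCursor = GetSystemCursor(0);
    CreateInfo.hIcon = 0;
    CreateInfo.MainWindowProc = MainWinProc;
    CreateInfo.lx = 0;  CreateInfo.ty = 0;
    CreateInfo.rx = 320; CreateInfo.by = 240; /* 320x240 LCD assumed */
    CreateInfo.iBkColor = COLOR_lightwhite;
    CreateInfo.dwAddData = 0;
    CreateInfo.hHosting = HWND_DESKTOP;

    hMainWnd = CreateMainWindow(&CreateInfo);
    if (hMainWnd == HWND_INVALID)
        return -1;

    ShowWindow(hMainWnd, SW_SHOWNORMAL);

    /* Message loop: runs until PostQuitMessage() is called. */
    while (GetMessage(&Msg, hMainWnd)) {
        TranslateMessage(&Msg);
        DispatchMessage(&Msg);
    }

    MainWindowThreadCleanup(hMainWnd); /* destroys the message queue */
    return 0;
}
```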
The main work of GUI design on embedded platform is
to transplant MiniGui. After successful transplantation of
MiniGui, programming based on MiniGui is relatively
simple, which is mainly to realize the seven modules
discussed above.
5) Running the Image Processing System
A message from the keyboard or touch screen starts the image processing system, and a guide interface appears. Click anywhere on the touch screen to enter the main interface of the system, as shown in Fig.4. There are seven choices of image processing algorithms on the main interface. Click a button on the touch screen, or type the shortcut key shown at the bottom of the button, to enter the corresponding algorithm operation interface. Click the return button to get back to the guide interface.

Figure 4. Main Interface of the Image Processing System
In the main interface, click the corresponding button of
the image geometric transformation on the touch screen or
type the “1” key on the keyboard to get into the function
interface of the image geometric transformation, as shown in
Fig.5. In this interface there are several buttons to be selected
to get into the corresponding interfaces of specific image
algorithms. Click one of the buttons to run the corresponding
image algorithm. In the running process the user can select
the image file needed to be processed. After image selection
the system will perform different processing according to the
need for input data. If input data is needed, a new interface
for input data will be created. If not, the system will directly
run the image algorithm procedure. Eventually the
processing results will be displayed in the system interface.
Click the return button to get back to the main interface so as
to reselect other image algorithms.

Figure 5. Image Geometric Transformation Interface
The image processing system based on an embedded platform can be installed in hand-held or mobile devices to satisfy users' image processing needs at lower cost. This paper introduces the realization process of an embedded image processing system and emphasizes the design and operation of the software system. Testing results indicate that the system is highly efficient, user-friendly, easy to extend and reasonably designed in its interface. As market requirements change, embedded image processing technology will be more widely applied and developed. The design scheme of the embedded image processing system discussed in this paper is believed to play an active role in the application domains of the related technology.
This paper was supported by Youth Science Foundation
of Jiangxi Provincial Department of Education under
contract number GJJ09502 and Science Foundation of
Jiangxi Provincial Department of Education under contract
number GJJ10452.
[1] Liu Fucai, Zhao Jiawei, Tang Lina. Image Processing Interface Development of Qt/Embedded Based on Embedded Linux[J]. Computer Applications and Software, 2009, Vol.26(11): 116-117, 149
[2] Liu Cheng, Bao Kejin. The Research of High Performance Embedded Image Processing System[J]. Microelectronics and Computer, 2008, Vol.25(6): 38-41
[3] Ke Yong, Yang Zongkai, Zhao Mengxin. Design and Implementation of Image Processing Middleware in Embedded System[J]. Application Research of Computers, 2007, Vol.24(9): 292-294
[4] Dai Xuefeng, Jin Lianwen, Xiong Bo. Embedded Intelligent Surveillance Systems Based on Foreground Segmentation Technology[J]. Computer Engineering, 2007, Vol.33(20): 233-235
[5] Mohamed Akil. Special Issue on Reconfigurable Architecture for Real-Time Image Processing[J]. Journal of Real-Time Image Processing, 2008, Vol.3(3)
[6] Zhou Jianjun, Zhou Jianhong. Research on Embedded Digital Image Recognition System Based on ARM-DSP[C]. 2nd IEEE International Conference on Computer Science and Information Technology (ICCSIT 2009), 2009: 524-527
[7] Li Linsheng, Zhang Yongliang, Tian Qichuan. Multi-Face Location on Embedded DSP Image Processing System[C]. 1st International Congress on Image and Signal Processing (CISP 2008), Vol.(4): 124-
[8] Qian Yanxin, Yang Yuhang. The Software Platform Construction of Embedded Digital Video Player[J]. Microcomputer Information, 2006,
[9] Liu Cheng, Bao Kejin. Automatic Alarm Embedded System Based on Image Processing[J]. Computer Engineering and Design, 2005, Vol.58(17): 4198-4200
