
The 2017 ISEE

Dang Thanh Tin(1), Le Tuan Vu(1), Van Dinh Trieu(1)
Bach Khoa University, Viet Nam

Nowadays, the worldwide expansion of the Internet brings about a higher demand for subtitled video. A subtitle is a textual version of a video's dialogue or explanatory text that appears onscreen. Because of this, developing software that helps video makers add and edit their subtitles is imperative. A study has been conducted to build the SmartSubtitling software. It aims at adding and processing subtitles smartly through the ability to evaluate the average color value of the subtitle area; it then defines the subtitle color so that the text is as readable as possible.

KEYWORDS: subtitling, smart subtitling, opencv, subtitle process.

1. INTRODUCTION

Viewers sometimes get annoyed at classic subtitles, which become virtually unreadable when their color coincides with the background's. This degrades video quality and interrupts viewer comprehension.

Some subtitling tools are already available, such as MKVToolnix or the Captions and Subtitles function on the Youtube website. They all have their own methods for better readability; commonly, they use a default font of white text with a black edge. Youtube, in particular, sets its white text in black cells. Sharing the same aim, SmartSubtitling is introduced as subtitling software with a more advanced function.

Smart subtitling is a complex function that involves creating, editing and adjusting subtitles so that they are always readable. In other words, its task is to keep the subtitle and background colors distinguishable even when the background changes continuously in color and brightness. The initial requirement is to accurately define the subtitle area in the background, then segment it into dark and bright regions so as to properly process its colors. The applied methods are discussed briefly in section 2, the result analysis in section 3, and the conclusion in section 4.

2. METHOD

2.1. Tool

OpenCV is a library of programming functions mainly aimed at real-time computer vision. Its vast and progressively expanding library aids us effectively in image processing; it includes various functions such as drawing, video input/output, shape detection, etc. As an open-source tool, OpenCV is well suited for this study, in which OpenCV version 3.2.0 is utilized.

The software is programmed in C++, an object-oriented programming language. The test runs and algorithm testing are carried out in Visual Studio 2010. To build the graphical user interface (GUI), Qt 5.6.2 is chosen. It is a C++-based framework of libraries and tools that enables the development of powerful, interactive and cross-platform applications and devices.

2.2. Theoretical basis and idea

As regards subtitling, every single video frame must be observed. Each frame is encoded as a matrix whose coordinates stand for an image element, a pixel (Fig. 1).

Figure 1. Illustrating the pixels of an image [1]


Beside the content, the time at which the subtitle is rendered is another important factor that needs to be considered: the subtitle must be displayed within a given interval. This is executed by defining the frame positions of its start point and end point. The frame position is defined by the formula:

    Position = fps * [time point]

where fps is the frame rate (frames per second) and [time point] is the point of time observed.

As to the preparation process, by entering the input, including the text and the subtitle position on the frame, the background area containing the subtitle can be defined easily. It is a rectangle surrounding the text, with the same length as the string and the same size as the text font. The following diagram depicts the process overview.

Figure 2. The process overview (grab a video frame; input the text; divide the text into cells; calculate the average color value of each cell; draw each character; advance to the next frame until end of file)

Then the rectangle is divided into cells in such a way that the length of each cell is equal to the length of each character, including spaces. Each cell has its own brightness level, which helps define its subtitle color. To do so, changing the line of text into a character array is compulsory, with each cell of the array holding one character.

Figure 4. Detaching a word into separated characters [1]

After detaching the word, the following step is drawing each character. A character is composed of line segments and curves; drawing it means generating and then connecting those lines and curves [3][6].

Figure 5. Each character is composed of line segments and curves

There are many available algorithms for drawing lines and curves, such as the DDA Algorithm, Bresenham's Line Algorithm and the Mid-Point Algorithm [2]. However, the OpenCV library already offers drawing fonts, abolishing the line-and-curve connecting step, and it is able to insert text into the frame at a specified pixel. Accordingly, the available fonts were selected instead of going further into designing such an algorithm.

Figure 3. Available word fonts in the OpenCV library

A calculation of the average color value of the cells containing the characters is carried out on every video frame, which helps to define the essential color in each cell.

Figure 6. Separating the subtitle area into cells [1]

Color average value formula:

    avg = (1/n) * sum of C(i), for i = 0 to n-1

where C(i) is the color value of the pixel at place i and n is the total number of pixels in one cell. Normally the text color is chosen in contrast with the average color of the background.

2.3. Subtitling

Thanks to the sufficient OpenCV and C++ libraries, the process becomes much more convenient.

2.3.1. Entering the subtitle

The subtitle is entered as a string, i.e. a contiguous sequence of characters. It is possible to define the subtitle string's format; currently, any subtitle section uses a fixed font. By default, the alignment position is chosen as centered and near the bottom of the frame, at about nine tenths of the frame height. Supposing x and y are the anchor coordinates of the rectangle surrounding the text, they are computed as:

    x = (frame_width - subSize) / 2
    y = frame_height * 9/10

where subSize is the subtitle length, and frame_width and frame_height are the width and height of a video frame. From these, the pixels inside the rectangle, i.e. the background area on the frame containing the subtitle, are defined.

Figure 7. The algorithm used to define the subtitle area (the set of pixels in the subtitle area)

2.3.2. Subtitle storage

A class is used to hold the subtitle characteristics. In principle, the three characteristics under consideration are the start point, the end point and the content. Vectors are used as the container. The following table is an example:

Table 1. Subtitle characteristics

    #   Start point (sec)   End point (sec)   Content
    0   10                  13                Hello!
    1   15                  18                Hi !
    2   20                  23                How are you today?
    3   25                  27                Good. And you?

2.3.3. Displaying the video in the GUI

The player is designed entirely around the frame-reading functions in OpenCV instead of the available media player function in the Qt library.

To open the given video: VideoCapture::VideoCapture(const string& filename)
To read each frame of the video: VideoCapture::read(Mat& image)

Signals and slots are used for communication between objects; this mechanism is a central feature of Qt. One object acts as an emitter and another plays the role of a receiver. A signal is emitted when a particular event occurs; a slot is a function of a class that is called in response to a particular signal. To connect a signal and a slot, the connect command is used in the constructor [1]:

    connect(object1, signal, object2, slot)

When there is a connection, for every emission the corresponding signal receiver executes its slot.

In this case, a loop reads every video frame. Each time, it converts the frame into an image, saves it to a variable called img, and then emits a signal with the img attached.

The signal emitter: void sendProcessedImaged(QImage img)
The signal receiver, a corresponding slot set up to receive the image img: void updateUIframe(QImage img) [8]

The img is then displayed on a label in the GUI; for every receipt, the label displays the received image. The time interval between two signal emissions is calculated by the formula T = 1/fps (sec). To reach high accuracy, the following formula is used instead: T = (1/fps) * 10^6 (microseconds).

As to character splitting, every element of the character array must be traversed, and each character is assigned to a variable of type char [5].

Each split word is considered a string. However, the OpenCV library does not support a function for drawing a single char; therefore an intermediate step must be executed to convert the characters from char type to string.

Figure 8. The algorithm used to split characters (for a string S: for each index i < S.length, take char a = S[i], convert a to string type and append it)

Subsequently, the background color of this string is determined and its average color value is calculated. The rule is to calculate each pixel's color value and then take the average. To do this, the defined area must be trimmed into a separate image, a matrix Mat of the trimmed area, whose rows and columns are then traversed to accumulate the average value. Because the calculation is executed on greyscale [4][7], the average color falls in the interval from 0 to 255.

Figure 9. Calculating the average color value of a defined area (trim the area to a matrix Mat, set avg = 0, then loop over i <= Mat.rows and j <= Mat.columns accumulating pixel values)

After calculating the average value, the character color in each cell is chosen. A straightforward approach is to put the text and background colors in contrast, i.e. 255 - [the average value of color]. However, when the average color drops near the middle of the interval, this leads to coloring coincidence in some cases: for example, when the average color value is 125, the calculated text color is 130, and the two are nearly the same, which leads to unreadability. Therefore the character color in each cell is restricted to two values: an average in the interval from 0 to 176 returns the white color, and the rest of the interval, between 176 and 255, returns the black color.

3. RESULT ANALYSIS

A test run is made, including opening a video and then inserting the subtitle "Hello World !!!!". In most cases the algorithm performs properly and the result is as good as expected.

Figure 10. A subtitled video

However, there are particular cases where the calculated average color value is not appropriate, which leads to unreadability, as shown in Figure 11.

Figure 11. An unreadable moment

According to practical tests and user feedback, the GUI is regarded as nice and friendly and fulfils the initial idea. However, there are some drawbacks in the process, such as subtitle quality and a time-consuming workflow. The two subtitle features start point and end point are calculated in seconds; this causes inaccuracy in the displaying time, which affects subtitle continuity. In this version, the output video format is also limited: the software accepts input video in various formats such as .mpg, .mp4 and .avi, but the output video is saved in .avi format only.

4. CONCLUSIONS

The assignment on "Smart Subtitle" is completed in all respects. In the coming version, the software will be designed and developed to gain more advanced functions:
- A higher-accuracy algorithm.
- Time accuracy down to the millisecond.
- A wider range of output video formats.
- Sound added to the rendered and output video.
- Enabling users to edit their subtitles after storing.
- Previewing.
- Automatically splitting the subtitle into rows if its width exceeds the frame width.
- A nicer and more friendly GUI.
- Three more subtitle features available in the GUI: font, size and customizable coordinates.

ACKNOWLEDGEMENT

The author is grateful to Assoc. Prof. Dr. Dang Thanh Tin for his valuable criticism of the draft of this paper.

REFERENCES

[1] G. Bradski and A. Kaehler, "Learning OpenCV", O'Reilly Media, Inc., 1st Edition, 2008.
[2] J. Blanchette and M. Summerfield, "C++ GUI Programming with Qt 4", Prentice Hall, 2nd Edition, 2006.
[3] P. Shirley, "Fundamentals of Computer Graphics", A.K. Peters, 2nd Edition, 2005.
[4] S. Johnson, "Stephen Johnson on Digital Photography", O'Reilly Media, Inc., 1st Edition, 2006.
[5] "Splitting a string by a character". Retrieved from: https://stackoverflow.com/questions/…/splitting-a-string-by-a-character
[6] "Line Generation Algorithm". Retrieved from: https://www.tutorialspoint.com/computer_graphics/line_generation_algorithm.htm
[7] J. D. Cook, "Three algorithms for converting color to grayscale". Retrieved from: https://www.johndcook.com/blog/2009/08/24/algorithms-convert-color-grayscale/
[8] "Efficient way to display OpenCV image into Qt". Retrieved from: https://www.qtcentre.org/threads/…-way-to-display-opencv-image-into-Qt