
Hubei University of Technology
2021-2022 Academic Year, Second Semester, Final Examination

Computer Graphics Test Paper (试题)
(For Majors of Computer Science and Software Engineering) Open-book
Test Paper No. (卷号): A

Student ID: 1811562128    Name: SAAD    Class: 191c Software Engineering
Reviewer:

Question No.:   I     II    III    Total
Points:         20    20    60     100
Your Result:

Notice: If the Student ID, Name, and Grade & Class are not filled in, are filled in incompletely, or are written outside the sealing line, the test paper is invalid.

Result:    Reviewer:

I、Terms to explain (5'×4)

(1) OpenCV
(2) Cathode Ray Tube
(3) OpenGL
(4) Matplotlib
Result:    Reviewer:

II、Scripts (5'×4)

(1) Write some code to show a picture and save it to "star.jpg".
Answer:

(2) Write some code to flip a picture horizontally or vertically.

(3) Write some code to draw some geometries such as a circle and a rectangle.

(4) Write some code to implement image mixing (blending).

(Minimal code sketches for (1)-(4) are given below.)
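A minimal sketch for II(1), assuming OpenCV (cv2) and an example input file 'input.png' (the filename is only an assumption): read an image, display it, then write it out as "star.jpg".

import cv2

img = cv2.imread('input.png')      # read the source image (example filename)
cv2.imshow('image', img)           # show it in a window
cv2.waitKey(0)                     # wait for a key press
cv2.destroyAllWindows()
cv2.imwrite('star.jpg', img)       # save the image as star.jpg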
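A minimal sketch for II(2), again assuming an example input file: cv2.flip with flipCode 1 flips horizontally (around the vertical axis) and flipCode 0 flips vertically.

import cv2

img = cv2.imread('input.png')            # example filename
flipped_h = cv2.flip(img, 1)             # horizontal flip
flipped_v = cv2.flip(img, 0)             # vertical flip
cv2.imwrite('flipped_h.jpg', flipped_h)
cv2.imwrite('flipped_v.jpg', flipped_v)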
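A minimal sketch for II(3): draw a circle and a rectangle on a blank canvas with cv2.circle and cv2.rectangle (canvas size and coordinates are arbitrary examples).

import cv2
import numpy as np

canvas = np.zeros((300, 400, 3), dtype=np.uint8)               # black 400x300 canvas
cv2.circle(canvas, (100, 150), 50, (0, 255, 0), 2)             # green circle, radius 50
cv2.rectangle(canvas, (200, 100), (350, 200), (0, 0, 255), 2)  # red rectangle (BGR colour)
cv2.imshow('geometries', canvas)
cv2.waitKey(0)
cv2.destroyAllWindows()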
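A minimal sketch for II(4): blend two images of the same size with cv2.addWeighted, which computes dst = alpha*img1 + beta*img2 + gamma (the two filenames and the 0.7/0.3 weights are only examples).

import cv2

img1 = cv2.imread('img1.png')                             # example filenames
img2 = cv2.imread('img2.png')
img2 = cv2.resize(img2, (img1.shape[1], img1.shape[0]))   # make the sizes match
blended = cv2.addWeighted(img1, 0.7, img2, 0.3, 0)        # weighted blend
cv2.imshow('blended', blended)
cv2.waitKey(0)
cv2.destroyAllWindows()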
Result:    Reviewer:

III、Essay questions (10'×6)

(1) Give a brief explanation of the applications of Computer Graphics.
(2) How can computer graphics knowledge be used to develop a greedy snake game? Write out the design ideas and solutions.
Answer:

import time
import pygame

# NOTE: only the scoring and game-over helpers are shown; the window size,
# colour and display surface below are assumed so that these helpers are
# self-contained, and would normally come from the game's initialisation code.
pygame.init()
window_x, window_y = 720, 480
red = pygame.Color(255, 0, 0)
game_window = pygame.display.set_mode((window_x, window_y))

# initial score
score = 0

# displaying Score function
def show_score(choice, color, font, size):
    # creating font object score_font
    score_font = pygame.font.SysFont(font, size)
    # create the display surface object score_surface
    score_surface = score_font.render('Score : ' + str(score), True, color)
    # create a rectangular object for the text surface object
    score_rect = score_surface.get_rect()
    # displaying text
    game_window.blit(score_surface, score_rect)

# game over function
def game_over():
    # creating font object my_font
    my_font = pygame.font.SysFont('times new roman', 50)
    # creating a text surface on which text will be drawn
    game_over_surface = my_font.render('Your Score is : ' + str(score), True, red)
    # create a rectangular object for the text surface object
    game_over_rect = game_over_surface.get_rect()
    # setting position of the text
    game_over_rect.midtop = (window_x / 2, window_y / 4)
    # blit will draw the text on screen
    game_window.blit(game_over_surface, game_over_rect)
    pygame.display.flip()
    # after 2 seconds we will quit the program
    time.sleep(2)
    # deactivating pygame library
    pygame.quit()
    # quit the program
    quit()
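As a usage note (assuming a typical pygame main loop, which is not shown above): show_score() would be called once per frame after the snake and the fruit have been drawn, and game_over() would be called when the snake collides with the window border or with its own body.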
(3) Explain the definitions and applications of the BGR, HSV, CMYK, and YUV color space models.
Answer:
Color spaces are a way to represent the color channels present in an image that give the image its particular hue. There are several different color spaces and each has its own significance. Some of the popular color spaces are RGB (Red, Green, Blue), CMYK (Cyan, Magenta, Yellow, Black), HSV (Hue, Saturation, Value), etc.

BGR color space: OpenCV's default channel order is BGR rather than RGB, so images are stored with their channels ordered Blue, Green, Red. It is an additive color model where different intensities of Blue, Green and Red give different shades of color.

HSV color space: It stores color information in a cylindrical representation of RGB color points. It attempts to depict colors as perceived by the human eye. In OpenCV's 8-bit representation, Hue varies from 0-179 while Saturation and Value vary from 0-255. It is mostly used for color segmentation.

CMYK color space: Unlike RGB, it is a subtractive color space. The CMYK model works by partially or entirely masking colors on a lighter, usually white, background. The ink reduces the light that would otherwise be reflected. Such a model is called subtractive because inks "subtract" the colors red, green and blue from white light: white light minus red leaves cyan, white light minus green leaves magenta, and white light minus blue leaves yellow.

YUV color space: a color model typically used as part of a color image pipeline. It encodes a color image or video taking human perception into account, allowing reduced bandwidth for the chrominance components compared to a "direct" RGB representation. Historically, the terms YUV and Y′UV were used for a specific analog encoding of color information in television systems. Today, the term YUV is commonly used in the computer industry to describe pixel formats that are encoded using YCbCr. (A short conversion example is sketched after question (4) below.)

(4) How does OpenCV read and save video? Write some code to save the camera video in a video file format such as avi, mp4 or flv.
Answer:
In OpenCV, a video can be read either by using the feed from a camera connected to the computer or by reading a video file. The first step is to create a VideoCapture object.

Code:

import cv2

def videocapture():
    cap = cv2.VideoCapture(0)
    width = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    height = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    fps = cap.get(cv2.CAP_PROP_FPS)
    if fps <= 0:
        fps = 25.0   # some cameras report 0; fall back to a sensible frame rate
    # use a codec that matches the .mp4 container (the camera's native FOURCC
    # may not be suitable for encoding)
    fourcc = cv2.VideoWriter_fourcc(*'mp4v')
    writer = cv2.VideoWriter("my_Output.mp4", fourcc, fps, (width, height))
    while cap.isOpened():
        ret, frame = cap.read()
        if not ret:
            break
        cv2.imshow('teswell', frame)
        writer.write(frame)
        key = cv2.waitKey(24)
        # press q to quit
        if key == ord('q'):
            break
    cap.release()
    writer.release()
    cv2.destroyAllWindows()

if __name__ == '__main__':
    videocapture()
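To go with the color-space explanation in (3), here is a minimal conversion sketch using cv2.cvtColor (the filename 'input.png' is only an example; OpenCV has no built-in CMYK conversion, so that part is a rough manual approximation):

import cv2
import numpy as np

img_bgr = cv2.imread('input.png')                    # OpenCV loads images as BGR
img_rgb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB)   # e.g. for display with matplotlib
img_hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)   # H in [0,179], S and V in [0,255]
img_yuv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV)   # luma plus two chroma channels

# rough manual CMYK approximation (channels in [0,1])
bgr = img_bgr.astype(np.float32) / 255.0
k = 1 - bgr.max(axis=2)
denom = np.where(k < 1, 1 - k, 1)                    # avoid division by zero for pure black
c = (1 - bgr[:, :, 2] - k) / denom
m = (1 - bgr[:, :, 1] - k) / denom
y = (1 - bgr[:, :, 0] - k) / denom
cmyk = np.dstack((c, m, y, k))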
(5) Give at least three geometric transformations of an image, explain the concepts, and write some code to show the results.
Answer:

Transformations
OpenCV provides two transformation functions, cv.warpAffine and cv.warpPerspective, with which you can perform all kinds of transformations. cv.warpAffine takes a 2x3 transformation matrix while cv.warpPerspective takes a 3x3 transformation matrix as input (a perspective example is sketched at the end of this answer).

Scaling
Scaling is just resizing of the image. OpenCV comes with the function cv.resize() for this purpose. The size of the image can be specified manually, or you can specify the scaling factor. Different interpolation methods are used: preferable interpolation methods are cv.INTER_AREA for shrinking and cv.INTER_CUBIC (slow) or cv.INTER_LINEAR for zooming. By default, cv.INTER_LINEAR is used for all resizing purposes. You can resize an input image with either of the following methods:

import numpy as np
import cv2 as cv

img = cv.imread('messi5.jpg')
res = cv.resize(img, None, fx=2, fy=2, interpolation=cv.INTER_CUBIC)
# OR
height, width = img.shape[:2]
res = cv.resize(img, (2*width, 2*height), interpolation=cv.INTER_CUBIC)

Translation
Translation shifts the image by (tx, ty); the transformation matrix is M = [[1, 0, tx], [0, 1, ty]]:

import numpy as np
import cv2 as cv

img = cv.imread('messi5.jpg', 0)
rows, cols = img.shape
M = np.float32([[1, 0, 100], [0, 1, 50]])
dst = cv.warpAffine(img, M, (cols, rows))
cv.imshow('img', dst)
cv.waitKey(0)
cv.destroyAllWindows()

See the result below:

Rotation
Rotation turns the image by an angle about a chosen centre; cv.getRotationMatrix2D builds the 2x3 rotation matrix (here a 90-degree rotation about the image centre):

img = cv.imread('messi5.jpg', 0)
rows, cols = img.shape
# cols-1 and rows-1 are the coordinate limits.
M = cv.getRotationMatrix2D(((cols-1)/2.0, (rows-1)/2.0), 90, 1)
dst = cv.warpAffine(img, M, (cols, rows))

See the result:

Affine Transformation
In an affine transformation, all parallel lines in the original image remain parallel in the output image; the matrix is found from three pairs of corresponding points with cv.getAffineTransform:

from matplotlib import pyplot as plt

img = cv.imread('drawing.png')
rows, cols, ch = img.shape
pts1 = np.float32([[50, 50], [200, 50], [50, 200]])
pts2 = np.float32([[10, 100], [200, 50], [100, 250]])
M = cv.getAffineTransform(pts1, pts2)
dst = cv.warpAffine(img, M, (cols, rows))

plt.subplot(121), plt.imshow(img), plt.title('Input')
plt.subplot(122), plt.imshow(dst), plt.title('Output')
plt.show()

See the result:
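The answer above names cv.warpPerspective but does not show it. A minimal perspective-transformation sketch (the filename 'sudoku.png' and the point coordinates are only examples): four pairs of corresponding points, no three of which are collinear, define the 3x3 matrix.

import numpy as np
import cv2 as cv

img = cv.imread('sudoku.png')
# four source corners and the positions they should map to in the output
pts1 = np.float32([[56, 65], [368, 52], [28, 387], [389, 390]])
pts2 = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])
M = cv.getPerspectiveTransform(pts1, pts2)    # 3x3 perspective matrix
dst = cv.warpPerspective(img, M, (300, 300))
cv.imshow('perspective', dst)
cv.waitKey(0)
cv.destroyAllWindows()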

(6) Explain the concept and working principle of template matching in OpenCV, and write some code to show the results.

Answer:

Template matching searches for the location of a template image inside a larger input image: cv2.matchTemplate() slides the template over the input image (as in 2D convolution) and compares the template with each patch of the input, producing a map of comparison scores. If the input image is of size (WxH) and the template image is of size (wxh), the output image will have a size of (W-w+1, H-h+1). Once you have the result, you can use the cv2.minMaxLoc() function to find where the maximum/minimum value is. Take it as the top-left corner of a rectangle and take (w, h) as the width and height of the rectangle. That rectangle is your region of the template.

Code:

import cv2
import numpy as np
from matplotlib import pyplot as plt

img = cv2.imread('Saad.jpg', 0)
img2 = img.copy()
template = cv2.imread('brain.png', 0)
w, h = template.shape[::-1]

# All the 6 methods for comparison in a list
methods = ['cv2.TM_CCOEFF', 'cv2.TM_CCOEFF_NORMED', 'cv2.TM_CCORR',
           'cv2.TM_CCORR_NORMED', 'cv2.TM_SQDIFF', 'cv2.TM_SQDIFF_NORMED']

for meth in methods:
    img = img2.copy()
    method = eval(meth)

    # Apply template Matching
    res = cv2.matchTemplate(img, template, method)
    min_val, max_val, min_loc, max_loc = cv2.minMaxLoc(res)

    # If the method is TM_SQDIFF or TM_SQDIFF_NORMED, take the minimum
    if method in [cv2.TM_SQDIFF, cv2.TM_SQDIFF_NORMED]:
        top_left = min_loc
    else:
        top_left = max_loc
    bottom_right = (top_left[0] + w, top_left[1] + h)

    cv2.rectangle(img, top_left, bottom_right, 255, 2)

    plt.subplot(121), plt.imshow(res, cmap='gray')
    plt.title('Matching Result'), plt.xticks([]), plt.yticks([])
    plt.subplot(122), plt.imshow(img, cmap='gray')
    plt.title('Detected Point'), plt.xticks([]), plt.yticks([])
    plt.suptitle(meth)
    plt.show()
