(IJCSIS) International Journal of Computer Science and Information Security, Vol. 9, No. 11, November 2011, ISSN 1947-5500, http://sites.google.com/site/ijcsis/
Studying the Performance of Transmitting Video Streaming over Computer Networks in Real Time
Hassan H. Soliman
Department of Electronics and Communication Engineering, Faculty of Engineering, Mansoura University, EGYPT
Hazem M. El-Bakry
Department of Information Systems, Faculty of Computer Science & Information Systems, Mansoura University, EGYPT
helbakry20@yahoo.com
Mona Reda
Senior Multimedia Designer, E-learning Unit, Mansoura University, EGYPT
Abstract—The growth of Internet applications has spread widely across many different fields. Such growth has motivated video communication over best-effort packet networks. Multimedia communications have emerged as a major research and development area. In particular, computers in multimedia open a wide range of possibilities by combining different types of digital media such as text, graphics, audio, and video. This paper concentrates on the transmission of video streaming over computer networks. The study is performed on two different codecs, H.264 and MPEG-2. Video streaming files are transmitted using two different protocols, HTTP and UDP. After the real-time implementation, the performance of the transmission parameters over the computer network is measured. Practical results show that the jitter time of MPEG-2 is less than that of H.264 over UDP, so MPEG-2 performs better than H.264 over the UDP protocol. In contrast, the jitter time of H.264 is less than that of MPEG-2 over HTTP, so H.264 performs better than MPEG-2 over the HTTP protocol. This is from the network performance view; from the video quality view, however, MPEG-2 meets the QoS guidelines for video streaming.
Keywords- Multimedia communication, video streaming, network performance
I. INTRODUCTION
Multimedia is one of the most important aspects of the information era. It can be defined as a computer-based interactive communications process that incorporates text, graphics, animation, video, and audio. Due to the rapid growth of multimedia communication, multimedia standards have received much attention during the last decade. Multimedia communications have emerged as a major research and development area. In particular, computers in multimedia open a wide range of possibilities by combining different types of digital media such as text, graphics, audio, and video. The growth of the Internet in the mid-1990s motivated video communication over best-effort packet networks. Multimedia provides an environment in which the user can interact with the program. There are two playout methods that cover the audio/video (A/V) streaming requirements.
1. Streaming from File: Audio and video are encoded and stored in a file. The file is then scheduled for later broadcast and uploaded to the operator of the distribution network. At the scheduled broadcast time, playout begins from the media file stored at the broadcaster's location. This scheduling method is particularly useful when a media event has been prerecorded some time before the broadcast is scheduled.
2. Live Event Streaming: This is, as the name says, a vehicle for broadcasting streams covering live events. The broadcast is scheduled exactly as in the file propagation method. A video camera at the location of the event captures the event, and an encoder converts the video stream into an MPEG stream. At the time of the broadcast, this stream is accepted on a TCP/IP port at the broadcaster's location (assuming that the system is IP based). The stream is then wrapped into subscription packages and replicated onto the broadcast stream. The advantage of this method is that the content is not stored anywhere and is broadcast directly [1].

The motivation of this paper is to send video streaming over the network and to find the most suitable protocol and the best codec for transmission. The paper is organized as follows: the related work section gives a short description of the codec types; the video streaming implementation section describes the platform, the measurements used in this implementation, and the resulting figures; the experimental results section summarizes the results and chooses the best codec for each transmission protocol; finally, the conclusion of this paper is given.
 
II. RELATED WORK
Noriaki Kamiyama [2] proposed streaming high-definition video over the Internet. However, the transmission bit rate is quite large, so the generated traffic flows will cause link congestion. Therefore, when providing streaming services of rich content such as videos with HDTV or UHDV quality, it is important to reduce the maximum link utilization. Tarek R. Sheltami [3] presented a simulation to analyze the performance of wireless networks under video traffic by minimizing power and meeting other QoS requirements such as delay jitter. Yung-Sung Huang [4] proposed video streaming from both video servers in hospitals and webcams localized to patients; all important medical data are transmitted over a 3G wireless communication system to various client devices, and a congestion control scheme for the streaming process is proposed to reduce packet losses.

This paper concentrates on the transmission of video streaming over computer networks. The study is performed on two different codecs, H.264 and MPEG-2. Video streaming files are transmitted using two different protocols, HTTP and UDP. After the real-time implementation, the performance of the transmission parameters over the computer network is measured. Practical results show that the jitter time of MPEG-2 is less than that of H.264 over UDP, so MPEG-2 performs better than H.264 over the UDP protocol. In contrast, the jitter time of H.264 is less than that of MPEG-2 over HTTP, so H.264 performs better than MPEG-2 over the HTTP protocol. This is from the network performance view; from the video quality view, however, MPEG-2 meets the QoS guidelines for video streaming.
 
III. RELATED VIDEO FORMAT
Two standards bodies are responsible for video coding standards: the International Standards Organization (ISO) and the International Telecommunications Union (ITU). They have developed a series of standards that have shaped the development of the visual communications industry. The ISO JPEG, MPEG-1, MPEG-2, and MPEG-4 standards have perhaps had the biggest impact: JPEG has become one of the most widely used formats for still image storage, and MPEG-2 forms the heart of digital television and DVD-video systems [5]. The ITU's H.261 standard was originally developed for video conferencing over the ISDN, but H.261 and H.263 are now widely used for real-time video communications over a range of networks, including the Internet. H.264 of the ITU-T, known as an International Standard for video coding, is the latest in the sequence of video coding standards that began with H.261 [6]. Each of the international standards takes a similar approach to meeting these goals. A video coding standard describes a syntax for representing compressed video data and the procedure for decoding this data, as well as (possibly) a 'reference' decoder and methods of proving conformance with the standard [1].
MPEG-1

The first standard produced by the Moving Picture Experts Group, popularly known as MPEG-1, was designed to provide video and audio compression for storage and playback on CD-ROMs. MPEG-1 aims to compress video and audio to a bit rate of 1.4 Mbps with a quality comparable to VHS (Video Home System) video tape. MPEG-1 is important for two reasons:

1. It gained widespread use in other video storage and transmission applications (including CD storage as part of interactive applications and video playback over the Internet).
2. Its functionality is used and extended in the popular MPEG-2 standard.

The MPEG-1 standard consists of three parts. Part 1 deals with system issues (including the multiplexing of coded video and audio). Part 2 deals with compressed video; it was developed with the aim of supporting efficient coding of video for CD playback applications and achieving video quality comparable to, or better than, VHS videotape at CD bit rates (around 1.2 Mbps for video). Part 3 deals with compressed audio.
MPEG-2

The next important entertainment application for coded video (after CD-ROM storage) was digital television, which has to efficiently support larger frame sizes (typically 720 x 576 or 720 x 480 pixels for ITU-R 601 resolution) and coding of interlaced video [5]. The MPEG-2 standard was designed to provide the capability for compressing, coding, and transmitting high-quality, multichannel multimedia signals over terrestrial broadcast, satellite distribution, and broadband networks [7]. MPEG-2 consists of three main sections: video, audio (based on MPEG-1 audio coding), and systems (defining, in more detail than MPEG-1 systems, the multiplexing and transmission of the coded audio/visual stream). MPEG-2 video is a superset of MPEG-1 video; most MPEG-1 video sequences should be decodable by an MPEG-2 decoder. The four main enhancements added by the MPEG-2 standard are as follows: efficient coding of television-quality video, support for coding of interlaced video, scalability, and profiles and levels.

Efficient coding of television-quality video: The most important application of MPEG-2 is broadcast digital television. The 'core' functions of MPEG-2 are optimized for efficient coding of television resolutions at a bit rate of around 3-5 Mbps.

Support for coding of interlaced video: MPEG-2 video has several features that support flexible coding of interlaced video. The two fields that make up a complete interlaced frame can be encoded as separate pictures (field pictures), each of which is coded as an I-, P-, or B-picture. P- and B-field pictures may be predicted from a field in another frame or from the other field in the current frame. Alternatively, the two fields may be handled as a single picture (a frame picture), with the luminance samples in each macroblock of a frame picture arranged in one of the two ways shown in Figure 1. Frame DCT coding is similar to the MPEG-1 structure, where each of the four luminance blocks contains alternate lines from both fields. With field DCT coding, the top two luminance blocks contain only
samples from the top field, and the bottom two luminance blocks contain samples from the bottom field. In a field picture, the upper and lower 16 x 8 sample regions of a macroblock may be motion-compensated independently; hence each of the two regions has its own vector (or two vectors in the case of a B-picture). This 16 x 8 motion compensation mode can improve performance because a field picture has half the vertical resolution of a frame picture, so there are more likely to be significant differences in motion between the top and bottom halves of each macroblock.

Scalability: A scalable coded bit stream consists of a number of layers: a base layer and one or more enhancement layers. The base layer can be decoded to provide a recognizable video sequence of limited visual quality, and a higher-quality sequence may be produced by decoding the base layer plus enhancement layer(s), with each extra enhancement layer improving the quality of the decoded sequence. MPEG-2 video supports four scalable modes: spatial scalability, temporal scalability, SNR scalability, and data partitioning [5].
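Conceptually, a scalable decoder sums the base-layer reconstruction with the refinement residuals carried by the enhancement layers. The toy NumPy sketch below illustrates only this layering idea (closest in spirit to SNR scalability); it does not follow the actual MPEG-2 bitstream syntax, and the array values stand in for decoded pixel data:

```python
import numpy as np

def decode_scalable(base_layer, enhancement_layers):
    """Reconstruct a frame from a base layer plus zero or more
    enhancement layers. Each enhancement layer adds a refinement
    residual, so quality improves with every layer decoded.
    Conceptual sketch only, not MPEG-2 syntax.
    """
    frame = base_layer.astype(np.float64)
    for residual in enhancement_layers:
        frame = frame + residual
    return frame
```

Decoding only the base layer gives a coarse but recognizable picture; adding the enhancement residual recovers the full-quality frame.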
Profiles and levels: With MPEG-2, profiles specify the syntax (i.e., algorithms) and levels specify various parameters (resolution, frame rate, bit rate, etc.).
Figure 1. Illustration of the two coding structures.
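The two macroblock arrangements of Figure 1 can be sketched in NumPy. The hypothetical helper below splits a 16x16 luminance region into four 8x8 blocks either directly (frame DCT, alternate lines from both fields in every block) or after separating the interlaced fields (field DCT, top-field lines in the top two blocks, bottom-field lines in the bottom two):

```python
import numpy as np

def luma_blocks(mb, field_dct=False):
    """Split a 16x16 luminance macroblock into four 8x8 blocks.

    Frame DCT: each block holds alternate lines from both fields.
    Field DCT: the even (top-field) lines are gathered into the
    top half and the odd (bottom-field) lines into the bottom
    half before splitting, so each block holds one field only.
    """
    assert mb.shape == (16, 16)
    if field_dct:
        top = mb[0::2, :]      # 8 lines of the top field
        bottom = mb[1::2, :]   # 8 lines of the bottom field
        rows = np.vstack([top, bottom])
    else:
        rows = mb
    # Blocks 0..3: top-left, top-right, bottom-left, bottom-right
    return [rows[r:r + 8, c:c + 8] for r in (0, 8) for c in (0, 8)]
```

An 8x8 DCT would then be applied to each returned block; the field arrangement decorrelates moving interlaced content better because each block contains lines captured at the same instant.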
Levels: MPEG-2 supports four levels, which specify resolution, frame rate, coded bit rate, and so on for a given profile.

1. Low Level (LL): MPEG-1 Constrained Parameters Bitstream (CPB); supports up to 352 x 288 at up to 30 frames per second. Maximum bit rate is 4 Mbps.
2. Main Level (ML): MPEG-2 Constrained Parameters Bitstream (CPB); supports up to 720 x 576 at up to 30 frames per second and is intended for SDTV applications. Maximum bit rate is 15-20 Mbps.
3. High 1440 Level: Supports up to 1440 x 1088 at up to 60 frames per second and is intended for HDTV applications. Maximum bit rate is 60-80 Mbps.
4. High Level (HL): Supports up to 1920 x 1088 at up to 60 frames per second and is intended for HDTV applications. Maximum bit rate is 80-100 Mbps.

Profiles: MPEG-2 supports six profiles, which specify which coding syntax (algorithms) is used.

1. Simple Profile (SP): Main Profile without the B-frames; intended for software applications and perhaps digital cable TV.
2. Main Profile (MP): Supported by most MPEG-2 decoder chips; it should satisfy 90% of the SDTV applications. Typical resolutions are shown in Table I [6].
3. Multiview Profile (MVP): By using existing MPEG-2 tools, it is possible to encode video from two cameras shooting the same scene with a small angle between them.
4. 4:2:2 Profile (422P): Previously known as the 'studio profile', this profile uses 4:2:2 YCbCr instead of 4:2:0 and, with Main Level, increases the maximum bit rate to 50 Mbps (300 Mbps with High Level). It was added to support pro-video SDTV and HDTV requirements.
5. SNR and Spatial Profiles: Add support for SNR scalability and/or spatial scalability.
6. High Profile (HP): Supported by MPEG-2 decoder chips targeted at HDTV applications. Typical resolutions are shown in Table I [7].
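The level limits listed above can be collected into a small lookup table for checking whether a stream's parameters fit a given level. The bounds below are the figures quoted in this text (the normative limits live in the MPEG-2 standard itself), and `smallest_level` is a hypothetical helper, not part of any codec API:

```python
# Upper bounds per MPEG-2 level, as quoted in the text above.
# Where the text gives a range (e.g. 15-20 Mbps), the lower
# figure is used as the conservative bound.
LEVELS = {
    "Low":       {"width": 352,  "height": 288,  "fps": 30, "mbps": 4},
    "Main":      {"width": 720,  "height": 576,  "fps": 30, "mbps": 15},
    "High-1440": {"width": 1440, "height": 1088, "fps": 60, "mbps": 60},
    "High":      {"width": 1920, "height": 1088, "fps": 60, "mbps": 80},
}

def smallest_level(width, height, fps, mbps):
    """Return the lowest MPEG-2 level whose bounds cover the given
    stream parameters, or None if no level fits. Relies on LEVELS
    being ordered from lowest to highest level."""
    for name, lim in LEVELS.items():
        if (width <= lim["width"] and height <= lim["height"]
                and fps <= lim["fps"] and mbps <= lim["mbps"]):
            return name
    return None
```

For example, a 720 x 576 stream at 25 fps and 8 Mbps lands in Main Level, the SDTV case used by most broadcast MPEG-2 deployments.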
H.261

ITU-T H.261 was the first video compression and decompression standard developed for video conferencing. The video encoder provides a self-contained digital video bitstream which is multiplexed with other signals, such as control and audio; the video decoder performs the reverse process. H.261 video data uses the 4:2:0 YCbCr format shown previously, with the primary specifications listed in Table II. The maximum picture rate may be restricted by having 0, 1, 2, or 3 non-transmitted pictures between transmitted ones. Two picture (or frame) types are supported: the Intra or I-frame, which has no reference frame for prediction, and the Inter or P-frame, which is based on a previous frame [5].
H.264

The video coding standard Recommendation H.264 of the ITU-T is also known as International Standard 14496-10, or MPEG-4 Part 10 Advanced Video Coding (AVC), of the ISO/IEC. H.264/AVC was finalized in March 2003 and approved by the ITU-T in May 2003 [3]. H.264/AVC offers a significant improvement in coding efficiency compared to other compression standards such as MPEG-2. The functional blocks of the H.264/AVC encoder and decoder are shown in Fig. 2 and Fig. 3, respectively. In Fig. 2, the sender might choose to preprocess the video using format conversion or enhancement techniques; the encoder then encodes the video and represents it as a bit stream.
[Figure 1 artwork: a 16x16 region of the luminance component, split into blocks 0-3 under (a) frame DCT and (b) field DCT.]