(19) United States
(12) Patent Application Publication — Kaheel et al.
(10) Pub. No.: US 2010/0214419 A1
(43) Pub. Date: Aug. 26, 2010

(54) VIDEO SHARING

(75) Inventors: Ayman Malek Kaheel, Cairo (EG); Motaz Ahmed El-Saban, Cairo (EG); Mohamed Shawky Abdallah, Cairo (EG); Mahmoud Ahmed Refaat Ali, Cairo (EG)

Correspondence Address: LEE & HAYES, PLLC, 601 W. RIVERSIDE AVENUE, SUITE 1400, SPOKANE, WA 99201 (US)

(73) Assignee: Microsoft Corporation, Redmond, WA (US)

(21) Appl. No.: 12/390,636

(22) Filed: Feb. 23, 2009

Publication Classification

(51) Int. Cl.: H04N 5/225 (2006.01); H04N 5/262 (2006.01); G06F 15/16 (2006.01)

(52) U.S. Cl.: 348/207.1; 348/240.99; 709/231; 348/E05.024; 348/E05.055

(57) ABSTRACT

Video sharing is described. In an embodiment, mobile video capture devices such as mobile telephones capture video streams of the same event. A video sharing system obtains contextual information about the video streams and uses that to form a video output from the streams, the output being for sharing by other entities. For example, the formed video provides an enhanced viewing experience as compared with an individual one of the input video streams. In embodiments the contextual information may be obtained from content analysis of the video streams, from stored context information and from control information such as device characteristics. In some embodiments the video streams of a live event are received and the output video formed in real time. In examples, feedback is provided to video capture devices to suggest that the zoom, viewing position or other characteristics are adjusted, or to achieve this automatically.

[Drawing sheets 1-8 omitted; recoverable labels: FIG. 1 — video sharing system in a communications network 102. FIG. 2 — flow diagram: receive video streams 200; identify video streams of same event 201; obtain contextual information for each video stream 202 (video content analysis 203, control channel information 204); optionally send feedback to video capture devices 205; form video stream output for sharing 206. FIG. 3 — capture, encoding and RTP transmission pipeline at the device; content analysis, streaming manager and transmitter at the video sharing system; media player at the receiver. FIG. 4 — provisioning tier, logic tier, casting tier. FIG. 5 — content analysis engine with viewing angle assessor, quality assessor, overlap assessor, and object detection and tracking engine; output video enhancement with video stitching, web-link insertion, viewing angle selection and quality selection engines. FIG. 6 — video stitching of overlapping fields of view. FIG. 7 — capture video 700; identify other video capture devices 701; exchange channel identity information 702; negotiate with identified devices 703; exchange information 704. FIG. 8 — computing-based device 800 with processor 801, memory 802, application software 803, operating system 804, display interface 805, inputs 806, communication interface 807.]
VIDEO SHARING

BACKGROUND

[0001] Video sharing web services are known which enable end users to upload videos captured using their mobile telephones to a web site. The videos may then be viewed by others who access the web site. An end user is able to specify whether his or her video is to be publicly available to all visitors to the web site or whether it is to be shared only by a specified group of individuals.

[0002] Such video sharing web services are used for many purposes such as sharing videos of family events between family members who live in different countries. Other examples include sharing videos of educational lectures or entertainment performances. Typically video is captured on a mobile phone and at a later time is uploaded to the web service. Others are then able to download the video from the web service.

[0003] Typically the videos recorded by mobile phones have low resolution and poor quality for a variety of reasons. For example, the mobile phone may shake unintentionally during video capture as the end user is typically not expert at video capture. Also, the position of the video capture device with respect to the scene being recorded, the lighting conditions, and other environmental conditions may be badly selected. In addition, the communications link between the mobile capture device and the web site may be poor during the upload process and this can further reduce the quality of the video.

[0004] In addition, when several users have captured video of the same event the resulting videos at the web site are difficult to select between. It is then a time consuming and complex process for individuals to view each of those videos before being able to decide which provides the best result. If an individual is to download each of those videos from the web service before selecting the most appropriate one for his or her purpose, download link capacity is used.
Also, storage capacity is taken up at the web service and bandwidth is used to upload all the videos to the web service.

[0005] The embodiments described herein are not limited to implementations which solve any or all of the disadvantages of known video sharing systems.

SUMMARY

[0006] The following presents a simplified summary of the disclosure in order to provide a basic understanding to the reader. This summary is not an extensive overview of the disclosure and it does not identify key/critical elements of the invention or delineate the scope of the invention. Its sole purpose is to present some concepts disclosed herein in a simplified form as a prelude to the more detailed description that is presented later.

[0007] Video sharing is described. In an embodiment, mobile video capture devices such as mobile telephones capture video streams of the same event. A video sharing system obtains contextual information about the video streams and uses that to form a video output from the streams, the output being for sharing by other entities. For example, the formed video provides an enhanced viewing experience as compared with an individual one of the input video streams. In embodiments the contextual information may be obtained from content analysis of the video streams, from stored context information and from control information such as device characteristics. In some embodiments the video streams of a live event are received and the output video formed in real time. In examples, feedback is provided to video capture devices to suggest that the zoom, viewing position or other characteristics are adjusted, or to achieve this automatically.

[0008] Many of the attendant features will be more readily appreciated as the same becomes better understood by reference to the following detailed description considered in connection with the accompanying drawings.

DESCRIPTION OF THE DRAWINGS

[0009] The present description will be better understood from the following detailed description read in light of the accompanying drawings, wherein:

[0010] FIG.
1 is a schematic diagram of a video sharing system used in a communications network;

[0011] FIG. 2 is a flow diagram of a method at a video sharing system;

[0012] FIG. 3 is a flow diagram of another method at a video sharing system;

[0013] FIG. 4 is a schematic diagram of a video sharing system;

[0014] FIG. 5 is a schematic diagram of the content analysis component of FIG. 4 in more detail;

[0015] FIG. 6 is a schematic diagram of use of a video sharing system with video stitching;

[0016] FIG. 7 is a flow diagram of a method at a video capture device;

[0017] FIG. 8 illustrates an exemplary computing-based device in which embodiments of a video sharing system may be implemented.

[0018] Like reference numerals are used to designate like parts in the accompanying drawings.

DETAILED DESCRIPTION

[0019] The detailed description provided below in connection with the appended drawings is intended as a description of the present examples and is not intended to represent the only forms in which the present example may be constructed or utilized. The description sets forth the functions of the example and the sequence of steps for constructing and operating the example. However, the same or equivalent functions and sequences may be accomplished by different examples.

[0020] FIG. 1 is a schematic diagram of a video sharing system 101 used in a communications network 100. The communications network is of any suitable type such as the Internet, a local area network, a wide area network, a wireless communications network, or any network suitable for communicating streaming video between entities. Streaming users 102 comprise video capture devices which may or may not be mobile. For example, these video capture devices may be mobile telephones, PDAs, laptop computers or any other suitable type of video capture device which is portable or static and which is able to capture and send streaming video to the communications network 100 either directly or via other entities.
In the example illustrated the video capture devices 102 are all shown as mobile telephones capturing video of the same scene 103, which may be a wedding for example. Many more video capture devices may be present and these may be capturing video of other scenes or events.

[0021] The video sharing system comprises infrastructure illustrated for clarity in FIG. 1 as a single communications network node 101. However, it is not essential to use a single network node 101 as the functionality may be distributed over a plurality of entities in the communications network 100. The communications network node has functionality to receive streaming video and control information from the video capture devices 102 and to enable other entities 104, 105, 106 in the communications network to share the streamed video in an enhanced manner. Contextual information is used by the video sharing system to add value to the received video streams in a variety of possible ways. In an embodiment, other family members who are not physically present at the wedding 103 may view the video provided by the video sharing system 101 at a large display screen 106 in communication with the video sharing system. Other types of entity may share the video, such as personal computers 104, mobile telephones 105, or other entities with capability to receive and display video from the video sharing system 101.

[0022] FIG. 2 is a flow diagram of a method at a video sharing system (such as 101 of FIG. 1). A plurality of video streams are received 200 from video capture devices and the video sharing system identifies 201 those video streams which are of the same event. This identification is achieved in any suitable manner.
For example, communication between each video capture device and the video sharing system 101 may be achieved using a pair of communications channels, one for communicating the video stream content and one for communicating control information. In some embodiments the control information channel is used to send an identifier of the event to the video sharing system, which is used to select video streams of the same event. For example, a web service is provided by the video sharing system 101 which enables users to specify a code or channel identity for a particular event. All streaming users who capture video streams of the particular event have access to the web service and are able to select the appropriate channel identity. In this way the video sharing system is able to receive huge numbers of video streams and to quickly and simply identify those streams which are records of the same event. In other embodiments, the content of the video stream itself is analyzed in order to identify those streams which are records of the same event. Also, channel identity information may be sent together with the video content in some cases.

[0023] In some embodiments an audio stream is received together with the video stream where the video capture device also has audio capture capability. However, this is not essential.

[0024] The video sharing system 101 obtains 202 contextual information for each video stream of the same event. For example, this contextual information may comprise information received 204 from a control channel between the video capture device and the video sharing system. Alternatively or as well, the contextual information may be obtained from analysis 203 of the video stream content (and/or from any associated audio stream content).
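The kind of per-stream context record and channel-based grouping described above can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; every field name and the record structure are assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class StreamContext:
    """Hypothetical per-stream context record combining control-channel
    fields with slots that content analysis would later fill in."""
    stream_id: str
    channel_id: str                          # event identifier from the control channel
    gps: Optional[Tuple[float, float]] = None    # physical location, if reported
    bandwidth_kbps: Optional[int] = None         # link condition from the control channel
    quality_score: Optional[float] = None        # filled in by content analysis
    shakiness: Optional[float] = None            # filled in by content analysis

def group_by_event(contexts):
    """Group stream contexts by their channel (event) identity, so streams
    of the same event can be processed together."""
    events = {}
    for ctx in contexts:
        events.setdefault(ctx.channel_id, []).append(ctx)
    return events

# Example: three incoming streams, two distinct events.
streams = [
    StreamContext("s1", "wedding-42"),
    StreamContext("s2", "wedding-42"),
    StreamContext("s3", "lecture-7"),
]
grouped = group_by_event(streams)
```

The grouping step is the simple case where an explicit channel identity arrives over the control channel; the content-analysis route to the same grouping would be considerably more involved.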
It is also possible for the contextual information to be obtained from historical records of past video streaming behavior and/or from video capture device records accessible to the video sharing system.

[0025] A non-exhaustive list of examples of contextual information is: video quality; steadiness/shakiness of video; audio quality; physical location information obtained over the control channel such as GPS information or cellular network location information; viewing angle estimates; object recognition results; video stream overlap estimates; communication link conditions such as available bandwidth, packet loss, delay and jitter; also the current state of the video capture device battery, which may indicate the amount of processing that can be done on that device; and capabilities of the video capture device such as maximum possible resolution, minimum lux for illumination, and zooming capacity (optical and digital).

[0026] Using the contextual information together with rules, thresholds or other criteria, the video sharing system is optionally arranged to send 205 feedback to the video capture devices. This feedback may be user feedback that is presented to a user of the video capture device at a graphical user interface (or other interface) on that device. For example, the feedback may suggest that the user move the position of the video capture device in a particular manner to improve the view of the scene being captured. In another example, the feedback may suggest that the user change the zoom of the video capture device or alter the resolution or other capability of the video capture device. In another example the feedback may suggest that the user stabilize the camera or change motion of the camera.
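One way such rule- and threshold-based feedback could be generated is sketched below. The thresholds, dictionary keys and message strings are all invented for illustration; the patent does not specify them.

```python
def feedback_for(ctx):
    """Return user-feedback suggestions for one stream, derived from its
    contextual information via simple illustrative thresholds."""
    suggestions = []
    if ctx.get("shakiness", 0.0) > 0.5:              # arbitrary threshold
        suggestions.append("stabilize the camera")
    if ctx.get("resolution", (0, 0)) < (640, 480):   # lexicographic compare
        suggestions.append("increase capture resolution if possible")
    if ctx.get("battery", 1.0) < 0.1:
        suggestions.append("low battery: consider stopping capture")
    return suggestions

# A shaky, low-resolution stream triggers two suggestions.
demo = feedback_for({"shakiness": 0.8, "resolution": (320, 240), "battery": 0.5})
```

In a real system the suggestions would be sent back over the control channel and rendered at the device's user interface, or acted upon automatically as the following paragraph describes.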
In other examples the feedback may indicate the location and capability of other video capture devices capturing the same event and make suggestions as to whether the current video capture device should continue capture or not. It is also possible for the feedback to be transparent to the end user, in that it may simply be received by the video capture device and acted upon automatically by that device without user input. For example, the feedback may instruct the video capture device not to send redundant information such as a common view between the user and another user. This reduces video processing power and saves bandwidth.

[0027] The video sharing system is also arranged to form 206 a video output for sharing by other entities in the communications network. The process of forming the video output takes into account the contextual information in order that value is added and the video to be shared provides an enhanced viewing experience as compared with viewing only an individual one of the received video streams. The video output is formed from the received video streams of the same event, for example, so as to enable the view point to be changed, to stitch together video streams to increase the field of view, to add links to related documents or materials, to improve quality, to improve security, or for other video sharing enhancement reasons. The video output may be formed offline and stored at a location in the communications network accessible to the video sharing system. It is also possible for the video output to be formed dynamically as requested by entities requiring the shared video. In some embodiments the video streams are of a live event which is shared in a live process by other entities. In this case the processes at the video sharing system are carried out in real time.

[0028] FIG. 3 is a flow diagram of an example video sharing method. The parts of the flow diagram above dotted line 316 represent processes which occur at the video capture device.
The parts of the flow diagram below dotted line 316 and above dotted line 317 represent processes which occur at the video sharing system (such as 101 of FIG. 1). The parts of the flow diagram below dotted line 317 represent processes which occur at an entity which receives the shared video.

[0029] A camera driver 301 at the video capture device carries out a capture process 302 to capture a video stream. The video stream is encoded using a video encoder 303 of any suitable type as known in the art. In embodiments where the video capture device also comprises audio capture capability a microphone driver 306 is provided. This captures 307 an audio stream which is encoded 308 as known in the art. The video and audio streams are multiplexed 304 and transmitted over a real-time protocol (RTP) communications link, using an RTP transmitter 305, to the video sharing system (101 of FIG. 1). RTP is defined by the Internet Engineering Task Force in RFC 3550 and other RFCs. However, it is not essential to use an RTP communications link. Any suitable communications protocol may be used to send the content from the video capture device to the video sharing system. A separate communications session between the video capture device and the video sharing system may be established in order to communicate control information. For example, this separate communications session may be established using session initiation protocol (SIP) or any other suitable protocol. In the example illustrated in FIG. 3, RTCP (real-time transport control protocol) is used together with an RTCP transmitter 311 and RTCP receiver 309.

[0030] At the video sharing system the content received on the RTP session is demultiplexed and the video content is decoded using video decoder 312. The audio content may also be decoded using an audio decoder. The video content, and optionally the audio content, is processed by a content analysis engine 313 which is described in more detail below.
The content analysis engine 313 also comprises functionality to form the output video stream, and a streaming manager 314 converts the stream into the appropriate format for provisioning to the entities which require to share the video. The video output stream is sent to the destination entities using any suitable communications protocol such as MMS (Microsoft® media server), Windows® media HTTP (hypertext transfer) streaming protocol, or RTSP (real-time streaming protocol) and RTP. At the destination entity the video is displayed using any suitable display engine such as a media player 318.

[0031] FIG. 4 is a schematic diagram of a video sharing apparatus 400 which may be provided at the communications network node 101 of FIG. 1. The apparatus 400 comprises a casting director 401, a video casting engine 403, a content analysis engine 402, and a content provisioning engine 404. These engines may be integral and provided using a single communications network node, or may be distributed over a plurality of servers or other processors in a communications network.

[0032] The video sharing apparatus 400 receives input from a video streaming engine 405 provided in each video capture device 408, as well as from a moderator 406 also provided in each video capture device. The input is sent over a communications network 407. FIG. 4 also illustrates example video sharing devices which receive video shared by the apparatus 400. These include a television 409, a web page 410 viewed using a suitable web browser and equipment, and a mobile telephone 411.

[0033] As mentioned above, each video capture device 408 comprises a video streaming engine 405 and a moderator 406. Together these form a casting tier as indicated in FIG. 4. The video streaming engine 405 is arranged to stream video captured by the device upstream to the video sharing apparatus 400. The moderator 406 is arranged to send control information to the casting director 401 of the video sharing apparatus 400.
This control information may be sent periodically and/or on demand and may comprise information on the device capabilities and usage, such as camera resolution, CPU utilization, bandwidth availability and so on. The moderator 406 may also be arranged to execute commands received from the casting director 401.

[0034] More detail about the video sharing apparatus 400 is now given. The video casting engine 403 receives video streams from the video capture devices 408 and readies those video streams for content analysis. The video casting engine may also be arranged to forward control information to the casting director 401. For example, this control information may comprise communications network statistics such as packet loss and delay which are available to the video casting engine as a result of its receipt of the video streams.

[0035] The content analysis engine 402 is arranged to analyze the content received from the video capture devices, which may comprise video and/or audio content. This analysis may be carried out in real time or may be carried out offline. More detail about the content analysis engine 402 is given below with reference to FIG. 5.

[0036] The casting director 401 receives input from moderators 406 as well as receiving control information from the video casting engine 403 and receiving information from the content analysis engine 402. The casting director 401 is arranged to assign roles to video capture devices 408 on the basis of information available to it as well as pre-specified rules, thresholds or other criteria. It is arranged to send commands to moderators at the video capture devices 408 to implement the roles, and is also able to send user feedback to the video capture devices 408.
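A minimal sketch of this kind of rule-based role assignment follows. The role names echo the examples given in the next paragraph, but the capability fields and thresholds are assumptions made for illustration, not values from the patent.

```python
def assign_roles(devices):
    """Assign a streaming role to each device from its reported
    capabilities, using illustrative rules and thresholds."""
    roles = {}
    for dev in devices:
        if dev["battery"] < 0.05:
            # Nearly exhausted: command the moderator to stop streaming.
            roles[dev["id"]] = "stop_streaming"
        elif dev["bandwidth_kbps"] >= 500 and dev["has_microphone"]:
            roles[dev["id"]] = "stream_video_and_audio"
        elif dev["bandwidth_kbps"] >= 250:
            roles[dev["id"]] = "stream_video_only"
        else:
            roles[dev["id"]] = "stream_audio_only"
    return roles

devices = [
    {"id": "a", "battery": 0.9, "bandwidth_kbps": 800, "has_microphone": True},
    {"id": "b", "battery": 0.9, "bandwidth_kbps": 300, "has_microphone": True},
    {"id": "c", "battery": 0.02, "bandwidth_kbps": 800, "has_microphone": True},
]
roles = assign_roles(devices)
```

The resulting role for each device would be sent as a command to that device's moderator 406, which executes it.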
[0037] In an embodiment the casting director 401 is arranged to assign roles to the video capture devices 408 on the basis of information about capabilities of those video capture devices (which may be received from the moderators or may be available at the video sharing apparatus 400 as the result of a registration process) and using information about a target viewing experience (which may be pre-specified or provided by user input at a web service provided by the video sharing apparatus). In an example, the target viewing experience is to obtain a wider field of view of the event. In another example, the target viewing experience is to select an optimum viewing angle for an event. The roles that are assigned might be, for example: to stream video alone; to stream video and audio; to stream audio alone; or to stop streaming any content until a specified time.

[0038] In some embodiments the casting director 401 is provided on a server at which the video sharing system 400 is implemented. In other embodiments the casting director is wholly or partially distributed amongst the video capture devices 408. More detail of an embodiment where the casting director is partially distributed amongst the video capture devices is given below.

[0039] The video sharing apparatus 400 comprises a content provisioning engine 404 which is responsible for sending the video feeds that are cast from the video capture devices 408 to target entities. The provisioning engine 404 is arranged to transform the video feeds to formats appropriate to the type of destination entity. For example, the target entity may be a television 409, a computer displaying the video at a web browser 410, or a mobile telephone 411.

[0040] The content provisioning engine 404 may also be arranged to provide a web service for enabling end users to create a channel (or event identifier) that streaming users are able to cast to and which target entities are able to view.
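The channel-creation service just described could be modeled, very roughly, as a registry keyed by channel identity. The class, method names and stored fields below are all hypothetical; they only illustrate the create/cast/view distinctions the text draws.

```python
class ChannelRegistry:
    """Illustrative in-memory model of the channel web service."""

    def __init__(self):
        self._channels = {}

    def create(self, channel_id, owner, public=True,
               target_experience="widest_field_of_view"):
        """Create a channel for an event, recording its visibility and
        the target viewing experience associated with it."""
        if channel_id in self._channels:
            raise ValueError("channel already exists")
        self._channels[channel_id] = {
            "owner": owner,
            "public": public,
            "target_experience": target_experience,
            "casters": set(),
        }

    def join_as_caster(self, channel_id, user):
        """Register a streaming user who will cast to this channel."""
        self._channels[channel_id]["casters"].add(user)

    def casters(self, channel_id):
        return sorted(self._channels[channel_id]["casters"])

    def target_experience(self, channel_id):
        return self._channels[channel_id]["target_experience"]

registry = ChannelRegistry()
registry.create("wedding-42", owner="alice", public=False)
registry.join_as_caster("wedding-42", "bob")
registry.join_as_caster("wedding-42", "carol")
```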
The web service may also provide the ability for users to register and store user details, video capture device details, target device details, and the like. In addition, users may be provided with options to make channels for video sharing public or private, where private channels allow only a specified group of users to cast to and/or view. The web service also enables users to specify a target viewing experience associated with a particular channel. For example, this may be to obtain the widest field of view or it may be to select an optimal viewing angle.

[0041] FIG. 5 is a schematic diagram of a content analysis engine 402. This engine receives as input video streams 501 and control information 502 as described above with reference to FIG. 4. It provides output video 512 which is formed from the input video streams 501 together with contextual information comprising the control information 502 and the results of content analysis carried out by the content analysis engine itself.

[0042] The content analysis engine comprises various components or modules for making different types of analysis of the content. Examples of these are illustrated as modules 503-506 in FIG. 5, although these are merely examples. Different combinations and other types of such module may be used. An overlap assessor 503 may be provided which assesses the degree or amount of overlap between any pair of video streams. Results from this assessor 503 may be used by a video stitching engine 508 provided in the content analysis engine. Any suitable method of overlap assessment and video stitching may be used. For example, image alignment processes and stitching algorithms are described in detail in "Image Alignment and Stitching: A Tutorial" by Richard Szeliski, 2006, published in Foundations and Trends in Computer Graphics and Vision, Vol. 2, No. 1 (2006) 1-104, which is incorporated herein by reference in its entirety.
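Full image alignment is beyond a short sketch, but a crude proxy for the overlap assessor's interface — comparing coarse intensity histograms of two frames — can illustrate the idea. This histogram-intersection measure is an assumption chosen for brevity; a real overlap assessor would use feature matching and homography estimation as in the tutorial cited above.

```python
def histogram(frame, bins=8):
    """Coarse intensity histogram of a frame given as a flat list of
    0-255 pixel values (a stand-in for real image data)."""
    counts = [0] * bins
    for px in frame:
        counts[px * bins // 256] += 1
    total = float(len(frame))
    return [c / total for c in counts]

def overlap_score(frame_a, frame_b, bins=8):
    """Histogram intersection in [0, 1]; higher suggests more shared
    content between the two views. A crude illustrative proxy only."""
    ha, hb = histogram(frame_a, bins), histogram(frame_b, bins)
    return sum(min(a, b) for a, b in zip(ha, hb))

# Identical "frames" score 1.0; frames with disjoint intensities score 0.0.
same = [10, 20, 200, 210] * 25
score_same = overlap_score(same, same)
score_diff = overlap_score([10] * 100, [250] * 100)
```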
The overlap assessor 503 and video stitching engine 508 may be integral.

[0043] A viewing angle assessor 504 may be provided which assesses the relative viewing angles of video capture devices capturing the same event. Any suitable method of assessing the relative viewing angle may be used. A quality assessor 505 may be provided which assesses the relative quality of video streams from different video capture devices capturing the same event. Any suitable method of assessing video quality may be used. An object detection and tracking engine 506 may be provided which automatically extracts objects (or potentially just regions) from video sequences. The objects are sets of 2D image regions across multiple frames which correspond to real physical objects. Any suitable such object detection and tracking engine may be used, such as that described in "Unsupervised Segmentation of Color-Texture Regions in Images and Video," Y. Deng, B. S. Manjunath, IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001, which is incorporated herein by reference in its entirety. By using such an automated process to extract objects from a video, value can be added to the video in many ways, for example by adding the capability of searching the video content by objects and/or by adding hyperlinks to objects in the video.

[0044] An output video enhancement module takes the input video streams 501 together with the contextual information and any specified rules, thresholds or criteria relating to the target viewing experience. These are used to form the output video 512. Various engines are provided to form the output video and these may be integral with one another but are shown separately in FIG. 5 for clarity. For example, these engines comprise a video stitching engine 508, a viewing angle selection engine 509, a quality selection engine 510, and a web-link insertion engine.
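As one concrete reading of the quality selection engine, candidate streams might be ranked by a per-frame sharpness-like score. The mean-absolute-difference metric below is an assumption made for this sketch; the patent leaves the quality measure open.

```python
def sharpness(frame):
    """Mean absolute difference between neighbouring pixel values of a
    flat 0-255 pixel list -- a toy proxy for focus and detail."""
    if len(frame) < 2:
        return 0.0
    diffs = [abs(a - b) for a, b in zip(frame, frame[1:])]
    return sum(diffs) / len(diffs)

def select_best_stream(frames_by_stream):
    """Pick the stream id whose current frame scores highest, as a
    quality selection engine might when composing the output video."""
    return max(frames_by_stream, key=lambda sid: sharpness(frames_by_stream[sid]))

frames = {
    "phone-a": [100, 100, 101, 100] * 10,   # flat, low detail
    "phone-b": [0, 255, 0, 255] * 10,       # high local contrast
}
best = select_best_stream(frames)
```

A real engine would of course combine such content scores with the control-channel context (bandwidth, resolution, shakiness) rather than using a single frame metric.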
The web-link insertion engine may take results from the object detection and tracking engine and use those to insert hyperlinks into the video output 512. These may be hyperlinks to documents, files, other videos, or other resources which are related to the video content as determined by the results of the object detection and tracking engine. For example, these hyperlinks may enable navigation in a video database as described in "Video Hyper-links Creation for Content-Based Browsing and Navigation," Bouthemy et al., 1999, in Proc. Workshop on Content-Based Multimedia Indexing, CBMI'99, which is incorporated herein by reference in its entirety.

[0045] FIG. 6 is a schematic diagram illustrating use of the video stitching engine 508. Streaming users have video capture devices 601, 602 as described above and use these to capture video of the same scene but from different view points. The field of view of a first one of the video capture devices is illustrated at 603 in FIG. 6 and the field of view of a second one of the video capture devices is illustrated at 604 in FIG. 6. There is some overlap between the fields of view as illustrated. The video streams are sent upstream to a video sharing system 605 as described above and stitched together to form an output video with a wider field of view of the event than either of the single video streams achieves alone. The output video may be viewed at a PC 607 or other entity, and representation 608 indicates the resulting video display.

[0046] In some embodiments functionality of the casting director is at least partially distributed amongst the video capture devices. For example, each video casting device may be arranged to carry out a method as now described with reference to FIG. 7. Video of an event is optionally captured 700 and the device proceeds to identify 701 other video capture devices which are capturing or going to capture video of the same event.
For example, this identification process may comprise probing the video sharing system to request the information. In other embodiments, the identification is carried out using a local wireless discovery process probing for other wireless communications devices in the physical vicinity. Once other wireless communication devices are discovered, information may be exchanged 702 with these about the video sharing channel for the event. In that way the video capture device is able to identify other video capture devices which are capturing video of the same event. It is not essential for the event to be in only one physical location. For example, the event may be a shared event between different geographical locations, and in this case the video capture devices involved will still be using the same channel identity, which may be obtained from the video sharing system.

[0047] Once those other devices are identified, the device proceeds to automatically negotiate with the identified devices to determine how and/or whether to proceed with video capture. The negotiation process may involve exchange of information about resources at the devices, bandwidth availability, and the like, and may proceed according to suitable rules specified at the devices.

[0048] FIG. 8 illustrates various components of an exemplary computing-based device 800 which may be implemented as any form of a computing and/or electronic device, and in which embodiments of a video sharing system may be implemented.

[0049] The computing-based device 800 comprises one or more inputs 806 which are of any suitable type for receiving media content, Internet Protocol (IP) input, video streams, audio streams or other input. The device also comprises a communication interface 807 to enable it to communicate over a communications network with mobile video capture devices, mobile telephones, and other communications entities.
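The device-to-device negotiation of paragraph [0047] might, for example, suppress redundant capture by letting only the better-resourced device stream when two discovered devices report largely overlapping views. The field names, the overlap threshold and the tie-break rule below are all invented for illustration.

```python
def negotiate(local, peer, overlap):
    """Decide whether the local device should keep capturing, given a
    peer's advertised resources and an estimated view overlap in [0, 1].
    Illustrative rule: if the views largely overlap, only the device
    with more available bandwidth (battery as tie-break) streams."""
    if overlap < 0.7:                  # views differ enough: both stream
        return "stream"
    local_rank = (local["bandwidth_kbps"], local["battery"])
    peer_rank = (peer["bandwidth_kbps"], peer["battery"])
    return "stream" if local_rank >= peer_rank else "pause"

local = {"bandwidth_kbps": 300, "battery": 0.8}
peer = {"bandwidth_kbps": 900, "battery": 0.4}
decision = negotiate(local, peer, overlap=0.9)   # the peer is better placed
```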
[0050] Computing-based device 800 also comprises one or more processors 801 which may be microprocessors, controllers or any other suitable type of processors for processing computer executable instructions to control the operation of the device in order to provide a video sharing system. Platform software comprising an operating system 804 or any other suitable platform software may be provided at the computing-based device to enable application software 803 to be executed on the device.

[0051] The computer executable instructions may be provided using any computer-readable media, such as memory 802. The memory is of any suitable type such as random access memory (RAM), a disk storage device of any type such as a magnetic or optical storage device, a hard disk drive, or a CD, DVD or other disc drive. Flash memory, EPROM or EEPROM may also be used.

[0052] An output 805 is also provided such as an audio and/or video output to a display system integral with or in communication with the computing-based device. The display system may provide a graphical user interface, or other user interface of any suitable type, although this is not essential.

[0053] The term 'computer' is used herein to refer to any device with processing capability such that it can execute instructions.
Those skilled in the art will realize that such processing capabilities are incorporated into many different devices and therefore the term 'computer' includes PCs, servers, mobile telephones, personal digital assistants and many other devices.

[0054] The methods described herein may be performed by software in machine readable form on a tangible storage medium. The software can be suitable for execution on a parallel processor or a serial processor such that the method steps may be carried out in any suitable order, or substantially simultaneously.

[0055] This acknowledges that software can be a valuable, separately tradable commodity. It is intended to encompass software which runs on or controls 'dumb' or standard hardware to carry out the desired functions. It is also intended to encompass software which 'describes' or defines the configuration of hardware, such as HDL (hardware description language) software, as is used for designing silicon chips, or for configuring universal programmable chips, to carry out desired functions.

[0056] Those skilled in the art will realize that storage devices utilized to store program instructions can be distributed across a network. For example, a remote computer may store an example of the process described as software. A local or terminal computer may access the remote computer and download a part or all of the software to run the program. Alternatively, the local computer may download pieces of the software as needed, or execute some software instructions at the local terminal and some at the remote computer (or computer network). Those skilled in the art will also realize that, by utilizing conventional techniques known to those skilled in the art, all or a portion of the software instructions may be carried out by a dedicated circuit, such as a DSP,
programmable logic array, or the like.

[0057] Any range or device value given herein may be extended or altered without losing the effect sought, as will be apparent to the skilled person.

[0058] It will be understood that the benefits and advantages described above may relate to one embodiment or may relate to several embodiments. The embodiments are not limited to those that solve any or all of the stated problems or those that have any or all of the stated benefits and advantages. It will further be understood that reference to 'an' item refers to one or more of those items.

[0059] The steps of the methods described herein may be carried out in any suitable order, or simultaneously where appropriate. Additionally, individual blocks may be deleted from any of the methods without departing from the spirit and scope of the subject matter described herein. Aspects of any of the examples described above may be combined with aspects of any of the other examples described to form further examples without losing the effect sought.

[0060] The term 'comprising' is used herein to mean including the method blocks or elements identified, but such blocks or elements do not comprise an exclusive list and a method or apparatus may contain additional blocks or elements.

[0061] It will be understood that the above description of a preferred embodiment is given by way of example only and that various modifications may be made by those skilled in the art. The above specification, examples and data provide a complete description of the structure and use of exemplary embodiments of the invention.
Although various embodiments of the invention have been described above with a certain degree of particularity, or with reference to one or more individual embodiments, those skilled in the art could make numerous alterations to the disclosed embodiments without departing from the spirit or scope of this invention.

1. A computer-implemented method of video sharing at a node in a communications network comprising a plurality of video capture devices, the method comprising:
receiving a plurality of video streams of the same event, each video stream originating from a different one of the video capture devices;
obtaining contextual information about the video streams;
providing a video stream output for sharing by other entities in the communications network, the video stream output being formed from the received video streams on the basis of the contextual information.

2. A method as claimed in claim 1 wherein the step of obtaining the contextual information comprises analyzing the video content of the video streams.

3. A method as claimed in claim 1 wherein the video stream output is formed so as to provide an improved view of the event as compared with the view of the event from only one of the video streams.

4. A method as claimed in claim 1 wherein the video streams are of a live event and wherein the method is carried out in real time.

5. A method as claimed in claim 4 which further comprises sending control commands to the video capture devices on the basis of the contextual information.

6. A method as claimed in claim 1 which further comprises sending user feedback to the video capture devices on the basis of the contextual information.

7. A method as claimed in claim 2 wherein the process of analyzing the video content comprises determining parameters related to the viewing angle of each of the video capture devices.

8.
A method as claimed in claim 7 wherein the process of analyzing the video content also comprises assessing the quality of the video content.

9. A method as claimed in claim 7 which further comprises, if parameters related to the viewing angle of two of the video capture devices are similar, then sending a message to one of those devices to stop the associated video stream.

10. A method as claimed in claim 1 wherein the step of providing the video stream output comprises stitching at least part of two or more of the video streams together.

11. A method as claimed in claim 6 wherein the user feedback comprises recommended changes to a zoom feature of the video capture device.

12. A method as claimed in claim 6 wherein the user feedback comprises recommended changes to the position of the video capture device.

13. A method as claimed in claim 1 which further comprises receiving a plurality of audio streams, at least one audio stream associated with each video stream, and wherein the method comprises providing an audio stream output formed from the received audio streams.

14. A method as claimed in claim 1 which further comprises receiving a channel identity for each video stream and wherein each video stream of the same event has the same channel identity.

15. A computer-implemented method of video sharing at a node in a communications network comprising a plurality of mobile video capture devices, the method comprising:
receiving a plurality of video streams, each video stream originating from a different one of the video capture devices;
identifying at least two of the video streams as capturing the same event by using channel information received for each of the video streams;
analyzing video content of the identified video streams;
providing a video stream output for sharing by other entities in the communications network, the video stream output being formed from the identified video streams and taking into account the video content analysis, and wherein the video stream output is formed so as to provide an improved view of the event as compared with the view of the event from only one of the identified video streams.

16. A method as claimed in claim 15 wherein the identified video streams are of a live event and wherein the video stream output is provided in real time.

17. A method as claimed in claim 15 which further comprises sending user feedback to the mobile devices on the basis of the content analysis.

18. A method at a mobile video capture device in a communications network comprising a plurality of such mobile video capture devices, the method comprising:
capturing a video stream of an event;
identifying one or more other mobile video capture devices which are also capturing a video stream of the same event;
negotiating with the one or more other video capture devices to determine when to send the captured video stream to a communications node for sharing with other entities.

19. A method as claimed in claim 18 wherein the step of identifying the other mobile video capture devices comprises exchanging channel identity information with mobile video capture devices in the communications network.

20. A method as claimed in claim 18 wherein the step of negotiating with the one or more other video capture devices comprises exchanging information about resources available at the mobile video capture devices.
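The stream-selection behavior recited in claims 7 and 9 (determining viewing-angle parameters and stopping a stream whose angle is similar to one already kept) can be sketched as follows. The `(id, angle)` input pairs and the 15-degree threshold are illustrative assumptions, not values taken from the patent:

```python
def select_streams(streams, angle_threshold_deg=15.0):
    """Keep one stream per distinct viewing angle; a stream whose estimated
    viewing angle is within the threshold of an already-kept stream is asked
    to stop (cf. claim 9). `streams` is a list of (stream_id, angle) pairs."""
    kept, stopped = [], []
    for stream_id, angle in streams:
        if any(abs(angle - kept_angle) < angle_threshold_deg
               for _, kept_angle in kept):
            stopped.append(stream_id)  # send a "stop" message to this device
        else:
            kept.append((stream_id, angle))
    return kept, stopped

streams = [("dev1", 0.0), ("dev2", 5.0), ("dev3", 90.0)]
kept, stopped = select_streams(streams)
print(kept)     # [('dev1', 0.0), ('dev3', 90.0)]
print(stopped)  # ['dev2']
```

Stopping near-duplicate streams in this way frees device and network resources while, per claim 3, the remaining streams can still be combined into an improved view of the event.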
