United States Patent Application Publication          Pub. No.: US 2015/0346812 A1
Cole et al.                                           Pub. Date: Dec. 3, 2015

(54) METHODS AND APPARATUS FOR RECEIVING CONTENT AND/OR PLAYING BACK CONTENT

(71) Applicant: NextVR Inc., Laguna Beach, CA (US)

(72) Inventors: David Cole, Laguna Beach, CA (US); Alan McKay Moss, Laguna Beach, CA (US)

(21) Appl. No.: 14/726,440

(22) Filed: May 29, 2015

Related U.S. Application Data

(60) Provisional application No. 62/004,547, filed on May 29, 2014; provisional application No. 62/167,891, filed on May 28, 2015.

(57) ABSTRACT

Content delivery and playback methods and apparatus are described. The methods and apparatus are well suited for delivery and playback of content corresponding to a 360 degree environment and can be used to support streaming and/or real time delivery of 3D content corresponding to an event, e.g., while the event is ongoing or after the event is over. Portions of the environment are captured by cameras located at different positions. The content captured from different locations is encoded and made available for delivery. A playback device selects the content to be received based on a user's head position.

[Drawing Sheets 1-29 of 29. The scanned figures are not reliably recoverable; the legible captions and flowchart text are summarized below.]

FIG. 1 (Sheet 1): exemplary system including a 3D stereoscopic image capture and encoding apparatus, a content delivery system, e.g., a streaming server, and customer premises with playback devices coupled to displays, e.g., head mounted stereoscopic displays.

FIGS. 2A-2C (Sheet 2): partitioning of an exemplary 360 degree stereoscopic scene into scene portions, e.g., a front portion and rear portions.

FIG. 3 (Sheet 3): partitioning a 360 degree stereoscopic scene into N scene portions, e.g., a forward 180 degree portion, a 90 degree left rear portion and a 90 degree right rear portion, encoding the portions, and storing the encoded scene portions, with multiple possible bit rate streams for each portion.

FIG. 4 (Sheet 4): encoding an input scene portion, e.g., a 180 degree front portion, using multiple encoders to produce encoded versions at different quality levels, e.g., a high bit rate version, a reduced rate version and a high compression/reduced frame rate version.

FIG. 5 (Sheet 5): multiple stored encoded versions of three scene portions, e.g., a 180 degree front portion, a 90 degree left rear portion and a 90 degree right rear portion.

FIG. 6 (Sheet 6): flowchart of a method of providing stereoscopic content: receive a request for content from a playback device; determine the available data rate for delivery; set the current head position as the forward (0 degree) position; send N portions of the scene to the playback device to initialize its scene memory; then, on a periodic basis and/or on receiving head position information from the playback device, select scene portions and encoded versions of the selected portions based on the current head position and available data rate and send them to the playback device.

FIG. 7 (Sheet 7): block diagram of a device including received input images, a current head position determination module and interfaces; the remaining detail is illegible in the scan.

FIG. 8 (Sheet 8): exemplary computer system/content playback device including a request generation module, a head position and/or viewing angle determination module, a decoder, a 3D image rendering module, e.g., an image generation module, and a 360 degree decoded scene buffer, coupled to a display, e.g., a head mounted stereoscopic display.

FIGS. 9-11 (Sheets 9-11): drawings for which only reference numerals are legible in the scan.

FIG. 12 (Sheet 12): division of a 360 degree scene area into viewing zones.

FIG. 13 (Sheet 13): exemplary viewing zones within the 360 degree scene area.

FIGS. 14A-14B (Sheets 14-15, together FIG. 14): flowchart of a method of operating a playback system: receive information regarding a plurality of content streams and/or initialization data, e.g., as part of a program guide; detect the user's current head position and initialize the current viewing position by setting the detected head position to be the 0 degree viewing position, e.g., the forward/front portion of a scene area, e.g., a 360 degree scene area; receive at least one environmental map, e.g., a 3D depth map defining a surface, and one or more UV maps to be used for mapping image content onto portions of the 3D surface, e.g., first through fifth UV maps corresponding to first, second and third portions of the scene area, a top portion and a bottom portion; receive, decode and store content in one or more image buffers; initiate content delivery of streams in the selected set not already being received, e.g., by sending a request or joining a multicast group corresponding to a content stream; render content for display using the environmental map and UV maps; determine the currently available bandwidth and/or maximum supportable data rate and detect the user's current head position; and, when the selected streams differ from the current set, update the current selected set of streams and terminate/stop receiving streams not in the updated set.

FIG. 15 (Sheet 16): stream selection subroutine: select, based on the user head position, stream information and/or maximum supportable data rate, which of a plurality of content streams to receive; prioritize the content streams based on head position, e.g., by calling the stream prioritization subroutine; determine the maximum bandwidth and/or data rate to be used for the highest priority stream and for each lower priority stream based on bandwidth and/or data rate constraints; and select, in priority order, the highest data rate stream of each priority that can be supported.

FIG. 16 (Sheet 17): stream prioritization subroutine: identify streams communicating content corresponding to the portion of the scene area in the user's current field of view; determine the sizes of the portions of the field of view available from the identified streams; assign the highest priority to the stream providing the largest portion of the field of view, e.g., designating it the primary stream; assign next highest priorities to the remaining streams based on the size of the portion of the field of view each provides; then assign priorities to one or more additional streams communicating content outside the current field of view based on one of proximity of the image content to the current field of view or the direction of head rotation.

FIG. 17 (Sheet 18): rendering subroutine: render content for display using the decoded content, the environmental map and UV map(s), e.g., the UV map corresponding to the user's current field of view, and use the result to populate the portion of the display output buffer corresponding to the current field of view.

FIG. 18 (Sheet 19): exemplary stream information table for an event, e.g., soccer, listing for each stream: S, the stream; V, the camera viewing angle to which the stream corresponds; C, the codec type; D, the data rate; M, the multicast group identity and/or address; and F, the frame rate.

FIG. 19 (Sheet 20): exemplary playback system/content playback device including head position determination, current viewing position initialization, stream selection, stream initiation and termination, available bandwidth and/or supportable data rate determination, head position change determination, decoder and 3D image rendering modules, image buffers and buffer update modules, received stream information, and decoded scene portions corresponding to the current selected streams, coupled to a display, e.g., a head mounted stereoscopic display.

FIGS. 20A-20E (Sheets 21-25, together FIG. 20): flowchart of a content playback method: receive and store one or more images corresponding to a first rear view portion, a second rear view portion, a sky view portion and a ground view portion of the environment; determine a head position of a viewer and, based on the determined head position, a current field of view; receive a first content stream providing content corresponding to a first portion, e.g., a forward portion view, of the environment; determine, based on the current field of view, a set of view portions of the environment to be used in generating one or more output images; synthesize an image for any portion of the current field of view for which an image is not available; generate one or more output images corresponding to the current field of view based on at least one of received content from the first content stream, stored images corresponding to the rear, sky and ground view portions, or a synthesized image, in some cases combining content captured at different points in time and selecting stored content based on received image selection information; and output and/or display the generated output images, e.g., a first output image.

FIG. 21 (Sheet 26): exemplary computer system/content playback device with an assembly of modules, e.g., software modules, and stored data/information including received images corresponding to the first and second rear view portions, the sky view portion and the ground view portion, the determined current viewer head position and field of view, received first stream content, the determined set of view portions used in generating output images, synthesized images and generated output images; the assembly of modules may alternatively be implemented in hardware, e.g., as circuits.

FIG. 22 (Sheet 27): assembly of modules including viewer head position determination, current field of view determination, content stream selection, content stream receive, image receive, received image storage, control information receive, and output image generation modules, the output image generation module including view portion set determination, content stream only determination, missing portion determination, image synthesizer, content stream output image generation, and synthesized and stored image incorporation modules, together with output, display and control routine modules.

FIG. 23 (Sheet 28): stream selection module detail: a module configured to select, based on the user head position, stream information and/or maximum supportable data rate, which of a plurality of content streams to receive, including a stream prioritization module, maximum bandwidth and/or data rate determination modules for the highest and lower priority streams, modules configured to determine whether the first, second and third highest priority streams can be supported based on the maximum bandwidth and/or data rate and the available bandwidth and/or supportable data rate and to select the highest data rate stream of each priority that can be supported, and a module configured to select one or more lower priority streams that can be supported based on the determined bandwidth and any additional available bandwidth.

FIG. 24 (Sheet 29): stream prioritization module detail: current field of view identification and current field of view stream(s) identification modules; a module configured to determine the size of the portions of the scene area, corresponding to the user's current field of view, available from the identified streams; a priority assignment/allocation module configured to assign the highest priority to the stream providing the largest portion of the field of view, e.g., designating it the primary stream, and next highest priorities to remaining streams based on portion size; a module configured to determine whether there are remaining streams to be prioritized; a module configured to assign priorities to one or more additional streams communicating content outside the current field of view based on one of proximity of the image content to the current field of view or the direction of head rotation, including a head rotation determination module; and a module configured to assign lower priorities to any remaining streams.

METHODS AND APPARATUS FOR RECEIVING CONTENT AND/OR PLAYING BACK CONTENT

RELATED APPLICATIONS

[0001] The present application claims the benefit of U.S. Provisional Patent Application Ser. No. 62/004,547 filed May 29, 2014 and U.S. Provisional Patent Application Ser. No. 62/167,891 filed May 28, 2015, each of which is hereby expressly incorporated by reference in its entirety.
FIELD

[0002] The present invention relates to the field of adaptive streaming of content, e.g., stereoscopic image content, and more particularly to acquiring, encoding, streaming and decoding video in a manner that facilitates combining with a simulated environment for portions of the environment for which video is not available.

BACKGROUND

[0003] Display devices which are intended to provide an immersive experience normally allow a user to turn his head and experience a corresponding change in the scene which is displayed. Head mounted displays sometimes support 360 degree viewing in that a user can turn around while wearing a head mounted display with the scene being displayed changing as the user's head position changes.

[0004] With such devices a user should be presented with a scene that was captured in front of a camera position when looking forward and a scene that was captured behind the camera position when the user turns completely around. While a user may turn his head to the rear, at any given time a user's field of view is normally limited to 120 degrees or less due to the nature of a human's ability to perceive a limited field of view at any given time.

[0005] In order to support 360 degrees of view, a 360 degree scene may be captured using multiple cameras with the images being combined to generate the 360 degree scene which is to be made available for viewing.

[0006] It should be appreciated that a 360 degree view includes far more image data than the simple forward view which is normally captured and encoded for normal television and many other video applications where a user does not have the opportunity to change the viewing angle used to determine the image to be displayed at a particular point in time.

[0007] Given transmission constraints, e.g., network data constraints, associated with content being streamed, it may not be possible to stream the full 360 degree view in full high definition video to all customers seeking to receive and interact with the content. This is particularly the case where the content is stereoscopic content including image content intended to correspond to left and right eye views to allow for a 3D viewing effect.

[0008] In view of the above discussion it should be appreciated that there is a need for methods and apparatus for supporting streaming and/or playback of content in a manner which allows an individual user to alter his viewing position, e.g., by turning his or her head, and to see the desired portion of the environment. It would be desirable if the user could be provided the option of changing his/her head position, and thus viewing direction, while staying within data streaming constraints that may apply due to bandwidth or other delivery related constraints. While not necessary for all embodiments, it is desirable that at least some embodiments allow for multiple users at different locations to receive streams at the same time and view whatever distinct portions of the environment they desire, irrespective of what portion or portions are being viewed by other users.

SUMMARY

[0009] Methods and apparatus for supporting delivery, e.g., streaming, of video or other content corresponding to a 360 degree viewing area are described. The methods and apparatus of the present invention are particularly well suited for streaming of stereoscopic and/or other image content where data transmission constraints may make delivery of 360 degrees of content difficult to deliver at the maximum supported quality level, e.g., using best quality coding and the highest supported frame rate. However, the methods are not limited to stereoscopic content.

[0010] In various embodiments a 3D model of, and/or 3D dimensional information corresponding to, an environment from which video content will be obtained is generated and/or accessed. Camera positions in the environment are documented. Multiple distinct camera positions may be present within the environment.
For example, distinct end goal camera positions and one or more mid field camera positions may be supported and used to capture real time camera feeds.

[0011] The 3D model and/or other 3D information are stored in a server or the image capture device used to stream video to one or more users.

[0012] The 3D model is provided to a user playback device, e.g., a customer premise device, which has image rendering and synthesis capability. The customer premise device generates a 3D representation of the environment which is displayed to a user of the customer premise device, e.g., via a head mounted display.

[0013] In various embodiments, less than the full 360 degree environment is streamed to an individual customer premise device at any given time. The customer premise device indicates, based on user input, which camera feed is to be streamed. The user may select the court and/or camera position via an input device which is part of or attached to the customer premise device.

[0014] In some embodiments a 180 degree video stream is transmitted to the customer playback device, e.g., a live, real time, or near real time stream, from the server and/or video cameras responsible for streaming the content. The playback device monitors a user's head position and thus the playback device knows the viewing area the user of the playback device is viewing within the 3D environment being generated by the playback device. The customer premise device presents video, when available, for the portion of the 3D environment being viewed, with the video content replacing or being displayed as an alternative to the simulated 3D environment which would be presented in the absence of the video content. As a user of the playback device turns his or her head, portions of the environment presented to the user may be from the video content supplied, e.g., streamed, to the playback device, with other portions being synthetically generated from the 3D model and/or previously supplied image content which was captured at a different time than the video content.
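The head-position-driven behavior described above, in which the playback device shows streamed video for the viewed portion and falls back to the synthetic environment elsewhere, can be sketched as follows. This is an illustrative sketch only, not code from the application: the 180 degree front/90 degree rear boundaries and the portion names are assumptions chosen to match the partitioning summarized for FIG. 3.

```python
def scene_portion_for_yaw(yaw_degrees):
    """Map a head yaw angle to the scene portion containing it.

    Boundaries are illustrative: a 180 degree front portion centered on
    0 degrees plus 90 degree left-rear and right-rear portions.
    """
    yaw = ((yaw_degrees + 180.0) % 360.0) - 180.0  # normalize to (-180, 180]
    if yaw == -180.0:
        yaw = 180.0
    if -90.0 <= yaw <= 90.0:
        return "front"
    return "left_rear" if yaw > 90.0 else "right_rear"


def render_sources(yaw_degrees, streamed_portions):
    """Return the portion the viewer is facing and, for every portion,
    whether it would be rendered from streamed video (when a decoded
    feed is available) or synthesized from the 3D model."""
    sources = {
        portion: ("video" if portion in streamed_portions else "synthetic")
        for portion in ("front", "left_rear", "right_rear")
    }
    return scene_portion_for_yaw(yaw_degrees), sources
```

For example, with only a front 180 degree feed being streamed, a forward-facing viewer sees video while both rear portions are synthesized; turning past 90 degrees brings a synthetic portion into view.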
[0015] Thus, the playback device may display video, supplied via streaming while a game, music concert or other event is still ongoing, corresponding to, for example, a front 180 degree camera view, with rear and/or side portions of the 3D environment being generated either fully synthetically or from image content of the side or rear areas of the environment captured at different times.

[0016] While a user may choose between camera positions by signaling a change in position to the server providing the streaming content, the server providing the streaming content may also provide information useful to generating the synthetic environment for portions of the 3D environment which are not being streamed.

[0017] For example, in some embodiments multiple rear and side views are captured at different times, e.g., prior to streaming a portion of content or from an earlier point in time. The images are buffered in the playback device. The server providing the content can, and in some embodiments does, signal to the playback device which of a set of non-real time scenes or images is to be used for synthesis of environmental portions which are not being supplied in the video stream. For example, an image of concert participants sitting and another image of concert participants standing behind a camera position may be supplied to and stored in the playback device. The server may signal which set of stored image data should be used at a particular point in time. Thus, when a crowd is standing the server may signal that the image corresponding to a crowd standing should be used for the background 180 degree view during image synthesis, while when a crowd is sitting the server may indicate to the customer premise device that it should use an image, or image synthesis information, corresponding to a crowd which is sitting when synthesizing the side or rear portions of the 3D camera environment.
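The signaling pattern of paragraph [0017] amounts to the server selecting among background image sets already buffered at the playback device. A minimal sketch, in which the image file names and signal values are hypothetical rather than taken from the application:

```python
# The playback device buffers alternative non-real-time background
# images captured at earlier times (names here are invented for the
# example). The server signals which set to use during synthesis.
buffered_backgrounds = {
    "crowd_standing": "rear_180_crowd_standing.jpg",
    "crowd_sitting": "rear_180_crowd_sitting.jpg",
}


def background_for_synthesis(server_signal):
    """Return the stored image the server has signaled should be used
    when synthesizing the unstreamed rear portion of the environment."""
    return buffered_backgrounds[server_signal]
```

Because the images are transferred ahead of time, the per-frame signal can be a few bytes rather than a second video stream for the rear view.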
[0018] In at least some embodiments the orientation of the ‘cameras at each ofthe one oF more positions inthe 3D envi ronment is tricked during image capture. Markers andlor ‘entifying points inthe vironment may beused to failitate alignment andor other mapping ofthe eaprured images, ¢ live images, tothe previously modeled and/or mapped 3D ‘envionment tobe similsted by the eastomer premise device [0019] Blending ofsyniheticenvironmentportions and real (streamed video) provides for an immersive video experi- ‘ence, Evironnientscan and sometimes aremessuredor mod ‘eledusing 3 photometry to ereate the 3D information uselto simulate the environment when video isnot available, Where the environment was not previously modeled [0020] Use of fiducial markers in the real world space at deermined loeations assist with calibration and alignment of the video with the previously generated 3D model, [0021] Positional tacking of each camera is implemented ‘as video is captured. Camera position information relative to the venue, e., that maps X.Y, Zand yaw in degrees (so we know where each camera is pointed) This allows for easy detection of what portion of the eaviroament the captured image corresponds to and allows, when communicated to the playback device along with captured video for the playback ‘to automatically overlay our video capture With the synthetic ‘environment generated by the playback device during image presentation, eg, playback othe user The streamed eontent ‘canbe limited to les than a 360 dopree view e.g. captured 180 degree view ofthe area in font ofthe camera position. AS the viewer looks around, the viewer will see the sinauated background (nota black void) when tured to the rear and the Viewer will se the video when tamed othe front [0022] The synthetic environment can be, and in some ‘embosliment is interactive, In some embodiments, multiple factual viewers, eg. 
users of different customer premise devices, are included in the simulated environment so that users can watch the game with their friends in the virtual 3D environment, and it seems that the users are actually at the stadium.

[0023] The images of the users may be, and in some embodiments are, captured by cameras included with or attached to the customer premise devices, supplied to the server, and provided to the other users, e.g., members of a group, for use in generating the simulated environment. The user images need not be real time images but may be real time images.

[0024] The methods can be used to encode and provide content in real time or near real time but are not limited to such real time applications. Given the ability to support real time and near real time encoding and streaming to multiple users, the methods and apparatus described herein are well suited for streaming scenes of sporting events, concerts and/or other venues where individuals like to view an event and observe not only the stage or field but also be able to turn and appreciate views of the environment, e.g., the stadium or crowd. By supporting 360 degree viewing and 3D, the methods and apparatus of the present invention are well suited for use with head mounted displays intended to provide a user a 3D immersive experience with the freedom to turn and observe a scene from different viewing angles, as might be the case if the user was present in the environment and the user's head turned to the left or right.

[0025] Methods and apparatus for communicating image content, e.g., content corresponding to a 360 degree field of view, are described. In various embodiments the field of view corresponds to different portions of an environment, e.g., a front portion, at least one back portion, a top portion and a bottom portion. In some embodiments left and right rear, e.g., back, portions of the environment are generated and/or communicated separately. A playback device monitors the position of a user's head and generates images, e.g., stereoscopic images, corresponding to the portion of the environment the user is looking at a given time, which are then displayed to the user.
In the case of stereoscopic playback, separate left and right eye images are generated. The generated images can, and in some embodiments do, correspond to one or more scene, e.g., environment, portions.

[0026] At start up of playback, a user's forward looking head level position is set to correspond, as a default, to the forward scene portion. As a user turns his/her head and/or raises or lowers his or her head, other portions of the environment may come into the user's field of view.

[0027] Bandwidth and image decoding capabilities on many playback devices are limited by the processing capacity of the device and/or the bandwidth available for receiving image content. In some embodiments, the playback device determines which portion of the environment corresponds to the user's main field of view. The device then selects that portion to be received at a high rate, e.g., full resolution, with the stream being designated, from a priority perspective, as a primary stream. Content from one or more other streams providing content corresponding to other portions of the environment may be received as well, but normally at a lower data rate. Content delivery for a particular stream may be initiated by the playback device, e.g., by sending a signal used to trigger content delivery. The signal may be used to join a multicast group providing content corresponding to a portion of an environment or to initiate delivery of a switched digital broadcast. In the case of broadcast content not requiring a request or other signal such as a multicast group join signal, the device may initiate reception by tuning to a channel on which the content is available.

[0028] Given that users are normally interested primarily in the forward view portion of the environment, since this is where the main action is normally ongoing, particularly when the content corresponds to a sporting event, rock concert, fashion show or a number of different events, in some embodiments the forward view portion of the environment is given data transmission priority.
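The prioritization described above can be sketched as follows. This is an illustrative sketch only: the portion names, the yaw convention (0 degrees straight ahead, increasing clockwise), and the boundaries are assumptions based on the N=3 partition discussed elsewhere in this document.

```python
# Sketch (assumed names/conventions): designate the stream covering the
# user's field of view as primary (high data rate) and the remaining
# streams as secondary. Boundaries follow the N=3 partition: front 180
# degrees, left rear 90 degrees, right rear 90 degrees.

def portion_for_yaw(yaw_degrees):
    """Map a head yaw (0 = straight ahead, increasing clockwise) to a portion."""
    yaw = yaw_degrees % 360
    if yaw <= 90 or yaw >= 270:   # within the front 180 degree portion
        return "front_180"
    if yaw < 180:                 # 90-180 degrees: to the user's right rear
        return "right_rear_90"
    return "left_rear_90"         # 180-270 degrees: to the user's left rear

def prioritize_streams(yaw_degrees,
                       portions=("front_180", "left_rear_90", "right_rear_90")):
    """Return a priority map: the viewed portion is primary, others secondary."""
    primary = portion_for_yaw(yaw_degrees)
    return {p: ("primary" if p == primary else "secondary") for p in portions}
```

The secondary streams are still received, just at a lower rate, so content is available immediately when the head turns.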
In at least some embodiments, images corresponding to the forward viewing position are streamed at a higher rate than one or more other portions of the 360 degree environment. Images corresponding to other portions of the environment are sent at a lower data rate or are sent as static images. For example, one or more static images of the top, e.g., sky, and bottom, e.g., ground, may be sent.

[0029] In some embodiments multiple static captured images are sent for one or more portions of the environment, e.g., rear view portion(s) or the sky portion. In some embodiments control information is sent indicating which one of the static images for a portion of the environment should be used at a given time. In the case where static images for a portion of the environment are sent, they may be sent in encoded form and then stored in memory in decoded form for use in combining with other image content. In this way, decoding resources required during an event can be reduced since multiple streams need not be decoded in parallel at the same frame rate. The static images may be sent prior to streaming the content of the main event. Alternatively, a few images may be sent for different portions of the environment and stored in the event they are needed during playback given a change in the user's head position from the forward viewing position. The static or infrequent images may be encoded and sent as part of a content stream providing the content for the primary, e.g., forward, viewing direction or may be sent as a separate content stream.

[0030] The static images corresponding to the rear may be, and sometimes are, images captured prior to an event, while the content corresponding to the forward portion of the environment may, and in many cases does, include content that is captured and streamed while an event is ongoing, e.g., in real time.

[0031] Consider for example a case where two different rear view scenes are communicated and stored in the playback device.
One scene may correspond to a crowd which is in a standing position and another image may correspond to a crowd which is in a seated position. The control information may, and in some embodiments does, indicate whether the seated or standing position crowd image is to be used at a given time should a user turn his/her head to a position where the rear portion of the environment is visible.

[0032] Similarly, multiple images of the sky may be communicated to the playback device and stored in the playback device in encoded or decoded form. In some embodiments, which image of the sky portion is to be used at a given time is communicated in control information. In other embodiments, which scene of the sky is to be used is automatically determined based on the luminance of one or more images corresponding to the forward scene area, with a sky portion consistent with or close to the forward environmental scene portion being selected. For example, a bright forward scene area can be detected and used to control selection of a bright sky image with few clouds. Similarly, detection of a dark forward environmental area in some embodiments will result in a dark overcast sky image being used.

[0033] In cases where an image for a portion of the environment in a field of view is not available, the scene portion can be synthesized, e.g., from information or content from other portions of the environment which are available. For example, if the rear image portion is not available, the content from the left and/or right sides of the forward scene area may be copied and used to fill in for missing rear portions of the environment. Blurring and/or other image processing operations in addition to content duplication may be used to fill in for missing portions of the environment in some embodiments. Alternatively, in some embodiments drawing information is provided in the content stream and the playback device generates completely synthetic images for the missing portions.
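The copy-and-blur fill-in just described can be sketched as follows. This is an illustrative sketch under stated assumptions: image rows are modeled as flat lists of luminance values, and the edge widths and blur radius are arbitrary choices, not values from this document.

```python
# Sketch (assumed data model): when no rear image is available, duplicate
# content from the left and right edges of the forward scene row and blur
# it to stand in for the missing rear portion.

def box_blur(values, radius=1):
    """Simple 1-D box blur used to soften the duplicated content."""
    out = []
    for i in range(len(values)):
        window = values[max(0, i - radius): i + radius + 1]
        out.append(sum(window) / len(window))
    return out

def synthesize_rear(front_row, rear_width):
    """Build a rear-portion row by duplicating the forward row's outer edges."""
    half = rear_width // 2
    left_edge = front_row[:half]                    # content near the left edge
    right_edge = front_row[-(rear_width - half):]   # content near the right edge
    return box_blur(right_edge + left_edge)

front = [10, 20, 30, 40, 50, 60]
rear = synthesize_rear(front, 4)   # a blurred stand-in for the missing rear
```

The blur hides the seam where duplicated content meets, which matters less in peripheral or rear areas where the user's attention rarely rests.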
As with video game content, such content may be realistic in nature and may include a wide variety of image effects and/or content which is generated from drawing and/or other image creation rules stored in the playback device.

[0034] An exemplary method of operating a playback system, in accordance with some embodiments, includes determining a head position of a viewer, said head position corresponding to a current field of view; receiving a first content stream providing content corresponding to a first portion of an environment; generating one or more output images corresponding to the current field of view based on at least some received content included in said first content stream and i) stored content corresponding to a second portion of said environment or ii) a synthetic image simulating a second portion of said environment; and outputting or displaying a first output image, said first output image being one of the one or more generated output images. An exemplary content playback system, in accordance with some embodiments, includes a viewer head position determination module configured to determine a head position of a viewer, said head position corresponding to a current field of view; a content stream receive module configured to receive a first content stream providing content corresponding to a first portion of an environment; an output image content stream based generation module configured to generate one or more output images corresponding to the current field of view based on at least some received content included in said first content stream and i) stored content corresponding to a second portion of said environment or ii) a synthetic image simulating a second portion of said environment; and at least one of: an output module configured to output said first output image or a display module configured to display said first output image.

[0035] Numerous variations and embodiments are possible and discussed in the detailed description which follows.
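The exemplary playback method above can be sketched as a single function. This is a minimal sketch, not the claimed implementation: the function name, argument names, and the string placeholders for image content are all assumptions.

```python
# Sketch (assumed names): determine the head position, take content from the
# first (streamed) portion, and complete the output image with either stored
# content for the second portion or a synthetic stand-in when none is stored.

def generate_output_image(head_position, stream_content, stored_content=None):
    """Combine first-stream content for the current field of view with
    stored content for a second portion, or a synthetic stand-in when
    no stored content is available."""
    if stored_content is not None:
        second_portion = stored_content
    else:
        # synthesize, e.g., from duplicated edge content of the stream
        second_portion = f"synthetic({stream_content})"
    return {
        "field_of_view": head_position,
        "first_portion": stream_content,
        "second_portion": second_portion,
    }

image = generate_output_image(
    head_position="forward",
    stream_content="front-180-frame",
    stored_content=None,
)
```

For stereoscopic playback the same combination would be performed twice, once per eye, per the separate left and right eye images described above.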
BRIEF DESCRIPTION OF THE DRAWINGS

[0036] FIG. 1 illustrates an exemplary system implemented in accordance with some embodiments of the invention which can be used to capture and stream content for subsequent display by one or more users along with one or more synthesized portions of an environment.

[0037] FIG. 2A illustrates an exemplary stereoscopic scene, e.g., a full 360 degree stereoscopic scene which has not been partitioned.

[0038] FIG. 2B illustrates an exemplary stereoscopic scene which has been partitioned into 3 exemplary scenes in accordance with one exemplary embodiment.

[0039] FIG. 2C illustrates an exemplary stereoscopic scene which has been partitioned into 4 scenes in accordance with one exemplary embodiment.

[0040] FIG. 3 illustrates an exemplary process of encoding an exemplary 360 degree stereoscopic scene in accordance with one exemplary embodiment.

[0041] FIG. 4 illustrates an example showing how an input image portion is encoded using a variety of encoders to generate different encoded versions of the same input image portion.

[0042] FIG. 5 illustrates stored encoded portions of an input stereoscopic scene that has been partitioned into 3 portions.

[0043] FIG. 6 is a flowchart illustrating the steps of an exemplary method of streaming content in accordance with an exemplary embodiment implemented using the system of FIG. 1.

[0044] FIG. 7 illustrates an exemplary content delivery system including encoding capability that can be used to encode and stream content in accordance with the features of the invention.

[0045] FIG. 8 illustrates an exemplary content playback system that can be used to receive, decode and display the content streamed by the system of FIG. 7.

[0046] FIG. 9 illustrates a drawing showing an exemplary camera rig with 3 camera pairs mounted in different mounting positions along with a calibration target which may be used for calibrating the camera rig.

[0047] FIG. 10 illustrates a drawing that shows a more focused view of the camera rig with the 3 camera pairs mounted in the camera rig.

[0048] FIG.
11 shows a detailed illustration of an exemplary camera rig implemented in accordance with one exemplary embodiment.

[0049] FIG. 12 illustrates an exemplary 360 scene environment, e.g., 360 scene area, which can be partitioned into different viewing areas/portions corresponding to different camera positions of the respective cameras that capture the different portions of the 360 degree scene.

[0050] FIG. 13 includes three different drawings showing different portions of the exemplary 360 scene area of FIG. 12 which may be captured by different cameras that correspond to and/or are positioned to cover the viewing areas/portions of the exemplary 360 scene area.

[0051] FIG. 14A is a first part of a flowchart illustrating the steps of an exemplary method of operating a playback device, in accordance with an exemplary embodiment of the invention.

[0052] FIG. 14B is a second part of the flowchart illustrating the steps of an exemplary method of operating a playback device, in accordance with an exemplary embodiment of the invention.

[0053] FIG. 14 comprises the combination of FIG. 14A and FIG. 14B.

[0054] FIG. 15 is a flowchart illustrating the steps of a stream selection subroutine in accordance with an exemplary embodiment.

[0055] FIG. 16 is a flowchart illustrating the steps of a stream prioritization subroutine in accordance with an exemplary embodiment.

[0056] FIG. 17 is a flowchart illustrating the steps of a rendering subroutine in accordance with an exemplary embodiment.

[0057] FIG. 18 illustrates an exemplary table including stream information corresponding to a plurality of content streams.

[0058] FIG. 19 illustrates an exemplary playback system implemented in accordance with the present invention.

[0059] FIG. 20A is a first part of a flowchart of an exemplary method of operating a content playback system in accordance with an exemplary embodiment.

[0060] FIG. 20B is a second part of a flowchart of an exemplary method of operating a content playback system in accordance with an exemplary embodiment.

[0061] FIG.
20C is a third part of a flowchart of an exemplary method of operating a content playback system in accordance with an exemplary embodiment.

[0062] FIG. 20D is a fourth part of a flowchart of an exemplary method of operating a content playback system in accordance with an exemplary embodiment.

[0063] FIG. 20E is a fifth part of a flowchart of an exemplary method of operating a content playback system in accordance with an exemplary embodiment.

[0064] FIG. 20 comprises the combination of FIG. 20A, FIG. 20B, FIG. 20C, FIG. 20D and FIG. 20E.

[0065] FIG. 21 is a drawing of an exemplary content playback system, e.g., a content playback device or a computer system coupled to a display, in accordance with an exemplary embodiment.

[0066] FIG. 22 is a drawing of an exemplary assembly of modules which may be included in the exemplary content playback system of FIG. 21.

[0067] FIG. 23 is a drawing showing an exemplary stream selection module which can be used in the playback system of FIG. 19 in accordance with some embodiments.

[0068] FIG. 24 is a drawing showing an exemplary stream prioritization module which can be implemented as part of the stream selection module of FIG. 23 or as an individual module.

DETAILED DESCRIPTION

[0069] FIG. 1 illustrates an exemplary system 100 implemented in accordance with some embodiments of the invention. The system 100 supports content delivery, e.g., imaging content delivery, to one or more customer devices, e.g., playback devices/content players, located at customer premises. The system 100 includes the exemplary image capturing device 102, a content delivery system 104, a communications network 105, and a plurality of customer premises 106, . . . , 110. The image capturing device 102 supports capturing of stereoscopic imagery.
The image capturing device 102 captures and processes imaging content in accordance with the features of the invention. The communications network 105 may be, e.g., a hybrid fiber-coaxial (HFC) network, a satellite network, and/or the internet.

[0070] The content delivery system 104 includes an encoding apparatus 112 and a content streaming device/server 114. The encoding apparatus 112 may, and in some embodiments does, include one or a plurality of encoders for encoding image data in accordance with the invention. The encoders may be used in parallel to encode different portions of a scene and/or to encode a given portion of a scene to generate encoded versions which have different data rates. Using multiple encoders in parallel can be particularly useful when real time or near real time streaming is to be supported.

[0071] The content streaming device 114 is configured to stream, e.g., transmit, encoded content for delivering the encoded image content to one or more customer devices, e.g., over the communications network 105. Via the network 105, the content delivery system 104 can send and/or exchange information with the devices located at the customer premises 106, 110, as represented in the figure by the link 120 traversing the communications network 105.

[0072] While the encoding apparatus 112 and content delivery server 114 are shown as separate physical devices in the FIG. 1 example, in some embodiments they are implemented as a single device which encodes and streams content. The encoding process may be a 3D, e.g., stereoscopic, image encoding process where information corresponding to left and right eye views of a scene portion are encoded and included in the encoded image data so that 3D image viewing can be supported.
The particular encoding method used is not critical to the present application, and a wide range of encoders may be used as or to implement the encoding apparatus 112.

[0073] Each customer premise 106, 110 may include a plurality of devices/players, e.g., playback systems used to decode and playback/display the imaging content streamed by the content streaming device 114. Customer premise 1 106 includes a decoding apparatus/playback device 122 coupled to a display device 124, while customer premise N 110 includes a decoding apparatus/playback device 126 coupled to a display device 128. In some embodiments the display devices 124, 128 are head mounted stereoscopic display devices. In some embodiments the playback device 122/126 and the head mounted device 124/128 together form a playback system.

[0074] In various embodiments decoding apparatus 122, 126 present the imaging content on the corresponding display devices 124, 128. The decoding apparatus/players 122, 126 may be devices which are capable of decoding the imaging content received from the content delivery system 104, generating imaging content using the decoded content, and rendering the imaging content, e.g., 3D image content, on the display devices 124, 128. Any of the decoding apparatus/playback devices 122, 126 may be used as the decoding apparatus/playback device 800 shown in FIG. 8. A system/playback device such as the one illustrated in FIGS. 8 and 19 can be used as any of the decoding apparatus/playback devices 122, 126.

[0075] FIG. 2A illustrates an exemplary stereoscopic scene 200, e.g., a full 360 degree stereoscopic scene which has not been partitioned. The stereoscopic scene may be, and normally is, the result of combining image data captured from multiple cameras, e.g., video cameras, often mounted on a single video capture platform or camera mount.

[0076] FIG.
2B illustrates a partitioned version 250 of the exemplary stereoscopic scene 200 where the scene has been partitioned into 3 (N=3) exemplary portions, e.g., a front 180 degree portion, a left rear 90 degree portion and a right rear 90 degree portion, in accordance with one exemplary embodiment.

[0077] FIG. 2C illustrates another partitioned version 280 of the exemplary stereoscopic scene 200 which has been partitioned into 4 (N=4) portions in accordance with one exemplary embodiment.

[0078] While FIGS. 2B and 2C show two exemplary partitions, it should be appreciated that other partitions are possible. For example, the scene 200 may be partitioned into twelve (N=12) 30 degree portions. In one such embodiment, rather than individually encoding each portion, multiple portions are grouped together and encoded as a group. Different groups of portions may be encoded and streamed to the user, with the size of each group being the same in terms of total degrees of scene but corresponding to different portions of an image which may be streamed depending on the user's head position, e.g., viewing angle as measured on the scale of 0 to 360 degrees.

[0079] FIG. 3 illustrates an exemplary process of encoding an exemplary 360 degree stereoscopic scene in accordance with one exemplary embodiment. The input to the method 300 shown in FIG. 3 includes 360 degree stereoscopic image data captured by, e.g., a plurality of cameras arranged to capture a 360 degree view of a scene. The stereoscopic image data, e.g., stereoscopic video, may be in any of a variety of known formats and includes, in most embodiments, left and right eye image data used to allow for a 3D experience. While the methods are particularly well suited for stereoscopic video, the techniques and methods described herein can also be applied to 2D images, e.g.,
of a 360 degree or smaller scene area.

[0080] In step 304 the scene data 302 is partitioned into data corresponding to different scene areas, e.g., N scene areas corresponding to different viewing directions. For example, in one embodiment such as the one shown in FIG. 2B, the 360 degree scene area is partitioned into three partitions: a left rear portion corresponding to a 90 degree portion, a front 180 degree portion, and a right rear 90 degree portion. The different portions may have been captured by different cameras, but this is not necessary and in fact the 360 degree scene may be constructed from data captured from multiple cameras before dividing into the N scene areas as shown in FIGS. 2B and 2C.

[0081] In step 306 the data corresponding to the different scene portions is encoded in accordance with the invention. In some embodiments each scene portion is independently encoded by multiple encoders to support multiple possible bit rate streams for each portion. In step 308 the encoded scene portions are stored, e.g., in the content delivery server 114 of the content delivery system 104, for streaming to the customer playback devices.

[0082] FIG. 4 is a drawing 400 illustrating an example showing how an input image portion, e.g., a 180 degree front portion of a scene, is encoded using a variety of encoders to generate different encoded versions of the same input image portion.

[0083] As shown in drawing 400, an input scene portion 402, e.g., a 180 degree front portion of a scene, is supplied to a plurality of encoders for encoding. In the example there are K different encoders which encode input data with different resolutions and using different encoding techniques to generate encoded data to support different data rate streams of image content.
The plurality of K encoders include a high definition (HD) encoder 1 404, a standard definition (SD) encoder 2 406, a reduced frame rate SD encoder 3 408, . . . , and a high compression reduced frame rate SD encoder K 410.

[0084] The HD encoder 1 404 is configured to perform full high definition (HD) encoding to produce a high bit rate HD encoded image 412. The SD encoder 2 406 is configured to perform low resolution standard definition encoding to produce a SD encoded version 2 414 of the input image. The reduced frame rate SD encoder 3 408 is configured to perform reduced frame rate low resolution SD encoding to produce a reduced rate SD encoded version 3 416 of the input image. The reduced frame rate may be, e.g., half of the frame rate used by the SD encoder 2 406 for encoding. The high compression reduced frame rate SD encoder K 410 is configured to perform reduced frame rate low resolution SD encoding with high compression to produce a highly compressed reduced rate SD encoded version K 420 of the input image.

[0085] Thus it should be appreciated that control of spatial and/or temporal resolution can be used to produce data streams of different data rates, and control of other encoder settings such as the level of data compression may also be used, alone or in addition to control of spatial and/or temporal resolution, to produce data streams corresponding to a scene portion with one or more desired data rates.

[0086] FIG. 5 illustrates stored encoded portions 500 of an input stereoscopic scene that has been partitioned into 3 exemplary portions. The stored encoded portions may be stored in the content delivery system 104, e.g., as data/information in the memory.
The stored encoded portions 500 of the stereoscopic scene include 3 different sets of encoded portions, where each portion corresponds to a different scene area and each set includes a plurality of different encoded versions of the corresponding scene portion. Each encoded version is a version of encoded video data and thus represents multiple frames which have been encoded. It should be appreciated that each encoded version 510, 512, 516, being video, corresponds to multiple periods of time and that, when streaming, the portion, e.g., frames, corresponding to the period of time being played back will be used for transmission purposes.

[0087] As illustrated and discussed above with regard to FIG. 4, each scene portion, e.g., front, rear scene portions, may be encoded using a plurality of different encoders to produce K different versions of the same scene portion. The outputs of each encoder corresponding to a given input scene are grouped together as a set and stored. The first set of encoded scene portions 502 corresponds to the front 180 degree scene portion, and includes encoded version 1 510 of the front 180 degree scene, encoded version 2 512 of the front 180 degree scene, . . . , and encoded version K 516 of the front 180 degree scene. The second set of encoded scene portions 504 corresponds to the scene portion 2, e.g., 90 degree left rear scene portion, and includes encoded version 1 520 of the 90 degree left rear scene portion, encoded version 2 522 of the 90 degree left rear scene portion, . . . , and encoded version K 526 of the 90 degree left rear scene portion. Similarly, the third set of encoded scene portions 506 corresponds to the scene portion 3, e.g.,
90 degree right rear scene portion, and includes encoded version 1 530 of the 90 degree right rear scene portion, encoded version 2 532 of the 90 degree right rear scene portion, . . . , and encoded version K 536 of the 90 degree right rear scene portion.

[0088] The various different stored encoded portions of the 360 degree scene can be used to generate various different bit rate streams for sending to the customer playback devices.

[0089] FIG. 6 is a flowchart 600 illustrating the steps of an exemplary method of providing image content, in accordance with an exemplary embodiment. The method of flowchart 600 is implemented in some embodiments using the capturing system shown in FIG. 1.

[0090] The method starts in step 602, e.g., with the delivery system being powered on and initialized. The method proceeds from start step 602 to step 604. In step 604 the content delivery system 104, e.g., the server 114 within the system 104, receives a request for content, e.g., a request for a previously encoded program or, in some cases, a live event being encoded and streamed in real or near real time, e.g., while the event is still ongoing.

[0091] In response to the request, in step 606, the server 114 determines the data rate available for delivery. The data rate may be determined from information included in the request indicating the supported data rates and/or from other information such as network information indicating the maximum bandwidth that is available for delivering content to the requesting device. As should be appreciated, the available data rate may vary depending on network loading and may change [...]

[...] The components of the system 700 are coupled together via bus 709 which allows for data to be communicated between the components of the system 700.

[0110] The memory 712 includes various modules, e.g., routines, which when executed by the processor 708 control the system 700 to implement the partitioning, encoding, storage, and streaming/transmission and/or output operations in accordance with the invention.

[0111] The memory 712 includes various modules, e.g.,
routines, which when executed by the processor 708 control the computer system 700 to implement the immersive stereoscopic video acquisition, encoding, storage, and transmission and/or output methods in accordance with the invention. The memory 712 includes control routines 714, a partitioning module 716, encoder(s) 718, a streaming controller 720, received input images 732, e.g., 360 degree stereoscopic video of a scene, encoded scene portions 734, and timing information 736. In some embodiments the modules are implemented as software modules. In other embodiments the modules are implemented in hardware, e.g., as individual circuits with each module being implemented as a circuit for performing the function to which the module corresponds. In still other embodiments the modules are implemented using a combination of software and hardware.

[0112] The control routines 714 include device control routines and communications routines to control the operation of the system 700. The partitioning module 716 is configured to partition a received stereoscopic 360 degree version of a scene into N scene portions in accordance with the features of the invention.

[0113] The encoder(s) 718 may, and in some embodiments do, include a plurality of encoders configured to encode received image content, e.g., a 360 degree version of a scene and/or one or more scene portions, in accordance with the features of the invention. In some embodiments the encoder(s) include multiple encoders, with each encoder being configured to encode a stereoscopic scene and/or partitioned scene portions to support a given bit rate stream. Thus in some embodiments each scene portion can be encoded using multiple encoders to support multiple different bit rate streams for each scene. An output of the encoder(s) 718 is the encoded scene portions 734 which are stored in the memory for streaming to customer devices, e.g., playback devices.
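The multi-rate encoding performed by the encoder(s) 718 and the resulting per-portion sets stored as the encoded scene portions 734 (compare FIGS. 4 and 5) can be sketched as follows. The version names and data rates here are assumptions for illustration, not values from this document, and K=4 is arbitrary.

```python
# Sketch (assumed names/rates): each scene portion is encoded into K
# versions at different data rates, and the versions are grouped into one
# stored set per portion, mirroring the FIG. 5 layout.

ENCODER_LADDER = [
    ("v1_HD", 8.0),                   # full HD, high bit rate
    ("v2_SD", 3.0),                   # low resolution SD
    ("v3_SD_reduced_rate", 1.5),      # SD at half frame rate
    ("vK_SD_high_compression", 0.8),  # reduced-rate SD, high compression
]

def encode_all_portions(portions):
    """Return one set of K encoded versions per scene portion."""
    return {
        portion: [
            {"portion": portion, "version": name, "rate_mbps": rate}
            for name, rate in ENCODER_LADDER
        ]
        for portion in portions
    }

stored = encode_all_portions(["front_180", "left_rear_90", "right_rear_90"])
```

Grouping the K versions per portion makes it straightforward for the server to later pick one version per portion that fits a requesting device's data rate.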
The encoded content can be streamed to one or multiple different devices via the network interface 710.

[0114] The streaming controller 720 is configured to control streaming of encoded content for delivering the encoded image content to one or more customer devices, e.g., over the communications network 105. In various embodiments various steps of the flowchart 600 are implemented by the elements of the streaming controller 720. The streaming controller 720 includes a request processing module 722, a data rate determination module 724, a current head position determination module 726, a selection module 728 and a streaming control module 730. The request processing module 722 is configured to process a received request for imaging content from a customer playback device. The request for content is received in various embodiments via a receiver in the network interface 710. In some embodiments the request for content includes information indicating the identity of the requesting playback device. In some embodiments the request for content may include the data rate supported by the customer playback device and a current head position of the user, e.g., position of the head mounted display. The request processing module 722 processes the received request and provides retrieved information to other elements of the streaming controller 720 to take further actions. While the request for content may include data rate information and current head position information, in various embodiments the data rate supported by the playback device can be determined from network tests and other network information exchange between the system 700 and the playback device.

[0115] The data rate determination module 724 is configured to determine the available data rates that can be used to stream imaging content to customer devices; e.g., since multiple encoded scene portions are supported, the content delivery system 700 can support streaming content at multiple data rates to the customer device.
The data rate determination module 724 is further configured to determine the data rate supported by a playback device requesting content from system 700. In some embodiments the data rate determination module 724 is configured to determine the available data rate for delivery of image content based on network measurements.

[0116] The current head position determination module 726 is configured to determine a current viewing angle and/or a current head position of the user, e.g., position of the head mounted display, from information received from the playback device. In some embodiments the playback device periodically sends current head position information to the system 700, where the current head position determination module 726 receives and processes the information to determine the current viewing angle and/or a current head position.

[0117] The selection module 728 is configured to determine which portions of a 360 degree scene to stream to a playback device based on the current viewing angle/head position information of the user. The selection module 728 is further configured to select the encoded versions of the determined scene portions based on the available data rate to support streaming of content.

[0118] The streaming control module 730 is configured to control streaming of image content, e.g., multiple portions of a 360 degree stereoscopic scene, at various supported data rates in accordance with the features of the invention. In some embodiments the streaming control module 730 is configured to control streaming of N portions of a 360 degree stereoscopic scene to the playback device requesting content to initialize scene memory in the playback device. In various embodiments the streaming control module 730 is configured to send the selected encoded versions of the determined scene portions periodically, e.g., at a determined rate.
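The two decisions made by the selection module 728, which portions to stream for the current viewing angle and which encoded version fits the available data rate, can be sketched as follows. The version names, rates, and the policy of serving non-viewed portions at the lowest rate are assumptions for this sketch, not specifics from the document.

```python
# Sketch (assumed names/rates/policy): stream the viewed portion at the
# best-quality version that fits the available data rate, and the other
# portions at the lowest-rate version.

# available versions per portion: (name, required_rate_mbps), best first
VERSIONS = [("HD", 8.0), ("SD", 3.0), ("SD_low", 1.0)]

def select_version(available_rate_mbps):
    """Pick the highest-quality version whose rate fits the available rate."""
    for name, required in VERSIONS:
        if required <= available_rate_mbps:
            return name
    return VERSIONS[-1][0]   # fall back to the lowest-rate version

def select_streams(viewed_portion, all_portions, available_rate_mbps):
    """Plan one encoded version per portion given the current viewing angle."""
    plan = {}
    for portion in all_portions:
        if portion == viewed_portion:
            plan[portion] = select_version(available_rate_mbps)
        else:
            plan[portion] = VERSIONS[-1][0]
    return plan

plan = select_streams("front_180",
                      ["front_180", "left_rear_90", "right_rear_90"], 4.0)
```

As the head position updates arrive, re-running the selection switches which portion gets the high-rate version without renegotiating the other streams.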
In some embodiments the streaming control module 730 is further configured to send a 360 degree scene update to the playback device in accordance with a time interval, e.g., once every minute. In some embodiments sending the 360 degree scene update includes sending N scene portions or N−X scene portions of the full 360 degree stereoscopic scene, where N is the total number of portions into which the full 360 degree stereoscopic scene has been partitioned and X represents the selected scene portions recently sent to the playback device. In some embodiments the streaming control module 730 waits for a predetermined time after initially sending the N scene portions for initialization before sending the 360 degree scene update. In some embodiments the timing information to control sending of the 360 degree scene update is included in the timing information 736. In some embodiments the streaming control module 730 is further configured to identify scene portions which have not been transmitted to the playback device during a refresh interval and transmit an updated version of the identified scene portions which were not transmitted to the playback device during the refresh interval.

[0119] In various embodiments the streaming control module 730 is configured to communicate at least a sufficient number of the N portions to the playback device on a periodic basis to allow the playback device to fully refresh a 360 degree version of said scene at least once during each refresh period.

[0120] FIG. 8 illustrates a playback system 800 implemented in accordance with the present invention which can be used to receive, decode, store and display imaging content received from a content delivery system such as the one shown in FIGS. 1 and 7. The system 800 can be implemented as a single playback device 800 which includes a display 802 or as a combination of elements such as an external display, e.g., a head mounted display 805, coupled to a computer system 800.
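The refresh behavior described for module 730 reduces to set arithmetic: of the N total portions, only the N−X portions not already delivered during the interval need to be retransmitted. The sketch below is illustrative; the function name and the example values are assumptions.

```python
# Hypothetical sketch of the refresh logic described for module 730: at the
# end of each refresh interval, identify the portions not transmitted during
# that interval (N - X of the N total) so the playback device can fully
# rebuild its 360 degree scene. Names and values are illustrative.

def portions_to_refresh(n_portions, recently_sent):
    """Return indices of portions not transmitted during the refresh interval."""
    return sorted(set(range(n_portions)) - set(recently_sent))

# During the interval, viewport-driven streaming covered portions 0 and 3 of
# an N = 4 partition (X = 2), so the update need only carry the remaining two.
stale = portions_to_refresh(4, recently_sent=[0, 3])
```

In practice the scheduler would run this at each timer tick derived from the timing information, then reset the recently-sent set for the next interval.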
[0121] The playback system 800, in at least some embodiments, includes a 3D head mounted display. The head mounted display may be implemented using the OCULUS RIFT™ VR (virtual reality) headset which may include the head mounted display 805. Other head mounted displays may also be used. In some embodiments a head mounted helmet or other head mounting device in which one or more display screens are used to display content to a user's left and right eyes is used. By displaying different images to the left and right eyes on a single screen, with the head mount being configured to expose different portions of the single screen to different eyes, a single display can be used to display left and right eye images which will be perceived separately by the viewer's left and right eyes. In some embodiments a cell phone screen is used as the display of the head mounted display device. In at least some such embodiments a cell phone is inserted into the head mounting device and the cell phone is used to display images.

[0122] The playback system 800 has the ability to decode received encoded image data, e.g., left and right eye images and/or mono (single) images corresponding to different portions of an environment or scene, and generate 3D image content for display to the customer, e.g., by rendering and displaying different left and right eye views which are perceived by the user as a 3D image. The playback system 800 in some embodiments is located at a customer premise location such as a home or office but may be located at an image capture site as well. The system 800 can perform signal reception, decoding, display and/or other operations in accordance with the invention.

[0123] The system 800 includes a display 802, a display device interface 803, input device 804, input/output (I/O) interface 806, a processor 808, network interface 810 and a memory 812.
The various components of the system 800 are coupled together via bus 809, which allows for data to be communicated between the components of the system 800, and/or by other connections or through a wireless interface. While in some embodiments display 802 is included as an optional element, as illustrated using the dashed box, in some embodiments an external display device 805, e.g., a head mounted stereoscopic display device, can be coupled to the playback device via the display device interface 803.

[0124] For example, in a case where a cell phone processor is used as the processor 808 and the cell phone generates and displays images in a head mount, the system may include as part of the head mount device the processor 808, display 802 and memory 812. The processor 808, display 802 and memory 812 may all be part of the cell phone. In other embodiments of the system 800, the processor 808 may be part of a gaming system such as an XBOX or PS4 with the display 805 being mounted in a head mounting device and coupled to the gaming system. Whether the processor 808 or memory 812 are located in the device which is worn on the head or not is not critical and, as can be appreciated, while in some cases it may be convenient to co-locate the processor in the headgear, from a power, heat and weight perspective it can be desirable, in at least some cases, to have the processor 808 and memory 812 coupled to the head gear which includes the display.

[0125] While various embodiments contemplate a head mounted display 805 or 802, the method and apparatus can also be used with non-head mounted displays which can support 3D images. Accordingly, while in many embodiments the system 800 includes a head mounted display, it can also be implemented with a non-head mounted display.

[0126] The memory 812 includes various modules, e.g., routines, which when executed by the processor 808 control the playback device 800 to perform decoding and output operations in accordance with the invention.
The memory 812 includes control routines 814, a request for content generation module 816, a head position and/or viewing angle determination module 818, a decoder module 820, a stereoscopic image rendering module 822 also referred to as a 3D image generation module, and data/information including received encoded image content 824, decoded image content 826, a 360 degree decoded scene buffer 828, and generated stereoscopic content 830.

[0127] The control routines 814 include device control routines and communications routines to control the operation of the device 800. The request generation module 816 is configured to generate a request for content to send to a content
