(19) United States
(12) Patent Application Publication — RINNER et al.
(10) Pub. No.: US 2012/0050524 A1
(43) Pub. Date: Mar. 1, 2012

(54) APPARATUS AND METHOD FOR GENERATING AN OVERVIEW IMAGE OF A PLURALITY OF IMAGES USING AN ACCURACY INFORMATION

(75) Inventors: RINNER, Radegund (AT); Markus QUARITSCH, Grosspetersdorf (AT); Daniel WISCHOUNIG-STRUCL, Klagenfurt (AT); Saeed YAHYANEJAD, Klagenfurt (AT)

(73) Assignee: LAKESIDE LABS GMBH, Klagenfurt (AT)

(21) Appl. No.: 13/215,836

(22) Filed: Aug. 23, 2011

Related U.S. Application Data
(60) Provisional application No. 61/377,604, filed on Aug. 27, 2010.

(30) Foreign Application Priority Data
Aug. 25, 2010 (EP) ................ 10174053.8

(51) Int. Cl.: H04N 7/18 (2006.01); G06T 15/00 (2011.01); G09G 5/377 (2006.01)
(52) U.S. Cl.: 348/117; 345/634; 345/419; 348/E07.085

(57) ABSTRACT

An apparatus for generating an overview image of a plurality of images includes an image preprocessor which preprocesses a new image by assigning the new image to a position in the overview image based on position information contained by meta-data of the new image. A storage unit stores a plurality of images of the overview image and provides the overview image for display. Further, the image processor receives accuracy information of the position information. The image processor determines an overlap region of the preprocessed new image and a stored image within the overview image based on the assigned positions of the preprocessed new image and of the stored image. Further, a controllable processing engine processes the preprocessed new image by re-adjusting the assigned position of the preprocessed new image based on comparing features of the overlap region of the preprocessed new image and the stored image. The controllable processing engine is controlled by accuracy information of the position information.

[FIG. 1 (Sheet 1 of 15): block diagram of an apparatus 100 with an image preprocessor 102, an image processor comprising a controllable processing engine and an accuracy information input, and a storage unit.]
[FIG. 2 (Sheet 2 of 15): block diagram of an apparatus as in FIG. 1, shown together with an unmanned aerial vehicle 120 and a control unit 124.]

[FIG. 3 (Sheet 3 of 15): schematic illustration of position and orientation errors at a given GPS position and flight height; the GPS error range and the camera tilting error range add up to the total error range.]

[FIG. 4 (Sheet 4 of 15): generation of an overview image from a plurality of images taken by an unmanned aerial vehicle.]

[FIG. 5 (Sheet 5 of 15): flowchart 500 of a method for generating an overview image: storing a plurality of processed images of the overview image, wherein each processed image is assigned to a position in the overview image (510); determining feature points of a new image (520); comparing the determined feature points of the new image with feature points of a stored processed image to identify common feature points and to obtain 3-dimensional positions of the common feature points (530); determining common feature points located within a predefined maximum distance of relevance to a reference plane, based on the 3-dimensional positions, to identify relevant common feature points (540); processing the new image by assigning it to a position in the overview image based on a comparison of an image information of each relevant common feature point of the new image with an image information of each corresponding relevant common feature point of the stored processed image, without considering common feature points located beyond the predefined maximum distance of relevance to the reference plane (550); adding the new image with the assigned position to the plurality of processed images of the overview image (560); and providing the overview image containing the plurality of processed images at their assigned positions for displaying (570).]

[FIG. 6 (Sheet 6 of 15): block diagram of an apparatus 600 comprising an image preprocessor 602.]

[FIG. 7 (Sheet 7 of 15): flowchart 700 of a method for generating an overview image with the same steps as FIG. 5 (710-770).]

[FIG. 8 (Sheet 8 of 15): flowchart of image processing: an image and its meta-data (810) undergo image preprocessing, which computes an initial transformation based on the meta-data, i.e. position and orientation (820); intersecting images are found in the pool of already processed images (830, 840); a (rough) 3D structure is computed (842) by extracting feature points (844) and matching them, the 3D structure and camera positions are computed from the matched feature points, and a common plane is fitted into the 3D structure; a (refined) transformation based on the image data is then computed (850, 852) by maximizing the correlation of extracted regions on the common plane, taking the estimated inaccuracy of the meta-data into account (854).]
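The error model of FIG. 3 adds the GPS error range and the ground offset induced by the camera tilting error at a given flight height to obtain the total error range. The following is a minimal sketch of such a combination; the simple additive formula and all names are my own illustrative assumptions, not taken from the document:

```python
import math

def total_error_range(gps_error_m: float, flight_height_m: float,
                      tilt_error_deg: float) -> float:
    """Estimate the ground-position error range of an aerial image.

    Combines the horizontal GPS error with the ground displacement
    caused by an uncertain camera tilt at the given flight height
    (simple additive model, an assumption for illustration only).
    """
    tilt_offset_m = flight_height_m * math.tan(math.radians(tilt_error_deg))
    return gps_error_m + tilt_offset_m
```

For example, under this model a GPS error of 3 m at a flight height of 50 m with a 2 degree tilting error gives roughly 3 + 50 * tan(2 deg), i.e. about 4.7 m.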
[FIG. 9 (Sheet 9 of 15): block diagram of an unmanned aerial vehicle 900.]

[FIG. 10A (Sheet 10 of 15): an example of an overview image of a plurality of images.]

[FIG. 10B (Sheet 11 of 15): another example of an overview image of a plurality of images.]

[FIG. 11A/11B (Sheet 12 of 15): diagrams comparing the correlation of the overlapping parts of two adjacent images in different approaches, and the relative distance between the estimated position and the GPS position.]

[FIG. 12 (Sheet 13 of 15): pure GPS position placement of images; axes: ground size W-E [px], ground size N-S [px].]

[FIG. 13 (Sheet 14 of 15): rotated GPS position placement of images; axes: ground size W-E [px], ground size N-S [px].]

[FIG. 14 (Sheet 15 of 15): SIFT position placement of images; axes: ground size W-E [px], ground size N-S [px].]

APPARATUS AND METHOD FOR GENERATING AN OVERVIEW IMAGE OF A PLURALITY OF IMAGES USING AN ACCURACY INFORMATION

BACKGROUND OF THE INVENTION

[0001] Embodiments according to the invention relate to the field of image processing, and particularly to an apparatus and a method for generating an overview image of a plurality of images.
[0002] Much research has been done in the area of mosaicking of aerial imagery and surveillance over the past years. Many approaches have been proposed, ranging from low altitude imagery of stationary cameras and UAVs (unmanned aerial vehicles) to higher altitude imagery captured from balloons, airplanes, and satellites. High altitude imagery and on-ground mosaicking such as panoramic image construction are dealing with different challenges than low altitude imagery.

[0003] There has been a breakthrough regarding seamless stitching in past years by exploiting robust feature extraction methods (see for example "Y. Zhan-long and G. Bao-long. Image registration using rotation normalized feature points. In ISDA '08: Proceedings of the 2008 Eighth International Conference on Intelligent Systems Design and Applications, pages 237-241, Washington, D.C., USA, 2008. IEEE Computer Society", "D. Steedly, C. Pal, and R. Szeliski. Efficiently registering video into panoramic mosaics. In Proceedings of the Tenth IEEE International Conference on Computer Vision, volume 2, pages 1300-1307, Los Alamitos, Calif., USA, 17-21 2005. IEEE Computer Society", "H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst., 110(3):346-359, 2008"), depth-maps (see for example "S. B. Kang and R. Szeliski. Extracting view-dependent depth maps from a collection of images. Int. J. Comput. Vision, 58(2):139-163, 2004", "C. Cigla and A. A. Alatan. Multi-view dense depth map estimation. In IMMERSCOM '09: Proceedings of the 2nd International Conference on Immersive Telecommunications, pages 1-6, ICST, Brussels, Belgium, 2009. ICST Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering"), 3D reconstruction of the scene, image fusion, and many other approaches (e.g. "R. Szeliski. Image alignment and stitching: a tutorial. Found. Trends Comput. Graph. Vis., 2(1):1-104, 2006", "H.-Y. Shum and R. Szeliski. Construction and refinement of panoramic mosaics with global and local alignment. In Proceedings of Sixth International Conference on Computer Vision, pages 953-956, 1998"). A SURF feature-based algorithm is for example described in "H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool. Speeded-Up Robust Features (SURF). Comput. Vis. Image Underst., 110(3):346-359, 2008". Results look seamless at the stitching part, but a drawback is that the transformation performed on the images leads to a distortion in scales and relative distances. Lines which are parallel in reality are not parallel anymore in the stitched image. This type of error accumulates over multiple images if not compensated. Such a traditional feature-based approach is difficult for some applications. For example, the generation of a geo-referenced image is hardly possible due to the scale and angle distortions as well as the error propagation over multiple images. Other stitching algorithms are shown in "H.-Y. Shum and R. Szeliski, 1998" (cited above), "Y. Furukawa and J. Ponce. Accurate Camera Calibration from Multi-View Stereo and Bundle Adjustment. In Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, number 3, pages 1-8, Hingham, Mass., USA, 2008" or "G. Sibley, C. Mei, I. Reid, and P. Newman. Adaptive relative bundle adjustment. In Robotics Science and Systems (RSS), Seattle, USA, June 2009".

[0004] A challenge of low altitude imagery and mosaicking for surveillance purposes is finding an appropriate balance between seamless stitching and geo-referencing under consideration of processing time and other resources. The scale difference resulting from different flying altitudes may cause several stitching errors. In other words, significant stitching errors induced by scale differences among images may be visible. Similar objects may have different sizes, and there may be a disparity in horizontal and vertical stitching. A similar error may be caused by inaccurate camera position or rotation. In other words, stitching disparities may be caused by inaccurate camera angle or position.

[0005] Many approaches have been proposed to tackle these problems. Examples include wavelet-based stitching "C. Yuanhang, H. Xiaowei, and X. Dingyu. A mosaic approach for remote sensing images based on wavelet transform. In WiCOM '08: Proceedings of the Fourth International Conference on Wireless Communications, Networking and Mobile Computing, pages 1-4, 2008", image registering in binary domains "X. Han, H. Zhao, L. Yan, and S. Yang. An approach of fast mosaic for serial remote sensing images from UAV. In FSKD '07: Proceedings of the Fourth International Conference on Fuzzy Systems and Knowledge Discovery, pages 11-15, Washington, D.C., USA, 2007. IEEE Computer Society", automatic mosaicking by 3D-reconstruction and epipolar geometry "L. Lou, F.-M. Zhang, C. Xu, F. Li, and M.-G. Xue. Automatic registration of aerial image series using geometric invariance. In Proceedings of IEEE International Conference on Automation and Logistics, pages 1198-1203, 2008", exploiting known ground reference points for distortion correction "P. Pesti, J. Elson, J. Howell, D. Steedly, and M. Uyttendaele. Low-cost orthographic imagery. In GIS '08: Proceedings of the 16th ACM SIGSPATIAL international conference on Advances in geographic information systems, pages 1-8, New York, N.Y., USA, 2008. ACM", IMU-based multi-spectral image correction "A. Jensen, M. Baumann, and Y. Chen. Low-cost multispectral aerial imaging using autonomous runway-free small flying wing vehicles. Geoscience and Remote Sensing Symposium, IGARSS, 5:506-509, 2008", combining GPS, IMU and video sensors for distortion correction and geo-referencing "A. Brown, C. Gilbert, H. Holland, and Y. Lu. Near Real-Time Dissemination of Geo-Referenced Imagery by an Enterprise Server. In Proceedings of 2006 GeoTec Event, Ottawa, Ontario, Canada, June 2006" and perspective correction by projective transformation "W. H., WANG Yue, WU Yun-dong. Free image registration and mosaicking based on TIN and improved Szeliski algorithm. In Proceedings of ISPRS Congress, volume XXXVII, Beijing, 2008". Some of these approaches consider higher altitudes ("A. Brown et al., 2006", "X. Han et al., 2007", "L. Lou et al., 2008", "P. Pesti et al., 2008", "W. H. et al., 2008", all cited above), while others use different types of UAVs, such as small fixed-wing aircraft "G. B. Ladd, A. Nagchaudhuri, M. Mitra, T. J. Earl, and G. Bland. Rectification, geo-referencing, and mosaicking of images acquired with remotely operated aerial platforms. In Proceedings of ASPRS 2006 Annual Conference, 10 pp., Reno, Nev., USA, May 2006", "A. Jensen et al., 2008" (cited above), "Y. Huang, J. Li, and N. Fan. Image Mosaicking for UAV Application. In KAM '08: Proceedings of the 2008 International Symposium on Knowledge Acquisition and Modeling, pages 663-667, Washington, D.C., USA, 2008. IEEE Computer Society". These aircraft show less geo-referencing accuracy caused by their higher speed and degree of tilting (higher amount of roll and pitch). "Z. Zhu, E. M. Riseman, A. R. Hanson, and H. Schultz. An efficient method for geo-referenced video mosaicking for environmental monitoring. Mach. Vis. Appl., 16(4):203-216, 2005" performed aerial imagery mosaicking without any 3D reconstruction or complex global registration; that approach uses a video stream taken from an airplane. "Y. Huang et al., 2008" (cited above) performed seamless feature-based mosaicking using a small fixed-wing UAV. "J. Rossmann and M. Rast. High-detail local aerial imaging using autonomous drones. In Proceedings of 12th AGILE International Conference on Geographic Information Science: Advances in GIScience, Hannover, Germany, June 2009" also use small-scale quadrocopters. The mosaicking results are seamless but lack geo-referencing.

[0006] "Howard Schultz, Allen R. Hanson, Edward M. Riseman, Frank Stolle, Zhigang Zhu, Christopher D. Hayward, Dana Slaymaker. A System for Real-time Generation of Geo-referenced Terrain Models. SPIE Enabling Technologies for Law Enforcement, Boston, Mass., Nov. 5-8, 2000" and "Zhigang Zhu, Edward M. Riseman, Allen R. Hanson, Howard Schultz. An efficient method for geo-referenced video mosaicking for environmental monitoring. Machine Vision and Applications (2005) 16(4):203-216" describe systems for generating 3D structures from aerial images using a laser sensor to precisely measure the elevations. For this, a larger airplane with two camera systems may be used.

[0007] In "M. Brown and D. G. Lowe. Recognising Panoramas. In Proc. ICCV 2003" a purely image-based mosaicking is shown. It describes an automatic approach for feature extraction (using SIFT) and image matching by assuming that the camera rotates about its optical center.

[0008] "WU Yundong, ZHANG Qiang, LIU Shaoqing. A Contrast Among Experiments in Three Low-Altitude Unmanned Aerial Vehicles Photography: Security, Quality & Efficiency. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Vol. XXXVII, Part B1, Beijing, 2008" describes experiments for covering larger areas with UAVs and generating an overview image.

[0009] There are some software tools for image stitching available. However, they have several restrictions/assumptions (camera position and orientation, distance to objects, etc.). For example, AutoPano (http://www.autopano.net) takes a set of images and generates an overview (mosaic) which is visually most appealing. AutoPano stitches for beauty, but in all areas where there are fewer images or less overlap the distortions are still high with AutoPano. Another tool is PTGui Pro (http://www.ptgui.com). PTGui stitches most panoramas fully automatically, but at the same time provides full manual control over every single parameter.
SUMMARY

[0010] According to an embodiment, an apparatus for generating an overview image of a plurality of images, wherein each image of the plurality of images comprises associated meta-data, may have: an image preprocessor configured to preprocess a new image by assigning the new image to a position in the overview image based on a position information comprised by the meta-data of the new image; a storage unit configured to store a plurality of preprocessed or processed images of the overview image, wherein each preprocessed or processed image of the plurality of preprocessed or processed images is assigned to a position in the overview image, and wherein the storage unit is configured to provide the overview image containing the plurality of preprocessed or processed images at their assigned positions for displaying; and an image processor comprising an accuracy information input for receiving an accuracy information of the position information and a controllable processing engine, wherein the image processor is configured to determine an overlap region of the preprocessed new image and a stored preprocessed or stored processed image within the overview image based on the assigned position of the preprocessed new image and the assigned position of the stored preprocessed or stored processed image, wherein the controllable processing engine is configured to process the preprocessed new image by re-adjusting the assigned position of the preprocessed new image based on a comparison of features of the overlap region of the preprocessed new image and features of the overlap region of the stored preprocessed or stored processed image, wherein the controllable processing engine is controlled by the accuracy information of the position information received by the accuracy information input, so that a maximal re-adjustment of the assigned position of the preprocessed new image is limited based on the received accuracy information of the position information, and wherein the storage unit is configured to add the processed new image with the re-adjusted assigned position to the plurality of preprocessed or processed images.

[0011] According to another embodiment, an unmanned aerial vehicle may have: a camera configured to take an image during a flight of the unmanned aerial vehicle; a sensor system configured to determine a position of the unmanned aerial vehicle at a time the image is taken, to acquire a position information associated to the taken image, wherein the sensor system is configured to determine a yaw, a nick or a roll of the unmanned aerial vehicle at a time the image is taken, to acquire a yaw information, a nick information or a roll information associated to the taken image; an accuracy determiner configured to determine an accuracy of the determination of the position of the unmanned aerial vehicle, to acquire an accuracy information associated to the determined position information, wherein the accuracy determiner is configured to determine an accuracy of the determination of the yaw, the nick or the roll of the unmanned aerial vehicle, to acquire an accuracy information associated to the determined yaw information, the nick information or the roll information; and a transmitter configured to transmit the taken image together with associated meta-data containing the position information of the taken image and the accuracy information of the position information, wherein the meta-data further contains the yaw information, the nick information, the roll information, the accuracy information of the yaw information, the accuracy information of the nick information or the accuracy information of the roll information.

[0012] According to another embodiment, a method for providing an image together with associated meta-data taken by an unmanned aerial vehicle may have the steps of: taking an image during a flight of the unmanned aerial vehicle; determining a position of the unmanned aerial vehicle at a time the image is taken, to acquire a position information associated to the taken image; determining a yaw, a nick or a roll of the unmanned aerial vehicle at a time the image is taken, to acquire a yaw information, a nick information or a roll information associated to the taken image; determining an accuracy of the determination of the position of the unmanned aerial vehicle, to acquire an accuracy information associated to the determined position information; determining an accuracy of the determination of the yaw, the nick or the roll of the unmanned aerial vehicle, to acquire an accuracy information associated to the determined yaw information, the nick information or the roll information; and transmitting the taken image together with associated meta-data containing the position information of the taken image and the accuracy information of the position information, wherein the meta-data further contains the yaw information, the nick information, the roll information, the accuracy information of the yaw information, the accuracy information of the nick information or the accuracy information of the roll information.

[0013] According to another embodiment, a method for generating an overview image of a plurality of images, wherein each image of the plurality of images comprises associated meta-data, may have the steps of: preprocessing a new image by assigning the new image to a position in the overview image based on a position information comprised by the meta-data of the new image; storing a plurality of preprocessed or processed images of the overview image, wherein each preprocessed or processed image of the plurality of preprocessed or processed images comprises an assigned position in the overview image; determining an overlap region of the preprocessed new image and a stored preprocessed or stored processed image within the overview image based on the assigned position of the preprocessed new image and the assigned position of the stored preprocessed or stored processed image; processing the preprocessed image by re-adjusting the assigned position of the preprocessed new image based on a comparison of features of the overlap region of the preprocessed new image and the stored preprocessed or stored processed image, wherein a maximal re-adjustment of the assigned position of the preprocessed new image is limited based on an accuracy information of the position information; adding the processed new image with the re-adjusted assigned position to the plurality of preprocessed or processed images; and providing the overview image containing the plurality of preprocessed or processed images at their assigned positions for displaying.

[0014] Another embodiment may have a computer program with a program code for performing the above method for providing an image together with associated meta-data taken by an unmanned aerial vehicle, when the computer program runs on a computer or micro controller.

[0015] Another embodiment may have a computer program with a program code for performing the above method for generating an overview image of a plurality of images, wherein each image of the plurality of images comprises associated meta-data, when the computer program runs on a computer or micro controller.

[0016] An embodiment of the invention provides an apparatus for generating an overview image of a plurality of images. Each image of the plurality of images comprises associated meta-data. The apparatus comprises an image preprocessor, a storage unit and an image processor. The image preprocessor is configured to preprocess a new image by assigning the new image to a position in the overview image based on position information contained by meta-data of the new image. Further, the storage unit is configured to store a plurality of preprocessed or processed images. Each preprocessed or processed image of the plurality of preprocessed or processed images is assigned to a position in the overview image. Additionally, the storage unit is configured to provide the overview image containing the plurality of preprocessed or processed images at their assigned positions for displaying.
Further, the image processor comprises an accuracy information input for receiving an accuracy information of the position information, and a controllable processing engine. The image processor is configured to determine an overlap region of the preprocessed new image and a stored preprocessed or stored processed image within the overview image based on the assigned position of the new image and the assigned position of the stored preprocessed or stored processed image. Additionally, the controllable processing engine is configured to process the new image by re-adjusting the assigned position of the new image based on a comparison of features of the overlap region of the new image and the stored preprocessed or stored processed image. For this, the controllable processing engine is controlled by the accuracy information of the position information received by the accuracy information input, so that a maximal re-adjustment of the assigned position of the new image is limited based on the received accuracy information of the position information.

[0017] Since the re-adjustment of the assigned position of the new image is limited based on the accuracy information of the position information, distances between points in different images can be preserved more accurately than with an image-based stitching algorithm without such a limitation. In this way, a position-based generation of an overview image can be refined by an image-based re-adjustment without the risk of completely losing the reference to real distances (the geo-reference) in the overview image. Further, a very fast generation of an overview image containing the new image may be possible, since a position of the new image in the overview image is already assigned after preprocessing. The processing of the new image for refining the overview image may be done afterwards to improve the smoothness of transitions from the new image to overlapping images, for obtaining a nice-looking picture.

[0018] Another embodiment of the invention provides an apparatus for generating an overview image of a plurality of images comprising a storage unit and an image processor. The storage unit is configured to store a plurality of processed images of the overview image. Each image of the plurality of images is assigned to a position in the overview image. Further, the storage unit is configured to provide the overview image containing the plurality of processed images at their assigned positions for displaying. The image processor is configured to determine feature points of a new image and to compare the determined feature points of the new image with feature points of a stored processed image to identify common feature points and to obtain 3-dimensional positions of the common feature points. Further, the image processor is configured to determine common feature points located within a predefined maximum distance of relevance to a reference plane, based on the 3-dimensional positions of the common feature points, to identify relevant common feature points. Additionally, the image processor is configured to process the new image by assigning the new image to a position in the overview image based on a comparison of an image information of each relevant common feature point of the new image with an image information of each corresponding relevant common feature point of the stored image, without considering common feature points located beyond the predefined maximum distance to the reference plane. Further, the storage unit is configured to add the new processed image with the assigned position to the plurality of images of the overview image.
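The selection of relevant common feature points described above can be sketched as follows. This is a minimal illustration under assumed data representations (3-D points as tuples, the reference plane given by a point on the plane and a normal vector); it is not the patent's implementation, and all names are hypothetical:

```python
from math import sqrt

def relevant_feature_points(points_3d, plane_point, plane_normal, max_distance):
    """Keep only the common feature points lying within max_distance
    of the reference plane; points farther away are dropped and do not
    contribute to the position comparison."""
    nx, ny, nz = plane_normal
    norm = sqrt(nx * nx + ny * ny + nz * nz)
    nx, ny, nz = nx / norm, ny / norm, nz / norm  # unit normal
    px, py, pz = plane_point
    relevant = []
    for (x, y, z) in points_3d:
        # Absolute distance of the 3-D feature point to the plane.
        dist = abs((x - px) * nx + (y - py) * ny + (z - pz) * nz)
        if dist <= max_distance:
            relevant.append((x, y, z))
    return relevant
```

With a ground plane (plane point (0, 0, 0), normal (0, 0, 1)), feature points on elevated structures such as rooftops or treetops then drop out, and only points in or near the ground plane remain relevant.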
[0019] Since only feature points located in or near the reference plane are considered for determining the position of the new image in the overview image, errors caused by considering reference points at largely different elevations can be reduced, and therefore the distance between points in different images can be preserved more accurately and/or the smoothness of the transition between images can be increased to obtain a nice-looking overview image.

[0020] Some embodiments according to the invention relate to an unmanned aerial vehicle comprising a camera, a sensor system, an accuracy determiner and a transmitter. The camera is configured to take an image during a flight of the unmanned aerial vehicle. The sensor system is configured to determine a position of the unmanned aerial vehicle at the time the image is taken to obtain a position information associated to the taken image. Further, the accuracy determiner is configured to determine an accuracy of the determination of the position of the unmanned aerial vehicle to obtain an accuracy information associated to the determined position information. The transmitter is configured to transmit the taken image together with associated meta-data containing the position information of the taken image and the accuracy information of the position information.

[0021] By determining and transmitting an information of the accuracy of the position information, the accuracy information can be taken into account, for example, for the generation of an overview image containing the taken image later.

BRIEF DESCRIPTION OF THE DRAWINGS

[0022] Embodiments of the present invention will be detailed subsequently referring to the appended drawings, in which:

[0023] FIG. 1 is a block diagram of an apparatus for generating an overview image of a plurality of images;

[0024] FIG.
2 is a block diagram of an apparatus for generating an overview image of a plurality of images;

[0025] FIG. 3 is a schematic illustration of position and orientation errors of unmanned aerial vehicles;

[0026] FIG. 4 is a schematic illustration of the generation of an overview image of a plurality of images taken by an unmanned aerial vehicle;

[0027] FIG. 5 is a flowchart of a method for generating an overview image of a plurality of images;

[0028] FIG. 6 is a block diagram of an apparatus for generating an overview image of a plurality of images;

[0029] FIG. 7 is a flowchart of a method for generating an overview image of a plurality of images;

[0030] FIG. 8 is a flowchart of a method for generating an overview image of a plurality of images;

[0031] FIG. 9 is a block diagram of an unmanned aerial vehicle;

[0032] FIG. 10a is an example for an overview image of a plurality of images;

[0033] FIG. 10b is another example for an overview image of a plurality of images;

[0034] FIG. 11a is a diagram indicating a comparison between correlations of the overlapping parts of two adjacent images in different approaches;

[0035] FIG. 11b is a diagram indicating the relative distance between the estimated position and the GPS position;

[0036] FIG. 12 is an illustration of a position-based alignment of eight images;

[0037] FIG. 13 is an illustration of a position- and orientation-based alignment of eight images; and

[0038] FIG. 14 is an illustration of an image-based alignment using SIFT features.

DETAILED DESCRIPTION OF THE INVENTION

[0039] In the following, the same reference numerals are partly used for objects and functional units having the same or similar functional properties, and the description thereof with regard to one figure shall apply also to the other figures in order to reduce redundancy in the description of the embodiments.

[0040] FIG. 1 shows a block diagram of an apparatus 100 for generating an overview image 124 of a plurality of images according to an embodiment of the invention.
Each image of the plurality of images comprises associated meta-data. The apparatus 100 comprises an image preprocessor 110, a storage unit 120 and an image processor 130. The image preprocessor 110 is connected to the image processor 130 and the image processor 130 is connected to the storage unit 120. The image preprocessor 110 preprocesses a new image 102 by assigning the new image 102 to a position in the overview image 124 based on a position information contained by the meta-data of the new image 102. Further, the storage unit 120 stores a plurality of preprocessed or processed images 112, 122, wherein each preprocessed or processed image 112, 122 of the plurality of preprocessed or processed images comprises an assigned position in the overview image 124. Additionally, the storage unit 120 is able to provide the overview image 124 containing the plurality of preprocessed or processed images at their assigned positions for displaying. The image processor 130 comprises an accuracy information input 134 for receiving an accuracy information 104 of the position information and a controllable processing engine 132. The accuracy information input 134 is connected to the controllable processing engine 132 and the controllable processing engine 132 is connected to the image preprocessor 110 and the storage unit 120. Further, the image processor 130 determines an overlap region of the preprocessed new image 112 and a stored preprocessed or stored processed image 112, 122 within the overview image 124 based on the assigned position of the new image 112 and the assigned position of the stored preprocessed or stored processed image 112, 122. The controllable processing engine 132 processes the preprocessed new image 112 by re-adjusting the assigned position of the preprocessed new image 112 based on a comparison of features of the overlap region of the preprocessed new image 112 and features of the overlap region of the stored preprocessed or stored processed image 112, 122. For this, the controllable processing engine is controlled by the accuracy information of the position information 104 received by the accuracy information input 134, so that a maximal re-adjustment of the assigned position of the preprocessed new image 112 is limited based on the received accuracy information 104 of the position information. Additionally, the storage unit 120 adds the processed new image 122 with the re-adjusted assigned position to the plurality of preprocessed or processed images of the overview image 124.

[0041] Since the re-adjustment of the assigned position of the preprocessed new image 112 is limited based on the received accuracy information 104 of the position information, the image-based processing of the preprocessed image 112 and the overlapping stored preprocessed or processed image 112, 122 preserves the connection of the distances of points in different images of the overview image 124 to the distances in reality at least with the accuracy of the position information contained by the meta-data. In this way, a position-based alignment of images in an overview image can be refined by an image-based alignment without losing the whole information about distances between points in the overview image. So, the accuracy of preserving the distances in the overview image can be significantly increased. Further, the smoothness of transitions of images in the overview image may be increased in comparison to only position-based or orientation-based alignments.

[0042] The new image 102 may be provided, for example, from a storage unit (e.g. a memory card of a digital camera or a storage device of a computer) or may be transmitted online from an unmanned aerial vehicle or an airplane taking images of an area.

[0043] Each image comprises associated meta-data, which is stored together with the image or transmitted together with the image, for example. The meta-data contains a position information of the image, but may also contain further data (roll, pitch, yaw of a camera the image is taken with).
The position information may be, for example, a GPS position (global positioning system) or another position definition indicating a relative or an absolute position allowing to assign the new image to a position in the overview image.

[0044] The storage unit 120 stores a plurality of preprocessed or processed images of the overview image. In this connection, a preprocessed image 112 is an image comprising a position in the overview image assigned by the image preprocessor 110 before the assigned position is re-adjusted by the image processor 130. Consequently, a processed image 122 is an image after re-adjustment of the assigned position by the image processor 130.

[0045] For example, a preprocessed image 112 may be stored directly after preprocessing by the storage unit 120 to enable the storage unit 120 to provide an overview image containing the new image very fast. Afterwards, the new image may be processed by the image processor 130 to refine the assigned position of the preprocessed new image 112. Alternatively, the preprocessed image 112 is directly processed by the image processor 130 without storing the preprocessed image 112.

[0046] The storage unit 120 is able to provide the overview image 124 to a display (which may be an optional part of the apparatus 100).

[0047] The image processor 130 determines an overlap region of the preprocessed new image 112 and a stored preprocessed or stored processed image 122. Although it may be possible that none of the already stored preprocessed or stored processed images 112, 122 overlap with the preprocessed new image 112, it is getting more likely when the number of images already contained by the overview image stored by the storage unit 120 increases. If a preprocessed new image 112 does not comprise an overlap region with any of the other images 112, 122 of the overview image 124 already stored by the storage unit 120, the preprocessed new image 112 may be stored directly by the storage unit 120.
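As an illustrative sketch of the position-based assignment described in paragraphs [0043] and [0044] (not part of the application; the local equirectangular approximation, the function name and the image-axis convention are all assumptions), a GPS position from the meta-data may be mapped to overview-image pixel coordinates as follows:

```python
import math

EARTH_RADIUS_M = 6371000.0

def gps_to_overview_px(lat, lon, origin_lat, origin_lon, metres_per_px):
    """Assign a GPS position to (x, y) pixel coordinates in the overview
    image using a local equirectangular approximation (adequate for the
    small areas a UAV covers). origin_* is the overview image's (0, 0)."""
    north = math.radians(lat - origin_lat) * EARTH_RADIUS_M
    east = (math.radians(lon - origin_lon) * EARTH_RADIUS_M
            * math.cos(math.radians(origin_lat)))
    # x grows eastwards, y grows southwards (common image convention)
    return (east / metres_per_px, -north / metres_per_px)
```

A point roughly 111 m north of the origin then lands about 111 pixel rows above it at a scale of 1 m per pixel.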
[0048] The controllable processing engine 132 processes the preprocessed new image 112 by re-adjusting the assigned position, wherein the re-adjustment is limited based on the accuracy information 104 of the position information. The accuracy information 104 of the position information may be the same for all images, may be updated periodically or depending on a change of the accuracy of the position information. Alternatively, each image may comprise an individual accuracy information 104 of the individual position information contained by the associated meta-data. For example, if the images are taken by an unmanned aerial vehicle, the unmanned aerial vehicle may determine a position and an accuracy of the position at the time a picture is taken and transmit this information together with the image to the apparatus 100. Alternatively, the unmanned aerial vehicle may transmit accuracy information periodically or when the accuracy changes more than a predefined threshold. In another example, the accuracy of the position information is known and stored by a storage device, which provides the accuracy information of the position information to the accuracy information input 134. By limiting the re-adjustment of the assigned position by the accuracy information 104 of the position information, it may be guaranteed that the loss of accuracy of the connection between distances of points in the overview image to distances in reality is not larger than the already existent inaccuracy due to the limited accuracy of the position information.

[0049] The re-adjustment is done based on a comparison of features of the overlap region of the preprocessed new image 112 and the stored preprocessed or stored processed image 112, 122. For example, the features of the overlap region may be common feature points identified in the overlap regions of both images or areas around feature points of the overlap regions of both images.
Although only a re-adjustment of the assigned position of the preprocessed new image 112 is mentioned, at the same time a re-adjustment (or a further re-adjustment) of the assigned position of the stored preprocessed or stored processed image 112, 122 may be done. In this case, the re-adjustment of the assigned position of the stored preprocessed or stored processed image 112, 122 may also be limited by an accuracy information of the position information of the stored preprocessed or stored processed image 112, 122.

[0050] The image preprocessor 110, the storage unit 120 and the image processor 130 may be, for example, independent hardware units or part of a computer or micro controller, as well as a computer program or a software product configured to run on a computer or micro controller.

[0051] Additionally to the assignment of the new image 102 to the position in the overview image 124, the image preprocessor 110 may preprocess the new image 102 by considering a roll information, a pitch information and/or a yaw information to correct an orientation and/or a perspective distortion of the new image 102. For example, the new image 102 may be taken from an unmanned aerial vehicle. Such an unmanned aerial vehicle is usually small and therefore susceptible to wind and other environmental influences. Therefore, sensors (IMU, inertial measurement unit) of the unmanned aerial vehicle may determine a roll, a pitch and a yaw of the unmanned aerial vehicle (or of the camera) at the time an image is taken and add a roll information, a pitch information and/or a yaw information to the meta-data of the image.
The image preprocessor 110 can use this information contained by the meta-data to compensate an orientation offset or a perspective distortion of the new image 102.

[0052] Further, the controllable processing engine 132 may process the preprocessed new image 112 by re-adjusting an orientation or a perspective distortion based on the comparison of the features of the overlap region. In this case, the controllable processing engine may be further controlled by an accuracy information 104 of the roll information, the pitch information or the yaw information received by the accuracy information input 134, so that the maximal re-adjustment of the assigned position of the preprocessed new image 112 and a maximal re-adjustment of the orientation or the perspective distortion are limited based on the accuracy information 104 of the roll information, the pitch information or the yaw information. In other words, the maximal re-adjustment of the assigned position may be limited by the accuracy of the determination of the position, which may also depend on the accuracy of the roll, the pitch and/or the yaw. In this way, for example, the position and orientation of the camera the image was taken with (on an unmanned aerial vehicle or another vehicle taking that new image) may be considered during the re-adjustment of the assigned position of the new image. Additionally, also an orientation or a perspective distortion may be re-adjusted and limited based on the accuracy information 104.

[0053] FIG. 3 shows an example for position and orientation errors, which can be taken into account for the re-adjustment of the position of the new image 112. It indicates the GPS error range (the real position is in this range) and the tilting error range. The sum of these two errors may give the total positioning error, which may be represented by the accuracy information of the position information.
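The contribution of the tilting error to the total positioning error indicated in FIG. 3 can be illustrated with a simple nadir-offset model. This is an editorial sketch, not the application's method; the small-angle flat-ground model and the function name are assumptions: a camera tilted by an angle a at altitude h images a ground point shifted by roughly h·tan(a) from the point straight below the vehicle.

```python
import math

def ground_offset_from_tilt(roll_rad, pitch_rad, altitude_m):
    """Estimate where the camera's optical axis hits flat ground relative
    to the point straight below the vehicle. A tilt of angle a at
    altitude h shifts the footprint centre by about h * tan(a)."""
    return (altitude_m * math.tan(roll_rad),    # lateral shift from roll
            altitude_m * math.tan(pitch_rad))  # forward shift from pitch
```

At 100 m altitude, an uncompensated tilt whose tangent is 0.1 (about 5.7 degrees) already displaces the footprint by some 10 m, which is why the IMU angles and their accuracies matter for the position error budget.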
It shows an example for a maximum position error (accuracy of the position information) if the new image 102 is taken by an unmanned aerial vehicle.

[0054] In some embodiments according to the invention, the image preprocessor 110 may receive successively a plurality of new images 102 and preprocesses each received new image 102. Further, the storage unit 120 may store each preprocessed new image 112 and provides an updated overview image 124 after storing a predefined number of preprocessed new images 112. The predefined number of images may be one, so that the storage unit 120 may provide an updated overview image after storing each preprocessed new image 112. Alternatively, the predefined number of preprocessed images may be higher than one, so that the overview image is updated periodically after storing the predefined number of preprocessed images. In this way, an overview image 124 can be provided very fast, since the position- and/or orientation-based preprocessing can be done significantly faster than the image-based processing by the image processor 130. For example, the apparatus 100 may receive continuously new images from an unmanned aerial vehicle, which can be provided for displaying in an overview image directly after preprocessing. A re-adjustment of the assigned position by the image processor 130 for refinement of the transition between overlapping images may be done later on.

[0055] FIG. 2 shows a block diagram of an apparatus 200 for generating an overview image 124 of a plurality of images according to an embodiment of the invention. The apparatus 200 is similar to the apparatus shown in FIG. 1, but comprises additionally an optional unmanned aerial vehicle control unit 240 connected to the storage unit 120, and the image preprocessor 110 is connected to the storage unit 120.

[0056] In this way, the storage unit 120 may add the preprocessed new image 112 with the assigned position to the plurality of preprocessed or processed images.
Further, the storage unit 120 may provide the overview image 124 containing the preprocessed new image 112 at the assigned position for displaying before the preprocessed new image 112 is processed by the image processor 130, as already mentioned above. In this way, an overview image 124 already containing the preprocessed new image 112 can be provided very fast. Afterwards, the accuracy of the distance between two points in different images and/or the smoothness of the transitions between the preprocessed new image 112 and other overlapping images of the plurality of preprocessed or processed images 112, 122 may be increased by re-adjusting the assigned position of the preprocessed new image 112 by the image processor 130. In other words, after preprocessing the new image 112, the storage unit 120 may provide the overview image 124, and later the storage unit 120 may provide a refined overview image 124 after processing the preprocessed image 112 by the image processor 130.

[0057] If the preprocessed new image 112 is stored by the storage unit 120, the storage unit 120 may add the processed new image 122 by replacing the stored preprocessed new image 112 by the processed new image 122.

[0058] Optionally, the apparatus 200 may comprise an unmanned aerial vehicle control unit 240. The unmanned aerial vehicle control unit 240 may generate a control signal 242 for piloting an unmanned aerial vehicle to a region corresponding to an area of the overview image 124 not already covered by an image of the plurality of preprocessed or processed images 112, 122. In other words, if an unmanned aerial vehicle is used for taking images of an area from which an overview image should be generated, then the unmanned aerial vehicle control unit 240 may pilot the unmanned aerial vehicle to uncovered areas of the overview image to obtain images from the uncovered areas.

[0059] Fittingly, FIG.
4 shows an example of an apparatus 200 piloting an unmanned aerial vehicle 460 with a camera 462 for taking images of the area below the unmanned aerial vehicle 460. The apparatus 200 may transmit control signals to the unmanned aerial vehicle 460 for piloting the unmanned aerial vehicle 460 to uncovered regions of the overview image 452. The overview image 452 may be provided to a display 450. In this example, the display 450 shows covered areas 454 and uncovered areas of the overview image 452. The unmanned aerial vehicle 460 may take images from the uncovered areas and provide the image data as well as corresponding meta-data (e.g. position information, roll information, pitch information and/or yaw information) to the apparatus 200 for generating the overview image 452 of the plurality of images 454.

[0060] FIG. 5 shows a flowchart of a method 500 for generating an overview image of a plurality of images according to an embodiment of the invention. Each image of the plurality of images comprises associated meta-data. The method 500 comprises preprocessing 510 a new image, storing 520 a plurality of preprocessed or processed images of the overview image, determining 530 an overlap region of the preprocessed new image and a stored preprocessed or stored processed image within the overview image, processing 540 the preprocessed new image, adding 550 the processed new image to the plurality of preprocessed or processed images and providing 560 the overview image containing the plurality of preprocessed or processed images. The new image is preprocessed 510 by assigning the new image to a position in the overview image based on a position information being contained by the meta-data of the new image. Further, each stored preprocessed or stored processed image of the plurality of preprocessed or processed images comprises an assigned position in the overview image.
The overlap region of the preprocessed new image and the stored preprocessed or stored processed image within the overview image is determined 530 based on the assigned position of the preprocessed new image and the assigned position of the stored preprocessed or stored processed image. Further, the processing 540 of the preprocessed new image is done by re-adjusting the assigned position of the preprocessed new image based on a comparison of features of the overlap region of the preprocessed new image and the stored preprocessed or stored processed image. The processed new image is added 550 to the plurality of preprocessed or processed images with the re-adjusted assigned position. Further, the overview image is provided 560 containing the plurality of preprocessed or processed images at their assigned positions for displaying.

[0061] Further, the method 500 may comprise adding the preprocessed new image to the plurality of preprocessed or processed images of the overview image, displaying the overview image containing the preprocessed new image at the assigned position and displaying the overview image containing the processed new image at the re-adjusted assigned position. In this way, a rough overview image may be displayed very fast and a refined overview image may be displayed later.

[0062] Additionally, the method 500 may comprise further steps representing the optional features of the described concept mentioned above.

[0063] FIG. 6 shows a block diagram of an apparatus 600 for generating an overview image 614 of a plurality of images according to an embodiment of the invention. The apparatus 600 comprises a storage unit 610 and an image processor 620. The storage unit 610 is connected to the image processor 620. The storage unit 610 stores a plurality of processed images of the overview image 614. Each processed image 612 of the plurality of processed images comprises an assigned position in the overview image 614.
Further, the storage unit 610 is able to provide the overview image 614 containing the plurality of processed images at their assigned positions for displaying. The image processor 620 determines feature points of a new image 602 and compares the determined feature points of the new image 602 with feature points of a stored processed image 612 to identify common feature points and to obtain 3-dimensional positions of the common feature points. Further, the image processor 620 determines common feature points located within a predefined maximum distance of relevance to a reference plane based on the 3-dimensional positions of the common feature points to identify relevant common feature points. Additionally, the image processor 620 processes the new image 602 by assigning the new image to a position in the overview image 614 based on a comparison of an image information of each relevant common feature point of the new image 602 with an image information of each corresponding relevant common feature point of the stored processed image 612 without considering common feature points located beyond the predefined maximum distance to the reference plane. Further, the storage unit 610 adds the processed new image 612 with the assigned position to the plurality of processed images of the overview image 614.
[0064] By taking into account only common feature points of the new image 602 and the stored processed image 612 being located within a predefined maximum distance of relevance to a reference plane, a falsification of the assigned position of the new image in the overview image 614 by common feature points located at significantly different elevation levels than the relevant common feature points may be reduced. In this way, distances between two points in different images may be preserved more accurately and/or the smoothness of transitions between overlapping images may be increased.

[0065] The storage unit 610 may store each processed image together with the assigned position in the overview image 614. The overview image 614 may be provided to a display (which may be an optional part of the apparatus 600).

[0066] The feature points determined by the image processor may be, for example, corners or walls of houses, the contours of a car, road markings or similar features. These feature points may be determined by various algorithms as, for example, SIFT (scale-invariant feature transform) or SURF (speeded up robust features). After processing the new image 602 by the image processor 620, the storage unit 610 may add the processed new image 612 together with the determined feature points of the new image to the plurality of processed images. In this way, the feature point determination for an image may only be necessitated one time.

[0067] After determining the feature points of the new image 602, these feature points may be compared to feature points of a stored processed image (e.g. an overlapping image) to identify common feature points by using, for example, a correlation function (e.g. nearest neighbor). Further, for these common feature points, 3-dimensional positions of the common feature points may be determined, for example, by using methods of multi-view geometry.
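The nearest-neighbor identification of common feature points mentioned in paragraph [0067] can be sketched as follows. This is only an illustration: the representation of descriptors as plain numeric vectors, the Euclidean metric and the rejection threshold `max_dist` are assumptions, not details from the application.

```python
import math

def match_features(desc_new, desc_stored, max_dist=0.5):
    """Identify common feature points by nearest-neighbour matching of
    descriptor vectors (e.g. SIFT/SURF descriptors). Returns index pairs
    (i in new image, j in stored image) whose best match is close enough."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    matches = []
    for i, d_new in enumerate(desc_new):
        j, best = min(((j, dist(d_new, d_s))
                       for j, d_s in enumerate(desc_stored)),
                      key=lambda t: t[1])
        if best <= max_dist:  # reject weak matches
            matches.append((i, j))
    return matches
```

The surviving index pairs are the common feature points whose 3-dimensional positions would then be estimated by multi-view geometry.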
[0068] Then, common feature points located within a predefined maximum distance of relevance to a reference plane are determined. The reference plane may be predefined or may be determined by the image processor 620 based on the 3-dimensional positions of the common feature points. For example, a predefined reference plane may be a horizontal plane of a 3-dimensional coordinate system (e.g. Cartesian coordinate system) with the z-coordinate equal to 0 or another predefined value. Alternatively, the image processor 620 may determine the reference plane by fitting a horizontal plane to the 3-dimensional positions of the common feature points, so that a maximal number of common feature points are located within a predefined maximal fitting distance to the reference plane, or so that the reference plane is as far away as possible from a camera position the new image was taken from, with at least one common feature point located within the predefined maximal fitting distance. The maximal fitting distance may vary between zero and, for example, a value depending on a maximal height difference of the common feature points. Alternatively, the reference plane may be a non-horizontal plane.

[0069] By choosing the maximal predefined distance of relevance, the accuracy of preserving the distances between points in different images and/or the smoothness of transitions between images may be influenced. By choosing a large maximum distance of relevance, more relevant common feature points are identified, which are taken into account for assigning the position of the new image. This may increase the stability, since more common feature points are considered, but may also increase the error obtained by considering points at different elevation levels. An opposite effect is obtained by choosing a low predefined maximum distance of relevance.
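Fitting a horizontal reference plane so that a maximal number of common feature points lies within the maximal fitting distance (paragraph [0068]) can be sketched with a brute-force search over candidate heights. The function names, the z-up coordinate convention and the candidate-plane-through-each-point strategy are assumptions for illustration only.

```python
def fit_reference_plane(points_3d, max_fit_dist):
    """Pick the height z0 of a horizontal reference plane so that a
    maximal number of 3-dimensional feature points (x, y, z) lies within
    max_fit_dist of it, trying a candidate plane through each point."""
    best_z, best_count = 0.0, -1
    for _, _, z_cand in points_3d:
        count = sum(1 for _, _, z in points_3d
                    if abs(z - z_cand) <= max_fit_dist)
        if count > best_count:
            best_z, best_count = z_cand, count
    return best_z

def relevant_points(points_3d, z0, max_relevance_dist):
    # keep only common feature points near the reference plane
    return [p for p in points_3d if abs(p[2] - z0) <= max_relevance_dist]
```

A feature point on a rooftop far above the fitted plane would simply be excluded from `relevant_points` and therefore ignored when assigning the position of the new image.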
The predefined maximum distance of relevance may also be zero, so that only common feature points are identified as relevant which are located in the reference plane. The maximal fitting distance may be equal to the maximal distance of relevance.

[0070] For example, common feature points may be road markings and the reference plane may be the road. In this way, only common feature points located at the height of the street may be considered for assigning the position. In this way, an error obtained by considering feature points at different elevation levels (e.g. feature points located at cars, houses or the like) may be suppressed completely or nearly completely.

[0071] The image information of a relevant common feature point may be, for example, the position of the feature point in the image itself or the image data of an area of the image around the relevant common feature point. More concretely, the image information of a relevant common feature point in the new image 602 may be the position of the relevant common feature point in the new image 602 and the image information of a relevant common feature point in the stored image 612 may be a position of the relevant common feature point in the stored image. With this, the image processor 620 may assign the new image 602 to a position in the overview image 614, so that a distance between a position of a relevant common feature point in the new image 602 and a position of a corresponding relevant common feature point in the stored image 612 is minimized, or so that a sum of distances between the positions of the relevant common feature points in the new image 602 and the positions of the respective corresponding relevant common feature points in the stored image 612 is minimized.

[0072] Alternatively, the image information of a relevant common feature point in the new image 602 may be an area of the new image 602 around the relevant common feature point of a predefined size and the image information of a relevant common feature point in the stored image 612 may be an area of the stored image 612 around a relevant common feature point of the predefined size.
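For the position-based variant of paragraph [0071], if the new image is only translated, minimising the summed squared distance between matching relevant feature-point positions has a closed form: the mean point-to-point offset. This least-squares formulation and the function name are assumptions for illustration; the application itself only speaks of minimising distances.

```python
def assign_position_by_points(new_pts, stored_pts):
    """Compute the translation of the new image that minimises the summed
    squared distance between each relevant common feature point in the new
    image and its corresponding point in the stored image: the mean offset.

    new_pts / stored_pts: matched lists of (x, y) overview coordinates."""
    n = len(new_pts)
    dx = sum(s[0] - p[0] for p, s in zip(new_pts, stored_pts)) / n
    dy = sum(s[1] - p[1] for p, s in zip(new_pts, stored_pts)) / n
    return (dx, dy)  # translation to apply to the new image
```

In an apparatus 100-style combination, this translation would additionally be clamped by the accuracy information before being applied.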
In this case, the image processor 620 may assign the new image 602 to a position in the overview image 614, so that a correlation value of an area around a relevant common feature point of the new image 602 and an area around a corresponding relevant common feature point of the stored image 612 is maximized, or so that the sum of correlation values of areas around relevant common feature points of the new image 602 and areas around respective corresponding relevant common feature points of the stored image 612 is maximized. A correlation value may be determined, for example, based on a given correlation function for correlating images or parts of images.

[0073] Further, both mentioned examples may be combined, so that the image processor 620 may assign the new image 602 to a position in the overview image 614 based on the minimization of the distance or the sum of distances and re-adjust the assignment of the new image 602 to a position in the overview image 614 based on the maximization of a correlation value or a sum of correlation values.

[0074] The image processor 620 may determine feature points of the new image 602 for the whole image or only for parts of the new image 602 overlapping another image of the plurality of processed images. Determining feature points from the whole image may increase the efforts, but the determined feature points can be stored together with the new image after processing, so that the feature points may only be determined once.

[0075] Independent from the determination of the feature points, only feature points located within an overlapping region of the new image and the stored image 612 may be considered for identifying common feature points. For this, for example, the image processor 620 may determine an overlap region of the new image 602 and the stored image 612.
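A correlation value as compared in paragraph [0072] may, for example, be a normalised cross-correlation of the two areas around corresponding relevant common feature points. The application does not fix a particular correlation function; representing each patch as a flat list of grey values is likewise an assumption of this sketch.

```python
import math

def ncc(patch_a, patch_b):
    """Normalised cross-correlation of two equally sized grey-value
    patches (flat lists). Returns a value in [-1, 1]; 1 means the patches
    differ at most by brightness and contrast."""
    n = len(patch_a)
    mean_a = sum(patch_a) / n
    mean_b = sum(patch_b) / n
    da = [a - mean_a for a in patch_a]
    db = [b - mean_b for b in patch_b]
    denom = math.sqrt(sum(x * x for x in da) * sum(x * x for x in db))
    return sum(x * y for x, y in zip(da, db)) / denom if denom else 0.0
```

Maximising this value over candidate positions of the new image, as described above, makes the alignment insensitive to global brightness differences between the two images.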
Further, the image processor 620 may compare determined feature points of the new image 602 located in the overlap region with feature points of the stored image 612 located in the overlap region, while determined feature points outside the overlap region are neglected for the identification of common feature points.

[0076] The storage unit 610 and the image processor 620 may be, for example, independent hardware units or part of a computer or micro controller, as well as a computer program or a software product configured to run on a computer or a micro controller.

[0077] FIG. 7 shows a flowchart of a method 700 for generating an overview image of a plurality of images according to an embodiment of the invention. The method 700 comprises storing 710 a plurality of processed images, determining 720 feature points of a new image, comparing 730 the determined feature points of the new image with feature points of a stored processed image, determining 740 common feature points, processing 750 the new image, adding 760 the processed new image to the plurality of processed images of the overview image and providing 770 the overview image. Each stored processed image of the plurality of processed images comprises an assigned position in the overview image. The determined feature points of the new image are compared 730 with feature points of a stored processed image to identify common feature points and to obtain 3-dimensional positions of the common feature points. Further, common feature points located within a predefined maximum distance of relevance to a reference plane are determined 740 based on the 3-dimensional positions of the common feature points to identify relevant common feature points.
The new image is processed 750 by assigning the new image to a position in the overview image based on a comparison of an image information of each relevant common feature point of the new image with an image information of each corresponding relevant common feature point of the stored processed image, without considering common feature points located beyond the predefined maximum distance of relevance to the reference plane. Further, the processed new image is added 760 to the plurality of processed images of the overview image with the assigned position. Additionally, the overview image containing the plurality of processed images at their assigned positions is provided 770 for displaying.

[0078] Optionally, the method 700 may comprise further steps representing features of the described concept mentioned above.

[0079] Some embodiments according to the invention relate to an apparatus for generating an overview image of a plurality of images combining the features of the apparatus shown in FIG. 1 and the features of the apparatus shown in FIG. 6.

[0080] For example, the image processor 130 of apparatus 100 may readjust the assigned position of the preprocessed new image based on a comparison of an image information of each relevant common feature point of the new image with an image information of each corresponding relevant common feature point of a stored processed image, without considering common feature points located beyond a predefined maximum distance of relevance to a reference plane. For this, the image processor 130 may determine feature points of the preprocessed new image and may compare the determined feature points of the preprocessed new image with feature points of a stored processed image to identify common feature points and to obtain 3-dimensional positions of the common feature points.
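In the simplest case, the selection of relevant common feature points from their 3-dimensional positions reduces to a distance test against the reference plane. A minimal sketch, assuming a horizontal reference plane z = const and an illustrative threshold value:

```python
def relevant_common_feature_points(points_3d, plane_z=0.0, max_distance=0.5):
    """Keep only common feature points whose 3-D position (x, y, z) lies
    within the predefined maximum distance of relevance to the reference
    plane; points on elevated objects (cars, trees, buildings) are rejected."""
    return [p for p in points_3d if abs(p[2] - plane_z) <= max_distance]
```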
Further, the image processor 130 may determine common feature points located within a predefined maximum distance of relevance to a reference plane, based on the 3-dimensional positions of the common feature points, to identify relevant common feature points.

[0081] Optionally, the apparatus 100 may realize further features, for example, mentioned in connection with the apparatus 600 shown in FIG. 6.

[0082] The other way around, for example, the apparatus 600 may additionally comprise an image preprocessor 110, and the image processor 620 may comprise an accuracy information input 134 and a controllable processing engine 132 as mentioned in connection with the apparatus 100 shown in FIG. 1. In this example, each image of the plurality of images may comprise associated meta-data. Further, the image preprocessor 110 may preprocess a new image by assigning the new image to a position in the overview image based on a position information contained by the meta-data of the new image. The storage unit 610 may store a plurality of preprocessed or processed images of the overview image, wherein each preprocessed or processed image of the plurality of preprocessed or processed images comprises an assigned position in the overview image. The storage unit 610 may provide an overview image containing the plurality of preprocessed or processed images at their assigned positions for displaying. Further, the image processor 620 may determine an overlap region of the preprocessed new image and a stored preprocessed or stored processed image within the overview image based on the assigned position of the preprocessed new image and the assigned position of the stored preprocessed or stored processed image.
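The preprocessing step, i.e. assigning the new image to a position in the overview image from the position information in its meta-data, can be sketched with a local tangential-plane approximation around the overview origin. This is a simplified stand-in for the WGS84 mapping used later in the document; the constant and the pixel scale are illustrative assumptions.

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS84 equatorial radius

def geo_to_overview_px(lat, lon, origin_lat, origin_lon, px_per_meter):
    """Map a camera position (degrees) to pixel coordinates in the overview
    image, using an equirectangular tangential-plane approximation around
    the overview origin; y grows downward in image coordinates."""
    dlat = math.radians(lat - origin_lat)
    dlon = math.radians(lon - origin_lon)
    north_m = dlat * EARTH_RADIUS_M
    east_m = dlon * EARTH_RADIUS_M * math.cos(math.radians(origin_lat))
    return (east_m * px_per_meter, -north_m * px_per_meter)
```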
The controllable processing engine 132 may process the preprocessed image by re-adjusting the assigned position of the preprocessed new image according to the originally defined image processor 620, but with the restriction that the controllable processing engine 132, and in that way the image processor 620, is controlled by the accuracy information of the position information received by the accuracy information input, so that a maximal re-adjustment of the assigned position of the preprocessed new image is limited based on the received accuracy information of the position information.

[0083] Optionally, the apparatus 600 may realize further features, for example, mentioned in connection with apparatus 100 shown in FIG. 1 or apparatus 200 shown in FIG. 2.

[0084] In the following, a detailed example for using a combination of the features described above is given. In this example, the images are taken by an unmanned aerial vehicle, although the described inventive concept may also be used or implemented if the images are provided, for example, by a storage unit (e.g. a memory card or multimedia card of a digital camera, or the hard disk of a computer).

[0085] A very simple and naive approach is to align the images based on the camera's position. Hence, for image alignment the world coordinates of the camera are mapped to corresponding pixel coordinates in the general overview image. Defining the origin of the overview image of the observed target area as (lat0, lon0, alt0) in world coordinates, all image coordinates are related to this origin on the tangential plane (CTP) by approximation to the earth model WGS84. Given the camera's position, the area covered by the picture in world coordinates relative to the origin is computed taking into account the camera's intrinsic parameters. The relative world coordinates are directly related to the pixel coordinates in the generated overview image. An example of the resulting overview image is depicted in FIG.
12, using the placement function with the transformation being just a simple translation for each image. In this approach, reasonably accurate position information and a nadir view are assumed, but the camera's orientation is not taken into account. Obviously, effects introduced by non-planar surfaces cannot be compensated with this approach. A more advanced approach is to extend the naive position-based alignment by compensating the camera's orientation deviation (i.e. roll, pitch, yaw angles). The placement function of the individual images to generate the overview image is the same as before, but instead of considering only translation, a perspective transformation with eight degrees of freedom may be used.

[0086] If a nadir view (i.e. neglecting deviation of roll and pitch angles) is assumed, the transformation is reduced to a similarity transformation. An example is shown in FIG. 13.

[0087] Further, image-based alignment can be categorized into pixel-based and feature-based methods. The idea is to find the transformation, and consequently the position, of each new image which maximizes the quality function q(T_i(I_i), O_i).

[0088] The pixel-based approaches are computationally more expensive because the quality function is computed from all pixels in the overlapping parts of two images. Feature-based approaches try to reduce the computational effort by first extracting distinctive feature points and then matching the feature points in overlapping parts. Depending on the chosen degrees of freedom, the resulting transformation ranges from a similarity transformation to a perspective transformation. The benefit of this approach is that the generated overview image is visually more appealing. But on the other hand, the major disadvantages are that the search space grows with the number of images to be stitched and the images may get distorted. An example is shown in FIG.
14.

[0089] According to one aspect, multiple small-scale UAVs may be deployed to support first responders in disaster assessment and disaster management. In particular, commercially available quadrocopters may be used since they are agile, easy to fly and very stable in the air due to sophisticated on-board control. The UAVs may be equipped with an RGB camera (red, green, blue).

[0090] The intended use-case can be sketched as follows: The operator first specifies the areas to be observed on a digital map and defines the quality parameters for each area (see for example "M. Quaritsch, B. Stojanovski, C. Bettstetter, G. Friedrich, H. Hellwagner, M. Hofbauer, M. Shah and B. Rinner, Collaborative Microdrones: Applications and Research Challenges. In Proceedings of the Second International Conference on Autonomic Computing and Communication Systems (Autonomics 2008), page 7, Turin, Italy, September 2008"). Quality parameters include, for example, the spatial and temporal resolution of the generated overview image, and the minimum and maximum flight altitude, among others.

[0091] Based on the user's input, the system generates plans for the individual drones to cover the observation areas (see for example "M. Quaritsch, K. Kruggl, D. Wischounig-Strucl, S. Bhattacharya, M. Shah, and B. Rinner, Networked UAVs as Aerial Sensor Network for Disaster Management Applications. e&i Journal, 127(3):56-63, March 2010"). Therefore, the observation areas are partitioned into smaller areas covered by a single picture taken from a UAV flying at a certain height. The partitioning has to consider a certain overlap of neighboring images, which is useful in the stitching process. Given a partitioning, the continuous areas to be covered can be discretized to a set of so-called picture-points. The picture-points are placed in the center of each partition at the given height. The pictures are taken with the camera pointing downwards (nadir view).

[0092] The mission planner component (e.g.
unmanned aerial vehicle control unit) generates routes for the individual UAVs such that each picture-point is visited, taking into account the UAVs' resource limitations. The images, together with meta-data (i.e. the position and orientation of the camera), are transferred to the base-station during flight, where the individual images are stitched to an overview image.

[0093] The major goal is to generate an overall image O of the target area given a set of n individual images {I_i}. For example, the overall image can be iteratively constructed as follows:

O_{i+1} = Merge(T_i(I_i), O_i)

where O_0 is an empty background matrix, T_i is a transformation function and the Merge function combines the transformed image with the overall image.

[0094] This mosaicking can be described as an optimization problem, in which T_i has to be found in a way that it maximizes a quality function q(T_i(I_i), O_i). This quality function, based on the system use-case, balances the visual appearance (improving the smoothness of transitions) and the geo-referencing accuracy (accuracy of preserving distances). While in some applications it is more important to have a visually appealing overview image, other applications may involve accurate geo-referencing in the overview image. A quality function may be used that is a combination of the correlation between overlapping images and relative distances in the generated overview image compared to the ground truth.

[0095] Some challenges for solving the problem using images from low-flying, small-scale UAVs are mentioned in the following.

[0096] When taking images from low altitude, the assumption of a planar surface is no longer true. Objects such as buildings, trees and even cars cause high perspective distortions in images. Without a common ground plane, the matching of overlapping images may utilize depth information. Image transformations exploiting correspondences of points at different elevations may result in severe matching errors.
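The iterative construction O_{i+1} = Merge(T_i(I_i), O_i) of paragraph [0093] can be sketched as follows, with T_i reduced to a pure translation and Merge overwriting covered background pixels; both are simplifications, as the actual transformation and blending in the document are richer.

```python
import numpy as np

def merge(transformed, overview, mask):
    """Merge a transformed image into the overall image O: pixels covered
    by the new image overwrite the current overview content."""
    out = overview.copy()
    out[mask] = transformed[mask]
    return out

def build_overview(images, transforms, shape):
    """O_0 is an empty background matrix; O_{i+1} = Merge(T_i(I_i), O_i).
    Here each T_i is a pure translation (row, col) of the image origin."""
    overview = np.zeros(shape, dtype=float)
    for img, (y0, x0) in zip(images, transforms):
        canvas = np.zeros(shape, dtype=float)
        mask = np.zeros(shape, dtype=bool)
        h, w = img.shape
        canvas[y0:y0 + h, x0:x0 + w] = img
        mask[y0:y0 + h, x0:x0 + w] = True
        overview = merge(canvas, overview, mask)
    return overview
```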
[0097] Further, due to their light weight, small-scale UAVs are vulnerable to wind influences, resulting in high-dynamic control actions to achieve stable flight behavior. Even if the on-board camera position is actively compensated, a perfect nadir-view of the images cannot be provided.

[0098] The UAV's auxiliary sensors such as GPS, IMU and altimeter are used to determine its position and orientation. However, such auxiliary sensors in small-scale UAVs provide only limited accuracy which is not comparable with larger aircraft. As a consequence, accurate and reliable position, orientation and altitude data of the UAV cannot be relied on. Hence, this inaccuracy has to be dealt with in the mosaicking process.

[0099] Additionally, the resources such as computation power and memory on-board the UAVs, but also on the ground station, may be very limited. In disaster situations it is usually not possible to have a huge computing infrastructure available. The base-station may consist of notebooks and standard desktop PCs. But at the same time, the overview image should be presented as quickly as possible.

[0100] The individual images may be taken from multiple UAVs in an arbitrary order. An incremental approach may be used to present the user the available image data as early as possible while the UAVs are still on their mission. The more images are taken, the better the overview image gets. This also means that, for a new image, the position of already processed images may be adjusted to improve the overall quality.

[0101] For example, as described, an appropriate transformation T_i for each image I_i captured at a picture-point may be found in order to solve the mosaicking problem. There are two basic approaches for computing these transformations. The meta-data approach exploits auxiliary sensor information to derive the position and orientation of the camera, which is then used to compute the transformations. In this case, the auxiliary sensor data (i.e. GPS, altitude and time) may be provided for each captured image.
The image-based approach only exploits image data to compute the transformation.

[0102] The proposed concept may realize a combination of meta-data-based and image-based methods, enhancing the meta-data-based alignment with image-based alignment. The presented approaches vary in their resource requirements and their achieved results.

[0103] An aspect is to first place the new images based on the camera's position (position information) and orientation information (e.g. roll, nick, yaw) on the already generated overview image. In the next step, image-based methods are used to correct for inaccurate position and orientation information and at the same time improve the visual appearance. Since the approximate position of the image is already known from the camera's position, the search-space can be significantly reduced (by limiting the maximal readjustment). Thus, the transformation T_i mentioned before may be split into two transformations, whereas T_meta represents the transformation based on the camera's position and orientation and T_img represents the transformation which optimizes the alignment using the image-based method.

[0104] Transformations T_meta and T_img which maximize the quality function may be advantageous:

T_i = T_img ∘ T_meta, chosen such that q(T_i(I_i), O_i) is maximized.

[0105] The search space may be limited to a reduced set of possible positions based on the expected inaccuracy of position and orientation information (accuracy information of the position information and optionally the accuracy information of the roll, nick or yaw).

[0106] With this proposed approach, an appealing overview image without significant perspective distortions may be generated and at the same time the relative distances and geo-references in the overview image can be maintained. Moreover, this approach can cope with inaccurate position and orientation information of the camera and thus avoids stitching artifacts in the overview image.

[0107] In the following, some examples for technical details on image mosaicking from micro-UAVs flying at low altitude are described. A possible process 800 of aligning images taken from micro-UAVs flying at low altitude is sketched in FIG. 8.

[0108] The input for the whole processing pipeline is a new image taken from the camera on-board the UAV and a set of meta-data. The meta-data contains information on the (e.g. GPS) position where the photo has been taken, as well as information on the UAV's and thus the camera's pose (yaw, nick, roll), also known as extrinsic camera parameters. Due to the limited capabilities of the UAVs, and the small and relatively cheap inertial measurement units (IMUs), the meta-data is inaccurate to some degree. For example, the accuracy of GPS-positions is in the range of up to 10 meters, and also the measured angles have errors.

[0109] Firstly, the image is preprocessed 810. The image preprocessing 810 may include several steps to improve the original images taken by a camera. For example, lens distortions are corrected.

[0110] Secondly, an image transformation based on meta-data is performed 820 (assigning the new image to a position in the overview image). Given the image's meta-data, the transformation T_meta, which is solely based on the meta-data, can be computed quickly. This transformation basically includes translation and rotation (to take into account the UAV's heading (yaw)) and optionally warps the image as if it had been taken at a nadir view (i.e. correct nick and roll angles).

[0111] Thirdly, intersecting images may be found 830 (determining an overlap region). Based on the image position in the pool of already processed images, those that intersect with the current one are identified.
If no image can be found, further processing is postponed until at least one intersecting image is available. In this case the current image is placed in the overview image only based on the meta-data (T_i = T_meta).

[0112] Fourthly, a (rough) 3D structure is computed 840. For this, the original image (new image) and the set of images that intersect (overlap) the original one are considered. First, feature-points from the new image are extracted 842 (determine feature points). Different algorithms can be used to compute feature-points (e.g. SIFT, SURF, ...). For the already processed images the feature-points can be stored separately, thus avoiding repeated computation. Consequently, a set of feature-points for each image to consider is obtained. Next, the feature-points from the individual images are matched 844 (determine common feature points) using some correlation function (e.g. nearest neighbor). Thus, a mapping is established indicating which feature-point in one image is (expected to be) the same point in the other images. Using this set of corresponding feature-points, for example, methods of multi-view geometry (structure from motion) can be exploited to estimate both the camera's positions and the 3D positions of the feature-points 846.

[0113] Fifthly, the transformation is refined 850 based on the 3D structure and image data. The input for this step is the set of intersecting images and the rough 3D structure of the scene computed in the previous step. By rough 3D structure, a point-cloud in 3D space with their corresponding image coordinates is understood. One major goal is to maintain geo-referencing as good as possible and at the same time compute a visually appealing overview image. In order to achieve this goal, for example, the transformation is computed based on point-matches on a common plane (reference plane, e.g. advantageously the ground plane), ignoring regions which are at different elevation levels (e.g. cars, trees, etc.).
For this, given the 3D point-cloud, a common plane may be extracted. Thus, horizontal planes may be fit 852 to the point-cloud and a common plane may be selected (e.g. the plane far away from the camera, or the plane with the highest number of points lying on it, etc.). For the next steps only those points that lie on the selected common plane (relevant common feature points) may be considered. Then, the transformation T_img that optimizes the placement of the image further may be computed. This can be done either by only considering the selected point-correspondences, or by considering a certain region around the point-correspondences, or both.

[0114] In the first case, a transformation that minimizes the distance between corresponding points can be computed. In the second case, a region around the corresponding points can be defined and the correlation within corresponding regions can be maximized. Various methods can be used, e.g. simple pixel differencing or optical flow methods. In the latter case, the transformation based on corresponding points may be used as a starting point, which is then further refined by correlation-based methods.

[0115] From the sensor model, additional information on how accurate the meta-data is can be obtained. In this last refinement step, the sensor model restricts the transformation T_img. This step can be considered optional and also depends on the method used to refine T_img. For example, if the GPS-position is known to be up to 2 m inaccurate, point correspondences (relevant common feature points) that suggest a position error of 7 m can be ignored. Or, for the correlation-based approach, the search range can be limited 854 to the according range.

[0116] The output of the algorithm described above are two transformations, T_meta and T_img. The first one can be computed very quickly, which allows fast placement of the individual images on an overview map (overview image). The latter one involves somewhat more computational effort and thus more time.
However, for interactive systems, images can be displayed immediately based on the initial transformation and refined later on. The final transformation T_i is the combination 860 of both transformations.
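The split of T_i into T_meta and T_img and the accuracy-controlled limit on the re-adjustment can be condensed into a short sketch; the transformations are modelled as pure 2-D pixel translations, and the pixel bound derived from the GPS uncertainty is an assumed input rather than something specified in the document.

```python
def compose(t_meta, t_img):
    """T_i = T_img o T_meta, modelled as 2-D pixel translations:
    metadata-based placement followed by image-based refinement."""
    return (t_meta[0] + t_img[0], t_meta[1] + t_img[1])

def limit_readjustment(t_img, max_shift_px):
    """Clamp the image-based re-adjustment to the bound derived from the
    accuracy information of the position data (e.g. a 2 m GPS uncertainty
    mapped to pixels), mirroring the controllable processing engine."""
    def clamp(v):
        return max(-max_shift_px, min(max_shift_px, v))
    return (clamp(t_img[0]), clamp(t_img[1]))
```

With this decomposition, an interactive system can display the image at compose(t_meta, (0, 0)) immediately and later re-place it at the refined, accuracy-limited position.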
