In the name of Allah

THE ISLAMIC UNIVERSITY – FACULTY OF ENGINEERING
COMPUTER ENGINEERING DEPARTMENT

Final Work Summarization

Graduation Project - Part 1
Multi Touch Table
BY

Wafaa' Audah Nisma Hamdouna

Haneen El-masry Maysaa El-Safadi

SUPERVISOR

Eng. Wazen Shbair

Gaza, Palestine, June 4th, 2012

Dedication

We dedicate this work to our supervisor Eng. Wazen Shbair… to our university…

Acknowledgment

We wish to express our sincere appreciation and thanks to Eng. Wazen Shbair, who gave us the chance to research and to develop our knowledge in the project topic, and who supervised us with wisdom throughout the semester.

TABLE OF CONTENTS

Dedication
Acknowledgment
Table of Contents

Chapter 1: Introduction to Multi Touch Surfaces
  1-1 Introduction
  1-2 Touch Screen Techniques
  1-3 Infrared Multi-Touch Table
  1-4 Technical Aspects/Features

Chapter 2: Project Components
  2-1 Hardware
  2-2 Software

Chapter 3: Software & Hardware Requirements Specification
  3-1 About Project
  3-2 Software Requirements
  3-3 Hardware Requirements
  3-4 Detailed description about components

Chapter 4: CCV Details
  4-1 About CCV
  4-2 Community Core Vision (CCV) – Calibration

Chapter 5: Multi-Touch Hello World
  5-1 Application using Visual Studio with C#
  5-2 The changes in the Window1.xaml

Chapter 6: Creatives

Chapter 7: Conclusion

CHAPTER 1 INTRODUCTION TO MULTI TOUCH SURFACES

1.1 Introduction
Multi-touch denotes a set of interaction techniques that allow computer users to control graphical applications with several fingers. Multi-touch devices consist of a touch screen (e.g. computer display, table, wall) or touchpad, as well as software that recognizes multiple simultaneous touch points, as opposed to the standard touch screen (e.g. computer touchpad, ATM), which recognizes only one touch point.

Multi-touch surfaces allow a device to recognize two or more simultaneous touches by more than one user. Some have the ability to recognize objects by distinguishing between the differences in pressure and temperature of what is placed on the surface. Depending on the size of the surface and the applications installed on it, two or more people can be running different or independent applications on the device.

Multi-touch computing is the direct manipulation of virtual objects, pages, and images, allowing you to swipe, pinch, grab, rotate, type, and command them, eliminating the need for a keyboard and a mouse. Everything can be done with our fingertips.

Multi-Touch Tables are currently being used in real estate, corporate, trade show, industrial, retail and home sales centers. Screen Solutions offers a number of standard and customized touch table programs and options for these and other markets.

1.2 Touch Screen Techniques
Touch screens rely on different phenomena to perform their functions, ranging from electrical current to infrared light to sound waves.

 Resistive vs. Capacitive
A resistive touch screen sandwiches several thin, transparent layers of material over an LCD or CRT. The bottom layer transmits a small electrical current along an X and Y path, and sensors monitor these voltage streams, waiting for an interruption. When the flexible top layer is pressed, the two layers connect to form a new circuit. Sensors measure the change in voltage, and the X and Y points are combined to identify the touch coordinate. Resistive touch screens work with any kind of input, be it a fingertip, a fingernail, or a stylus.

Capacitive screens move the electrical layer to the top of the display. A minimal current is broadcast and measured from the corners of the monitor. When a person touches the screen, a small amount of the current is drawn away by the body's natural capacitance. The sensors measure the relative loss of current, and a microcontroller triangulates the point where the finger made contact. The former, however, work with any kind of input, including a stylus or finger.

 Surface Acoustic Wave
Surface acoustic wave (SAW) screens use beams of ultrasonic waves to form a grid over the surface of a display. Sensors along the X and Y axes monitor the waves; when one is broken, the position is triangulated to X and Y coordinates. SAW screens, like their capacitive counterparts, are durable and provide a clear line of sight to the display image.

On the other hand, they are more susceptible to interference from dirt and other foreign objects that accumulate on the screen, registering surface contaminants as points of contact.

 Infrared and Infrared Imaging
Infrared touch screens are similar to SAW screens in that they use a ring of sensors and receivers to form an X/Y grid over a display. But instead of sending electrical current or sound waves across this grid, infrared LEDs shoot invisible beams over the surface of the display, illuminating the outside layer of the display. When the beams are disrupted by a fingertip or a stylus, the microcontroller simply calculates which X and Y lines were broken to determine the point of input. These screens work with a stylus, finger, or other pointer and give an unobstructed view of the display. They are also durable because the point of input is registered just above the glass screen; only simple, incidental contact is needed. Military applications often use infrared screens because of the product's longevity.

Infrared imaging touch screens are vastly different from touch screens that use traditional infrared input. IR imaging screens use two or more embedded cameras to visually monitor the screen's surface. IR beams are transmitted away from the cameras; at the point of contact, the cameras measure the angle of the object's shadow and its distance from the camera to triangulate the disruption.

While the technologies may differ, direct interaction will beat traditional input methods, and we look forward to touch screens filling up walls and tables in our homes and offices. Who wants to carry a mouse around the house when a personal touch will do?

1.3 Infrared Multi-Touch Table
There are many techniques to make a touch table with infrared light.

 Frustrated Total Internal Reflection (FTIR)
Infrared light is shined into the side of an acrylic panel (most often by shining IR LEDs on the sides of the acrylic). The light is trapped inside the acrylic by internal reflection. When a finger touches the acrylic surface this light is "frustrated", causing it to scatter downwards, where it is picked up by an infrared camera. When touching bare acrylic, one must press hard or have oily fingers in order to set off the FTIR effect. A silicone rubber layer is often used as a "compliant surface" to help improve dragging and the sensitivity of the device; with a compliant surface the sensitivity is greatly improved.

 Diffused Illumination (DI)
Diffused Illumination comes in two main forms: Front Diffused Illumination and Rear Diffused Illumination. Both techniques use the same basic principles.

1. Front DI: Infrared light (often from the ambient surroundings) is shined at the screen from above the touch surface. A diffuser is placed on top of or beneath the touch surface. When an object touches the surface, a shadow is created in the position of the object, and the camera senses this shadow.

2. Rear DI: Infrared light is shined at the screen from below the touch surface. A diffuser is placed on top of or beneath the touch surface. When an object touches the surface it reflects more light than the diffuser or objects in the background, and the extra light is sensed by a camera. Depending on the diffuser, this method can also detect hover and objects placed on the surface.

Comparisons between previous technologies

1.4 Technical Aspects/Features
These systems all have the same basic framework: cameras are used to sense objects, hand gestures, and touch. The user input is then processed and displayed on the surface using rear projection. The following is a diagram of the Microsoft Surface (Figure B) and an explanation of its parts.

1. Screen: The Surface has an acrylic tabletop which a diffuser makes capable of processing multiple inputs from multiple users. Objects can also be recognized by their shapes or by reading coded tags.
2. Infrared: Infrared light is projected onto the underside of the diffuser. Objects or fingers are visible through the diffuser to a series of infrared-sensitive cameras positioned underneath the surface of the tabletop.
3. CPU: This is similar to a regular desktop. The underlying operating system is a modified version of Microsoft Vista.
4. Projector: The Surface uses the same DLP light engine found in many rear-projection TVs.

CHAPTER 2 PROJECT COMPONENTS

2.1 Hardware

Making the Table
Construct an engineering drawing of the table, deciding the location of each element in it. Also decide the angles of inclination for the mirrors, cameras, etc. beforehand. Construct the table according to the specified dimensions and angles and test it by keeping your equipment in place. Make sure that in your design the table does not wobble much and that the touch surface is at a sufficient height. Try to make the mirror and the projection angle adjustable.

Modifying the camera
Open the camera to remove the infrared filter over it. You would see that every object gets illuminated and the picture is very clearly visible, but that is because infrared from the sun falls on every object on which visible light falls. Replace the filter with a piece of darkened photographic film. This film works as a filter that blocks all the visible light and only allows the infrared to pass through it. The image should look gray and white when seen through software that reads the camera, so be sure that the filter is working.

The Infrared Plane
Set up the lasers at the four corners of the table. Add the 89° line generators to them and test whether the blob of a hand is detected by the webcam on touching the surface.

2.2 Software
For this we advise you to run the installation of all the software in "run as administrator" mode. You should have the Visual C++ Redistributable 2005 installed on the PC. Install the Windows SDK and QuickTime player on the computer. Extract the zip files of CCV 1.4 and Multitouch Vista.

In the folder in which you extracted the Multitouch Vista zip file you will find the folder called Driver. Install the driver, depending on which system you are using, by clicking on install driver.cmd. Once the driver is installed, go to Device Manager and check that you have a device called Universal Software HID Device under the Human Interface Devices option. Disable and re-enable that device.

Open CCV and configure it using the various functions. Select the TUIO tab and click the arrow button. Save the config.xml file and launch tbeta to see the new settings. After CCV has been configured, go to the folder of Multitouch Vista. First run Multitouch.Service.Console.exe, then Multitouch.Driver.Console.exe; now open Multitouch.Configuration.WPF.exe. After this your multitouch computer is ready for use. For preinstalled games and apps, download the Microsoft Touch Pack from Microsoft's download centre and enjoy your first multitouch table.

CHAPTER 3 SOFTWARE & HARDWARE REQUIREMENTS SPECIFICATION

3.1 About Project
• Today's computers allow you to have multiple applications in multiple windows, but they probably have only one keyboard and mouse, which means only one person can operate at a time.
• Interactive Classrooms: Multi-touch surface computers will encourage the students to interact with content and with each other, promoting group work and team-building skills. These Surfaces engage the senses, improve collaboration, and empower the students by having everything available to them at their fingertips.
• Students sitting around the table may open a file, modify it, drag it, push it across, let another student add or delete information, and then save the document.
• Students would have custom-built hardware where they can create their assignments, and teachers may be able to see them instantly and help the students.

• Students could share podcasts or other information related to a certain project that they have saved to their flash drive just by laying the device on the surface.
• In a photography class, the students could share their images instantly. This would make the construction of projects easier.
• In a geography class, each student could find a specific location and the maps could be displayed instantly.
• In an art class, one student could be painting with a paint brush while another is drawing with her finger. Both the paint brush and the finger would be recognized.
• Teachers would not have to worry about finding space in a computer lab in order for the students to create projects or conduct research. Students will be able to work in groups at one desktop Surface. If a problem occurred on one Surface, that student could move to another student's desk and work along with them until theirs was fixed. Also, teachers would have the ability to send presentations to any or all desktops, eliminating the need for printouts and copies.

System Features
 The administration of a classroom can be improved by reducing the amount of time a teacher spends fulfilling paperwork requirements alone, such as test taking and scoring. The tests could be included in each student's desktop and automatically recorded and scored.
 The teacher's desktop could have the ability to look at each student's desktop from their desk and take control if necessary. This can be used to help a student having trouble or to verify that the student is staying on task.
 Also, students will be able to work on class assignments together or help each other; sometimes students are able to learn and understand better when the information is delivered or reiterated by their peers in a more creative fashion.
 By engaging the students and combining both the audio and visual aspects in every lesson plan, we have a better chance of reaching every student and increasing the percentage of information retained.
 A character recognition application will be made in order to give children the ability to write letters in Arabic and English and have them checked for correctness, besides viewing pictures that show the letter at different places in the word with related photos, and besides the ability to read the character to the student.

3.2 Software Requirements

CCV Software
For the full software implementation of the multi-touch table we need to install the Community Core Vision (CCV 1.4) program. CCV is the main program: it links the computer to the table (through the camera), receives information from the camera and, through special filters, analyzes the images and then locates the touches. This program requires strong hardware because it is a real-time program.

Figure 1 shows us the main window of CCV; the following is a description of each part in the figure:
1. This window shows us what the camera sees (without any filtration or change).
2. The same image as in the first window, but after several filters. You must ensure that each finger touch is represented by a white point called a blob, without impurities.
3. Max Blob Size and Min Blob Size determine the size of the white point (the place that you touch). It is very important that the numbers are reasonable so that the program can recognize the presence of touch points.
4. Points calibration: a process that will set the X, Y on the screen.
5. Connection technique with the programs:
    TUIO TCP: dedicated to programs that have been programmed with the Flash technology that is special for touch.
    TUIO OSC: dedicated to running any program that has not been programmed with Flash (when you want to transfer Windows control to touch, choose this type).
6. This resets the screen before starting the filtration; press it without touching the screen.
7. The final stage: save the settings to a file. The settings are stored in a special file, and the next time you run the program it loads the settings from that file.

Figure 1: CCV program

Multi-Touch Vista
If you have Windows Vista or Windows 7 and you need to convert the control of your Windows to touch instead of the mouse in order to test multi-touch applications, you need to install Multi-Touch Vista (second release). Multi-Touch Vista is a user input management layer that handles input from various devices (touchlib, multiple mice, TUIO etc.) and normalizes it against the scale and rotation of the target window. To install Multi-Touch Vista on Windows 7, follow these steps:

1. First go to the Multi-Touch Vista download section and download the recommended download. The current file as of 15/04/2009 is "MultiTouchVista - second release refresh 2.zip".
2. Extract the zip file to an easily accessible folder.
3. Go into the "Driver" folder and then select "x32" if you are using 32-bit Windows (don't go into it, just select it).
4. Make sure "x32" is already selected, then press "Shift" while right-clicking on the "x32" folder. Select "Open command window here".
5. In the command prompt, press "Tab" a few times until you see "Install driver.cmd" and press "Enter". Answer "Yes" for User Account Control.

6. Now you can close the command window and go to "Control Panel" and then "Device Manager". Expand the Human Interface Devices node, right-click on "Universal Software HID device" and select "Disable"; answer "Yes" for the prompt. Then "Enable" it again. This actually does a reload and the driver should already start working. To confirm that, go to "Control Panel" and then "System" to check that "Pen and Touch:" reads "Touch Input Available with 255 Touch Points".
7. Then proceed with either one of the following:
   a. To test the multi-touch features in Windows 7: first go to the Multi-Touch Vista folder extracted earlier, find "Multitouch.Service.Console.exe" and double-click to run it. Now go to the same folder (use the regular mouse cursor; the red dot doesn't interact with Windows yet) and find "Multitouch.Driver.Console.exe", double-click and run it. The multi-touch driver should now be running, so you can see a red dot moving together with the mouse, but it will not be at the same location as the mouse cursor. Now go to the same folder, find "Multitouch.Configuration.WPF.exe" and double-click to run it. The default input is already set to MultipleMice. Click on "Configure device", tick the empty box for "Block native windows mouse input...", and press "Ok". Now the red dot can finally interact with Windows. Some of the Windows 7 multi-touch features to test are: 1. Paint; 2. Internet Explorer or Firefox browser (zoom in and zoom out); 3. The software keyboard on the left edge of Windows (activate it and type using it).
   b. To run a multi-touch application created with the Multi-Touch Vista Framework: you should see a red dot corresponding to the mouse cursor (probably not at the same location), but the original mouse cursor is still dominating. You still have to use the regular mouse cursor to interact with the windows, as the red dot will only interact with applications created using the Multi-Touch Vista Framework.
8. Whenever you are adding or removing a mouse, you have to restart "Multitouch.Service.Console.exe". To stop it (sometimes mouse interaction is totally gone after testing for a long time), use "Alt-Tab" to reach the two command windows and press "Enter" to end them. (A small launcher sketch for these executables is given at the end of this section.)

CL Eye Platform Driver
The CL Eye Platform Driver provides users a signed hardware driver which exposes the Sony PlayStation™ 3 Eye camera to third-party applications such as Adobe Flash, Skype, MSN or Yahoo for video chat or conferencing. The driver is needed to make the Sony PlayStation™ 3 Eye camera run with Windows. It provides full control of the camera, such as resolution, exposure and gain configurations.
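The Multi-Touch Vista executables above have to be started in the same order every time the table is used. The snippet below is only a convenience sketch and is not part of the Multi-Touch Vista package; the extraction folder path is an assumed example and must be changed to match your machine.

    // Convenience sketch: start the Multi-Touch Vista console services in order.
    // The folder path is an assumed example; point it at wherever the zip was extracted.
    using System;
    using System.Diagnostics;
    using System.IO;
    using System.Threading;

    class VistaLauncher
    {
        static void Main()
        {
            string folder = @"C:\MultiTouchVista";   // assumed extraction folder

            // 1. The input service must be running first.
            Process.Start(Path.Combine(folder, "Multitouch.Service.Console.exe"));
            Thread.Sleep(2000);   // give the service a moment to come up

            // 2. Then the driver console, which feeds the HID driver.
            Process.Start(Path.Combine(folder, "Multitouch.Driver.Console.exe"));

            // 3. Optionally the configuration tool, e.g. to block native mouse input.
            Process.Start(Path.Combine(folder, "Multitouch.Configuration.WPF.exe"));

            Console.WriteLine("Multi-Touch Vista processes started.");
        }
    }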

3.3 Hardware Requirements

Introduction to Hardware
Multi-touch denotes a set of interaction techniques that allow computer users to control graphical applications with several fingers. Multi-touch devices consist of a touch screen (table) and other components, as well as software that recognizes multiple simultaneous touch points.

Optical or light-sensing (camera-based) solutions make up a large percentage of multi-touch devices. Scalability, low cost and ease of setup are the reasons suggested for the popularity of optical solutions. At the moment there are five major techniques that allow the creation of a stable multi-touch hardware system; these include Frustrated Total Internal Reflection (FTIR), Rear Diffused Illumination (Rear DI) such as Microsoft's Surface Table, Laser Light Plane (LLP), LED-Light Plane (LED-LP), and finally Diffused Surface Illumination (DSI). Prior to learning about the techniques, it is important to understand the parts that all optical techniques share: each of the previous techniques consists of an optical sensor (typically a camera), an infrared light source, and visual feedback in the form of projection or an LCD.

The table that we need will have these parameters: length 80 cm, width 60 cm, height 80 cm, screen size 30 inches, and a surface thickness of 3-5 cm. Its components are:
1. Wooden or metal table or box
2. Piece of glass
3. Projector
4. Web camera
5. Piece of mirror
6. Diffuser
7. IR LEDs
The details of these components are given below.

3.4 Detailed description about components
- Wooden or metal table or box: used to contain all the components inside it, while the upper surface is used for touch. The size of the table is bounded by the size of the wanted screen. Many holes must be opened to enable access to the internal components and to let out the heat of the projector.
- Piece of glass, which represents the surface (screen): this must be made of glass or of Plexiglass material.
- Diffuser, which is the upper surface of the table: this layer captures the picture from the projector and also prevents outer light from affecting the camera. This layer can be made from cheap white nylon.
- Projector: used to transfer the picture and show it on the upper surface of the table; the quality of the projector affects the quality of the picture.
- Mirror: used to increase the distance between the surface and the projector in order to obtain a larger screen.
- Infrared LEDs: used to send infrared rays towards the surface; every finger tip that touches the surface reflects the rays towards the bottom (at the exact touched point). The reflected rays are captured by the camera and sent to the CPU.

- Camera: used to capture the infrared rays that are reflected when the surface is touched; it then sends the picture to the CPU. The needed camera must have a high frame rate and a high resolution, so that a lot of pictures can be taken in one second. A Sony camera named PS3Eye will be used; it gives 60 pictures per second at a resolution of 640x480. Every camera has a filter that prevents infrared rays from reaching the sensor, so we must remove this filter from the camera and add a negative film piece that blocks natural brightness from reaching the camera, since in this project we need only the infrared rays to be identified. This camera has to be attached to the computer with a special driver.
- IR LEDs: four IR LED strips with 48 LEDs are needed.
- CPU: for the connection with the table and for running the applications.

CHAPTER 4 CCV DETAILS

4.1 About CCV
Community Core Vision, CCV for short, is an open-source, cross-platform solution for computer vision and machine sensing. It takes a video input stream and outputs tracking data (e.g. coordinates and blob size) and events that are used in building multi-touch applications. CCV can interface with various web cameras and video devices, as well as connect to various TUIO/OSC/XML-enabled applications, and it supports many multi-touch lighting techniques, including FTIR, DI, DSI, and LLP, with expansion planned for future vision applications (custom modules/filters).
- CCV outputs in three formats (XML, TUIO and Binary) over network sockets and has an internal C++ event system.
- The coordinate positions are found on port 3333 of the computer; we know the coordinate positions can be input into Java.
- To get working with Surface, your best bet is the MT Vista project, as it will take TUIO input and dispatch WM_Touch events.
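As noted above, CCV publishes its tracking data as TUIO/OSC packets on UDP port 3333. The sketch below is only an illustration and is not part of CCV or of this project: it opens that port and confirms that packets are arriving. Decoding the packets into blob IDs and x/y coordinates requires an OSC or TUIO client library on top of this.

    // Minimal sketch: listen on UDP port 3333, where CCV sends its TUIO/OSC packets.
    // This only shows that tracking data is arriving; parsing the OSC bundles into
    // blob IDs and x/y coordinates would need a TUIO or OSC library.
    using System;
    using System.Net;
    using System.Net.Sockets;

    class TuioPortProbe
    {
        static void Main()
        {
            using (UdpClient listener = new UdpClient(3333))   // CCV's default TUIO port
            {
                IPEndPoint sender = new IPEndPoint(IPAddress.Any, 0);
                Console.WriteLine("Waiting for CCV packets on UDP 3333...");
                while (true)
                {
                    byte[] packet = listener.Receive(ref sender);   // blocks until a packet arrives
                    Console.WriteLine("Received {0} bytes from {1}", packet.Length, sender);
                }
            }
        }
    }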

The numbered controls in the main CCV window are described below (a simplified code sketch of this filter chain is given at the end of this chapter):

1. Source Image - Displays the raw video image from either the camera or a video file.
2. Use Camera Toggle - Sets the input source to camera and grabs frames from the selected camera.
3. Use Video Toggle - Sets the input source to video and grabs frames from a video file.
4. Previous Camera Button - Gets the previous camera device attached to the computer, if more than one is attached.
5. Next Camera Button - Gets the next camera device attached to the computer, if more than one is attached.
6. Tracked Image - Displays the final image after the image filtering that is used for blob detection and tracking.
7. Inverse - Track black blobs instead of white blobs.
8. Threshold Slider - Adjusts the level of acceptable tracked pixels. The higher the option is, the bigger the blobs have to be to be converted into tracked blobs.
9. Movement Filtering - Adjusts the level of acceptable distance (in pixels) before a movement of a blob is detected. The higher the option is, the more you have to actually move your finger for CCV to register a blob movement.
10. Min Blob Size - Adjusts the level of acceptable minimum blob size. The higher the option is, the bigger a blob has to be to be assigned an ID.
11. Max Blob Size - Adjusts the level of acceptable maximum blob size. The higher the option is, the bigger a blob can be before losing its ID.
12. Remove Background Button - Captures the current source image frame and uses it as the static background image to be subtracted from the current active frame. Press this button to recapture a static background image.
13. Dynamic Subtract Toggle - Dynamically adjusts the background image. The slider determines how fast the background will be learned. Turn this on if the environmental lighting changes often or false blobs keep appearing due to environmental changes.
14. Smooth Slider - Smoothes the image and filters out noise (random specs) from the image.

15. Highpass Blur Slider - Removes the blurry parts of the image and leaves the sharper, brighter parts.
16. Highpass Noise - Filters out the noise (random specs) from the image after applying the Highpass Blur.
17. Amplify Slider - Brightens weak pixels. If blobs are weak, this can be used to make them stronger.
18. On/Off Toggle - Used on each filter; this is used to turn each filter on or off.
19. Camera Settings Button - Opens a camera settings box. This will open more specific controls of the camera, especially when using a PS3 Eye camera.
20. Flip Vertical Toggle - Flips the source image vertically.
21. Flip Horizontal Toggle - Flips the source image horizontally.
22. GPU Mode Toggle - Turns on hardware acceleration and uses the GPU. This is best used on newer graphics cards only. Note: GPU mode is still in early development and may not work on all machines.
23. Send UDP Toggle - Turns on the sending of TUIO messages.
24. Flash XML - Turns on the sending of Flash XML messages (no need for flosc anymore).
25. Binary TCP - Turns on the sending of RAW messages (x, y coordinates).
26. Save Settings - Saves all the current settings into the XML settings file.
27. Enter Calibration - Loads the calibration screen.

4.2 Community Core Vision (CCV) – Calibration

In order to calibrate CCV for your camera and projector/LCD, you'll need to run the calibration process. Calibrating allows touch points from the camera to line up with elements on screen. This way, when touching something displayed on screen, the touch is registered in the correct place. In order to do this, CCV has to translate camera space into screen space; this is done by touching individual calibration points. Follow the directions below to set up and perform calibration.

Calibration Instructions
1. Press the enter calibration button or "c" to enter the calibration screen.
 A grid of green crosses will be displayed. These crosses are the calibration points you touch once we begin calibrating (step 4).
 There is a white bounding box that surrounds the calibration points.
Note: if you are displaying an image on the touch surface (projector or LCD), you'll need to set up your computer so that the main monitor is the video projector, so that CCV is displayed on the touch surface. If a visual image is not being displayed on the touch surface (MTmini users), skip to step 3; otherwise, continue.

2. (MTmini users skip this step) If the white bounding box is not fully visible or aligned with the touch surface, follow the directions under "Aligning Bounding Box to Projection Screen" displayed on the CCV screen to align the bounding box and calibration points so that they fit the touch surface. The goal is to match the white bounding box to the left, right, top, and bottom of your screen.
 Aligning Bounding Box to Projection Screen:
o Press and hold "w" to move the top side, "a" to move the left side, "s" to move the bottom side, and "d" to move the right side.
o While holding the above key, use the arrow keys to move that side in the arrowed direction.
o In other words, z, q, s, d (or the equivalent keys on a qwerty keyboard) will make the edges move, and a combination of the up, down, right and left arrows will make the box move. Hold "up arrow", then "left arrow" on your keyboard to get the upper corner at the top left corner of your screen; then hold "s + down arrow", then "d + right arrow" to get the bottom right corner into position.

3. If using a wide-angle camera lens, or if you need higher touch accuracy, more calibration points can be added by following the "Changing Grid Size" directions on screen. Note: adding additional calibration points will not affect performance.
 To Change Grid Size:
o Press "+" to add points or "-" to remove points along the x-axis.
o Hold "shift" with the above to add or remove points along the y-axis.
o If this does not work, you may want to try "_" and "+/-" on the numerical pad.
4. Begin calibration by pressing "c." A red circle will highlight over the current calibration touch point. Follow the directions on screen and press each point until all targets are pressed. If not projecting an image on the touch surface (MTmini users), you may guess or draw the touch points directly on the touch surface so you know where to press.
5. If a mistake is made, press "r" to return to the previous touch point. If there are false blobs and the circle skips without you touching, press "b" to recapture the background and "r" to return to the previous point.

6. After all circles have been touched, the calibration screen will return and accuracy may be tested by pressing on the touch area. If calibration is inaccurate, calibrate again (step 4) or return to the main configuration window ("x") to adjust the filters for better blob detection.
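The CCV controls described in section 4.1 (background subtraction, threshold, minimum and maximum blob size) correspond to a standard image-processing chain. The following is a simplified, single-frame sketch of that chain on a plain grayscale byte array; it is not CCV's actual code, and the type and method names are illustrative only.

    // Simplified illustration (not CCV's actual code) of the filter chain from section 4.1:
    // background subtraction -> threshold -> blob extraction with min/max size limits.
    using System;
    using System.Collections.Generic;

    struct Blob
    {
        public double X, Y;   // centroid of the touch, in pixel coordinates
        public int Size;      // number of pixels in the blob
    }

    static class BlobDetectorSketch
    {
        // frame and background are grayscale images stored row by row, width * height bytes each.
        public static List<Blob> FindBlobs(byte[] frame, byte[] background, int width,
                                           int threshold, int minBlobSize, int maxBlobSize)
        {
            // 1. Background subtraction + threshold: keep pixels clearly brighter than the background.
            bool[] touched = new bool[frame.Length];
            for (int i = 0; i < frame.Length; i++)
                touched[i] = (frame[i] - background[i]) > threshold;

            // 2. Group touched pixels into blobs with a flood fill.
            var blobs = new List<Blob>();
            bool[] visited = new bool[frame.Length];
            var stack = new Stack<int>();

            for (int start = 0; start < frame.Length; start++)
            {
                if (!touched[start] || visited[start]) continue;

                long sumX = 0, sumY = 0;
                int size = 0;
                visited[start] = true;
                stack.Push(start);

                while (stack.Count > 0)
                {
                    int p = stack.Pop();
                    int x = p % width, y = p / width;
                    sumX += x; sumY += y; size++;

                    // Visit the four direct neighbours (skipping wrap-around at row edges).
                    foreach (int n in new[] { p - 1, p + 1, p - width, p + width })
                    {
                        if (n < 0 || n >= frame.Length) continue;
                        if (Math.Abs(n % width - x) > 1) continue;
                        if (touched[n] && !visited[n]) { visited[n] = true; stack.Push(n); }
                    }
                }

                // 3. Min/Max Blob Size: ignore specks of noise and oversized bright regions.
                if (size >= minBlobSize && size <= maxBlobSize)
                    blobs.Add(new Blob { X = (double)sumX / size, Y = (double)sumY / size, Size = size });
            }
            return blobs;
        }
    }

CCV applies the same ideas to every camera frame, with additional smoothing, highpass and amplification filters before thresholding, and it tracks blob IDs across frames so that a moving finger keeps the same ID.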

CHAPTER 5 MULTI-TOUCH HELLO WORLD

5.1 Application using Visual Studio with C#

Multitouch "Hello World" program:
 Start Visual Studio and create a new WPF project and name it "MultitouchHelloWorld".
 Add a reference to the Multitouch.Framework.WPF.dll assembly.

5.2 The changes in the Window1.xaml

In Window1.xaml, add a new namespace: xmlns:mt="http://schemas.multitouch.com/Multitouch/2008/04", and change the root object <Window> to <mt:Window>. Don't forget to close it with </mt:Window>. Now add a TouchablePanel as a child of the Grid and a TextBlock as a child of the TouchablePanel. Set the Text property of the TextBlock to "Hello World!!!", set the FontSize property to 72 and FontWeight to Bold, and make the Foreground White and the Background LightBlue.

The changes in the Window1.xaml.cs

Open Window1.xaml.cs and change the Window1 base object from Window to Multitouch.Framework.WPF.Controls.Window.
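In code, the change described above for Window1.xaml.cs is only the base class of the window. A minimal sketch, using the class and namespace names given in the text:

    // Window1.xaml.cs after the change: the window now derives from the
    // multitouch-aware Window supplied by Multitouch.Framework.WPF.
    public partial class Window1 : Multitouch.Framework.WPF.Controls.Window
    {
        public Window1()
        {
            InitializeComponent();
        }
    }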

 Now hit F5 to start the program. Before you start it, start the multitouch service, for example by executing Multitouch.Service.Console.exe. Your program will look like this, and you can touch the text and move and rotate it around.

Multitouch "Photo Album" program
 Replace TouchablePanel with ItemsControl and set its ItemsPanel to TouchablePanel.
 Open the Window1.xaml.cs file.
 Add a new DependencyProperty Photos of type ObservableCollection<string>.
 In the constructor, set DataContext to this, before InitializeComponent is executed.
 Now override the OnInitialized method and add this code:

    foreach (string file in Directory.GetFiles(
        Environment.GetFolderPath(Environment.SpecialFolder.MyPictures), "*.jpg").Take(5))
    {
        Photos.Add(file);
    }

 Now bind ItemsSource to the Photos property with {Binding Photos}.

 Finally, let's get back to Window1.xaml and add a DataTemplate to display the strings as Images.
 Now start the program and enjoy your Multitouch Photo Album (a consolidated sketch of the code-behind is given below). It will take 5 pictures from your Pictures folder, so make sure you have them.
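Pulling the photo-album steps together, the following is a rough sketch of what the Window1 code-behind might look like. The Multitouch.Framework.WPF.Controls.Window base class is taken from the text above; everything else is standard WPF, and the property registration details are an assumption of this sketch rather than a quotation of the project's code.

    // Consolidated sketch of the photo-album code-behind described above.
    using System;
    using System.Collections.ObjectModel;
    using System.IO;
    using System.Linq;
    using System.Windows;

    public partial class Window1 : Multitouch.Framework.WPF.Controls.Window
    {
        public static readonly DependencyProperty PhotosProperty =
            DependencyProperty.Register("Photos",
                typeof(ObservableCollection<string>), typeof(Window1),
                new PropertyMetadata(new ObservableCollection<string>()));

        public ObservableCollection<string> Photos
        {
            get { return (ObservableCollection<string>)GetValue(PhotosProperty); }
            set { SetValue(PhotosProperty, value); }
        }

        public Window1()
        {
            // DataContext is set before InitializeComponent so that
            // {Binding Photos} on the ItemsControl resolves against this window.
            DataContext = this;
            InitializeComponent();
        }

        protected override void OnInitialized(EventArgs e)
        {
            base.OnInitialized(e);

            // Load the first five .jpg files from the user's Pictures folder.
            string pictures = Environment.GetFolderPath(Environment.SpecialFolder.MyPictures);
            foreach (string file in Directory.GetFiles(pictures, "*.jpg").Take(5))
                Photos.Add(file);
        }
    }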

CHAPTER 6 CREATIVES

The following describes the time table for the Multi Touch Table project.

The following describes the detailed budget for the Multi Touch Table project.

CHAPTER 7 CONCLUSION

Work will be continued during the summer holiday with a lot of work and assiduity. We are sure that… everything can be done with our fingertips.
