
MAY 14-05

Augmented Reality Firearm Training Device


Final Report
Collin Gross Travis Mallow Brock Mills Scott Schmidt Dan Roggow Alec Jahnke 4/25/2014

Table of Contents
Project Design ................................................................ 4
    Background ................................................................ 4
        Problem Statement ..................................................... 4
        Concept Sketches ...................................................... 4
    System Overview ........................................................... 5
        System Description .................................................... 5
        System Block Diagram .................................................. 6
        Operating Environment ................................................. 7
    User Interface Description ................................................ 7
        Physical Device ....................................................... 7
        Android Application ................................................... 7
    Functional Requirements ................................................... 9
    Non-functional Requirements ............................................... 9
    Deliverables .............................................................. 9
Implementation Details ....................................................... 10
    Hardware ................................................................. 10
        Raspberry Pi Proof of Concept ........................................ 10
    Software ................................................................. 10
        Corner Detection ..................................................... 10
        Keypoint Processing .................................................. 11
        Image Mapping ........................................................ 11
Testing Processes and Results ................................................ 12
    Hardware ................................................................. 12
        Characterization Test ................................................ 12
        Testing Phase II ..................................................... 12
    Software Testing ......................................................... 13
        Bluetooth ............................................................ 13
        Image Processing ..................................................... 13
        Gallery .............................................................. 13
    Lessons Learned .......................................................... 13
Appendix I: Operation Manual ................................................. 14
    Mounted Device ........................................................... 14
    Android Device ........................................................... 14
        Starting a new session ............................................... 14
        Viewing the gallery .................................................. 15
        Configuring options .................................................. 15
Appendix II: Alternative Versions of Design .................................. 15
    Initial Ideas ............................................................ 15
    Secondary Design ......................................................... 15
    Proof of Concept ......................................................... 16
Appendix III: Other Considerations ........................................... 17
    Cameras .................................................................. 17
    Image Processing ......................................................... 17


Project Design
Background
Problem Statement

When training with a firearm alone, it is often difficult to analyze one's individual technique, especially when the only feedback available is the number of holes in the target after a round of shooting. The target cannot provide information about shots that missed. Our solution provides the shooter with this missing information. The product consists of a hardware device and a software application targeting the Android mobile platform. The device attaches to a firearm via a standard Picatinny rail mount and snaps a picture every time the weapon is fired. The image is then transmitted via Bluetooth to the user's mobile device, where it is processed. After processing completes, the mobile application displays an image of the target with the estimated locations of the bullet impacts highlighted. The shooter obtains immediate feedback, allowing corrective actions to be taken promptly.

Concept Sketches

(a)
Picatinny rail mount Camera

(b)

FIGURE 1 - DEVICE CONCEPT SKETCH. (A) DEPICTS THE FRONT OF THE DEVICE. (B) SHOWS A WIREFRAME IMAGE OF THE DEVICE FROM AN ANGLE.


FIGURE 2 - SIDE VIEWS OF THE DEVICE

System Overview
System Description

Firearm Attachment

The firearm attachment is a battery-powered device that uses an accelerometer to detect sudden changes in acceleration. Until a rapid acceleration is detected, the camera continuously collects image data, which the microcontroller gathers and sends via Bluetooth to the user's mobile device. Once a sudden acceleration is detected, the accelerometer signals the microcontroller, which stops data collection from the camera, transmits any data remaining in the outgoing buffer, and then marks the last valid data with a special packet. After a certain delay, the system returns to the ready state, in which it again collects image data and transmits it to the user's mobile device.

Mobile Device

The application on the user's mobile device collects the transmitted image data, reassembling it into an image, until it receives the special packet signaling a rapid acceleration. Once it reads the packet, it processes the new image and compares it with a baseline image taken during the calibration phase. Once the delta between the images is computed, the application estimates where the shot went and marks the location on the baseline image. The marked image is displayed to the user. The application saves the shot history, allowing the user to display previous sessions.
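The receiving side of this scheme can be sketched in a few lines. This is an illustration only, not our firmware: the one-byte packet type codes (`0x01` for image data, `0x02` for the shot marker) are invented for the example.

```python
# Sketch of the phone-side receiver: accumulate image-data packets until
# the special "shot fired" packet arrives, then emit the completed image.
# Packet layout here is hypothetical: a 1-byte type followed by payload.
DATA_PACKET = 0x01   # carries a chunk of image bytes
SHOT_PACKET = 0x02   # marks the last valid data before a detected shot

def reassemble(packets):
    """Return one complete image (as bytes) per shot marker."""
    images, buf = [], bytearray()
    for pkt in packets:
        kind, payload = pkt[0], pkt[1:]
        if kind == DATA_PACKET:
            buf.extend(payload)          # keep building the current frame
        elif kind == SHOT_PACKET:
            images.append(bytes(buf))    # frame is complete at shot time
            buf = bytearray()            # ready state: start a new frame
    return images

stream = [b"\x01abc", b"\x01def", b"\x02", b"\x01gh", b"\x02"]
print(reassemble(stream))  # [b'abcdef', b'gh']
```

A frame that never receives its shot marker is simply discarded, which matches the behavior described above: only images bounded by the special packet are processed.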


System Block Diagram

[The block diagram cannot be reproduced in text form. Its major blocks are: a camera module (lens, image sensor, JPEG compression) feeding the microcontroller over I2C/UART; a microcontroller built around a MIPS32 M4K CPU core with 512 kB program flash, 128 kB data memory, 12 kB boot flash, 83 I/O pins, a 10-bit ADC, and JTAG; a Bluegiga WT41-A-AI4 Bluetooth module (UART/SPI); an STMicroelectronics H3LIS331DL accelerometer (SPI/I2C); a rechargeable 3.7 V Li-Ion battery; and, on the phone side, the Android application with its menu, device calibration, target differentiation, target location constellation overlay, shooting results, and shooting history components.]

FIGURE 3 - SYSTEM BLOCK DIAGRAM


Operating Environment

The firearm attachment will operate in an outdoor environment. This means it will be subject to the elements, including wind, dust, heat, cold, and varied forms of precipitation. Additionally, the firearm attachment will be exposed to extreme changes in acceleration during the firing sequence, and potentially to gunpowder residues and other byproducts from the discharge of the weapon.

User Interface Description


Physical Device

1. The device shall have a simple on/off switch.
2. The device shall have a battery compartment which allows the user access to the battery.

Android Application

The mobile application is where most of the user interaction takes place. Users tell the application when to begin and stop recording data. Once the begin signal is sent, the mounted device sends continuous image data, one or several lines at a time, to the smartphone via Bluetooth. Once the mounted device detects that a shot has been fired, it temporarily stops sending image data. The application then takes the whole image and compares it to a base image of the target. Using image processing techniques, it identifies points of similarity and matches the two pictures together, rotating or translating as necessary. The offset between the two images allows the application to accurately place each shot on the base image of the target. Users can view each target individually and identify each shot fired. They can cycle between the targets fired at (if there are multiple) by pressing the "Prev" and "Next" buttons on the screen. The option of saving the image to view at a later time is available. There is also a settings menu in which users can specify the type of firearm and ammunition they are using, as well as other helpful data such as where the device is mounted.


FIGURE 4 - THE ANDROID APPLICATION USER INTERFACE


Functional Requirements
Functional requirements are operations and activities that a system must perform in order to meet the specific demands placed on the product. The functional requirements for this device are listed below:

1. The device must connect to a mobile phone via Bluetooth.
   1.1. Bluetooth is a short-range radio protocol operating in the ISM band from 2400 to 2480 MHz.
2. The device must be able to be calibrated (zeroed).
   2.1. It is important for the user to be able to calibrate the mounted device to properly align with the target in order to achieve optimal results, because there is no restriction on mounting the device on top of, underneath, or on the side of the barrel of the firearm.
3. The device must mount on a Picatinny rail compliant with MIL-STD-1913.
4. The application software must be compatible with Android devices.
5. The application software must apply image processing techniques to the images received from the device and display the results to the user in an appropriate manner.
6. The system must have an operational range of 3-20 yards.
7. The device must operate correctly when attached to the firearm on top of, on either side of, or underneath the barrel.

Non-functional Requirements
1. Device fabrication cost shall be no more than $60.00 (US).
2. Device shall be able to withstand the forces associated with the normal operation of firearms.
3. Device shall be water resistant.
4. Device shall be powered by a 3.7 V power source.
5. Device shall not exceed dimensions of 3.0" (L) x 1.0" (W) x 1.5" (H).
6. Device shall not exceed a weight of 6.7 ounces.
   6.1. The device must be as light as possible in order to reduce the impact on the ballistic properties of the unmodified firearm.

Deliverables
At the end of the project in May 2014, we will have delivered the following items:

1. A project plan describing the system from a high level.
2. A design document detailing the technical specifications of the product.
3. A website containing all of our documentation and progress.
4. A prototype of the product. The prototype should be fully functional, or else prove that the concept is infeasible.
5. An Android application that communicates with the product.
6. CAD schematic files for the device body.
7. PCB schematic files for device fabrication.


Implementation Details
Hardware
Raspberry Pi Proof of Concept:

Our hardware testing setup consisted of a Raspberry Pi, complete with a case, a 5-megapixel CMOS camera, and a camera mount. We attached our accelerometer evaluation board to the GPIO pins on the Raspberry Pi, and used an AZIO Bluetooth 2.1 EDR USB dongle for wireless communication. Initially, the communication between the dongle and our Android devices was unreliable. After some troubleshooting, we discovered that the Raspberry Pi could not supply enough power to the USB dongle without a powered USB hub.

FIGURE 5 - TESTING WITH THE RASPBERRY PI

Software
Corner Detection

The first step in image processing is corner detection. The basis of the algorithms we use is FAST 9 corner detection. This algorithm works by gray-scaling the image and examining each pixel individually. The intensity of the pixel is compared against its four neighbors three pixels away: above, below, to the left, and to the right. If three of these pixels are beyond a threshold above or below the intensity of the base pixel, that pixel is marked as a pixel of interest. For each pixel of interest, we then examine the circle of pixels at a radius of three around it: if nine of these, consecutively, satisfy the intensity test, the pixel of interest is declared a corner. Other algorithms we considered were FAST 12 and FAST 16, which are more reliable but take more time. Because we wanted to keep the processing time short on a mobile device, we decided FAST 9 was sufficient.
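The segment test at the heart of FAST 9 can be sketched in a few lines. This is an illustrative, unoptimized version (real implementations add a high-speed four-pixel pre-test and non-maximum suppression); the image is represented as a plain list of rows of gray values.

```python
# 16 pixel offsets on a Bresenham circle of radius 3 around the candidate.
CIRCLE = [(0,-3),(1,-3),(2,-2),(3,-1),(3,0),(3,1),(2,2),(1,3),
          (0,3),(-1,3),(-2,2),(-3,1),(-3,0),(-3,-1),(-2,-2),(-1,-3)]

def is_fast9_corner(img, x, y, t):
    """True if >= 9 contiguous circle pixels are all brighter than
    center + t or all darker than center - t (runs may wrap around)."""
    c = img[y][x]
    ring = [img[y + dy][x + dx] for dx, dy in CIRCLE]
    ring += ring  # doubled so contiguous runs can wrap around the circle
    for sign in (+1, -1):                  # brighter arc, then darker arc
        run = 0
        for p in ring:
            run = run + 1 if sign * (p - c) > t else 0
            if run >= 9:
                return True
    return False

# Synthetic 20x20 image: a bright square whose corner is at (10, 10).
img = [[255 if x >= 10 and y >= 10 else 0 for x in range(20)]
       for y in range(20)]
print(is_fast9_corner(img, 10, 10, 20))  # True  (corner of the square)
print(is_fast9_corner(img, 10, 15, 20))  # False (point on a straight edge)
```

The edge point fails because only 7 of the 16 circle pixels differ from the center, which is why the contiguity requirement of 9 rejects straight edges while accepting corners.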

Raspberry Pi is a trademark of the Raspberry Pi Foundation.



Keypoint Processing

Once keypoints have been found, they are filtered and refined. We run an ORB feature detector on the image (which uses FAST internally to find keypoints). Keypoints are then run through a Harris corner detection algorithm to narrow down the results. From there, the top 500 points are taken and each is given a BRIEF (Binary Robust Independent Elementary Features) descriptor. This is a binary string that uniquely identifies a keypoint and is used for matching keypoints across images. After all keypoints have descriptors, the keypoints in both images are compared using a brute-force algorithm. From the set of matching keypoints, the top 5 to 15 are returned. If too few are returned, we run a different feature detector and start over; if too many are returned, we simply adjust the thresholds accordingly. Finally, the keypoints are sorted according to how good a match they are.

Image Mapping

Once the images have been processed, we are left with a list of keypoints on each image, sorted by match ranking. Using the first image as our base image, we compare each of the other images against it one by one. We take the best keypoint match and set it as the base point on each image. Using the x and y pixel coordinates of the base point on each image, we can find the translation between images. From there, we look at every other keypoint match one at a time; we call the point in focus the secondary point. We draw an imaginary line segment from the base point to the secondary point. Using the distance formula, we find the length of the line on each image and then take the ratio of the lengths; this tells us how the images are scaled relative to each other. Next, we find the slope of each line segment. Using trigonometry, we can turn the slope of the line into its angle relative to the x-axis.
By taking the difference between the angles of the line in the two images, we find how the second image is rotated relative to the first. Using the translation, rotation, and scaling we have found, we take the center point of the first image (the point where the bullet should have hit if the camera is properly mounted) and shift it by the translation found earlier. The translated point is then rotated around the base point by the angle found previously, and scaled appropriately. The end result is that the final point matches the position of the center point on the second image, which is where the second shot landed. This process is repeated using every keypoint, and at the end the results are averaged for greater reliability. In the Android app, these points are plotted on the base image using different colors so that users can tell the order in which the shots were taken.
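The geometry above can be condensed into a short sketch. Given matched keypoint pairs (best match first), it derives a scale and rotation from each secondary pair relative to the base pair, maps the center of the base image into the second image, and averages the results. The function name and point format are our own for this illustration, not the app's actual code.

```python
import math

def map_center(matches, center):
    """matches: list of ((x1, y1), (x2, y2)) keypoint pairs, best first.
    Returns the estimated position of `center` (a point in image 1)
    inside image 2, averaged over every secondary keypoint pair."""
    (b1x, b1y), (b2x, b2y) = matches[0]          # base point in each image
    cx, cy = center[0] - b1x, center[1] - b1y    # center relative to base
    xs, ys = [], []
    for (s1x, s1y), (s2x, s2y) in matches[1:]:
        v1 = (s1x - b1x, s1y - b1y)              # base->secondary, image 1
        v2 = (s2x - b2x, s2y - b2y)              # base->secondary, image 2
        scale = math.hypot(*v2) / math.hypot(*v1)            # length ratio
        angle = math.atan2(v2[1], v2[0]) - math.atan2(v1[1], v1[0])
        # Rotate the relative center by `angle`, scale it, re-anchor at b2.
        xs.append(b2x + scale * (cx * math.cos(angle) - cy * math.sin(angle)))
        ys.append(b2y + scale * (cx * math.sin(angle) + cy * math.cos(angle)))
    return sum(xs) / len(xs), sum(ys) / len(ys)

# Image 2 is image 1 scaled 2x, rotated 90 degrees, and shifted by (5, 5).
matches = [((0, 0), (5, 5)), ((1, 0), (5, 7)), ((0, 1), (3, 5))]
print(map_center(matches, (1, 1)))  # approximately (3.0, 7.0)
```

Averaging over all secondary pairs, as the text describes, damps the effect of any single poorly localized keypoint.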


Testing Processes and Results


Hardware
Characterization Test

The first round of testing was to determine the forces a firearm exerts on a device attached to it. We attached an old Android smartphone to different pistols and measured the forces using the phone's built-in accelerometer. These tests determined two things: phones are not designed to be mounted on firearms, and there is a definite spike in the accelerometer reading when a gun is fired. The phone only functioned with a .22 pistol; the 9 mm handgun was too powerful and would saturate the accelerometer so badly that it stayed pegged at the high point until the phone was restarted. The graph below shows the acceleration the phone experienced when the .22 pistol was fired.

FIGURE 6 - ACCELERATION OF A .22 PISTOL, AS MEASURED BY AN ANDROID SMARTPHONE. (The plot, titled "Acceleration from .22", shows the X-, Y-, and Z-axis acceleration traces on a scale of -25 to 25 m/s².)
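A shot appears in traces like the one above as a short, large-magnitude spike, so a simple threshold test is enough to flag it. The sketch below is illustrative only; the 15 m/s² threshold and the refractory window are made-up values, not the ones from our tests.

```python
def find_shots(samples, threshold=15.0, refractory=5):
    """Return indices where |acceleration| first exceeds the threshold.
    `refractory` samples are skipped after each detection so one shot's
    ringing is not counted as several shots."""
    shots, skip_until = [], 0
    for i, a in enumerate(samples):
        if i >= skip_until and abs(a) > threshold:
            shots.append(i)
            skip_until = i + refractory
    return shots

# Quiet trace with two spikes (e.g. sampled Z-axis acceleration, m/s^2):
trace = [0.3, -0.2, 0.1, 22.0, -18.0, 9.0, 0.4, 0.1, 0.2, -21.5, 3.0]
print(find_shots(trace))  # [3, 9]
```

The refractory window matters because, as the saturated 9 mm readings show, a single discharge can produce several over-threshold samples in a row.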

Testing Phase II

During this phase of testing, we attached the Raspberry Pi system to an SKS rifle using a wood clamp (Figure 5), and then fired the weapon. A rifle was necessary because the Raspberry Pi system was too large and heavy to mount on a pistol. During this test, the Raspberry Pi was configured to take pictures when the accelerometer spiked during the firing process. These images were later transferred to our Android application in order to run them through the image processing algorithms.


Software Testing
Most of the features in the Android application were developed in an isolated environment. Each component was written and tested individually, and once we were satisfied with the results, we added it into the larger application. As more pieces came together, we worked to make sure the final application remained stable and free of bugs.

Bluetooth

Our initial Bluetooth testing was between Android devices. We wrote a test application to search for devices and then pair with one the user selects. After the devices had been successfully paired, the user could send an image from one device while the other device waited to receive it. This helped us find some bugs in the way we were transmitting files, and led to the discovery of a built-in library function designed to send and receive images, greatly reducing the complexity of the code.

Image Processing

Image processing was the most difficult part to test due to the sheer amount of overhead required to set it up. Using the OpenCV libraries, looking through the sample code, and carefully reading the documentation aided us greatly here. We first made a simple test application to take a picture and run keypoint detection on it. From there we could adjust the thresholds to find more or fewer keypoints as needed. Next, we made the app take two pictures, run keypoint detection on both images, and then try to match the keypoints. At first this was very inconsistent, sometimes giving perfect results and other times not. This was again resolved by adjusting threshold values for feature recognition and filtering matches so that only the best results were considered. The final stage in the image processing was to map the images onto the base image. This was tested by following the steps above and then, for the most part, visually comparing the results: we attempted to map the center of each image onto the base image and confirm the result looked correct.
Because the code was running on the Android device, we had limited debug information available on the development console and had to rely largely on logging. After several iterations of testing the mapping algorithm, we arrived at one that worked as expected.

Gallery

Testing the gallery feature consisted of loading several pre-captured images into the application and making sure the internal database was set up properly to store and view them. Here, we ran into issues with app memory growth and image file sizes, which were eventually resolved with image compression and by limiting how many images could be stored.

Lessons Learned
- Bluetooth may not be the best way to transfer images to the phone.
  - Power consumption is too high; we needed an external power supply to achieve a consistent connection.


  - The data transfer rate was very slow. For reasons we could not determine, a transfer that should have taken approximately 6 seconds for a full picture instead took between 30 seconds and 1 minute to complete.
- The Raspberry Pi is not feasible as a final device.
  - Too heavy
  - Too bulky
  - Cost is too high for a production device

Appendix I: Operation Manual


Mounted Device
1. Charge the battery.
2. Boot the Raspberry Pi. This may require connecting a display and a keyboard.
3. Start the application that reads accelerometer data and captures images.
4. Collect images.

Android Device
From the home screen of the app, users are presented with three options: Start, Gallery, and Configure.

Starting a new session

- Press the Start button on the home screen. Users are prompted for information about their current session.
- Enter as much information in the boxes as desired.
- At the top of the screen is the currently connected Bluetooth device. Tap the box to select a new device or to set up the connection.
- At the bottom of the screen is a Calibrate button. Tap it to begin the session.

Bluetooth Connection

- Paired devices are listed and can be selected directly.
- If no paired devices are found, devices can be searched for.
- Once devices have been found, tap to pair, then confirm to return.

Calibration

- Press the Collect button to receive an image via Bluetooth.
- Once an image is received, it is displayed with a crosshair. Tap the screen to select the area of interest.
- If the crosshair points to the correct area, press Accept. Otherwise, press Retry.
- This crops the image to reduce file size and search time.

Collecting Photos

- Press the Start button to begin waiting for photos via Bluetooth.

- Press the Finish button when the shooting session is done.
- Wait for the processing to finish. When it completes, the image will appear with the color-coded shots placed on the target.
- Tap the Done button to view the images in the gallery.

Viewing the gallery

The gallery can be accessed directly from the home screen or at the end of a shooting session.

- Press and hold the screen to open the saved gallery photos.
- Select an image to view more details (the most recent session is selected automatically if coming from a session).
- The image description, time, and name are displayed along with the shots taken.
- Tap the arrows to view individual shots from that session.
  - Press the Gun Movement button to view the accelerometer readings from any shot.

Accelerometer Data

- Once launched, select a replay speed.
- Press Replay to view the movement of the firearm up until the picture was taken.
- This can be used to identify aiming tendencies in the moments leading up to pulling the trigger.

Configuring options

- From the home screen, press the Configure button.
- Set personal options here, outside of a session configuration.

Appendix II: Alternative Versions of Design


Initial Ideas
Our initial plan, before doing much research into the requirements of the project, was to purchase a device similar to an Arduino. These microcontroller-based devices could provide us the necessary functionality and were also small enough to nearly meet our size requirement. However, these devices were outside of our project budget. Another idea we considered was to bypass the camera altogether. We debated the feasibility of using only sensors like gyroscopes, accelerometers, and possibly a range finder or radar. The mounted device would know its position and orientation relative to the target, and from there we would be able to mathematically model the path of the bullet and closely estimate where it hit. This added its own challenges and introduced a whole new list of factors to consider. We would have to take into account the muzzle velocity, the wind speed if outside, the distance to the target, and the height and size of the target. Additionally, since there would not be a camera, no actual images would be shown to the user. In the end, we discarded this idea because we thought it did not meet the true goal of the project.

Secondary Design
To make our project less expensive, we decided to construct a list of compatible parts that would provide the functionality we required. These parts were:

Part                                                               Price
OmniVision OV2640 CMOS Camera                                      $22.90
Bluegiga WT41-A-AI4 Bluetooth Module                               $35.00
Microchip USB Starter Kit III                                      $59.99
Microchip PIC32 I/O Expansion Board                                $72.00
Microchip 9V Wall Mount Power Supply                               $20.00
STMicroelectronics STEVAL-MKI087V1 Accelerometer Evaluation Board  $26.25
3.7V Li-Ion Battery                                                $4.99
Total                                                              $241.13

TABLE 1 - INITIAL PARTS LIST

As we began putting the parts together to build the prototype, we noticed that the documented pin layout on the PIC32 USB Starter Kit III was incorrect, so we had to determine which pins had connections to the board, as well as their locations on the I/O expansion board. After that was completed, we figured out how to trigger interrupts from the accelerometer when threshold values were exceeded. Once the accelerometer was working, we connected the Bluetooth module and began attempting to connect it to a cellphone. The module was visible to our phones, but unable to connect. We attempted to run the device with a debug print statement in order to figure out what was wrong. According to the USB Starter Kit III datasheet, the board includes on-board circuitry for full debug and programming capabilities.3 However, the library routine responsible for printing debug information was not working. After this discovery, we submitted multiple service tickets to Microchip through their customer service portal. The original correspondence between our team and Microchip did not yield much information, so we decided to pursue an alternative proof of concept (see below) until we received a more helpful response. The final response given to us by Microchip, received on April 23rd, was:

"We've heard back from our department on this issue. The PIC32 USB Starter Kit III does not support DBPRINTF() due to only having basic JTAG (no APPIN/OUT). The PIC32 USB Starter Kit II (PIC32MX795F512L) and the PIC32 Starter Kit (I) (PIC32MX360F512L) should both support this feature if needed."

Proof of Concept
When we ran into the first round of problems with the Microchip microcontroller, we decided to split up the efforts of the hardware group to work on designing a second device. This device was designed to serve as a proof of concept, as well as a way to pass this project on to another team. We chose a Raspberry Pi because it has been in production for well over a year and has significant documentation and community support. We purchased a Raspberry Pi bundled with a camera attachment; to finish the design, we connected our existing accelerometer to the GPIO pins.

3. PIC32 USB Starter Kit III User's Guide (Revision B), http://ww1.microchip.com


Part                                                               Price
Raspberry Pi Model B Rev. 2.0 (512 MB) + 5 MP Camera Board Module  $68.99
Duracell Instant USB Charger                                       $17.49
Raspberry Pi Case                                                  $9.00
Raspberry Pi Camera Module Mount                                   $9.99
Transcend 8GB Class 10 SDHC Flash Memory Card                      $7.95
Total                                                              $113.42

TABLE 2 - RASPBERRY PI PARTS LIST

Appendix III: Other Considerations


Cameras
We had to choose between a CMOS and a CCD camera sensor. The reasons we liked the idea of a CCD camera were:

- Higher image quality
- Global shutter instead of rolling shutter
  - A global shutter captures all pixels at once
  - A rolling shutter captures the image line by line

However, there were a few drawbacks to using a CCD sensor:

- Price
  - CCD sensors cost hundreds of dollars
  - CMOS sensors can cost as little as five dollars
- Size
  - CCD sensors are significantly larger than CMOS sensors (about 2-3 times the size)
- Lens requirement
  - Lenses are even more expensive, and if the sensor does not come with one, we would have to purchase one separately.

Image Processing
We had to make another choice when it came to image processing. The biggest dilemma was speed versus accuracy:

- The software will be deployed on an Android device, so it must be lightweight and efficient.
- A smaller image size means faster processing, but lower resolution.
- Lower resolution means less accuracy.
  - Accuracy is limited by the size of the pixels (i.e., how much data is represented in one pixel).


- Likewise, larger images take longer to send over Bluetooth, slowing the process down even further.
