SHOWCASE: RELATIVE INDOOR FIREFIGHTER LOCATOR, SENIOR DESIGN PROJECT (2010)
Heterogeneous Project: Android and Java AWT Combined.
Technologies Used: Android, Java SE with AWT, Bluetooth RFCOMM, Electronic Compass and Accelerometer Sensors.
Collaborators: Allan Pinero and Christopher Sizelove.
Advisors: Dr. Henry Helmken, Dr. Bassem Alhalabi, and Dr. Ravi Shankar of FAU.
C. Norona, A. Pinero, and C. Sizelove, “Relative Indoor Firefighter Locator,” senior design project for Engineering Design II, Spring 2010 semester, Florida Atlantic University. Profs. Henry Helmken and Bassem Alhalabi.
When reflecting on past projects I have worked on, I could not help but remember the Relative Indoor Firefighter Locator (RIFL). This was a senior design project in which my colleagues, Allan Pinero and Christopher Sizelove, and I implemented a proof of concept for a first responder/firefighter locating system. At the time, the National Institute of Standards and Technology (NIST) was soliciting proposals from the public for such a system and had even collaborated with Worcester Polytechnic Institute on the subject [1 and 2]. Admittedly, our ambitions were not that grand, since we were college students struggling to find ideas for a senior design project. Drawing inspiration from NIST’s solicitation, we set out to implement our own version of a firefighter locator.
Our first struggle was how to design the RIFL to keep track of firefighters as they traverse a building during a rescue operation. NIST had already proposed several schemes, including RFID triangulation, breadcrumb-based tracking, and possible use of GPS [I could not find the specific website where I found this. -CN]. Our initial efforts went into investigating these candidate schemes and figuring out which would be the most feasible to implement in the time we were allotted. After some research, we concluded that the breadcrumb-based scheme was the most feasible, both because it could be built in the allotted time and because we possessed the skill set to actually implement such a system. With the scheme settled, we needed to determine what information to gather from the mobile devices and transmit to the base station. Other considerations included how to present that information, what kind of program would interact with the mobile devices, and what data transmission medium to use. The team and I worked together to produce these specifications and designs.
Eventually, the design evolved to consist of a base station and multiple mobile devices connected in a Bluetooth piconet. Each mobile device was to be fitted with an array of sensors (i.e., a compass and accelerometers). The mobile devices would connect to the base station via Bluetooth and transmit the data gathered from the accelerometer and the compass (magnetic sensor), providing enough information to implement the breadcrumb-tracking functionality on the base station. We also intended to implement features such as detecting whether a first responder had been subjected to extreme or violent physical stimuli and gathering biometric data (e.g., heart rate and body temperature) to relay back to the base station. Soon enough, it was time to figure out how to implement and deploy such a system.
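To make that data flow concrete, here is a minimal sketch of the kind of sensor message a mobile device might relay to the base station. The class name, field names, and comma-delimited encoding here are my own illustrative assumptions, not the actual wire format we used; the real code is in the project source linked at the end of this post.

```java
// Hypothetical sensor sample relayed from a mobile device to the base station.
// The field names and comma-delimited encoding are illustrative assumptions.
public class SensorSample {
    public final double magnitude; // movement magnitude, from the accelerometer
    public final double heading;   // compass heading in degrees, from the magnetic sensor

    public SensorSample(double magnitude, double heading) {
        this.magnitude = magnitude;
        this.heading = heading;
    }

    // Encode the sample as a simple delimited string for transmission over RFCOMM.
    public String encode() {
        return magnitude + "," + heading;
    }

    // Decode a received message back into a sample on the base station side.
    public static SensorSample decode(String message) {
        String[] parts = message.split(",");
        return new SensorSample(Double.parseDouble(parts[0]),
                                Double.parseDouble(parts[1]));
    }
}
```

The point of a plain-text payload like this is that the base station can parse each breadcrumb sample independently of how the Bluetooth session was established.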
We could not think of a better platform for the mobile devices than Android phones, since many phones manufactured for that platform came with the hardware sensors built in. Fortunately for us, we were given access to half a dozen HTC G1s (thanks to Dr. Ravi Shankar’s Center for Systems Integration at FAU), which met the specifications we were looking for in a mobile device. For our base station we decided to use a laptop with a Bluetooth dongle and implement a Java desktop application (an applet, to be more specific) that would manage the connections to the mobile devices and display the data accordingly.
With the design seemingly complete, we set out to begin our implementations. Allan Pinero began implementing the applet that would manage the Bluetooth session with the mobile devices, gather data from the Bluetooth clients (the mobile devices, in our case), compute the spatial vectors for breadcrumb tracking, and present the data to the user of the base station. I set out to implement the mobile app that would connect to and initiate a Bluetooth session with the base station (the Bluetooth server) and broadcast the relevant sensor information. Christopher Sizelove designed the experimental tests and drafted the documentation required of us, which we later included in our final presentation and final report.
Our efforts were not without difficulty. Although progress was being made on the mobile app and the experimental designs, Allan reported that he was having difficulty establishing Bluetooth functionality in the Java applet. Recognizing this as one of our more difficult and time-consuming tasks, Allan and I met regularly to ensure that our most important task was accomplished. We taught ourselves the conventions of Bluetooth communication, researched existing Bluetooth Java libraries we could utilize, and finally managed to implement the Bluetooth server functionality.
As much as conquering the most difficult task in our project left us with a sense of achievement, we were still shy of implementing all the features we wanted. Unfortunately, we only had enough time to implement basic breadcrumb tracking for one mobile device. On the HTC G1, we had access to the BMA150 triaxial accelerometer and the AK8973 tri-axis electronic compass [4 and 5]. Since we were trying to represent a user’s movement as a vector, which can be decomposed into a magnitude component and a direction component, we represented those components using the accelerometer and the electronic compass, respectively: the accelerometer gave us the magnitude and the electronic compass gave us the direction. All of this information was then put into a Bluetooth message and relayed to the base station. Once the base station received the message, it could represent the movement on a 2-D drawable canvas object in the Java applet. The code below shows how we used the BluetoothRFCOMM object, “pcServerCOMM,” which we created from an aggregate of objects provided by the BlueCove Java library, to get the sensor data from both the accelerometer and the electronic compass of the mobile device:
rawMagnitude = (pcServerCOMM.magnitude * pcServerCOMM.magnitude);
displacementX = (int)(Math.abs(rawMagnitude) * 0.83); // Scale factor
displacementY = (int)(Math.abs(rawMagnitude) * 0.83); // Scale factor
Table 1: Code snippet of translating the sensor data into Cartesian coordinates for Base Station UI.
Using the displacementX and displacementY variables, we were able to represent the horizontal displacement (not yet accounting for altitude) of a user’s movement.
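For the curious, here is a sketch of how a magnitude and a compass heading combine into displacement components like those. The heading parameter and the rounding are my own illustration of the underlying trigonometry (our actual code handled direction separately from the snippet above); only the 0.83 scale factor is taken from that snippet.

```java
public class Breadcrumb {
    static final double SCALE = 0.83; // empirical scale factor from the snippet above

    // Decompose a movement of the given magnitude along a compass heading
    // (degrees, 0 = north, 90 = east) into horizontal displacement components.
    public static int[] displacement(double magnitude, double headingDegrees) {
        double r = Math.toRadians(headingDegrees);
        int dx = (int) Math.round(Math.abs(magnitude) * Math.sin(r) * SCALE);
        int dy = (int) Math.round(Math.abs(magnitude) * Math.cos(r) * SCALE);
        return new int[] { dx, dy };
    }
}
```

Each breadcrumb is then just the previous plotted position plus (dx, dy), which is what lets the base station redraw the user’s path as samples arrive.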
Image: The user’s position (big circle) is shown moving in direct relation to the mobile device’s movement from its original position (little circle) (Source: C. Norona, A. Pinero, & C. Sizelove).
With the basic breadcrumb functionality implemented, we set out to test our RIFL system. In a perfect, scientifically sound scenario we would have been afforded the opportunity to run trial upon trial and gather statistical data on the performance of the system. However, as I recall, we were only hours away from our final presentation, so basic functionality tests had to suffice. The only problem with this compromise was that the live demonstration in our final presentation proved quite suspenseful: at first, the base station and the mobile device had a hard time connecting. Before long, though, the connection was made and the base station’s display showed the big circle dancing around the screen, capturing Chris’ every movement inside the classroom. We successfully presented a working system to our Engineering Design class, and our academic records would show that each of us earned an A for our efforts.
Our project has been made open source and is freely available online at https://code.google.com/p/rifl/. There you can also find more information on our designs and the project’s relevant documents.
[1] M. Dorsey, “WPI Receives $1 Million to Develop System Aimed at Preventing Firefighter Injuries and Deaths,” in Worcester Polytechnic Institute News Releases – 2009-2010 [Online], Nov. 30, 2009. Available: http://www.wpi.edu/news/20090/fire.html
[2] E. Ballam, “Workshop Tracks Progress on Firefighter Locator,” in Firehouse Magazine [Online], Aug. 5, 2011. Available: http://www.firehouse.com/news/10461822/workshop-tracks-progress-on-firefighter-locator
[3] E. Ballam, “Firefighter Locator Test a Success at WPI Workshop,” in Firehouse Magazine [Online], Aug. 8, 2012. Available: http://www.firehouse.com/news/10757117/firefighter-locator-test-a-success-at-wpi-workshop
[4] “BMA150 – Digital, Triaxial acceleration sensor,” datasheet [Online], Jun. 2010. Available: http://zh.bosch-sensortec.com/content/language4/downloads/BST-BMA150-DS000-07.pdf
[5] “AK8973 – 3-axis Electronic Compass,” datasheet [Online], Jan. 2007. Available: http://pdf1.alldatasheet.com/datasheet-pdf/view/219477/AKM/AK8973.html
[6] “BlueCove | Free software downloads at SourceForge.net” [Online], Dec. 27, 2013. Available: http://sourceforge.net/projects/bluecove/
[7] C. Norona, A. Pinero, and C. Sizelove, “RIFL – PC-to-Mobile Bluetooth RFCOMM Connectivity with Sensor Data Transmission,” Google Code Project [Online], May 2010. Available: https://code.google.com/p/rifl/
Mobile Android Project: Resistor Decoder (ResDec)
Technologies used: OpenCV for Image Processing (color quantization via native C++), Android SDK and NDK, Camera, JUnit for Testing.
Research citation: Akman, M., Norona, C., Poe, J., and Alonso, Jr., M. “ResDec: A Mobile Resistor Decoder,” presented at the 2013 SACNAS National Conf., San Antonio, TX, 2013.
One of the projects we have at the Computing Research Lab is the Resistor Decoder application, or ResDec as it is nicknamed. The purpose of ResDec is to use image processing on an Android mobile device to analyze and decode the value of a resistor from an image taken by the device’s camera. A personal friend of the lab happens to be color-blind, which is a significant and personal motivating factor for us in pursuing this project; beyond that, other practicing or aspiring electrical and electronics engineers who have to negotiate this hardship can benefit from such an app. Additionally, we recognize that the color quantization logic the project depends on can be reused by another project within the lab, the Skin Cancer Identification System, to analyze the colors present in an image of a skin lesion. All in all, we believe this project is worth pursuing given its intent and the intellectual merit it will contribute to the science of computing and image processing. The student who primarily works on this project is Matias Akman, with mentoring from Dr. Miguel Alonso, Dr. James Poe, and myself.
There are two functions that the ResDec app currently performs. One is recognition of the resistor, to localize and reduce the overall image workspace. The other is analysis of the region of interest, which involves color quantization and the subsequent color band segmentation. Currently, the former function is fully implemented. This was achieved using classifiers we created through a process called Haar training. In this process, many images are analyzed by a program provided by OpenCV specifically designed to create a “cascade of boosted classifiers.” In the simplest sense, and in the context of image or pattern recognition, classifiers are used to identify or discriminate certain characteristics in order to determine whether the information being analyzed represents the very thing the classifier is intended to find, identify, or, as the name suggests, classify. In the context of ResDec, the classifier produced as the output of Haar training is one that will distinguish a pattern of pixels representing a resistor. The resistor recognition logic of this project was primarily based on Naotoshi Seo’s tutorial on OpenCV Haar training, except that instead of faces we used images of resistors as the positive images [1]. For more information on classifiers, I recommend reading Ömer Cengiz ÇELEBİ’s chapter on pattern classification in his online tutorial [2].
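To give a feel for how a cascade of boosted classifiers works at detection time, here is a toy sketch of the evaluation side. The stage structure, cutoffs, and weights below are invented purely for illustration; OpenCV’s real cascades evaluate Haar-like features over integral images, but the early-rejection idea is the same.

```java
public class ToyCascade {
    // One boosting stage: a weighted vote of simple feature tests against a
    // stage threshold. Feature i votes weight[i] whenever window[i] >= cut[i].
    public static class Stage {
        public final double[] cut;    // per-feature cutoffs (illustrative)
        public final double[] weight; // per-feature vote weights (illustrative)
        public final double threshold;

        public Stage(double[] cut, double[] weight, double threshold) {
            this.cut = cut; this.weight = weight; this.threshold = threshold;
        }

        public boolean passes(double[] window) {
            double sum = 0;
            for (int i = 0; i < cut.length; i++)
                if (window[i] >= cut[i]) sum += weight[i];
            return sum >= threshold;
        }
    }

    // A window is declared a detection only if it survives every stage;
    // most negative windows are rejected cheaply by the early stages.
    public static boolean classify(Stage[] stages, double[] window) {
        for (Stage s : stages) if (!s.passes(window)) return false;
        return true;
    }
}
```

In training, boosting picks the features, weights, and thresholds so that each stage keeps nearly all positives while discarding a large fraction of negatives; chaining the stages is what makes scanning every window of an image affordable.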
More recently, Matias and I have been working to implement the color quantization logic that will help us segment the colors of the resistor bands. Color quantization is an image processing technique used to reduce the number of colors within an image. Consider pixel values from a gray-scale image, which range from 0 to 255. To reduce the color space from 256 distinct colors to just a few values, we use a look-up table. The look-up table keeps track of the various ranges a pixel value can fall in: if the value falls within a given range, the pixel is reassigned a specified color (another pixel value) that corresponds to that range. To better describe this functionality, see the example piecewise function below:
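Since the piecewise function image has not survived in this post, here is a minimal sketch of the same look-up-table idea for a gray-scale image. The four bucket boundaries and the representative output values below are illustrative choices of mine, not the values ResDec actually uses.

```java
public class GrayQuantizer {
    // A 256-entry look-up table mapping each gray value to the representative
    // value of the bucket it falls in (four equal buckets here, for illustration).
    static final int[] LUT = new int[256];
    static {
        for (int v = 0; v < 256; v++) {
            if (v < 64)       LUT[v] = 32;
            else if (v < 128) LUT[v] = 96;
            else if (v < 192) LUT[v] = 160;
            else              LUT[v] = 224;
        }
    }

    // Quantize every pixel with a single table look-up; this is the per-pixel
    // loop mentioned in the text, reduced to an array index per pixel.
    public static void quantize(int[] pixels) {
        for (int i = 0; i < pixels.length; i++) pixels[i] = LUT[pixels[i]];
    }
}
```

Because the table is built once, the per-pixel cost is just one array access, which matters when the loop runs over every pixel of a camera frame.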
The result of applying this quantization is the same image but with reduced colors, allowing for simpler segmentation of the color bands that run across a resistor. An example of this process as applied to skin lesions is described in Ogorzałek et al.’s “Modern Techniques for Computer-Aided Melanoma Diagnosis” and Caleiro et al.’s “Color-spaces and color segmentation for real-time object recognition in robotic applications” [3 and 4].
Image source: Color quantization article on The Glowing Python (http://glowingpython.blogspot.com/2012/07/color-quantization.html).
Originally, we attempted to quantize the colors in Java at the Dalvik Virtual Machine (DVM) level (similar to the Java Virtual Machine (JVM), but optimized for mobile platforms) in Android. However, this proved sluggish, since our quantization algorithm is processing-intensive and has to loop through potentially hundreds, if not thousands, of elements in an array (the pixels). As suggested by Dr. Alonso and Dr. Poe, we decided to move this logic down to the native C/C++ environment, which handles this kind of work much more efficiently and quickly. As of this writing, Matias has implemented the loop that carries out the color quantization process explained above.
It is our hope that by the end of this semester, or the beginning of the Spring 2014 semester, we will be able to complete the implementation of the color quantization process. This will mark the completion of one of our milestones for a project designed to aid engineers in identifying resistors. In addition, this achievement will provide another project within the lab, the Skin Cancer Identification System, a means to analyze the colors of an image representing a skin lesion. I will do my best to keep this blog updated with information on the Resistor Decoder project as it arrives.
[1] N. Seo, “Tutorial: OpenCV haartraining (Rapid Object Detection With a Cascade of Boosted Classifiers Based on Haar-like Features)” [Online]. Available: http://note.sonots.com/SciSoftware/haartraining.html#t334b6a
[2] Ö. C. ÇELEBİ, “Neural Networks Tutorial – Chapter 1 Pattern Classification” [Online]. Available: http://www.byclb.com/TR/Tutorials/neural_networks/Index.aspx
[3] M. Ogorzałek, L. Nowak, G. Surówka, and A. Alekseenko, “Modern Techniques for Computer-Aided Melanoma Diagnosis,” in Melanoma in the Clinic – Diagnosis, Management, and Complications of Malignancy, Prof. Mandi Murph (Ed.), Intech, 2011, ch. 5, sec. 7, pp. 72–73 [Online]. Available: http://cdn.intechopen.com/pdfs/17899/InTech-Modern_techniques_for_computer_aided_melanoma_diagnosis.pdf
[4] P. M. R. Caleiro, A. J. R. Neves, and A. J. Pinho, “Color-spaces and color segmentation for real-time object recognition in robotic applications,” Revista do DETUA, June 2007.
It was not until my wife and I finally succeeded in securing a lease on our first apartment, after all the effort of standing out among many other potential renters, that I realized the diligence required for that process is the same diligence I will need in securing my next job and, hopefully, finding a good, stable career. That is the main reason I started this blog. But I could not help noticing that so many other programmers have done the same thing: just do a Google search for “Yet Another Programmer” and watch how Google suggests you are looking for blogs. After running the search, the first page of results is crowded with relevant material. It seems I am not the only one sharing my expertise or, perhaps, attempting to win a potential employer’s favor.
Hopefully, I can make this blog stand out from those in the sense that the content I will be posting is based on my current (but not confidential) and past work, and I will attempt to implement examples of the skills that potential employers are looking for. If someone else has already made such an implementation, then I will do it again with my own spin, my own explanation, and my own programming and design style, while giving credit to the original author. It is my hope that I will be able to impart knowledge to those who encounter what I hope will be numerous posts on my newly created blog, and that it will serve curious programmers and software engineers in the same way that so many others have helped me in my past implementations. It is time for me to pay it forward.