Showcase: Relative Indoor Firefighter Locator, Senior Design Project (2010)


Heterogeneous Project: Android and Java AWT Combined.

Technologies Used: Android, Java SE with AWT, Bluetooth RFCOMM, Electronic Compass and Accelerometer Sensors.

Collaborators: Allan Pinero and Christopher Sizelove.

Advisors: Dr. Henry Helmken, Dr. Bassem Alhalabi, and Dr. Ravi Shankar of FAU.

Research citation:

C. Norona, A. Pinero, and C. Sizelove, “Relative Indoor Firefighter Locator,” senior design project for Engineering Design II, Spring 2010 semester, Florida Atlantic University. Profs. Henry Helmken and Bassem Alhalabi.



When I was reflecting on past projects I have worked on, I could not help but remember the Relative Indoor Firefighter Locator (RIFL). This was a senior design project in which my colleagues, Allan Pinero and Christopher Sizelove, and I implemented a proof of concept for a first responder/firefighter locating system. At the time, the National Institute of Standards and Technology (NIST) was soliciting proposals from the public for such a system and had even collaborated with the Worcester Polytechnic Institute on the subject [1], [2]. Admittedly, our ambitions were not as grand as that, since we were college students struggling to find ideas for a senior design project. Drawing inspiration from NIST’s solicitation, we set out to implement our own version of a firefighter locator.

Our first struggle was how to design the RIFL to keep track of firefighters as they traverse a building during a rescue operation. NIST had already proposed several schemes, including RFID triangulation, breadcrumb-based tracking, and possible use of GPS [I could not find the specific website where I found this. -CN]. Our initial efforts went into investigating these schemes and determining which would be the most feasible to implement in the time we were allotted. After some research we concluded that the breadcrumb-based scheme would be the most feasible, both in terms of the allotted time and in terms of the skill set we possessed to actually implement such a system. With the scheme settled, we needed to determine what information to gather from the mobile devices and transmit to the base station. Other considerations included how to present that information, what kind of program would interact with the mobile devices, and what data transmission medium to use. The team and I worked together to produce these specifications and designs.

Eventually, the design evolved to consist of a base station and multiple mobile devices connected in a Bluetooth piconet. Each mobile device was to be fitted with an array of sensors (i.e., a compass and accelerometers). The mobile devices would connect to the base station via Bluetooth and transmit the data gathered from the accelerometer and the compass (magnetic sensor), providing enough information to implement the breadcrumb tracking functionality on the base station. We also intended to implement features such as detecting whether a first responder had been subjected to extreme or violent physical stimuli and gathering biometric data (e.g., heart rate, body temperature) to relay back to the base station. Soon enough it would be time to figure out how to implement and deploy such a system.
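The per-device payload itself was simple. As a rough sketch of what one such sensor message could look like (the class name, fields, and serialization here are my own illustration, not the project's actual wire format), sent over the RFCOMM stream with plain Java I/O:

```java
import java.io.*;

// Hypothetical sketch of a per-device sensor payload; the class, field
// names, and layout are illustrative, not the project's actual format.
public class SensorMessage {
    final int deviceId;     // which firefighter's device sent the reading
    final float magnitude;  // movement magnitude derived from the accelerometer
    final float bearing;    // compass heading in degrees

    SensorMessage(int deviceId, float magnitude, float bearing) {
        this.deviceId = deviceId;
        this.magnitude = magnitude;
        this.bearing = bearing;
    }

    // Serialize onto any OutputStream (in practice, the RFCOMM stream).
    void writeTo(OutputStream out) throws IOException {
        DataOutputStream d = new DataOutputStream(out);
        d.writeInt(deviceId);
        d.writeFloat(magnitude);
        d.writeFloat(bearing);
        d.flush();
    }

    // Deserialize on the base-station side, in the same field order.
    static SensorMessage readFrom(InputStream in) throws IOException {
        DataInputStream d = new DataInputStream(in);
        return new SensorMessage(d.readInt(), d.readFloat(), d.readFloat());
    }
}
```

Keeping the message to a fixed handful of primitive fields like this keeps the Bluetooth traffic small, which matters when several devices share one piconet.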



We could not think of a better platform for the mobile devices than Android phones, since many phones manufactured for that platform come with the necessary hardware sensors built in. Fortunately for us, we were given access to half a dozen HTC G1s (thanks to Dr. Ravi Shankar’s Center for Systems Integration, FAU), which met the specifications we were looking for in a mobile device. For our base station we decided to use a laptop with a Bluetooth dongle and implement a Java desktop application (an applet, to be more specific) that would manage the connections to the mobile devices and display data accordingly.

With the design seemingly complete, we set out to begin our implementations. Allan Pinero began implementing the applet that would manage the Bluetooth session with the mobile devices, gather data from the Bluetooth clients (the mobile devices in our case), compute the spatial vectors for breadcrumb tracking, and present the data to the user of the base station. I set out to implement the mobile app that would connect to and initiate a Bluetooth session with the base station (the Bluetooth server) and broadcast the relevant sensor information. Christopher Sizelove designed the experimental tests and drafted the documentation required of us, which we later included in our final presentation and final report.

Our efforts were not without difficulty. Although progress was being made on the mobile app and the experimental designs, Allan reported that he was having difficulty establishing Bluetooth functionality in the Java applet. Recognizing this as one of our more difficult and time-consuming tasks, Allan and I met regularly to ensure that our most important task was achieved. We taught ourselves the conventions of Bluetooth communication, researched existing Bluetooth Java libraries so that we could utilize them, and finally managed to implement the Bluetooth server functionality.

As much as conquering the most difficult task in our project left us with a sense of achievement, we were still shy of implementing all the features we wanted. Unfortunately, we only had enough time to implement basic breadcrumb tracking for one mobile device. On the HTC G1, we had access to the BMA150 triaxial accelerometer and the AK8973 three-axis electronic compass [4], [5]. Since we were trying to represent a user’s movement as a vector, which can be decomposed into a magnitude component and a direction component, we mapped each component to a sensor: the accelerometer gave us the magnitude and the electronic compass gave us the direction. This information was then put into a Bluetooth message and relayed to the base station, which, upon receiving the message, represented the movement on a 2-D drawable canvas object in the Java applet. The code below shows how we used the BluetoothRFCOMM object, “pcServerCOMM,” which we created from an aggregate of objects provided by the BlueCove Java library, to translate the sensor data from the accelerometer and electronic compass of the mobile device [6]:

rawMagnitude = (pcServerCOMM.magnitude * pcServerCOMM.magnitude);
displacementX = (int) (Math.abs(rawMagnitude)
        * Math.cos(Math.toRadians(pcServerCOMM.bearing - 90))
        * 0.83); // Scale factor
displacementY = (int) (Math.abs(rawMagnitude)
        * Math.sin(Math.toRadians(pcServerCOMM.bearing - 90))
        * 0.83); // Scale factor

Table 1: Code snippet of translating the sensor data into Cartesian coordinates for Base Station UI.

Using the displacementX and displacementY variables we were able to represent the horizontal displacement (not yet accounting for altitude) of a user’s movement.
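The breadcrumb trail itself is then just a running accumulation of those displacement steps. A minimal sketch of that bookkeeping (the class and method names are mine, not the original applet's):

```java
import java.awt.Point;
import java.util.ArrayList;
import java.util.List;

// Illustrative breadcrumb tracking: each (displacementX, displacementY)
// step is accumulated into a path of positions the base station can draw.
public class BreadcrumbTrail {
    private final List<Point> path = new ArrayList<>();

    public BreadcrumbTrail(int startX, int startY) {
        path.add(new Point(startX, startY)); // the user's original position
    }

    // Apply one displacement step computed from a sensor message.
    public void step(int displacementX, int displacementY) {
        Point last = path.get(path.size() - 1);
        path.add(new Point(last.x + displacementX, last.y + displacementY));
    }

    public Point current() {
        return path.get(path.size() - 1);
    }

    public List<Point> getPath() {
        return path;
    }
}
```

Drawing every point in `getPath()` gives the "breadcrumbs," while `current()` is the big circle marking the user's latest position.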


Image: The user’s position (big circle) is shown moving in direct relation to the mobile device’s movement from its original position (little circle) (Source: C. Norona, A. Pinero, & C. Sizelove).

With the basic breadcrumb functionality implemented we set out to test our RIFL system. In a perfect, scientifically sound scenario we would have been afforded the opportunity to run trials upon trials of tests and gather statistical data on the performance of the system. However, as I recall, we were only hours away from our final presentation so basic functionality tests had to suffice. The only problem with compromising on this outcome was that the live demonstration in our final presentation proved to be quite suspenseful—at first, the base station and the mobile device had a hard time connecting. It was not long before the connection was finally made and the display of the base station showed the dancing big circle traversing around the screen, capturing Chris’ every movement inside the classroom. We successfully presented a working system to our Engineering Design class and our academic records would show that each of us earned an A for our efforts.

Our project has been made open source and freely available online at [7]. You can also find more information there on our designs and relevant documents of the project.


[1] M. Dorsey, “WPI Receives $1 Million to Develop System Aimed at Preventing Firefighter Injuries and Deaths” in Worcester Polytechnic Institute News Releases – 2009-2010 [Online], Nov. 30th, 2009, Available:

[2] E. Ballam, “Workshop Tracks Progress on Firefighter Locator” in Firehouse Magazine [Online], Aug. 5th, 2011, Available:

[3] E. Ballam, “Firefighter Locator Test a Success at WPI Workshop” in Firehouse Magazine [Online], Aug. 8th, 2012, Available:

[4] “BMA150 – Digital, Triaxial acceleration sensor” datasheet [Online], Jun. 2010, Available:

[5] “AK8973 – 3-axis Electronic Compass” datasheet [Online], Jan. 2007, Available:

[6] “BlueCove | Free software downloads at” [Online], Dec. 27, 2013, Available:

[7] C. Norona, A. Pinero, and C. Sizelove, “RIFL – PC-to-Mobile Bluetooth RFCOMM Connectivity with Sensor Data Transmission,” Google Code Project [Online], May 2010, Available:


Tutorial: AVR-GCC for Programming Atmel/Arduino Microprocessors


Required proficiencies to use or understand this tutorial:

  • C and assembly programming. Refer to Zed Shaw’s Learn C the Hard Way. For assembly programming with Atmel, search among these results for relevant tutorials:

  • Familiarity with Integrated Development Environments (IDEs), specifically for C development.

  • Basic knowledge of embedded computers or computer architecture.

Required hardware:

  • Development board or microcontroller built around an Atmel AVR processor (e.g., an Arduino).

  • USB-to-ISP Programmer such as the USBTinyISP.

  • A computer to create the programs.

NOTE for Linux users: This tutorial is specifically for WinAVR (Windows), but do not despair! The concept is similar; only some details change in your case. Instead of referring to my tutorial, I strongly encourage you to look into Limor “Lady Ada” Fried’s tutorial on using AVR tools in Linux.

Technologies/practices to be used: AVR-GCC C compiler, creating/modifying C makefiles, Programmer’s Notepad IDE, customizing an IDE to automate repetitive, basic development tasks.

Collaborators: Dr. Ravi Shankar.

This brief tutorial presents how to use the set of C development programs that come with WinAVR and explains how to set up an integrated development environment (IDE) for developing C or assembly programs for Atmel AVR microprocessors. WinAVR is a collection of programs and libraries used to compile C code for the Atmel AVR architecture. This is useful when you want to develop for microcontrollers such as the Arduino. It also allows you, as the programmer, more flexibility with the hardware and the low-level digital devices you may need to contend with in more complicated embedded implementations. Before I move on I must inform you that this tutorial assumes you possess the aforementioned proficiencies as well as the hardware. Otherwise, you are probably in for a bad time…unless you enjoy reading technical tutorials for fun!

Overall, the installation and use of these tools are straightforward. Initially, you will only need to download WinAVR and then configure your IDE, preferably one with C development tools such as Programmer’s Notepad or Eclipse with CDT. After this, your efforts will consist almost entirely of a recurring cycle of programming, testing, and uploading and executing your program on the target device that houses the Atmel microprocessor. The attached PowerPoint (AVRGCCLecture.ppt) and one of the referenced documents (install_config_winavr.pdf) go into greater detail on how to achieve this.

It is beyond the scope of the attached PowerPoint and document to show you how to develop C programs or what Atmel-specific data types or variables are available to you. For the former, you will have to conduct your own research on C programming or enroll in courses to learn it. Personally, I suggest consulting the online site for the book Learn C the Hard Way, since the author, Zed A. Shaw, does an excellent job of getting the reader started on creating programs as well as explaining and breaking down the code in his lessons. For the latter, you will have to research the datasheet of your target microprocessor to become familiar with its architecture and understand what low-level devices (i.e., registers, ports, etc.) you have at your disposal. This will prove useful as you develop the program that is intended to be executed on your Atmel microprocessor.

As soon as you have established an IDE that executes commands such as make and program device, you can begin coding. If you configured Programmer’s Notepad properly, it should be just a matter of the IDE compiling the code and subsequently “programming,” or uploading, the program to your Atmel processor.

Normally, I would post a more comprehensible tutorial—one that can easily be followed. However, it did not make sense to me to rewrite old tutorials or, much less, rewrite abbreviated PowerPoint presentations of tutorials. Speaking of which, the attached PowerPoint was originally presented when I was a guest lecturer on this very topic for Dr. Shankar’s Embedded Robotics course at FAU back in November of 2011, and it is based on a more detailed tutorial (also attached) drafted by Colin O’Flynn (author) and Eric Weddington (editor).


AVR-GCC Lecture PowerPoint: AVRGCCLecture.ppt

Detailed Tutorial on Getting, Installing, and Using WinAVR: install_config_winavr.pdf


Learn C the Hard Way:

USB AVR programmer and SPI interface:

AVR for Linux:

Dr. Ravi Shankar’s Courses:

Main page for WinAVR.

Main tutorial for WinAVR, on which this presentation is based. By C. O’Flynn and E. Weddington.


WinAVR Download link.

WinAVR GCC tutorials source. Contains plenty of tutorials for doing various things with your Arduino.

Original presentation citation:

C. Norona, “AVR-GCC Programming: Using C Development Tools to Program Your Arduino Microcontrollers,” presented as a lecture for Dr. Ravi Shankar’s Embedded Robotics course, Room 105, Engineering East, Florida Atlantic University, Boca Raton, FL. Nov. 17th, 2011.

Showcase: Resistor Decoder (ResDec)

Mobile Android Project: Resistor Decoder (ResDec)

Technologies used: OpenCV for Image Processing (color quantization via native C++), Android SDK and NDK, Camera, JUnit for Testing.

Collaborators: Matias Akman, Dr. James Poe, and Dr. Miguel Alonso, Jr. of the Computing Research Labs at Miami Dade College, Kendall campus.

Research citation: Akman, M., Norona, C., Poe, J., and Alonso, Jr., M. “ResDec: A Mobile Resistor Decoder,” presented at the 2013 SACNAS National Conf., San Antonio, TX, 2013.

Image: ResDec screenshot (source: Matias Akman, the student behind the ResDec project).

One of the projects we have at the Computing Research Lab is the Resistor Decoder application, or ResDec as it is nicknamed. The purpose of ResDec is to use image processing on an Android mobile device to analyze and decode the value of a resistor whose image was taken by the device’s camera. A personal friend of the lab happens to be color-blind, which is a significant and personal motivating factor for us in pursuing this project; beyond that, other practicing or aspiring electrical and electronics engineers who have to negotiate this hardship can benefit from such an app. Additionally, we recognize that the color quantization logic the project depends on can be reused in another project within the lab, the Skin Cancer Identification System, to analyze the colors present in an image of a skin lesion. All in all, we believe this project is worth pursuing given its intent and the intellectual merit it will contribute to the science of computing and image processing. The student who primarily works on this project is Matias Akman, with mentoring from Dr. Miguel Alonso, Dr. James Poe, and myself.

There are two functions that the ResDec app currently performs. One is recognition of the resistor, to localize and reduce the overall image workspace. The other is analysis of the region of interest, which involves color quantization and the subsequent color band segmentation. Currently, the former function is fully implemented. This was achieved using classifiers we created through a process called Haar training. In this process, many images are analyzed by a program provided by OpenCV specifically designed to create a “cascade of boosted classifiers.” In the simplest sense, and in the context of image or pattern recognition, classifiers are used to identify or discriminate certain characteristics in order to determine that the information being analyzed represents the very thing the classifier is intended to find, identify, or—as the name suggests—classify. In the context of ResDec, the classifier produced as the output of Haar training is one that distinguishes a pattern of pixels representing a resistor. The resistor recognition logic of this project was primarily based on Naotoshi Seo’s tutorial on OpenCV Haar training, but instead of faces we used images of resistors as the positive images [1]. For more information on classifiers, I recommend reading Ömer Cengiz ÇELEBİ’s chapter on pattern classification in his MATLAB tutorial online [2].

More recently, Matias and I have been working to implement the color quantization logic that will help us segment the colors of the resistor bands. Color quantization is an image processing technique used to reduce the number of colors within an image. Consider the pixel values of a gray-scale image, which range from 0 to 255. To reduce the color space from 256 distinct colors to just a handful of values, we use a look-up table. The look-up table keeps track of the various ranges a pixel value can reside in. If the value falls within a given range, the pixel is reassigned a specified color (another pixel value) that corresponds to that range. To better describe this functionality, see the example piecewise function below:

Image: An example piecewise function illustrating the color quantization process.

The result of applying this quantization is the same image but with reduced colors, allowing for simpler segmentation of the color bands that run across a resistor. An example of this process as applied to skin lesions is described in Ogorzałek et al.’s “Modern Techniques for Computer-Aided Melanoma Diagnosis” and Caleiro et al.’s “Color-spaces and color segmentation for real-time object recognition in robotic applications” [3], [4].
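A minimal Java sketch of the table-driven quantization described above (the four-bucket thresholds and target values here are invented for illustration and are not ResDec's actual ranges):

```java
// Illustrative color quantization via a look-up table: every possible
// gray value (0-255) is mapped once when the table is built, so
// quantizing a pixel is a single array read.
// The four-bucket thresholds below are invented for illustration.
public class GrayQuantizer {
    static int[] buildLut() {
        int[] lut = new int[256];
        for (int v = 0; v < 256; v++) {
            if (v < 64)       lut[v] = 0;    // darkest bucket
            else if (v < 128) lut[v] = 85;
            else if (v < 192) lut[v] = 170;
            else              lut[v] = 255;  // brightest bucket
        }
        return lut;
    }

    // Reassign every pixel to its bucket's representative value.
    static int[] quantize(int[] pixels) {
        int[] lut = buildLut();
        int[] out = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            out[i] = lut[pixels[i]];
        }
        return out;
    }
}
```

Building the table once and indexing into it keeps the per-pixel work to a single lookup, which is why the technique scales to full-size camera images.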


Image source: color quantization article on The Glowing Python.

Originally, we attempted to quantize the colors in Java at the Dalvik Virtual Machine (DVM) level (similar to the Java Virtual Machine (JVM) but optimized for mobile platforms) in Android. However, this proved to be sluggish, since our quantization algorithm is processor-intensive and has to loop through potentially hundreds—if not thousands—of elements in an array (the pixels). As suggested by Dr. Alonso and Dr. Poe, we decided to bring this logic down to the native C/C++ environment, which handles this kind of work much more efficiently and quickly. As of this writing, Matias has implemented the loop that will carry out the color quantization process explained above.

It is our hope that by the end of this semester, or the beginning of the Spring 2014 semester, we will be able to complete the implementation of the color quantization process. This will mark the completion of one of our milestones for a project designed to aid engineers in identifying resistors. In addition, this achievement will provide another project within the lab, the Skin Cancer Identification System, a means to analyze the colors of an image representing a skin lesion. I will do my best to keep this blog updated with news on the Resistor Decoder project as it arrives.


[1] N. Seo, “Tutorial: OpenCV haartraining (Rapid Object Detection With a Cascade of Boosted Classifiers Based on Haar-like Features)” [Online]. Available:

[2] Ö. C. ÇELEBİ, “Neural Networks Tutorial – Chapter 1 Pattern Classification” [Online]. Available:

[3] M. Ogorzałek, L. Nowak, G. Surówka and A. Alekseenko, “Modern Techniques for Computer-Aided Melanoma Diagnosis,” in Melanoma in the Clinic – Diagnosis, Management, and Complications of Malignancy, Prof. Mandi Murph (Ed.), Intech, 2011, ch. 5, sec. 7, pp. 72–73 [Online]. Available:

[4] P. M. R. Caleiro, A. J. R. Neves and A. J. Pinho, “Color-spaces and color segmentation for real-time object recognition in robotic applications,” Revista do DETUA, June 2007.

Why are there so many "Yet Another Programmer's Blog" pages?


It was not until my wife and I finally succeeded in securing a lease on our first apartment, after all the effort of standing out among many other potential renters, that I realized the same diligence would be required to secure my next job and, hopefully, find a good, stable career. That is the main reason I started this blog. But I could not help noticing that so many other programmers have done the same thing—just do a Google search for “Yet Another Programmer” and watch how Google suggests you are looking for blogs. After executing the search, the first page of results is crowded with relevant material. It seems I am not the only one sharing my expertise or, perhaps, attempting to win over a potential employer’s favor.

Hopefully, I can make this blog stand out from those in the sense that the content I will be posting is based on my current (but not confidential) and past work, and I will attempt to implement examples of skills that potential employers are looking for. If someone else has already made such an implementation, then I will do it again with my own spin, my own explanation, and my own programming and design style, while giving credit to the original author. It is my hope that I will be able to impart knowledge to those who encounter what I hope will be numerous posts on my newly created blog, and that it will serve curious programmers and software engineers in the same way so many others have helped me in my past implementations. It is time for me to pay it forward.



