
Emerging Augmented Reality Technologies for Command in Fire & Rescue


Dr Ian Greatbatch FRGS, MEPS, FHEA (a firefighter for Surrey Fire & Rescue Service) and AC Iain Houseman (Area Commander, Surrey Fire and Rescue Service) look at the emerging augmented reality technologies for command in Fire & Rescue


The UK fire sector has a likely current market value of roughly £3.2bn and is itself only a fraction of the total emergency response sector. Both sectors are intrinsically interconnected with many other core economic sectors, where technology, intelligence and best practice can be, or already are, shared. Furthermore, emergency management practices often influence other sectors, sometimes through legislation and sometimes through changes to perception and practice following major incidents.

The kudos and respect that the emergency sector enjoys in comparison to less glamorous sectors is also potentially marketable, and there are opportunities for organisations in the sector to share the way they work with the wider economy. For example, general business organisations would benefit from learning how fire officers make decisions under pressure, and how fire services structure command and control in dynamic situations – the physical hazards are lesser, but the pressures are just as real. Clearly, there are advantages to all sectors concerned in sharing the best practices and values of the emergency sector, and this itself brings commercial opportunities.

Leaving that potential market aside for now, there are potential uses for emerging augmented reality technologies in fire and rescue command. We are witnessing an acceleration in the development of technologies for monitoring, communication and analysis, which will bring both efficiencies and advances in the way we operate, set up and maintain safety standards, create shared situational awareness and manage the way we work – both on and off the incident ground.

Starting from pre-recruitment, through recruitment activities, initial training, specialist training, maintaining competence and general development, there are opportunities for “big data” solutions to work with our personnel. The opportunities extend further into the environments they work in (and the equipment they work with), ensuring we present the most effective training materials for the job in hand and, more importantly, for the individual we are training.

Understanding the frequency and locations of equipment use in relation to emergencies, and feeding that into real-time situational awareness at every level of an incident, can ensure that effective solutions are coordinated safely. These emerging data technologies let us use “big data” to create challenging, relevant and effective training across ranks and roles, and also across and between organisations. Effectively, we can create training environments that are realistic, dynamic and detailed enough to achieve many objectives.

Emerging augmented and virtual reality technologies, linked to sensor arrays such as intelligent fabrics, distributed sensors, nano-technology and the Internet of Things (IoT), can be combined to gather, communicate, analyse and present data from an effectively limitless list of sources. Recent developments in wearable and ubiquitous computing allow sensors and analysis capability to be built into PPE. In an incident context, this means that intelligent clothing measuring biometric functions, location and movement can transmit its readings instantly to operational, tactical and strategic commanders. The movement data itself can be interrogated for “emotive” content, so we can tell whether an individual is upset, happy, under the influence of drugs or alcohol, too tired to continue, or likely to cause themselves a long-term injury through their activities.
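As a minimal sketch of what such a telemetry feed might look like in software – the field names, thresholds and units here are purely illustrative assumptions, not any service's actual standard:

```python
from dataclasses import dataclass

@dataclass
class TelemetryPacket:
    """One reading from a firefighter's intelligent clothing (hypothetical schema)."""
    firefighter_id: str
    timestamp_s: float        # seconds since incident start
    heart_rate_bpm: int
    core_temp_c: float
    position_m: tuple         # (x, y) metres within the building model

def needs_welfare_check(p: TelemetryPacket) -> bool:
    # Illustrative thresholds only; a real service would set these from doctrine.
    return p.heart_rate_bpm > 180 or p.core_temp_c > 39.0

pkt = TelemetryPacket("FF-07", 312.0, 186, 38.2, (12.5, 3.0))
print(needs_welfare_check(pkt))  # True: heart rate over the example threshold
```

In practice each packet would stream to command over the incident-ground network; the point of the sketch is simply that once the data is structured, welfare flags like this become automatic rather than relying on a crew member self-reporting.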

The location of an individual has a clear and obvious tactical and safety application – being able to see instantly where personnel are within a building or a wider area – and of course this can be analysed dynamically to calculate the level of risk associated with that individual. Ultimately, the technology's bottom line is firefighter safety and welfare, achieved through collective shared situational awareness of activity, resources and progress towards the operational objectives.
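One way such a dynamic risk calculation could work is to fold physiology and position into a single figure for the commander's display. The weights, thresholds and the 0–1 scale below are invented for illustration only:

```python
def risk_score(heart_rate_bpm: float, core_temp_c: float,
               distance_to_exit_m: float) -> float:
    """Combine physiology and position into one 0-1 risk figure.
    All weights and thresholds here are illustrative assumptions."""
    hr   = min(1.0, max(0.0, (heart_rate_bpm - 100) / 100))  # 1.0 at 200 bpm
    temp = min(1.0, max(0.0, (core_temp_c - 37.0) / 3.0))    # 1.0 at 40 degC
    dist = min(1.0, distance_to_exit_m / 50.0)               # saturates at 50 m
    return round(0.4 * hr + 0.4 * temp + 0.2 * dist, 2)

print(risk_score(150, 38.5, 25.0))  # 0.5 - elevated but not critical
```

A real implementation would be tuned against physiological research and validated operationally; the sketch only shows that once location and biometrics are in one data stream, per-firefighter risk becomes a computable quantity rather than a judgement made blind.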

Building on that safety concept, we can also use this emerging technology to understand the incident itself in a more complete way. If we accept that all the personnel within the cold, warm and hot zones (risk areas) are wearing an array of sensors, computing and communications technology, it is a short step to seeing this information collated and presented to incident commanders and staff at the different levels of command on the incident ground. Currently, getting tactical information from inside an incident requires intelligence to be relayed by firefighters working in a building, through an entry control officer, through a runner and eventually to the command centre. This interchange of subjective information through a person-to-person process changes the information received at the end of it. If the firefighters are themselves sensor arrays, we can automatically collect information on temperature, the composition of gases, and the images and locations of objects within the building. We can landmark and re-transmit information to crews inside the building to help them better understand the objective, and also convey vital safety information such as the shortest escape routes.
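The collation step above could be sketched as a single shared data structure: the latest reading from each firefighter, plus an append-only list of landmarks that is re-broadcast to everyone. All names and fields below are assumed for illustration:

```python
# A minimal shared incident picture: the latest reading per firefighter,
# plus an append-only list of landmarks (casualties, hazards, escape routes).
incident_picture = {"readings": {}, "landmarks": []}

def ingest(firefighter_id: str, reading: dict) -> None:
    """Latest reading wins, so commanders always see current data."""
    incident_picture["readings"][firefighter_id] = reading

def add_landmark(kind: str, position: tuple) -> None:
    """Landmarks are never overwritten - they accumulate for crews and for FI."""
    incident_picture["landmarks"].append({"kind": kind, "position": position})

ingest("FF-07", {"temp_c": 220.0, "co_ppm": 400, "position": (12.5, 3.0)})
ingest("FF-07", {"temp_c": 235.0, "co_ppm": 450, "position": (13.0, 3.5)})  # update
add_landmark("casualty", (14.0, 2.0))
```

The design choice worth noting is the split between ephemeral readings (overwritten) and landmarks (accumulated): the first drives live tactics, the second survives for escape-route guidance and later Fire Investigation.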

The commander will have access to a multi-dimensional representation of the incident that goes beyond the physical structure of the building (let's assume we're talking about a building here). This physical structure (considered to be a static incident) could be created by a combination of advanced Geographical Information Systems, building plans and surveys – essentially an updated version of the MDT. The building shape could then be complemented by internal surveys from laser scanners mounted on helmets or BA sets (Fig 1).


Figure 1: Internal building structure, including furniture captured by laser scanner. (available at https://commons.wikimedia.org/wiki/File:Cyark_Tudor_Place_3.jpg)

This is useful enough, and would represent a clear advance in firefighter safety, but we can go much further. If the firefighters themselves are now represented in this model, we can see a much clearer tactical picture, and direct them more effectively. We can also use the wearable sensor idea we discussed above to start to add “dimensions” of information such as temperature, gas composition and “landmarks” such as casualty locations, or objects that might later be important to Fire Investigation. This model is also dynamic and will change as the incident evolves (for one thing the structure of the building itself might change as walls collapse or the fire takes hold), and this allows us to scroll back and forward through time, in order to understand the incident, and our responses to it. This again leads us to a training possibility.
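The scroll-back-and-forward idea can be sketched as a timeline of timestamped model snapshots that a reviewer scrubs through. The class and the example states below are assumptions for illustration, not a real system:

```python
import bisect

class IncidentTimeline:
    """Append timestamped model snapshots; scrub back to any moment."""

    def __init__(self):
        self._times, self._states = [], []

    def record(self, t_s: float, state) -> None:
        # Assumes snapshots arrive in time order, as a live feed would.
        self._times.append(t_s)
        self._states.append(state)

    def state_at(self, t_s: float):
        """Return the model as it stood at time t_s (latest snapshot <= t_s)."""
        i = bisect.bisect_right(self._times, t_s) - 1
        return self._states[i] if i >= 0 else None

tl = IncidentTimeline()
tl.record(0,   "crews arriving")
tl.record(300, "first team committed")
tl.record(900, "wall collapse, model updated")
print(tl.state_at(600))  # first team committed
```

Because lookup is a binary search rather than a replay from the start, a reviewer can jump to any moment of a long incident instantly – the property that makes "scroll back and forward through time" practical.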

It is well documented that until recently fire numbers were decreasing, and as such big fires were rarer. A full data-gathering capability at live incidents allows the learning to be replayed, and situations and decisions to be re-experienced, developing experienced commanders and staff in the absence of bigger and dynamically complex incidents.

It is arguable whether fire incidents are becoming more complex in themselves, but the fact that they are rarer means that a lot of informal knowledge and practice is lost or never learnt. In other words, experience is one of the most valuable skills a firefighter can have, and with incidents declining it becomes harder to attain. This has a serious side effect: staff may be confronted with a large dynamic incident that is their first experience of such a fast-moving event, learning on the job in a high-pressure and increasingly litigious environment.

Data collection at an incident allows us to address this in two ways. Primarily, we give commanders and firefighters more information – information that previous generations simply would not have had access to. Using the techniques highlighted above, we can give all personnel involved a clearer picture of the incident as a whole. A simple example: suppose two groups of firefighters are approaching a common room between them (one on a left-hand search, one on a right-hand search). The first team to enter that room might find it well alight and attack the fire from their position. The second team, using this data model displayed on a head-up display in their visors, would now be able to see that room, and the firefight, in context to their own location. They could then communicate with the other team and act more effectively – this might mean not opening a second door and potentially creating a backdraught, actively ventilating, or fighting the fire from another direction.
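The two-team scenario reduces to a small deconfliction check against the shared model. The room names, team labels and function below are invented for illustration:

```python
# Shared room-status map, kept current from crew telemetry (hypothetical).
room_status = {}

def report_fire_attack(room: str, team: str) -> None:
    """A team entering a room and attacking the fire claims that room."""
    room_status[room] = team

def safe_to_open(room: str, team: str) -> bool:
    """Opening a second door into a room already under attack risks a
    backdraught - only the team already working there should proceed."""
    holder = room_status.get(room)
    return holder is None or holder == team

report_fire_attack("common-room", "Team A")
print(safe_to_open("common-room", "Team B"))  # False: coordinate first
print(safe_to_open("common-room", "Team A"))  # True: already working there
```

The real value is not the check itself but where it runs: on the second team's head-up display, before the door is touched, instead of in a radio exchange that may come too late.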

Looking beyond the actual incident, this is clearly now potentially a “big data” project. By collecting massive amounts of data from within incidents, we have the potential to learn enormous amounts about how fires develop and how people move and react within them. The potential (especially as computing power continues to increase) is to build a library of incident data that drives our modelling, fire safety, fire prediction and firefighting tactics in a truly evidence-based way.

It is also a fact of modern emergency response life that post-incident investigations are becoming more onerous, more adversarial in some cases, and certainly have the potential to damage a service's or an individual's reputation. Post-incident reviews could theoretically allow the incident to be played back, in real time, with information overlays to interrogate when and why decisions were made. They could also be used later as case studies or training environments to develop personnel across all ranks and roles. Obviously, this might not be to everyone's taste, and some may consider it an intrusion into the already difficult role that incident commanders play, but we would hope that, used sensitively, this technology would allow a more supportive review process – the stress, confusion and dynamism of events is often not truly understood until a later stage.

Finally, no modern incident of any size is a single-agency operation. Interaction and interoperation with other agencies – whether from the voluntary sector, other emergency services, local or national government or industry – is not only required but is often critical to success. Creating richer tactical pictures within a digital framework such as this allows easy transfer, sharing and integration into other agencies' systems; it addresses the problem of incorrect or unclear situational awareness and allows commanders and staff to make more informed decisions. The demand for information, and the resources required to meet it both at the incident and afterwards, can be addressed through some form of the discussion above. The link to achieving the Joint Emergency Services Interoperability Principles (JESIP), expanded to the other attending agencies, is a clear opportunity within reach.

Clearly though, work needs to be done at an early stage to ensure interoperability – we need to learn the lessons of past, failed attempts at centralised systems and build a suite of systems that genuinely talk to each other. This technology already exists in other sectors such as defence and specialist commercial fields, and in some cases in prototype form in the emergency services. There is a significant step from working prototype to something that we can take into a fire – is it robust, heat proof, shock proof, intrinsically safe; does it pass the ultimate test of being “firefighter proof”? However, usability testing is relatively inexpensive once the technology exists, and once the concept is demonstrated there is a significant potential market awaiting the final product(s).
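Genuine interoperability usually comes down to agreeing a neutral, versioned interchange format rather than sharing one central system. As a sketch, assuming a made-up envelope schema, the shared picture could be serialised to JSON for another agency's system to ingest:

```python
import json

def export_picture(picture: dict, schema_version: str = "0.1") -> str:
    """Wrap the shared incident picture in a neutral, versioned JSON
    envelope (hypothetical schema) for transfer to another agency."""
    return json.dumps({"schema": schema_version, "picture": picture},
                      sort_keys=True)

shared = {
    "incident_id": "INC-2024-001",
    "landmarks": [{"kind": "casualty", "position": [14.0, 2.0]}],
}
payload = export_picture(shared)

# The receiving agency parses the same envelope back into structured data.
received = json.loads(payload)
print(received["picture"]["incident_id"])  # INC-2024-001
```

Versioning the envelope from day one is the lesson of past centralised failures: each agency keeps its own systems, and only the interchange format is agreed in common.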



About the Authors


Dr Ian Greatbatch FRGS, MEPS, FHEA is a firefighter for Surrey Fire & Rescue Service, a freelance researcher and formerly an Associate Professor at Kingston University, London. He specialises in Search and Rescue (SAR), Fire and Rescue and applications of Geographical Information to those disciplines.


AC Iain Houseman is currently the Head of Regulatory Fire Safety, Protection and Prevention for Surrey Fire and Rescue Service. He has held roles in the Local Authority Trading Company as a contract and business development manager, and as Head of Training, Cross-Service Support and Operations Resources Manager, creating new systems and processes to support change in the modern fire service.
