Mitigating Disaster with Massive Scale Simulation
Unfortunately, disasters are inevitable, and there is often little we can do to prevent them. Instead, we must prepare and plan our responses.
Scenarios that fall under the umbrella of catastrophe planning include evacuation exercises, the redistribution of water, building flood defences, and managing emergency services and hospital capacity; pandemic responses have, unsurprisingly, been an area of focus over the last twelve months. The complexity, unpredictability and scale of the variables involved in such catastrophes have driven the popularity of models capable of handling emergent behaviour.
Simulating Emergent Behaviour
Agent-based simulation is well suited to addressing emergent phenomena. Although the approach is not especially new, its popularity as a method for scenario planning has accelerated over the last decade. A 2013 study showed the potential benefits to a small-scale emergency response in Sweden, while a similar study was carried out across a distributed disaster site in the UK in 2015. Nonetheless, much of the research carried out has remained academic and theoretical.
More recently, research into the pandemic and the transmission of disease has relied upon agent-based modelling to imitate the effects of lockdowns, regional measures and vaccination programmes. The simulations provide insight derived from the complexity of entity interactions, factoring in the unpredictability and capriciousness of human behaviour. Entities can be humans, biological systems or even man-made processes, and can be assigned as many or as few attributes as necessary.
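To make this concrete, the sketch below shows one minimal way such entities and interactions might be expressed. All names and parameters here (the `Agent` class, `compliance` attribute, contact and transmission rates) are hypothetical illustrations, not taken from any particular framework or study mentioned above.

```python
import random

# Illustrative agent-based transmission sketch. Each agent carries a
# "compliance" attribute standing in for capricious human behaviour:
# less compliant agents make more daily contacts.
class Agent:
    def __init__(self, agent_id, compliance):
        self.id = agent_id
        self.state = "susceptible"    # susceptible / infected / recovered
        self.compliance = compliance  # 0.0 (ignores measures) .. 1.0 (fully compliant)
        self.days_infected = 0

def step(population, contact_rate=4, transmission_prob=0.1, recovery_days=7):
    """Advance the simulation one day: random contacts, then recovery."""
    infected = [a for a in population if a.state == "infected"]
    for agent in infected:
        # Non-compliant agents roughly double their contact count
        k = min(len(population), int(contact_rate * (2 - agent.compliance)))
        for other in random.sample(population, k):
            if other.state == "susceptible" and random.random() < transmission_prob:
                other.state = "infected"
        agent.days_infected += 1
        if agent.days_infected >= recovery_days:
            agent.state = "recovered"

random.seed(42)
population = [Agent(i, random.random()) for i in range(1000)]
population[0].state = "infected"  # seed a single case
for day in range(60):
    step(population)
counts = {s: sum(a.state == s for a in population)
          for s in ("susceptible", "infected", "recovered")}
```

Even a toy model like this exhibits emergent behaviour: the epidemic curve that falls out of `counts` is not written anywhere in the code, it arises from the accumulated local interactions.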
The resultant emergent behaviour is by its nature unpredictable, and at any given moment could put undue strain on the underlying infrastructure designed to manage the simulation. Developers face a choice: allow expensive resources to sit idle in case they are needed, or run the risk of under-provisioning and letting the simulation crash. Across a distributed network, where data is passed back and forth between different sites, the simulation's complexity is exacerbated, necessitating large DevOps teams to create and maintain it.
Simplifying how we build and run massive scale agent-based simulations will help organisations derive useful insights that might otherwise be out of reach. Simulations can reveal non-obvious outcomes and enable emergency services to optimise planning and resource capacity. By recreating realistic physical landscapes and the complex human dynamics of large populations with higher fidelity and accuracy, it is possible to make decisions based on data, ahead of time. If simulations are rerun repeatedly from a given checkpoint, we can analyse a vast array of permutations.
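The checkpoint-and-rerun idea can be sketched in a few lines. This is a hedged illustration only: `simulate_step`, the state dictionary and the severity parameter are hypothetical stand-ins for a real model's state and dynamics, and the key point is simply that a frozen checkpoint lets many parameter permutations be explored from the same starting conditions.

```python
import copy
import random

def simulate_step(state, severity, rng):
    """Toy dynamics: demand on emergency services grows with scenario severity."""
    state["demand"] += severity * rng.random()
    state["unmet"] = max(0.0, state["demand"] - state["capacity"])

def run_from_checkpoint(checkpoint, severity, steps, seed):
    state = copy.deepcopy(checkpoint)  # never mutate the shared checkpoint
    rng = random.Random(seed)          # reproducible per permutation
    for _ in range(steps):
        simulate_step(state, severity, rng)
    return state

# A single frozen checkpoint, swept across severities and random seeds
checkpoint = {"demand": 10.0, "capacity": 50.0, "unmet": 0.0}
results = {(sev, seed): run_from_checkpoint(checkpoint, sev, steps=30, seed=seed)
           for sev in (0.5, 1.0, 2.0)
           for seed in range(3)}
worst = max(results.values(), key=lambda s: s["unmet"])
```

Because each run deep-copies the checkpoint and seeds its own random stream, the sweep is both repeatable and trivially parallelisable, which is exactly what makes rerunning a vast array of permutations tractable.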
Reducing the barrier to entry and making this possible depends primarily on handling the complex infrastructure requirements. Scenarios across distributed locations require a global response, and a networking model to support it. If we can find a way to ensure stability with relatively little overhead despite unpredictable and rapidly changing scenarios, agent-based modelling will be able to move out of the academic world and into the practical. Modellers will be able to tweak parameters and edit algorithms in an easy, iterative manner, adding depth to logistical mapping and operations, removing barriers such as physical proximity and enabling international cooperation. Integrating these models with VR and AR technology could open up further opportunities, enabling response units to take part in training exercises, at the point of need, that would otherwise be impossible to replicate.
Ultimately, as these technical barriers are lifted, access to complex, massive scale simulations will become democratised. In the field of catastrophe planning there is huge scope to improve preparation and training, and to ensure that our responses are both timely and effective.