Expanding the Scope of Synthetic Environments

by Synthetic Environments

Synthetic environments are highly realistic simulated worlds built on an array of data, including geospatial information, 3D environment models and real-time weather feeds. The simulations update as the underlying data changes, and can be modified and interacted with through augmented and virtual reality systems.

Synthetic environments go a step beyond a typical sandbox world: they are used to run analyses and build predictive models that generate actionable insights, informing real-world decision making. However, despite this ambition, synthetic environments are currently constrained by a number of factors, including how data and connections are stored and handled. As a result, they are often limited in scope, depth and realism, and can only handle a finite number of users and entities.

Architectural Constraints of Synthetic Environments

Ultimately, these constraints stem from the technical design of the simulation and its inability to harness the requisite computing resources. Scaling a simulation with more information, users and interactions has a stark effect: the simulation quickly grinds to a halt and, before long, crashes completely.

The underlying problem is balancing the number of connections against the available computing power. These two issues of “connectivity” and “compute” have each been solved in isolation, but never together, and never at scale. Instead, designers and architects are continually forced into choices and sacrifices that keep latency at an acceptable level, which makes it impossible to run a realistic simulation beyond a localised level.

Connectivity is drastically increased through asynchronous architectures, which allow multiple events to be in flight simultaneously and provide an “unblocking” effect on the simulation. One event no longer has to wait for another to complete before it can begin; instead, operations are interleaved concurrently, even on a single thread. Multiple connections are handled simultaneously, and data ingestion becomes considerably faster.
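
To make the idea concrete, here is a minimal sketch, using Python's asyncio and not modelled on any particular engine, of how a single thread can serve many client connections at once: whenever one connection is waiting on I/O, the event loop switches to another, so no event blocks the rest. The port, the line-based protocol and the "ack" reply are assumptions made purely for illustration.

import asyncio

async def handle_client(reader, writer):
    """Serve one simulated client; awaiting I/O yields the thread to other clients."""
    while data := await reader.readline():   # suspends here without blocking the loop
        writer.write(b"ack:" + data)          # send a simple acknowledgement back
        await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main():
    # One thread, one event loop, arbitrarily many concurrent connections.
    server = await asyncio.start_server(handle_client, "127.0.0.1", 8888)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())

Connecting several clients at once (for example with netcat) shows each one being serviced without a thread per connection, which is the "unblocking" effect described above.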

However, even though this mitigates the drain on CPU resources caused by context switching and by managing access to shared data structures, the influx of connections and data actually creates a larger computational issue. Regions of a simulation become overwhelmed by a dramatic increase in entities, interactions or complex structural changes. Pre-provisioning for this kind of event results in astronomical costs, but without it we must accept architectural limits on the number of client connections, as well as on the depth and realism of the world.

Actually handling (theoretically) limitless connections and running vast simulations is a question of computing power and how it is distributed. Spatial load balancing maps virtual space to “CPU space”, dynamically allocating more cores to areas of high compute density and spinning up new machines as required. Computationally intensive areas are broken down into smaller cells, each with its own set of resources. Conversely, barren regions with little activity are given less power, keeping the simulation optimised and costs low.
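
As a simplified, hypothetical sketch of this idea, rather than any production scheduler, the Python fragment below recursively splits a square region of virtual space into four smaller cells whenever the number of entities inside it exceeds a budget. Dense areas end up covered by many small cells, each of which could be pinned to its own core or machine, while quiet areas remain as a few large, cheap cells. ENTITY_BUDGET, MIN_CELL_SIZE and the quadtree-style split rule are all assumptions made for illustration.

from dataclasses import dataclass, field

ENTITY_BUDGET = 100   # assumed load a single core should simulate in one cell
MIN_CELL_SIZE = 1.0   # assumed floor on subdivision, to stop runaway splitting

@dataclass
class Cell:
    x: float                                        # lower-left corner of a square region
    y: float
    size: float                                     # side length in virtual-world units
    entities: list = field(default_factory=list)    # entity positions inside this cell
    children: list = field(default_factory=list)    # four sub-cells after a split

    def insert(self, px, py):
        if self.children:                           # already split: push the entity down
            self._child_for(px, py).insert(px, py)
            return
        self.entities.append((px, py))
        if len(self.entities) > ENTITY_BUDGET and self.size > MIN_CELL_SIZE:
            self._split()                           # hotspot detected: subdivide

    def _split(self):
        half = self.size / 2
        self.children = [Cell(self.x + dx, self.y + dy, half)
                         for dx in (0, half) for dy in (0, half)]
        for px, py in self.entities:                # re-home existing entities
            self._child_for(px, py).insert(px, py)
        self.entities.clear()

    def _child_for(self, px, py):
        half = self.size / 2
        return self.children[(0 if px < self.x + half else 2)
                             + (0 if py < self.y + half else 1)]

    def leaves(self):
        """Yield every undivided cell: one independently schedulable unit of work."""
        if not self.children:
            yield self
        else:
            for child in self.children:
                yield from child.leaves()

Each leaf returned by leaves() is then an independently schedulable unit of work with its own set of resources.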

Dynamically distributing resources in this way allows the simulation to grow across the cloud and lets the world exist as a single shard, without splitting it into multiple isolated instances. Vast numbers of connections can congregate in a single area, while any change to the landscape or the world persists and is visible to everyone.
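
Continuing the same hypothetical sketch, a scheduler could pack those leaf cells onto the smallest pool of workers that fits the current load, only “spinning up” a new worker when the existing ones run out of headroom; every worker simulates a piece of the same single world rather than a separate instance. WORKER_CAPACITY and the greedy packing rule are illustrative assumptions.

WORKER_CAPACITY = 400  # assumed number of entities a single worker can simulate

def assign_cells(root):
    """Greedily pack leaf cells onto the smallest worker pool that fits the load."""
    cells = sorted(root.leaves(), key=lambda c: len(c.entities), reverse=True)
    assignments = {}   # worker id -> list of cells it simulates
    loads = {}         # worker id -> total entities assigned so far
    for cell in cells:
        # Try the least-loaded existing worker; if even that one lacks headroom,
        # "spin up" a new worker for this cell.
        worker = min(loads, key=loads.get, default=None)
        if worker is None or loads[worker] + len(cell.entities) > WORKER_CAPACITY:
            worker = len(loads)
        assignments.setdefault(worker, []).append(cell)
        loads[worker] = loads.get(worker, 0) + len(cell.entities)
    return assignments

Re-running assign_cells as entities move lets crowded regions pull in more machines while quiet regions fall back to a handful, which is what keeps a single-shard world affordable.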

Redesigning synthetic environments to meet these increased demands in connectivity and computing power will enable more users, data and interactions, as well as the ability to send information back and forth in near real-time. Moreover, combining the simulation with data analytics enables accurate modelling of real-world phenomena at scale. Scenarios can be played out ahead of time, giving organisations advance insight into planning strategies, ranging from the design of smart cities and defence planning to epidemic and disaster management.