Reimagining Virtual Events for Thousands of Attendees

by Muxer, Virtual Events

Since being placed under lockdown in March, many of us have turned to the freedoms of the virtual world. Imaginations have been captured, and it has given rise to a diverse set of events and activities including avatar concerts, a flourishing trade in virtual goods and even university graduations.

A place which once served as an escape from normality, ironically, now preserves it. And although lockdowns are beginning to ease, there is no return on the horizon for public gatherings of any meaningful size. Instead, attention is turning to technology and the role it can play in hosting the vast interactions of hundreds or thousands of people in a single location.

But virtual worlds are not the finished article – if something which exists in perpetuity can ever claim to be. Despite the perception of an endless land of indefinite freedoms, these worlds are restricted by the technology which underpins them. And when large numbers of people gather in a single area, it puts huge strain on the infrastructure.

This is not a new problem; it's a technical challenge that gaming studios have been grappling with for some time. Where event organisers must consider the impact of 20,000 people congregating to watch a gig, game developers ask what happens when thousands of players swarm together to achieve an objective. The answer is the same in both cases: cross a given threshold – typically 80-100 connections – and the server overloads and the system crashes. In this context, the distinction between virtual events and online gaming is semantic, and it's unsurprising that pioneers in this space include games such as Fortnite and Sansar.

The solution to date has been sharding: running parallel, synchronised worlds that display the same content. The easiest and fastest way to segment worlds like this is by geography; grouping connections in close proximity helps mitigate latency and architectural complexity. Of course, this is not an ideal solution. For highly interactive and immersive experiences, capping the number of participants is frustrating for both developers and users. Nonetheless, most game engines do not supply out-of-the-box networking, meaning most connectivity solutions are currently proprietary and dependent on in-house resources.
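To make the trade-off concrete, here is a minimal sketch of geographic sharding, assuming a simple grid partition of the world and a per-shard cap of around 100 connections. The cell size, cap and shard-naming scheme are illustrative assumptions, not drawn from any particular engine.

```go
package main

import "fmt"

const (
	cellSize = 500.0 // world units per grid cell (illustrative assumption)
	shardCap = 100   // max connections per shard before spilling over
)

type Player struct {
	ID   int
	X, Y float64
}

// cellKey buckets a player into a coarse grid cell by world position,
// so connections in close proximity land on the same shard.
func cellKey(p Player) [2]int {
	return [2]int{int(p.X / cellSize), int(p.Y / cellSize)}
}

// assignShards maps each player to a shard label such as "cell(0,0)#0".
// Players in the same cell share a shard until the cap is reached, after
// which overflow spills into parallel shards of the same cell.
func assignShards(players []Player) map[int]string {
	counts := map[[2]int]int{}
	shards := map[int]string{}
	for _, p := range players {
		c := cellKey(p)
		shards[p.ID] = fmt.Sprintf("cell(%d,%d)#%d", c[0], c[1], counts[c]/shardCap)
		counts[c]++
	}
	return shards
}

func main() {
	players := []Player{{1, 120, 80}, {2, 130, 90}, {3, 2600, 40}}
	for id, shard := range assignShards(players) {
		fmt.Printf("player %d -> %s\n", id, shard)
	}
}
```

Players 1 and 2 land on the same shard because they are near one another, while the distant player 3 is placed elsewhere – exactly the proximity grouping described above.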

Creating a truly global experience requires a rethink of the connectivity model, as current alternatives struggle to manage vast numbers of connections and handle computational spikes. Such a paradigm shift involves two fundamental technical challenges:

  1. Ensuring the requisite processing power is effectively allocated and available on demand
  2. Ensuring the underlying architecture and network topology can physically handle the influx of connections

The first point refers to load balancing. It’s something Hadean has solved through a unique process model, which allows new servers to be spun up on demand. In a distributed simulation, such as a virtual world, this applies to edge data centres as well as the centralised servers.
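As a rough illustration of that kind of on-demand scaling – and only an illustration, not Hadean's process model – the sketch below grows and shrinks a pool of servers as connection counts change. The per-server capacity and the utilisation thresholds are assumed values.

```go
package main

import "fmt"

const (
	maxPerServer = 100 // connections one server can hold (assumption)
	scaleUpAt    = 0.8 // spin up a new server above 80% utilisation
	scaleDownAt  = 0.3 // retire a server below 30% utilisation
)

// rebalance adjusts the server pool by at most one server per tick,
// keeping utilisation inside the target band.
func rebalance(servers, connections int) int {
	if servers < 1 {
		servers = 1
	}
	util := float64(connections) / float64(servers*maxPerServer)
	switch {
	case util > scaleUpAt:
		return servers + 1 // demand spike: spin up a new server
	case util < scaleDownAt && servers > 1:
		return servers - 1 // demand fell: drain and retire a server
	default:
		return servers
	}
}

func main() {
	servers := 1
	for _, conns := range []int{50, 150, 400, 900, 200} {
		servers = rebalance(servers, conns)
		fmt.Printf("%4d connections -> %d server(s)\n", conns, servers)
	}
}
```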

The second point is largely a matter of the netcode implementation, but it also extends to state forwarding: the rules that determine what information is sent back and forth between client and simulation, and whether the underlying infrastructure can keep up. Put simply, the better bandwidth is optimised, the more efficiently and quickly data can be exchanged.
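One common way to optimise that exchange is delta-based state forwarding: each client receives only the entities that changed since the snapshot it last acknowledged, rather than the full world state every tick. The sketch below is a toy model of the idea, assuming a per-entity version counter; it is not a real wire format.

```go
package main

import "fmt"

// EntityState is a toy stand-in for whatever per-entity data the
// simulation tracks; Version is bumped whenever the entity changes.
type EntityState struct {
	X, Y    float64
	Version uint64
}

// Snapshot maps entity IDs to their state at one simulation tick.
type Snapshot map[int]EntityState

// delta returns only the entities the client has not yet seen: new
// entities, plus those whose version moved past the acknowledged one.
func delta(current, acked Snapshot) Snapshot {
	out := Snapshot{}
	for id, st := range current {
		if prev, ok := acked[id]; !ok || prev.Version != st.Version {
			out[id] = st
		}
	}
	return out
}

func main() {
	acked := Snapshot{1: {10, 10, 3}, 2: {5, 7, 1}}
	current := Snapshot{1: {12, 10, 4}, 2: {5, 7, 1}, 3: {0, 0, 1}}
	// Only entities 1 (moved) and 3 (new) are forwarded; entity 2 is
	// unchanged and costs no bandwidth this tick.
	fmt.Println(delta(current, acked))
}
```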

At Hadean, Muxer is used as an intermediary to open up virtual worlds to thousands of users. Sitting in edge data centres, it takes advantage of the network topology, distributing the global simulation state to its many connections and facilitating bidirectional data flow from a single rendezvous point. It removes infrastructure design constraints, realising the possibility of a single-sharded world and transforming the way we communicate, learn and interact.
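In spirit, this is a fan-out pattern: one upstream stream of simulation state is broadcast to many downstream client connections from a single rendezvous point. The toy below shows only that pattern, under assumed channel-based plumbing – it is not Muxer's implementation.

```go
package main

import (
	"fmt"
	"sync"
)

// Mux is a toy rendezvous point: clients subscribe once, and every
// published state update is fanned out to all of them.
type Mux struct {
	mu      sync.Mutex
	clients []chan string
}

// Subscribe registers a new downstream client and returns its stream.
func (m *Mux) Subscribe() <-chan string {
	m.mu.Lock()
	defer m.mu.Unlock()
	ch := make(chan string, 8) // small buffer so Publish never blocks here
	m.clients = append(m.clients, ch)
	return ch
}

// Publish forwards one simulation-state update to every client.
func (m *Mux) Publish(update string) {
	m.mu.Lock()
	defer m.mu.Unlock()
	for _, ch := range m.clients {
		ch <- update
	}
}

func main() {
	m := &Mux{}
	a, b := m.Subscribe(), m.Subscribe()
	m.Publish("tick 1: world state")
	fmt.Println(<-a, "|", <-b)
}
```

You can find out more about Muxer by downloading the datasheet.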