Large Language Models as the universal user interface

Summary

AI is revolutionising how we interact with complex software – as demonstrated in Hadean’s Common Operating Picture at I/ITSEC 2023 and beyond.


The next-generation software user interface will democratise access to specialist software across all verticals and skill levels – lifting the productivity of novices and supercharging what experts are capable of. Across the board, more of us will be able to use complex software with less training or experience. Whether it’s video editing, music creation, 3D modelling, or domain-specific training systems, how we interact with software will be revolutionised.

In many cases, this interface revolution will be closely linked with spatial computing capabilities, leveraging the contextual understanding of space to provide more intuitive workflows which are easy to learn and fast to execute. Excitingly, it will also make interacting with technology a much more enjoyable, fluid, and human experience.

The user interface of the future will understand the context it operates in – adapting itself to the plethora of existing data sources, as well as new data generated while the software is used. It will combine that understanding with natural language requests, moving us beyond the current state of affairs, in which knowing what we want to do and being able to explain it in plain English to another human is still not enough to get the computer to act.

This change will be one of the surprising ways in which AI will revolutionise our world.

Hadean is creating capabilities which signpost the way to this new world. Let’s take a specific example which we recently demonstrated.

Showcasing the revolutionary power of Large Language Models and AI – Hadean at DSEI and I/ITSEC

DSEI

At DSEI 2023, alongside our main product demonstration, we showcased new capabilities using artificial intelligence to modify a running road traffic simulation. We were interested in controlling a simulation scenario with statements such as “make it rain in this world”, expecting it then – of course – to start raining.

To do so, we integrated an open-source traffic simulator with an LLM control capability. This made it possible to issue English-language statements to the system, such as “there has been an accident on London Bridge”. The AI understood this, closed London Bridge in the simulated world, and provided a reasonable, context-sensitive explanation for doing so: the safety of other road traffic.
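
As a rough illustration of the pattern, here is a minimal Python sketch of a natural-language control loop over a running traffic simulation. The `TrafficSim` wrapper, the action names, and the canned model reply are placeholders for the purposes of the example – the article does not name the simulator, the model, or the integration details used in the demo.

```python
import json

# Hypothetical thin wrapper over whichever traffic simulator is being driven;
# in a real integration these methods would call the simulator's own API.
class TrafficSim:
    def close_road(self, road_name: str, reason: str) -> None:
        print(f"Closing {road_name}: {reason}")

    def set_weather(self, condition: str) -> None:
        print(f"Weather set to {condition}")

    def set_vehicle_count(self, count: int) -> None:
        print(f"Vehicle count set to {count}")


SYSTEM_PROMPT = """You control a road traffic simulation. Given an operator
statement, reply with a JSON object:
{"action": "close_road" | "set_weather" | "set_vehicle_count",
 "args": {...}, "explanation": "..."}"""


def call_llm(system_prompt: str, user_statement: str) -> str:
    # Stand-in for a real chat-model call (hosted or local); a canned reply is
    # returned here so the sketch runs end to end without external services.
    return json.dumps({
        "action": "close_road",
        "args": {"road_name": "London Bridge",
                 "reason": "accident reported; protecting other road traffic"},
        "explanation": "An accident was reported, so the bridge is closed for safety.",
    })


def apply_statement(sim: TrafficSim, statement: str) -> None:
    # Parse the model's structured reply and dispatch it onto the simulator.
    reply = json.loads(call_llm(SYSTEM_PROMPT, statement))
    action = getattr(sim, reply["action"])
    action(**reply["args"])
    print("Model reasoning:", reply["explanation"])


if __name__ == "__main__":
    apply_statement(TrafficSim(), "There has been an accident on London Bridge")
```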

Interestingly, in demonstrating this, we found the model showing something akin to out-of-the-box thinking. For example, we showed that it could respond to statements like “humanity has been wiped out” by removing all cars from the simulation. Then, taking that one (fun) step further, we gave it the statement “the human race has been replaced by a newly-evolved species of intelligent lizard-people”. It responded by adding a small number of slow-moving cars, its reasoning being that “lizard-people are new to driving, and so will need to drive carefully”. 

Demonstrating this live at DSEI was a testament to our belief in this new technology’s capability to revolutionise the way we interface with simulations, and it clearly made an impression on the assembled crowd during our demonstration sessions.

This is a great indicator of what is to come. We’ve all heard the phrase “listen to what I mean, not what I say”, perhaps when being instructed by an adult as children. Computers historically haven’t got the memo on this concept. Software that responds only to what we (rightly or wrongly) input, rather than what we are actually trying to achieve, creates a lot of frustration when we aren’t experts in a particular application or coding language. We are now on the cusp of solving this problem.

We imagine a world where AI will become a bigger part of user interfaces, helping to achieve the things that we mean when we express ourselves, rather than just the things that we say.

The I/ITSEC demo

AI-powered user interfaces will cause fundamental shifts across all classes of software. But today we are interested in defence training capabilities, and specifically those we demonstrated at I/ITSEC in Florida at the end of November 2023.

At I/ITSEC, we demonstrated three ways in which these new interfaces will enhance synthetic training, across the live, virtual, and constructive environments. Specifically, how AI can:

  • Help the training exercise controller (EXCON) work faster and more adaptively.
  • Enhance the command skills of the trainee by monitoring their behaviour and advising on it.
  • Be included in tactical-level training to enhance the realism and functionality of the experience.

EXCON

We demonstrated how EXCON can add non-scripted features and events to a running simulation simply by asking for them verbally. For example, during the scenario, an evacuation convoy was dispatched into the environment. The command used simply specified that a convoy should be dispatched, along with its start location and destination. These values were not pre-scripted, but were interpreted by the AI in the moment, immediately triggering the convoy’s appearance and movement in the environment.
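
A minimal sketch of how such a spoken order might be turned into a structured simulation call is shown below. The tool schema, parameter names, and stubbed extraction step are illustrative assumptions rather than Hadean’s actual EXCON interface.

```python
# Illustrative tool schema for one EXCON capability; the names and fields are
# assumptions made for this sketch, not a real product interface.
DISPATCH_CONVOY_TOOL = {
    "name": "dispatch_convoy",
    "description": "Spawn an evacuation convoy and route it to a destination.",
    "parameters": {
        "type": "object",
        "properties": {
            "start": {"type": "string", "description": "Named place or grid reference"},
            "destination": {"type": "string"},
            "vehicles": {"type": "integer", "default": 4},
        },
        "required": ["start", "destination"],
    },
}


def extract_tool_call(utterance: str) -> dict:
    # Stand-in for a function-calling request to a language model, which would
    # receive the transcribed EXCON utterance plus DISPATCH_CONVOY_TOOL and
    # return structured arguments; a canned result keeps the sketch runnable.
    return {"name": "dispatch_convoy",
            "arguments": {"start": "the airfield", "destination": "the port", "vehicles": 6}}


def dispatch_convoy(start: str, destination: str, vehicles: int = 4) -> None:
    # In the demo this step would create the convoy entities in the running simulation.
    print(f"Dispatching {vehicles}-vehicle convoy from {start} to {destination}")


call = extract_tool_call("Send an evacuation convoy of six vehicles from the airfield to the port.")
dispatch_convoy(**call["arguments"])
```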

The bigger picture here is that any adjustments that might be required during an exercise – for example, to add complexity, variation, or difficulty – can be triggered in this way.

Similarly, this points us towards spontaneous plain English becoming the exercise scripting language, rather than something coded in specialist software. It isn’t hard to imagine the efficiency gains this could bring to the process of exercise creation. Imagine simply stating the terrain, population posture, and a mission briefing to kickstart a fully realised simulation exercise.

Command Staff Training

In addition to helping determine the parameters of a simulation, AI was also used to provide an interface to the system that is well aligned with the real world: making verbal requests over the radio. Trainees can interact fully with the units created by EXCON using radio protocol. For example, in the scenario we demonstrated at I/ITSEC, the trainee had to issue verbal orders to the convoy to avoid a road traffic accident that was blocking its path.

Along the way, the system also analysed the trainee’s adherence to radio protocol, warning them when they breached it and refusing to action the order. Each assessment was also logged to our Snowflake database, so that it could be used to assess the trainee’s performance over time and provide guidance on how to improve.
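
A simplified sketch of that assess-and-refuse loop is below. The specific protocol checks and the log record format are assumptions made for illustration; in the demo the assessment was model-assisted and the records were written to a Snowflake database rather than printed.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Minimal voice-procedure checks; a real radio-protocol assessment would be
# richer and, per the article, assisted by the language model itself.
@dataclass
class ProtocolResult:
    trainee: str
    transmission: str
    compliant: bool
    issues: list
    timestamp: str


def check_protocol(trainee: str, transmission: str) -> ProtocolResult:
    issues = []
    text = transmission.lower()
    if "over" not in text and "out" not in text:
        issues.append("Missing pro-word to end the transmission")
    if "this is" not in text:
        issues.append("Callsigns not stated ('<to>, this is <from>')")
    return ProtocolResult(trainee, transmission, not issues, issues,
                          datetime.now(timezone.utc).isoformat())


def handle_order(result: ProtocolResult) -> None:
    # Log every assessment; in the demo these records went to Snowflake for
    # longitudinal performance tracking and coaching.
    print(json.dumps(asdict(result)))
    if not result.compliant:
        print("Order refused:", "; ".join(result.issues))
        return
    print("Order actioned.")


handle_order(check_protocol("Trainee 1", "Hello Zero, this is Alpha One, halt the convoy, over."))
```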

Indirect Fires

As a bonus, the AI was able to understand specific requests for indirect fire support. With its understanding of the spatial domain, it could respond to requests for fire support at particular locations identified verbally by the trainee. In the scenario, this enabled the destruction of enemy armour through a verbal request stating the target grid reference.
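
The sketch below shows, in simplified form, how a spoken call for fire might be reduced to a target location. The grid format, the regex-based parsing, and the fire-mission call are illustrative assumptions; in the demo the language model performed this interpretation.

```python
import re

# Accept a 6- or 8-figure numeric grid reference spoken as "grid 123456".
GRID_RE = re.compile(r"grid\s+([0-9]{8}|[0-9]{6})", re.IGNORECASE)


def parse_call_for_fire(transmission: str):
    match = GRID_RE.search(transmission)
    if not match:
        return None
    grid = match.group(1)
    # Split the even-length grid into easting / northing halves.
    half = len(grid) // 2
    return {"easting": grid[:half], "northing": grid[half:]}


def request_fire_mission(target: dict) -> None:
    # Stand-in for the simulation call that places indirect fire on the target.
    print(f"Fire mission: easting {target['easting']}, northing {target['northing']}")


target = parse_call_for_fire("Fire mission, enemy armour, grid 123456, over.")
if target:
    request_fire_mission(target)
```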

Common operating picture 

It’s important to understand that these simulated events were not taking place only within Hadean’s systems. Because we create a common operating picture – combining scalable pattern-of-life with our own scalable spatial computing engine and interest management systems – third-party simulation systems can be connected together.

This meant that the AI-driven events described above were brought into effect across all the simulation systems involved in this demo training exercise, including VR Forces, VBS, and ASCOT7. 
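
The sketch below is a toy illustration of that publish/subscribe idea: an event enters the common operating picture once and is fanned out to every connected simulator whose declared interest it matches. The class names and filter logic are assumptions for illustration, not Hadean’s engine or its real connectors to VR Forces, VBS, or ASCOT7.

```python
from dataclasses import dataclass

@dataclass
class EntityUpdate:
    entity_id: str
    kind: str
    x: float
    y: float


class CommonOperatingPicture:
    def __init__(self):
        # Each subscriber is a pair: (interest predicate, delivery callback).
        self.subscribers = []

    def subscribe(self, interest, deliver):
        self.subscribers.append((interest, deliver))

    def publish(self, update: EntityUpdate):
        # Interest management: only simulators whose interest expression
        # matches the update receive it.
        for interest, deliver in self.subscribers:
            if interest(update):
                deliver(update)


cop = CommonOperatingPicture()
cop.subscribe(lambda u: u.x < 500, lambda u: print("Simulator A receives:", u))
cop.subscribe(lambda u: u.kind == "convoy", lambda u: print("Simulator B receives:", u))

# An AI-driven event (for example the dispatched convoy) is published once and
# reaches every connected simulator whose interest it matches.
cop.publish(EntityUpdate("convoy-1", "convoy", 120.0, 340.0))
```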

This shows the power of taking our innovations and using them to amplify what is possible with high-quality systems which are already in use today by militaries around the world.

Conclusion and looking ahead

The future is exciting. We are at the beginning of a Cambrian explosion in capabilities, and at Hadean we are thinking deeply about customer needs and use cases for these new AI capabilities, ensuring we deliver real value to the end user, and to society as a whole. 

This is uncharted territory, and the possibilities are manifold. Now is the time to explore in depth, and find out how these capabilities can enhance what we do.

We have many more concepts on how AI can enhance existing training simulation systems, and will be revealing them as soon as we are able. But think: worlds which understand their own context, and can react and respond accordingly, creating more realistic training – and beyond the defence sphere, more engaging and humanised virtual worlds that are a pleasure to spend time in.

If you’d like to learn more about how AI can enhance your user interface and training systems, feel free to reach out at hadean.com/contact.
