In Anticipation of A New Simulation Audio System: Q-SYS
Blog post by SimGHOSTS UK Officer Chris Gay
Here at the Hull Institute of Learning & Simulation, we are approaching our five-year anniversary of operation. Whilst our routine has remained largely unchanged since the start - the usual manikin-based scenarios with video feedback - one constant has been the pursuit of better audio. For the purpose of this article, I split the audio discussion into the following areas:
- Capture of audio in the scenario
- SimMan 3G or other manikin voice
- Playback / relay of audio for live viewing or debrief
Capture of audio in the scenario
Initially we relied solely on ceiling-based microphones integrated into our camera feedback system, but it soon transpired that the wide variety of possible scenarios and soundscapes didn't suit a 'one size fits all' audio capture setup. Certain areas were covered well at good quality, whilst others were dead spots, and 'noise contamination' meant that key conversations were sometimes missed. We then trialled adding a mobile clip mic, which brought in excellent quality audio from the main candidate - following the trial, we invested in three clip mics, and the system has remained the same since.
SimMan 3G or other manikin voice
We use the default voice methods provided with our high-fidelity manikins - usually some sort of Voice over IP connection running in parallel with the controls from the control computer. We find there is a steep learning curve for voice actors: mic positioning, volume and attention to delegate actions can all disrupt the effectiveness of the manikin voice. Before we retired our SimMan 2G, we had improvised our own wireless mic and amp setup to provide the voice, and whilst it took a while to set up each time, it provided an as-yet-unrivalled quality of audio.
We find in-scenario delegate comprehension of the manikin voice is good about 80% of the time, dropping below expectation the other 20%, usually for the reasons mentioned above. However, we find comprehension of the manikin voice in playback or the live viewing room is frequently poor, which takes us into our final category…
Playback / relay of audio for live viewing or debrief
Each video source is paired with an accompanying sound source, so several concurrent streams can be played. For example, the two main cameras in our 4-bedded ward are paired with two ceiling-based microphones located above their default viewpoints. This means that users in a live viewing or playback situation have to select on screen which source they'd like to listen to. We are now in a situation where we possibly have too much choice, and selecting an option on screen isn't as dynamic as users expect, so incorrect channels get selected and key conversations are missed. As a result, an experienced technician usually has to be present to live-mix the sounds for optimal use during debriefing.
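To make the limitation concrete, the one-to-one pairing described above can be sketched as a simple lookup - all channel names here are illustrative, not our actual configuration:

```python
# Hypothetical sketch of fixed camera-to-microphone pairing.
# Every video source maps to exactly one audio source, so a viewer
# hears only the channel tied to the camera they have selected.
SOURCE_PAIRS = {
    "ward_cam_1": "ceiling_mic_1",
    "ward_cam_2": "ceiling_mic_2",
    "ward_cam_3": "clip_mic_1",
}

def audio_for(camera: str) -> str:
    """Return the single audio channel paired with a given video source."""
    if camera not in SOURCE_PAIRS:
        raise ValueError(f"No audio source paired with {camera!r}")
    return SOURCE_PAIRS[camera]

# Switching cameras means manually switching audio too - which is
# exactly where key conversations get missed in live viewing.
```

The rigidity of this mapping, rather than any one component, is what forces a technician into the loop.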
The next step
Whilst the gradual improvements above gave us a much better system than we started with, the weak link in the chain was still at the viewing end - what's the point of having an amazing in-scenario system if it's not user-friendly at the other end?
It was with this in mind that we started to consult with our suppliers about making the system even better, which led us to a demo of Q-SYS with a company called Shure. The Q-SYS system is used across the world in places where audio and media management is critical, such as airports, performance venues and even Disneyland. In essence, Q-SYS can take multiple audio inputs and outputs and virtualise their transit from point to point. This enables a much broader range of flexible options for routing sounds than an analogue system can offer.
In HILS we have 20 different sound channels with multiple varying endpoints - imagine being far less constrained by their physical locations. The second thing that appeals to us is the scalable management of the system: rather than being 100% dependent on the supplier, we will be trained in the 'back end' programming, meaning we can adjust the system ourselves as our needs change. This links into our third benefit: the system interface can be presented at various levels of complexity, so we can programme a simple novice interface, empowering our more tech-phobic faculty to choose the sounds they want to hear, whilst techs can have a more complex one. The system also has a range of clever tricks, such as noise cancellation, rules against duplicating sounds, and editable profiles and presets, all of which we have yet to play with.
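The appeal of virtualised routing is easier to see in a minimal sketch. This is not Q-SYS code (Q-SYS has its own designer software and scripting environment) - just an illustration, with made-up channel names, of the many-to-many matrix idea that replaces fixed pairing:

```python
# Illustrative sketch of a many-to-many audio routing matrix, the
# concept behind networked platforms like Q-SYS. Names are hypothetical.
class AudioMatrix:
    """Route any input channel to any set of output destinations."""

    def __init__(self):
        # destination -> set of input channels mixed into it
        self.routes: dict[str, set[str]] = {}

    def connect(self, source: str, destination: str) -> None:
        self.routes.setdefault(destination, set()).add(source)

    def disconnect(self, source: str, destination: str) -> None:
        self.routes.get(destination, set()).discard(source)

    def inputs_feeding(self, destination: str) -> list[str]:
        return sorted(self.routes.get(destination, set()))

matrix = AudioMatrix()
# The debrief room can mix the manikin voice with a clip mic,
# independently of whichever camera is on screen...
matrix.connect("manikin_voice", "debrief_room")
matrix.connect("clip_mic_1", "debrief_room")
# ...while the live viewing room listens to a ceiling mic.
matrix.connect("ceiling_mic_1", "live_viewing")
```

Because routes are just data rather than physical cabling, presets for common scenario layouts become a matter of saving and recalling a set of connections.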
In summary, our quest for audio improvement is leading us on an exciting new journey, and we'll be keen to share our progress over the coming months.