Over the past few months I have been working on my speech-to-text tool. On my own, I ran into numerous coding and playability issues with my game and accessibility tool. With the help of Warren Hilton at the University of Salford, a solution has been found that will allow me to obtain testing data so that my dissertation can progress.
The main problem with my speech-to-text tool was not giving my open-source Unity game speech-to-text capability, but enabling cross-network communication so that the speech appeared on the other player’s screen. Cross-network communication is a complex task that was outside my abilities, and outside the abilities of the people who have helped me throughout the project.
Microsoft Azure allowed me to create a simple Unity application that recognised speech and output it as text, following “Quickstart: Recognise speech from a microphone input“. I first looked into the possibility of using this in conjunction with my open-source first-person shooter Unity game. This is where the issue arose with multi-device conversation across the collaborative gaming network.
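The one-shot flow in that quickstart can be sketched roughly as below. This is an illustrative sketch only, not the project's actual code: with the real SDK the recognizer would be an `azure.cognitiveservices.speech.SpeechRecognizer` built from a subscription key and region, whereas here any object exposing a `recognize_once()` method returning a result with a `.text` attribute will do, so the wiring can be shown without credentials. The `FakeRecognizer` and its sample phrase are hypothetical stand-ins.

```python
from dataclasses import dataclass


@dataclass
class RecognitionResult:
    """Minimal stand-in for the SDK's result object (only .text is used)."""
    text: str


def recognize_utterance(recognizer) -> str:
    """Capture a single utterance and return its transcript.

    Mirrors the quickstart pattern: with the real Azure Speech SDK,
    `recognizer` would be a SpeechRecognizer and recognize_once() would
    block until one phrase is heard from the microphone.
    """
    result = recognizer.recognize_once()
    return result.text


class FakeRecognizer:
    """Offline stand-in for the Azure recognizer, for illustration only."""
    def recognize_once(self):
        return RecognitionResult(text="enemy behind the blue door")


print(recognize_utterance(FakeRecognizer()))
```

In the game itself, the returned transcript would then need to be displayed on the other player's screen, which is exactly the cross-network step described above.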
Because the multi-device conversation issue was beyond my coding abilities, I carried out further research into alternative methods of testing my speech-to-text tool. Using Microsoft Azure, I found a translator tool that allows speech-to-text communication to run as a separate console: “Quickstart: Multi-device conversation“. The console can be run in “presentation mode”, which provides constant, uninterrupted speech-to-text; this allows game-play and speech-to-text to run simultaneously. This method appeared to be the best solution to my problems, letting my research keep moving and allowing testing to commence in the very near future.
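The continuous, uninterrupted behaviour of “presentation mode” can be sketched as an event-driven loop. Again this is a hedged illustration, not the console's actual implementation: with the real SDK this role is played by `start_continuous_recognition()` and the `recognized` event on a `SpeechRecognizer`, while here a plain callback stands in for that event so the idea runs offline. The `TranscriptFeed` class and the sample phrases are hypothetical.

```python
class TranscriptFeed:
    """Collects recognised phrases as they arrive, without blocking game-play.

    In the real tool, on_recognized would be wired to the Speech SDK's
    `recognized` event during continuous recognition, and each phrase would
    be shown alongside the running game rather than just stored.
    """

    def __init__(self):
        self.lines = []

    def on_recognized(self, text: str) -> None:
        # Empty results (e.g. silence) are dropped so the transcript
        # stays readable during long uninterrupted sessions.
        if text:
            self.lines.append(text)


feed = TranscriptFeed()
for phrase in ["need backup", "", "pushing left"]:
    feed.on_recognized(phrase)
print(feed.lines)
```

Because the console runs separately from the game, this loop never pauses game-play: recognition events simply arrive and append to the visible transcript.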