It seems that 2023 is the year of artificial intelligence (AI), and Microsoft is the latest company keen to get in on the action.
A summary of the paper explains how the technology, called VALL-E, “emerges in-context learning capabilities and can be used to synthesize high-quality personalized speech with only a 3-second enrolled recording of an unseen speaker as an acoustic prompt.”
In simple terms, the tool can break down what makes a person sound the way they do, using phoneme and acoustic code prompts made possible by Meta’s EnCodec, and then generate speech that closely mimics how that person might sound beyond the three-second sample recording. The early version of VALL-E was trained by analyzing more than 60,000 hours of English-language voice recordings.
The GitHub post includes a number of examples of the technology in action, including its ability to preserve emotional cues and even environmental effects, such as the distant, compressed sound typical of a phone conversation.
The paper also briefly mentions the potential implications of such text-to-speech tools, an increasingly important topic at a time when AI has raised ethical concerns we had previously only dreamt of (or had nightmares about).
In fact, any number of problems could arise, from faked recordings granting permission for something (consider the number of banks that use telephone-based voice-recognition authentication) to far worse.
The conclusion states that VALL-E “may carry potential risks in misuse of the model, such as spoofing voice identification or impersonating a specific speaker.” Benj Edwards of Ars Technica has also noted that Microsoft has yet to share the project’s code for anybody else to try out, suggesting that the potential risks are still being weighed.
Looking for the opposite? Here are the best speech-to-text apps