Apple will soon make it possible to create an AI model of your own voice – directly on iPhone, iPad, and Mac. The option is part of a new accessibility feature that the company is expected to introduce with iOS 17, iPadOS 17, and macOS 14 in the fall. A new text-to-speech system converts typed text directly into spoken output, which can be used not only in face-to-face situations but also in phone and video calls. Frequently used phrases can be saved for faster retrieval, the company said.
Live Speech and Personal Voice
By default, Apple's operating systems use the voice of the Siri voice assistant for this "Live Speech" feature; on Apple devices it is available in male, female, and now also gender-neutral variants.
With the additional "Personal Voice" function, users also have the option of training an AI voice model on their own voice. To do this, randomly selected sentences must be read aloud for around 15 minutes, as Apple explained. The feature is intended in particular for people at risk of losing their ability to speak due to illness. Training the model afterwards from existing audio recordings of a voice does not appear to be planned for the time being.
Training happens locally on the device
The personal voice model is trained locally on the device – iPhone, iPad, or Mac, provided it has an Apple-designed chip – and not in the cloud, as Apple emphasized, citing security. Optionally, the voice model can be synchronized between a user's own devices via iCloud, where it is said to be protected from any external access by end-to-end encryption.
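For developers, Apple exposes trained personal voices through its existing AVFoundation speech APIs. The sketch below shows how an app might speak a phrase with the user's Personal Voice when one exists and has been authorized, falling back to a standard system voice otherwise. It is an illustrative sketch based on the iOS 17 speech-synthesis APIs, not Apple's own Live Speech implementation.

```swift
import AVFoundation

// Keep a reference so the synthesizer isn't deallocated mid-speech.
let synthesizer = AVSpeechSynthesizer()

func speak(_ text: String) {
    // Personal Voices are privacy-gated: the app must request access first.
    AVSpeechSynthesizer.requestPersonalVoiceAuthorization { status in
        let utterance = AVSpeechUtterance(string: text)
        if status == .authorized,
           let personalVoice = AVSpeechSynthesisVoice.speechVoices()
               .first(where: { $0.voiceTraits.contains(.isPersonalVoice) }) {
            // Use the voice the user trained on-device.
            utterance.voice = personalVoice
        } else {
            // Fall back to a standard system voice.
            utterance.voice = AVSpeechSynthesisVoice(language: "en-US")
        }
        synthesizer.speak(utterance)
    }
}
```

Because authorization and voice enumeration are handled by the system, an app never touches the model itself – it only selects the resulting voice, which is consistent with the on-device, access-controlled design Apple describes.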
Personal Voice and Live Speech are among several new accessibility features Apple just announced. The functions are to be introduced in the operating systems later this year; at launch they will support English only.