Apple introduces bespoke text-to-speech voices.

Photo: Apple

Today, Apple previewed a set of new features aimed at cognitive, vision, and speech accessibility. The tools are coming to iPhone, iPad, and Mac later this year, and Apple emphasizes that they were built with input from members of the disability communities they serve.

Assistive Access, coming soon to iOS and iPadOS, is designed for users with cognitive disabilities. It distills the iPhone and iPad interface down to its essentials, making it easier to talk with loved ones, share photos, and listen to music. The Phone and FaceTime apps, for instance, are combined into a single Calls app.

The streamlined design uses large icons, high contrast, and clearer text labels. Users can configure these visual options, and their preferences carry over to every app that supports Assistive Access.

Blind and low-vision users can already use the Magnifier tool to detect nearby doors, people, and signs. Building on that, Apple's new Point and Speak feature uses the device's camera and LiDAR scanner to help users with vision impairments interact with physical objects that carry multiple text labels.


Pointing at a microwave keypad, for example, Point and Speak reads out the text on the “popcorn,” “pizza,” and “power level” buttons as the user moves a finger across them. At launch, Point and Speak will support English, French, Italian, German, Spanish, Portuguese, Chinese, Cantonese, Korean, Japanese, and Ukrainian.

The most intriguing announcement is Personal Voice, which creates a synthesized voice that sounds like you rather than Siri. The tool is aimed at people at risk of losing their ability to speak, such as those with ALS. To create a Personal Voice, the user reads a randomized set of text prompts aloud into the microphone for about 15 minutes; the iPhone, iPad, or Mac then builds the voice on-device using machine learning. The result is similar in spirit to Acapela’s “my voice” offering for assistive devices.

A database of unique, highly trained text-to-speech models could do real harm in the wrong hands, but Apple says personalized voice data is never shared with anyone, not even Apple itself. The company doesn’t even tie a Personal Voice to your Apple ID, since some households share a login; instead, users must explicitly opt in if they want a Personal Voice created on a Mac to be available on their iPhone, or vice versa.

At launch, Personal Voice can only be created on devices with Apple silicon, and only in English.
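
For developers, the natural place such a voice would surface is Apple’s speech synthesis framework. The Swift sketch below is purely illustrative: it assumes Personal Voice is exposed through an AVFoundation personal-voice authorization call and a voice trait, which this announcement does not describe, and the PersonalVoiceSpeaker class and fallback voice are hypothetical choices.

import AVFoundation

// Minimal sketch, assuming AVSpeechSynthesizer gains a personal-voice
// authorization request and AVSpeechSynthesisVoice exposes an .isPersonalVoice
// trait. Falls back to a stock system voice if no Personal Voice is available.
final class PersonalVoiceSpeaker {
    private let synthesizer = AVSpeechSynthesizer()

    func speak(_ text: String) {
        AVSpeechSynthesizer.requestPersonalVoiceAuthorization { [weak self] status in
            guard let self else { return }

            // Prefer a voice the user trained with Personal Voice, if permitted.
            let personalVoice = (status == .authorized)
                ? AVSpeechSynthesisVoice.speechVoices()
                    .first { $0.voiceTraits.contains(.isPersonalVoice) }
                : nil

            let utterance = AVSpeechUtterance(string: text)
            utterance.voice = personalVoice ?? AVSpeechSynthesisVoice(language: "en-US")
            self.synthesizer.speak(utterance)
        }
    }
}

// Usage: speak a saved phrase, such as a coffee order, in the user's own voice.
// PersonalVoiceSpeaker().speak("A flat white with oat milk, please.")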

Apple is also helping nonspeaking users converse, using either a Siri voice or their Personal Voice. Live Speech, coming to iPhone, iPad, and Mac, lets users type what they want to say and have it spoken aloud; it works in FaceTime calls and can be launched from the lock screen. Live Speech also lets users save frequently used phrases, such as a regular coffee order.

Apple is upgrading its speech-to-text tools as well. Phonetic Suggestions makes it easier for people who type by voice to correct transcription errors: if the device hears “great” when you meant “grey,” fixing the mistake takes less work. Phonetic Suggestions will initially support English, Spanish, French, and German.

These accessibility enhancements are expected to reach Apple devices later this year. Separately, on Thursday Apple is expanding SignTime, which gives Apple Store and Apple Support customers access to an on-demand sign language interpreter, to Germany, Italy, Spain, and South Korea.
