Apple’s research blog, the “Apple Machine Learning Journal”, launched last summer, opens the month with a new entry. Under the headline “Optimizing Siri on HomePod in Far-Field Settings”, the post examines the smart speaker’s outstanding speech recognition.
Apple’s HomePod still answers many commands – such as playing internet radio stations or Spotify playlists – with a disappointing admission that it cannot help; Siri itself, however, instantly picks up even the most quietly whispered “Hey Siri”.
Even critics of the HomePod concede that Apple’s voice assistant has no trouble with noisy environments, with a large distance between the HomePod and the speaker, or with music playing at the same time. Almost without exception, Siri’s responsiveness is excellent.
Why this is so, and which factors contribute to it, is explained in the newly released blog entry:
The typical audio environment for HomePod has many challenges – echo, reverberation, and noise. Unlike Siri on iPhone, which works close to the user’s mouth, Siri needs to work well on the HomePod even in a far-field setting.
Users want to access Siri from many places, such as the couch or the kitchen, regardless of where HomePod is positioned. […] In this article, we discuss the machine learning techniques we use for on-line signal processing, the challenges we faced, and our solutions for achieving environmental and algorithmic robustness while ensuring energy efficiency.
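One classic building block behind far-field voice capture on a multi-microphone device is beamforming: aligning the channels toward the talker and averaging them, so the speech adds up coherently while uncorrelated room noise averages down. The toy sketch below illustrates only this general delay-and-sum idea with simulated microphones and known delays – it is not Apple’s implementation, and the signal, delays, and helper names are invented for illustration; Apple’s post describes far more sophisticated ML-driven multichannel processing.

```python
# Toy delay-and-sum beamformer (illustrative only, not Apple's method).
# Several "microphones" hear the same clean signal with different delays
# plus independent noise; re-aligning and averaging them raises the SNR.
import math
import random

random.seed(0)

def make_signal(n):
    """A clean stand-in for speech: a simple sinusoid."""
    return [math.sin(2 * math.pi * 5 * t / n) for t in range(n)]

def mic_capture(clean, delay, noise_std):
    """Delayed, noisy copy of the clean signal, as one mic would hear it."""
    out = []
    for t in range(len(clean)):
        s = clean[t - delay] if t >= delay else 0.0
        out.append(s + random.gauss(0.0, noise_std))
    return out

def delay_and_sum(mics, delays):
    """Advance each channel by its (assumed known) delay, then average."""
    n = len(mics[0])
    out = []
    for t in range(n):
        acc = 0.0
        for sig, d in zip(mics, delays):
            idx = t + d
            acc += sig[idx] if idx < n else 0.0
        out.append(acc / len(mics))
    return out

def snr_db(clean, noisy):
    """Signal-to-error ratio of a reconstruction, in decibels."""
    sig = sum(c * c for c in clean)
    err = sum((c - x) ** 2 for c, x in zip(clean, noisy))
    return 10 * math.log10(sig / err)

clean = make_signal(400)
delays = (0, 3, 7, 11)
mics = [mic_capture(clean, d, 0.5) for d in delays]
beamformed = delay_and_sum(mics, delays)
print(f"single mic: {snr_db(clean, mics[0]):.1f} dB")
print(f"beamformed: {snr_db(clean, beamformed):.1f} dB")
```

With four channels of independent noise, averaging cuts the noise power roughly by a factor of four, so the beamformed output should come out around 6 dB better than any single microphone. Real systems must additionally estimate the delays (the talker’s direction) and suppress the speaker’s own music playback via echo cancellation, which is where the machine learning in Apple’s post comes in.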