WatchOS 4 - Feature parity should exist between using AudioRecorderController and realtime audio analysis via AVAudioInputNode around keeping screen on

Originator: Alex.Silvio.Muller
Number: rdar://32747071    Date Originated: 06/13/2017
Status: Open    Resolved:
Product: watchOS    Product Version: 4.0 (15R5281F)
Classification:    Reproducible:
 
Currently, with the new watchOS 4 capability for realtime audio processing (via AVAudioInputNode), there is no way to keep the screen awake while a tap is installed on the AVAudioInputNode obtained from the AVAudioEngine. As a result, we can render updates from the incoming audio data only for the duration the user has chosen in their watch's "On Tap" wake setting: either 15 or 75 seconds. We should be permitted to request additional time to keep the screen awake, similar to how the stock AudioRecorderController keeps the screen awake for the duration of a recording in order to provide realtime graphics rendering.
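For context, the setup in question looks roughly like the following sketch. The buffer size and the contents of the tap block are illustrative assumptions; the point is that once the tap is installed, nothing in the API lets us extend the screen-wake duration.

```swift
import AVFoundation

// Sketch of the realtime-audio setup described above: install a tap on
// the AVAudioEngine's input node to receive audio buffers as they arrive.
// Buffer size and the processing inside the block are illustrative.
let engine = AVAudioEngine()
let inputNode = engine.inputNode
let format = inputNode.outputFormat(forBus: 0)

inputNode.installTap(onBus: 0, bufferSize: 1024, format: format) { buffer, time in
    // Process `buffer` here, e.g. compute audio levels to drive a
    // realtime visualization on screen. While this tap is active, the
    // screen still sleeps after the user's "On Tap" duration elapses;
    // there is no API to request additional wake time for this case.
}

engine.prepare()
do {
    try engine.start()
} catch {
    print("Failed to start audio engine: \(error)")
}
```

The stock AudioRecorderController, by contrast, keeps the screen on for the whole recording without the developer doing any of this.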

These limitations result in a poor experience for users attempting to interact visually with incoming audio data. As a workaround, we can instruct the user to open their watch settings on the iPhone and set the "On Tap" duration to the longest available value; however, we also have no way to send the user there from the watch (a separate issue).

Comments

