The audio buffer/file captured during keyboard dictation should be surfaced along with the dictated text.
| Originator: | jbarbose | | |
| Number: | rdar://24876175 | Date Originated: | 02/27/16 |
| Status: | open | Resolved: | no |
| Product: | iOS SDK | Product Version: | 9.x |
| Classification: | Enhancement | Reproducible: | n/a |
Summary:

Tapping the mic button on an iOS keyboard is a largely opaque operation: the developer gets no access to the voice audio captured during dictation.

Steps to Reproduce:

1. Tap in an editable text field or text view.
2. Tap the mic button on the keyboard.
3. Tap "Done" when finished dictating.
4. In the dictationEnded() delegate method, provide a reference to the audio buffer that was captured for the dictated phrase.

Expected Results:

I want to be able to save the actual audio of a dictation, not only the resulting text.

Actual Results: n/a

Version: iOS 9

Notes: Not asking for any structure within the audio (e.g., time ranges mapped to each spoken/transcribed word), just the audio as a single blob of audio data.

Configuration: n/a
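For illustration, here is a minimal Swift sketch of the API shape this request envisions. The `DictationCaptureDelegate` protocol, the `dictationEnded(text:audioBuffer:)` method, and the `audioBuffer` parameter are all hypothetical; nothing like them exists in the iOS SDK today. Only the AVFoundation calls used to persist the buffer are real APIs.

```swift
import AVFoundation
import Foundation

// Hypothetical API shape for the requested enhancement. Neither this
// protocol nor the `audioBuffer` parameter exists in the iOS SDK today.
protocol DictationCaptureDelegate: AnyObject {
    // Imagined callback fired when the user taps "Done" on the dictation
    // keyboard: the transcript (available today) plus the raw audio of
    // the dictated phrase (the requested addition).
    func dictationEnded(text: String, audioBuffer: AVAudioPCMBuffer)
}

final class NoteEditor: DictationCaptureDelegate {
    func dictationEnded(text: String, audioBuffer: AVAudioPCMBuffer) {
        // The transcript is already inserted into the text view, as today.
        print("Dictated text: \(text)")
        // The new part: persist the audio blob for later playback or analysis.
        saveAudio(audioBuffer)
    }

    private func saveAudio(_ buffer: AVAudioPCMBuffer) {
        let url = URL(fileURLWithPath: NSTemporaryDirectory())
            .appendingPathComponent("dictation.caf")
        // AVAudioFile and write(from:) are real AVFoundation APIs;
        // only the source of `buffer` is hypothetical.
        if let file = try? AVAudioFile(forWriting: url, settings: buffer.format.settings) {
            try? file.write(from: buffer)
        }
    }
}
```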
Comments