Xcode-beta (8S128d): SFSpeechRecognizer API to provide domain hints
| Originator: | GriotSpeak | | |
| Number: | rdar://26919080 | Date Originated: | 2016-06-21 |
| Status: | Open | Resolved: | |
| Product: | | Product Version: | |
| Classification: | | Reproducible: | |
Summary:
There are many homophones and near-homophones that can greatly complicate extracting intent from spoken text. If developers were given a way to express common terms in their app’s domain before recognition takes place, this challenge could be considerably lessened.
The most useful API I can think of would take a dictionary `[String: String]` mapping International Phonetic Alphabet spellings to the corresponding terms. For an application concerned with music notes and pitches, such a dictionary might contain:
```swift
let letterDictionary: [String: String] = [
    "ə": "A",
    "bi": "B",
    "si": "C",
    "di": "D",
    "i": "E",
    "ɛf": "F",
    "ʤi": "G"
]
```
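Purely as a sketch of how this could be consumed, the dictionary might be handed to a recognition request before the task starts. The `phoneticHints` property below is hypothetical and does not exist in the current Speech framework; it only illustrates the requested shape, reusing `letterDictionary` from the example above.

```swift
import Speech

// Hypothetical usage of the proposed API. `phoneticHints` is NOT part of
// the shipping SFSpeechRecognizer/SFSpeechRecognitionRequest API; it stands
// in for whatever property the framework might expose for domain hints.
let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))!
let request = SFSpeechAudioBufferRecognitionRequest()
request.phoneticHints = letterDictionary  // IPA spelling -> preferred transcription

recognizer.recognitionTask(with: request) { result, _ in
    guard let result = result else { return }
    // With hints applied, a spoken /bi/ would come back as "B"
    // rather than "be" or "bee" in this app's domain.
    print(result.bestTranscription.formattedString)
}
```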
The letter-name example above also shows why apps would want to alter these hints dynamically, possibly via delegation: in the context of solfeggio syllables (do re mi fa sol la si do), the same app would want the pronunciation “si” mapped to the syllable “si” rather than to the note “C”.
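A minimal sketch of that delegation idea, assuming a hypothetical callback in the spirit of SFSpeechRecognitionTaskDelegate (again, nothing here is shipping API) that asks the app for its current hints so they can change with context:

```swift
import Speech

// Hypothetical delegate hook for supplying context-dependent hints.
// The method below is proposed, not part of SFSpeechRecognitionTaskDelegate.
final class PitchRecognitionController: NSObject {
    enum NamingContext { case letters, solfeggio }
    var context: NamingContext = .letters

    // Called (hypothetically) by the recognizer before it transcribes audio,
    // so the hints can track the app's current context.
    func speechRecognitionTask(_ task: SFSpeechRecognitionTask,
                               phoneticHintsFor locale: Locale) -> [String: String] {
        switch context {
        case .letters:
            return letterDictionary              // "si" -> "C", etc.
        case .solfeggio:
            var hints = letterDictionary
            hints["si"] = "si"                   // the solfeggio syllable, not the note C
            return hints
        }
    }
}
```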