An Objective-C sample app that shows how to design and implement fast, accurate voice commands on iOS. The architecture of this solution was used in Cities Unlocked, a collaboration between Microsoft and Guide Dogs in the UK to help visually impaired users navigate cities independently using an iOS mobile app, a remote control and a custom headset.
You can find an in-depth explanation of the solution in the code story: Improving speech and intent recognition on iOS.
- iOS Cognitive Services Speech SDK integration to establish a real-time stream and return partial and final string results as the user is speaking.
- Local intent extraction using a cache system.
- Online intent extraction using LUIS (a minimal sketch of both approaches follows this list).
- An Objective-C implementation of the iOS 10 Speech SDK.
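The local cache can be as simple as a dictionary that maps known utterances to intent names, so an exact match is resolved instantly and only a cache miss triggers an HTTP call to LUIS. The sketch below illustrates that flow; the `IntentResolver` class name, the placeholder App ID and key, the LUIS v2.0 endpoint format and the response shape are assumptions for illustration and may differ from the sample's actual code.

```objc
// IntentResolver - a minimal sketch of cached + online intent extraction.
#import <Foundation/Foundation.h>

@interface IntentResolver : NSObject
- (void)resolveIntentForUtterance:(NSString *)utterance
                       completion:(void (^)(NSString *intent))completion;
@end

@implementation IntentResolver {
    NSDictionary<NSString *, NSString *> *_cache;   // utterance -> intent
}

- (instancetype)init {
    self = [super init];
    if (self) {
        // Local cache: exact matches never leave the device.
        _cache = @{ @"what are the points of interest around me" : @"Orientate",
                    @"what is my current location"               : @"Location" };
    }
    return self;
}

- (void)resolveIntentForUtterance:(NSString *)utterance
                       completion:(void (^)(NSString *intent))completion {
    NSString *key = utterance.lowercaseString;
    NSString *cached = _cache[key];
    if (cached) {                       // Cache hit: no round trip to LUIS.
        completion(cached);
        return;
    }

    // Cache miss: fall back to LUIS. App ID, key and endpoint format are placeholders.
    NSString *query = [key stringByAddingPercentEncodingWithAllowedCharacters:
                       [NSCharacterSet URLQueryAllowedCharacterSet]];
    NSString *url = [NSString stringWithFormat:
        @"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/%@?subscription-key=%@&q=%@",
        @"YOUR_LUIS_APP_ID", @"YOUR_LUIS_KEY", query];

    [[[NSURLSession sharedSession] dataTaskWithURL:[NSURL URLWithString:url]
            completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
        NSString *intent = nil;
        if (data && !error) {
            NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:nil];
            intent = json[@"topScoringIntent"][@"intent"];   // response shape assumed for v2.0
        }
        completion(intent);
    }] resume];
}
@end
```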
1. Obtain a Speech API subscription key by following the instructions on our website (https://www.microsoft.com/cognitive-services/en-us/sign-up).
2. Sign up for the Language Understanding Intelligent Service (LUIS) with any Microsoft account.
3. Create a new application in LUIS.
4. To match the example dictionary, add two new intents (Orientate and Location) and their respective utterances ("What are the points of interest around me" and "What is my current location").
5. Train the model (bottom left button) and publish the application (top left).
6. In the publish window, copy the App Id and the Subscription Key from the URL.
7. Open RapidVoiceCommands/Info.plist and add the keys obtained in steps 1 and 6 (a sketch of reading these values at runtime follows this list).
8. Start the sample app, press the Start Listening button and speak a command! You can switch between Cognitive Services and the iOS 10 Speech SDK for Speech to Text.
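At runtime the keys added in step 7 can be read from the main bundle. The snippet below is a sketch only; the key names shown (`SpeechSubscriptionKey`, `LuisAppId`, `LuisSubscriptionKey`) are hypothetical placeholders, so match them to whatever keys the sample's Info.plist actually defines.

```objc
// Reading configuration values from Info.plist (key names are hypothetical).
NSBundle *bundle = [NSBundle mainBundle];
NSString *speechKey = [bundle objectForInfoDictionaryKey:@"SpeechSubscriptionKey"];
NSString *luisAppId = [bundle objectForInfoDictionaryKey:@"LuisAppId"];
NSString *luisKey   = [bundle objectForInfoDictionaryKey:@"LuisSubscriptionKey"];
NSAssert(speechKey.length && luisAppId.length && luisKey.length,
         @"Add your Cognitive Services and LUIS keys to Info.plist before running.");
```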
**Please note that the iOS 10 Speech SDK does not work in the Simulator. You need to deploy to a physical device.**
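For the iOS 10 Speech SDK path, recognition goes through Apple's Speech framework (`SFSpeechRecognizer`), which streams microphone audio and keeps delivering partial transcriptions until a result is marked final. The sketch below shows that pattern in a minimal, self-contained form; it is an illustration rather than the sample's exact code, and it assumes the `NSMicrophoneUsageDescription` and `NSSpeechRecognitionUsageDescription` entries are present in Info.plist.

```objc
#import <Speech/Speech.h>
#import <AVFoundation/AVFoundation.h>

// Minimal sketch of streaming recognition with the iOS 10 Speech framework.
// The properties keep the engine, request and task alive while recognition runs.
@interface SpeechListener : NSObject
@property (nonatomic, strong) AVAudioEngine *engine;
@property (nonatomic, strong) SFSpeechRecognizer *recognizer;
@property (nonatomic, strong) SFSpeechAudioBufferRecognitionRequest *request;
@property (nonatomic, strong) SFSpeechRecognitionTask *task;
- (void)startListening;
@end

@implementation SpeechListener

- (void)startListening {
    [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
        if (status != SFSpeechRecognizerAuthorizationStatusAuthorized) { return; }
        dispatch_async(dispatch_get_main_queue(), ^{ [self beginRecognition]; });
    }];
}

- (void)beginRecognition {
    self.recognizer = [[SFSpeechRecognizer alloc] init];
    self.request = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
    self.engine = [[AVAudioEngine alloc] init];

    // Feed microphone buffers into the recognition request.
    AVAudioInputNode *input = self.engine.inputNode;
    [input installTapOnBus:0
                bufferSize:1024
                    format:[input outputFormatForBus:0]
                     block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
        [self.request appendAudioPCMBuffer:buffer];
    }];

    self.task = [self.recognizer recognitionTaskWithRequest:self.request
                                              resultHandler:^(SFSpeechRecognitionResult *result,
                                                              NSError *error) {
        if (result) {
            // Partial transcriptions stream in continuously; isFinal marks the last one.
            NSLog(@"%@: %@", result.isFinal ? @"Final" : @"Partial",
                  result.bestTranscription.formattedString);
        }
    }];

    [self.engine prepare];
    [self.engine startAndReturnError:nil];
}

@end
```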
Copyright (c) Microsoft Corporation, licensed under the MIT License (MIT).