Rapid voice commands iOS

An Objective-C sample app that shows how to design and implement accurate and fast voice commands on iOS. The architecture of this solution was used in Cities Unlocked, a collaboration between Microsoft and Guide Dogs in the UK to enable visually impaired users to navigate cities independently using an iOS mobile app, a remote control, and a custom headset.

You can find an in-depth explanation of the solution in the code story: Improving speech and intent recognition on iOS.

What you will find in this repo

  • iOS Cognitive Services Speech SDK integration to establish a real-time stream and return partial and final string results as the user is speaking.
  • Local intent extraction using a cache system (a combined cache-and-LUIS lookup is sketched after this list).
  • Online intent extraction using LUIS.
  • An Objective-C implementation of the iOS 10 Speech SDK (see the second sketch after this list).
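
A natural way to combine the local and online paths is a cache-first lookup: utterances already known locally resolve to an intent with no network round trip, and anything else falls back to LUIS. The sketch below illustrates that pattern only; the `IntentResolver` class, the `kLuisAppId`/`kLuisSubscriptionKey` constants, and the LUIS v2.0 endpoint format are assumptions for illustration, not code taken from this repo.

```objc
#import <Foundation/Foundation.h>

// Hypothetical configuration constants (placeholders, not the repo's actual names).
static NSString * const kLuisAppId = @"<your-luis-app-id>";
static NSString * const kLuisSubscriptionKey = @"<your-luis-subscription-key>";

// Hypothetical wrapper: check the local cache first, only then call LUIS.
@interface IntentResolver : NSObject
- (void)intentForUtterance:(NSString *)utterance
                completion:(void (^)(NSString *intent))completion;
@end

@implementation IntentResolver {
    NSDictionary<NSString *, NSString *> *_cache;
}

- (instancetype)init {
    if ((self = [super init])) {
        // Local cache: known phrases resolve to intents with no network round trip.
        _cache = @{ @"what are the points of interest around me" : @"Orientate",
                    @"what is my current location"               : @"Location" };
    }
    return self;
}

- (void)intentForUtterance:(NSString *)utterance
                completion:(void (^)(NSString *intent))completion {
    NSString *normalized = [[utterance lowercaseString]
        stringByTrimmingCharactersInSet:[NSCharacterSet whitespaceAndNewlineCharacterSet]];

    NSString *cached = _cache[normalized];
    if (cached) {                       // cache hit: answer immediately
        completion(cached);
        return;
    }

    // Cache miss: fall back to LUIS over HTTPS. The URL below follows the public
    // LUIS v2.0 endpoint format and is an assumption, not code from this repo.
    NSString *query = [normalized stringByAddingPercentEncodingWithAllowedCharacters:
                       [NSCharacterSet URLQueryAllowedCharacterSet]];
    NSString *urlString = [NSString stringWithFormat:
        @"https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/%@?subscription-key=%@&q=%@",
        kLuisAppId, kLuisSubscriptionKey, query];

    [[[NSURLSession sharedSession]
        dataTaskWithURL:[NSURL URLWithString:urlString]
      completionHandler:^(NSData *data, NSURLResponse *response, NSError *error) {
          if (!data) { completion(nil); return; }
          NSDictionary *json = [NSJSONSerialization JSONObjectWithData:data options:0 error:NULL];
          NSDictionary *top = json[@"topScoringIntent"];
          completion(top[@"intent"]);
      }] resume];
}

@end
```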

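On the iOS 10 side, the Speech framework (`SFSpeechRecognizer`) delivers the same partial-then-final stream of transcripts. A minimal Objective-C sketch follows; the `SpeechListener` wrapper is made up for illustration, and a real app also needs the `NSMicrophoneUsageDescription` and `NSSpeechRecognitionUsageDescription` Info.plist entries plus proper error handling.

```objc
#import <Speech/Speech.h>
#import <AVFoundation/AVFoundation.h>

// Illustrative wrapper class, not taken from the repo.
@interface SpeechListener : NSObject
- (void)startListening;
@end

@implementation SpeechListener {
    AVAudioEngine *_engine;
    SFSpeechRecognizer *_recognizer;
    SFSpeechAudioBufferRecognitionRequest *_request;
    SFSpeechRecognitionTask *_task;
}

- (void)startListening {
    [SFSpeechRecognizer requestAuthorization:^(SFSpeechRecognizerAuthorizationStatus status) {
        if (status != SFSpeechRecognizerAuthorizationStatusAuthorized) { return; }

        self->_recognizer = [[SFSpeechRecognizer alloc]
            initWithLocale:[NSLocale localeWithLocaleIdentifier:@"en-US"]];
        self->_request = [[SFSpeechAudioBufferRecognitionRequest alloc] init];
        self->_request.shouldReportPartialResults = YES;   // stream hypotheses as the user speaks

        self->_engine = [[AVAudioEngine alloc] init];
        AVAudioInputNode *input = self->_engine.inputNode;
        [input installTapOnBus:0
                    bufferSize:1024
                        format:[input outputFormatForBus:0]
                         block:^(AVAudioPCMBuffer *buffer, AVAudioTime *when) {
            [self->_request appendAudioPCMBuffer:buffer]; // feed microphone audio to the recognizer
        }];
        [self->_engine prepare];
        [self->_engine startAndReturnError:nil];

        self->_task = [self->_recognizer
            recognitionTaskWithRequest:self->_request
                         resultHandler:^(SFSpeechRecognitionResult *result, NSError *error) {
            if (result) {
                NSString *transcript = result.bestTranscription.formattedString;
                NSLog(@"%@: %@", result.isFinal ? @"Final" : @"Partial", transcript);
                // Intent extraction (cache lookup or LUIS) would run on the final transcript.
            }
        }];
    }];
}

@end
```
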
Getting started

  1. You must obtain a Speech API subscription key by following the instructions on our website (https://www.microsoft.com/cognitive-services/en-us/sign-up).
  2. You need to sign up for the Language Understanding Intelligent Service (LUIS) with any Microsoft account.
  3. Create a new application in LUIS.
  4. To match the example dictionary, add two new intents (Orientate and Location) and their respective utterances ("What are the points of interest around me" and "What is my current location").
  5. Train the model (bottom left button) and publish the application (top left).
  6. In the publish window, copy the App Id and the Subscription Key from the URL.
  7. Open RapidVoiceCommands/Info.plist and add the keys obtained in steps 1 and 6 (a sketch of reading them at runtime follows this list).
  8. Start the sample app, press the Start Listening button, and speak a command! You can switch between the Cognitive Services and iOS 10 Speech SDKs for speech to text.
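
Since the keys live in Info.plist, they can be read from the main bundle at runtime. A minimal sketch, assuming hypothetical key names (check the sample's Info.plist for the real ones):

```objc
#import <Foundation/Foundation.h>

// Illustrative only: the plist key names below are assumptions, not the repo's actual keys.
static NSString *RVCConfigValue(NSString *key) {
    return [[NSBundle mainBundle] objectForInfoDictionaryKey:key];
}

// Example usage when wiring up the speech client and the LUIS request:
// NSString *speechKey = RVCConfigValue(@"SpeechSubscriptionKey");
// NSString *luisAppId = RVCConfigValue(@"LuisAppId");
// NSString *luisKey   = RVCConfigValue(@"LuisSubscriptionKey");
```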

**Please note that the iOS 10 Speech SDK does not work in the Simulator. You need to deploy to a physical device.**

License

Copyright (c) Microsoft Corporation, licensed under the MIT License (MIT).
