LUIS is a Microsoft machine learning service that can convert phrases and sentences into intents and entities. This allows us to easily talk to a computer and have it understand what we want. In our case, we're going to be moving an object on the screen.

Once you sign up, it should take you to the My Apps page. When your app is created, you will be taken to the Intents screen. Here, we want to create a new intent called Move. An intent is basically the context of a phrase – here, we're creating an intent to move the object.

Then go to the Entities screen and create two new entities: MoveDirection and MoveDistance (both Simple entity types). In a phrase, LUIS will look for these entities.

Now let's go back to the Intents screen and select our Move intent. This will bring us to a screen where we can enter in example phrases. We need to enter in examples so that LUIS can learn about our intent and entities. Make sure that you reference all the types of move directions at least once. Then for each phrase, select the direction and attach a MoveDirection entity to it. For the distance (numbers), attach a MoveDistance entity to it. The more phrases you have and the more different they are – the better the final results will be.

Once that's done, click on the Train button to train the app. When complete, you can click on the Test button to test out the app. Try entering in a phrase and look at the resulting entities. Once that's all good to go, click on the Publish button to publish the app – allowing us to use the API.

The info we need when using the API is found by clicking on the Manage button and going to the Keys and Endpoints tab. Here, we need to copy the Endpoint url.

Before we jump into Unity, let's test out the API using Postman. For the url, paste in the Endpoint up until the first question mark (?). Then for the parameters, we want to have the following:

- verbose – if true, will return all intents instead of just the top scoring intent
- timezoneOffset – the timezone offset for the location of the request in minutes
- subscription-key – your authoring key (at the end of the Endpoint)

Then if we press Send, a JSON file with our results should be returned.

LUIS is used for converting a phrase to intents and entities, but we still need something to convert our voice to text. For this, we're going to use Microsoft's cognitive speech services. What we need from the dashboard is the Endpoint location (in my case westus) and Key 1. We then want to download the Speech SDK for Unity here. It comes as a .unitypackage file we can just drag and drop into the project.

Create a new Unity project or use the included project files (we'll be using those). Import the Speech SDK package. For the SDK to work, we need to go to our Project Settings (Edit > Project Settings…) and set the Scripting Runtime Version to .NET 4.x Equivalent. This is because the SDK – and our own code – will be using some new C# features.

Create a new C# script (right click Project > Create > C# Script) and call it LUISManager. We're going to need to access a few outside namespaces for this script. For our variables, we have the url and subscription key. Our resultTarget will be the object we're moving. It's of type Mover, which we haven't created yet, so just comment that out for now. onSendCommand is called when the command is ready to be sent.
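The Postman request described earlier can also be reproduced as a small console program. This is a minimal sketch, assuming the LUIS v2 endpoint format – the region, app id, key, and test phrase are all placeholders you'd replace with values from your own dashboard:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LuisRequestExample
{
    static async Task Main()
    {
        const string region = "westus";                      // placeholder endpoint region
        const string appId = "YOUR-LUIS-APP-ID";             // placeholder app id
        const string subscriptionKey = "YOUR-AUTHORING-KEY"; // placeholder key

        // The Endpoint up to the first '?', then the three query parameters from above,
        // plus q – the phrase we want LUIS to analyze.
        string url = $"https://{region}.api.cognitive.microsoft.com/luis/v2.0/apps/{appId}" +
                     $"?verbose=true&timezoneOffset=0&subscription-key={subscriptionKey}" +
                     "&q=" + Uri.EscapeDataString("move the cube left 2 units");

        using (var client = new HttpClient())
        {
            // Same JSON result that Postman shows after pressing Send.
            string json = await client.GetStringAsync(url);
            Console.WriteLine(json);
        }
    }
}
```

With verbose=true, the returned JSON contains the original query, every intent with its score (otherwise just the top scoring intent), and the entities LUIS found in the phrase.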
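The LUISManager fields described above might start out like the skeleton below. This is a hypothetical sketch – the tutorial fills in the real behaviour in later steps, and the onSendCommand signature here is only illustrative:

```csharp
using UnityEngine;

public class LUISManager : MonoBehaviour
{
    // Endpoint url (up to the first '?') and authoring key from the LUIS dashboard.
    public string url = "YOUR-LUIS-ENDPOINT-URL";          // placeholder
    public string subscriptionKey = "YOUR-AUTHORING-KEY";  // placeholder

    // The object we're moving. The Mover script doesn't exist yet,
    // so leave this commented out for now.
    //public Mover resultTarget;

    // Called when the command is ready to be sent off to LUIS.
    public void onSendCommand(string command)
    {
        // The web request to the LUIS endpoint will go here.
    }
}
```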