So after a long period of “coming soon”, Amazon have released the Gadgets Skill API Beta! And, sticking to my previous goal of making skill development easier for .NET Core developers, I’ve created a NuGet package to help with that.

Originally I was planning to go straight into skill code and show how this stuff fits together, but pretty soon after releasing the Gadgets package I realised that I only get how it fits together because I’ve had to go through the documentation line by line. If you’ve come across my blog via a random link, or you’re just trying to find out about Gadgets, it might be a strange concept to get your head around.

So this post is a quick starter for 10 about Gadgets and the concepts you’ll be dealing with in your code; the next post will be the skill building itself.

That’s cool, but I know how it all works (or tl;dr)

Have at it! 🙂

For those of you who want a little more

So Alexa Gadgets are a different way for users to interact with custom Alexa skills. Currently the devices that Amazon have released to show off this functionality are the Echo Buttons.

These buttons come in packs of two and can be paired to an Alexa device (simply done – and easy to re-pair to other devices if, like me, you use a lot of them). They can then be used to add another dimension to your skills – as buzzers in quizzes, or as puzzles in themselves that the user has to interact with – perhaps repeating a pattern, or pressing only when the button is a particular color.

Gadget support, whether mandatory or optional, is highlighted on your skill entry, so your users won’t be surprised when your skill mentions them. And this doesn’t take away from the fact that you still need a really top-notch voice experience – your customers will start and end with voice.

Now, anyone who’s communicated with devices in the past might be a little wary of trying to get these Bluetooth buttons to do their bidding. Don’t worry. The team at Amazon have got you covered!

Rather than making us write a lot of complicated code to make these interactions happen, the Gadgets communicate with the Alexa device in a black-box fashion. As skill developers, we’re given access to two new areas of development, each containing new directives and requests:

  • Gadget Controller
    This allows you to alter the behavior of a particular gadget, or of all connected gadgets.

  • Game Engine
    This is a declarative way of specifying events you want to know about, and specifying the conditions under which you should be told about them.

If you want the full description of everything these two areas can handle, that’s what Amazon’s excellent documentation is for, and I’d highly recommend having a look. But if browsing documentation isn’t your idea of fun (it really is good), then here’s my two cents.

Gadget Controller

Amazon documentation for Gadget Controller Interface
Let’s start with the easier of the two. With only one main gadget available at the moment, this is an area where I can see a lot of growth, but right now it contains a single directive: SetLight

I really am trying to keep code to a minimum in this article – but SetLight does exactly what it says on the tin. Each gadget that connects to your skill has a unique ID – you can either use that ID to set the color of the light within that particular button, or omit it to broadcast the request to all attached devices.

When I say “set the color” it sounds like a really straightforward process, and I don’t just mean “set red” or “set green” – although you can do that if you want. This is a clean process of specifying absolute hex colors – and not just static colors, but sequences of different colors that can flick from one to another or blend into each other over time.
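
To make that concrete, here’s a minimal sketch of the JSON a SetLight directive boils down to. It’s built with plain anonymous objects and Newtonsoft.Json rather than my Gadgets package, and the gadget ID is a placeholder:

```csharp
// Sketch of the JSON shape of a GadgetController.SetLight directive.
using System;
using Newtonsoft.Json;

class SetLightSketch
{
    static void Main()
    {
        var directive = new
        {
            type = "GadgetController.SetLight",
            version = 1,
            // Omit targetGadgets (or leave it empty) to broadcast to all paired buttons
            targetGadgets = new[] { "gadgetId-placeholder" },
            parameters = new
            {
                triggerEvent = "none",   // play immediately; "buttonDown" and "buttonUp" also exist
                triggerEventTimeMs = 0,
                animations = new[]
                {
                    new
                    {
                        repeat = 3,
                        targetLights = new[] { "1" },  // Echo Buttons have a single light
                        sequence = new object[]
                        {
                            new { durationMs = 500, blend = true, color = "FF0000" },  // red...
                            new { durationMs = 500, blend = true, color = "00FF00" }   // ...fading into green
                        }
                    }
                }
            }
        };

        Console.WriteLine(JsonConvert.SerializeObject(directive, Formatting.Indented));
    }
}
```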

For what is a simple statement, Amazon have really thought about the amount of control they want you to have and given it to you in a consumable way from the very start.

Game Engine

Amazon Documentation for Game Engine Interface
Okay – so you can control your device’s color. That’s great. My button is changing to different colors in front of my user. How do I know what they’re doing? Well – by using the Game Engine alongside the Gadget Controller.

I think what can get lost within the complexity (and it can be complex) of the Game Engine directives is their fundamental purpose. So let me make it clear:

Game Engine is there to let you know when your gadget does stuff

Another simple statement. But let me be clear – I don’t mean it’s there to tell you the user pressed a button and send you a bunch of raw data you’re forced to interpret. It can, and you can, but that’s only for advanced scenarios – most of the time all you want is the friendly name you gave that scenario: enough to know it happened.

You want to know your user pressed a button? Or that they didn’t press anything in the time you gave them?

You want to know that your user pressed the button 10 times and only when it was flashing green?

Your user has four buttons attached, and each is pressed in turn matching the sequence you just sent them?…you get the idea.

So how does it do that? By splitting the process into two parts – Recognisers and Events.

Recognisers
…recognise. You define a pattern – say, the button is pressed while the light is green, then released while the light is red – and recognisers return true or false depending on their setup:

  • If the pattern is matched
  • If something happens that doesn’t match the pattern
  • If the user is a certain way through the pattern (so if they pressed down on green – it’s 50%)

Game Engine takes the complicated scenarios you tell it about and boils each one down to a boolean – has this happened yet, true or false.
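
Here’s what those recognisers might look like for the scenario above – again a sketch using anonymous objects serialised to the JSON the Game Engine expects; the recogniser names are just ones I’ve made up:

```csharp
// Sketch of the three recogniser types for "press on green, release on red".
using System;
using Newtonsoft.Json;

class RecognizerSketch
{
    static void Main()
    {
        var recognizers = new
        {
            // "match" - true once the raw button events fit this pattern
            followed_pattern = new
            {
                type = "match",
                fuzzy = false,
                anchor = "start",
                pattern = new[]
                {
                    new { action = "down", colors = new[] { "00FF00" } },  // pressed while green
                    new { action = "up",   colors = new[] { "FF0000" } }   // released while red
                }
            },
            // "progress" - true once the user is 50% of the way through the match
            halfway_there = new { type = "progress", recognizer = "followed_pattern", completion = 50 },
            // "deviation" - true as soon as the user breaks the pattern
            broke_pattern = new { type = "deviation", recognizer = "followed_pattern" }
        };

        Console.WriteLine(JsonConvert.SerializeObject(recognizers, Formatting.Indented));
    }
}
```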

Events
Events are the notifications you receive based on the true or false state of the recognisers. So for the scenario above, I might say I want three events:

  • “success” that happens if they match the pattern
  • “halfway” which tells me if they get halfway through the pattern (not going to surface that one to the user – I want it for my stats, to see if the pattern is too hard)
  • “failure” which happens if they deviate from the pattern, rather than time out before the end.

So I add these three events, and for each one I say: if these recognisers return true, and these return false, then the event has occurred – tell me about it.
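
Continuing the sketch, the three events map recogniser results to friendly names; these sit alongside the recognisers above inside a single GameEngine.StartInputHandler directive:

```csharp
// Sketch of the events referencing the recognisers defined earlier.
using System;
using Newtonsoft.Json;

class EventSketch
{
    static void Main()
    {
        var events = new
        {
            success = new
            {
                meets = new[] { "followed_pattern" },  // these recognisers must be true...
                fails = new[] { "broke_pattern" },     // ...and these must be false
                reports = "matches",
                shouldEndInputHandler = true
            },
            halfway = new
            {
                meets = new[] { "halfway_there" },
                reports = "nothing",                   // just the name - no raw button data
                shouldEndInputHandler = false
            },
            failure = new
            {
                meets = new[] { "broke_pattern" },
                reports = "matches",
                shouldEndInputHandler = true
            }
        };

        Console.WriteLine(JsonConvert.SerializeObject(events, Formatting.Indented));
    }
}
```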

This is hugely powerful, as it hides all the complexity from the skill and lets the device handle it. Now, when an event is triggered (or multiple events, if the same interaction triggers more than one) you get a new request type, and as well as the event name you get a bunch of data about which buttons were used and so on. But if they followed your pattern and you were sent “success” – do you really need to know how? No – your recognisers handled all that, so you can just send back a “woo!” sound effect and some well-done speech for your user.
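
In skill code, reacting to that request can be as simple as branching on the name. The types below are hypothetical stand-ins to show the idea, not the classes of any particular SDK:

```csharp
// Sketch of handling a GameEngine.InputHandlerEvent request - hypothetical types.
using System;
using System.Linq;

public class InputHandlerEventRequest
{
    public GameEvent[] Events { get; set; }    // "request.events" in the JSON payload
}

public class GameEvent
{
    public string Name { get; set; }           // the friendly name you chose, e.g. "success"
    public object[] InputEvents { get; set; }  // raw button data - often ignorable
}

public static class EventHandling
{
    public static string RespondTo(InputHandlerEventRequest request)
    {
        // The recognisers already did the hard work, so the name alone is enough
        return request.Events.Any(e => e.Name == "success")
            ? "Well done, you matched the pattern!"
            : "Bad luck - let's try again.";
    }

    public static void Main()
    {
        // Simulate receiving a "success" event
        var request = new InputHandlerEventRequest { Events = new[] { new GameEvent { Name = "success" } } };
        Console.WriteLine(RespondTo(request));
    }
}
```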

I know that’s only skimming the surface of what is a really deep subject – but the point is that if you start out simple and build from there, you only go deep when you have to, and you don’t get lost in the complexities when there’s no need.

Okay – you’ve piqued my interest. Now what?

My next blog post will show how to go from these ideas to a simple quiz skill in .NET Core using my Gadgets package. But right now, use the links above – have a browse around and see just how much effort the team have put into giving you a lot of power in an easy-to-consume way.