Learn more about enhancing your Conversational Action with scenes.
- Custom Scenes
Scenes are building blocks in your Conversational Action that capture your conversational tasks as individual states. You can use scenes to have your Conversational Action handle certain conversational flows automatically, such as Account Linking or configuring Push Notifications.
Google provides a handful of preconfigured system scenes you can use for tasks such as Account Linking, but for more specialized tasks, you need to define your own custom scenes.
There are two ways to build and configure scenes: either in your Actions Console or in your Jovo Language Model:
The syntax is the same as in your Action's `.yaml` files, but in JSON format.
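As a sketch, a scene defined in the Jovo Language Model could look like the following. This assumes the model file nests scenes under `googleAssistant.custom.scenes`; the scene name and the webhook handler name are illustrative:

```json
{
  "googleAssistant": {
    "custom": {
      "scenes": {
        "MyCustomScene": {
          "conditionalEvents": [
            {
              "condition": "scene.slots.status == \"FINAL\"",
              "handler": {
                "webhookHandler": "Jovo"
              }
            }
          ]
        }
      }
    }
  }
}
```

The structure mirrors the Actions SDK scene schema, just expressed as JSON instead of YAML.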
Custom Scenes have three stages you can configure:
- Activation: A scene must be activated, either by a scene transition or by intent matching.
- Execution: Once activated, a scene executes its lifecycle, consisting of a variety of tasks and conversational flows.
- Transition: When a scene's lifecycle has completed, it follows its defined transition, e.g. ending the conversation or transitioning to another scene.
To activate a scene, you can invoke it with a global intent or transition into it from another scene. You can also choose to transition into your scene from your Jovo app:
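A minimal sketch of such a transition from a Jovo handler, assuming the Jovo Google Assistant integration's `setNextScene()` helper. The platform object is stubbed here so the handler logic can run standalone; the intent and scene names are illustrative:

```javascript
// Stub of the parts of the Jovo context this handler touches
// (assumption: the real integration exposes $googleAction.setNextScene()).
const jovo = {
  $googleAction: {
    nextScene: null,
    setNextScene(sceneName) { this.nextScene = sceneName; },
  },
  responseText: null,
  ask(text) { this.responseText = text; },
};

const handlers = {
  StartSceneIntent() {
    // Queue the scene transition; it takes effect after the response is sent.
    this.$googleAction.setNextScene('MyCustomScene');
    this.ask("Let's get started!");
  },
};

handlers.StartSceneIntent.call(jovo);
console.log(jovo.$googleAction.nextScene); // → 'MyCustomScene'
```

The transition is part of the response payload, which is why the scene only becomes active once the response has been delivered.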
After the response has been sent, your scene is activated, and the conversational flow is handled from within the scene.
Once activated, your scene runs its lifecycle until the transition criteria are met. This lifecycle runs in predefined stages that execute your tasks in order. Except for On Enter, all stages run through an execution loop: if no stage meets the transition criteria, the scene executes all stages again, starting from the Conditions stage.
This stage is triggered once, on scene activation, which makes it useful for preconfiguration.
In this stage, you can evaluate conditions and, depending on the result, either continue with the lifecycle or exit the scene, for example by calling your webhook.
For example, in this scene, if the user is verified, the scene will transition to another scene to continue with the conversational flow.
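A hedged sketch of such a condition in JSON form. The scene names are illustrative; `user.verificationStatus` is part of the Actions condition syntax:

```json
{
  "WelcomeScene": {
    "conditionalEvents": [
      {
        "condition": "user.verificationStatus == \"VERIFIED\"",
        "transitionToScene": "AccountScene"
      }
    ]
  }
}
```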
You can instruct a scene to collect required data for you. Once all slots have been collected, the attribute `scene.slots.status` is set to `FINAL`, which you can act upon in the Conditions stage. Once a slot has been filled, you can find its value in the session attributes under the property you specified in the slot's configuration.
In this example, we require a slot `age` of type `actions.type.Number`. Once the user has filled this slot, the Jovo handler is called and we can access the value from the session attributes.
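A sketch of such a slot-filling scene in the Jovo Language Model, following the Actions SDK slot schema (scene and handler names are illustrative):

```json
{
  "CollectAgeScene": {
    "slots": [
      {
        "name": "age",
        "type": {
          "name": "actions.type.Number"
        },
        "required": true
      }
    ],
    "conditionalEvents": [
      {
        "condition": "scene.slots.status == \"FINAL\"",
        "handler": {
          "webhookHandler": "Jovo"
        }
      }
    ]
  }
}
```

The conditional event here fires once all required slots have been filled and hands the conversation over to your webhook.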
If you've configured the previous stages to prompt the user, Google Assistant delivers the prompt to the user and collects their input in the next stage:
This is the last stage of the execution loop. Depending on your scene's configuration, Google Assistant listens for input from the user and either matches it to an intent or a slot, or triggers a system intent (e.g. `NO_INPUT`). In the case of a slot match, the scene returns to the Slot Filling stage. If the scene matches an intent or triggers a system intent, you can either call your webhook or transition to another scene:
In this example, if your scene is active and `MyNameIsIntent` is matched, the scene exits and transitions to `NameHandlerScene` to prompt the user for more input.
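In JSON form, such an intent event could look like this (the enclosing scene name is illustrative):

```json
{
  "MyCustomScene": {
    "intentEvents": [
      {
        "intent": "MyNameIsIntent",
        "transitionToScene": "NameHandlerScene"
      }
    ]
  }
}
```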
Once your transition criteria have been met, you can define a transition to continue with your conversation.
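As a sketch, a transition can point to another custom scene or to the special `actions.scene.END_CONVERSATION` scene to end the conversation (the intent name here is illustrative):

```json
{
  "intentEvents": [
    {
      "intent": "GoodbyeIntent",
      "transitionToScene": "actions.scene.END_CONVERSATION"
    }
  ]
}
```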