If you would like to upgrade to a newer long-term support version of Studio Pro, see Moving from Mendix Studio Pro 8 to 9.
Build JavaScript Actions: Part 1 (Basic)
Introduction
In Mendix 8, nanoflows were made even more powerful with pluggable nanoflow actions, called JavaScript actions. With JavaScript actions, the standard set of actions can be extended with new functionality. A JavaScript action is a reusable action based on JavaScript that runs in the client, just like nanoflows, and can use capabilities such as HTML5 browser functions, Cordova plugins, and React Native modules. JavaScript actions are similar to Java actions, but run on the client instead of the server. To share them inside your organization, JavaScript actions can be distributed and downloaded through the private Mendix Marketplace.
This how-to teaches you how to do the following:
- Create a JavaScript action
- Configure input and output parameters
- Implement web text to speech
- Make an asynchronous return
- Expose an action as a nanoflow action
- Use your actions in a demo
Create a JavaScript action: TextToSpeech
To create a JavaScript action that can synthesize text to speech, follow these steps:
1. Create a new JavaScript action in your Mendix project.

2. Give it a descriptive name.

    You can now start creating the API for the JavaScript action, which consists of parameters and a return type.
3. Your TextToSpeech action only requires a single parameter. Create it by clicking the Add button in the top-left corner. Give the parameter a name and add an extended description if desired.

    You can leave the Return type at the default Boolean value. This means the action will return `false` if no text is provided, and return `true` after it has successfully spoken the provided text.
4. Next, click the Code tab to begin editing the JavaScript action. Now you can start writing the actual action. Mendix Studio Pro has already created a default template for you, using the parameters and return type you provided.

    You can only add code between `// BEGIN USER CODE` and `// END USER CODE`. Any code outside this block will be lost. The source code is stored in your project folder under javascriptsource > (module name) > actions > (action name).js. This JavaScript action will be asynchronous, so you will be using promises to return values (for details about using promises, see Mozilla's Using promises guide).
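    For reference, the generated template should look roughly like the following (a sketch reconstructed from the snippets in the later steps; the exact boilerplate may vary between Studio Pro versions):

    ```javascript
    function TextToSpeech(text) {
        // BEGIN USER CODE
        // Default stub: the action fails until you implement it.
        return Promise.reject("JavaScript action was not implemented");
        // END USER CODE
    }
    ```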
5. Now add a check to verify that the required parameter has been set correctly. The action will return `false` if no text was provided:

    ```javascript
    function TextToSpeech(text) {
        // BEGIN USER CODE
        if (!text) {
            return Promise.resolve(false);
        }
        return Promise.reject("JavaScript action was not implemented");
        // END USER CODE
    }
    ```
6. To enable spoken text, you will need the web SpeechSynthesis API. However, not all browsers support this experimental API. Add a check to verify that the API is available, and reject with an error if it is not. For future reference, add a comment with references to documentation about the API and its compatibility:

    ```javascript
    function TextToSpeech(text) {
        // BEGIN USER CODE
        // Documentation: https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesis
        // Compatibility: https://developer.mozilla.org/en-US/docs/Web/API/SpeechSynthesis#Browser_compatibility
        if (!text) {
            return Promise.resolve(false);
        }
        if ("speechSynthesis" in window === false) {
            return Promise.reject("Browser does not support text to speech");
        }
        return Promise.reject("JavaScript action was not implemented");
        // END USER CODE
    }
    ```
7. Next up is the fun part: making the application speak. Create a new `SpeechSynthesisUtterance` object and call the `speak` function. Write this new code by replacing the last `return` statement from the previous code:

    ```javascript
    function TextToSpeech(text) {
        // BEGIN USER CODE
        if (!text) {
            return Promise.resolve(false);
        }
        if ("speechSynthesis" in window === false) {
            return Promise.reject("Browser does not support text to speech");
        }
        var utterance = new SpeechSynthesisUtterance(text);
        window.speechSynthesis.speak(utterance);
        return Promise.resolve(true);
        // END USER CODE
    }
    ```
8. The function will return before the browser has finished speaking. To prevent this, wrap the code in a promise and attach `onend` and `onerror` handlers to the utterance. The `onend` handler runs when the application finishes speaking the text, so the promise resolves with a value of `true`. If an error occurs, the promise is rejected with a descriptive error message. After attaching these handlers, the action can start speaking:

    ```javascript
    function TextToSpeech(text) {
        // BEGIN USER CODE
        if (!text) {
            return Promise.resolve(false);
        }
        if ("speechSynthesis" in window === false) {
            return Promise.reject("Browser does not support text to speech");
        }
        // var utterance = new SpeechSynthesisUtterance(text);
        // window.speechSynthesis.speak(utterance);
        // return Promise.resolve(true);
        return new Promise(function(resolve, reject) {
            var utterance = new SpeechSynthesisUtterance(text);
            utterance.onend = function() {
                resolve(true);
            };
            utterance.onerror = function(event) {
                reject("An error occurred during playback: " + event.error);
            };
            window.speechSynthesis.speak(utterance);
        });
        // END USER CODE
    }
    ```
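    If you want to verify the browser's speech support before wiring the action into your app, you can paste a minimal sketch like the following into the browser's developer console. It uses the same API calls as the action above and assumes nothing Mendix-specific:

    ```javascript
    // Stand-alone Web Speech API check; run in the browser console.
    if ("speechSynthesis" in window) {
        var utterance = new SpeechSynthesisUtterance("Testing text to speech");
        utterance.onend = function() { console.log("Finished speaking"); };
        utterance.onerror = function(event) { console.error("Playback error: " + event.error); };
        window.speechSynthesis.speak(utterance);
    } else {
        console.warn("This browser does not support text to speech");
    }
    ```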
9. You have just implemented your first JavaScript action! You can start using the action in your nanoflows by adding a JavaScript action call and selecting the newly created TextToSpeech action.

    Optionally, you can expose the JavaScript action as a nanoflow action. When you do, you can choose a Caption, Category, and Icon. Note that to choose an icon, your image will need to be included in an existing image collection.

    It will then appear in the Toolbox window when editing a nanoflow.
10. Now for a JavaScript action test run! First, make a nanoflow that features your new JavaScript action. Right-click your folder in the Project Explorer and click Add nanoflow. Then, add an action to your nanoflow, select Call a nanoflow action, and select your JavaScript action. You will see a window that lets you edit the JavaScript action. Click the Edit button of the Text input parameter and type 'Hello world'. Then, set the Use return value radio button to No.
11. Now you are going to put your new nanoflow to work. On a page of your app, make an action button by clicking Add widget in the top center toolbar. Then, under Button Widgets, select Call nanoflow button. Select your new nanoflow when prompted.
12. Click a place on your page to drop your new button where you want it. With the button now on your page, you can test your work. Run your model, click your new button, and if your sound is on you should be greeted by the voice you programmed!