Error messages in non-legacy speech recognition

My app has 2 screens. On the second screen there are 2 speech recognizers, one used in legacy mode and the other in non-legacy mode.

Everything works fine and the app behaves exactly as expected, but sometimes I see the errors generated by the non-legacy speech recognizer.

I've provided the error handler for the screen, as you can see in the attached image.

I don't know where I'm going wrong ...

TIA

Sandro.

Which error / errors are being generated?

Does the error affect the workflow of the app?


I see the errors shown in the image of the handler. It is as if the handler doesn't act as expected.

What does this mean?
What do you expect and what happens instead?
You could try to use numeric error numbers to see if this makes a difference
Taifun

Dear @Alessandro_Binetti,
though I'm Italian, I'll answer you in English for the community.
First of all: why do you use two speech recognizers on the same screen? I guess they could interfere with each other. I normally use only one speech recognizer (non-legacy mode) and it works "perfectly".
My Android is version 9 and I use a Lenovo M8 pad (but it also works on Mediacom, Samsung, and other pads). Honestly, I don't know whether something has changed with newer Android versions, but with version 9 everything works fine.
I use my app in my car (I made a digital cockpit interfacing the CAN bus to a pad by means of a BT connection) and I give commands to the app hands-free. In other words, I give commands to the pad by voice, without having to touch the screen, so my hands never leave the steering wheel.
To achieve this I mute the beep-boop tones (the ones the speech recognizer emits when activated) by means of one of @Taifun's extensions (TaifunSettings). Then I intercept the errors, like you do, but silently, without showing them, and when error 3809 is raised I start a clock that restarts the speech recognizer autonomously after 500 ms. This delay is required to let the speech recognizer "internally reset itself". There is therefore a "blind moment" (lasting approx. 500 ms) in which the recognizer can't hear you, but the overall feeling is that the speech recognizer works in "continuous" mode, without the user having to push any button.
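For readers who want to see the same idea outside App Inventor, here is a rough Kotlin sketch of that restart-after-delay pattern using Android's native SpeechRecognizer. Only the ~500 ms delay comes from the description above; the class, names and error handling around it are illustrative assumptions, not the actual blocks:

```kotlin
import android.content.Context
import android.content.Intent
import android.os.Bundle
import android.os.Handler
import android.os.Looper
import android.speech.RecognitionListener
import android.speech.RecognizerIntent
import android.speech.SpeechRecognizer

// Rough sketch of a "continuous" recognizer: every error is swallowed
// silently and the recognizer is restarted after ~500 ms, so the user
// never has to press a button again.
class ContinuousRecognizer(context: Context) {

    private val recognizer = SpeechRecognizer.createSpeechRecognizer(context)
    private val handler = Handler(Looper.getMainLooper())
    private val intent = Intent(RecognizerIntent.ACTION_RECOGNIZE_SPEECH).apply {
        putExtra(RecognizerIntent.EXTRA_LANGUAGE_MODEL,
                 RecognizerIntent.LANGUAGE_MODEL_FREE_FORM)
    }

    init {
        recognizer.setRecognitionListener(object : RecognitionListener {
            override fun onResults(results: Bundle) {
                // Hand the best phrase to the app, then listen again.
                results.getStringArrayList(SpeechRecognizer.RESULTS_RECOGNITION)
                    ?.firstOrNull()?.let { handlePhrase(it) }
                restartAfterDelay()
            }

            override fun onError(error: Int) {
                // No toast, no dialog: give the engine time to reset
                // itself internally, then start listening again.
                restartAfterDelay()
            }

            // The remaining callbacks are not needed for this sketch.
            override fun onReadyForSpeech(params: Bundle?) {}
            override fun onBeginningOfSpeech() {}
            override fun onRmsChanged(rmsdB: Float) {}
            override fun onBufferReceived(buffer: ByteArray?) {}
            override fun onEndOfSpeech() {}
            override fun onPartialResults(partialResults: Bundle?) {}
            override fun onEvent(eventType: Int, params: Bundle?) {}
        })
    }

    fun start() = recognizer.startListening(intent)

    private fun restartAfterDelay() {
        // The "blind moment": the recognizer is deaf for ~500 ms.
        handler.postDelayed({ recognizer.startListening(intent) }, 500L)
    }

    private fun handlePhrase(text: String) {
        // Placeholder: dispatch the recognized command here.
    }
}
```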
Now, if I linked my whole app to show you how it works in detail, it could become a nightmare for you, since the app does so many things that finding the speech recognizer details would be a real headache. In the next few days (unless you find the root cause of the malfunction on your own) I will write a simple app that does the job.
For now, then, Happy New Year !!!
Ciao, Ugo.
:hugs:

Ugo! Thank you so much for your reply!

I'm working on an app for voice-guided logistics. In this scenario the app prompts the user to do something (using text-to-speech).

For this, "legacy" mode is almost perfect. The app has to recognize speech only when it prompts something like "Input quantity" or "Input lot number" ...
So I don't need to mute the prompts of the legacy voice input ...

Now I'm trying to introduce an "Alexa-style" voice recognition, in order to respond to arbitrary user questions. The user can ask the app something by using a keyword, like "Sistema orario", and the app says "12.15 pm".

So I've used 2 recognizers: the input recognizer in legacy mode and the Alexa-style one in non-legacy mode. Every time I use the legacy one, I stop the non-legacy recognizer, and when the text is recognized I restart it.
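For comparison, here is a minimal sketch of that hand-off in plain Kotlin. The two interfaces are hypothetical stand-ins for the App Inventor components, not real APIs; only the order of the stop/ask/restart calls is the point:

```kotlin
// Hand-off between the two recognizers described above: stop the
// continuous one before a prompted input, restart it once the
// prompted text has been recognized.
interface ContinuousListener {      // "Alexa-style", non-legacy
    fun start()
    fun stop()
}

interface PromptedListener {        // dialog-style, legacy
    fun ask(prompt: String, onText: (String) -> Unit)
}

class RecognizerCoordinator(
    private val continuous: ContinuousListener,
    private val prompted: PromptedListener
) {
    fun askFor(prompt: String, onText: (String) -> Unit) {
        continuous.stop()                // silence the keyword listener first
        prompted.ask(prompt) { text ->   // e.g. "Input quantity"
            onText(text)                 // hand the recognized text to the app
            continuous.start()           // then resume keyword listening
        }
    }
}
```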

The app works fine, but sometimes I see the error messages generated by the non-legacy recognizer.

Thank you very much!

Happy end of the year and a happy new year to you too!

Dear @Alessandro_Binetti,
if I understand your needs correctly: a continuous-mode recognizer should work like "Alexa", able to catch sudden enquiries, and when the user has to give specific information (at the app's prompt) the other speech recognizer does the work (the first one having been switched off beforehand). It doesn't look so easy to me, but anyway, please always remember that switching a speech recognizer off or on does not happen immediately: I leave more than 500 ms (1000 ms in the attached .aia) between an off and an on command to be sure that it works.
Anyway, what I've attached works, and you could use it as an example.
Once it has started, saying "led on" makes a LED appear and saying "led off" makes it disappear.
Saying "esci" or "finito" ("exit" / "done") closes the app.
Saying "luce alta", "luce media" or "luce bassa" ("high", "medium" or "low light") changes the brightness.
Before using it in big-bang mode :rofl:, you'd better have a look at the blocks for more details.
Should you need any explanation, don't hesitate to write to me.

Happy New Year !!!
Ciao, Ugo.
Cont_Speech.aia (88.2 KB)

PS: please be aware that to enable the brightness setting you have to go to the app's permissions (in the Android settings) and enable this feature manually.
Also allow the app to use the microphone when it starts for the first time. :nerd_face:

PPS: thanks to @Taifun and @WatermelonIce for the extensions that I use in this example.


I expect the screen's error handler to catch all the exceptions ... but I see the error toasts ... it seems the error handler doesn't catch any error raised by the voice recognizer.

Thank you very much Taifun :blush:

Most SpeechRecognizer errors are caught by this Screen block, Alessandro:

(image: SRerrors)

:slight_smile:

That's exactly what I've done. See the picture of the ErrorOccurred block that I posted in my topic.

The only difference is that it's not the handler for the main screen, but the handler of another app screen ...

Thank you Steve!

If you want the errors from the second SR, put the additional recognizer on Screen1 and use a virtual 2nd screen. :astonished:

What you want to do in your actual app seems to be something a single SR could handle.

Oops ... so I cannot use a speech recognizer in non-legacy mode on another screen?

I do not know. What you cannot do is get error messages from a second SpeechRecognizer on a second screen.

What you may be able to do is use a single SpeechRecognizer and switch between Legacy and non-Legacy mode using Blocks.

Have you done the Parrot tutorial? You may be able to get the results you need by using the timer and auto SR modes with the switches.

Hi @Alessandro_Binetti,
have you had a look at my aia?
I believe it could really help you.

Ciao Ugo!

I've tried to put your code into another screen, not in the main screen, and the errors appear exactly as if the error handler didn't exist. Same behaviour as my app.

I think it's an App Inventor bug ...

Fortunately the app works fine, even if the toast appears every now and then :grin:

Thank you very much for your help.

Ciao!