How can you fill a square drawn with a canvas without using DrawShape?

Hi everyone, I’m new to this platform. I apologize in advance for my basic English.

I’m developing an application using MIT App Inventor, which connects via BLE to an STM32WB09. This microcontroller sends RGB vectors acquired from a thermal camera.

I’m looking for help with the visualization of this RGB vector as an image. The issue is finding a fast and efficient method to display these images, which are generated periodically.
My goal is to reach around 12 FPS.

So far, I’ve managed to display the image inefficiently using a Canvas and the DrawShape function, but the performance is not sufficient.

Can you recommend any extension or technique that could help me achieve better performance?

I also tried the CanvasLayer and Flood Fill Extension, but I don’t fully understand how to use them effectively.

Thanks in advance for your support!

Show us an example of that RGB vector?

Sure, the RGB vector I receive via BLE has 4608 elements, where each group of three values represents an RGB color in sequential order:
[0, 50, 60, 40, 90, 255, ...]
This is a portion of the vector I receive. Therefore, for the first color/pixel I have: R = 0, G = 50, B = 60.
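
In Python terms, the grouping looks like this (a sketch of the framing only, not the actual App Inventor blocks):

```python
vector = [0, 50, 60, 40, 90, 255]  # the first six received values
# Every consecutive group of three values is one (R, G, B) pixel
pixels = [tuple(vector[i:i + 3]) for i in range(0, len(vector), 3)]
print(pixels)  # [(0, 50, 60), (40, 90, 255)]
```
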
Best regards.

So that's a list of 1,536 (?) colors.

How did you want to represent those visually?

Thank you for your reply.
My goal is to create a thermal camera, where the image can be viewed on a mobile phone.
I want to generate multiple images in sequence in order to achieve visual continuity.
The current resolution of 1,536 colors is just an initial phase; in the future, I plan to increase that number.

Best regards.

While we wait for the real BLE and File experts to weigh in, here are some considerations based on a search of this board for "BLE File":

- Is the BLE data transfer rate fast enough? Has the MTU been set larger for efficient block transfer?

- Can the data be compressed into one of the more space-efficient graphic file formats before transmission? (See the sketch after this list.)

- Can the data stream be sent into JavaScript via the WebViewer component for conversion into a file, left for an Image component to display or shown in the WebViewer itself?

- Can the Base64 conversion facilities of the Canvas, and some extensions by other Power Users, be of help here?
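
As a sketch of the compression idea, here is what the sending side could do, written in Python for readability (the firmware would do the equivalent in C; the make_png helper is illustrative, but the chunk layout is standard PNG):

```python
import struct
import zlib

def make_png(rgb, width, height):
    """Pack a flat [R, G, B, R, G, B, ...] list into a minimal truecolor PNG."""
    def chunk(tag, data):
        # Each PNG chunk: big-endian length, tag, data, CRC32 over tag + data
        return (struct.pack(">I", len(data)) + tag + data
                + struct.pack(">I", zlib.crc32(tag + data)))
    # IHDR: dimensions, 8 bits per sample, color type 2 = truecolor RGB
    ihdr = struct.pack(">IIBBBBB", width, height, 8, 2, 0, 0, 0)
    # Each scanline is prefixed with filter byte 0 (no filtering)
    raw = b"".join(b"\x00" + bytes(rgb[y * width * 3:(y + 1) * width * 3])
                   for y in range(height))
    return (b"\x89PNG\r\n\x1a\n"
            + chunk(b"IHDR", ihdr)
            + chunk(b"IDAT", zlib.compress(raw))
            + chunk(b"IEND", b""))
```

The smooth gradients of a thermal image usually compress well, so the transmitted payload can end up much smaller than the raw vector.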

The transmission of the vector works, and I’m able to receive it in the previously defined format.
My question is more related to the image generation side; I'm looking for the most efficient way to do it.
I’m open to any suggestions, including the use of extensions created by other Power Users.
Could converting the image to Base64 also help optimize the image generation process?

Best regards.

Here's one approach to try.

Use the tooltip for details.

Search the board for Base64; there has been activity in that area.

A thread with sample projects:

Okay, I’ll give it a try and let you know.
But to use this function, do I also need to change the data reception method?
I mean, should I stop receiving RGB data and use a different format instead?
I apologize if the question sounds basic; I'm just starting to explore these topics.

Best regards

I mentioned the Base64 format option because the Canvas component has a block for it, so it's closer to the metal in the software stack, hence likely to run faster.

But that does come at a conversion cost, and Base64 is less dense than byte streams, so there would be a data transmission time cost.
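
The density point is easy to see in a quick sketch: Base64 emits four output characters for every three input bytes, a 33% overhead.

```python
import base64

raw = bytes(range(48))       # 48 raw bytes
b64 = base64.b64encode(raw)  # 4 output bytes per 3 input bytes
print(len(raw), len(b64))    # 48 64
```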

If you can find a graphic file format (.png, .gif, .jpg, ...) that can shrink your data stream, and a way to drop such an incoming byte stream to a file for display by one of the various AI2 components (Image, Canvas, WebViewer, Arrangement), that would bring you closer to the metal in the AI2 stack.

I am assuming you can find image encoding libraries upstream.

I forgot I had this lying around:

Maybe you can find something useful there.

Here's yet another way to turn a data stream into an image: SVG graphics.

Somewhere in the SVG specifications there are codes for specifying a color and how to lay out a grid of colors.

AI2 has advanced list blocks that could be combined to transform your RGB byte list into such an SVG grid, and this sample shows how to feed SVG to a WebViewer for display, without its feet hitting the ground.
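
As a sketch of what those list blocks would compute (written here in Python for compactness; the grid dimensions are whatever your sensor provides):

```python
def rgb_list_to_svg(rgb, cols, rows, cell=10):
    """Render a flat [R, G, B, ...] list as an SVG grid of filled squares."""
    rects = []
    for i in range(rows * cols):
        r, g, b = rgb[3 * i], rgb[3 * i + 1], rgb[3 * i + 2]
        x, y = (i % cols) * cell, (i // cols) * cell
        rects.append(f'<rect x="{x}" y="{y}" width="{cell}" height="{cell}" '
                     f'fill="rgb({r},{g},{b})"/>')
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{cols * cell}" height="{rows * cell}">'
            + "".join(rects) + "</svg>")
```

The resulting string can be handed straight to the WebViewer, so no file ever touches storage.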

Hello, I took a look at what you linked, but in my opinion it doesn't seem useful for my issue. Let me explain better. I'm using an MLX90640 thermal camera; the STM code converts its infrared temperature readings into specific RGB values based on a given color palette. Each color corresponds to a specific area that was read. These data are then sent via BLE to a device (a phone running MIT App Inventor). This structure is already implemented (that is, I can send the values to the phone and manipulate the RGB list).

What I am trying to implement is an optimization (in any form) of the rendering of the thermal image. My first implementation used the DrawShape function to draw 10x10 px rectangles, which were then colored using the RGB values I sent (via the make color block with the 3 RGB components). What I noticed is that, with my current inefficient implementation, several seconds pass from the moment the vector is received to the actual generation of the image. What I'm aiming for is the near-instant generation of a thermal image.

Therefore, what you sent me is, in my opinion, not really related to the issue I presented, or at least I couldn't find an efficient solution in it.
Do you have any other suggestions?

Thank you very much for your time, and I look forward to any advice.

Best regards.

I think you need to build an image in memory from the received RGB vectors (= strings of pixels). On completion of the scan, add an image header and footer. You then have a whole image to display.

That's the theory, and I think you can see it would work, but the practice is tricky. I would probably send the raw data to a microcontroller (e.g. an ESP32), then send the completed image to a smartphone.
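
To make "header plus pixel data" concrete, here is a Python sketch using the simplest common container, an uncompressed 24-bit BMP (chosen only because its header is trivial; BMP has no footer, whereas PNG ends with an IEND chunk):

```python
import struct

def make_bmp(rgb, width, height):
    """Wrap a flat top-to-bottom RGB list in a minimal 24-bit BMP."""
    pad = (4 - (width * 3) % 4) % 4      # BMP rows are padded to 4 bytes
    rows = []
    for y in range(height - 1, -1, -1):  # BMP stores rows bottom-up
        row = bytearray()
        for x in range(width):
            i = 3 * (y * width + x)
            row += bytes((rgb[i + 2], rgb[i + 1], rgb[i]))  # BGR order
        rows.append(bytes(row) + b"\x00" * pad)
    pixels = b"".join(rows)
    header = (b"BM" + struct.pack("<IHHI", 54 + len(pixels), 0, 0, 54)
              + struct.pack("<IiiHHIIiiII", 40, width, height, 1, 24,
                            0, len(pixels), 2835, 2835, 0, 0))
    return header + pixels
```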

OK, so if I understood correctly, you are suggesting using an intermediate microcontroller (or the same STM32 that hosts the thermal camera?) to build the image from the raw data my STM microcontroller generates, and then send the finished image?

Kind regards

If the camera can build an image with header and footer then of course use that; the main aim is to pre-build the image quickly in memory and not send tiny portions of it via BLE to be built by drawing on a Canvas in-app, which will always be time-consuming.

It won't be plain sailing; you still have to get the image into the app or onto the phone. If via BLE, that will mean splitting it into the largest chunks of data possible and sending those, but that should still be much, much quicker.

..... or perhaps build a chunk at a time, send it to the app, and assemble with header and footer at the end of the scan. That will be tricky to achieve, but if the final size of the image is always known (pixel width by pixel height), it should be doable.
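
A sketch of the chunking idea (the 244-byte payload assumes a negotiated ATT MTU of 247; use whatever your BLE stack actually reports):

```python
def ble_chunks(image_bytes, payload=244):
    """Split a finished image into notification-sized pieces for BLE transfer."""
    for i in range(0, len(image_bytes), payload):
        yield image_bytes[i:i + payload]
```

On the receiving side, the app appends each chunk to a buffer and rebuilds the image once the known total size has arrived.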

Hello, I understand what you mean; however, my MLX90640 thermal sensor does not generate PNG or other image formats; it only produces a vector of IR values. Each value refers to one of the individual cells into which the sensor divides the environment.

Kind regards.

1,536 = ? rows by ? columns?

Don't make me factor.

1536 = 2 * 768 = 4 * 384 = 8 * 192 = 16 * 96 = 32 * 48

So I'm guessing 32 rows and 48 columns?

Yes, sorry, I made a mistake when writing the vector size. Basically, the thermal camera divides the surrounding environment into a matrix of 32 columns and 24 rows (768 pixels, so 2,304 RGB values). The problem is not related to the image size, but to the need to speed things up; in other words, the goal is to increase the number of frames per second (FPS).

Kind regards.

Here is an SVG-based display routine for your specs.

I added a length check for surprises in the data.

RGB_to_SVG.aia (5.5 KB)

I have no test data, so I couldn't test.
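
For anyone reading along without opening the .aia, the routine's core steps map onto this sketch (it reuses the illustrative rgb_list_to_svg helper from earlier and includes the same kind of length check):

```python
from urllib.parse import quote

def frame_to_data_url(rgb, cols=32, rows=24):
    """Validate the frame length, then wrap the SVG in a data URL for a WebViewer."""
    expected = cols * rows * 3
    if len(rgb) != expected:  # guard against surprises in the data
        raise ValueError(f"expected {expected} values, got {len(rgb)}")
    return "data:image/svg+xml," + quote(rgb_list_to_svg(rgb, cols, rows))
```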

Hi, THANK YOU SO MUCH!!!!!!!!

Your implementation works perfectly!!!! You were incredibly helpful, thank you again from the bottom of my heart.
Should I now check the "solution" icon?