Hello everyone, could someone help me with the image classifier? I can't seem to understand the problem. I'm trying to create my own image classifier with multiple pictures and references, but when I run my Personal Image Classifier it returns the wrong data. Also, when I try to test it on the Personal Image Classifier website, all of my uploaded images suddenly become invisible. Please help me as soon as possible.
(Canned Reply: ABG- Export & Upload .aia)
Export your .aia file and upload it here.

URL?
Screen shot?
I added a global variable to catch the classification result, to allow showing its value.
See the {} brackets?
That's a dictionary, not a list, so you can't use list blocks on it.
There's also the matter of your expectations from your model.
It has only two non-null categories, both electronic hardware, and neither mentioning hands.
I went to double check the tool tip for the classifier event block:
This is baffling. It says it returns a list of lists, so either my eyes are going, or my mismatched Companion version on my phone is to blame.
I need to sync versions and retest.
Or you could try this on your phone, with the Companion.
Do It Result: {"Chargers":0.28589,"Compact Disc\/Digital Versatile Disc":0.15881,"Empty Tape Roll":0.09174}
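For anyone following along, the Do It result above is a dictionary mapping category names to confidence scores. A minimal Python sketch (the data is copied from the result above; this is just an illustration, not how App Inventor blocks work internally) of picking the top category from such a result:

```python
# Classification result as seen in the Do It above:
# a dictionary mapping category name -> confidence score.
result = {
    "Chargers": 0.28589,
    "Compact Disc/Digital Versatile Disc": 0.15881,
    "Empty Tape Roll": 0.09174,
}

# Pick the category with the highest confidence score.
top_category = max(result, key=result.get)
top_score = result[top_category]
print(top_category, top_score)  # Chargers 0.28589
```

In blocks you would do the equivalent with the dictionary blocks, walking the key/value pairs and keeping the largest score.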
---
I tried again, using the appropriate Companion.
Time to ask AI2 what type got returned into the global variable.
So AI2 says it's a dictionary, not a list.
On the other hand, I see the first row of the result in the results Label on my phone.
This data type confusion sometimes happens with extensions, crossing the boundary between Java data types and AI2 data types.
My only takeaway from this is that you did not model your hand, or my hand in front of my laptop fits none of your model categories.
Your code "works", as far as I can tell, until you can show me otherwise.
I'm mostly having trouble with the PersonalImageClassifier itself, as I'm experiencing some issues with it, and I don't understand whether it's the number of categories I have for my data samples or the imbalance between the uploaded data samples that causes it to malfunction or not work.
Looking through the Classifier Blocks Palette, I see a missed debugging opportunity:
You lack an error event block for the Classifier.
You are also not taking advantage of other information sources:
Look Ma, No Hands!
Oh, maybe you're right. I think there are too many categories, so some categories wouldn't get included in the MIT app.
I also notice there is an event block to tell you when the Classifier is ready.
I imagine that's needed because it loads its model from a file, and files take time to process.
You could use that event to enable the photo taking button, to prevent premature classification attempts.
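That gating pattern is worth spelling out. Here is a hedged Python sketch of the idea (the class and method names are invented for illustration; App Inventor expresses this with the Classifier's ready event block and the button's Enabled property, not Python):

```python
class ClassifierScreen:
    """Sketch: keep the photo button disabled until the model loads."""

    def __init__(self):
        # The model file takes time to load, so start disabled.
        self.photo_button_enabled = False

    def on_classifier_ready(self):
        # Fires once the classifier has finished loading its model;
        # only now is it safe to let the user take a photo.
        self.photo_button_enabled = True

    def take_photo(self):
        if not self.photo_button_enabled:
            raise RuntimeError("Classifier not ready yet")
        return "photo taken"
```

The same shape in blocks: in the Classifier's ready event, set the photo button's Enabled property to true.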
P.S. I hope you are learning the value of Companion Do It debugging and of reading tool tips.
Thank you so much for your guidance. I truly appreciate your help. I’ll try working on the fix now and apply your suggestions. I’ll give you an update once I’ve resolved it.
I throw this thread open to whoever is awake.
My bedtime approaches.
(Cranky Power User)
Hello Sir,
I would like to inform you that I have already resolved the issue. I was able to fix it by using the TMIC extension, which works similarly to the Personal Image Classifier. The extension handled the functionality more reliably and allowed everything to work as intended.
Thank you!
The original event did return the data as a list of lists because the extension predated the inclusion of dictionaries in App Inventor. We then changed it to provide a dictionary, but dictionaries also coerce to list-of-lists when used with the list blocks for backward compatibility. We should probably update the documentation of the block to be clear that it is now a dictionary (but it shouldn't break existing code that expected the list version).
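To illustrate the backward-compatibility behavior described above, here is a small Python sketch (this is a conceptual model only, not App Inventor's actual implementation; the helper name is mine):

```python
def as_list_of_pairs(result):
    """Mimic how a dictionary result coerces to a list of
    (key, value) pairs when used with list blocks."""
    return [[category, score] for category, score in result.items()]

result = {"Chargers": 0.28589, "Empty Tape Roll": 0.09174}

# Dictionary blocks see a dictionary...
assert result["Chargers"] == 0.28589

# ...while list blocks see the coerced list-of-lists view,
# which is why old code written against the list version keeps working.
assert as_list_of_pairs(result)[0] == ["Chargers", 0.28589]
```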
This topic was automatically closed 7 days after the last reply. New replies are no longer allowed.






