Introducing GroqTextMini (Free)
Features
- Use the llama3-8b-8192 model in your app
- Limit of 500 tokens per response
- Compare the free and paid extensions below
| GroqTextMini | GroqText |
|---|---|
| Free | Paid ($5.99) |
| Uses the llama3-8b model | Uses 15+ AI models (Llama, Gemma, Mixtral, DeepSeek, Qwen, distilled models) |
| 8b model only | 1b, 2b, 3b, 8b, 32b, 70b, 80b models |
| 500 tokens | Unlimited tokens, depending on model capacity |
Download aix
com.sarthakdev.groqtextmini.aix (8.3 KB)
If you want the full GroqText extension, you can get it here for only $5.99.
Qwen is now supported
The extension now supports Alibaba Qwen QWQ
Here's the ID: qwen-qwq-32b
Mistral Saba is now supported
The extension now supports Mistral Saba
Here's the ID: mistral-saba-24b
Distilled models are now supported
The extension now supports distilled Llama, Qwen and DeepSeek models.
Here are the IDs: deepseek-r1-distill-qwen-32b, deepseek-r1-distill-llama-70b-specdec, deepseek-r1-distill-llama-70b
Updated Free Daily tokens and Rate Limits
Groq now supports 17 leading AI models from different providers, all under a single API key, with free daily token usage.
Limits: RPM/RPD = requests per minute/day, TPM/TPD = tokens per minute/day, ASH/ASD = audio seconds per hour/day.

Model ID | RPM | RPD | TPM | TPD | ASH | ASD |
---|---|---|---|---|---|---|
deepseek-r1-distill-llama-70b | 30 | 1,000 | 6,000 | - | - | - |
deepseek-r1-distill-qwen-32b | 30 | 1,000 | 6,000 | - | - | - |
gemma2-9b-it | 30 | 14,400 | 15,000 | 500,000 | - | - |
llama-3.1-8b-instant | 30 | 14,400 | 20,000 | 500,000 | - | - |
llama-3.1-70b-versatile | 30 | 14,400 | 6,000 | 200,000 | - | - |
llama-3.2-1b-preview | 30 | 7,000 | 7,000 | 500,000 | - | - |
llama-3.2-3b-preview | 30 | 7,000 | 7,000 | 500,000 | - | - |
llama-3.3-70b-specdec | 30 | 1,000 | 6,000 | 100,000 | - | - |
llama-3.3-70b-versatile | 30 | 1,000 | 6,000 | 100,000 | - | - |
llama-guard-3-8b | 30 | 14,400 | 15,000 | 500,000 | - | - |
llama3-8b-8192 | 30 | 14,400 | 30,000 | 500,000 | - | - |
llama3-70b-8192 | 30 | 14,400 | 6,000 | 500,000 | - | - |
mistral-saba-24b | 30 | 1,000 | 6,000 | - | - | - |
mixtral-8x7b-32768 | 30 | 14,400 | 5,000 | 500,000 | - | - |
qwen-2.5-32b | 30 | 1,000 | 6,000 | - | - | - |
qwen-2.5-coder-32b | 30 | 1,000 | 6,000 | - | - | - |
qwen-qwq-32b | 30 | 1,000 | 6,000 | - | - | - |
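For readers curious what the extension does under the hood: Groq exposes an OpenAI-compatible chat-completions endpoint, and each model ID above goes into the `model` field of the request. A minimal Python sketch (function names are my own; the endpoint URL and payload shape follow Groq's public API):

```python
import json
import urllib.request

# Groq's OpenAI-compatible chat-completions endpoint.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_payload(model_id, user_text, max_tokens=500):
    """Build the JSON body the endpoint expects.

    max_tokens=500 mirrors the cap of the free GroqTextMini extension.
    """
    return {
        "model": model_id,
        "messages": [{"role": "user", "content": user_text}],
        "max_tokens": max_tokens,
    }

def ask_groq(api_key, model_id, user_text):
    """POST the request and return the model's reply (requires a network call
    and a valid Groq API key)."""
    req = urllib.request.Request(
        GROQ_URL,
        data=json.dumps(build_payload(model_id, user_text)).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Swapping models is just a matter of changing `model_id` to any ID from the table, which is why one API key covers all of them.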
Perfect..
What a work buddy!!
Just imagine!!
Nice work.
Congrats and all the best to you. Take my heart
Very good extension.
You should use it for chatbots, and if you have the budget you can buy the full version, because it's affordable.
Example app I made:
After paying, when one model's usage limit is reached, can I switch to another model and keep using it?
Yes, it's possible.
Is the $6 a one-off payment?
Yes, and updated versions will be free.
Can your paid extension read image?
No
The difference between free and paid is described here
Taifun
You can use GroqVision for that
Allam (Arabic LLM) is now supported
The model responds to any question asked in Arabic.
Model Name: Allam
Developer: SDAIA
Here's the ID: allam-2-7b
Version 1.1
This is a major update with new blocks and awesome feature additions, such as chat history, system tone and JSON-structured output.
New Blocks 
Chat History: The LLM now has access to all of your previously sent messages.
Disabled by default; enable it by setting the block to true.
Reset Conversation history
Set Tone (system instruction of the AI model)
E.g., "You are a developer."
Structured JSON output
RequestCustomStructuredOutput
UserRequest = the question to ask the AI
customScheme = the JSON schema the model should follow
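As an illustration of the structured-output idea (the actual blocks live in App Inventor; this Python sketch and its field names are hypothetical), a customScheme for a review summarizer and the decoding of a matching reply might look like:

```python
import json

# Hypothetical customScheme: the JSON shape we ask the model to follow.
custom_scheme = {
    "summary": "string",
    "sentiment": "positive | negative | neutral",
    "keywords": ["string"],
}

# A reply shaped like custom_scheme (what a well-behaved model returns).
sample_reply = '{"summary": "Great pacing.", "sentiment": "positive", "keywords": ["pacing"]}'

# Decode the structured output into a dictionary your blocks can read.
data = json.loads(sample_reply)
print(data["sentiment"])  # -> positive
```

Because the reply follows a fixed schema, every field can be read directly with no string parsing.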
Existing users, please DM me to get the latest version.
Create Your Own AI “Mini-APIs” in App Inventor — The Simple Way with GroqText!
No more battling messy AI text! Say hello to clean, structured data with your own AI-powered features inside App Inventor.
Ever ask an AI something, only to get back a long blob of text that’s impossible to use without hours of parsing?
Wish you could just get exactly the info you need, in a format your app understands — right away?
It's way simpler than you might think.
By combining the GroqText Extension with a smart prompt strategy — asking the AI to reply in JSON format — you can build reliable, custom AI tools that work seamlessly in your App Inventor apps.
Think of it like creating a lightweight, personal API — powered by Groq — without any server setup or advanced coding skills!
What Makes This So Powerful?
On-Demand AI Features
Instant translations, summaries, keyword detection, emotion analysis — you name it. You can create mini-AI tools for almost any use case.
Consistent, Structured Output
By using JSON responses, you get clean data like `{ "summary": "...", "keywords": [...] }`, ready to use in your blocks with no guesswork.
Tidy, Simple Logic
Forget the spaghetti of text parsing. Your block logic becomes clean and easy to follow.
Why It’s a Perfect Fit for App Inventor
You Write the “Function” as a Prompt
Just describe what you want and how the response should look.
Built-in Tools for Decoding
Use the built-in JSON-parsing block to extract values by field path.
Easy Data Handling
Use a simple "lookup in pairs" block to get the exact value you need from the dictionary — like the summary, translation, or anything else.
A Sample Workflow:
- Send a prompt with GroqText asking for JSON
- Get the response in the GotText event
- Decode it with ExtractValuefromJSON
- Done!
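Outside App Inventor, the same workflow can be sketched in a few lines of Python (prompt wording and field names are illustrative; in the app, the reply arrives in the GotText event and is decoded with ExtractValuefromJSON):

```python
import json

# 1. The prompt asks explicitly for JSON with known keys.
prompt = (
    "Summarize the text below. Reply ONLY with JSON of the form "
    '{"summary": "...", "keywords": ["..."]}.\n\n'
    "Text: App Inventor lets you build apps with blocks."
)

# 2. What GotText would deliver from the model; simulated here.
got_text = '{"summary": "Block-based app building.", "keywords": ["App Inventor", "blocks"]}'

# 3. Decode the reply and pick out exactly the fields you need.
reply = json.loads(got_text)
summary = reply["summary"]
keywords = reply["keywords"]
```

The prompt is effectively the "function signature" of your mini-API: change the requested keys and you have a new tool.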
No external servers. No complicated parsing. No fuss.
Just visual blocks and the power of GroqText doing exactly what you ask — in a format that fits right into your app.
Ready to level up your projects? Install the GroqText extension, craft a clear JSON-based prompt, and start building smart, AI-enhanced features in minutes — right from App Inventor.
New Update
Now process images using AI: GroqVision is now bundled free with GroqText.
Grab both for $6. Valid for existing users too. DM me to get the extension.
Llama 4 is now supported! 
The model that has overthrown Gemma, Mistral and Gemini 2.0
Llama 4 Scout, a 17 billion active parameter model with 16 experts, is the best multimodal model in the world in its class and is more powerful than all previous generation Llama models, while fitting in a single NVIDIA H100 GPU. Additionally, Llama 4 Scout offers an industry-leading context window of 10M and delivers better results than Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across a broad range of widely reported benchmarks.
Here's the ID: meta-llama/llama-4-scout-17b-16e-instruct
Llama 4 Maverick is here!
The model that has overthrown GPT-4o, Gemini 2.0 Flash and DeepSeek v3
Llama 4 Maverick, a 17 billion active parameter model with 128 experts, is the best multimodal model in its class, beating GPT-4o and Gemini 2.0 Flash across a broad range of widely reported benchmarks, while achieving comparable results to the new DeepSeek v3 on reasoning and coding—at less than half the active parameters. Llama 4 Maverick offers a best-in-class performance to cost ratio with an experimental chat version scoring ELO of 1417 on LMArena.
Here's the ID: meta-llama/llama-4-maverick-17b-128e-instruct