[Freemium] GroqText: DeepSeek, Llama, Gemma, ALLaM, Mixtral and Qwen in your app with a single extension (with free tokens daily)


:loudspeaker: ALLaM (Arabic LLM) is now supported

The model responds to questions asked in Arabic.
Model name: ALLaM
Developer: SDAIA
Model ID: allam-2-7b

Version 1.1

This is a major update with new blocks and awesome feature additions, such as chat history, system tone, and structured JSON output.

New Blocks :smile:

:speech_balloon: Chat History: The LLM now has access to all of your previously sent messages.

Disabled by default; enable it by setting this block to true.

image

:wastebasket: Reset Conversation History

image

:speaking_head: Set Tone (System Instruction for the AI Model)

E.g., "You are a developer."

image

:office: Structured JSON output

RequestCustomStructuredOutput

UserRequest = the question to ask the AI
customScheme = the JSON structure to generate

For example:

Output
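To make the idea concrete, here is a minimal Python sketch of what a structured-output request boils down to. This is a hypothetical illustration, not the extension's actual implementation: the variable names mirror the block's two parameters, and the model reply is a canned example rather than a real API call.

```python
import json

# Hypothetical sketch of RequestCustomStructuredOutput's two inputs.
user_request = "What is the capital of France?"
custom_scheme = '{"answer": "string", "confidence": "number"}'

# The prompt asks the model to answer ONLY with JSON in the given shape.
prompt = (
    f"{user_request}\n"
    f"Respond only with JSON matching this structure: {custom_scheme}"
)

# A well-behaved model reply might look like this:
reply = '{"answer": "Paris", "confidence": 0.99}'
parsed = json.loads(reply)  # clean data, ready for your block logic
```

Because the reply is valid JSON, `parsed["answer"]` gives you `"Paris"` directly, with no text parsing.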

Existing users, please DM me to get the latest version.

Create Your Own AI “Mini-APIs” in App Inventor — The Simple Way with GroqText!

No more battling messy AI text! Say hello to clean, structured data with your own AI-powered features inside App Inventor.

Ever ask an AI something, only to get back a long blob of text that’s impossible to use without hours of parsing?
Wish you could just get exactly the info you need, in a format your app understands — right away?

It’s way simpler than you might think.

By combining the GroqText Extension with a smart prompt strategy — asking the AI to reply in JSON format — you can build reliable, custom AI tools that work seamlessly in your App Inventor apps.

Think of it like creating a lightweight, personal API — powered by Groq — without any server setup or advanced coding skills!


What Makes This So Powerful?

:white_check_mark: On-Demand AI Features
Instant translations, summaries, keyword detection, emotion analysis — you name it. You can create mini-AI tools for almost any use case.

:white_check_mark: Consistent, Structured Output
By using JSON responses, you get clean data like { "summary": "...", "keywords": [...] }, ready to use in your blocks with no guesswork.

:white_check_mark: Tidy, Simple Logic
Forget the spaghetti of text parsing. Your block logic becomes clean and easy to follow.


Why It’s a Perfect Fit for App Inventor

:brain: You Write the “Function” as a Prompt
Just describe what you want and how the response should look.

:wrench: Built-in Tools for Decoding

Use the built-in JSON parsing block to pull values out of the response text using a field path.

:package: Easy Data Handling
Use a simple "lookup in pairs" block to get the exact value you need from the dictionary — like the summary, translation, or anything else.
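If you are curious what field-path extraction does under the hood, here is a rough Python stand-in. The helper name and dotted-path behavior are assumptions modeled on the extension's JSON block, not its actual code.

```python
import json
from functools import reduce

def extract_value(payload: str, field_path: str):
    """Walk a dotted field path (e.g. 'result.summary') through parsed JSON.
    A hypothetical stand-in for the extension's JSON-parsing block."""
    data = json.loads(payload)
    return reduce(lambda node, key: node[key], field_path.split("."), data)

reply = '{"result": {"summary": "short text", "keywords": ["a", "b"]}}'
extract_value(reply, "result.summary")   # "short text"
extract_value(reply, "result.keywords")  # ["a", "b"]
```

Each segment of the path is one dictionary lookup, which is exactly what the "lookup in pairs" approach does one step at a time.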


A Sample Workflow:

  1. Send a prompt with GroqText asking for JSON
  2. Get the response in the GotText event
  3. Decode it with ExtractValuefromJSON
  4. Done!
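The four steps above can be sketched in Python, using a canned reply in place of the GotText event (no real API call is made here, and the prompt wording is just an example):

```python
import json

# Step 1: the kind of prompt you would send with GroqText.
prompt = (
    "Summarize this article. Reply only with JSON shaped like "
    '{"summary": "...", "keywords": [...]}'
)

# Step 2: a canned example of what the GotText event might deliver.
got_text = (
    '{"summary": "Groq runs open LLMs with very low latency.", '
    '"keywords": ["Groq", "LLM", "latency"]}'
)

# Step 3: decode it (the role ExtractValuefromJSON plays in blocks).
data = json.loads(got_text)

# Step 4: done — use the clean values directly.
summary = data["summary"]
keywords = data["keywords"]
```

The whole "mini-API" is just a prompt plus one decode step; everything after that is ordinary dictionary lookups.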

No external servers. No complicated parsing. No fuss.
Just visual blocks and the power of GroqText doing exactly what you ask — in a format that fits right into your app.

Ready to level up your projects? Install the GroqText extension, craft a clear JSON-based prompt, and start building smart, AI-enhanced features in minutes — right from App Inventor.

New Update

Now process images using AI: GroqVision is now free with GroqText.
Grab both of them for $6. Valid for existing users too. DM me to get the extension.

:loudspeaker: Llama 4 is now supported! :smiling_face_with_three_hearts:

The model that outperforms Gemma, Mistral, and Gemini 2.0

Llama 4 Scout, a 17 billion active parameter model with 16 experts, is the best multimodal model in the world in its class and is more powerful than all previous generation Llama models, while fitting in a single NVIDIA H100 GPU. Additionally, Llama 4 Scout offers an industry-leading context window of 10M tokens and delivers better results than Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across a broad range of widely reported benchmarks.

Here's the ID: meta-llama/llama-4-scout-17b-16e-instruct

:fire: Llama 4 Maverick is here!

The model that outperforms GPT-4o, Gemini 2.0 Flash, and DeepSeek v3

Llama 4 Maverick, a 17 billion active parameter model with 128 experts, is the best multimodal model in its class, beating GPT-4o and Gemini 2.0 Flash across a broad range of widely reported benchmarks, while achieving comparable results to the new DeepSeek v3 on reasoning and coding—at less than half the active parameters. Llama 4 Maverick offers a best-in-class performance to cost ratio with an experimental chat version scoring ELO of 1417 on LMArena.

Here's the ID: meta-llama/llama-4-maverick-17b-128e-instruct