[Freemium] GroqText: 30+ LLMs including DeepSeek, Llama, Gemma, ALLaM, Mixtral and Qwen (Search / Code Execution / Vision Models / Streaming and more)

:loudspeaker: Allam (Arabic LLM) is now supported

The model responds to any question asked in Arabic.
Model Name: Allam
Developer: SDAIA
Here's the ID: allam-2-7b

Version 1.1

This is a major update with new blocks and awesome feature additions, such as chat history, system tone and structured JSON output.

New Blocks :smile:

:speech_balloon: Chat History: The LLM now has access to all of your previously sent messages.

Disabled by default; enable it by setting the block to true.


:wastebasket: Reset Conversation History


:speaking_head: Set Tone (System Instruction of the AI Model)

E.g., "You are a developer."


:office: Structured JSON output

RequestCustomStructuredOutput

UserRequest = the question to ask the AI
customScheme = the JSON structure the answer should follow

For example, with UserRequest = "Who wrote Frankenstein?" and customScheme = {"author": "", "year": 0}, you might get back {"author": "Mary Shelley", "year": 1818}.

Existing users, please DM me to get the latest version.

Create Your Own AI “Mini-APIs” in App Inventor — The Simple Way with GroqText!

No more battling messy AI text! Say hello to clean, structured data with your own AI-powered features inside App Inventor.

Ever ask an AI something, only to get back a long blob of text that’s impossible to use without hours of parsing?
Wish you could just get exactly the info you need, in a format your app understands — right away?

It's way simpler than you might think.

By combining the GroqText Extension with a smart prompt strategy — asking the AI to reply in JSON format — you can build reliable, custom AI tools that work seamlessly in your App Inventor apps.

Think of it like creating a lightweight, personal API — powered by Groq — without any server setup or advanced coding skills!


What Makes This So Powerful?

:white_check_mark: On-Demand AI Features
Instant translations, summaries, keyword detection, emotion analysis — you name it. You can create mini-AI tools for almost any use case.

:white_check_mark: Consistent, Structured Output
By using JSON responses, you get clean data like { "summary": "...", "keywords": [...] }, ready to use in your blocks with no guesswork.

:white_check_mark: Tidy, Simple Logic
Forget the spaghetti of text parsing. Your block logic becomes clean and easy to follow.


Why It’s a Perfect Fit for App Inventor

:brain: You Write the “Function” as a Prompt
Just describe what you want and how the response should look.

:wrench: Built-in Tools for Decoding

Use the built-in JSON parsing block to pull values out of the response text using a field path (see the sketch below).

:package: Easy Data Handling
Use a simple "lookup in pairs" block to get the exact value you need from the dictionary — like the summary, translation, or anything else.
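As a sketch of what field-path parsing means, here is the idea in Python (extract_value is a hypothetical stand-in; the real block ships inside the extension):

import json

def extract_value(text, path):
    # Walk a dotted field path, e.g. "links.patch.small", through parsed JSON
    node = json.loads(text)
    for key in path.split("."):
        node = node[int(key)] if isinstance(node, list) else node[key]
    return node

doc = '{"summary": "ok", "keywords": ["ai", "apps"]}'
print(extract_value(doc, "keywords.0"))  # prints: ai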


A Sample Workflow:

  1. Send a prompt with GroqText asking for JSON
  2. Get the response in the GotText event
  3. Decode it with ExtractValuefromJSON
  4. Done!

No external servers. No complicated parsing. No fuss.
Just visual blocks and the power of GroqText doing exactly what you ask — in a format that fits right into your app.
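If you are curious what this workflow looks like outside of blocks, here is a minimal Python sketch of the same prompt-for-JSON strategy against Groq's OpenAI-compatible endpoint (the model ID and the mini_api helper are examples, not part of the extension):

import json
import requests

GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"
API_KEY = "YOUR_GROQ_API_KEY"  # replace with your own key

def mini_api(user_text):
    # Ask the model for a strict-JSON reply, then decode it into a dict
    payload = {
        "model": "llama-3.1-8b-instant",
        "messages": [
            {"role": "system",
             "content": 'Reply ONLY with JSON shaped like {"summary": "...", "keywords": []}'},
            {"role": "user", "content": user_text},
        ],
        "response_format": {"type": "json_object"},  # enforce valid JSON
    }
    resp = requests.post(GROQ_URL,
                         headers={"Authorization": f"Bearer {API_KEY}"},
                         json=payload, timeout=30)
    resp.raise_for_status()
    return json.loads(resp.json()["choices"][0]["message"]["content"])

result = mini_api("Summarize: Groq runs open LLMs on very fast custom hardware.")
print(result["summary"], result["keywords"])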

Ready to level up your projects? Install the GroqText extension, craft a clear JSON-based prompt, and start building smart, AI-enhanced features in minutes — right from App Inventor.

New Update

Now process images using AI. GroqVision is now free with GroqText.
Grab both of them for $6. Valid for existing users too. DM me to get the extension.

:loudspeaker: Llama 4 is now supported! :smiling_face_with_three_hearts:

The model that has overthrown Gemma, Mistral and Gemini 2.0

Llama 4 Scout, a 17 billion active parameter model with 16 experts, is the best multimodal model in the world in its class and is more powerful than all previous generation Llama models, while fitting in a single NVIDIA H100 GPU. Additionally, Llama 4 Scout offers an industry-leading context window of 10M tokens and delivers better results than Gemma 3, Gemini 2.0 Flash-Lite, and Mistral 3.1 across a broad range of widely reported benchmarks.

Here's the ID: meta-llama/llama-4-scout-17b-16e-instruct

:fire: Llama 4 Maverick is here!

The model that has overthrown GPT-4o, Gemini 2.0 Flash and DeepSeek v3

Llama 4 Maverick, a 17 billion active parameter model with 128 experts, is the best multimodal model in its class, beating GPT-4o and Gemini 2.0 Flash across a broad range of widely reported benchmarks, while achieving comparable results to the new DeepSeek v3 on reasoning and coding—at less than half the active parameters. Llama 4 Maverick offers a best-in-class performance to cost ratio, with an experimental chat version scoring an ELO of 1417 on LMArena.

Here's the ID: meta-llama/llama-4-maverick-17b-128e-instruct

How can I download this version?

It's a paid purchase from here: https://buymeacoffee.com/techxsarthak/e/356706

Introducing Compound-Beta: Enhanced AI Capabilities with External Tools

While Large Language Models (LLMs) excel at generating text, Compound-Beta takes the next step. It's an advanced AI system designed to solve problems by taking action and intelligently using external tools, alongside powerful Llama models.

Key Features

  • Access to Real-Time Information: Compound-Beta interacts with external environments, providing more accurate and up-to-date responses than an LLM alone.
  • Multi-Tool Support: Two agentic tool systems are available: compound-beta and compound-beta-mini.

Agentic Tool Systems

Compound-Beta

  • Supports multiple tool calls per request.
  • Ideal for use cases requiring:
    • Multiple web searches per request.
    • Multiple code executions per request.

Compound-Beta-Mini

  • Supports a single tool call per request.
  • Ideal for use cases requiring:
    • A single web search per request.
    • A single code execution per request.
  • Performance: Average of 3x lower latency compared to Compound-Beta.

Available Tools

Both Compound-Beta systems support the following tools:

  • Web Search: Via Tavily.
  • Code Execution: Via E2B (Python support only).

Note: Custom user-provided tools are not supported at this time.
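As a sketch, calling one of these systems is just an ordinary chat completion with the compound model ID; the built-in tools fire automatically (Python against Groq's OpenAI-compatible endpoint):

import requests

resp = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_GROQ_API_KEY"},
    json={
        # No tool definitions needed: web search and code execution are built in
        "model": "compound-beta",  # or "compound-beta-mini" for lower latency
        "messages": [{"role": "user",
                      "content": "What was the latest SpaceX launch?"}],
    },
    timeout=120,
)
print(resp.json()["choices"][0]["message"]["content"])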

The Power of Compound LLMs

User: make a query to the spacex api and print its response
AI: To make a query to the SpaceX API and print its response, I will use the requests library in Python to send a GET request to the API endpoint. Based on the search results, I found that the SpaceX API has several endpoints for different types of data. The most relevant endpoint for this task appears to be https://api.spacexdata.com/v4/launches/latest. This endpoint returns information about the latest launch.

Here is the Python code to make a query to the SpaceX API and print its response:

import requests
import json

def get_latest_launch():
    url = 'https://api.spacexdata.com/v4/launches/latest'
    try:
        response = requests.get(url)
        response.raise_for_status()  # Raise an exception for HTTP errors
        return response.json()
    except requests.RequestException as e:
        print(f"An error occurred: {e}")
        return None

latest_launch = get_latest_launch()
print(json.dumps(latest_launch, indent=4))

This code sends a GET request to the https://api.spacexdata.com/v4/launches/latest endpoint and prints the response in a JSON format.

The response from the API includes various details such as the launch date, rocket used, crew members, and more. Here is an example of the response:

{
    "fairings": null,
    "links": {
        "patch": {
            "small": "https://images2.imgbox.com/eb/d8/D1Yywp0w_o.png",
            "large": "https://images2.imgbox.com/33/2e/k6VE4iYl_o.png"
        },
        "reddit": {
            "campaign": null,
            "launch": "https://www.reddit.com/r/spacex/comments/xvm76j/rspacex_crew5_launchcoast_docking_discussion_and/",
            "media": null,
            "recovery": null
        },
        "flickr": {
            "small": [],
            "original": []
        },
        "presskit": null,
        "webcast": "https://youtu.be/5EwW8ZkArL4",
        "youtube_id": "5EwW8ZkArL4",
        "article": null,
        "wikipedia": "https://en.wikipedia.org/wiki/SpaceX_Crew-5"
    },
    "static_fire_date_utc": null,
    "static_fire_date_unix": null,
    "net": false,
    "window": null,
    "rocket": "5e9d0d95eda69973a809d1ec",
    "success": true,
    "failures": [],
    "details": null,
    "crew": [
        "62dd7196202306255024d13c",
        "62dd71c9202306255024d13d",
        "62dd7210202306255024d13e",
        "62dd7253202306255024d13f"
    ],
    "ships": [],
    "capsules": [
        "617c05591bad2c661a6e2909"
    ],
    "payloads": [
        "62dd73ed202306255024d145"
    ],
    "launchpad": "5e9e4502f509094188566f88",
    "flight_number": 187,
    "name": "Crew-5",
    "date_utc": "2022-10-05T16:00:00.000Z",
    "date_unix": 1664985600,
    "date_local": "2022-10-05T12:00:00-04:00",
    "date_precision": "hour",
    "upcoming": false,
    "cores": [
        {
            "core": "633d9da635a71d1d9c66797b",
            "flight": 1,
            "gridfins": true,
            "legs": true,
            "reused": false,
            "landing_attempt": true,
            "landing_success": true,
            "landing_type": "ASDS",
            "landpad": "5e9e3033383ecbb9e534e7cc"
        }
    ],
    "auto_update": true,
    "tbd": false,
    "launch_library_id": "f33d5ece-e825-4cd8-809f-1d4c72a2e0d3",
    "id": "62dd70d5202306255024d139"
}

This response provides information about the latest launch, including the launch date, rocket used, crew members, and more.

This is the power of Groq's Compound LLMs: they can search the web and execute code!

The extension now supports 30+ LLMs, including agentic LLMs with search and tool calls such as code execution.

New Update :loudspeaker:

Version 1.2

This is a major update with lots of new blocks :new: and improvements.
The extension now supports Vision models: you can now ask questions about images.
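As a rough sketch of what the vision support wraps (Groq's OpenAI-compatible multimodal format; the file name and model ID are examples), asking a question about a local image looks like this in Python:

import base64
import requests

# Encode a local image file as a base64 data URL (the new blocks accept
# local file paths and do the equivalent for you)
with open("photo.jpg", "rb") as f:
    b64 = base64.b64encode(f.read()).decode()

resp = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_GROQ_API_KEY"},
    json={
        "model": "meta-llama/llama-4-scout-17b-16e-instruct",  # vision-capable
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": "What is in this image?"},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    },
    timeout=60,
)
print(resp.json()["choices"][0]["message"]["content"])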

New Features

  1. Supports streaming responses in chunks (see the sketch after this list)
  2. Supports setting Temperature, Stop, Top-P and MaxTokens for responses
  3. A dropdown block for selecting the model name of Text and Vision models, so there is no need to come back here to find the IDs of available models
  4. Images can now be processed from local file paths, instead of only from a URL as in previous versions
  5. The blocks are more readable, tooling is supported, and Search and Vision models are also available
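Here is a rough Python sketch of what streaming with these parameters looks like against Groq's OpenAI-compatible endpoint (the model ID is an example; the extension does the equivalent behind its blocks):

import json
import requests

resp = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    headers={"Authorization": "Bearer YOUR_GROQ_API_KEY"},
    json={
        "model": "llama-3.3-70b-versatile",
        "messages": [{"role": "user", "content": "Tell me a short story."}],
        "stream": True,        # deliver the reply chunk by chunk
        "temperature": 0.7,    # Temperature block
        "top_p": 0.9,          # Top-P block
        "max_tokens": 256,     # MaxTokens block
        "stop": ["THE END"],   # Stop block
    },
    stream=True,
    timeout=60,
)
# Each server-sent event line looks like: data: {...json chunk...}
for line in resp.iter_lines():
    if line.startswith(b"data: ") and line != b"data: [DONE]":
        delta = json.loads(line[6:])["choices"][0]["delta"]
        print(delta.get("content") or "", end="", flush=True)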

The documentation at the top has been updated.

If you have already purchased the extension, you will receive an email with the new version soon.

Keep Building
Sarthak Gupta

Can you send the AIA?


Sharing paid versions of extensions is strictly not allowed. See the extension terms. It's piracy even inside an AIA.
@TechInterest, if you want to share the AIA with @Prem_Gupta, share it with the extension removed.

PS: It's fine if you are using GroqTextMini (free version)

I wrote an in-depth guide about the Groq API and the benefits you get on the free plan.

I never sent anything.


How am I supposed to know? I was just advising @Prem_Gupta about the extension license.

Brother, @TechInterest made the project above with a free AIX, so I asked him to send the AIA. If possible, please send the AIA, @TechInterest.


Also, I'm no longer active on this community/forum; I only answer because I get emails when someone responds or messages me.


How do I make the LLM remember chat history?

Just set the chat history block to true to send the previous messages to the AI.
The extension handles everything :raised_hands:
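Under the hood, chat history just means replaying the earlier turns with every request. A minimal Python sketch of the idea (the extension does the equivalent internally; the model ID is an example):

import requests

GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"
API_KEY = "YOUR_GROQ_API_KEY"
history = []  # grows with every turn

def ask(question):
    # Send the new question together with all previous turns
    history.append({"role": "user", "content": question})
    resp = requests.post(GROQ_URL,
                         headers={"Authorization": f"Bearer {API_KEY}"},
                         json={"model": "llama-3.3-70b-versatile",
                               "messages": history},
                         timeout=30)
    answer = resp.json()["choices"][0]["message"]["content"]
    history.append({"role": "assistant", "content": answer})
    return answer

ask("My name is Sam.")
print(ask("What is my name?"))  # the model can now recall "Sam"
# Resetting the conversation (the wastebasket block) is just: history.clear()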