[Freemium] GroqText: 30+ LLMs including DeepSeek, Llama, Gemma, ALLaM, Mixtral and Qwen (Search / Code Execution / Vision Models / Streaming and more)

🧩 GroqText

An extension for MIT App Inventor 2.
An extension to integrate AI text and vision models into applications using the Groq API, with streaming support. My other extensions
Built by Sarthak Gupta

:memo: Specifications


:package: Package: com.sarthakdev.groqtext
:floppy_disk: Size: 20.68 KB
:gear: Version: 1.2
:iphone: Minimum API Level: 7
:date: Updated On: 2025-05-25T18:30:00Z
:computer: Built & documented using: FAST-CLI v2.8.1
Extension License: here

Introduction

  • Integrate a large number of AI models into your app through the Groq API
  • Includes a generous free plan with a daily rate limit (no credit card required)
  • Includes 2B, 8B and 70B parameter models

Features of Groq Inference

  • Lightning-fast AI inference
  • Supports 30+ AI models from 5+ providers
  • Generous free plan with up to 500k tokens* per day, usable in production

Events:

GroqText has a total of 5 events.

:yellow_heart: GroqStreamResponseReceived

Event triggered when a streaming response part is received

| Parameter      | Type |
| -------------- | ---- |
| partialContent | text |
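
The extension raises this event once per chunk while StreamResponses is enabled. For context, here is a rough Python sketch (not the extension's own code) of the server-sent-event stream that Groq's OpenAI-compatible chat completions endpoint produces when streaming; the API key and model name are placeholders:

```python
import json
import requests

API_KEY = "YOUR_GROQ_API_KEY"  # placeholder
URL = "https://api.groq.com/openai/v1/chat/completions"

payload = {
    "model": "llama-3.1-8b-instant",
    "messages": [{"role": "user", "content": "Hello"}],
    "stream": True,
}

with requests.post(URL, json=payload,
                   headers={"Authorization": f"Bearer {API_KEY}"},
                   stream=True) as resp:
    for line in resp.iter_lines():
        # Streamed chunks arrive as "data: {...}" lines; "[DONE]" ends the stream.
        if not line or not line.startswith(b"data: "):
            continue
        data = line[len(b"data: "):]
        if data == b"[DONE]":
            break
        chunk = json.loads(data)
        # Each chunk's delta is roughly what GroqStreamResponseReceived
        # delivers as partialContent.
        print(chunk["choices"][0]["delta"].get("content") or "", end="", flush=True)
```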

:yellow_heart: CustomStructuredOutputReceived

Event triggered when custom structured output is received from the Groq API

| Parameter  | Type |
| ---------- | ---- |
| jsonOutput | text |

:yellow_heart: GroqResponseReceived

Event triggered when AI response is received

| Parameter        | Type   |
| ---------------- | ------ |
| statusCode       | number |
| response         | text   |
| messageContent   | text   |
| finishReason     | text   |
| model            | text   |
| queueTime        | text   |
| promptTokens     | text   |
| completionTokens | text   |

:yellow_heart: GroqRequestError

Event triggered when an error occurs in Groq API request

| Parameter    | Type |
| ------------ | ---- |
| errorMessage | text |

:yellow_heart: GroqMessageContentReceived

Event triggered when AI message content is extracted

| Parameter      | Type |
| -------------- | ---- |
| messageContent | text |

Methods:

GroqText has a total of 5 methods.

:purple_heart: ExtractJSONValue

Extract a specific value from the JSON response

| Parameter  | Type |
| ---------- | ---- |
| jsonString | text |
| fieldPath  | text |

:purple_heart: AskQuestion

Ask a question to the AI

| Parameter   | Type |
| ----------- | ---- |
| userMessage | text |

:purple_heart: ProcessImage

Process an image with a text prompt using a local image path

| Parameter | Type |
| --------- | ---- |
| prompt    | text |
| imagePath | text |
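
ProcessImage sends the image at imagePath together with the prompt to the selected vision model. The exact request the extension builds is not shown here, but a plausible Python sketch of the equivalent call, following Groq's OpenAI-compatible vision format (the model ID and file path are placeholders), looks like this:

```python
import base64
import requests

API_KEY = "YOUR_GROQ_API_KEY"                        # placeholder
IMAGE_PATH = "/storage/emulated/0/Download/cat.jpg"  # placeholder path

# Encode the local image as a base64 data URL so it can travel in the request body.
with open(IMAGE_PATH, "rb") as f:
    b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {
    "model": "meta-llama/llama-4-scout-17b-16e-instruct",  # assumed vision model ID
    "messages": [{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image"},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
        ],
    }],
}

resp = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(resp.json()["choices"][0]["message"]["content"])
```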

:purple_heart: RequestCustomStructuredOutput

Request custom structured output from Groq API

| Parameter         | Type |
| ----------------- | ---- |
| apiKey            | text |
| modelName         | text |
| systemInstruction | text |
| userRequest       | text |
| customSchema      | text |
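
The customSchema parameter describes the JSON shape you want back. The extension's internal request format is not documented here, but as a rough illustration, JSON mode on Groq's OpenAI-compatible API can be combined with a schema like the one below (the schema's property names are made up for the example):

```python
import json
import requests

API_KEY = "YOUR_GROQ_API_KEY"  # placeholder

# Example schema; the property names are illustrative only.
custom_schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "temperature_c": {"type": "number"},
    },
    "required": ["city", "temperature_c"],
}

payload = {
    "model": "llama-3.3-70b-versatile",
    "response_format": {"type": "json_object"},  # force a JSON reply
    "messages": [
        {"role": "system",
         "content": "Reply only with JSON matching this schema: " + json.dumps(custom_schema)},
        {"role": "user",
         "content": "Extract the data from: 'It is 21 degrees in Paris today.'"},
    ],
}

resp = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
# The returned JSON string is what CustomStructuredOutputReceived would
# surface as jsonOutput.
print(resp.json()["choices"][0]["message"]["content"])
```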

:purple_heart: ResetConversation

Reset the conversation by clearing the chat history

Setters:

GroqText has a total of 11 setter properties.

:green_heart: ApiKey

Set the Groq API Key

  • Input type: text

:green_heart: ModelName

Set the AI Model Name for text tasks

  • Input type: text
  • Helper class: ModelName
  • Helper enums: Llama_3_3_70b_Versatile, Llama_3_1_8b_Instant, Llama3_70b_8192, Llama3_8b_8192, Gemma2_9b_It, Meta_Llama_Llama_Guard_4_12B, Allam_2_7b, DeepSeek_R1_Distill_Llama_70b, Meta_Llama_4_Maverick, Meta_Llama_4_Scout, Mistral_Saba_24b, Qwen_Qwq_32b, Compound_Beta, Compound_Beta_Mini

:green_heart: VisionModelName

Set the AI Model Name for vision tasks

  • Input type: text
  • Helper class: VisionModelName
  • Helper enums: Llama_4_Scout, Llama_4_Maverick

:green_heart: ApiUrl

Set the API Endpoint URL

  • Input type: text

:green_heart: SystemMessage

Set the system message for the AI

  • Input type: text

:green_heart: ChatHistoryEnabled

Enable or disable chat history

  • Input type: boolean

:green_heart: Temperature

Set the temperature

  • Input type: number

:green_heart: MaxTokens

Set the max tokens

  • Input type: number

:green_heart: TopP

Set the top P value

  • Input type: number

:green_heart: StreamResponses

Set whether to stream responses for text and vision tasks

  • Input type: boolean

:green_heart: Stop

Set the stop value

  • Input type: text
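
Most of these properties correspond one-to-one with fields of a Groq chat completions request. The mapping below is an assumption based on the property names (the extension's exact behaviour may differ), shown as a Python sketch against Groq's OpenAI-compatible endpoint:

```python
import requests

# Illustrative values; ApiKey is a placeholder.
properties = {
    "ApiKey": "YOUR_GROQ_API_KEY",
    "ApiUrl": "https://api.groq.com/openai/v1/chat/completions",
    "ModelName": "llama-3.3-70b-versatile",
    "SystemMessage": "You are a helpful assistant.",
    "Temperature": 0.7,
    "MaxTokens": 1024,
    "TopP": 1.0,
    "Stop": "",
    "StreamResponses": False,
}

# Assumed mapping of setter properties onto the request body.
payload = {
    "model": properties["ModelName"],
    "temperature": properties["Temperature"],
    "max_tokens": properties["MaxTokens"],
    "top_p": properties["TopP"],
    "stream": properties["StreamResponses"],
    "messages": [
        {"role": "system", "content": properties["SystemMessage"]},
        {"role": "user", "content": "Hello"},
    ],
}
if properties["Stop"]:
    payload["stop"] = properties["Stop"]

resp = requests.post(
    properties["ApiUrl"],
    json=payload,
    headers={"Authorization": "Bearer " + properties["ApiKey"]},
)
print(resp.status_code, resp.json()["choices"][0]["message"]["content"])
```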

Getters:

GroqText has a total of 11 getter properties.

:green_circle: ApiKey

Get the Groq API Key

  • Return type: text

:green_circle: ModelName

Get the AI Model Name for text tasks

  • Return type: text

:green_circle: VisionModelName

Get the AI Model Name for vision tasks

  • Return type: text

:green_circle: ApiUrl

Get the API Endpoint URL

  • Return type: text

:green_circle: SystemMessage

Get the system message for the AI

  • Return type: text

:green_circle: ChatHistoryEnabled

Get whether chat history is enabled

  • Return type: boolean

:green_circle: Temperature

Get the temperature

  • Return type: number

:green_circle: MaxTokens

Get the max tokens

  • Return type: number

:green_circle: TopP

Get the top P value

  • Return type: number

:green_circle: StreamResponses

Get whether responses are streamed for text and vision tasks

  • Return type: boolean

:green_circle: Stop

Get the stop value

  • Return type: text

Try the extension for free with GroqTextMini (Free)

Here are the differences between the free and paid versions.

| GroqTextMini | GroqText |
| --- | --- |
| Free | Paid ($5.99) |
| Llama 8B model only | 30+ AI models (Llama, Gemma, Mixtral, DeepSeek, Qwen, distilled models) |
| 8B model | 1B, 2B, 3B, 8B, 32B, 70B, 80B models |
| 500 tokens | Unlimited tokens, depending on model capacity |
| No image support | Image support |
| No code execution | Code execution support |
| No search support | Search support |

GroqTextMini: :robot: com.sarthakdev.groqtextmini.aix (8.3 KB)

Purchase the full GroqText extension from here for only $5.99

Purchase Extension

You can purchase the extension instantly from the link below for just $5.99.


Sample blocks to send "hello" to the LLM

ExtractJSONValue function


Sample JSON

{
  "id": "chatcmpl-7ceff50a-a5f1",
  "object": "chat.completion",
  "created": 1736604854,
  "model": "llama3-8b-8192",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?"
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "queue_time": 0.017792521000000002,
    "prompt_tokens": 11,
    "prompt_time": 0.001729597,
    "completion_tokens": 25,
    "completion_time": 0.020833333,
    "total_tokens": 36,
    "total_time": 0.02256293
  },
  "system_fingerprint": "fp_a9",
  "x_groq": { "id": "req_01jhavea" }
}
  • choices[0].message.content : get the content of the message sent by the assistant
  • choices[0].finish_reason : get the reason the generation stopped
  • model : get the model name
  • usage.queue_time : get the queue time
  • usage.prompt_tokens : get the prompt tokens
  • usage.completion_tokens : get the completion tokens
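
As an illustration of the same dotted-path idea (this is not the extension's internal code), here is a small Python helper that resolves paths like choices[0].message.content against the sample response:

```python
import json
import re

def extract_json_value(json_string: str, field_path: str):
    """Walk a dotted path such as 'choices[0].message.content' through parsed JSON."""
    node = json.loads(json_string)
    for part in field_path.split("."):
        match = re.fullmatch(r"(\w+)(?:\[(\d+)\])?", part)
        key, index = match.group(1), match.group(2)
        node = node[key]          # descend into the object by key
        if index is not None:
            node = node[int(index)]  # then into the list by index, if given
    return node

# With the sample response above:
#   extract_json_value(sample, "choices[0].message.content")  -> the assistant's reply
#   extract_json_value(sample, "usage.completion_tokens")     -> 25
```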

Please contact me via DM with any questions, or reply here. I will try to answer as soon as possible.

:loudspeaker: DeepSeek R1 is now supported

The extension now supports DeepSeek R1
Here's the ID: deepseek-r1-distill-llama-70b


:loudspeaker: Qwen coder is now supported

The extension now supports Alibaba Qwen
Here's the ID: qwen-2.5-coder-32b

I don't think these blocks are correct. Please make some corrections.

Move the 'set Label.Text to ...' block to the right place.


yes, exactly


Taifun


Yes, that was by mistake... I will make the corrections.

Introducing GroqTextMini (Free)

Features

  1. Use the llama3-8b-8192 model in your app
  2. Limit of maximum 500 tokens per response
  3. Check the difference between the paid and free .aix below

| GroqTextMini | GroqText |
| --- | --- |
| Free | Paid ($5.99) |
| Llama 8B model only | 15+ AI models (Llama, Gemma, Mixtral, DeepSeek, Qwen, distilled models) |
| 8B model | 1B, 2B, 3B, 8B, 32B, 70B, 80B models |
| 500 tokens | Unlimited tokens, depending on model capacity |

Download aix

:robot: com.sarthakdev.groqtextmini.aix (8.3 KB)

If you want the full GroqText extension, you can get it from here for only $5.99.


:loudspeaker: Qwen is now supported

The extension now supports Alibaba Qwen standard
Here's the ID: qwen-2.5-32b

:loudspeaker: Qwen is now supported

The extension now supports Alibaba Qwen QWQ
Here's the ID: qwen-qwq-32b

:loudspeaker: Mistral Saba is now supported

The extension now supports Mistral Saba
Here's the ID: mistral-saba-24b

:loudspeaker: Distilled models are now supported

The extension now supports distilled Llama, Qwen and DeepSeek models
Here are the IDs: deepseek-r1-distill-qwen-32b, deepseek-r1-distill-llama-70b-specdec, deepseek-r1-distill-llama-70b

:rocket: Updated Free Daily tokens and Rate Limits

Groq now supports 17 leading AI models from different providers, all under a single API key, with free daily token usage. (RPM = requests per minute, RPD = requests per day, TPM = tokens per minute, TPD = tokens per day, ASH = audio seconds per hour, ASD = audio seconds per day.)

| Model ID | RPM | RPD | TPM | TPD | ASH | ASD |
| --- | --- | --- | --- | --- | --- | --- |
| deepseek-r1-distill-llama-70b | 30 | 1,000 | 6,000 | - | - | - |
| deepseek-r1-distill-qwen-32b | 30 | 1,000 | 6,000 | - | - | - |
| gemma2-9b-it | 30 | 14,400 | 15,000 | 500,000 | - | - |
| llama-3.1-8b-instant | 30 | 14,400 | 20,000 | 500,000 | - | - |
| llama-3.1-70b-versatile | 30 | 14,400 | 6,000 | 200,000 | - | - |
| llama-3.2-1b-preview | 30 | 7,000 | 7,000 | 500,000 | - | - |
| llama-3.2-3b-preview | 30 | 7,000 | 7,000 | 500,000 | - | - |
| llama-3.3-70b-specdec | 30 | 1,000 | 6,000 | 100,000 | - | - |
| llama-3.3-70b-versatile | 30 | 1,000 | 6,000 | 100,000 | - | - |
| llama-guard-3-8b | 30 | 14,400 | 15,000 | 500,000 | - | - |
| llama3-8b-8192 | 30 | 14,400 | 30,000 | 500,000 | - | - |
| llama3-70b-8192 | 30 | 14,400 | 6,000 | 500,000 | - | - |
| mistral-saba-24b | 30 | 1,000 | 6,000 | - | - | - |
| mixtral-8x7b-32768 | 30 | 14,400 | 5,000 | 500,000 | - | - |
| qwen-2.5-32b | 30 | 1,000 | 6,000 | - | - | - |
| qwen-2.5-coder-32b | 30 | 1,000 | 6,000 | - | - | - |
| qwen-qwq-32b | 30 | 1,000 | 6,000 | - | - | - |
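
If a model's free-tier quota is exhausted, the API responds with a rate-limit error (HTTP 429); one simple mitigation is to fall back to another model ID from the table above. A hedged Python sketch (the model choices and error handling are illustrative, not part of the extension):

```python
import requests

API_KEY = "YOUR_GROQ_API_KEY"  # placeholder
URL = "https://api.groq.com/openai/v1/chat/completions"
# Illustrative fallback order taken from the table above.
FALLBACK_MODELS = ["llama-3.3-70b-versatile", "llama-3.1-8b-instant", "gemma2-9b-it"]

def ask(prompt: str) -> str:
    for model in FALLBACK_MODELS:
        resp = requests.post(
            URL,
            json={"model": model,
                  "messages": [{"role": "user", "content": prompt}]},
            headers={"Authorization": f"Bearer {API_KEY}"},
        )
        if resp.status_code == 429:  # this model's minute/daily limit reached
            continue                 # try the next model in the list
        resp.raise_for_status()
        return resp.json()["choices"][0]["message"]["content"]
    raise RuntimeError("All fallback models are rate limited")

# print(ask("Hello"))
```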

Perfect..

What a work buddy!!

Just imagine!!

Nice work.

Congrats and all the best to you. Take my heart

:heart: :heart: :heart:


Very good extension.
You should use it for chatbots, and if you have the budget you can buy the full version, because it's affordable.

Example app I made:


After paying, when one model's usage limit is reached, can you switch to another model and keep using the extension?


Yes, it's possible.

$6 is a one-off payment, right?


Yes, updated versions will be free.

Can your paid extension read images?
