[PAID] GroqText Extension: Integrate best-in-class AI chat models (Google, Meta, etc.) in your app


Introduction

  • Integrate a wide range of AI models in your app through the Groq API
  • Includes a generous free plan with daily rate limits (no credit card required)
  • Includes models ranging from 1B to 70B parameters

Features of Groq Inference

  • Lightning-fast AI models
  • Supports 10+ open-source AI models: Gemma, Llama, and Mixtral
  • Generous free plan that gives up to 500k tokens* daily, free to use in production

AI Models Supported (Google, Meta, and Mistral)

  • gemma2-9b-it
  • llama3-8b-8192
  • llama3-70b-8192
  • llama-guard-3-8b
  • llama-3.1-8b-instant
  • llama-3.1-70b-versatile
  • llama-3.2-1b-preview*
  • llama-3.2-3b-preview*
  • llama-3.3-70b-versatile
  • llama-3.3-70b-specdec
  • mixtral-8x7b-32768

Groq AI Free Plan limits

The limit refreshes every 24 hrs

| ID | Requests per minute | Requests per day | Tokens per minute | Tokens per day |
|---|---|---|---|---|
| gemma2-9b-it | 30 | 14,400 | 15,000 | 500,000 |
| llama-3.1-70b-versatile | 30 | 14,400 | 6,000 | 200,000 |
| llama-3.1-8b-instant | 30 | 14,400 | 20,000 | 500,000 |
| llama-3.2-1b-preview | 30 | 7,000 | 7,000 | 500,000 |
| llama-3.2-3b-preview | 30 | 7,000 | 7,000 | 500,000 |
| llama-3.3-70b-specdec | 30 | 1,000 | 6,000 | 100,000 |
| llama-3.3-70b-versatile | 30 | 1,000 | 6,000 | 100,000 |
| llama-guard-3-8b | 30 | 14,400 | 15,000 | 500,000 |
| llama3-70b-8192 | 30 | 14,400 | 6,000 | 500,000 |
| llama3-8b-8192 | 30 | 14,400 | 30,000 | 500,000 |
| mixtral-8x7b-32768 | 30 | 14,400 | 5,000 | 500,000 |

Note: Preview models are currently in testing. They shouldn't be used in production.
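Because the free plan caps every model at 30 requests per minute (see the table above), it can help to throttle calls on the client side before they ever hit the API. A minimal sliding-window sketch in Python — the `RateLimiter` class is illustrative and not part of the extension; only the 30-per-minute figure comes from the table:

```python
import time
from collections import deque

class RateLimiter:
    """Client-side guard for the free plan's per-minute request cap.
    Illustrative sketch only; the limit values come from Groq's table."""

    def __init__(self, max_requests=30, window_seconds=60, clock=time.monotonic):
        self.max_requests = max_requests
        self.window = window_seconds
        self.clock = clock          # injectable for testing
        self.timestamps = deque()   # times of recent allowed requests

    def allow(self):
        now = self.clock()
        # Drop timestamps that have fallen out of the sliding window.
        while self.timestamps and now - self.timestamps[0] >= self.window:
            self.timestamps.popleft()
        if len(self.timestamps) < self.max_requests:
            self.timestamps.append(now)
            return True
        return False
```

Calling `allow()` before each request and backing off when it returns `False` keeps you under the per-minute cap, though the daily token budgets still have to be tracked separately.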

Blocks Documentation

Events:

GroqResponseReceived

Event triggered when an AI response is received.

| Parameter | Type |
|---|---|
| statusCode | number |
| response | text |

GroqRequestError

Event triggered when an error occurs in a Groq API request.

| Parameter | Type |
|---|---|
| errorMessage | text |

GroqMessageContentReceived

Event triggered when the AI message content is extracted.

| Parameter | Type |
|---|---|
| messageContent | text |

Methods:

SetAPIKey

Set the Groq API key.

| Parameter | Type |
|---|---|
| key | text |

SetModelName

Set the AI model name.

| Parameter | Type |
|---|---|
| model | text |

SetAPIURL

Set the API endpoint URL. (Currently this block has no effect, but it will become useful if Groq introduces a new API URL.)

| Parameter | Type |
|---|---|
| url | text |

ExtractJSONValue

Extract a specific value from the JSON response.
Use "choices[0].message.content" to get only the answer from the JSON.

| Parameter | Type |
|---|---|
| jsonString | text |
| fieldPath | text |

Return Type: text
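To make the `fieldPath` parameter concrete, here is a small Python sketch of how a path like "choices[0].message.content" can be resolved against a JSON string. The extension's actual implementation is not public, so this only mirrors the documented behaviour:

```python
import json
import re

def extract_json_value(json_string, field_path):
    """Resolve a dotted/bracketed field path against a JSON document.
    Illustrative sketch of the ExtractJSONValue block's behaviour."""
    value = json.loads(json_string)
    # Split 'choices[0].message.content' into ['choices', '0', 'message', 'content'].
    for token in re.findall(r"[^.\[\]]+", field_path):
        value = value[int(token)] if token.isdigit() else value[token]
    return value
```

For example, `extract_json_value(response, "choices[0].message.content")` returns just the assistant's answer text.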

AskQuestion

Ask a question to the AI.

| Parameter | Type |
|---|---|
| userMessage | text |
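Under the hood, AskQuestion presumably sends an HTTP POST to Groq's OpenAI-compatible chat-completions endpoint. A minimal Python sketch of building such a request: the endpoint URL and header shape follow Groq's public API, but the exact payload the extension sends is an assumption.

```python
import json

# Groq's OpenAI-compatible chat-completions endpoint.
GROQ_URL = "https://api.groq.com/openai/v1/chat/completions"

def build_ask_request(api_key, model, user_message):
    """Assemble (url, headers, body) for a single-turn chat request.
    Sketch of what AskQuestion likely issues; not the extension's code."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
    }
    return GROQ_URL, headers, json.dumps(body)
```

Sending this request (e.g. with `urllib.request` or `requests`) yields the JSON response shown in the Sample JSON section, which GroqResponseReceived then delivers to your blocks.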

Generate API Key: GroqCloud

Sample Blocks

Purchase Extension

You can purchase the extension instantly from the link below for just $6 (launch price; regular price $12).


Sample blocks to send "hello" to the LLM

Extract JSON Value Function

Sample JSON:

```json
{
  "id": "chatcmpl-7ceff50a-a5f1",
  "object": "chat.completion",
  "created": 1736604854,
  "model": "llama3-8b-8192",
  "choices": [
    {
      "index": 0,
      "message": {
        "role": "assistant",
        "content": "Hello! It's nice to meet you. Is there something I can help you with or would you like to chat?"
      },
      "logprobs": null,
      "finish_reason": "stop"
    }
  ],
  "usage": {
    "queue_time": 0.017792521000000002,
    "prompt_tokens": 11,
    "prompt_time": 0.001729597,
    "completion_tokens": 25,
    "completion_time": 0.020833333,
    "total_tokens": 36,
    "total_time": 0.02256293
  },
  "system_fingerprint": "fp_a9",
  "x_groq": {"id": "req_01jhavea"}
}
```
  • choices[0].message.content : Get the content of the message sent by the assistant
  • choices[0].finish_reason : Get the reason the generation stopped
  • model : Get the model name
  • usage.queue_time : Get the queue time
  • usage.prompt_tokens : Get the prompt tokens
  • usage.completion_tokens : Get the completion tokens
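In plain Python, the field paths listed above map onto ordinary dictionary and list indexing on the parsed response. A sketch against a condensed copy of the sample JSON (the content string is shortened here for brevity):

```python
import json

# Condensed version of the sample response above, keeping only
# the fields the bullet points reference.
sample = json.loads("""
{"model": "llama3-8b-8192",
 "choices": [{"index": 0,
              "message": {"role": "assistant", "content": "Hello! ..."},
              "finish_reason": "stop"}],
 "usage": {"queue_time": 0.0178, "prompt_tokens": 11,
           "completion_tokens": 25, "total_tokens": 36}}
""")

content = sample["choices"][0]["message"]["content"]   # the assistant's answer
finish = sample["choices"][0]["finish_reason"]         # why generation stopped
model = sample["model"]                                # model name
tokens = sample["usage"]["total_tokens"]               # total token usage
```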

Please contact me via DM or reply here for any questions. I will try to answer as soon as possible.

New Models Support

The extension now supports DeepSeek R1.
Here's the model ID: deepseek-r1-distill-llama-70b