Groq Extension
Integrate LLMs like Groq Compound, Qwen, Llama, Gemma, DeepSeek, GPT OSS & more, with generous free limits every day! Supports adding unlimited MCP servers and tools like Browser Search, Browser Automation, Code Execution, Site Visiting, Wolfram Alpha & more.
 Specifications
 Package: in.sarthakdev.groq
 Version: 1.0
 Minimum API Level: 14
 Updated On: 2025-09-27T18:30:00Z
 Built & documented using: FAST v5.0.0
 Features of the Extension
- Integrate 15+ LLMs with generous free limits daily.
- Support for adding remote MCP servers: tools that let your LLM connect to and control other real-world applications to automate tasks.
- Built-in tools for web search, code execution, and more.
- Chat history maintenance.
- Real-time streaming of outputs.
- JSON mode for structured JSON responses.
 
Multi-Components: built for ease of use!
Every LLM has specific features: some support tool use, some support MCP servers, and some support both. To simplify integrating LLMs into your app, I have split each LLM type into its own stable, working component. This means that when you import the extension, you get 12 components (some with variants), covering all the leading models on Groq.

- Compound (Groq): Agentic model from Groq that uses the best open source LLMs under the hood!
 - Compound Mini (Groq): Agentic model from Groq that uses the best open source LLMs under the hood!
 - DeepSeek (Meta + DeepSeek): The famous open-source LLM, now distilled with Llama.
 - Gemma (Google): The premier OS LLM from Google.
 - GPT OSS 120B (OpenAI): The first set of OSS models from OpenAI.
 - GPT OSS 20B (OpenAI): The first set of OSS models from OpenAI.
 - Kimi K2 (Kimi): The latest trending model from Kimi.
 - Llama 3 (Meta): The great LLMs from Meta. Known for speed and performance. (Available in two variants)
 - Llama 4 (Meta): The great LLMs from Meta. Known for speed and performance. (Available in two variants)
 - Llama Guard (Meta): Meta's model for prompt blocking and moderation. Known for speed and performance.
 - Llama Prompt Guard (Meta): Meta's models for prompt blocking and moderation. Known for speed and performance. (Available in two variants)
 - Qwen (Alibaba): A great model from Alibaba.
 
Feature Comparison 
| LLM | MCP | Tool Use | JSON Mode | Streaming Responses | 
|---|---|---|---|---|
| GPT OSS 120B | ✅ | Code Interpreter, Browser Search | ✅ | ✅ |
| GPT OSS 20B | ✅ | Code Interpreter, Browser Search | ✅ | ✅ |
| Qwen | ---- | ✅ | ✅ | ✅ |
| Llama 4 | ---- | ✅ | ✅ | ✅ |
| Kimi K2 | ---- | ✅ | ✅ | ✅ |
| Compound | ---- | Web Search, Visit Website, Browser Automation, Code Execution | ---- | ✅ |
| Compound Mini | ---- | Web Search, Visit Website, Browser Automation, Code Execution | ---- | ✅ |
| DeepSeek | ---- | ---- | ✅ | ✅ |
| Gemma | ---- | ---- | ✅ | ✅ |
| Llama 3 | ---- | ---- | ✅ | ✅ |
| Llama Guard | ---- | ---- | ✅ | ✅ |
| Llama Prompt Guard | ---- | ---- | ---- | ---- |
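
At the API level, the "Tool Use" and "MCP" columns translate into entries in the tools array of a request to Groq's OpenAI-compatible endpoint. The Python sketch below is illustrative only: the code_interpreter and browser_search types come from the ToolCodeInterpreter / ToolBrowserSearch properties documented later, while the field names of the MCP entry are an assumption based on the AddMcpServer parameters (serverLabel, serverUrl, headers), and the model id is a placeholder.

```python
# Illustrative only: roughly how the "Tool Use" and "MCP" columns map onto
# the tools array of a Groq OpenAI-compatible chat completion request.
# The MCP field names are assumed from the AddMcpServer parameters.
tools = [
    {"type": "code_interpreter"},   # ToolCodeInterpreter (GPT OSS only)
    {"type": "browser_search"},     # ToolBrowserSearch (GPT OSS only)
    {
        "type": "mcp",              # one entry per AddMcpServer call (assumed shape)
        "server_label": "my_server",
        "server_url": "https://example.com/mcp",
        "headers": {"Authorization": "Bearer <token>"},
    },
]

request_body = {
    "model": "openai/gpt-oss-120b",  # model id assumed for illustration
    "messages": [{"role": "user", "content": "Hello!"}],
    "tools": tools,                  # only sent when IncludeTools is true
}
```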
General Blocks 
These blocks are generally present in every component, depending on its feature capability.
Block Details for GPT OSS
 

Events:
GptOss120B has a total of 5 events.
1. GotResponse
Fired when the complete AI response is ready. Returns the full response text, usage statistics, response time in milliseconds, and model name.
| Parameter | Type | 
|---|---|
| response | text | 
| usage | dictionary | 
| latencyMs | number | 
| model | text | 
2. GotStream
Fired during streaming with each text chunk. Append 'chunk' to your label for real-time display; when 'done' is true, streaming has finished. (A streaming sketch follows the events list.)
| Parameter | Type | 
|---|---|
| chunk | text | 
| index | number | 
| done | boolean | 
3. ErrorOccurred
Fired when an error occurs. Returns error code, human-readable message, and raw error response for debugging.
| Parameter | Type | 
|---|---|
| code | text | 
| message | text | 
| raw | text | 
4. GotJSON
Fired when JSON mode is enabled and response is successfully parsed. Returns structured data as dictionary and raw JSON string.
| Parameter | Type | 
|---|---|
| json | dictionary | 
| raw | text | 
5. GotReasoning
Fired when the AI's reasoning/thinking process is available. Shows the model's step-by-step thought process before generating the response.
| Parameter | Type | 
|---|---|
| reasoning | text | 
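
Under the hood, streaming on Groq's OpenAI-compatible endpoint is delivered as server-sent events, and GotStream surfaces roughly that flow: each delta is a chunk, chunks arrive in order (index), and the terminating marker corresponds to done = true. Below is a minimal Python sketch, assuming the standard /chat/completions endpoint and a placeholder model id; it is not the extension's actual implementation.

```python
import json
import requests

# Minimal sketch of consuming a streamed chat completion from Groq's
# OpenAI-compatible endpoint. GotStream surfaces roughly this flow.
API_KEY = "<your Groq API key>"          # placeholder
resp = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "openai/gpt-oss-120b",  # model id assumed for illustration
        "messages": [{"role": "user", "content": "Write a haiku."}],
        "stream": True,
    },
    stream=True,
)

accumulated = ""                          # what GetStreamingText would return
chunk_index = 0                           # corresponds to the 'index' parameter
for line in resp.iter_lines():
    if not line or not line.startswith(b"data: "):
        continue                          # skip keep-alives and blank lines
    payload = line[len(b"data: "):]
    if payload == b"[DONE]":              # corresponds to done = true
        break
    delta = json.loads(payload)["choices"][0]["delta"]
    chunk = delta.get("content") or ""    # the 'chunk' parameter
    accumulated += chunk                  # append chunk, e.g. to a Label
    chunk_index += 1

print(accumulated)                        # full response once streaming ends
```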
Methods:
GptOss120B has a total of 9 methods.
1. AddMcpServer
Add an MCP server tool with a label, URL, and headers (YailDictionary).
| Parameter | Type | 
|---|---|
| serverLabel | text | 
| serverUrl | text | 
| headers | dictionary | 
2. ClearMcpServers
Remove all MCP servers (tools list).
3. GetMcpServers
Get MCP servers as list of dictionaries.
- Return type: list
4. Ask
Send a message to the AI and get a response. The prompt is your question or instruction to the AI.
| Parameter | Type | 
|---|---|
| prompt | text | 
5. AskWithSystem
Send a message with a system instruction. System sets the AI's behavior/role, prompt is your question. (See the history sketch after the methods list.)
| Parameter | Type | 
|---|---|
| system | text | 
| prompt | text | 
6. AddMessage
Append a message to the internal history. Role must be user/assistant/system.
| Parameter | Type | 
|---|---|
| role | text | 
| content | text | 
7. ClearConversation
Clear all conversation history. Use this to start a fresh conversation with the AI.
8. GetStreamingText
Get the accumulated streaming text so far. Useful for displaying the full response as it builds up.
- Return type: text
9. Cancel
Stop the current AI request immediately. Useful for stopping long responses or streaming.
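
To make the conversation-related blocks above concrete, here is a minimal sketch of the bookkeeping that Ask, AskWithSystem, AddMessage, ClearConversation and the HistoryLimit property imply. All names are illustrative assumptions; the extension's internal logic may differ.

```python
# Hypothetical sketch of the conversation bookkeeping implied by
# AskWithSystem, AddMessage, HistoryLimit and ClearConversation.
history = []            # cleared by ClearConversation
HISTORY_LIMIT = 20      # HistoryLimit: max turns (user + assistant pairs)

def add_message(role, content):
    """AddMessage: role must be user/assistant/system."""
    assert role in ("user", "assistant", "system")
    history.append({"role": role, "content": content})

def build_messages(system, prompt):
    """Roughly what AskWithSystem sends: system + trimmed history + prompt."""
    trimmed = history[-2 * HISTORY_LIMIT:]   # each turn = user + assistant message
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.extend(trimmed)
    messages.append({"role": "user", "content": prompt})
    return messages
```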
Designer:
GptOss120B has a total of 20 designer properties.
1. ApiKey
- Input type: string
2. BaseUrl
- Input type: string
- Default value: https://api.groq.com/openai/v1
3. Temperature
- Input type: float
- Default value: 1
4. IncludeTemperature
- Input type: boolean
- Default value: True
5. TopP
- Input type: float
- Default value: 1
6. IncludeTopP
- Input type: boolean
- Default value: True
7. MaxTokens
- Input type: non_negative_integer
- Default value: 8192
8. IncludeMaxTokens
- Input type: boolean
- Default value: True
9. Stream
- Input type: boolean
- Default value: False
10. IncludeStream
- Input type: boolean
- Default value: True
11. HistoryEnabled
- Input type: boolean
- Default value: False
12. HistoryLimit
- Input type: non_negative_integer
- Default value: 20
13. JSONMode
- Input type: boolean
- Default value: False
14. ReasoningLevel
- Input type: string
- Default value: medium
15. IncludeReasoning
- Input type: boolean
- Default value: True
16. IncludeResponseFormat
- Input type: boolean
- Default value: True
17. IncludeStopNull
- Input type: boolean
- Default value: False
18. IncludeTools
- Input type: boolean
- Default value: True
19. ToolCodeInterpreter
- Input type: boolean
- Default value: True
20. ToolBrowserSearch
- Input type: boolean
- Default value: True
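
The Include* designer properties above act as switches that decide whether a field is sent in the request at all, independent of its value. Below is a hedged sketch of that pattern, using the defaults listed above and the field names documented in the setters (max_completion_tokens, reasoning_effort, response_format, stop, tools); the actual assembly is internal to the extension.

```python
# Hedged sketch of how the Include* designer properties likely gate which
# fields appear in the request body (defaults shown as fallbacks).
def build_body(messages, cfg):
    body = {"model": cfg["model"], "messages": messages}
    if cfg.get("IncludeTemperature", True):
        body["temperature"] = cfg.get("Temperature", 1)
    if cfg.get("IncludeTopP", True):
        body["top_p"] = cfg.get("TopP", 1)
    if cfg.get("IncludeMaxTokens", True):
        body["max_completion_tokens"] = cfg.get("MaxTokens", 8192)
    if cfg.get("IncludeStream", True):
        # Streaming is forced off when JSONMode is true.
        body["stream"] = cfg.get("Stream", False) and not cfg.get("JSONMode", False)
    if cfg.get("IncludeReasoning", True):
        body["reasoning_effort"] = cfg.get("ReasoningLevel", "medium")
    if cfg.get("IncludeResponseFormat", True) and cfg.get("JSONMode", False):
        body["response_format"] = {"type": "json_object"}
    if cfg.get("IncludeStopNull", False):
        body["stop"] = None          # serialized as an explicit JSON null
    if cfg.get("IncludeTools", True):
        tools = []
        if cfg.get("ToolCodeInterpreter", True):
            tools.append({"type": "code_interpreter"})
        if cfg.get("ToolBrowserSearch", True):
            tools.append({"type": "browser_search"})
        if tools:
            body["tools"] = tools
    return body
```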
Setters:
GptOss120B has a total of 20 setter properties.
1. ApiKey
Your Groq API key.
- Input type: text
2. BaseUrl
Base URL (OpenAI-compatible). Default: https://api.groq.com/openai/v1
- Input type: text
3. Temperature
Temperature 0..2 (default 1)
- Input type: number
4. IncludeTemperature
Include temperature field.
- Input type: boolean
5. TopP
TopP 0..1 (default 1)
- Input type: number
6. IncludeTopP
Include top_p field.
- Input type: boolean
7. MaxTokens
Maximum completion tokens (default 8192)
- Input type: number
8. IncludeMaxTokens
Include max_completion_tokens field.
- Input type: boolean
9. Stream
Enable streaming (disabled when JSONMode is true).
- Input type: boolean
10. IncludeStream
Include stream field.
- Input type: boolean
11. HistoryEnabled
Maintain conversation history (default False).
- Input type: boolean
12. HistoryLimit
History turn limit (default 20). Each turn adds user and assistant messages.
- Input type: number
13. JSONMode
JSON mode: when true, response_format=json_object and streaming is disabled.
- Input type: boolean
14. ReasoningLevel
Reasoning level: low, medium, or high.
- Input type: text
15. IncludeReasoning
Include reasoning_effort field.
- Input type: boolean
16. IncludeResponseFormat
Include response_format when JSONMode is true.
- Input type: boolean
17. IncludeStopNull
Include stop: null (send explicit JSON null).
- Input type: boolean
18. IncludeTools
Include tools array.
- Input type: boolean
19. ToolCodeInterpreter
Include code_interpreter tool.
- Input type: boolean
20. ToolBrowserSearch
Include browser_search tool.
- Input type: boolean
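
As a companion to the JSONMode and IncludeResponseFormat properties above and the GotJSON event, here is a minimal sketch of what JSON mode means at the API level: the request asks for a json_object response format, streaming stays off, and the returned string is parsed into a dictionary. The endpoint call, model id, and prompts are placeholders, not the extension's internals.

```python
import json
import requests

# Minimal JSON-mode sketch (see JSONMode / IncludeResponseFormat above and
# the GotJSON event). Model id and prompts are placeholders.
API_KEY = "<your Groq API key>"
resp = requests.post(
    "https://api.groq.com/openai/v1/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "model": "openai/gpt-oss-120b",
        "messages": [
            {"role": "system", "content": "Reply only with JSON."},
            {"role": "user", "content": "Give me a colour palette as JSON."},
        ],
        "response_format": {"type": "json_object"},  # JSONMode = true
        # no "stream" field: streaming is disabled in JSON mode
    },
)
raw = resp.json()["choices"][0]["message"]["content"]   # GotJSON 'raw'
parsed = json.loads(raw)                                 # GotJSON 'json'
print(parsed)
```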
Getters:
GptOss120B has a total of 20 getter properties.
1. ApiKey
Your Groq API key.
- Return type: text
2. BaseUrl
Base URL (OpenAI-compatible). Default: https://api.groq.com/openai/v1
- Return type: text
3. Temperature
Temperature 0..2 (default 1)
- Return type: number
4. IncludeTemperature
Include temperature field.
- Return type: boolean
5. TopP
TopP 0..1 (default 1)
- Return type: number
6. IncludeTopP
Include top_p field.
- Return type: boolean
7. MaxTokens
Maximum completion tokens (default 8192)
- Return type: number
8. IncludeMaxTokens
Include max_completion_tokens field.
- Return type: boolean
9. Stream
Enable streaming (disabled when JSONMode is true).
- Return type: boolean
10. IncludeStream
Include stream field.
- Return type: boolean
11. HistoryEnabled
Maintain conversation history (default False).
- Return type: boolean
12. HistoryLimit
History turn limit (default 20). Each turn adds user and assistant messages.
- Return type: number
13. JSONMode
JSON mode: when true, response_format=json_object and streaming is disabled.
- Return type: boolean
14. ReasoningLevel
Reasoning level: low, medium, or high.
- Return type: text
15. IncludeReasoning
Include reasoning_effort field.
- Return type: boolean
16. IncludeResponseFormat
Include response_format when JSONMode is true.
- Return type: boolean
17. IncludeStopNull
Include stop: null (send explicit JSON null).
- Return type: boolean
18. IncludeTools
Include tools array.
- Return type: boolean
19. ToolCodeInterpreter
Include code_interpreter tool.
- Return type: boolean
20. ToolBrowserSearch
Include browser_search tool.
- Return type: boolean
Get the extension now! 
The extension is available for instant purchase at just $7.99!
If you have any questions, feel free to ask below!




