[Paid] 🌟 ChatGPT extension to create fantastic conversations with ChatGPT

#Mr_Koder

Introduction

Hello everyone, I am here today to introduce my new extension, called ChatGPT.

Unlock the potential of artificial intelligence in your App Inventor projects with the ChatGPT Extension. This powerful tool empowers you to seamlessly integrate OpenAI's cutting-edge language models, enabling you to build sophisticated and engaging AI-powered features within your apps.

Why Choose the ChatGPT Extension?

  • Unleash the Power of AI: Effortlessly incorporate advanced natural language processing into your apps.

  • Dynamic Conversations: Create interactive chatbots that remember past interactions, providing a more natural and engaging user experience.

  • Streaming for Speed: Handle large responses efficiently with streaming support, ensuring a smooth and responsive user interface.

  • Robust Error Handling: Built-in error management allows you to gracefully handle unexpected issues and provide informative feedback to users.

  • Audio Capabilities: Transcribe and translate audio using OpenAI's powerful Whisper API, expanding the possibilities for accessibility and multilingual support.

  • Image Generation: Create images from text descriptions inside your app using OpenAI's DALL-E models.

  • Image Analysis: Analyze images using OpenAI's ChatGPT Vision API and extract valuable insights from them.

  • Embeddings: Generate text embeddings for various NLP tasks and capture the meaning of and relationships between words.

  • Easy Integration: Designed specifically for MIT App Inventor, making integration a breeze.

  • Cost-Effective: Access a wealth of AI features for just $5 (including a Tutorial AIA file)!

Blocks




The SendMessage block is responsible for sending a conversation to ChatGPT and processing the response. Here's a breakdown (a sketch of the underlying request follows the parameter list):

  1. Block Description: This block allows users to interact with OpenAI's ChatGPT models and receive structured API-style responses.
  2. Function Parameters:
  • prompts: A list of conversation prompts provided by the user, as shown in the block above.
  • model: The name of the OpenAI model to be used.
  • apiKey: The API key for authorization.
  • maxTokens: The maximum number of tokens in the response.
  • temperature: A value controlling the randomness of the response.
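
Under the hood, this block presumably maps its parameters onto OpenAI's public chat completions endpoint. Here is a minimal Python sketch of that request; the endpoint and JSON fields come from OpenAI's API documentation, while the key and message values are placeholders, not the extension's actual code:

```python
import json
import urllib.request

API_KEY = "sk-..."  # your OpenAI API key (placeholder)

body = {
    "model": "gpt-3.5-turbo",          # model parameter
    "messages": [                      # the "prompts" list: one entry per turn
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ],
    "max_tokens": 1000,                # maxTokens parameter
    "temperature": 0.7,                # temperature parameter
}

req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer " + API_KEY},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["choices"][0]["message"]["content"])
```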


The RespondedToChat event is triggered when the OpenAI API successfully provides a response to a user's prompt sent through the SendMessage function. This event delivers the content of the response along with detailed metadata, including token usage information; a sketch of how the raw response maps onto these parameters follows the list.

Parameters:

  • responseId (String): A unique identifier for the response generated by the OpenAI API.

  • responseType (String): The type of the response object. It usually indicates the nature of the response data structure (e.g., "chat.completion").

  • createdTimestamp (Number - long): The timestamp (in Unix epoch time, milliseconds) when the response was created by the OpenAI API.

  • responseModel (String): The specific OpenAI model that generated the response (e.g., "gpt-3.5-turbo", "gpt-4").

  • choiceIndex (Number - int): The index of the selected choice within the response. The OpenAI API may offer multiple response choices; this parameter indicates which one is being presented (typically 0 for the first and usually only choice).

  • role (String): The role associated with the message in the conversation. Common roles are "system" (for initial instructions), "user" (for the user's prompt), and "assistant" (for the AI's response).

  • content (String): The text content of the response generated by the OpenAI model.

  • finishReason (String): Indicates why the response generation process finished. It can be one of the following:

    • stop: The model reached a natural stopping point or a stop sequence was generated.

    • length: The maximum number of tokens (maxTokens parameter) was reached.

    • content_filter: Content was omitted due to a flag from OpenAI's content filters.

    • null: The API response is still in progress or incomplete (this value might be present if there are issues in receiving the complete response).

  • promptTokens (Number - int): The number of tokens used in the user's prompt. This value is now calculated before sending the request using a basic word-count approximation. It will not be the exact prompt token count but will be a close estimate.

  • completionTokens (Number - int): The number of tokens used in the generated response completion. This value is returned by the OpenAI API. If it is absent from the response, it defaults to 0.

  • totalTokens (Number - int): The total number of tokens used in the entire request and response (prompt tokens + completion tokens). This value is returned by the OpenAI API. If it is absent from the response, it defaults to 0.
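
As a rough guide to where these values come from, here is how the fields of a chat completion JSON response (shaped per OpenAI's documented schema; the sample values are purely illustrative) would map onto the event parameters:

```python
# An illustrative chat completion response, shaped per OpenAI's schema.
data = {
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1700000000,
    "model": "gpt-3.5-turbo",
    "choices": [{"index": 0,
                 "message": {"role": "assistant", "content": "Hello!"},
                 "finish_reason": "stop"}],
    "usage": {"prompt_tokens": 9, "completion_tokens": 2, "total_tokens": 11},
}

choice = data["choices"][0]
usage = data.get("usage", {})
event = {
    "responseId": data["id"],
    "responseType": data["object"],
    "createdTimestamp": data["created"],
    "responseModel": data["model"],
    "choiceIndex": choice["index"],
    "role": choice["message"]["role"],
    "content": choice["message"]["content"],
    "finishReason": choice["finish_reason"],
    "promptTokens": usage.get("prompt_tokens", 0),        # defaults to 0 when absent
    "completionTokens": usage.get("completion_tokens", 0),
    "totalTokens": usage.get("total_tokens", 0),
}
print(event)
```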




The SendStreamedMessage function is designed to retrieve a response in chunks from the ChatGPT model. It allows for ongoing communication with the model and is specifically used for streaming responses; a sketch of the underlying streamed request follows the parameter list.

Parameters:

  • id (integer): An identifier for the stream.
  • prompts (YailList): A list of prompts (messages) that constitute the conversation with the model.
  • model (String): The model code used for the conversation.
  • apiKey (String): The API key required for authentication.
  • maxTokens (integer): The maximum number of tokens for the response.
  • temperature (double): A value that controls the randomness of the response.
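
Streaming presumably works by setting "stream": true on the same chat completions endpoint and forwarding each server-sent-event chunk to GotStream. A minimal Python sketch, with placeholder key and prompt:

```python
import json
import urllib.request

API_KEY = "sk-..."  # placeholder
body = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Tell me a short story."}],
    "max_tokens": 1000,
    "temperature": 0.7,
    "stream": True,  # ask the API for incremental chunks
}
req = urllib.request.Request(
    "https://api.openai.com/v1/chat/completions",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer " + API_KEY},
)
with urllib.request.urlopen(req) as resp:
    for raw_line in resp:                       # one SSE line per iteration
        line = raw_line.decode("utf-8").strip()
        if not line.startswith("data: "):
            continue
        payload = line[len("data: "):]
        if payload == "[DONE]":                 # end of stream -> FinishedStream
            break
        delta = json.loads(payload)["choices"][0]["delta"]
        print(delta.get("content", ""), end="", flush=True)  # -> GotStream
```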



The StopStream block and the associated StoppedStream event are used to manage streaming operations.




The StoppedStream event is an essential part of managing streaming operations; it is triggered when the streaming process is manually stopped by calling the StopStream block.



The GotStream event is fired repeatedly during an ongoing streaming conversation initiated by the SendStreamedMessage function. Each time the OpenAI API sends a chunk of the response, this event is triggered, delivering the partial response content along with updated token usage information.

Parameters:

  • id (Number - int): A user-defined identifier that was originally passed to the SendStreamedMessage function. This ID helps to distinguish between different streaming conversations if multiple are happening concurrently.

  • stream (String): The partial text content of the response received in the current chunk from the OpenAI API.

  • completionTokens (Number - int): An approximate number of tokens in the current stream chunk. This value is calculated using a basic word-count method similar to the prompt token approximation.

  • totalTokens (Number - int): The running total of estimated tokens (prompt + completion) up to this point in the stream. The prompt tokens are approximated before sending the request, and the completion tokens are accumulated as each chunk is received (see the sketch after this list).
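
The word-count approximation described above can be pictured in a few lines of Python; this is a sketch of the idea, not the extension's exact code, and the chunk strings are illustrative:

```python
# Minimal word-count token estimator; real tokenizers are more accurate,
# but a word count gives a close running estimate.
def approx_tokens(text: str) -> int:
    return len(text.split())

prompt_tokens = approx_tokens("Tell me a short story.")   # estimated up front
completion_tokens = 0
for chunk in ["Once upon ", "a time..."]:                 # each GotStream chunk
    completion_tokens += approx_tokens(chunk)
    total_tokens = prompt_tokens + completion_tokens      # running total
    print(repr(chunk), completion_tokens, total_tokens)
```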




The FinishedStream event is used to notify when all chunks of a stream have been returned through the GotStream event, indicating the completion of the streaming conversation.




RequestModeration

Description: This function asynchronously requests content moderation using the OpenAI Moderation API. It takes an API key and input text as parameters, sends a POST request to the API endpoint, and processes the response; a sketch of that request follows the parameter list.

Parameters:

  • apiKey (String): The API key for accessing the OpenAI Moderation API.
  • input (String): The input text or content to be moderated.
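
A minimal Python sketch of the request this block presumably makes, using OpenAI's public /v1/moderations endpoint (key and input are placeholders):

```python
import json
import urllib.request

API_KEY = "sk-..."  # placeholder
req = urllib.request.Request(
    "https://api.openai.com/v1/moderations",
    data=json.dumps({"input": "some user text to check"}).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer " + API_KEY},
)
with urllib.request.urlopen(req) as resp:
    result = json.loads(resp.read())["results"][0]

# These three values map onto the ModerationResult event parameters.
flagged = result["flagged"]
categories = json.dumps(result["categories"])
category_scores = json.dumps(result["category_scores"])
print(flagged, categories, category_scores)
```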


ModerationResult

Description: This event is triggered when the moderation result is received from the OpenAI Moderation API. It provides information about whether the content is flagged, categories, and category scores as parameters.

Parameters:

  • flagged (boolean): Indicates whether the content is flagged.
  • categories (String): JSON representation of the detected categories.
  • categoryScores (String): JSON representation of the scores for each category.

Usage: Handle this event to act on the moderation result, such as updating the user interface or blocking flagged content.


RequestAudioSpeech Function

Description: This function asynchronously requests audio speech synthesis from OpenAI's Audio Speech API. It takes parameters such as the API key, input text, model, voice, folder path, and file name; the resulting MP3 content is then written to a file. A sketch of the underlying request follows the examples below.

Parameters:

  • apiKey (String): The API key for accessing OpenAI's Audio Speech API.

  • text (String): The input text to be synthesized into speech.

  • model (String): The model to be used for speech synthesis: tts-1 or tts-1-hd.

  • voice (String): The voice to use when generating the audio. Supported voices are alloy, echo, fable, onyx, nova, and shimmer.

  • folderPath (String): The path to the folder where the MP3 file will be saved.

  • fileName (String): The name of the MP3 file to be saved.
Examples (audio samples):

  • Alloy
  • Echo
  • Fable
  • Onyx

You can try the other voices as well.
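
A minimal Python sketch of the request this block presumably makes, using OpenAI's public /v1/audio/speech endpoint; the API key, folder path, and file name are illustrative placeholders:

```python
import json
import urllib.request

API_KEY = "sk-..."  # placeholder
body = {
    "model": "tts-1",          # or "tts-1-hd"
    "voice": "alloy",          # alloy, echo, fable, onyx, nova, or shimmer
    "input": "Hello from the ChatGPT extension!",
}
req = urllib.request.Request(
    "https://api.openai.com/v1/audio/speech",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer " + API_KEY},
)
with urllib.request.urlopen(req) as resp:
    mp3_bytes = resp.read()                    # raw MP3 content

with open("/sdcard/Download/speech.mp3", "wb") as f:  # folderPath + fileName
    f.write(mp3_bytes)                         # -> SpeechFileSaved(filePath)
```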




SpeechFileSaved Event

Description: This event is fired when the MP3 file has been successfully saved. It provides the file path as a parameter.

Parameters:

  • filePath (String): The path where the MP3 file has been saved.

Usage: Handle this event to perform actions after the MP3 file has been successfully saved.


SpeechSynthesisError Event

Description: This event is fired when an error occurs during the audio speech synthesis process. It provides an error message as a parameter.

Parameters:

  • errorMessage (String): The error message describing the issue encountered.

Usage: Handle this event to capture and handle errors during the speech synthesis process.




RequestAudioTranscription

This block is responsible for making a request to OpenAI's Audio Transcriptions API to transcribe audio from a provided audio file (it transcribes audio into the input language). A sketch of the underlying multipart request follows the parameter list.

The block takes four parameters:

  • apiKey (API key for authentication),

  • audioFilePath (path to the audio file to be transcribed),

  • model (the model configuration); you can set it to whisper-1,

  • responseFormat (the format of the transcript output, one of: json, text, srt, verbose_json, or vtt).
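
Because the audio file is uploaded, this endpoint takes a multipart form rather than JSON. A Python sketch of the request this block presumably makes against OpenAI's public /v1/audio/transcriptions endpoint (key and file path are placeholders); RequestAudioTranslation presumably posts the same kind of form to /v1/audio/translations:

```python
import urllib.request
import uuid

API_KEY = "sk-..."  # placeholder
boundary = uuid.uuid4().hex
with open("/sdcard/Recordings/voice.mp3", "rb") as f:  # audioFilePath (placeholder)
    audio = f.read()

def field(name, value):
    # One plain-text multipart form field.
    return (f'--{boundary}\r\nContent-Disposition: form-data; '
            f'name="{name}"\r\n\r\n{value}\r\n').encode()

body = (
    field("model", "whisper-1")
    + field("response_format", "text")   # json, text, srt, verbose_json, vtt
    + (f'--{boundary}\r\nContent-Disposition: form-data; name="file"; '
       f'filename="voice.mp3"\r\nContent-Type: audio/mpeg\r\n\r\n').encode()
    + audio + b"\r\n"
    + f"--{boundary}--\r\n".encode()
)
req = urllib.request.Request(
    "https://api.openai.com/v1/audio/transcriptions",
    data=body,
    headers={"Content-Type": f"multipart/form-data; boundary={boundary}",
             "Authorization": "Bearer " + API_KEY},
)
with urllib.request.urlopen(req) as resp:
    print(resp.read().decode())          # -> AudioTranscriptionReceived
```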




The AudioTranscriptionReceived event notifies the application when audio transcription data is received.




The RequestAudioTranslation block is designed to request audio translation from OpenAI's Audio Translations API and return the "text" value from the response.

Parameters:

  • apiKey (API key for authorization),

  • audioFilePath (path to the audio file to be translated),

  • model (the model used for translation).




The ReturnAudioTranslation event is triggered when the audio translation response is received.




RequestDALL_EImages

Description

This function initiates a request to the OpenAI DALL-E Images API to generate images based on a given prompt. A sketch of the underlying request follows the Events list.

Parameters

  • apiKey (String): The API key for authentication.

  • model (String): (Optional) The model to use for image generation, defaults to "dall-e-2".

  • prompt (String): A text description of the desired image(s) (Required). The maximum length is 1000 characters for dall-e-2 and 4000 characters for dall-e-3.

  • n (int): (Optional) The number of images to generate, defaults to 1. Must be between 1 and 10. For dall-e-3, only n=1 is supported.

  • size (String): (Optional) The size of the generated images, defaults to "1024x1024". Must be one of "256x256", "512x512", or "1024x1024" for dall-e-2. Must be one of "1024x1024", "1792x1024", or "1024x1792" for dall-e-3 models.

Events

  • DALL_EImagesGenerated (List imageUrls): Fired when the DALL-E Images API successfully generates images. Returns a list of image URLs.


  • DALL_EImagesError (String errorMessage): Fired when an error occurs during the DALL-E Images API request. Returns an error message.
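
A minimal Python sketch of the request this function presumably makes, using OpenAI's public /v1/images/generations endpoint (key and prompt are placeholders):

```python
import json
import urllib.request

API_KEY = "sk-..."  # placeholder
body = {
    "model": "dall-e-2",               # or "dall-e-3"
    "prompt": "a watercolor fox in a forest",
    "n": 2,                            # dall-e-3 supports only n=1
    "size": "512x512",                 # must match the chosen model's sizes
}
req = urllib.request.Request(
    "https://api.openai.com/v1/images/generations",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer " + API_KEY},
)
with urllib.request.urlopen(req) as resp:
    urls = [item["url"] for item in json.loads(resp.read())["data"]]
print(urls)                            # -> DALL_EImagesGenerated(imageUrls)
```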

Function: RequestChatGPTVision(String apiKey, String imageUrl, String prompt)

Purpose: This function sends a request to OpenAI's ChatGPT Vision API to analyze an image and provide insights based on the given prompt (a sketch of the request body follows the two URL-based functions below).

Parameters:

  • apiKey: Your OpenAI API key.

  • imageUrl: The URL of the image to analyze.

  • prompt: A text prompt to guide the analysis (e.g., "What's in this image?").

Function: RequestChatGPTVisionMultipleImages(String apiKey, YailList imageUrls, String prompt)

Purpose: This function sends a request to OpenAI's ChatGPT vision API to analyze multiple images and provide insights based on the given prompt.

Parameters:

  • apiKey: Your OpenAI API key.

  • imageUrls: A YailList containing the URLs of the images to analyze.

  • prompt: A text prompt to guide the analysis (e.g., "Compare these images").

RequestChatGPTVisionFromFile

Purpose: Analyzes a single local image file and provides insights based on a text prompt. Local files are sent as base64 data URLs, as sketched after the parameter lists below.

Parameters:

  • apiKey: Your OpenAI API key.

  • imagePath: The file path of the image to analyze.

  • prompt: A text prompt to guide the analysis (e.g., "What's in this image?").

  • detail: The desired level of detail for the analysis (low, high, or auto).

  • maxTokens: The maximum number of tokens allowed in the API response.

RequestChatGPTVisionMultipleImagesFromFile

Purpose: Analyzes multiple local image files and provides insights based on a text prompt.

Parameters:

  • apiKey: Your OpenAI API key.

  • imagePaths: A YailList containing the file paths of the images to analyze.

  • prompt: A text prompt to guide the analysis (e.g., "Compare these images").

  • detail: The desired level of detail for the analysis (e.g., "high").

  • maxTokens: The maximum number of tokens allowed in the API response.
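
For local files, the image presumably gets base64-encoded into a data URL, which takes the place of the http(s) URL in the message content; the "detail" field rides along in the same part. A short Python sketch (the file path is illustrative):

```python
import base64
import json

with open("/sdcard/DCIM/photo.jpg", "rb") as f:      # imagePath (placeholder)
    b64 = base64.b64encode(f.read()).decode("ascii")

image_part = {
    "type": "image_url",
    "image_url": {
        "url": f"data:image/jpeg;base64,{b64}",      # data URL instead of a link
        "detail": "auto",                            # low, high, or auto
    },
}
print(json.dumps(image_part)[:80], "...")
```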

Events:

  • ChatGPTVisionResponseReceived: This event is fired when the API response is successfully received and parsed. It provides the following parameters:

    • id: The unique ID of the response.

    • object: The type of object returned ("chat.completion").

    • model: The model used to generate the response.

    • role: The role of the response ("assistant").

    • content: The main content of the response, containing the analysis of the image.

  • ChatGPTVisionError(String errorMessage): This event is fired if an error occurs during the API request. It provides the error message.

  • The response content will vary depending on the image and the prompt provided.

ChatGPT Extension - Embeddings Functionality


1. GetEmbeddings(String apiKey, String text, String model)

  • Description: This function sends a request to OpenAI's Embeddings API to get the numerical representation (embedding) of a given text. A sketch of that request follows this section.

  • Parameters:

    • apiKey: Your OpenAI API key (required for authentication).

    • text: The text string you want to embed.

    • model: The specific embedding model you want to use (e.g., text-embedding-ada-002, text-embedding-3-small, or text-embedding-3-large).

  • Functionality:

    • It constructs an API request with your text and the chosen model.

    • It sends this request to OpenAI's server.

    • It then calls the processEmbeddingsAPIResponse function to handle the server's response.

  • Events Triggered:

    • EmbeddingsReceived: Fired upon a successful response, containing the embeddings.

    • EmbeddingsError: Fired if an error occurs during the request.
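
A minimal Python sketch of the request behind GetEmbeddings, using OpenAI's public /v1/embeddings endpoint (key and input text are placeholders):

```python
import json
import urllib.request

API_KEY = "sk-..."  # placeholder
body = {
    "model": "text-embedding-3-small",
    "input": "App Inventor makes building apps easy.",
}
req = urllib.request.Request(
    "https://api.openai.com/v1/embeddings",
    data=json.dumps(body).encode("utf-8"),
    headers={"Content-Type": "application/json",
             "Authorization": "Bearer " + API_KEY},
)
with urllib.request.urlopen(req) as resp:
    vector = json.loads(resp.read())["data"][0]["embedding"]
print(len(vector), vector[:5])   # a list of floats -> EmbeddingsReceived
```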


2. EmbeddingsReceived(String embeddings)

  • Description: This event is fired when the GetEmbeddings function successfully receives a response from the OpenAI API.

  • Parameter:

    • embeddings: The text's embedding, returned as a string representation of a list of numbers.


3. EmbeddingsError(String errorMessage)

  • Description: This event is fired when an error occurs at any point during the embedding request process.

  • Parameter:

    • errorMessage: A descriptive error message to help you understand the issue.

Preview:

I also use this extension in this project:

AIX file:

You can purchase the AIX and the AIA file here via PayPal. The two files cost $5; after you pay, you will be automatically redirected to the download URL of the extension.


Extension doesn't work for me.
When I use your AIA, I get an error message in App Inventor, and I can't start it at all in Kodular.


When I start my own project, nothing happens at all. When I then open the project in App Inventor or Kodular, it cannot be started.

The screen on the smartphone remains white.

Things don't always work in the Companion. Try it in a compiled APK.

First step:
Yes, as @Patryk_F said, try with a compiled APK, because this also happened to me in the Companion. I can confirm the AIA is working perfectly, as well as the extension.

Second step:
Check your balance in your OpenAI account.

Also try with this model: gpt-3.5-turbo-16k.


It looks the same on the PC.

The AIA is for AI2, not for Kodular.

The extension has been updated with new features.

You can now add OpenAI's TTS to your app.

New update: you can now generate images with DALL-E.

Hi,

I cannot build an APK with a simple screen and this extension.

App Inventor is unable to compile this project.
The compiler error output was
[ReadBuildInfo] Starting Task
[ReadBuildInfo] Task succeeded in 0.007 seconds
[LoadComponentInfo] Starting Task
[LoadComponentInfo] INFO: Generating assets...
[LoadComponentInfo] Component assets needed, n = 0
[LoadComponentInfo] INFO: Generating activities...
[LoadComponentInfo] Component activities needed, n = 0
[LoadComponentInfo] INFO: Component "com.mrkoder.chatgpt.ChatGPT" does not specify activityMetadata
[LoadComponentInfo] Component activity metadata needed, n = 0
[LoadComponentInfo] INFO: Generating broadcast receivers...
[LoadComponentInfo] INFO: Component "com.mrkoder.chatgpt.ChatGPT" does not specify broadcastReceivers
[LoadComponentInfo] INFO: Component "com.mrkoder.chatgpt.ChatGPT" does not specify services
[LoadComponentInfo] Component content providers needed, n = 0
[LoadComponentInfo] INFO: Generating libraries...
[LoadComponentInfo] INFO: Component "com.mrkoder.chatgpt.ChatGPT" does not specify libraries
[LoadComponentInfo] Libraries needed, n = 0
[LoadComponentInfo] INFO: Component "com.mrkoder.chatgpt.ChatGPT" does not specify metadata
[LoadComponentInfo] Component metadata needed, n = 0
[LoadComponentInfo] INFO: Generating Android minimum SDK...
[LoadComponentInfo] INFO: Generating native libraries...
[LoadComponentInfo] INFO: Component "com.mrkoder.chatgpt.ChatGPT" does not specify native
[LoadComponentInfo] Native Libraries needed, n = 0
[LoadComponentInfo] INFO: Generating permissions...
[LoadComponentInfo] INFO: Component "com.google.appinventor.components.runtime.Button" does not specify permissionConstraints
[LoadComponentInfo] INFO: Component "com.google.appinventor.components.runtime.Form" does not specify permissionConstraints
[LoadComponentInfo] INFO: Component "com.google.appinventor.components.runtime.Label" does not specify permissionConstraints
[LoadComponentInfo] INFO: Component "com.google.appinventor.components.runtime.TextBox" does not specify permissionConstraints
[LoadComponentInfo] INFO: Component "com.google.appinventor.components.runtime.WebViewer" does not specify permissionConstraints
[LoadComponentInfo] INFO: Component "com.mrkoder.chatgpt.ChatGPT" does not specify permissionConstraints
[LoadComponentInfo] usesLocation = False
[LoadComponentInfo] Permissions needed, n = 3
[LoadComponentInfo] INFO: Component "com.mrkoder.chatgpt.ChatGPT" does not specify queries
[LoadComponentInfo] INFO: Component "com.mrkoder.chatgpt.ChatGPT" does not specify services
[LoadComponentInfo] Component services needed, n = 0
[LoadComponentInfo] INFO: Generating component broadcast receivers...
[LoadComponentInfo] INFO: Component "com.mrkoder.chatgpt.ChatGPT" does not specify broadcastReceiver
[LoadComponentInfo] Task succeeded in 0.005 seconds
[PrepareAppIcon] Starting Task
[PrepareAppIcon] INFO: Creating mipmap dirs...
[PrepareAppIcon] INFO: Generating icons...
[PrepareAppIcon] Generating icons for mipmap-mdpi
[PrepareAppIcon] Generating icons for mipmap-hdpi
[PrepareAppIcon] Generating icons for mipmap-xhdpi
[PrepareAppIcon] Generating icons for mipmap-xxhdpi
[PrepareAppIcon] Generating icons for mipmap-xxxhdpi
[PrepareAppIcon] Task succeeded in 1.31 seconds
[XmlConfig] Starting Task
[XmlConfig] INFO: Creating animation xml
[XmlConfig] Creating zoom_enter.xml
[XmlConfig] Creating fadeout.xml
[XmlConfig] Creating slide_v_exit.xml
[XmlConfig] Creating fadein.xml
[XmlConfig] Creating zoom_exit.xml
[XmlConfig] Creating slide_v_enter.xml
[XmlConfig] Creating zoom_exit_reverse.xml
[XmlConfig] Creating slide_v_enter_reverse.xml
[XmlConfig] Creating zoom_enter_reverse.xml
[XmlConfig] Creating slide_enter_reverse.xml
[XmlConfig] Creating slide_exit.xml
[XmlConfig] Creating hold.xml
[XmlConfig] Creating slide_enter.xml
[XmlConfig] Creating slide_v_exit_reverse.xml
[XmlConfig] Creating slide_exit_reverse.xml
[XmlConfig] INFO: Creating style xml
[XmlConfig] INFO: Creating provider_path xml
[XmlConfig] INFO: Creating network_security_config xml
[XmlConfig] INFO: Generating adaptive icon file
[XmlConfig] INFO: Generating round adaptive icon file
[XmlConfig] INFO: Generating adaptive icon background file
[XmlConfig] Task succeeded in 0.007 seconds
[CreateManifest] Starting Task
[CreateManifest] INFO: Reading project specs...
[CreateManifest] VCode: 1
[CreateManifest] VName: 1.0
[CreateManifest] Min SDK 7
[CreateManifest] INFO: Writing screen 'appinventor.ai_jan_dolezal71.AI.Screen1'
[CreateManifest] Task succeeded in 0.001 seconds
[AttachNativeLibs] Starting Task
[AttachNativeLibs] Task succeeded in 0.001 seconds
[AttachAarLibs] Starting Task
[AttachAarLibs] Task succeeded in 0.177 seconds
[AttachCompAssets] Starting Task
[AttachCompAssets] Task succeeded in 0.001 seconds
[MergeResources] Starting Task
[MergeResources] Task succeeded in 0.247 seconds
[SetupLibs] Starting Task
[SetupLibs] Task succeeded in 0.0 seconds
[RunAapt] Starting Task
[RunAapt] Task succeeded in 0.913 seconds
[GenerateClasses] Starting Task
[GenerateClasses] INFO: Source File: appinventor/ai_jan_dolezal71/AI/Screen1.yail
[GenerateClasses] INFO: Libraries Classpath = /tmp/kawa7494451992233225591.jar:/tmp/acra-4.4.08709178649636589884.jar:/tmp/AndroidRuntime1543390751063037176.jar:/tmp/annotation5701149757438374068.jar:/tmp/appcompat3565059883028808012.jar:/tmp/asynclayoutinflater2206470934267205799.jar:/tmp/collection898118495409806128.jar:/tmp/constraintlayout4474543223810579800.jar:/tmp/constraintlayout-solver3451019390645928170.jar:/tmp/coordinatorlayout8570126845714751542.jar:/tmp/core4667604094654879971.jar:/tmp/core-common8491277206636314757.jar:/tmp/core-runtime4932128950006405215.jar:/tmp/cursoradapter4234717057408561384.jar:/tmp/customview8272222352384447782.jar:/tmp/documentfile3009691411001731242.jar:/tmp/drawerlayout704743265119639454.jar:/tmp/fragment1953118997388366003.jar:/tmp/interpolator4380482304581246067.jar:/tmp/legacy-support-core-ui4372073912920254156.jar:/tmp/legacy-support-core-utils5171087957186360901.jar:/tmp/lifecycle-common8758916494702035021.jar:/tmp/lifecycle-livedata5359840068998841090.jar:/tmp/lifecycle-livedata-core6005259685251203005.jar:/tmp/lifecycle-runtime1399425124260792392.jar:/tmp/lifecycle-viewmodel6096341747387473949.jar:/tmp/loader7320155931673431507.jar:/tmp/localbroadcastmanager3954707797140785584.jar:/tmp/print935223127219807010.jar:/tmp/slidingpanelayout7847088642646145036.jar:/tmp/swiperefreshlayout7953690956644193162.jar:/tmp/vectordrawable8241841050545394147.jar:/tmp/vectordrawable-animated7831293918911347909.jar:/tmp/versionedparcelable7603719417125265245.jar:/tmp/viewpager8477837741785703597.jar:/tmp/1705270838507_0.4527318455262811-0/youngandroidproject/../assets/external_comps/com.mrkoder.chatgpt/files/AndroidRuntime.jar:/tmp/1705270838507_0.4527318455262811-0/youngandroidproject/../build/classes:/tmp/android8207389629657748803.jar
[GenerateClasses] ERROR: Kawa compile has failed.
gnu.text.SyntaxException:
at kawa.Shell.run(Shell.java:257)
at kawa.Shell.runFile(Shell.java:490)
at kawa.Shell.runFileOrClass(Shell.java:428)
at kawa.repl.processArgs(repl.java:216)
at kawa.repl.main(repl.java:827)
[GenerateClasses] ERROR: Can't find class file for Screen 'Screen1'
[GenerateClasses] Task errored in 0.633 seconds

I bought this extension. It does not work well, and it forgets the conversation. You can see the photo below.

Can users customize the conversation style or tone using the ChatGPT extension?

Hi, I bought the extension and I keep getting these errors when I'm exporting it:
gnu.text.SyntaxException:
at kawa.Shell.run(Shell.java:257)
at kawa.Shell.runFile(Shell.java:490)
at kawa.Shell.runFileOrClass(Shell.java:428)
at kawa.repl.processArgs(repl.java:216)
at kawa.repl.main(repl.java:827)


Try to use the extension in a separate project and try to export that, because the AIA file has some problems. I will fix it soon.

@Okeditse_Nare
@Jeffrey
I have updated the AIA file, tested it and the extension, and sent them to you. Please check your DM.

Can someone please tell me why this isn't working?

Do you have access to the GPT-4 model? You have to pay to use this model.
These are the models of the API:

also this "GPT-4" is not the right way to add a model as input this should be for examples gpt-4 , gpt-4-0125-preview , gpt-3.5-turbo

You can read the documentation first.

New update: gpt-4-vision-preview has been added to the extension.


Unfortunately, it's still not working!

Please help, thanks.

You entered the prompts in the wrong way:

It should be like this :point_down:

Also, maxTokens should be a number, not a string, and try with 1000 tokens, not any big number!




Please read the full tutorial topic :point_down: