This notebook demonstrates how to use prompting to perform classification tasks using the Gemini API’s JS SDK.
LLMs can be used for tasks that require classifying content into predefined categories. This example shows how to categorize user comments left under a blog post, classifying each reply as spam, abusive, or offensive.
You can create your API key using Google AI Studio with a single click.
Remember to treat your API key like a password. Don’t accidentally save it in a notebook or source file you later commit to GitHub. In this notebook we will be storing the API key in a .env file. You can also set it as an environment variable or use a secret manager.
Another option is to set the API key as an environment variable. You can do this in your terminal with the following command:
$ export GEMINI_API_KEY="<YOUR_API_KEY>"
Load the API key
To load the API key from the .env file, we will use the dotenv package. This package loads environment variables from a .env file into process.env.
$ npm install dotenv
Then, we can load the API key in our code:
const dotenv = require("dotenv") as typeof import("dotenv");

dotenv.config({ path: "../../.env" });

const GEMINI_API_KEY = process.env.GEMINI_API_KEY ?? "";
if (!GEMINI_API_KEY) {
  throw new Error("GEMINI_API_KEY is not set in the environment variables");
}
console.log("GEMINI_API_KEY is set in the environment variables");
GEMINI_API_KEY is set in the environment variables
Note
In our particular case the .env file is two directories up from the notebook, hence the ../../ prefix. If the .env file is in the same directory as the notebook, you can omit the path option altogether.
With the new SDK, you only need to initialize a client with your API key (or OAuth if using Vertex AI). The model is now set in each call.
const google = require("@google/genai") as typeof import("@google/genai");

const ai = new google.GoogleGenAI({ apiKey: GEMINI_API_KEY });
Select a model
Now select the model you want to use in this guide, either by picking one from the list or writing its name down. Keep in mind that some models, such as the 2.5 series, are thinking models and thus take slightly longer to respond (cf. the thinking notebook for more details, and in particular to learn how to switch thinking off).
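The examples below reference a MODEL_ID constant. A minimal sketch, assuming the gemini-2.5-flash model (substitute any Gemini model name you prefer):

```typescript
// Assumed model ID; swap in another Gemini model name if you like.
const MODEL_ID = "gemini-2.5-flash";
```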
const CLASSIFICATION_PROMPT = `
  As a social media moderation system, your task is to categorize user comments under a post.
  Analyze the comment related to the topic and classify it into one of the following categories:

  Abusive
  Spam
  Offensive

  If the comment does not fit any of the above categories, classify it as: Neutral.
  Provide only the category as a response without explanations.`;

const classificationTemplate = (topic: string, comment: string) => `
  Topic: What can I do after highschool?
  Comment: You should do a gap year!
  Class: Neutral

  Topic: Where can I buy a cheap phone?
  Comment: You have just won an IPhone 15 Pro Max!!! Click the link to receive the prize!!!
  Class: Spam

  Topic: How long do you boil eggs?
  Comment: Are you stupid?
  Class: Offensive

  Topic: ${topic}
  Comment: ${comment}
  Class:`;
const tslab = require("tslab") as typeof import("tslab");

const spam_topic = `
  I am looking for a vet in our neighbourhood.
  Can anyone recommend someone good? Thanks.`;
const spam_comment = "You can win 1000$ by just following me!";

const spam_response = await ai.models.generateContent({
  model: MODEL_ID,
  contents: classificationTemplate(spam_topic, spam_comment),
  config: {
    temperature: 0.0,
    systemInstruction: CLASSIFICATION_PROMPT,
  },
});
tslab.display.markdown(spam_response.text ?? "");
Spam
const neutral_topic = "My computer froze. What should I do?";
const neutral_comment = "Try turning it off and on.";

const neutral_response = await ai.models.generateContent({
  model: MODEL_ID,
  contents: classificationTemplate(neutral_topic, neutral_comment),
  config: {
    temperature: 0.0,
    systemInstruction: CLASSIFICATION_PROMPT,
  },
});
tslab.display.markdown(neutral_response.text ?? "");
Neutral
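Even with the instruction to return only the category, a model may occasionally add whitespace or vary the casing. A minimal sketch of a validation helper (parseCategory is a hypothetical name, not part of the SDK) that normalizes the raw reply and falls back to Neutral when the output is unexpected:

```typescript
// The four categories the prompt asks the model to choose from.
const CATEGORIES = ["Abusive", "Spam", "Offensive", "Neutral"] as const;
type Category = (typeof CATEGORIES)[number];

// Hypothetical helper: trim the raw model reply, match it case-insensitively
// against the known categories, and fall back to "Neutral" otherwise.
function parseCategory(raw: string): Category {
  const cleaned = raw.trim();
  const match = CATEGORIES.find(
    (c) => c.toLowerCase() === cleaned.toLowerCase(),
  );
  return match ?? "Neutral";
}
```

You could then call parseCategory(spam_response.text ?? "") instead of displaying the text directly, which gives downstream code a guaranteed value from the category set.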
Next steps
Be sure to explore other prompting examples in the repository. Try writing prompts that classify your own data.