Gemini API: List models

This notebook demonstrates how to list the models that are available for you to use in the Gemini API, and how to find details about a model.

Setup

Install the Google GenAI SDK

Install the Google GenAI SDK from npm.

$ npm install @google/genai

Set up your API key

You can create your API key using Google AI Studio with a single click.

Remember to treat your API key like a password. Don’t accidentally save it in a notebook or source file you later commit to GitHub. In this notebook we will be storing the API key in a .env file. You can also set it as an environment variable or use a secret manager.

Here’s how to set it up in a .env file:

$ touch .env
$ echo "GEMINI_API_KEY=<YOUR_API_KEY>" >> .env
Tip

Another option is to set the API key as an environment variable. You can do this in your terminal with the following command:

$ export GEMINI_API_KEY="<YOUR_API_KEY>"

Load the API key

To load the API key from the .env file, we will use the dotenv package. This package loads environment variables from a .env file into process.env.

$ npm install dotenv

Then, we can load the API key in our code:

const dotenv = require("dotenv") as typeof import("dotenv");

dotenv.config({
  path: "../.env",
});

const GEMINI_API_KEY = process.env.GEMINI_API_KEY ?? "";
if (!GEMINI_API_KEY) {
  throw new Error("GEMINI_API_KEY is not set in the environment variables");
}
console.log("GEMINI_API_KEY is set in the environment variables");
GEMINI_API_KEY is set in the environment variables
Note

In our case, the .env file is one directory up from the notebook, hence the ../ in the path. If the .env file is in the same directory as the notebook, you can omit the path option altogether.

│
├── .env
└── quickstarts
    └── Models.ipynb

Initialize SDK Client

With the new SDK, you only need to initialize a client with your API key (or OAuth credentials if using Vertex AI). The model is specified in each call rather than on the client.

const google = require("@google/genai") as typeof import("@google/genai");

const ai = new google.GoogleGenAI({ apiKey: GEMINI_API_KEY });
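Because the model is chosen per request, switching models is just a matter of changing one field. A minimal sketch of the request shape (field names follow the @google/genai SDK; the actual generateContent call is commented out so the snippet stands alone without a live API key):

```typescript
// The model is selected per request, not when the client is created.
const request = {
  model: "gemini-2.0-flash", // any model name from the listing below
  contents: "Say hello in one word.",
};

// With the client from above, the call would be:
// const response = await ai.models.generateContent(request);
// console.log(response.text);

console.log(request.model);
```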

List models

Use models.list() to see what models are available, along with the actions each one supports. generateContent is the main method used for prompting.

const models = await ai.models.list();
let { page } = models;
while (page.length > 0) {
  for (const model of page) {
    console.log(`- ${model.name} (${model.displayName}) | [Actions: ${model.supportedActions?.join(", ")}]`);
  }
  page = models.hasNextPage() ? await models.nextPage() : [];
}
- models/embedding-gecko-001 (Embedding Gecko) | [Actions: embedText, countTextTokens]
- models/gemini-1.0-pro-vision-latest (Gemini 1.0 Pro Vision) | [Actions: generateContent, countTokens]
- models/gemini-pro-vision (Gemini 1.0 Pro Vision) | [Actions: generateContent, countTokens]
- models/gemini-1.5-pro-latest (Gemini 1.5 Pro Latest) | [Actions: generateContent, countTokens]
- models/gemini-1.5-pro-002 (Gemini 1.5 Pro 002) | [Actions: generateContent, countTokens, createCachedContent]
- models/gemini-1.5-pro (Gemini 1.5 Pro) | [Actions: generateContent, countTokens]
- models/gemini-1.5-flash-latest (Gemini 1.5 Flash Latest) | [Actions: generateContent, countTokens]
- models/gemini-1.5-flash (Gemini 1.5 Flash) | [Actions: generateContent, countTokens]
- models/gemini-1.5-flash-002 (Gemini 1.5 Flash 002) | [Actions: generateContent, countTokens, createCachedContent]
- models/gemini-1.5-flash-8b (Gemini 1.5 Flash-8B) | [Actions: createCachedContent, generateContent, countTokens]
- models/gemini-1.5-flash-8b-001 (Gemini 1.5 Flash-8B 001) | [Actions: createCachedContent, generateContent, countTokens]
- models/gemini-1.5-flash-8b-latest (Gemini 1.5 Flash-8B Latest) | [Actions: createCachedContent, generateContent, countTokens]
- models/gemini-2.5-pro-exp-03-25 (Gemini 2.5 Pro Experimental 03-25) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.5-pro-preview-03-25 (Gemini 2.5 Pro Preview 03-25) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.5-flash-preview-04-17 (Gemini 2.5 Flash Preview 04-17) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.5-flash-preview-05-20 (Gemini 2.5 Flash Preview 05-20) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.5-flash-preview-04-17-thinking (Gemini 2.5 Flash Preview 04-17 for cursor testing) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.5-pro-preview-05-06 (Gemini 2.5 Pro Preview 05-06) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.5-pro-preview-06-05 (Gemini 2.5 Pro Preview) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.0-flash-exp (Gemini 2.0 Flash Experimental) | [Actions: generateContent, countTokens, bidiGenerateContent]
- models/gemini-2.0-flash (Gemini 2.0 Flash) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.0-flash-001 (Gemini 2.0 Flash 001) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.0-flash-exp-image-generation (Gemini 2.0 Flash (Image Generation) Experimental) | [Actions: generateContent, countTokens, bidiGenerateContent]
- models/gemini-2.0-flash-lite-001 (Gemini 2.0 Flash-Lite 001) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.0-flash-lite (Gemini 2.0 Flash-Lite) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.0-flash-preview-image-generation (Gemini 2.0 Flash Preview Image Generation) | [Actions: generateContent, countTokens]
- models/gemini-2.0-flash-lite-preview-02-05 (Gemini 2.0 Flash-Lite Preview 02-05) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.0-flash-lite-preview (Gemini 2.0 Flash-Lite Preview) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.0-pro-exp (Gemini 2.0 Pro Experimental) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.0-pro-exp-02-05 (Gemini 2.0 Pro Experimental 02-05) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-exp-1206 (Gemini Experimental 1206) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.0-flash-thinking-exp-01-21 (Gemini 2.5 Flash Preview 04-17) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.0-flash-thinking-exp (Gemini 2.5 Flash Preview 04-17) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.0-flash-thinking-exp-1219 (Gemini 2.5 Flash Preview 04-17) | [Actions: generateContent, countTokens, createCachedContent, batchGenerateContent]
- models/gemini-2.5-flash-preview-tts (Gemini 2.5 Flash Preview TTS) | [Actions: countTokens, generateContent]
- models/gemini-2.5-pro-preview-tts (Gemini 2.5 Pro Preview TTS) | [Actions: countTokens, generateContent]
- models/learnlm-2.0-flash-experimental (LearnLM 2.0 Flash Experimental) | [Actions: generateContent, countTokens]
- models/gemma-3-1b-it (Gemma 3 1B) | [Actions: generateContent, countTokens]
- models/gemma-3-4b-it (Gemma 3 4B) | [Actions: generateContent, countTokens]
- models/gemma-3-12b-it (Gemma 3 12B) | [Actions: generateContent, countTokens]
- models/gemma-3-27b-it (Gemma 3 27B) | [Actions: generateContent, countTokens]
- models/gemma-3n-e4b-it (Gemma 3n E4B) | [Actions: generateContent, countTokens]
- models/embedding-001 (Embedding 001) | [Actions: embedContent]
- models/text-embedding-004 (Text Embedding 004) | [Actions: embedContent]
- models/gemini-embedding-exp-03-07 (Gemini Embedding Experimental 03-07) | [Actions: embedContent, countTextTokens, countTokens]
- models/gemini-embedding-exp (Gemini Embedding Experimental) | [Actions: embedContent, countTextTokens, countTokens]
- models/aqa (Model that performs Attributed Question Answering.) | [Actions: generateAnswer]
- models/imagen-3.0-generate-002 (Imagen 3.0 002 model) | [Actions: predict]
- models/veo-2.0-generate-001 (Veo 2) | [Actions: predictLongRunning]
- models/gemini-2.5-flash-preview-native-audio-dialog (Gemini 2.5 Flash Preview Native Audio Dialog) | [Actions: countTokens, bidiGenerateContent]
- models/gemini-2.5-flash-preview-native-audio-dialog-rai-v3 (Gemini 2.5 Flash Preview Native Audio Dialog RAI v3) | [Actions: countTokens, bidiGenerateContent]
- models/gemini-2.5-flash-exp-native-audio-thinking-dialog (Gemini 2.5 Flash Exp Native Audio Thinking Dialog) | [Actions: countTokens, bidiGenerateContent]
- models/gemini-2.0-flash-live-001 (Gemini 2.0 Flash 001) | [Actions: bidiGenerateContent, countTokens]

These models support embedContent, used for embeddings:

const models_1 = await ai.models.list();
let page_1 = models_1.page;
while (page_1.length > 0) {
  for (const model of page_1) {
    if (model.supportedActions?.includes("embedContent")) {
      console.log(`- ${model.name} (${model.displayName}) | [Actions: ${model.supportedActions.join(", ")}]`);
    }
  }
  page_1 = models_1.hasNextPage() ? await models_1.nextPage() : [];
}
- models/embedding-001 (Embedding 001) | [Actions: embedContent]
- models/text-embedding-004 (Text Embedding 004) | [Actions: embedContent]
- models/gemini-embedding-exp-03-07 (Gemini Embedding Experimental 03-07) | [Actions: embedContent, countTextTokens, countTokens]
- models/gemini-embedding-exp (Gemini Embedding Experimental) | [Actions: embedContent, countTextTokens, countTokens]
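The filtering loop above can be factored into a small reusable helper. This is a sketch over plain records; the field names (name, supportedActions) match the objects the pager yields:

```typescript
// Minimal shape of a listed model, as printed in the output above.
interface ListedModel {
  name?: string;
  supportedActions?: string[];
}

// Return the names of all models that support a given action.
function modelsSupporting(models: ListedModel[], action: string): string[] {
  return models
    .filter((m) => m.supportedActions?.includes(action))
    .map((m) => m.name ?? "(unnamed)");
}

// Example over two records copied from the listing above.
const sample: ListedModel[] = [
  { name: "models/text-embedding-004", supportedActions: ["embedContent"] },
  { name: "models/gemini-2.0-flash", supportedActions: ["generateContent", "countTokens"] },
];
console.log(modelsSupporting(sample, "embedContent"));
```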

Find details about a model

You can see more details about a model, including its inputTokenLimit and outputTokenLimit, as follows.

const models_2 = await ai.models.list();
let page_2 = models_2.page;
while (page_2.length > 0) {
  for (const model of page_2) {
    if (model.name === "models/gemini-2.0-flash") {
      console.log(JSON.stringify(model, null, 2));
    }
  }
  page_2 = models_2.hasNextPage() ? await models_2.nextPage() : [];
}
{
  "name": "models/gemini-2.0-flash",
  "displayName": "Gemini 2.0 Flash",
  "description": "Gemini 2.0 Flash",
  "version": "2.0",
  "tunedModelInfo": {},
  "inputTokenLimit": 1048576,
  "outputTokenLimit": 8192,
  "supportedActions": [
    "generateContent",
    "countTokens",
    "createCachedContent",
    "batchGenerateContent"
  ]
}
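The token limits in the JSON above are worth checking programmatically before sending large prompts. A small helper over the fields shown (a sketch that uses only the output fields printed above):

```typescript
// Subset of the model detail fields shown in the JSON output above.
interface ModelDetails {
  name: string;
  inputTokenLimit?: number;
  outputTokenLimit?: number;
}

// Format a one-line summary of a model's context window.
function contextSummary(m: ModelDetails): string {
  return `${m.name}: in=${m.inputTokenLimit ?? "?"} tokens, out=${m.outputTokenLimit ?? "?"} tokens`;
}

console.log(contextSummary({
  name: "models/gemini-2.0-flash",
  inputTokenLimit: 1048576,
  outputTokenLimit: 8192,
}));
```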

Learning more

  • To learn how to use a model for prompting, see the Prompting quickstart.
  • To learn how to use a model for embedding, see the Embedding quickstart.
  • For more information on models, visit the Gemini models documentation.