You can create your API key using Google AI Studio with a single click.
Remember to treat your API key like a password. Don’t accidentally save it in a notebook or source file you later commit to GitHub. In this notebook we will be storing the API key in a .env file. You can also set it as an environment variable or use a secret manager.
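If you go the .env route, the file is just a plain-text KEY=VALUE line, for example:

GEMINI_API_KEY="<YOUR_API_KEY>"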
To set it as an environment variable instead, run the following command in your terminal:
$ export GEMINI_API_KEY="<YOUR_API_KEY>"
Load the API key
To load the API key from the .env file, we will use the dotenv package. This package loads environment variables from a .env file into process.env.
$ npm install dotenv
Then, we can load the API key in our code:
const dotenv = require("dotenv") as typeof import("dotenv");

dotenv.config({ path: "../../.env" });

const GEMINI_API_KEY = process.env.GEMINI_API_KEY ?? "";
if (!GEMINI_API_KEY) {
  throw new Error("GEMINI_API_KEY is not set in the environment variables");
}
console.log("GEMINI_API_KEY is set in the environment variables");
GEMINI_API_KEY is set in the environment variables
Note
In our particular case, the .env file is two directories up from the notebook, hence we use ../../ to go up two directories. If the .env file is in the same directory as the notebook, you can omit the path option altogether, since dotenv.config() looks for .env in the current working directory by default.
Now select the model you want to use in this guide, either by picking one from the list or typing its name. Keep in mind that some models, such as the 2.5 series, are thinking models and thus take slightly more time to respond (see the thinking notebook for more details, and in particular for how to switch thinking off).
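As a minimal sketch (the model name below is just an illustrative choice, not prescribed by the notebook):

const MODEL_ID = "gemini-2.5-flash"; // any model from the list works here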
Each file with a matching path is loaded and split by RecursiveCharacterTextSplitter. In this example, the splitter is told that the files are written in Python, which helps it split the files without producing documents that lack context.
The SupportedTextSplitterLanguages literal provides the common separators used in the most popular programming languages, which lowers the chance of a class or function being split in the middle, as sketched below.
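To make this concrete, here is a sketch of the loading and splitting step in LangChain JS. The directory path, glob mapping, and chunk sizes are assumptions for illustration, not values from the original notebook (depending on your langchain version, RecursiveCharacterTextSplitter may live in langchain/text_splitter instead of @langchain/textsplitters):

import { DirectoryLoader } from "langchain/document_loaders/fs/directory";
import { TextLoader } from "langchain/document_loaders/fs/text";
import { RecursiveCharacterTextSplitter } from "@langchain/textsplitters";

// Load every .py file under the target directory (the path is an assumption).
const loader = new DirectoryLoader("langchain_google_genai", {
  ".py": (path) => new TextLoader(path),
});
const rawDocs = await loader.load();

// "python" selects language-aware separators (class/def boundaries, etc.),
// so chunks are less likely to cut a class or function in half.
const splitter = RecursiveCharacterTextSplitter.fromLanguage("python", {
  chunkSize: 2000, // illustrative values; tune for your corpus
  chunkOverlap: 200,
});
const docs = await splitter.splitDocuments(rawDocs);

The callQAChain helper invoked in the cells below is not shown in this excerpt. One plausible shape, assuming a MemoryVectorStore over the split documents and the Google GenAI integrations from @langchain/google-genai (a parameter such as model may be named modelName in older versions), is:

import { MemoryVectorStore } from "langchain/vectorstores/memory";
import { ChatGoogleGenerativeAI, GoogleGenerativeAIEmbeddings } from "@langchain/google-genai";
import { ChatPromptTemplate } from "@langchain/core/prompts";
import { StringOutputParser } from "@langchain/core/output_parsers";

const embeddings = new GoogleGenerativeAIEmbeddings({
  apiKey: GEMINI_API_KEY,
  model: "text-embedding-004", // assumed embedding model
});
const vectorStore = await MemoryVectorStore.fromDocuments(docs, embeddings);
const retriever = vectorStore.asRetriever();

const llm = new ChatGoogleGenerativeAI({ apiKey: GEMINI_API_KEY, model: MODEL_ID });
const prompt = ChatPromptTemplate.fromTemplate(
  "Answer the question using only the context below.\n\nContext:\n{context}\n\nQuestion: {question}"
);

async function callQAChain(question: string): Promise<void> {
  // Retrieve the most relevant chunks and stuff them into the prompt.
  const relevant = await retriever.invoke(question);
  const context = relevant.map((d) => d.pageContent).join("\n\n");
  const answer = await prompt
    .pipe(llm)
    .pipe(new StringOutputParser())
    .invoke({ context, question });
  console.log("Answer:\n" + answer);
}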
await callQAChain("What is the return type of embedding models?");
Answer:
The return type of embedding models depends on whether a single text or a list of texts is being embedded:
For embedding a single text (e.g., embed_query or aembed_query), the return type is a List[float].
For embedding a list of texts (e.g., embed_documents or aembed_documents), the return type is a List[List[float]], where each inner list is the embedding for a corresponding text.
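The answer above describes the Python interface; the LangChain JS Embeddings interface mirrors it, as a quick check with the embeddings object from the sketch above shows:

// embedQuery returns one vector; embedDocuments returns one vector per input text.
const single: number[] = await embeddings.embedQuery("What is a vector store?");
const batch: number[][] = await embeddings.embedDocuments(["first text", "second text"]);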
await callQAChain("What classes are related to Attributed Question and Answering.");
Answer:
The classes related to Attributed Question and Answering (AQA) are:
GenAIAqa: This is the main class representing Google’s Attributed Question and Answering service. It’s a RunnableSerializable that takes AqaInput and returns AqaOutput.
AqaInput: A Pydantic model defining the input structure for GenAIAqa.invoke, which includes prompt and source_passages.
AqaOutput: A Pydantic model defining the output structure from GenAIAqa.invoke, which includes the answer, attributed_passages, and answerable_probability.
_AqaModel: An internal wrapper class used by GenAIAqa to interact with the underlying Google Generative Language AQA API.
GroundedAnswer: A dataclass used internally (specifically by _AqaModel) to structure the response received from the generate_answer API call before it’s converted into AqaOutput.
Passage: A dataclass used within GroundedAnswer to represent an individual attributed passage, containing its text and id.
GoogleVectorStore: While not an AQA class itself, it contains an as_aqa method that constructs and returns a Runnable[str, AqaOutput] which includes GenAIAqa, making it a key class for integrating with AQA functionality.
await callQAChain("What are the dependencies of the GenAIAqa class?");
Answer:
The GenAIAqa class has the following dependencies:
langchain_core:
RunnableSerializable
RunnableConfig
Document (indirectly, via _toAqaInput used in as_aqa method from GoogleVectorStore)
RunnablePassthrough, RunnableLambda (used in as_aqa method)
pydantic:
BaseModel
PrivateAttr
google.ai.generativelanguage (aliased as genai):
GenerativeServiceClient
AnswerStyle (from GenerateAnswerRequest)
SafetySetting
Internal modules/classes within langchain_google_genai:
_genai_extension (aliased as genaix): This module provides functions like build_generative_service, generate_answer, and the GroundedAnswer type.
_AqaModel (an internal wrapper class)
AqaInput (its input model)
AqaOutput (its output model)
Summary
The Gemini API works great with LangChain. The integration is seamless and provides an easy interface for:
loading and splitting files
creating an in-memory database with embedding information
answering questions based on context from files
What’s next?
This notebook showed only one possible use case for LangChain with the Gemini API. You can find many more here.