Self-ask prompting is similar to chain-of-thought prompting, but instead of reasoning step by step within a single answer, the model asks itself follow-up questions that help it answer the original query. Like chain of thought, it encourages the model to think analytically.
You can create your API key using Google AI Studio with a single click.
Remember to treat your API key like a password. Don’t accidentally save it in a notebook or source file you later commit to GitHub. In this notebook we will be storing the API key in a .env file. You can also set it as an environment variable or use a secret manager.
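For example, the .env file contains nothing more than the key assignment (the value below is a placeholder, not a real key):

GEMINI_API_KEY="<YOUR_API_KEY>"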
Another option is to set the API key as an environment variable. You can do this in your terminal with the following command:
$ export GEMINI_API_KEY="<YOUR_API_KEY>"
Load the API key
To load the API key from the .env file, we will use the dotenv package. This package loads environment variables from a .env file into process.env.
$ npm install dotenv
Then, we can load the API key in our code:
const dotenv = require("dotenv") as typeof import("dotenv");

dotenv.config({ path: "../../.env" });

const GEMINI_API_KEY = process.env.GEMINI_API_KEY ?? "";
if (!GEMINI_API_KEY) {
  throw new Error("GEMINI_API_KEY is not set in the environment variables");
}
console.log("GEMINI_API_KEY is set in the environment variables");
GEMINI_API_KEY is set in the environment variables
Note
In our particular case the .env file is two directories up from the notebook, hence we need to use ../../ to go up two directories. If the .env file is in the same directory as the notebook, you can omit the path option altogether.
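In that case the call reduces to the default, which loads .env from the current working directory:

dotenv.config();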
With the new SDK, you now only need to initialize a client with your API key (or OAuth if using Vertex AI). The model is now set in each call.
const google = require("@google/genai") as typeof import("@google/genai");
const ai = new google.GoogleGenAI({ apiKey: GEMINI_API_KEY });
Select a model
Now select the model you want to use in this guide, either by picking one from the list or typing its name. Keep in mind that some models, like the 2.5 ones, are thinking models and thus take slightly more time to respond (cf. the thinking notebook for more details, and in particular to learn how to switch thinking off).
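For example (the model ID below is just one option; substitute any Gemini model you have access to):

const MODEL_ID = "gemini-2.5-flash";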
Now let's try self-ask prompting. Note how the prompt contains one fully worked example that shows the model the follow-up format, then asks a new question:

const tslab = require("tslab") as typeof import("tslab");

const prompt = `
Question: Who was the president of the United States when Mozart died?
Are follow up questions needed?: yes.
Follow up: When did Mozart die?
Intermediate answer: 1791.
Follow up: Who was the president of the United States in 1791?
Intermediate answer: George Washington.
Final answer: When Mozart died, George Washington was the president of the USA.

Question: Where did the Emperor of Japan, who ruled the year Maria Skłodowska was born, die?
`;

const response = await ai.models.generateContent({
  model: MODEL_ID,
  contents: prompt,
});
tslab.display.markdown(response.text ?? "");
Are follow up questions needed?: yes.
Follow up: When was Maria Skłodowska born?
Intermediate answer: 1867.
Follow up: Who was the Emperor of Japan in 1867?
Intermediate answer: Emperor Meiji.
Follow up: Where did Emperor Meiji die?
Intermediate answer: The Imperial Palace, Tokyo.
Final answer: The Emperor of Japan who ruled the year Maria Skłodowska was born was Emperor Meiji, and he died at the Imperial Palace in Tokyo.
Additional note
Self-ask prompting works well with function calling. Follow-up questions can be used as input to a function that, for example, searches the internet. The question and the function's answer can then be added back to the prompt, and on the next query the model can either make another function call or return the final answer. A sketch of this loop follows.
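Here is a minimal sketch of such a loop, reusing the ai client and MODEL_ID from above. The searchInternet helper is hypothetical (stand in your own search API or the SDK's function-calling support); everything else just parses the self-ask format from the model's text.

// Hypothetical search helper; replace with a real search-API call.
async function searchInternet(query: string): Promise<string> {
  return `stubbed answer for: ${query}`;
}

async function selfAskWithSearch(question: string): Promise<string> {
  // In practice you would prefix the few-shot example prompt from above.
  let transcript = `Question: ${question}\n`;
  // Cap the number of follow-up rounds to avoid an endless loop.
  for (let turn = 0; turn < 5; turn++) {
    const response = await ai.models.generateContent({
      model: MODEL_ID,
      contents: transcript,
    });
    const text = response.text ?? "";
    // If the model produced a final answer, we are done.
    const final = text.match(/Final answer:([\s\S]*)/);
    if (final) return final[1].trim();
    // Otherwise, answer the first follow-up question with the tool...
    const followUp = text.match(/Follow up:(.*)/);
    if (!followUp) return text.trim(); // no structured follow-up; return as-is
    const answer = await searchInternet(followUp[1].trim());
    // ...and add the question/answer pair back to the prompt.
    transcript += `Follow up: ${followUp[1].trim()}\nIntermediate answer: ${answer}\n`;
  }
  return transcript;
}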