Gemini API: Safety Quickstart

The Gemini API has adjustable safety settings. This notebook walks you through how to use them. You’ll write a prompt that’s blocked, see the reason why, and then adjust the filters to unblock it.

Safety is an important topic, and you can learn more with the links at the end of this notebook. Here, you will focus on the code.

Setup

Install the Google GenAI SDK

Install the Google GenAI SDK from npm.

$ npm install @google/genai

Set up your API key

You can create your API key using Google AI Studio with a single click.

Remember to treat your API key like a password. Don’t accidentally save it in a notebook or source file you later commit to GitHub. In this notebook we will be storing the API key in a .env file. You can also set it as an environment variable or use a secret manager.

Here’s how to set it up in a .env file:

$ touch .env
$ echo "GEMINI_API_KEY=<YOUR_API_KEY>" >> .env
Tip

Another option is to set the API key as an environment variable. You can do this in your terminal with the following command:

$ export GEMINI_API_KEY="<YOUR_API_KEY>"

Load the API key

To load the API key from the .env file, we will use the dotenv package. This package loads environment variables from a .env file into process.env.

$ npm install dotenv

Then, we can load the API key in our code:

const dotenv = require("dotenv") as typeof import("dotenv");

dotenv.config({
  path: "../.env",
});

const GEMINI_API_KEY = process.env.GEMINI_API_KEY ?? "";
if (!GEMINI_API_KEY) {
  throw new Error("GEMINI_API_KEY is not set in the environment variables");
}
console.log("GEMINI_API_KEY is set in the environment variables");
GEMINI_API_KEY is set in the environment variables
Note

In our particular case the .env file is one directory up from the notebook, hence we need ../ to go up one directory. If the .env file is in the same directory as the notebook, you can omit the path option altogether.

│
├── .env
└── quickstarts
    └── Safety.ipynb
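
If the .env file were in the same directory as the notebook, a minimal sketch of the call (reusing the dotenv instance required above) would be:

// With no path option, dotenv looks for a .env file in the current working directory
dotenv.config();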

Initialize SDK Client

With the new SDK, you only need to initialize a client with your API key (or OAuth if using Vertex AI). The model is now set in each call.

const google = require("@google/genai") as typeof import("@google/genai");

const ai = new google.GoogleGenAI({ apiKey: GEMINI_API_KEY });

Select a model

Now select the model you want to use in this guide, either by selecting one in the list or writing it down. Keep in mind that some models, like the 2.5 ones, are thinking models and thus take slightly longer to respond (cf. the thinking notebook for more details, and in particular to learn how to switch thinking off).

const tslab = require("tslab") as typeof import("tslab");

const MODEL_ID = "gemini-2.5-flash-preview-05-20";
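
The 2.5 models think by default, which adds latency. If you want quicker responses in this guide, one option is to give the request a zero thinking budget. This is a minimal sketch, assuming the thinkingConfig option of generateContent is supported by the model you selected:

// Sketch: disable thinking by setting the thinking budget to 0 tokens.
const quick_response = await ai.models.generateContent({
  model: MODEL_ID,
  contents: "Hello!",
  config: {
    thinkingConfig: { thinkingBudget: 0 },
  },
});
console.log(quick_response.text);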

Send your prompt request to Gemini

Pick the prompt you want to use to test the safety filter settings. An example could be Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark, which was previously tested and triggered the HARM_CATEGORY_HARASSMENT and HARM_CATEGORY_DANGEROUS_CONTENT categories.

The result returned by the generateContent method is a GenerateContentResponse.

const unsafe_response = await ai.models.generateContent({
  model: MODEL_ID,
  contents: `
    I support Martians Soccer Club and I think Jupiterians Football Club
    sucks! Write an ironic phrase about them.
  `,
});
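
Before looking at the candidates, you can also check whether the prompt itself was rejected. This is a hedged sketch, assuming the promptFeedback field of the GenerateContentResponse:

// Sketch: if the prompt itself was blocked, promptFeedback.blockReason is set
// and no candidates are returned.
if (unsafe_response.promptFeedback?.blockReason) {
  console.log("Prompt blocked:", unsafe_response.promptFeedback.blockReason);
} else {
  console.log("Prompt was not blocked");
}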

This response object gives you safety feedback about the candidate answers Gemini generates for you.

For each candidate answer you need to check its finishReason (for example, response.candidates[0].finishReason).

As described in the Gemini API safety filters documentation:

  • if candidate.finishReason is FinishReason.STOP, your generation request ran successfully
  • if candidate.finishReason is FinishReason.SAFETY, your generation request was blocked for safety reasons. It also means that response.text will be empty.
console.log(unsafe_response.candidates?.[0]?.finishReason);
STOP

If the finishReason is FinishReason.SAFETY, you can check which filter caused the block by inspecting the safetyRatings list of the candidate answer:

console.log(unsafe_response.candidates?.[0]?.safetyRatings);
undefined
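
Here the ratings came back undefined because nothing was flagged. When a candidate is blocked, each entry in safetyRatings reports a harm category and a probability; a minimal sketch of inspecting them:

// Sketch: print each safety rating of the first candidate, if any were returned.
for (const rating of unsafe_response.candidates?.[0]?.safetyRatings ?? []) {
  console.log(`${rating.category}: ${rating.probability}`);
}
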
tslab.display.markdown(unsafe_response.text ?? "");

Here are a few ironic phrases about Jupiterians Football Club, playing on the idea that they “suck”:

Option 1 (Feigned admiration for their unique approach):

“Jupiterians Football Club: Consistently expanding our understanding of what ‘football’ can be… which is often not very much like actual football.”

Option 2 (Focus on their generosity to opponents):

“Jupiterians FC is a shining example of sportsmanship; they’re truly dedicated to making the opposing team feel like champions.”

Option 3 (Understated “talent”):

“You have to admire Jupiterians Football Club’s unwavering commitment to… an extremely long-term rebuilding process.”

Option 4 (Playing on “gravity” for the league table):

“Jupiterians FC brings a certain… gravitational pull to the bottom of the league table.”

Option 5 (Complimenting their effort, not results):

“Watching Jupiterians Football Club play, you can really see their dedication to trying.”

Choose the one that best fits your style!

Customizing safety settings

Depending on the scenario you are working with, it may be necessary to customize the safety filters' behavior to allow a certain degree of unsafe results.

To make this customization you must define a safetySettings config as part of your generateContent() request.

Important

In line with Google's commitment to responsible AI development and its AI Principles, for some prompts Gemini will refuse to generate results even if you set all the filters to none.

const unsafe_response_1 = await ai.models.generateContent({
  model: MODEL_ID,
  contents: `
    I support Martians Soccer Club and I think Jupiterians Football Club
    sucks! Write an ironic phrase about them.
  `,
  config: {
    safetySettings: [
      // Relax all four adjustable filters so this prompt is not blocked.
      {
        category: google.HarmCategory.HARM_CATEGORY_HATE_SPEECH,
        threshold: google.HarmBlockThreshold.BLOCK_NONE,
      },
      {
        category: google.HarmCategory.HARM_CATEGORY_HARASSMENT,
        threshold: google.HarmBlockThreshold.BLOCK_NONE,
      },
      {
        category: google.HarmCategory.HARM_CATEGORY_SEXUALLY_EXPLICIT,
        threshold: google.HarmBlockThreshold.BLOCK_NONE,
      },
      {
        category: google.HarmCategory.HARM_CATEGORY_DANGEROUS_CONTENT,
        threshold: google.HarmBlockThreshold.BLOCK_NONE,
      },
    ],
  },
});

Checking the candidate.finishReason again, if the request was not too unsafe, it should now show FinishReason.STOP, which means that the request was successfully processed by Gemini.

console.log(unsafe_response_1.candidates?.[0]?.finishReason);
STOP

Since the request was successfully generated, you can check the result in response.text:

tslab.display.markdown(unsafe_response_1.text ?? "");

“Jupiterians Football Club: They truly excel at making every other team in the league feel like champions.”

And if you check the safety filter ratings, since you set all the filters to be ignored, no filtering category was triggered:

console.log(unsafe_response_1.candidates?.[0]?.safetyRatings);
undefined

Learning more

Learn more with these articles on safety guidance and safety settings.

Useful API references

The Gemini API has the following configurable safety settings (HARM_CATEGORY_UNSPECIFIED is the unset default rather than a filter); a sketch after this list shows how to print the categories your SDK version exposes:

  • HARM_CATEGORY_CIVIC_INTEGRITY
  • HARM_CATEGORY_DANGEROUS_CONTENT
  • HARM_CATEGORY_HARASSMENT
  • HARM_CATEGORY_SEXUALLY_EXPLICIT
  • HARM_CATEGORY_HATE_SPEECH
  • HARM_CATEGORY_UNSPECIFIED
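
As a quick check, you can print the category names that your installed SDK version exposes (newer versions may list additional categories):

// Print every harm category defined by the installed version of @google/genai.
console.log(Object.values(google.HarmCategory));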

You can refer to the safety settings using either their full name or aliases like DANGEROUS.

The HarmCategory enum covers the harm categories used by Gemini models.

  • When specifying enum values the SDK will accept the enum values themselves, or their integer or string representations.
  • The SDK will also accept abbreviated string representations: ["HARM_CATEGORY_DANGEROUS_CONTENT", "DANGEROUS_CONTENT", "DANGEROUS"] are all valid. Strings are case insensitive.