While LLMs are powerful, on their own they can only generate content. To let them interact with the world, perform tasks, and access real-time information, whether that means reading a file, querying a database, or making an API call, you can use the Model Context Protocol (MCP), a standardized way for LLMs to interact with external systems.
MCP was designed by Anthropic as a standard way for AI applications and agents to connect to and work with your data sources (e.g. local files, databases, or content repositories) and tools (e.g. GitHub, Google Maps, or Puppeteer).
How does it work?
At a very high level, MCP has three participants:
MCP Client: A component that maintains a connection to an MCP server and obtains context from it for the MCP host to use. It can also invoke tools and read resources exposed by the server.
MCP Host: The AI application that coordinates and manages one or more MCP clients, e.g. Claude Desktop or VS Code.
MCP Server: A local or remote server that provides context to the MCP client by exposing tools, resources, and prompts.
You can create your API key using Google AI Studio with a single click.
Remember to treat your API key like a password. Don’t accidentally save it in a notebook or source file you later commit to GitHub. In this notebook we will be storing the API key in a .env file. You can also set it as an environment variable or use a secret manager.
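The .env file itself just needs a single line with your key (the placeholder below stands in for your actual key):
GEMINI_API_KEY="<YOUR_API_KEY>"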
Another option is to set the API key as an environment variable. You can do this in your terminal with the following command:
$ export GEMINI_API_KEY="<YOUR_API_KEY>"
Load the API key
To load the API key from the .env file, we will use the dotenv package. This package loads environment variables from a .env file into process.env.
$ npm install dotenv
Then, we can load the API key in our code:
const dotenv = require("dotenv") as typeof import("dotenv");

dotenv.config({ path: "../../.env" });

const GEMINI_API_KEY = process.env.GEMINI_API_KEY ?? "";
if (!GEMINI_API_KEY) {
  throw new Error("GEMINI_API_KEY is not set in the environment variables");
}
console.log("GEMINI_API_KEY is set in the environment variables");
GEMINI_API_KEY is set in the environment variables
Note
In our particular case the .env file is two directories up from the notebook, hence we use ../../ to go up two directories. If the .env file is in the same directory as the notebook, you can omit the path option altogether.
Now select the model you want to use in this guide, either by picking one from the list or typing its name. Keep in mind that some models, like the 2.5 ones, are thinking models and thus take slightly more time to respond (cf. the thinking notebook for more details, and in particular to learn how to switch thinking off).
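The calls later in this guide reference a MODEL_ID constant, so define it now. A minimal sketch, assuming you go with gemini-2.5-flash (swap in whichever model you selected):
// Model used for all generateContent calls below; "gemini-2.5-flash" is just an example choice.
const MODEL_ID = "gemini-2.5-flash";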
For this guide you will use one of the MCP servers provided by Anthropic to demonstrate the capabilities of the MCP protocol. For this particular example, you will set up the Time MCP Server:
$ pip install mcp-server-time
Then, you can run the server with the following command:
$ python -m mcp_server_time
Connect to the MCP server
You will use the MCP TypeScript SDK to connect to the MCP server. The SDK provides a Client class that you can use to connect to the server and interact with it.
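A minimal connection sketch follows. It assumes the @modelcontextprotocol/sdk package is installed (npm install @modelcontextprotocol/sdk) and that the Time MCP Server is spawned over stdio with the same python -m mcp_server_time command as above; the client name below is arbitrary, and the exact import paths and constructor options may differ slightly between SDK versions:
const mcp = require("@modelcontextprotocol/sdk/client/index.js") as typeof import("@modelcontextprotocol/sdk/client/index.js");
const mcpStdio = require("@modelcontextprotocol/sdk/client/stdio.js") as typeof import("@modelcontextprotocol/sdk/client/stdio.js");

// Identify this notebook to the server; the name/version here are arbitrary.
const client = new mcp.Client({ name: "mcp-time-notebook", version: "1.0.0" });

// Launch the Time MCP Server as a subprocess and communicate with it over stdio.
const transport = new mcpStdio.StdioClientTransport({
  command: "python",
  args: ["-m", "mcp_server_time"],
});

await client.connect(transport);
console.log("Connected to the Time MCP server");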
You can list the available tools on the MCP server by using the listTools method of the Client class. This will return a list of tools that are available on the MCP server.
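A small sketch of that call, using the client connected above:
// List the tools the Time MCP Server exposes and print them.
const tools = await client.listTools();
console.log(JSON.stringify(tools, null, 2));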
{
"tools": [
{
"name": "get_current_time",
"description": "Get current time in a specific timezones",
"inputSchema": {
"type": "object",
"properties": {
"timezone": {
"type": "string",
"description": "IANA timezone name (e.g., 'America/New_York', 'Europe/London'). Use 'Asia/Kolkata' as local timezone if no timezone provided by the user."
}
},
"required": [
"timezone"
]
}
},
{
"name": "convert_time",
"description": "Convert time between timezones",
"inputSchema": {
"type": "object",
"properties": {
"source_timezone": {
"type": "string",
"description": "Source IANA timezone name (e.g., 'America/New_York', 'Europe/London'). Use 'Asia/Kolkata' as local timezone if no source timezone provided by the user."
},
"time": {
"type": "string",
"description": "Time to convert in 24-hour format (HH:MM)"
},
"target_timezone": {
"type": "string",
"description": "Target IANA timezone name (e.g., 'Asia/Tokyo', 'America/San_Francisco'). Use 'Asia/Kolkata' as local timezone if no target timezone provided by the user."
}
},
"required": [
"source_timezone",
"time",
"target_timezone"
]
}
}
]
}
Manually invoking a tool
You can manually invoke a tool by using the callTool method of the Client class. This method takes the name of the tool and the arguments to pass to the tool. The tool will then be executed and the result will be returned.
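A minimal sketch of such a call against the get_current_time tool listed above (the argument shape follows the tool's inputSchema; the exact result structure may vary by SDK version):
// Ask the Time MCP Server for the current time in New York.
const result = await client.callTool({
  name: "get_current_time",
  arguments: { timezone: "America/New_York" },
});
console.log(JSON.stringify(result, null, 2));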
You can also use the @modelcontextprotocol/inspector tool to inspect the MCP server and see which tools, resources, and prompts are available and how to use them.
$ npx @modelcontextprotocol/inspector
Integrate with Gemini SDK
As you saw above, you can use the callTool method to invoke a tool on the MCP server. The result of the tool invocation is returned as a JSON object, which you can then use in your application. In our case, we want Gemini to invoke any necessary tools automatically and use their results in its response.
To achieve this you will use the mcpToTool function from the @google/genai package.
Initialize SDK Client
With the new SDK, you only need to initialize a client with your API key (or OAuth credentials if using Vertex AI). The model is now set in each call.
const google = require("@google/genai") as typeof import("@google/genai");

const ai = new google.GoogleGenAI({ apiKey: GEMINI_API_KEY });
Ask a question
const timezone_response = await ai.models.generateContent({
  model: MODEL_ID,
  contents: "When it's 4 PM in New York, what time is it in London?",
  config: {
    tools: [google.mcpToTool(client)],
    toolConfig: {
      functionCallingConfig: {
        mode: google.FunctionCallingConfigMode.AUTO,
      },
    },
  },
});

tslab.display.markdown(timezone_response.text ?? "");
When it’s 4 PM in New York, it’s 9 PM in London.
Let’s take a look at the history of function calls made during this request:
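A small sketch, assuming the response exposes that history on an automaticFunctionCallingHistory field (as in recent @google/genai versions):
// Inspect which tools Gemini called, and with what arguments, while answering.
console.log(JSON.stringify(timezone_response.automaticFunctionCallingHistory, null, 2));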
You can also use multiple tools in a single request, with a more complex prompt.
const complex_response = await ai.models.generateContent({
  model: MODEL_ID,
  contents: "What is the current time in Tokyo and what time is it in New York when it's 4 PM in Tokyo?",
  config: {
    tools: [google.mcpToTool(client)],
    toolConfig: {
      functionCallingConfig: {
        mode: google.FunctionCallingConfigMode.AUTO,
      },
    },
  },
});

tslab.display.markdown(complex_response.text ?? "");
The current time in Tokyo is 03:55:20 (August 9, 2025). When it’s 4 PM in Tokyo, it will be 3:00 AM in New York.
[
{
"role": "user",
"parts": [
{
"text": "What is the current time in Tokyo and what time is it in New York when it's 4 PM in Tokyo?"
}
]
},
{
"parts": [
{
"thoughtSignature": "CqgHAVSoXO5R3NbbLYyQzqlWp92XNtXH/pz5YFNHO9r3freVgNM4xcw9glmZsAkGQr3OCVTDq40auOthKIv6sk3e5z0CnHZplU9Irl94OSp9sNlXUyj+H/xOXA01ODC4+eiMhMECYA6Ckvy5I6OLGH3bec/hzAe+Vek+dMns5ce5nk3t4WMycSJyJKX56PY5NBHqqqKECapGk+0Pz6gat69GkZ7o6SyrAJv602WUaV8xT+uXcA3Q8QVpn+Zq+oQjTx99wq0pnWHDOY/dn9OABvrEDyRtsVsC19u7HU9lZxU3z14Pw9xpR+V01K0UxcEIrCi745q7FvgcbzVAQlyqD43B5NOfVQN9p8DU7Rv1JEPFnYO3/c4Ypr+StTFnkrR+Z/+f38IpB1BMp4Loboiqz3U99zoaCDl8ITk6ahdA+6ojqUjBT643aKRhNFNT+Jqi7bLyC6bt00WH1MYKhSll1oWSbQGtJVpHn03KoREwa3k5ZWSX923iRxeCzTGZyxF2DYtaxMPj/8Uq4iCvm9PKd5anRphWANxNg9cPEZkHeQDbPjtUBzXqobP8RNyTsWGH/WgIsTAdl8/o8VqYnw9QUXwAvHmY1ORekLTesLr4P8rVN3npWrt/ZyZIwScg30+FFSAnUj/jeG56e06pVJYH04+78bo/kytQrE/kRAinq3bo90f3pDGNAIT1BoPOiD4F0ewzO1oEftsD80gWSkV03uBFY4pSEdeGfA02l7wXzIfZYm6FO2tY5Pg0TGzd5xN3jkZ8UvRGyfu8XspfRr5lZHlgDOwGgmMEIQ9pqorc59A+4oH7zYzV15MDjMsm8y/qLdugQLL3+aHdG4Goco+f1NHzlzHvK+eFfhxLrFdl/j6KHzSEIO1Vg/rwGe61KAQY2GYd7Zoy6ti90dkMuCA4WVf4PserNp4H3shAHar2MoTQvybmj8ebkoVwlt9lN8EW9CzL1YrEP4+PRoj61xlzk/EcYaJDI/mhsyQvfg5Bm2jdvmOPxnmIppuKlyOfbTVKPXkxLppeA3Qp0gt+Ee/UM9XHXOqsbPeyD9u8usNHOHW749DHk2JnQmi5eLKdUHt/vU63Vk+hHBF4+ZtW1Fl1ua3rzQ4M9SM+rEKPV8iAhCyvxSHbt88nfkGMylpdErefClw8ZYxQ2/whTbIa8082xuFRNhorwsnh34ZkT4a5W1nGyQDRouJclKI3KGmkVO6WQ9Sjhg43bQ37eINSWt+7Z4BSlwMUCW3l6epT",
"functionCall": {
"name": "get_current_time",
"args": {
"timezone": "Asia/Tokyo"
}
}
},
{
"functionCall": {
"name": "convert_time",
"args": {
"time": "16:00",
"target_timezone": "America/New_York",
"source_timezone": "Asia/Tokyo"
}
}
}
],
"role": "model"
},
{
"role": "user",
"parts": [
{
"functionResponse": {
"name": "get_current_time",
"response": {
"content": [
{
"type": "text",
"text": "{\n \"timezone\": \"Asia/Tokyo\",\n \"datetime\": \"2025-08-09T03:55:20+09:00\",\n \"is_dst\": false\n}"
}
],
"isError": false
}
}
},
{
"functionResponse": {
"name": "convert_time",
"response": {
"content": [
{
"type": "text",
"text": "{\n \"source\": {\n \"timezone\": \"Asia/Tokyo\",\n \"datetime\": \"2025-08-09T16:00:00+09:00\",\n \"is_dst\": false\n },\n \"target\": {\n \"timezone\": \"America/New_York\",\n \"datetime\": \"2025-08-09T03:00:00-04:00\",\n \"is_dst\": true\n },\n \"time_difference\": \"-13.0h\"\n}"
}
],
"isError": false
}
}
}
]
}
]
As you can see, the Gemini model was able to use all the tools available on the MCP server to answer the question.
Summary
In this guide, you learned how to use the Gemini API to query an MCP server. You set up an MCP server, connected to it using the TypeScript SDK, and invoked tools on the server. Finally, you integrated the MCP server with the Gemini SDK to automatically invoke tools and use their results in the response.
Next steps
MCP servers are not limited to tools; they can also provide resources and prompts (see the sketch below). You can learn more about these in the MCP documentation. The Time MCP Server was a very basic example, but you can create your own MCP server with more complex tools, resources, and prompts to suit your needs.
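As a starting point, the same Client class can list those primitives too. A minimal sketch, assuming a server that actually exposes resources and prompts (the Time server does not):
// List resources and prompts exposed by an MCP server, if it provides any.
const resources = await client.listResources();
const prompts = await client.listPrompts();
console.log(JSON.stringify({ resources, prompts }, null, 2));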
For a list of available MCP servers, you can check out the Awesome MCP Servers repository.