For Client Developers
Get started building your own client that can integrate with all MCP servers.
In this tutorial, you’ll learn how to build an LLM-powered chatbot client that connects to MCP servers. It helps to have gone through the Server quickstart, which guides you through the basics of building your first server.
You can find the complete code for this tutorial here.
System Requirements
Before starting, ensure your system meets these requirements:
- Mac or Windows computer
- Latest Python version installed
- Latest version of `uv` installed
Setting Up Your Environment
First, create a new Python project with `uv`:
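The command block for this step did not survive extraction. A typical `uv` setup sequence might look like the following; the project name `mcp-client` is a placeholder:

```shell
# Create and enter a new project directory (name is a placeholder)
uv init mcp-client
cd mcp-client

# Create and activate a virtual environment
uv venv
source .venv/bin/activate   # on Windows: .venv\Scripts\activate

# Add the packages this tutorial relies on
uv add mcp anthropic python-dotenv

# Create the client script
touch client.py
```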
Setting Up Your API Key
You’ll need an Anthropic API key from the Anthropic Console.
Create a `.env` file to store it:
Add your key to the `.env` file:
Add `.env` to your `.gitignore`:
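The commands for these steps were lost in extraction; one way to do them from the shell (the key value is a placeholder you replace with your real key):

```shell
# Create the .env file with your key (placeholder value shown)
echo "ANTHROPIC_API_KEY=<your-key-here>" > .env

# Keep the key out of version control
echo ".env" >> .gitignore
```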
Make sure you keep your `ANTHROPIC_API_KEY` secure!
Creating the Client
Basic Client Structure
First, let’s set up our imports and create the basic client class:
Server Connection Management
Next, we’ll implement the method to connect to an MCP server:
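The original listing is missing here. A sketch of the connection method, matching the behaviors listed under "Server Connection" below (it belongs inside `MCPClient` and uses the imports from the previous step):

```python
# Method of MCPClient; relies on the imports from the basic-structure step.
async def connect_to_server(self, server_script_path: str):
    """Connect to an MCP server given a Python or Node.js server script."""
    is_python = server_script_path.endswith(".py")
    is_js = server_script_path.endswith(".js")
    if not (is_python or is_js):
        raise ValueError("Server script must be a .py or .js file")

    command = "python" if is_python else "node"
    server_params = StdioServerParameters(command=command, args=[server_script_path])

    # Open the stdio transport and create a session on the exit stack
    stdio_transport = await self.exit_stack.enter_async_context(
        stdio_client(server_params)
    )
    self.stdio, self.write = stdio_transport
    self.session = await self.exit_stack.enter_async_context(
        ClientSession(self.stdio, self.write)
    )

    await self.session.initialize()

    # List the tools the server exposes
    response = await self.session.list_tools()
    print("\nConnected to server with tools:", [tool.name for tool in response.tools])
```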
Query Processing Logic
Now let’s add the core functionality for processing queries and handling tool calls:
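The listing is missing; here is a sketch of the query/tool-call loop described in "Query Processing" below. The model name is an example and may need updating; the method belongs inside `MCPClient`:

```python
# Method of MCPClient; relies on the imports from the basic-structure step.
async def process_query(self, query: str) -> str:
    """Send a query to Claude, execute any tool calls, return the final text."""
    messages = [{"role": "user", "content": query}]

    # Describe the server's tools to Claude
    response = await self.session.list_tools()
    available_tools = [{
        "name": tool.name,
        "description": tool.description,
        "input_schema": tool.inputSchema,
    } for tool in response.tools]

    # Initial Claude call with the tool descriptions attached
    response = self.anthropic.messages.create(
        model="claude-3-5-sonnet-20241022",  # example model name
        max_tokens=1000,
        messages=messages,
        tools=available_tools,
    )

    final_text = []
    for content in response.content:
        if content.type == "text":
            final_text.append(content.text)
        elif content.type == "tool_use":
            # Execute the requested tool call through the MCP server
            result = await self.session.call_tool(content.name, content.input)
            final_text.append(f"[Calling tool {content.name} with args {content.input}]")

            # Feed the tool result back to Claude for a follow-up response
            messages.append({"role": "assistant", "content": response.content})
            messages.append({
                "role": "user",
                "content": [{
                    "type": "tool_result",
                    "tool_use_id": content.id,
                    "content": result.content,
                }],
            })
            follow_up = self.anthropic.messages.create(
                model="claude-3-5-sonnet-20241022",
                max_tokens=1000,
                messages=messages,
                tools=available_tools,
            )
            final_text.append(follow_up.content[0].text)

    return "\n".join(final_text)
```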
Interactive Chat Interface
Now we’ll add the chat loop and cleanup functionality:
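The listing is missing; a sketch of the chat loop and cleanup, matching the "Interactive Interface" and "Resource Management" behaviors described below (both methods belong inside `MCPClient`):

```python
# Methods of MCPClient; add them alongside the others.
async def chat_loop(self):
    """Run an interactive prompt until the user types 'quit'."""
    print("\nMCP Client Started!")
    print("Type your queries or 'quit' to exit.")
    while True:
        try:
            query = input("\nQuery: ").strip()
            if query.lower() == "quit":
                break
            response = await self.process_query(query)
            print("\n" + response)
        except Exception as e:
            # Basic error handling: report and keep the loop alive
            print(f"\nError: {str(e)}")


async def cleanup(self):
    """Close the session and transport via the exit stack."""
    await self.exit_stack.aclose()
```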
Main Entry Point
Finally, we’ll add the main execution logic:
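The listing is missing; a sketch of the entry point, assuming the `MCPClient` class from the previous steps:

```python
import asyncio
import sys


async def main():
    if len(sys.argv) < 2:
        print("Usage: python client.py <path_to_server_script>")
        return

    client = MCPClient()  # the class built in the previous steps
    try:
        await client.connect_to_server(sys.argv[1])
        await client.chat_loop()
    finally:
        await client.cleanup()  # always release resources


if __name__ == "__main__":
    asyncio.run(main())
```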
You can find the complete `client.py` file here.
Key Components Explained
1. Client Initialization
- The `MCPClient` class initializes with session management and API clients
- Uses `AsyncExitStack` for proper resource management
- Configures the Anthropic client for Claude interactions
2. Server Connection
- Supports both Python and Node.js servers
- Validates server script type
- Sets up proper communication channels
- Initializes the session and lists available tools
3. Query Processing
- Maintains conversation context
- Handles Claude’s responses and tool calls
- Manages the message flow between Claude and tools
- Combines results into a coherent response
4. Interactive Interface
- Provides a simple command-line interface
- Handles user input and displays responses
- Includes basic error handling
- Allows graceful exit
5. Resource Management
- Proper cleanup of resources
- Error handling for connection issues
- Graceful shutdown procedures
Common Customization Points
- Tool Handling
  - Modify `process_query()` to handle specific tool types
  - Add custom error handling for tool calls
  - Implement tool-specific response formatting
- Response Processing
  - Customize how tool results are formatted
  - Add response filtering or transformation
  - Implement custom logging
- User Interface
  - Add a GUI or web interface
  - Implement rich console output
  - Add command history or auto-completion
Running the Client
To run your client with any MCP server:
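The command block was lost in extraction; in general the client takes the server script’s path as its argument (the path below is a placeholder):

```shell
python client.py path/to/server.py
```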
If you’re continuing the weather tutorial from the server quickstart, your command might look something like this: `python client.py .../weather/src/weather/server.py`
The client will:
- Connect to the specified server
- List available tools
- Start an interactive chat session where you can:
  - Enter queries
  - See tool executions
  - Get responses from Claude
Here’s an example of what it should look like if connected to the weather server from the server quickstart:
How It Works
When you submit a query:
- The client gets the list of available tools from the server
- Your query is sent to Claude along with tool descriptions
- Claude decides which tools (if any) to use
- The client executes any requested tool calls through the server
- Results are sent back to Claude
- Claude provides a natural language response
- The response is displayed to you
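The steps above can be sketched as a plain-Python loop. The `server` and `model` here are stand-in stubs to show the flow, not the real MCP or Anthropic SDK objects:

```python
def run_query(query, server, model):
    """Illustrative flow only: 'server' and 'model' are stubs."""
    tools = server.list_tools()                     # 1. get available tools
    reply = model.generate(query, tools)            # 2-3. Claude sees query + tool descriptions
    while reply.get("tool_call"):                   # 4. Claude requested a tool
        name, args = reply["tool_call"]
        result = server.call_tool(name, args)       # executed through the server
        reply = model.generate_with_result(result)  # 5-6. result goes back to Claude
    return reply["text"]                            # 7. final answer shown to you


class StubServer:
    def list_tools(self):
        return [{"name": "get_forecast", "description": "Get a weather forecast"}]

    def call_tool(self, name, args):
        return {"forecast": "sunny"}


class StubModel:
    def generate(self, query, tools):
        return {"tool_call": ("get_forecast", {"city": "Paris"})}

    def generate_with_result(self, result):
        return {"text": f"The forecast is {result['forecast']}."}


print(run_query("What's the weather in Paris?", StubServer(), StubModel()))
# → The forecast is sunny.
```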
Best practices
- Error Handling
  - Always wrap tool calls in try/except blocks
  - Provide meaningful error messages
  - Gracefully handle connection issues
- Resource Management
  - Use `AsyncExitStack` for proper cleanup
  - Close connections when done
  - Handle server disconnections
- Security
  - Store API keys securely in `.env`
  - Validate server responses
  - Be cautious with tool permissions
Troubleshooting
Server Path Issues
- Double-check the path to your server script is correct
- Use the absolute path if the relative path isn’t working
- For Windows users, make sure to use forward slashes (/) or escaped backslashes (\\) in the path
- Verify the server file has the correct extension (.py for Python or .js for Node.js)
Example of correct path usage:
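The example block was lost in extraction; correct invocations look something like this (all paths are placeholders):

```shell
# Relative path
python client.py ./server/weather.py

# Absolute path
python client.py /Users/username/projects/mcp-server/weather.py

# Windows path (either format works)
python client.py C:/projects/mcp-server/weather.py
python client.py C:\\projects\\mcp-server\\weather.py
```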
Response Timing
- The first response might take up to 30 seconds to return
- This is normal and happens while:
  - The server initializes
  - Claude processes the query
  - Tools are being executed
- Subsequent responses are typically faster
- Don’t interrupt the process during this initial waiting period
Common Error Messages
If you see:
- `FileNotFoundError`: Check your server path
- `Connection refused`: Ensure the server is running and the path is correct
- `Tool execution failed`: Verify the tool’s required environment variables are set
- `Timeout error`: Consider increasing the timeout in your client configuration
This is a quickstart demo based on Spring AI MCP auto-configuration and boot starters. To learn how to create sync and async MCP clients manually, consult the Java SDK Client documentation.
This example demonstrates how to build an interactive chatbot that combines Spring AI’s Model Context Protocol (MCP) with the Brave Search MCP Server. The application creates a conversational interface powered by Anthropic’s Claude AI model that can perform internet searches through Brave Search, enabling natural language interactions with real-time web data. You can find the complete code for this tutorial here.
System Requirements
Before starting, ensure your system meets these requirements:
- Java 17 or higher
- Maven 3.6+
- npx (Node Package eXecute)
- Anthropic API key (Claude)
- Brave Search API key
Setting Up Your Environment
1. Install npx (Node Package eXecute): first install npm, then run:
2. Clone the repository:
3. Set up your API keys:
4. Build the application:
5. Run the application using Maven:
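The command blocks for these steps were lost in extraction. A plausible sequence is sketched below; the repository URL, directory name, and key values are placeholders (the real URL is in the linked tutorial):

```shell
# 1. Install npx (requires npm)
npm install -g npx

# 2. Clone the repository (URL is a placeholder; see the linked tutorial)
git clone <repository-url>
cd <project-directory>

# 3. Set up your API keys
export ANTHROPIC_API_KEY=<your-anthropic-key>
export BRAVE_API_KEY=<your-brave-key>

# 4. Build the application
./mvnw clean install

# 5. Run the application using Maven
./mvnw spring-boot:run
```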
Make sure you keep your `ANTHROPIC_API_KEY` and `BRAVE_API_KEY` keys secure!
How it Works
The application integrates Spring AI with the Brave Search MCP server through several components:
MCP Client Configuration
- Required dependencies in `pom.xml`:
- Application properties (`application.yml`):
This activates the `spring-ai-mcp-client-spring-boot-starter` to create one or more `McpClient`s based on the provided server configuration.
- MCP Server Configuration (`mcp-servers-config.json`):
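The configuration listings were lost in extraction. Based on the starter and file names the text mentions, sketches of the three pieces might look like this (exact property keys and versions may differ by Spring AI release):

```xml
<!-- pom.xml: the MCP client starter named in the text (version managed by the Spring AI BOM) -->
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-mcp-client-spring-boot-starter</artifactId>
</dependency>
```

```yaml
# application.yml (sketch)
spring:
  ai:
    mcp:
      client:
        stdio:
          servers-configuration: classpath:/mcp-servers-config.json
    anthropic:
      api-key: ${ANTHROPIC_API_KEY}
```

```json
{
  "mcpServers": {
    "brave-search": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-brave-search"],
      "env": {
        "BRAVE_API_KEY": "${BRAVE_API_KEY}"
      }
    }
  }
}
```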
Chat Implementation
The chatbot is implemented using Spring AI’s ChatClient with MCP tool integration:
Key features:
- Uses Claude AI model for natural language understanding
- Integrates Brave Search through MCP for real-time web search capabilities
- Maintains conversation memory using `InMemoryChatMemory`
- Runs as an interactive command-line application
Build and run
The application will start an interactive chat session where you can ask questions. The chatbot will use Brave Search when it needs to find information from the internet to answer your queries.
The chatbot can:
- Answer questions using its built-in knowledge
- Perform web searches when needed using Brave Search
- Remember context from previous messages in the conversation
- Combine information from multiple sources to provide comprehensive answers
Advanced Configuration
The MCP client supports additional configuration options:
- Client customization through `McpSyncClientCustomizer` or `McpAsyncClientCustomizer`
- Multiple clients with multiple transport types: `STDIO` and `SSE` (Server-Sent Events)
- Integration with Spring AI’s tool execution framework
- Automatic client initialization and lifecycle management
- Automatic client initialization and lifecycle management
For WebFlux-based applications, you can use the WebFlux starter instead:
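The dependency snippet was lost here. Matching the naming of the stdio starter mentioned above, the WebFlux variant would plausibly be declared as follows (the artifact id is an assumption; check the Spring AI documentation for your version):

```xml
<dependency>
    <groupId>org.springframework.ai</groupId>
    <artifactId>spring-ai-mcp-client-webflux-spring-boot-starter</artifactId>
</dependency>
```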
This provides similar functionality but uses a WebFlux-based SSE transport implementation, recommended for production deployments.
Next steps
- Example servers: Check out our gallery of official MCP servers and implementations
- Clients: View the list of clients that support MCP integrations
- Building MCP with LLMs: Learn how to use LLMs like Claude to speed up your MCP development
- Core architecture: Understand how MCP connects clients, servers, and LLMs