MCP Integration Architecture

Understanding how Model Context Protocol (MCP) servers integrate with the AltSportsLeagues.ai backend, enabling AI-powered workflows and external tool orchestration.

🎯 MCP Overview

Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to Large Language Models (LLMs). Our backend implements FastMCP to expose structured data and tools to AI agents.

Key Benefits

  • Standardized Interface: Consistent API for AI agents to access system data
  • Tool Composition: Complex workflows built from simple, atomic operations
  • Security: Controlled access with authentication and rate limiting
  • Scalability: Stateless servers that can scale independently

Architecture Position

The MCP server is embedded in the FastAPI backend deployed on Cloud Run: AI agents (Cursor/Claude, n8n workflows) call MCP tools over HTTPS, and the tools read from and write to the shared data layer (Neo4j for graph relationships, Supabase for relational data).

🛠️ MCP Server Implementation

Backend Integration

Our FastAPI backend integrates FastMCP to expose tools and resources:

from datetime import datetime

from fastmcp import FastMCP
from fastapi import FastAPI
 
# Initialize FastAPI
app = FastAPI(
    title="AltSportsLeagues API",
    version="1.0.0"
)
 
# Initialize FastMCP
mcp = FastMCP("AltSportsLeagues MCP Server")
 
# Define MCP tools
@mcp.tool()
async def get_league_data(league_id: str) -> dict:
    """
    Retrieve comprehensive league data including teams, players, and stats.
    
    Args:
        league_id: Unique identifier for the league
        
    Returns:
        Complete league data with relationships
    """
    from data_layer.shared.python import neo4j_utils, supabase_client
    
    # Query graph database for relationships
    graph_data = await neo4j_utils.get_league_graph(league_id)
    
    # Query relational data
    league_metadata = await supabase_client.get_league(league_id)
    
    return {
        "league": league_metadata,
        "graph": graph_data,
        "timestamp": datetime.now().isoformat()
    }
 
@mcp.tool()
async def process_questionnaire(pdf_data: bytes, league_name: str) -> dict:
    """
    Process league questionnaire PDF and extract structured data.
    
    Args:
        pdf_data: Raw PDF bytes
        league_name: Name of the league
        
    Returns:
        Extracted and validated league information
    """
    from data_layer.shared.python.ai_processors import extract_questionnaire
    
    # AI-powered PDF extraction
    extracted_data = await extract_questionnaire(pdf_data)
    
    # Validate against schema (LeagueQuestionnaireSchema is a Pydantic model defined elsewhere in the backend)
    validated = LeagueQuestionnaireSchema(**extracted_data)
    
    # Store in databases (store_league_data is a backend helper defined elsewhere)
    await store_league_data(validated, league_name)
    
    return validated.dict()
 
# Mount MCP server on FastAPI (the attribute exposing the ASGI app depends on the FastMCP version)
app.mount("/tools", mcp.app)

Tool Categories

We organize MCP tools into logical categories:

Category          | Tools                                                 | Purpose
League Management | get_league_data, create_league, update_league         | CRUD operations for leagues
Data Processing   | process_questionnaire, extract_pdf, validate_data     | AI-powered data extraction
Analytics         | calculate_tier, generate_contract, analyze_readiness  | Business intelligence
Integration       | sync_to_neo4j, upsert_supabase, trigger_webhook       | Cross-system operations

🔌 MCP Protocol Flow

Request-Response Pattern

Each tool call follows a JSON-RPC 2.0 request-response exchange over HTTPS: the client sends a tools/call request, the server authenticates and validates it, executes the tool, and returns a result or error envelope. The lifecycle below walks through each step.

Tool Execution Lifecycle

1. Request Reception

MCP server receives tool call with parameters:

{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "get_league_data",
    "arguments": {
      "league_id": "league_123"
    }
  },
  "id": 1
}

2. Authentication & Validation

  • Verify API key or session token
  • Validate parameter types and required fields
  • Check rate limits
  • Log request for monitoring
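
A minimal sketch of the validation step, assuming Pydantic v2; the model and function names are illustrative, not part of FastMCP:

from pydantic import BaseModel, Field, ValidationError

class GetLeagueDataParams(BaseModel):
    """Hypothetical argument schema for the get_league_data tool."""
    league_id: str = Field(pattern=r"^league_\d+$")

def validate_tool_arguments(arguments: dict) -> GetLeagueDataParams:
    """Reject malformed arguments before the tool handler runs."""
    try:
        return GetLeagueDataParams(**arguments)
    except ValidationError as exc:
        # Surfaces as JSON-RPC error -32602 (invalid params) in the response
        raise ValueError(f"Invalid params: {exc}") from exc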

3. Tool Execution

# Tool handler executes business logic
result = await tool_handler.execute(
    tool_name="get_league_data",
    params={"league_id": "league_123"}
)

4. Response Formatting

{
  "jsonrpc": "2.0",
  "result": {
    "league": {
      "id": "league_123",
      "name": "Sample League",
      "tier": 1
    },
    "graph": { ... },
    "timestamp": "2024-01-15T10:30:00Z"
  },
  "id": 1
}

5. Error Handling

{
  "jsonrpc": "2.0",
  "error": {
    "code": -32602,
    "message": "Invalid league_id format",
    "data": {
      "expected": "string matching /^league_\\d+$/",
      "received": "invalid_id"
    }
  },
  "id": 1
}
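
A small helper pair that builds these JSON-RPC 2.0 envelopes; the function names are illustrative, not part of FastMCP:

from typing import Optional

def jsonrpc_result(request_id: int, result: dict) -> dict:
    """Wrap a successful tool result in a JSON-RPC 2.0 response envelope."""
    return {"jsonrpc": "2.0", "result": result, "id": request_id}

def jsonrpc_error(request_id: int, code: int, message: str, data: Optional[dict] = None) -> dict:
    """Wrap a failure in a JSON-RPC 2.0 error envelope (e.g. -32602 for invalid params)."""
    error = {"code": code, "message": message}
    if data is not None:
        error["data"] = data
    return {"jsonrpc": "2.0", "error": error, "id": request_id}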

πŸ” Security & Authentication

Authentication Methods

from fastapi import Depends, Header, HTTPException
from typing import Optional
 
async def verify_mcp_token(
    authorization: Optional[str] = Header(None)
) -> str:
    """Verify MCP client authentication token."""
    
    if not authorization:
        raise HTTPException(401, "Missing authorization header")
    
    if not authorization.startswith("Bearer "):
        raise HTTPException(401, "Invalid authorization format")
    
    token = authorization[7:]  # Remove "Bearer " prefix
    
    # Verify token against database or JWT
    user_id = await validate_token(token)
    
    if not user_id:
        raise HTTPException(401, "Invalid or expired token")
    
    return user_id
 
@mcp.tool()
async def get_league_data(
    league_id: str,
    user_id: str = Depends(verify_mcp_token)
) -> dict:
    """Protected tool requiring authentication."""
    
    # Check user permissions
    if not await has_league_access(user_id, league_id):
        raise HTTPException(403, "Insufficient permissions")
    
    # Execute tool logic
    return await fetch_league_data(league_id)
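
The validate_token call above is not shown; a minimal sketch using PyJWT, assuming HS256-signed short-lived tokens carrying the user id in the sub claim (both are assumptions, not confirmed implementation details):

from typing import Optional

import jwt  # PyJWT

JWT_SECRET = "change-me"  # in practice, loaded from a secret manager

async def validate_token(token: str) -> Optional[str]:
    """Decode and verify a JWT; return the user id or None if invalid or expired."""
    try:
        payload = jwt.decode(token, JWT_SECRET, algorithms=["HS256"])
        return payload.get("sub")  # assumed user-id claim
    except jwt.PyJWTError:
        return None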

Rate Limiting

from fastapi import Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.errors import RateLimitExceeded
from slowapi.util import get_remote_address

limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.post("/tools/{tool_name}")
@limiter.limit("100/minute")  # 100 requests per minute per client IP
async def execute_tool(request: Request, tool_name: str, params: dict):
    """Rate-limited tool execution endpoint (slowapi requires the request argument)."""
    return await mcp.execute_tool(tool_name, params)

Security Best Practices

  • ✅ API Keys: Unique keys for each client application
  • ✅ JWT Tokens: Short-lived tokens with refresh mechanism
  • ✅ IP Whitelisting: Restrict access to known IP ranges (optional)
  • ✅ HTTPS Only: All MCP traffic over TLS
  • ✅ Input Validation: Pydantic schemas for all tool parameters
  • ✅ Audit Logging: Track all tool executions with user context
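
A hedged sketch of the audit-logging practice: an illustrative decorator (not an existing backend utility) that records the tool name and caller for every execution.

import functools
import logging

audit_log = logging.getLogger("mcp_audit")

def audited(tool_fn):
    """Log tool name and calling user before delegating to the tool (illustrative)."""
    @functools.wraps(tool_fn)
    async def wrapper(*args, **kwargs):
        audit_log.info(
            "tool executed",
            extra={"tool": tool_fn.__name__, "user_id": kwargs.get("user_id")},
        )
        return await tool_fn(*args, **kwargs)
    return wrapper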

📊 Tool Discovery & Documentation

List Available Tools

# Query MCP server for available tools
curl https://api.altsportsleagues.ai/tools/list \
  -H "Authorization: Bearer YOUR_TOKEN" | jq
 
# Response
{
  "tools": [
    {
      "name": "get_league_data",
      "description": "Retrieve comprehensive league data",
      "inputSchema": {
        "type": "object",
        "properties": {
          "league_id": {
            "type": "string",
            "description": "Unique league identifier"
          }
        },
        "required": ["league_id"]
      }
    },
    {
      "name": "process_questionnaire",
      "description": "Process league questionnaire PDF",
      "inputSchema": { ... }
    }
  ]
}
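
The same discovery call from Python, using httpx as an illustrative HTTP client (the endpoint URL and bearer token mirror the curl example):

import httpx

async def list_tools(token: str) -> list[dict]:
    """Fetch the tool catalog from the MCP server's discovery endpoint."""
    async with httpx.AsyncClient() as client:
        response = await client.get(
            "https://api.altsportsleagues.ai/tools/list",
            headers={"Authorization": f"Bearer {token}"},
        )
        response.raise_for_status()
        return response.json()["tools"]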

Tool Schema

Each tool exposes a JSON Schema for its parameters:

{
  "name": "calculate_tier",
  "description": "Calculate league tier classification",
  "inputSchema": {
    "type": "object",
    "properties": {
      "league_id": {
        "type": "string",
        "pattern": "^league_\\d+$"
      },
      "force_recalculate": {
        "type": "boolean",
        "default": false
      }
    },
    "required": ["league_id"]
  },
  "outputSchema": {
    "type": "object",
    "properties": {
      "tier": {
        "type": "integer",
        "minimum": 1,
        "maximum": 5
      },
      "confidence_score": {
        "type": "number",
        "minimum": 0,
        "maximum": 1
      }
    }
  }
}
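
Output schemas like this can also be checked at runtime; a minimal sketch using the jsonschema library (the schema dict mirrors the example above, the result value is illustrative):

from jsonschema import ValidationError, validate

output_schema = {
    "type": "object",
    "properties": {
        "tier": {"type": "integer", "minimum": 1, "maximum": 5},
        "confidence_score": {"type": "number", "minimum": 0, "maximum": 1},
    },
}

tool_output = {"tier": 2, "confidence_score": 0.87}  # illustrative result

try:
    validate(instance=tool_output, schema=output_schema)
except ValidationError as exc:
    print(f"Tool output failed schema validation: {exc.message}")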

🤖 AI Agent Integration

Cursor/Claude Integration

// In Cursor/Claude AI context (illustrative; the exact import and client class depend on the MCP SDK version in use)
import { MCPClient } from '@modelcontextprotocol/sdk';
 
const mcp = new MCPClient({
  serverUrl: 'https://api.altsportsleagues.ai/tools',
  apiKey: process.env.MCP_API_KEY
});
 
// List available tools
const tools = await mcp.listTools();
 
// Call a tool
const result = await mcp.callTool('get_league_data', {
  league_id: 'league_123'
});
 
console.log(result);
// {
//   league: { ... },
//   graph: { ... },
//   timestamp: "2024-01-15T10:30:00Z"
// }

n8n Integration

// In n8n HTTP Request node
{
  "url": "https://api.altsportsleagues.ai/tools/call",
  "method": "POST",
  "authentication": "headerAuth",
  "headers": {
    "Authorization": "Bearer {{$env.MCP_API_KEY}}",
    "Content-Type": "application/json"
  },
  "body": {
    "jsonrpc": "2.0",
    "method": "tools/call",
    "params": {
      "name": "process_questionnaire",
      "arguments": {
        "pdf_data": "{{$binary.data}}",
        "league_name": "Sample League"
      }
    },
    "id": 1
  }
}

🔄 Tool Composition Patterns

Atomic Tools → Complex Workflows

MCP tools follow the single responsibility principle. Complex workflows are built by chaining simple tools:

Example: League Onboarding Workflow

# In n8n or an AI agent; `mcp` is an MCP client instance (e.g. from the SDK example above)
import asyncio

async def onboard_league(email_attachment: bytes, league_name: str):
    """
    Complete league onboarding using MCP tools.
    """
    
    # Step 1: Extract data from PDF
    extracted = await mcp.call_tool('extract_pdf', {
        'pdf_data': email_attachment
    })
    
    # Step 2: Validate against schema
    validated = await mcp.call_tool('validate_data', {
        'data': extracted,
        'schema_type': 'league_questionnaire'
    })
    
    # Step 3: Calculate tier classification
    tier_result = await mcp.call_tool('calculate_tier', {
        'league_data': validated
    })
    
    # Step 4: Generate contract terms
    contract = await mcp.call_tool('generate_contract', {
        'tier': tier_result['tier'],
        'league_name': league_name
    })
    
    # Step 5: Store in databases (parallel)
    await asyncio.gather(
        mcp.call_tool('sync_to_neo4j', {'league_data': validated}),
        mcp.call_tool('upsert_supabase', {'league_data': validated})
    )
    
    return {
        'tier': tier_result['tier'],
        'contract': contract,
        'status': 'complete'
    }

📈 Monitoring & Observability

Tool Execution Metrics

from prometheus_client import Counter, Histogram
 
# Metrics
tool_calls = Counter(
    'mcp_tool_calls_total',
    'Total MCP tool calls',
    ['tool_name', 'status']
)
 
tool_duration = Histogram(
    'mcp_tool_duration_seconds',
    'MCP tool execution duration',
    ['tool_name']
)
 
@mcp.tool()
async def get_league_data(league_id: str) -> dict:
    """Instrumented tool with metrics."""
    
    with tool_duration.labels(tool_name='get_league_data').time():
        try:
            result = await fetch_league_data(league_id)
            tool_calls.labels(
                tool_name='get_league_data',
                status='success'
            ).inc()
            return result
        except Exception as e:
            tool_calls.labels(
                tool_name='get_league_data',
                status='error'
            ).inc()
            raise

Logging

import logging
import json
 
logger = logging.getLogger("mcp_server")
 
@mcp.tool()
async def process_questionnaire(pdf_data: bytes, league_name: str) -> dict:
    """Tool with structured logging."""
    
    logger.info(
        "MCP tool called",
        extra={
            "tool_name": "process_questionnaire",
            "league_name": league_name,
            "pdf_size_bytes": len(pdf_data)
        }
    )
    
    try:
        result = await process(pdf_data, league_name)  # process() is the extraction pipeline defined elsewhere
        
        logger.info(
            "MCP tool completed",
            extra={
                "tool_name": "process_questionnaire",
                "duration_ms": result.get('processing_time'),
                "status": "success"
            }
        )
        
        return result
    except Exception as e:
        logger.error(
            "MCP tool failed",
            extra={
                "tool_name": "process_questionnaire",
                "error": str(e),
                "error_type": type(e).__name__
            },
            exc_info=True
        )
        raise

🚀 Deployment & Scaling

Cloud Run Configuration

MCP server runs as part of the FastAPI backend on Cloud Run:

# cloud-run-config.yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: altsportsleagues-backend
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "10"
    spec:
      containers:
      - image: gcr.io/project/backend:latest
        ports:
        - containerPort: 8080
        env:
        - name: MCP_ENABLED
          value: "true"
        - name: MCP_AUTH_REQUIRED
          value: "true"
        resources:
          limits:
            memory: "4Gi"
            cpu: "2"

Horizontal Scaling

  • Stateless Design: Each MCP server instance is independent
  • Shared Data Layer: All instances use same databases
  • Load Balancing: Cloud Run handles automatic distribution
  • Cold Start: < 2 seconds due to optimized Docker image

Performance Optimization

# Connection pooling for databases
from sqlalchemy import create_engine
from sqlalchemy.pool import QueuePool

engine = create_engine(
    DATABASE_URL,
    poolclass=QueuePool,
    pool_size=20,
    max_overflow=10
)
 
# Caching for frequently accessed data
from cachetools import TTLCache
 
tool_cache = TTLCache(maxsize=1000, ttl=300)  # 5 minute cache
 
@mcp.tool()
async def get_league_data(league_id: str) -> dict:
    """Cached tool for better performance."""
    
    cache_key = f"league:{league_id}"
    
    # Check cache first
    if cache_key in tool_cache:
        return tool_cache[cache_key]
    
    # Fetch from database
    result = await fetch_league_data(league_id)
    
    # Store in cache
    tool_cache[cache_key] = result
    
    return result

🎯 MCP Best Practices

  • Single Responsibility: Each tool should do one thing well
  • Idempotency: Tools should be safe to retry
  • Schema Validation: Always validate inputs and outputs
  • Error Messages: Provide clear, actionable error information
  • Documentation: Keep tool descriptions up to date
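
As an illustration of the idempotency guideline, a hedged sketch of an upsert-style tool; it reuses the mcp instance and supabase_client from the earlier examples, and upsert_league is a hypothetical helper, not a confirmed client method:

@mcp.tool()
async def upsert_supabase(league_data: dict) -> dict:
    """Idempotent write: retrying with the same payload converges to the same row."""
    league_id = league_data["league_id"]

    existing = await supabase_client.get_league(league_id)
    if existing == league_data:
        # A retry after a timeout or network error is a safe no-op
        return {"league_id": league_id, "status": "unchanged"}

    await supabase_client.upsert_league(league_id, league_data)  # hypothetical helper
    return {"league_id": league_id, "status": "upserted"}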
