Backend API & MCP Integration

This specification defines requirements for the AltSportsLeagues.ai Backend API & MCP Integration Platform: a FastAPI-based backend system with comprehensive Model Context Protocol (MCP) server integration, enabling AI-powered sports intelligence, league analysis, and business automation.

Key Principle: Build a scalable, production-ready backend that seamlessly integrates MCP servers while providing robust REST APIs for frontend clients.

Glossary

  • Backend_API: FastAPI application providing REST endpoints
  • MCP_Server: Model Context Protocol server providing AI tool interfaces
  • FastMCP: Python framework for building MCP servers
  • Tool_Registry: Central registry of all MCP tools and capabilities
  • Service_Layer: Business logic services (LangMem, Document Pipeline, etc.)
  • Data_Layer: Pydantic models and schema definitions
  • Authentication_Layer: JWT-based authentication and authorization
  • Rate_Limiter: Request throttling and quota management
  • Health_Monitor: System health checks and service status
  • API_Gateway: Unified entry point for all backend services

Requirements

Requirement 1: FastAPI REST API Foundation

User Story: As a frontend developer, I want a well-documented REST API with OpenAPI specifications, so that I can build client applications efficiently.

Acceptance Criteria

  1. WHEN the backend starts, THE Backend_API SHALL expose OpenAPI documentation at /docs and /redoc
  2. WHEN an API endpoint is called, THE Backend_API SHALL validate request bodies using Pydantic models
  3. WHEN validation fails, THE Backend_API SHALL return 422 with detailed error messages
  4. WHEN successful, THE Backend_API SHALL return standardized JSON responses with proper HTTP status codes
  5. WHEN errors occur, THE Backend_API SHALL log errors with trace IDs and return user-friendly error messages
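
A minimal sketch of the error-handling criterion (5), assuming a middleware-based approach; the error envelope shape and the X-Trace-ID header name are illustrative conventions, not a fixed contract:

import logging
import uuid

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse

logger = logging.getLogger("backend_api")
app = FastAPI()

@app.middleware("http")
async def error_envelope(request: Request, call_next):
    # Attach a trace ID to every request so failures can be correlated in logs.
    trace_id = str(uuid.uuid4())
    try:
        response = await call_next(request)
        response.headers["X-Trace-ID"] = trace_id
        return response
    except Exception:
        logger.exception("Unhandled error", extra={"trace_id": trace_id})
        return JSONResponse(
            status_code=500,
            content={"error": "Internal server error", "trace_id": trace_id},
        )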

Requirement 2: MCP Server Integration

User Story: As an AI agent developer, I want to access all backend functionality through MCP protocol, so that I can integrate with Claude, ChatGPT, and custom AI agents.

Acceptance Criteria

  1. WHEN the MCP server starts, THE Backend_API SHALL register all tools in the Tool_Registry
  2. WHEN an AI agent calls list_tools, THE MCP_Server SHALL return all available tools with schemas
  3. WHEN an AI agent calls a tool, THE MCP_Server SHALL validate inputs and execute the corresponding service
  4. WHEN tool execution succeeds, THE MCP_Server SHALL return structured results following MCP protocol
  5. WHEN tool execution fails, THE MCP_Server SHALL return error details with recovery suggestions
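
A sketch of the success and failure envelopes in criteria 4 and 5. The field names ("status", "error", "recovery") are conventions assumed by this spec, not mandated by the MCP protocol itself:

def tool_ok(**payload) -> dict:
    # Structured success result returned to the calling agent.
    return {"status": "ok", **payload}

def tool_error(message: str, recovery: str) -> dict:
    # Failure result carrying a human-readable recovery suggestion.
    return {"status": "error", "error": message, "recovery": recovery}

# Example: tool_error("Unknown league ID", "Call list_tools to discover valid tools and inputs.")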

Requirement 3: Service Layer Architecture

User Story: As a backend developer, I want a clean service layer architecture, so that business logic is separated from API routes and easily testable.

Acceptance Criteria

  1. WHEN processing requests, THE Backend_API SHALL delegate business logic to Service_Layer modules
  2. WHEN services are initialized, THE Backend_API SHALL inject dependencies (database, AI clients, etc.)
  3. WHEN services execute, THE Service_Layer SHALL handle all business logic and data transformations
  4. WHEN services fail, THE Service_Layer SHALL raise custom exceptions with context
  5. WHEN services succeed, THE Service_Layer SHALL return standardized response objects
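
A minimal sketch of this pattern with hypothetical LeagueService and LeagueNotFoundError names; dependencies arrive through the constructor so the service can be unit-tested with fakes:

from typing import Optional

class LeagueNotFoundError(Exception):
    """Raised with context when a league cannot be resolved."""
    def __init__(self, league_id: str):
        super().__init__(f"League not found: {league_id}")
        self.league_id = league_id

class LeagueService:
    def __init__(self, db, ai_client):
        # Injected dependencies: database gateway and AI client (criterion 2).
        self.db = db
        self.ai_client = ai_client

    async def get_league(self, league_id: str, tier: Optional[str] = None):
        record = await self.db.fetch_league(league_id, tier)  # hypothetical gateway method
        if record is None:
            raise LeagueNotFoundError(league_id)
        return record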

Requirement 4: Authentication & Authorization

User Story: As a system administrator, I want secure authentication and role-based authorization, so that I can control access to sensitive operations.

Acceptance Criteria

  1. WHEN users authenticate, THE Authentication_Layer SHALL issue JWT tokens with user claims
  2. WHEN protected endpoints are called, THE Authentication_Layer SHALL validate JWT tokens
  3. WHEN authorization is required, THE Authentication_Layer SHALL check user roles and permissions
  4. WHEN tokens expire, THE Authentication_Layer SHALL return 401 and require re-authentication
  5. WHEN API keys are used, THE Authentication_Layer SHALL validate keys against the database
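
The token check itself is shown under Authentication Layer below; role-based authorization (criterion 3) can be layered on top as a FastAPI dependency. This sketch assumes the decoded token exposes a "roles" claim and a verify_claims dependency that yields the decoded JWT claims:

from fastapi import Depends, HTTPException

def require_role(role: str):
    # Returns a dependency that rejects callers whose token lacks the role.
    def checker(claims: dict = Depends(verify_claims)) -> dict:
        if role not in claims.get("roles", []):
            raise HTTPException(status_code=403, detail="Insufficient permissions")
        return claims
    return checker

# Usage: @app.delete("/leagues/{id}", dependencies=[Depends(require_role("admin"))])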

Requirement 5: Rate Limiting & Quotas

User Story: As a platform operator, I want rate limiting and usage quotas, so that I can prevent abuse and manage costs.

Acceptance Criteria

  1. WHEN requests exceed rate limits, THE Rate_Limiter SHALL return 429 with retry-after headers
  2. WHEN quotas are exceeded, THE Rate_Limiter SHALL return 403 with quota information
  3. WHEN premium users access APIs, THE Rate_Limiter SHALL apply higher limits
  4. WHEN monitoring usage, THE Rate_Limiter SHALL track requests per user and endpoint
  5. WHEN limits reset, THE Rate_Limiter SHALL clear counters based on time windows
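
Criteria 2 and 4 could be satisfied with a per-user counter keyed by plan; the sketch below uses an in-memory dict and a daily window purely for illustration (production would back this with Redis or the database), and the plan tiers are assumptions:

import time
from collections import defaultdict

from fastapi import HTTPException

QUOTA_LIMITS = {"free": 1_000, "premium": 10_000}  # requests per day
_usage = defaultdict(lambda: (0, time.time()))     # user_id -> (count, window_start)

def check_quota(user_id: str, plan: str = "free") -> None:
    count, window_start = _usage[user_id]
    if time.time() - window_start > 86_400:  # reset the daily window
        count, window_start = 0, time.time()
    if count >= QUOTA_LIMITS[plan]:
        raise HTTPException(
            status_code=403,
            detail={"quota": QUOTA_LIMITS[plan], "used": count, "window_seconds": 86_400},
        )
    _usage[user_id] = (count + 1, window_start)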

Requirement 6: Health Monitoring & Observability

User Story: As a DevOps engineer, I want comprehensive health checks and metrics, so that I can monitor system health and diagnose issues.

Acceptance Criteria

  1. WHEN /health is called, THE Health_Monitor SHALL check database, AI services, and MCP servers
  2. WHEN services are healthy, THE Health_Monitor SHALL return 200 with status details
  3. WHEN services are degraded, THE Health_Monitor SHALL return 503 with specific failures
  4. WHEN metrics are requested, THE Health_Monitor SHALL expose Prometheus-compatible metrics
  5. WHEN errors occur, THE Health_Monitor SHALL log structured logs with trace IDs
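
A minimal sketch of the /health contract; the three probe functions are stubs standing in for real subsystem checks:

from fastapi import FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()

async def check_database() -> bool:
    return True  # replace with a real probe, e.g. SELECT 1 through the pool

async def check_ai_services() -> bool:
    return True  # replace with a provider ping or cached status

async def check_mcp_servers() -> bool:
    return True  # replace with an MCP server liveness check

@app.get("/health")
async def health():
    checks = {
        "database": await check_database(),
        "ai_services": await check_ai_services(),
        "mcp_servers": await check_mcp_servers(),
    }
    healthy = all(checks.values())
    return JSONResponse(
        status_code=200 if healthy else 503,
        content={"status": "ok" if healthy else "degraded", "checks": checks},
    )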

Requirement 7: Data Layer & Pydantic Models

User Story: As a developer, I want strongly-typed Pydantic models for all data structures, so that I have type safety and automatic validation.

Acceptance Criteria

  1. WHEN data enters the system, THE Data_Layer SHALL validate all inputs using Pydantic models
  2. WHEN models are defined, THE Data_Layer SHALL include field validators and custom types
  3. WHEN serializing data, THE Data_Layer SHALL use model_dump() for JSON serialization
  4. WHEN deserializing data, THE Data_Layer SHALL use model_validate() for type safety
  5. WHEN schemas are needed, THE Data_Layer SHALL generate JSON schemas for OpenAPI docs
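
The sketch below shows the Pydantic v2 conventions the criteria name (field validators, model_validate, model_dump, JSON schema generation); the LeagueCreate model and its tier values are illustrative:

from pydantic import BaseModel, Field, field_validator

class LeagueCreate(BaseModel):
    name: str = Field(min_length=1, max_length=200)
    sport_type: str
    tier: str = "emerging"

    @field_validator("tier")
    @classmethod
    def tier_must_be_known(cls, v: str) -> str:
        if v not in {"emerging", "established", "major"}:
            raise ValueError(f"unknown tier: {v}")
        return v

payload = LeagueCreate.model_validate({"name": "Drone Racing League", "sport_type": "drone"})
print(payload.model_dump())              # JSON-ready serialization
print(LeagueCreate.model_json_schema())  # feeds the OpenAPI docs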

Requirement 8: Async Processing & Background Tasks

User Story: As a user, I want long-running operations to execute asynchronously, so that I get immediate responses and can track progress.

Acceptance Criteria

  1. WHEN expensive operations are requested, THE Backend_API SHALL queue background tasks
  2. WHEN tasks are queued, THE Backend_API SHALL return 202 with task ID and status endpoint
  3. WHEN checking task status, THE Backend_API SHALL return current state (pending/running/completed/failed)
  4. WHEN tasks complete, THE Backend_API SHALL store results and send notifications
  5. WHEN tasks fail, THE Backend_API SHALL retry with exponential backoff and eventually mark as failed
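
A minimal sketch of the 202-plus-status-endpoint contract using FastAPI's built-in BackgroundTasks; the in-memory task store and run_analysis body are placeholders (a production system would likely use Celery or Cloud Tasks for persistent state and the retry behavior in criterion 5):

import uuid

from fastapi import BackgroundTasks, FastAPI
from fastapi.responses import JSONResponse

app = FastAPI()
tasks: dict = {}  # task_id -> state; in-memory for illustration only

async def run_analysis(task_id: str, league_id: str) -> None:
    tasks[task_id] = "running"
    # ... expensive work here ...
    tasks[task_id] = "completed"

@app.post("/leagues/{league_id}/analyze", status_code=202)
async def start_analysis(league_id: str, background: BackgroundTasks):
    task_id = str(uuid.uuid4())
    tasks[task_id] = "pending"
    background.add_task(run_analysis, task_id, league_id)
    return {"task_id": task_id, "status_url": f"/tasks/{task_id}"}

@app.get("/tasks/{task_id}")
async def task_status(task_id: str):
    if task_id not in tasks:
        return JSONResponse(status_code=404, content={"error": "Unknown task"})
    return {"task_id": task_id, "state": tasks[task_id]}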

Requirement 9: Database Integration

User Story: As a data engineer, I want seamless database integration with connection pooling and migrations, so that I can manage persistent data efficiently.

Acceptance Criteria

  1. WHEN the backend starts, THE Backend_API SHALL establish database connection pools
  2. WHEN queries are executed, THE Backend_API SHALL use connection pooling for efficiency
  3. WHEN database errors occur, THE Backend_API SHALL retry queries with exponential backoff
  4. WHEN schema changes are needed, THE Backend_API SHALL support Alembic migrations
  5. WHEN running in development, THE Backend_API SHALL use SQLite; WHEN running in production, THE Backend_API SHALL use PostgreSQL/Supabase
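
A sketch of environment-driven engine setup covering criteria 1, 2, and 5; the DATABASE_URL variable name and the pool sizes are assumptions:

import os

from sqlalchemy.ext.asyncio import create_async_engine

# SQLite for development, PostgreSQL/Supabase for production (criterion 5).
DATABASE_URL = os.environ.get("DATABASE_URL", "sqlite+aiosqlite:///./dev.db")

# Pool options apply to the client/server database used in production,
# e.g. postgresql+asyncpg://user:pass@host/dbname
pool_kwargs = (
    {"pool_size": 10, "max_overflow": 20, "pool_pre_ping": True}
    if DATABASE_URL.startswith("postgresql") else {}
)
engine = create_async_engine(DATABASE_URL, **pool_kwargs)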

Requirement 10: AI Service Integration

User Story: As an AI developer, I want seamless integration with multiple AI providers, so that I can use the best model for each task.

Acceptance Criteria

  1. WHEN AI services are called, THE Backend_API SHALL route requests to appropriate providers (OpenAI, Anthropic, Vertex)
  2. WHEN providers fail, THE Backend_API SHALL implement fallback strategies
  3. WHEN rate limits are hit, THE Backend_API SHALL queue requests and retry
  4. WHEN AI calls complete, THE Backend_API SHALL log token usage per request for cost tracking
  5. WHEN streaming is needed, THE Backend_API SHALL support SSE for streaming responses
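
Criterion 2's fallback strategy might look like the sketch below; the provider objects and their complete() method are hypothetical stand-ins for the real OpenAI, Anthropic, and Vertex SDK calls:

class AllProvidersFailedError(Exception):
    pass

async def complete_with_fallback(prompt: str, providers: list) -> str:
    # Try each provider in preference order; collect failures for the error.
    errors = []
    for provider in providers:
        try:
            return await provider.complete(prompt)
        except Exception as exc:  # real code would catch specific SDK errors
            errors.append(f"{provider.name}: {exc}")
    raise AllProvidersFailedError("; ".join(errors))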

Requirement 11: Google Cloud Integration

User Story: As a platform engineer, I want native Google Cloud integration, so that I can leverage Firestore, Cloud Storage, and Vertex AI.

Acceptance Criteria

  1. WHEN storing documents, THE Backend_API SHALL use Cloud Storage with proper IAM permissions
  2. WHEN querying data, THE Backend_API SHALL use Firestore with efficient query patterns
  3. WHEN using AI, THE Backend_API SHALL integrate with Vertex AI for embeddings and RAG
  4. WHEN deploying, THE Backend_API SHALL run on Cloud Run with auto-scaling
  5. WHEN monitoring, THE Backend_API SHALL send logs and metrics to Cloud Logging and Monitoring
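
A sketch of the Cloud Storage plus Firestore flow using the official google-cloud-storage and google-cloud-firestore clients; the bucket name and collection layout are assumptions:

from google.cloud import firestore, storage

storage_client = storage.Client()
db = firestore.Client()

def store_document(league_id: str, filename: str, data: bytes) -> str:
    # Upload the raw document to Cloud Storage (access governed by IAM).
    bucket = storage_client.bucket("altsports-documents")  # assumed bucket
    blob = bucket.blob(f"leagues/{league_id}/{filename}")
    blob.upload_from_string(data)
    # Record a pointer in Firestore so queries stay cheap.
    db.collection("leagues").document(league_id).set(
        {"latest_document": blob.name}, merge=True
    )
    return blob.name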

Requirement 12: Development Experience

User Story: As a developer, I want excellent DX with hot reload, debugging, and comprehensive logging, so that I can develop and debug efficiently.

Acceptance Criteria

  1. WHEN running locally, THE Backend_API SHALL support hot reload with uvicorn --reload
  2. WHEN debugging, THE Backend_API SHALL provide detailed stack traces in development mode
  3. WHEN logging, THE Backend_API SHALL use structured logging with JSON format
  4. WHEN testing, THE Backend_API SHALL provide pytest fixtures for database and AI mocks
  5. WHEN documenting, THE Backend_API SHALL auto-generate OpenAPI specs and type hints
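
A sketch of the pytest fixtures in criterion 4; the app import path and FakeAIClient are assumptions about this codebase's layout:

import pytest
from fastapi.testclient import TestClient

from app.main import app  # assumed application module

class FakeAIClient:
    async def complete(self, prompt: str) -> str:
        return "stubbed response"  # deterministic stand-in for real AI calls

@pytest.fixture
def client() -> TestClient:
    return TestClient(app)

@pytest.fixture
def ai_client() -> FakeAIClient:
    return FakeAIClient()

def test_health(client):
    assert client.get("/health").status_code in (200, 503)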

Requirement 13: Deployment & Containerization

User Story: As a DevOps engineer, I want containerized deployment with Docker and Cloud Run support, so that I can deploy reliably to any environment.

Acceptance Criteria

  1. WHEN building, THE Backend_API SHALL create optimized Docker images with multi-stage builds
  2. WHEN deploying to Cloud Run, THE Backend_API SHALL handle Cloud Run requirements (PORT env var, etc.)
  3. WHEN scaling, THE Backend_API SHALL support horizontal scaling with stateless design
  4. WHEN updating, THE Backend_API SHALL support zero-downtime deployments with health checks
  5. WHEN rolling back, THE Backend_API SHALL support instant rollback to previous versions
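
Criterion 2 largely reduces to binding on the port Cloud Run injects; a minimal entrypoint sketch (the app.main:app import path is an assumption):

import os

import uvicorn

if __name__ == "__main__":
    # Cloud Run sets PORT; 8080 is the conventional local default.
    uvicorn.run("app.main:app", host="0.0.0.0", port=int(os.environ.get("PORT", "8080")))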

Design Considerations

FastAPI Structure

The backend uses FastAPI for high-performance REST APIs with automatic OpenAPI documentation.

Example API Endpoint:

from fastapi import FastAPI, Depends, HTTPException
from pydantic import BaseModel
from typing import Optional
 
app = FastAPI(title="AltSportsLeagues API", version="1.0.0")
 
class LeagueQuery(BaseModel):
    tier: Optional[str] = None

@app.get("/leagues/{league_id}", response_model=LeagueResponse)
async def get_league(league_id: str, query: LeagueQuery = Depends()):
    # league_service is provided by the Service_Layer; LeagueResponse is
    # defined under Database Models below.
    league = await league_service.get_league(league_id, query.tier)
    if not league:
        raise HTTPException(status_code=404, detail="League not found")
    return league

MCP Server Integration

The MCP server uses the FastMCP framework for tool registration and execution.

Tool Registration:

from fastmcp import FastMCP

mcp_server = FastMCP(name="AltSports MCP Server")

@mcp_server.tool()
async def get_league_intelligence(league_id: str) -> dict:
    """Get comprehensive intelligence for a sports league."""
    # intelligence_service is supplied by the Service_Layer.
    intelligence = await intelligence_service.get_league_intelligence(league_id)
    return {
        "league_id": league_id,
        "intelligence_score": intelligence.score,
        "recommendations": intelligence.recommendations,
        "risk_assessment": intelligence.risk
    }

Authentication Layer

JWT-based authentication using python-jose and FastAPI dependency injection.

import os

from fastapi import Depends, HTTPException, status
from fastapi.security import OAuth2PasswordBearer
from jose import JWTError, jwt
from passlib.context import CryptContext

SECRET_KEY = os.environ["JWT_SECRET_KEY"]  # load from configuration, never hardcode
ALGORITHM = "HS256"
pwd_context = CryptContext(schemes=["bcrypt"], deprecated="auto")
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")

def verify_token(token: str = Depends(oauth2_scheme)) -> str:
    try:
        payload = jwt.decode(token, SECRET_KEY, algorithms=[ALGORITHM])
        username: str = payload.get("sub")
        if username is None:
            raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token")
        return username
    except JWTError:
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token")

Rate Limiting

Using slowapi for rate limiting.

from fastapi import Request
from slowapi import Limiter, _rate_limit_exceeded_handler
from slowapi.util import get_remote_address
from slowapi.errors import RateLimitExceeded

limiter = Limiter(key_func=get_remote_address)
app.state.limiter = limiter
app.add_exception_handler(RateLimitExceeded, _rate_limit_exceeded_handler)

@app.get("/leagues/{league_id}")
@limiter.limit("100/minute")
async def get_league(league_id: str, request: Request):
    # slowapi requires the endpoint to accept the Request object so it can
    # resolve the client address used as the rate-limit key.
    pass

Database Models (SQLAlchemy + Pydantic)

from sqlalchemy import Column, Integer, String, DateTime
from sqlalchemy.orm import declarative_base
from pydantic import BaseModel, ConfigDict

Base = declarative_base()

class League(Base):
    __tablename__ = "leagues"

    id = Column(Integer, primary_key=True, index=True)
    name = Column(String, index=True)
    sport_type = Column(String)
    tier = Column(String)
    intelligence_score = Column(Integer)
    created_at = Column(DateTime)

class LeagueResponse(BaseModel):
    # Pydantic v2 style: from_attributes lets the model be built
    # directly from SQLAlchemy ORM objects.
    model_config = ConfigDict(from_attributes=True)

    id: int
    name: str
    sport_type: str
    tier: str
    intelligence_score: int
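
With from_attributes enabled, route handlers can build the response schema directly from ORM rows, e.g.:

# league_row is a League ORM instance fetched via the Service_Layer.
response = LeagueResponse.model_validate(league_row)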
