Claude Code Setup & Integration
This guide covers the complete setup and usage of Claude Code with the AltSportsLeagues.ai project, including MCP server integration, custom commands, and development workflows.
Overview
The AltSportsLeagues.ai project leverages Claude Code extensively through:
- 15+ MCP Server Integrations: n8n, Atlassian, Google Workspace, databases, and more
- 350+ Custom Slash Commands: Organized by category for rapid development
- 50+ Specialized Agents: Architecture, deployment, evaluation, and domain-specific agents
- Real-time Workflow Automation: Integration with n8n for automated pipelines
- Multi-Database Orchestration: Supabase, Neo4j, Firebase, and FAISS/ChromaDB
MCP Server Configuration
Configuration File Location
All MCP servers are configured in .cursor/mcp.json at the project root.
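To see which servers are currently registered, the file can be inspected directly. A minimal sketch, assuming the standard top-level mcpServers key used by Cursor-style configs:
# Minimal sketch: list the MCP servers registered in .cursor/mcp.json.
# Assumes the standard top-level "mcpServers" key; adjust if the file differs.
import json
import pathlib

config = json.loads(pathlib.Path(".cursor/mcp.json").read_text())
for name in sorted(config.get("mcpServers", {})):
    print(name)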
Primary MCP Servers
1. n8n-mcp (Production Workflow Automation)
Purpose: Production workflow automation and orchestration
Endpoint: https://altsportsdata.app.n8n.cloud
Capabilities:
- 525+ n8n nodes with 263 AI-ready tools
- Workflow creation, validation, and deployment
- Template library with 1000+ workflows
- Real-time workflow execution and monitoring
- Node search and documentation access
Common Use Cases:
// Search for workflow nodes
mcp__n8n-mcp__search_nodes({ query: "slack" })
// Create workflow
mcp__n8n-mcp__n8n_create_workflow({
name: "League Onboarding Pipeline",
nodes: [...],
connections: {...}
})
// Trigger workflow via webhook
mcp__n8n-mcp__n8n_trigger_webhook_workflow({
webhookUrl: "https://altsportsdata.app.n8n.cloud/webhook/...",
data: { leagueId: "123" }
})
Configuration:
{
"n8n-mcp": {
"command": "npx",
"args": ["n8n-mcp"],
"env": {
"MCP_MODE": "sse",
"LOG_LEVEL": "error",
"N8N_API_URL": "https://altsportsdata.app.n8n.cloud",
"N8N_API_KEY": "your-api-key"
}
}
}
2. n8n-mcp-local (Development Instance)
Purpose: Development and testing workflows
Endpoint: https://n8n.altsportsleagues.ai
Use Cases:
- Local workflow development
- Custom league onboarding pipelines
- Integration testing
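For quick integration tests outside Claude Code, a development workflow's webhook can also be called directly. A minimal sketch using the requests library; the webhook path and payload shape are illustrative, not a real endpoint:
# Minimal sketch: trigger an n8n webhook on the development instance.
# The webhook path and payload are illustrative; copy the real URL from the
# workflow's Webhook node.
import requests

WEBHOOK_URL = "https://n8n.altsportsleagues.ai/webhook/league-onboarding"  # hypothetical path

response = requests.post(WEBHOOK_URL, json={"leagueId": "123"}, timeout=30)
response.raise_for_status()
print(response.json())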
3. Atlassian (Jira & Confluence)
Purpose: Project management and documentation automation
Capabilities:
- Create Jira issues from league questionnaires
- Update project status automatically
- Search and retrieve Confluence documentation
- Automated ticket creation and updates
Docker-based MCP Server:
{
"atlassian": {
"command": "docker",
"args": [
"run", "-i", "--rm",
"-v", "/app/output:/app/output",
"-e", "JIRA_URL",
"-e", "JIRA_API_TOKEN",
"ghcr.io/sooperset/mcp-atlassian:latest"
],
"env": {
"JIRA_URL": "https://altsportsdata.atlassian.net",
"JIRA_USERNAME": "your-email@example.com",
"JIRA_API_TOKEN": "your-token"
}
}
}
Example Usage:
// Create Jira ticket for new league
mcp__atlassian__jira_create_issue({
project: "ASD",
summary: "New League Onboarding: PWHL",
description: "Process league questionnaire and generate contract",
issueType: "Task"
})
// Search Confluence for similar leagues
mcp__atlassian__confluence_search({
query: "ice hockey leagues",
limit: 10
})
4. Google Workspace Integration
Three MCP Servers for Google Services:
- gdrive: File management and sharing
- google-workspace: Gmail, Calendar, Sheets
- mcp-gdrive-sheets: Advanced Sheets operations
Use Cases:
- Process league questionnaire PDFs from Drive
- Extract data from Google Sheets
- Send automated emails via Gmail
- Schedule follow-ups in Calendar
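These flows run through the Google MCP servers, but the underlying Drive call looks roughly like this sketch with google-api-python-client (the search query and token file are illustrative, not the project's actual setup):
# Minimal sketch: find league questionnaire PDFs in Drive before processing.
# The search query and token file name are illustrative only.
from google.oauth2.credentials import Credentials
from googleapiclient.discovery import build

creds = Credentials.from_authorized_user_file("token.json")  # hypothetical token file
drive = build("drive", "v3", credentials=creds)

results = drive.files().list(
    q="name contains 'questionnaire' and mimeType = 'application/pdf'",
    fields="files(id, name)",
).execute()

for f in results.get("files", []):
    print(f["id"], f["name"])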
5. League Discovery Cross-Comparison
Custom Python MCP Server:
python -m apps.backend.mcp_servers.servers.league_discovery_cross_comparison_mcp
Purpose:
- Discover new leagues across multiple sources
- Cross-compare league data
- Identify partnership opportunities
- Analyze market segments
Database: Connected to Supabase production instance
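A server like this typically exposes its discovery and comparison logic as MCP tools. A minimal sketch using FastMCP from the Python MCP SDK; the tool name, fields, and scoring are illustrative, not the actual implementation:
# Minimal sketch of a custom MCP server exposing a cross-comparison tool.
# Tool name, fields, and scoring logic are illustrative only.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("league-discovery-cross-comparison")

@mcp.tool()
def compare_leagues(league_a: str, league_b: str) -> dict:
    """Cross-compare two leagues on basic market-segment attributes."""
    # The real server would query the Supabase production instance here.
    return {"league_a": league_a, "league_b": league_b, "overlap_score": 0.0}

if __name__ == "__main__":
    mcp.run()  # stdio transport by default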
6. Development Tools
playwright: Browser automation and testing
- End-to-end testing
- Screenshot generation
- User flow validation (see the Playwright sketch after this list)
puppeteer: Web scraping and data extraction
- League website scraping
- Social media monitoring
- Competitor analysis
sequential-thinking: Advanced reasoning
- Complex multi-step analysis
- Decision tree exploration
- Strategic planning
memory: Persistent context
- Store important findings across sessions
- Maintain conversation context
- Build knowledge base
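For the playwright entry above, the same checks can be scripted directly when needed; a minimal sketch with the Playwright Python API (URL and output path are illustrative):
# Minimal sketch: load the local frontend and capture a screenshot.
# URL and output path are illustrative only.
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch()
    page = browser.new_page()
    page.goto("http://localhost:3000")
    page.screenshot(path="test.png")
    browser.close()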
7. Database MCP Servers
postgres: Direct PostgreSQL access
sqlite: Local database operations
supabase: Custom Supabase MCP for relational data
Use Cases:
- Direct database queries when API overhead is unnecessary
- Batch data operations
- Schema migrations
- Performance testing
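For the direct-query and batch cases, the same work can be done over a plain Postgres connection. A minimal sketch with psycopg2; the table, columns, and environment variable name are illustrative:
# Minimal sketch: direct batch insert when API overhead is unnecessary.
# Table, columns, and the SUPABASE_DB_URL variable name are illustrative only.
import os
import psycopg2

conn = psycopg2.connect(os.environ["SUPABASE_DB_URL"])  # hypothetical env var
rows = [("PWHL", "ice hockey"), ("MLR", "rugby union")]

with conn, conn.cursor() as cur:  # the connection context manager commits on success
    cur.executemany(
        "INSERT INTO leagues (name, sport) VALUES (%s, %s)",  # illustrative schema
        rows,
    )
conn.close()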
8. Infrastructure & Cloud
docker: Container management
- Build and deploy containers
- Manage container lifecycle
- View logs and metrics
kubernetes: K8s operations
- Deploy to clusters
- Scale services
- Monitor deployments
aws: AWS service integration
- S3 file operations
- Lambda function management
- CloudWatch logs
Custom Slash Commands
The project includes 350+ custom slash commands organized by category.
Command Categories
Prime Commands (/prime-*)
Initialize Claude Code context for specific development areas.
Available Primes:
- /prime-nextjs: Next.js frontend development
- /prime-python: Python backend development
- /prime-mcp: MCP server development
- /prime-altsportsdata: Business intelligence platform
- /prime-n8n: n8n workflow automation
- /prime-db-advisor: Database selection and architecture
Example Usage:
# Start Next.js development session
/prime-nextjs
# Initialize Python backend context
/prime-python
# Prepare for MCP server development
/prime-mcp
Database Commands (/database:*)
Comprehensive database operations across multiple backends.
ETL Operations:
- /database:etl:database.etl.supabase.crud-operations: Test CRUD operations
- /database:etl:database.etl.supabase.health-check: Comprehensive health validation
- /database:etl:database.etl.ai.google-vertex-single-query-test: Test RAG queries
Neo4j Graph Operations:
- /database:etl:database.etl.neo4j.graph-database-connectivity: Test graph connections
- /neo4j-league-exploration: Comprehensive league graph exploration
Firebase Real-time:
- /database:etl:database.etl.firebase.firestore-connectivity-test: Test Firestore
Example:
# Test Supabase connectivity and schema
/database:etl:database.etl.supabase.database-connectivity-test
# Run comprehensive database health check
/database:etl:database.etl.supabase.health-check
# Test Neo4j graph queries
/database:etl:database.etl.neo4j.graph-database-connectivity
Media Commands (/media:email:*)
Email processing and intelligence workflows.
Available Commands:
- /media:email:process-email-attachments: Extract league questionnaires
- /media:email:scan-inbox-summary: Generate inbox intelligence
- /media:email:fetch-recent-emails: Retrieve and filter emails
- /media:email:action-evaluate-satisfaction: Evaluate response quality
Example:
# Process new league questionnaire from email
/media:email:process-email-attachments
# Generate daily inbox summary
/media:email:scan-inbox-summary
Deployment Commands (/deploy:*)
Automated deployment workflows.
Available Deployments:
- /deploy:google.cloud-deployment: Deploy backend to Cloud Run
- /deploy:deploy-docs: Deploy documentation site to Vercel
- /deploy:prd.google-cloud-generator: Generate PRD for Cloud deployment
Example:
# Deploy backend API to Google Cloud Run
/deploy:google.cloud-deployment
# Deploy docs site to Vercel
/deploy:deploy-docs
Evaluation Commands (/evals:*)
Comprehensive testing and quality assurance.
Available Evaluations:
- /evals:cook-eval-loop: Master orchestration for 80% → 100% production readiness
- /evals:email-testing-orchestration: Email system comprehensive testing
- /evals:test-contract-downloads: Contract accessibility verification
Example:
# Run comprehensive evaluation cycle
/evals:cook-eval-loop
# Test email processing pipeline
/evals:email-testing-orchestration
AltSportsData Commands (/altsportsdata:*)
Business intelligence and partnership workflows.
Categories:
- Enhancement: Data processing and enrichment
- Analytics: Knowledge base and insights
- Intelligence: Workflow orchestration
Example Commands:
# Convert league questionnaire to contract
/altsportsdata:convert-league-questionnaire-to-contracts
# Analyze league data compatibility
/altsportsdata:08-intelligence:altsportsdata.intelligence.55.league-contract-tier-classification-comparison-flow
# Generate usage cost analysis
/altsportsdata:08-intelligence:altsportsdata.intelligence.81.usage-cost-analyzer-flow
Custom Agents
The project includes 50+ specialized agents for different development tasks.
Agent Categories
1. Architects
System Design Agents:
- system-architect: Senior system architect with 15+ years experience
- backend-typescript-architect: Backend TypeScript/Bun specialist
- frontend-developer: React/Next.js frontend expert
- data-architect: Database and data platform design
- security-architect: Security and compliance specialist
Usage:
# Launch system architecture review
@system-architect
# Design backend API architecture
@backend-typescript-architect
# Plan frontend component structure
@frontend-developer
2. Deployment Specialists
Multi-Phase Deployment:
- multi-phase-deployment-orchestrator: Complete deployment pipeline
- gcloud-deployment-specialist: Google Cloud Run deployment
- vercel-deployment-specialist: Vercel edge deployment
- github-deployment-specialist: GitHub Actions workflows
Usage:
# Orchestrate full deployment
@multi-phase-deployment-orchestrator
# Deploy to Cloud Run
@gcloud-deployment-specialist
3. Evaluation & Testing
Quality Assurance Agents:
- production-readiness-evaluator: 80% → 100% production readiness
- test.evals.nextjs-react-modern-web: Next.js app evaluation
- test.evals.production-deployment: Deployment readiness check
- fastapi-testing-specialist: FastAPI comprehensive testing
4. Specialist Agents
Domain Experts:
- mcp-engineer: MCP server development specialist
- n8nagents: Workflow automation experts
- ui-generation-agent: UI component generation
- refactor-agent: Code quality improvement
Development Workflows
Workflow 1: New League Onboarding
Steps:
1. Prime Claude Code:
/prime-altsportsdata
2. Process Email with Questionnaire:
/media:email:process-email-attachments
3. Generate Contract:
/altsportsdata:convert-league-questionnaire-to-contracts
4. Create Jira Ticket (via MCP):
mcp__atlassian__jira_create_issue({
project: "ASD",
summary: "New League: [LEAGUE_NAME]",
description: "Contract generated and ready for review"
})
5. Deploy to Database:
/database:etl:database.etl.supabase.altsports-league-onboarding
Workflow 2: MCP Server Development
Steps:
1. Prime for MCP Development:
/prime-mcp
2. Use MCP Scaffolding Agent:
@mcp-server-scaffolder
# Or generate single-file MCP
@meta-sfmcp-generator
3. Test with n8n Integration:
// Validate MCP server works with n8n
mcp__n8n-mcp__validate_workflow({
workflow: {...}
})
4. Deploy to Cloud Run:
/deploy:google.cloud-deployment
Workflow 3: Frontend Development
Steps:
1. Prime for Next.js:
/prime-nextjs
2. Generate UI Components:
@ui-generation-agent
# Or use specific UI architect
@ui-sidebar-architect
3. Test with Playwright:
// Use Playwright MCP for testing
mcp__playwright__navigate({ url: "http://localhost:3000" })
mcp__playwright__screenshot({ path: "test.png" })
4. Deploy to Vercel:
/deploy:deploy-docs
Workflow 4: Data Pipeline Testing
Steps:
1. Test Database Connectivity:
/database:etl:database.etl.supabase.database-connectivity-test
2. Run ETL Operations:
/database:etl:database.etl.supabase.crud-operations
3. Validate with Neo4j:
/database:etl:database.etl.neo4j.graph-database-connectivity
4. Check Vector Embeddings:
/database:etl:database.etl.ai.google-vertex-single-query-test
Best Practices
MCP Server Usage
1. Use n8n-mcp for Automation:
- Batch operations
- Scheduled workflows
- Multi-step pipelines
2. Leverage Atlassian MCP:
- Automatic ticket creation
- Documentation updates
- Project tracking
3. Database MCP Strategy:
- Use Supabase MCP for relational queries
- Neo4j MCP for graph traversals
- FAISS/ChromaDB for semantic search (see the sketch below)
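Semantic search against the vector layer can also be exercised directly. A minimal sketch with ChromaDB; the collection name and documents are illustrative:
# Minimal sketch: semantic search over league descriptions with ChromaDB.
# Collection name and documents are illustrative only.
import chromadb

client = chromadb.Client()  # in-memory; use chromadb.PersistentClient for disk-backed storage
collection = client.get_or_create_collection("league-descriptions")

collection.add(
    ids=["pwhl", "mlr"],
    documents=[
        "Professional Women's Hockey League - women's ice hockey",
        "Major League Rugby - rugby union",
    ],
)

results = collection.query(query_texts=["women's ice hockey league"], n_results=1)
print(results["ids"])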
Slash Command Best Practices
1. Start with Prime Commands:
- Initialize context before development
- Load relevant documentation
- Set up environment variables
2. Use Evaluation Commands Regularly:
- Run /evals:cook-eval-loop before deployment
- Test email workflows with /evals:email-testing-orchestration
3. Leverage Database Commands:
- Test connectivity before operations
- Run health checks regularly
- Validate schemas after changes
Agent Best Practices
1. Architecture First:
- Use architect agents for design phase
- Get system design review before implementation
2. Specialized Agents for Complex Tasks:
- MCP development → @mcp-engineer
- UI generation → @ui-generation-agent
- Code quality → @refactor-agent
3. Evaluation Agents Before Deployment:
- @production-readiness-evaluator
- @test.evals.production-deployment
Troubleshooting
MCP Server Not Available
Issue: MCP server not responding
Solution:
# Check MCP server status
cat .cursor/mcp.json | jq '.mcpServers["n8n-mcp"]'
# Restart MCP server
# Claude Code will auto-restart on next use
Slash Command Not Found
Issue: Custom command not recognized
Solution:
# Check command exists
ls .claude/commands/
# Verify command syntax
cat .claude/commands/database/etl/database.etl.supabase.database-connectivity-test.md
Agent Not Loading
Issue: Agent configuration not loading
Solution:
# Check agent exists
ls .claude/agents/
# Verify agent configuration
cat .claude/agents/architects/system-architect.md
Additional Resources
Support
For issues with Claude Code setup:
- Check the .cursor/mcp.json configuration
- Verify environment variables in .env.local
- Review agent/command documentation
- Consult the project CLAUDE.md at the repository root