Documentation Index
Fetch the complete documentation index at: https://docs.scrapegraphai.com/llms.txt
Use this file to discover all available pages before exploring further.
Overview
The ScrapeGraphAI MCP Server is a production-ready Model Context Protocol (MCP) server that connects Large Language Models (LLMs) to the ScrapeGraph AI API. It enables AI assistants like Claude and Cursor to perform AI-powered web scraping, research, and crawling directly through natural language interactions.
Star us on GitHub
If this server is helpful, a star goes a long way. Thanks!
What is MCP?
The Model Context Protocol (MCP) is a standardized way for AI assistants to access external tools and data sources. By using the ScrapeGraphAI MCP Server, your AI assistant gains access to powerful web scraping capabilities without needing to write code.
Key Features
17 Powerful Tools
Scrape, extract, search, crawl, generate schemas, monitor scheduled jobs (with activity polling), and manage your account
Remote & Local
Use the hosted HTTP endpoint or run locally via Python
Universal Compatibility
Works with Cursor, Claude Desktop, and any MCP-compatible client
Production Ready
Robust error handling, timeouts, and reliability tested in production
Available Tools
The MCP server exposes the following tools via API v2:

| Tool | Description |
|---|---|
| scrape | Fetch page content in any format: markdown (default), html, screenshot, branding, links, images, summary (POST /scrape) |
| extract | AI-powered structured extraction from a URL (POST /extract) |
| search | Search the web and extract structured results (POST /search) |
| crawl_start | Start an async multi-page crawl in markdown, html, links, images, summary, branding, or screenshot format (POST /crawl) |
| crawl_get_status | Poll crawl results (GET /crawl/:id) |
| crawl_stop | Stop a running crawl job (POST /crawl/:id/stop) |
| crawl_resume | Resume a stopped crawl job (POST /crawl/:id/resume) |
| schema | Generate or augment a JSON Schema from a prompt (POST /schema) |
| credits | Check your credit balance (GET /credits) |
| history | Browse request history with pagination (GET /history) |
| monitor_create | Create a scheduled extraction job (POST /monitor) |
| monitor_list | List all monitors (GET /monitor) |
| monitor_get | Get monitor details (GET /monitor/:id) |
| monitor_pause | Pause a running monitor (POST /monitor/:id/pause) |
| monitor_resume | Resume a paused monitor (POST /monitor/:id/resume) |
| monitor_delete | Delete a monitor (DELETE /monitor/:id) |
| monitor_activity | Poll tick history for a monitor with pagination (GET /monitor/:id/activity) |
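The tools in the table above are thin wrappers over the v2 REST endpoints. As a rough sketch of what a client would send, the helpers below build request payloads for scrape and extract; the base URL and body field names (website_url, user_prompt, output_format) are assumptions drawn from the endpoint list, not verified against the live API:

```python
import json

BASE_URL = "https://api.scrapegraphai.com/v2"  # assumed base URL


def build_scrape_request(url: str, output_format: str = "markdown") -> dict:
    """Payload sketch for POST /scrape; markdown is the default format."""
    return {
        "method": "POST",
        "endpoint": f"{BASE_URL}/scrape",
        "body": {"website_url": url, "output_format": output_format},
    }


def build_extract_request(url: str, prompt: str) -> dict:
    """Payload sketch for POST /extract (AI-powered structured extraction)."""
    return {
        "method": "POST",
        "endpoint": f"{BASE_URL}/extract",
        "body": {"website_url": url, "user_prompt": prompt},
    }


print(json.dumps(build_scrape_request("https://example.com"), indent=2))
```

In practice you never build these requests yourself: the MCP client (Cursor, Claude Desktop) calls the tools for you, and this sketch only illustrates the shape of what flows over the wire.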
Migrating from v2 (scrapegraph-mcp ≤ 2.x)? Tools were renamed in v3.0.0 to match the v2 API canonical names:
- smartscraper → extract
- searchscraper → search
- smartcrawler_initiate → crawl_start
- smartcrawler_fetch_results → crawl_get_status
- sgai_history → history
- generate_schema → schema

markdownify was removed; use scrape with output_format="markdown" instead. See the v3.0.0 release notes for full details.
Quick Start
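The rename mapping above, expressed as a small lookup table, can help when updating saved prompts or client configs that still reference the old tool names (the helper name migrate_tool_name is illustrative, not part of the package):

```python
# v2 -> v3 tool renames from the migration notes above.
RENAMED_TOOLS = {
    "smartscraper": "extract",
    "searchscraper": "search",
    "smartcrawler_initiate": "crawl_start",
    "smartcrawler_fetch_results": "crawl_get_status",
    "sgai_history": "history",
    "generate_schema": "schema",
}


def migrate_tool_name(name):
    """Return the v3 name for an old tool; None means the tool was removed
    (markdownify: use scrape with output_format="markdown" instead)."""
    if name == "markdownify":
        return None  # removed in v3.0.0
    return RENAMED_TOOLS.get(name, name)  # already-current names pass through
```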
Get Your API Key
Create an account and copy your API key from the ScrapeGraph Dashboard
Choose Your Client
Select your preferred AI assistant: Cursor or Claude Desktop
Setup Guides
Cursor Setup
Configure ScrapeGraph MCP in Cursor (remote-first)
Claude Desktop Setup
Configure ScrapeGraph MCP in Claude Desktop (remote-first)
Recommended: Remote HTTP Endpoint
The easiest way to get started is using our hosted MCP endpoint.
Local Installation
Prefer running locally? You can install the Python package and run it via stdio. This gives you more control and doesn't require internet connectivity for the MCP connection itself. The remote endpoint is recommended for most users as it's simpler to set up and maintain.
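As a rough sketch, both transports are wired up through an MCP client config entry along these lines; the endpoint URL, command, package name, and environment variable below are placeholders, so copy the exact values from the Cursor or Claude Desktop setup guides:

```json
{
  "mcpServers": {
    "scrapegraph-remote": {
      "url": "https://<hosted-mcp-endpoint>"
    },
    "scrapegraph-local": {
      "command": "uvx",
      "args": ["scrapegraph-mcp"],
      "env": { "SGAI_API_KEY": "<your-api-key>" }
    }
  }
}
```

You would normally configure only one of the two entries: the url form for the hosted endpoint, or the command form for a local stdio process.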
Use Cases
- Research & Analysis - Extract data from multiple sources for research
- Content Aggregation - Collect and structure content from websites
- Market Intelligence - Monitor competitors and market trends
- Lead Generation - Extract contact information and company data
- Data Collection - Build datasets from web sources
Next Steps
- Read the detailed setup guide for Cursor
- Read the detailed setup guide for Claude Desktop
- Browse the GitHub repo for source, advanced configuration, and release notes
Ready to Start?
Choose your client and start scraping with AI!

