Overview

SmartScraper is our flagship LLM-powered web scraping service that intelligently extracts structured data from any website. Powered by advanced large language models, it understands context and content the way a human would, making web data extraction more reliable and efficient than ever.

Try SmartScraper instantly in our interactive playground - no coding required!

Getting Started

Quick Start

from scrapegraph_py import Client

client = Client(api_key="your-api-key")

# The extracted data is returned under the response's "result" key
response = client.smartscraper(
    website_url="https://scrapegraphai.com/",
    user_prompt="Extract info about the company"
)

Parameters

Parameter | Type | Required | Description
--------- | ---- | -------- | -----------
api_key | string | Yes | The ScrapeGraph API Key.
website_url | string | Yes | The URL of the webpage that needs to be scraped.
user_prompt | string | Yes | A textual description of what you want to achieve.
output_schema | object | No | The Pydantic or Zod object that describes the structure and format of the response.

Get your API key from the dashboard

Key Features

Universal Compatibility

Works with any website structure, including JavaScript-rendered content

AI Understanding

Contextual understanding of content for accurate extraction

Structured Output

Returns clean, structured data in your preferred format

Schema Support

Define custom output schemas using Pydantic or Zod

Use Cases

Content Aggregation

  • News article extraction
  • Blog post summarization
  • Product information gathering
  • Research data collection

Data Analysis

  • Market research
  • Competitor analysis
  • Price monitoring
  • Trend tracking

AI Training

  • Dataset creation
  • Training data collection
  • Content classification
  • Knowledge base building

Want to learn more about our AI-powered scraping technology? Visit our main website to discover how we’re revolutionizing web data extraction.

Other Functionality

Retrieve a previous request

If you know the request ID of a previous request you made, you can retrieve its status and output.

import { getSmartScraperRequest } from 'scrapegraph-js';

const apiKey = 'your_api_key';
const requestId = 'ID_of_previous_request';

try {
  const requestInfo = await getSmartScraperRequest(apiKey, requestId);
  console.log(requestInfo);
} catch (error) {
  console.error(error);
}

Parameters

Parameter | Type | Required | Description
--------- | ---- | -------- | -----------
apiKey | string | Yes | The ScrapeGraph API Key.
requestId | string | Yes | The request ID associated with the output of a previous smartScraper request.
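
The Python SDK exposes a similar lookup. A minimal sketch, assuming the client's get_smartscraper method (the exact name may vary between SDK versions):

from scrapegraph_py import Client

client = Client(api_key="your-api-key")

# Fetch the stored status and output of an earlier smartscraper call
request_info = client.get_smartscraper("ID_of_previous_request")
print(request_info)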

Custom Schema Example

Define exactly what data you want to extract:

from pydantic import BaseModel, Field

class ArticleData(BaseModel):
    title: str = Field(description="Article title")
    author: str = Field(description="Author name")
    content: str = Field(description="Main article content")
    publish_date: str = Field(description="Publication date")

response = client.smartscraper(
    website_url="https://example.com/article",
    user_prompt="Extract the article information",
    output_schema=ArticleData
)
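
Assuming the response follows the documented format, with the extracted data under the result key, you can validate it back into the same model:

# Hypothetical follow-up: re-parse the raw result with the schema above
article = ArticleData(**response["result"])
print(article.title, article.publish_date)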

Async Support

For applications requiring asynchronous execution, SmartScraper provides comprehensive async support through the AsyncClient:

import asyncio
from scrapegraph_py import AsyncClient
from pydantic import BaseModel, Field

# Define your schema
class WebpageSchema(BaseModel):
    title: str = Field(description="The title of the webpage")
    description: str = Field(description="The description of the webpage")
    summary: str = Field(description="A brief summary of the webpage")

async def main():
    # Initialize the async client
    async with AsyncClient(api_key="your-api-key") as client:
        # List of URLs to analyze
        urls = [
            "https://scrapegraphai.com/",
            "https://github.com/ScrapeGraphAI/Scrapegraph-ai",
        ]

        # Create scraping tasks for each URL
        tasks = [
            client.smartscraper(
                website_url=url,
                user_prompt="Summarize the main content",
                output_schema=WebpageSchema
            )
            for url in urls
        ]

        # Execute requests concurrently
        responses = await asyncio.gather(*tasks, return_exceptions=True)

        # Process results
        for i, response in enumerate(responses):
            if isinstance(response, Exception):
                print(f"Error for {urls[i]}: {response}")
            else:
                print(f"Result for {urls[i]}: {response['result']}")

# Run the async function
if __name__ == "__main__":
    asyncio.run(main())

SmartScraper Endpoint

The SmartScraper endpoint is our core service for extracting structured data from any webpage using advanced language models. It automatically adapts to different website layouts and content types, enabling quick and reliable data extraction.

Key Capabilities

  • Universal Compatibility: Works with any website structure, including JavaScript-rendered content
  • Schema Validation: Supports both Pydantic (Python) and Zod (JavaScript) schemas
  • Concurrent Processing: Efficient handling of multiple URLs through async support
  • Custom Extraction: Flexible user prompts for targeted data extraction

Endpoint Details

POST https://api.scrapegraphai.com/v1/smartscraper

Required Headers

Header | Description
------ | -----------
SGAI-APIKEY | Your API authentication key
Content-Type | application/json

Request Body

Field | Type | Required | Description
----- | ---- | -------- | -----------
website_url | string | Yes* | URL to scrape (*either this or website_html is required)
website_html | string | No | Raw HTML content to process
user_prompt | string | Yes | Instructions for data extraction
output_schema | object | No | Pydantic or Zod schema for response validation

Response Format

{
  "request_id": "sg-req-abc123",
  "status": "completed",
  "website_url": "https://example.com",
  "result": {
    // Structured data based on schema or extraction prompt
  },
  "error": null
}
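
If you are not using an SDK, the endpoint can be called directly over HTTP. A minimal sketch with Python's requests library, using only the headers and body fields documented above:

import requests

url = "https://api.scrapegraphai.com/v1/smartscraper"
headers = {
    "SGAI-APIKEY": "your-api-key",
    "Content-Type": "application/json",
}
payload = {
    # Either website_url or website_html must be provided
    "website_url": "https://example.com/article",
    "user_prompt": "Extract the article title and author",
}

response = requests.post(url, headers=headers, json=payload)
response.raise_for_status()

data = response.json()
print(data["status"], data["request_id"])
print(data["result"])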

Best Practices

  1. Schema Definition:
    • Define schemas to ensure consistent data structure
    • Use descriptive field names and types
    • Include field descriptions for better extraction accuracy
  2. Async Processing:
    • Use async clients for concurrent requests
    • Implement proper error handling
    • Monitor rate limits and implement backoff strategies
  3. Error Handling:
    • Always wrap requests in try-catch blocks
    • Check response status before processing
    • Implement retry logic for failed requests (see the backoff sketch after this list)
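
A minimal backoff sketch, assuming the synchronous client from the Quick Start and treating any raised exception as retryable:

import time
from scrapegraph_py import Client

client = Client(api_key="your-api-key")

def scrape_with_retries(url, prompt, max_retries=3):
    """Call smartscraper, retrying with exponential backoff on failure."""
    for attempt in range(max_retries):
        try:
            return client.smartscraper(website_url=url, user_prompt=prompt)
        except Exception as exc:
            if attempt == max_retries - 1:
                raise  # out of retries; surface the last error
            delay = 2 ** attempt  # back off 1s, 2s, 4s, ...
            print(f"Attempt {attempt + 1} failed ({exc}); retrying in {delay}s")
            time.sleep(delay)

result = scrape_with_retries("https://example.com", "Extract the page title")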

Integration Options

Official SDKs

  • Python SDK - Perfect for data science and backend applications
  • JavaScript SDK - Ideal for web applications and Node.js


Best Practices

Optimizing Extraction

  1. Be specific in your prompts
  2. Use schemas for structured data
  3. Handle pagination for multi-page content (see the sketch after this list)
  4. Implement error handling and retries
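
A pagination sketch along these lines; the ?page= URL pattern is hypothetical, so adapt it to the site you are scraping:

from scrapegraph_py import Client

client = Client(api_key="your-api-key")

# Hypothetical listing whose pages are addressed by a ?page= query parameter
base_url = "https://example.com/products?page={}"

all_items = []
for page in range(1, 4):  # pages 1 through 3
    response = client.smartscraper(
        website_url=base_url.format(page),
        user_prompt="Extract the product names and prices on this page",
    )
    all_items.append(response["result"])

print(all_items)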

Rate Limiting

  • Implement reasonable delays between requests (see the sketch after this list)
  • Use async clients for better performance
  • Monitor your API usage
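
One way to pace requests with the async client is to cap in-flight calls with a semaphore and sleep briefly between them. A minimal sketch, assuming the AsyncClient shown earlier:

import asyncio
from scrapegraph_py import AsyncClient

async def scrape_all(urls, max_concurrent=2, delay=1.0):
    """Scrape URLs with a concurrency cap and a delay between requests."""
    semaphore = asyncio.Semaphore(max_concurrent)

    async with AsyncClient(api_key="your-api-key") as client:

        async def scrape_one(url):
            async with semaphore:  # at most max_concurrent requests in flight
                response = await client.smartscraper(
                    website_url=url,
                    user_prompt="Summarize the main content",
                )
                await asyncio.sleep(delay)  # spacing before freeing the slot
                return response

        return await asyncio.gather(*(scrape_one(u) for u in urls))

results = asyncio.run(scrape_all(["https://scrapegraphai.com/"]))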

Example Projects

Check out our cookbook for real-world examples:

  • E-commerce product scraping
  • News aggregation
  • Research data collection
  • Content monitoring

Ready to Start?

Sign up now and get your API key to begin extracting data with SmartScraper!