Proxy Configuration

Overview

The ScrapeGraphAI API uses an intelligent proxy system that automatically handles web scraping requests through multiple proxy providers. The system uses a fallback strategy to ensure maximum reliability: if one provider fails, it automatically tries the next one.

No configuration required: the proxy system is fully automatic and transparent to API users. You don’t need to configure proxy credentials or settings yourself.

How It Works

The API automatically routes your scraping requests through multiple proxy providers in an optimized order (a conceptual sketch follows the list below):
  1. The system tries different proxy providers automatically
  2. If one provider fails, it automatically falls back to the next one
  3. Successful providers are cached for each domain to improve performance
  4. Everything happens transparently - you just make your API request as normal
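For illustration only, the fallback-with-caching behavior described above can be pictured roughly like the sketch below. The provider names, cache, and fetch_via callable are hypothetical placeholders; the real routing logic runs server-side inside the API and is not something you implement or configure.

# Conceptual sketch of provider fallback with per-domain caching (illustrative, not the actual implementation)
PROVIDERS = ["provider_a", "provider_b", "provider_c"]  # hypothetical provider list
provider_cache = {}  # domain -> provider that worked last time

def route_request(domain, fetch_via):
    # Try the cached provider for this domain first, then fall back through the remaining providers
    ordered = [provider_cache[domain]] if domain in provider_cache else []
    ordered += [p for p in PROVIDERS if p not in ordered]
    for provider in ordered:
        try:
            result = fetch_via(provider, domain)
            provider_cache[domain] = provider  # remember the provider that succeeded
            return result
        except Exception:
            continue  # this provider failed, fall back to the next one
    raise RuntimeError(f"All proxy providers failed for {domain}")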

Country Selection (Geotargeting)

You can optionally specify a country code to route requests through proxies in a specific country. This is useful for:
  • Accessing geo-restricted content
  • Getting localized versions of websites
  • Complying with regional requirements
  • Testing location-specific features

Using Country Code

Include the country_code parameter in your API request:
from scrapegraph_py import Client

client = Client(api_key="your-api-key")

# Request with country code
response = client.smartscraper(
    website_url="https://example.com",
    user_prompt="Extract product information",
    country_code="us"  # Route through US proxies
)

Supported Country Codes

The API supports geotargeting for a wide range of countries using ISO 3166-1 alpha-2 country codes:
Code     Country          Code     Country          Code     Country
us       United States    uk / gb  United Kingdom   ca       Canada
au       Australia        de       Germany          fr       France
it       Italy            es       Spain            nl       Netherlands
be       Belgium          ch       Switzerland      at       Austria
se       Sweden           no       Norway           dk       Denmark
fi       Finland          pl       Poland           cz       Czech Republic
ie       Ireland          pt       Portugal         gr       Greece
jp       Japan            kr       South Korea      cn       China
in       India            sg       Singapore        hk       Hong Kong
mx       Mexico           br       Brazil           ar       Argentina
cl       Chile            co       Colombia         pe       Peru
za       South Africa     eg       Egypt            ae       UAE
sa       Saudi Arabia     il       Israel           tr       Turkey
ru       Russia           ua       Ukraine          nz       New Zealand
And many more! The API supports over 100 countries. Use standard ISO 3166-1 alpha-2 country codes.
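If you want to catch typos before sending a request, you can add a light client-side check against the codes you actually use. The SUPPORTED_CODES set below is an illustrative subset taken from the table above, not an exhaustive list published by the API:

# Light client-side sanity check for country codes (the API remains the source of truth)
SUPPORTED_CODES = {
    "us", "uk", "gb", "ca", "au", "de", "fr", "it", "es", "nl",
    "jp", "kr", "cn", "in", "sg", "hk", "mx", "br", "ar", "za",
}  # subset from the table above; the API supports over 100 countries

def normalize_country_code(code: str) -> str:
    code = code.strip().lower()
    if code not in SUPPORTED_CODES:
        raise ValueError(f"Unrecognized country code: {code!r} (expected ISO 3166-1 alpha-2)")
    return code

country = normalize_country_code("US")  # -> "us"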

Available Parameters

The following parameters in API requests can affect proxy behavior:

country_code (optional)

  • Type: String
  • Description: Two-letter ISO country code to route requests through proxies in a specific country
  • Example: "us", "uk", "de", "it", "fr"
  • Default: No specific country (uses optimal routing)
  • Format: ISO 3166-1 alpha-2 (e.g., us, gb, de)

render_heavy_js (optional)

  • Type: Boolean
  • Description: Whether to render JavaScript-heavy pages. This may affect which proxy provider is used.
  • Default: false

Usage Examples

Basic Request (Automatic Proxy Selection)

from scrapegraph_py import Client

client = Client(api_key="your-api-key")

# Automatic proxy selection - no configuration needed
response = client.smartscraper(
    website_url="https://example.com",
    user_prompt="Extract product information"
)

Request with Country Code

from scrapegraph_py import Client

client = Client(api_key="your-api-key")

# Route through US proxies
response = client.smartscraper(
    website_url="https://example.com",
    user_prompt="Extract product information",
    country_code="us"
)

# Route through UK proxies
response = client.smartscraper(
    website_url="https://example.com",
    user_prompt="Extract product information",
    country_code="uk"
)

Request with JavaScript Rendering and Country Code

from scrapegraph_py import Client

client = Client(api_key="your-api-key")

# Combine JavaScript rendering with geotargeting
response = client.smartscraper(
    website_url="https://example.com",
    user_prompt="Extract product information",
    render_heavy_js=True,
    country_code="uk"
)

Real-World Use Cases

Accessing Geo-Restricted Content

from scrapegraph_py import Client

client = Client(api_key="your-api-key")

# Access US-only content
response = client.smartscraper(
    website_url="https://us-only-service.com",
    user_prompt="Extract available services",
    country_code="us"
)

Getting Localized Content

# Get German version of a website
response = client.smartscraper(
    website_url="https://example.com",
    user_prompt="Extract product prices in local currency",
    country_code="de"
)

# Get French version
response = client.smartscraper(
    website_url="https://example.com",
    user_prompt="Extract product prices in local currency",
    country_code="fr"
)

E-commerce Price Comparison

# Compare prices from different regions
countries = ["us", "uk", "de", "fr"]

for country in countries:
    response = client.smartscraper(
        website_url="https://ecommerce-site.com/product/123",
        user_prompt="Extract product price and availability",
        country_code=country
    )
    print(f"{country}: {response['result']}")

Best Practices

1. Use Country Code When Needed

Only specify a country code if you have a specific requirement:
  • ✅ Accessing geo-restricted content
  • ✅ Getting localized versions of websites
  • ✅ Complying with regional requirements
  • ❌ Don’t specify if you don’t need it - let the system optimize automatically

2. Let the System Handle Routing

The API automatically selects the best proxy provider for each request:
  • No manual proxy selection needed
  • Automatic failover ensures reliability
  • Performance is optimized automatically

3. Handle Errors Gracefully

If a request fails, the system has already tried multiple providers:
from scrapegraph_py import Client
import time

client = Client(api_key="your-api-key")

def scrape_with_retry(url, prompt, max_retries=3):
    for attempt in range(max_retries):
        try:
            response = client.smartscraper(
                website_url=url,
                user_prompt=prompt,
                country_code="us"
            )
            return response
        except Exception as e:
            if attempt < max_retries - 1:
                print(f"Attempt {attempt + 1} failed: {e}")
                time.sleep(2 ** attempt)  # Exponential backoff
            else:
                raise e

4. Monitor Rate Limits

Be aware of your API rate limits:
  • The proxy system respects these limits automatically
  • Monitor your usage in the dashboard
  • Implement appropriate delays between requests (see the sketch below)
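A minimal sketch of spacing out requests, assuming a batch of URLs and a fixed delay that you tune to your plan’s limits:

import time
from scrapegraph_py import Client

client = Client(api_key="your-api-key")

urls = ["https://example.com/page-1", "https://example.com/page-2"]  # example URLs
DELAY_SECONDS = 2  # adjust to stay comfortably within your plan's rate limits

results = []
for url in urls:
    results.append(client.smartscraper(
        website_url=url,
        user_prompt="Extract product information",
    ))
    time.sleep(DELAY_SECONDS)  # simple fixed delay between consecutive requests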

Troubleshooting

Request Failures

If your scraping request fails:
  1. Verify the URL: Make sure the URL is correct and accessible
  2. Check the website: Some websites may block automated access regardless of proxy
  3. Retry the request: The system uses automatic retries, but you can manually retry after a delay
  4. Try different parameters: Sometimes setting render_heavy_js to true can help with JavaScript-heavy sites (see the sketch after this list)
  5. Try a different country: If geo-restriction is the issue, try a different country_code
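As a sketch of the “try different parameters” suggestion above, you could retry a failed request with render_heavy_js enabled. The URL and the broad except clause are placeholders; adapt the fallback criteria to your application:

from scrapegraph_py import Client

client = Client(api_key="your-api-key")

def scrape_with_js_fallback(url, prompt):
    # First attempt without heavy JavaScript rendering, then retry once with it enabled
    try:
        return client.smartscraper(website_url=url, user_prompt=prompt)
    except Exception:
        return client.smartscraper(
            website_url=url,
            user_prompt=prompt,
            render_heavy_js=True,
        )

response = scrape_with_js_fallback("https://js-heavy-site.com", "Extract product information")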

Rate Limiting

If you receive rate limit errors (HTTP 429):
  • Wait a few minutes before making new requests
  • The API automatically handles rate limits on proxy providers
  • Consider implementing exponential backoff in your application
  • Check your API usage limits in the dashboard

Geo-Restricted Content

If you’re trying to access geo-restricted content:
  • Use the country_code parameter to specify the required country
  • Make sure the content is available in that country
  • Some content may still be restricted regardless of proxy location
  • Try multiple country codes if one doesn’t work (see the sketch below)
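As a sketch of the “try multiple country codes” suggestion above, you could loop through candidate codes and stop at the first request that succeeds. The candidate list and the website URL are placeholders:

from scrapegraph_py import Client

client = Client(api_key="your-api-key")

candidate_codes = ["us", "uk", "de"]  # order by how likely the content is to be available there

response = None
for code in candidate_codes:
    try:
        response = client.smartscraper(
            website_url="https://geo-restricted-site.com",
            user_prompt="Extract available content",
            country_code=code,
        )
        print(f"Succeeded with country_code={code}")
        break
    except Exception as e:
        print(f"country_code={code} failed: {e}")

if response is None:
    print("Content was not reachable from any of the tried countries")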

Proxy Selection Issues

If you’re experiencing proxy-related issues:
  • The system automatically tries multiple providers
  • No manual configuration is needed
  • If issues persist, contact support with your request ID
  • Check if the issue is specific to certain websites or domains

FAQ

Q: Do I need to configure proxy settings or provide proxy credentials?
A: No, the proxy system is fully managed and automatic. You don’t need to provide any proxy credentials or configuration.

Q: Can I choose which proxy provider handles my request?
A: No, the system automatically selects the best proxy provider for each request. This ensures optimal performance and reliability.

Q: How do I know which proxy was used for my request?
A: The proxy selection is handled automatically and transparently. You don’t need to know which proxy was used - just use the API as normal.

Q: Can I use my own proxies?
A: The API uses managed proxy services. If you have specific proxy requirements, please contact support.

Q: What happens if all proxy providers fail?
A: The API will return an error. The system tries multiple providers with automatic fallback, so this is rare. If it happens, verify the URL and try again.

Q: Does specifying a country_code affect pricing?
A: No, the country_code parameter doesn’t affect pricing. Credits are charged the same regardless of proxy location.

Q: Is country_code available for all services?
A: Yes, country_code is available for all scraping services including SmartScraper, SearchScraper, SmartCrawler, and Markdownify.

Q: Should I use uk or gb for the United Kingdom?
A: Both uk and gb refer to the United Kingdom. The API accepts both codes for compatibility.

API Reference

For detailed API documentation, see:

Support & Resources

Need Help?

Contact our support team for assistance with proxy configuration, geotargeting, or any other questions!