smart-scraper

Extract structured data from any URL using AI. Full docs →
just-scrape smart-scraper <url> -p <prompt>
just-scrape smart-scraper <url> -p <prompt> --schema <json>
just-scrape smart-scraper <url> -p <prompt> --scrolls <n>     # infinite scroll (0-100)
just-scrape smart-scraper <url> -p <prompt> --pages <n>       # multi-page (1-100)
just-scrape smart-scraper <url> -p <prompt> --stealth         # anti-bot bypass (+4 credits)
just-scrape smart-scraper <url> -p <prompt> --cookies <json> --headers <json>
just-scrape smart-scraper <url> -p <prompt> --plain-text      # plain text instead of JSON
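An end-to-end invocation might look like the sketch below. The URL, prompt, and schema fields are illustrative assumptions, and the `just-scrape` call itself is commented out because it needs a configured API key; the final `jq` line just sanity-checks the schema before any credits are spent.

```shell
# Illustrative JSON schema for --schema; the field names are assumptions.
schema='{"type":"object","properties":{"title":{"type":"string"},"price":{"type":"number"}},"required":["title"]}'

# The live call, commented out (requires an API key):
# just-scrape smart-scraper https://example.com/product -p "extract the product title and price" --schema "$schema"

# Confirm the schema is well-formed JSON before spending credits:
echo "$schema" | jq -e '.properties | keys'
```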

search-scraper

Search the web and extract structured data from results. Full docs →
just-scrape search-scraper <prompt>
just-scrape search-scraper <prompt> --num-results <n>         # sources to scrape (3-20, default 3)
just-scrape search-scraper <prompt> --no-extraction           # markdown only (2 credits vs 10)
just-scrape search-scraper <prompt> --schema <json>
just-scrape search-scraper <prompt> --stealth --headers <json>
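The `--headers` flag takes a JSON object, so it is easiest to build the payload in a variable first. The header values below are assumptions for illustration, and the live call is commented out since it needs an API key.

```shell
# Illustrative --headers payload; the header values are assumptions.
headers='{"User-Agent":"Mozilla/5.0 (X11; Linux x86_64)","Accept-Language":"en-US"}'

# The live call, commented out (requires an API key):
# just-scrape search-scraper "best static site generators" --num-results 5 --headers "$headers"

# Verify the payload parses as JSON with the expected key:
echo "$headers" | jq -e 'has("User-Agent")'
```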

markdownify

Convert any webpage to clean markdown. Full docs →
just-scrape markdownify <url>
just-scrape markdownify <url> --stealth
just-scrape markdownify <url> --headers <json>

crawl

Crawl multiple pages and extract data from each. Full docs →
just-scrape crawl <url> -p <prompt>
just-scrape crawl <url> -p <prompt> --max-pages <n>           # max pages (default 10)
just-scrape crawl <url> -p <prompt> --depth <n>               # crawl depth (default 1)
just-scrape crawl <url> --no-extraction --max-pages <n>       # markdown only (2 credits/page)
just-scrape crawl <url> -p <prompt> --schema <json>
just-scrape crawl <url> -p <prompt> --rules <json>            # include_paths, same_domain
just-scrape crawl <url> -p <prompt> --no-sitemap              # skip sitemap discovery
just-scrape crawl <url> -p <prompt> --stealth
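A `--rules` payload might be sketched as below. The keys `include_paths` and `same_domain` come from the option list above, but their exact value shapes are assumptions; the live call is commented out because it needs an API key.

```shell
# Illustrative --rules payload; value shapes are assumptions.
rules='{"include_paths":["/blog/*"],"same_domain":true}'

# The live call, commented out (requires an API key):
# just-scrape crawl https://example.com -p "extract post titles and dates" --rules "$rules" --max-pages 5

# Check the payload parses and same_domain is set:
echo "$rules" | jq -e '.same_domain == true'
```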

scrape

Get raw HTML content from a URL. Full docs →
just-scrape scrape <url>
just-scrape scrape <url> --stealth                            # anti-bot bypass (+4 credits)
just-scrape scrape <url> --branding                           # extract branding (+2 credits)
just-scrape scrape <url> --country-code <iso>                 # geo-targeting

sitemap

Get all URLs from a website’s sitemap. Full docs →
just-scrape sitemap <url>
just-scrape sitemap <url> --json | jq -r '.urls[]'
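The `.urls[]` shape above composes naturally with other commands, e.g. converting every discovered page to markdown. In this sketch the live `sitemap` call is commented out (it needs an API key) and replaced with a stand-in payload matching that documented shape, so the loop itself can be exercised.

```shell
# The live call, commented out (requires an API key):
# just-scrape sitemap https://example.com --json > sitemap.json

# Stand-in payload matching the documented .urls[] shape:
printf '%s' '{"urls":["https://example.com/","https://example.com/about"]}' > sitemap.json

# Fan each URL out to another command:
jq -r '.urls[]' sitemap.json | while read -r url; do
  echo "would run: just-scrape markdownify $url"
done
```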

agentic-scraper

Browser automation with AI: log in, click, navigate, fill forms. Full docs →
just-scrape agentic-scraper <url> -s <steps>
just-scrape agentic-scraper <url> -s <steps> --ai-extraction -p <prompt>
just-scrape agentic-scraper <url> -s <steps> --schema <json>
just-scrape agentic-scraper <url> -s <steps> --use-session    # persist browser session
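A plausible shape for `<steps>` is a JSON array of natural-language browser actions, but that format is an assumption; check the full docs before relying on it. The live call is commented out since it needs an API key.

```shell
# Assumption: -s takes a JSON array of natural-language actions.
steps='["open the login form","type jane@example.com into the email field","click Sign in"]'

# The live call, commented out (requires an API key):
# just-scrape agentic-scraper https://example.com -s "$steps" --use-session

# Confirm the steps payload parses as a JSON array:
echo "$steps" | jq -e 'length == 3'
```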

generate-schema

Generate a JSON schema from a natural language description.
just-scrape generate-schema <prompt>
just-scrape generate-schema <prompt> --existing-schema <json>

history

Browse request history for any service. Interactive by default: use the arrow keys to navigate and select a request to view its details.
just-scrape history <service>
just-scrape history <service> <request-id>
just-scrape history <service> --page <n>                      # start from page (default 1)
just-scrape history <service> --page-size <n>                 # results per page (max 100)
just-scrape history <service> --json
Services: markdownify, smartscraper, searchscraper, scrape, crawl, agentic-scraper, sitemap

credits

Check your credit balance.
just-scrape credits
just-scrape credits --json | jq '.remaining_credits'

validate

Validate your API key.
just-scrape validate

Global flags

All commands support these flags:
Flag      Description
--json    Machine-readable JSON output, no spinners or banners
--help    Show help for a command
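Because `--json` suppresses spinners and banners, output pipes cleanly into tools like `jq` for scripting. A low-balance guard might look like this sketch; the live call is commented out (it needs an API key) and replaced with a stand-in payload using the `.remaining_credits` field shown under `credits`.

```shell
# The live call, commented out (requires an API key):
# credits=$(just-scrape credits --json | jq '.remaining_credits')

# Stand-in payload using the documented .remaining_credits field:
credits=$(printf '%s' '{"remaining_credits": 120}' | jq '.remaining_credits')

# Bail out before an expensive crawl if the balance is low:
if [ "$credits" -lt 10 ]; then
  echo "low on credits, topping up first"
fi
```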