snscrape

version 0.7.0.20230622 | verified Fri May 01 | auth: no | python

snscrape is a social network service scraper that supports Twitter, Reddit, Telegram, and more. Current version 0.7.0.20230622 targets Python 3.8+. It is actively maintained, but its module layout evolves and breaking changes land across versions.

pip install snscrape
error ImportError: cannot import name 'TwitterSearchScraper' from 'snscrape'
cause Import path changed in 0.7+.
fix
Use: from snscrape.modules.twitter import TwitterSearchScraper
error AttributeError: module 'snscrape' has no attribute 'get_item'
cause Trying to call snscrape.get_item() directly; should use scraper instance method.
fix
Create scraper: scraper = TwitterSearchScraper(...), then scraper.get_items().
error requests.exceptions.ConnectionError: HTTPSConnectionPool: Max retries exceeded with url: /graphql/...
cause Twitter API rate limiting or blocking.
fix
Add delays between requests, use proxies, or reduce request frequency.
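A minimal sketch of the "add delays" fix: wrap any scraper iterator in a throttling generator. The `throttled` helper below is illustrative, not part of snscrape.

```python
import time

def throttled(items, delay=1.0):
    """Yield items from any iterable, sleeping between each one.

    Illustrative helper -- not part of snscrape. Wrap e.g.
    scraper.get_items() to slow the request rate."""
    for item in items:
        yield item
        time.sleep(delay)
```

Usage: `for tweet in throttled(scraper.get_items(), delay=2.0): ...`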
breaking snscrape 0.7+ changed the import structure from `snscrape.snscrape` to `snscrape.modules.*`. Old code using `from snscrape import snscrape` will break.
fix Change imports to `from snscrape.modules.<service> import ...`
gotcha Twitter scraping may be blocked or rate-limited; snscrape does not officially support authenticated scraping. Using excessive requests can lead to IP blocks.
fix Respect robots.txt, use polite delays, and consider official APIs for production.
deprecated The `snscrape.base.Scraper` base class is subject to internal changes and not meant for direct use.
fix Use service-specific scrapers such as `TwitterSearchScraper` (from `snscrape.modules.twitter`) or `RedditSearchScraper` (from `snscrape.modules.reddit`).
gotcha snscrape may break if Twitter/X changes its HTML/JSON structure; no guarantees of long-term stability.
fix Pin version and test scraping behavior regularly.
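One way to pin, using the release named in this card:

```shell
pip install snscrape==0.7.0.20230622
```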

Scrapes the first five tweets matching a Twitter search query.

from snscrape.modules.twitter import TwitterSearchScraper

# Search operators: from:, since:, until: (username goes without the @)
scraper = TwitterSearchScraper('from:elonmusk since:2022-01-01 until:2022-12-31')
for i, tweet in enumerate(scraper.get_items()):
    if i >= 5:
        break
    print(tweet.url, tweet.date, tweet.content[:50])
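To persist results as raw JSON, collect the same fields into serializable dicts. The `tweets_to_json` helper below is an illustrative sketch; it assumes each item exposes the `url`, `date`, and `content` attributes used in the loop above.

```python
import json

def tweets_to_json(items, limit=100):
    """Collect up to `limit` items into a JSON string.

    Illustrative helper -- assumes items with url/date/content
    attributes, as snscrape's tweet objects provide."""
    rows = []
    for i, tweet in enumerate(items):
        if i >= limit:
            break
        rows.append({
            "url": tweet.url,
            "date": str(tweet.date),
            "content": tweet.content,
        })
    return json.dumps(rows)
```

Usage: `open('tweets.json', 'w').write(tweets_to_json(scraper.get_items(), limit=50))`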