GA Attribution Scrape
v0.2.1 · verified Sat May 09 · Python
Scrapes attribution data from the Google Analytics Model Comparison Tool via JS network requests and sends it to BigQuery. Currently v0.2.1 (pre-1.0, with sporadic releases). Works only on GA Goals (not ecommerce) and loops through each goal individually rather than aggregating them.
Install: pip install ga-attribution-scrape

Common errors
Error: ModuleNotFoundError: No module named 'ga_attribution_scrape'
Cause: The package is installed under the hyphenated name, but the import uses underscores.
Fix: Run pip install ga-attribution-scrape, then import using underscores: from ga_attribution_scrape import ScrapeAttribution.

Error: TypeError: __init__() got an unexpected keyword argument 'sample_size'
Cause: Versions before 0.2.0 did not accept the sample_size parameter.
Fix: Update to the latest version: pip install --upgrade ga-attribution-scrape

Error: google.auth.exceptions.DefaultCredentialsError: Could not automatically determine credentials.
Cause: Credentials path not provided or invalid.
Fix: Pass a valid credentials_path to ScrapeAttribution or set the GOOGLE_APPLICATION_CREDENTIALS environment variable.

Warnings
Gotcha: Works only on GA Goals, not ecommerce conversions. Does not aggregate goals; it loops through each goal individually.
Fix: Ensure you're using GA Goal IDs, not ecommerce conversion IDs.
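Goal IDs passed to the scraper (as in the Quickstart below) are plain numeric strings, whereas ecommerce conversion identifiers typically are not. A minimal sanity check — the helper name is ours, not part of the package:

```python
def looks_like_goal_id(value: str) -> bool:
    """Heuristic: GA Goal IDs used by this scraper are positive numeric strings."""
    return value.isdigit() and int(value) > 0
```

This won't catch every misconfiguration, but it rejects obviously non-goal identifiers before a scrape run is wasted.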
Gotcha: Requires a valid Google Cloud service account key with BigQuery write permissions. The library fails silently if permissions are missing.
Fix: Check that the service account has roles/bigquery.dataEditor on the dataset.
Breaking: The library relies on headless browser automation (likely Selenium) and may break with browser updates.
Fix: Keep browser drivers updated. Consider pinning the browser version in Docker.
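One way to keep the browser and driver in lockstep is to install both from the same Debian release in a container, since the distro ships matched chromium and chromium-driver packages. A hypothetical Dockerfile fragment — the base image and package choices are illustrative, not tested against this library:

```dockerfile
FROM python:3.11-slim

# chromium and chromium-driver come from the same Debian snapshot,
# so their versions stay compatible with each other
RUN apt-get update && apt-get install -y --no-install-recommends \
        chromium chromium-driver \
    && rm -rf /var/lib/apt/lists/*

RUN pip install ga-attribution-scrape
```

Rebuilding the image deliberately (rather than auto-updating the host browser) makes breakage from browser updates a controlled event.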
Imports
- ScrapeAttribution
  Wrong: from ga_attribution_scrape.scraper import ScrapeAttribution
  Correct: from ga_attribution_scrape import ScrapeAttribution
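Since the distribution name is hyphenated but the module name uses underscores, a quick standard-library check can confirm the import path resolves before you debug anything deeper; the helper name here is purely illustrative:

```python
import importlib.util


def module_available(name: str) -> bool:
    """Return True if the import system can find the named top-level module."""
    return importlib.util.find_spec(name) is not None


# e.g. module_available("ga_attribution_scrape") — underscores, not hyphens
```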
Quickstart
from ga_attribution_scrape import ScrapeAttribution
scraper = ScrapeAttribution(
property_id='123456789',
goal_id='90',
lookback_days=30,
sample_size=10000,
credentials_path='path/to/service-account-key.json'
)
results = scraper.scrape()
print(results.head())
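Because the library scrapes one goal at a time, combining results across goals is left to you. Since scrape() returns a DataFrame (per the .head() call above), one way to stack per-goal results with pandas — the helper and the goal_id column are our own, not part of the package:

```python
import pandas as pd


def combine_goal_results(frames, goal_ids):
    """Tag each per-goal result frame with its goal ID, then stack them into one frame."""
    tagged = [frame.assign(goal_id=gid) for frame, gid in zip(frames, goal_ids)]
    return pd.concat(tagged, ignore_index=True)
```

Run one ScrapeAttribution per goal ID, collect the returned frames in a list, and pass them here along with the matching IDs so each row stays attributable to its goal.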