How to Build a Price Monitoring Bot in Python (2026 Guide)

May 04, 2026 · 18 min read


By the end of this guide, you'll have a working price monitoring bot that scrapes product prices, stores them in a local database, detects changes, and sends you a Slack alert when a price drops. We'll build every piece in Python, from the scraper to the scheduler.

Tracking prices manually is tedious and unreliable. You check Amazon, forget for a week, and miss a 30% price drop. Or you're running an e-commerce store and a competitor undercuts you on Tuesday, but you don't notice until Friday. A bot that runs on a schedule, checks prices automatically, and pings you when something changes solves both problems. And you can build one in about 150 lines of Python.

We're using books.toscrape.com as our demo site so you can run every snippet without getting blocked. The patterns are identical for Amazon, eBay, AliExpress, Shopify stores, or any other e-commerce site. Replace the URL and selectors with your real target.

What you'll build in this guide:

  • A product scraper that extracts titles, prices, and URLs from any e-commerce page
  • A SQLite database that stores price history with timestamps
  • A diff engine that detects price changes above a configurable threshold
  • A Slack notification system for instant price drop alerts
  • An email alerter as a backup channel
  • A cron schedule and a GitHub Actions workflow to run the bot automatically
  • An upgrade path to Browserbeam for JavaScript-heavy stores with anti-bot protection

TL;DR: Build a Python price monitoring bot in five steps: scrape product data with Requests + BeautifulSoup, store price history in SQLite, detect changes with a threshold-based diff, send Slack or email alerts, and schedule everything with cron or GitHub Actions. All code runs against real URLs. For stores that block scrapers or render prices with JavaScript, swap in Browserbeam's Python SDK with two lines changed.


What Is a Price Monitoring Bot?

A price monitoring bot is a script that visits product pages on a schedule, extracts current prices, compares them to previous prices, and notifies you when something changes. It replaces the manual routine of opening tabs, scanning price tags, and copying numbers into a spreadsheet.

Manual vs. Automated Price Tracking

|               | Manual                                 | Automated Bot                          |
|---------------|----------------------------------------|----------------------------------------|
| Frequency     | When you remember                      | Every hour, every day, your choice     |
| Coverage      | 5-10 products before fatigue           | Hundreds or thousands of SKUs          |
| Reaction time | Hours to days                          | Minutes                                |
| History       | None unless you maintain a spreadsheet | Every price point stored automatically |
| Cost          | Your time                              | A few cents of compute per run         |

Common Use Cases

Price monitoring bots are useful across different contexts:

  • Amazon wishlist tracking: Watch products you want and buy when the price drops
  • Competitor price monitoring: Know when a competitor changes prices so you can react the same day
  • E-commerce repricing: Adjust your own prices automatically based on market changes
  • MAP enforcement: Brands monitoring minimum advertised price violations by retailers
  • Dropshipping margin protection: Track supplier price changes that eat into your margins
  • Deal hunting: Monitor product pages and get notified when items go on sale

Why Build Your Own (vs Buy)?

Price monitoring software like Prisync, Price2Spy, and Keepa exists. These tools work well for teams that need a dashboard, competitor mapping, and enterprise features. But they come with tradeoffs.

Reasons to build your own bot:

  • You control the logic. Custom thresholds, custom alert channels, custom storage. No fighting a vendor's opinionated UI.
  • It costs almost nothing. A cron job on a $5 VPS or a free GitHub Actions workflow. Commercial tools charge $50-500/month.
  • You learn how it works. When a selector breaks or a site changes layout, you know exactly where to look.
  • You can integrate anywhere. Pipe data into your own database, Slack, Telegram, a custom dashboard, or a repricing engine.

Reasons to buy instead:

  • You need to monitor 10,000+ SKUs across 50 competitors with a team dashboard
  • You need pre-built competitor mapping and category matching
  • You don't have a developer available to maintain scrapers

For most developers and small teams, a custom bot is the better starting point. You can always migrate to a paid tool later if your needs outgrow the script.


How a Price Monitoring Bot Works

Every price monitoring bot has four components. The scraper fetches pages and extracts prices. The storage layer saves each price with a timestamp. The diff engine compares today's price to yesterday's and flags changes. The alerter sends a notification when the change matters.

Architecture Overview

Scheduler (cron / GitHub Actions)
    ↓ triggers run
Scraper: fetch page, extract prices
    ↓ product data
SQLite: store price + timestamp
    ↓ current vs. previous
Diff Engine: detect changes
    ↓ change above threshold?
    ├─ yes → Alert (Slack / Email)
    └─ no  → no action

Each run takes a few seconds. The bot scrapes the target page, saves every price it finds, compares each price to the last known value, and sends an alert only if the change crosses your threshold (for example, more than 1% or more than $0.50).

Component Choices

| Component | Our Choice                     | Alternatives                                              |
|-----------|--------------------------------|-----------------------------------------------------------|
| Scraper   | Requests + BeautifulSoup       | Browserbeam (for JS-rendered sites), Scrapy, Playwright   |
| Storage   | SQLite                         | PostgreSQL, CSV files, JSON files                         |
| Diff      | Python function with threshold | pandas comparison, deepdiff library                       |
| Alerter   | Slack webhook                  | Email (SMTP), Telegram bot, Discord webhook               |
| Scheduler | cron / GitHub Actions          | APScheduler, Celery Beat, cloud schedulers                |

We're picking the simplest tool at each layer. You can swap any component later without changing the others.


Setting Up the Project

Create a Virtual Environment

mkdir price-monitor && cd price-monitor
python3 -m venv venv
source venv/bin/activate

Install Dependencies

pip install requests beautifulsoup4 lxml

That's it for the core scraper. We'll add the schedule library later if you want an in-process scheduler, but cron and GitHub Actions handle scheduling without extra dependencies.

Project Layout

price-monitor/
  scraper.py          # Fetch and parse product data
  storage.py          # SQLite helpers
  diff.py             # Price change detection
  alerter.py          # Slack and email notifications
  monitor.py          # Main entry point (wires everything together)
  prices.db           # SQLite database (created automatically)

Each file is a single module. The monitor.py script imports the others and runs one monitoring cycle.


Step 1: Scrape Product Data

Let's start with the scraper. We'll fetch a product listing page and extract titles, prices, and URLs.

Fetching and Parsing HTML

import requests
from bs4 import BeautifulSoup
from urllib.parse import urljoin
import re

BASE_URL = "https://books.toscrape.com"

def parse_price(text):
    match = re.search(r"[\d,]+\.?\d*", text)
    return float(match.group().replace(",", "")) if match else 0.0

def scrape_products(url=BASE_URL):
    response = requests.get(url, headers={
        "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36"
    })
    response.raise_for_status()

    soup = BeautifulSoup(response.content, "lxml")
    products = []

    for article in soup.select("article.product_pod"):
        title = article.select_one("h3 a")["title"]
        price_text = article.select_one(".price_color").get_text(strip=True)
        price = parse_price(price_text)
        relative_url = article.select_one("h3 a")["href"]
        product_url = urljoin(url, relative_url)

        products.append({
            "title": title,
            "price": price,
            "currency": "GBP",
            "url": product_url,
        })

    return products

Notice two details. First, we pass response.content (raw bytes) to BeautifulSoup instead of response.text. This lets the parser auto-detect the page encoding, which avoids garbled currency symbols when the server's Content-Type header misreports the charset. Second, the parse_price function extracts the numeric part with a regex, so it works regardless of the currency symbol (£, $, €, or something else).

Run this and you'll get a list of dictionaries, one per book, with a numeric price and a full URL.

Pro tip: Always use urljoin for relative URLs. Concatenating strings with / breaks when the base URL has a trailing slash or the relative path starts with ../. The urljoin function from Python's standard library handles every edge case.
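A quick demonstration of the edge cases urljoin handles (the URLs mirror the demo site's structure):

```python
from urllib.parse import urljoin

# A ../ segment resolves against the page's directory, not the site root
print(urljoin("https://books.toscrape.com/catalogue/page-2.html", "../index.html"))
# https://books.toscrape.com/index.html

# Trailing slash on the base is handled correctly too
print(urljoin("https://books.toscrape.com/", "catalogue/page-2.html"))
# https://books.toscrape.com/catalogue/page-2.html
```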

Handling Multiple Pages

Most product listings span multiple pages. Here's how to scrape all of them:

def scrape_all_pages(start_url=BASE_URL):
    all_products = []
    url = start_url

    while url:
        response = requests.get(url, headers={
            "User-Agent": "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36"
        })
        response.raise_for_status()
        soup = BeautifulSoup(response.content, "lxml")

        for article in soup.select("article.product_pod"):
            title = article.select_one("h3 a")["title"]
            price_text = article.select_one(".price_color").get_text(strip=True)
            price = parse_price(price_text)
            relative_url = article.select_one("h3 a")["href"]
            product_url = urljoin(url, relative_url)

            all_products.append({
                "title": title,
                "price": price,
                "currency": "GBP",
                "url": product_url,
            })

        next_btn = soup.select_one("li.next a")
        if next_btn:
            url = urljoin(url, next_btn["href"])
        else:
            url = None

    return all_products

This follows the "next" link until there are no more pages. On books.toscrape.com, that's 50 pages and 1,000 books.


Step 2: Store Price History

Storing only the current price is a common mistake. Without history, you can't tell if a price went up and came back down, or how long a sale lasted. SQLite gives us a local database with zero setup.

Database Schema

import sqlite3
from datetime import datetime, timezone

DB_PATH = "prices.db"

def init_db(db_path=DB_PATH):
    conn = sqlite3.connect(db_path)
    conn.execute("""
        CREATE TABLE IF NOT EXISTS price_history (
            id INTEGER PRIMARY KEY AUTOINCREMENT,
            url TEXT NOT NULL,
            title TEXT NOT NULL,
            price REAL NOT NULL,
            currency TEXT NOT NULL DEFAULT 'GBP',
            scraped_at TEXT NOT NULL
        )
    """)
    conn.execute("""
        CREATE INDEX IF NOT EXISTS idx_url_scraped
        ON price_history (url, scraped_at DESC)
    """)
    conn.commit()
    conn.close()

Saving Prices

def save_prices(products, db_path=DB_PATH):
    conn = sqlite3.connect(db_path)
    now = datetime.now(timezone.utc).isoformat()
    rows = [(p["url"], p["title"], p["price"], p["currency"], now) for p in products]
    conn.executemany(
        "INSERT INTO price_history (url, title, price, currency, scraped_at) VALUES (?, ?, ?, ?, ?)",
        rows,
    )
    conn.commit()
    conn.close()

Querying Price History

def get_latest_price(url, db_path=DB_PATH):
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT price, scraped_at FROM price_history WHERE url = ? ORDER BY scraped_at DESC LIMIT 1",
        (url,),
    ).fetchone()
    conn.close()
    return row

def get_previous_price(url, db_path=DB_PATH):
    conn = sqlite3.connect(db_path)
    row = conn.execute(
        "SELECT price, scraped_at FROM price_history WHERE url = ? ORDER BY scraped_at DESC LIMIT 1 OFFSET 1",
        (url,),
    ).fetchone()
    conn.close()
    return row

def get_price_history(url, limit=30, db_path=DB_PATH):
    conn = sqlite3.connect(db_path)
    rows = conn.execute(
        "SELECT price, scraped_at FROM price_history WHERE url = ? ORDER BY scraped_at DESC LIMIT ?",
        (url, limit),
    ).fetchall()
    conn.close()
    return rows

Pro tip: get_latest_price returns the most recent stored price, which is the right comparison baseline when you diff before saving the new prices (the order our monitor uses). The OFFSET 1 in get_previous_price skips the newest row instead, which is what you want if you save first and compare afterwards.
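As a quick sanity check, here's a self-contained round trip with synthetic data, using sqlite3 directly with the same schema and queries so it runs without the storage module (the URL and prices are made up):

```python
import sqlite3

# In-memory database with the same schema as storage.py
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE price_history (
        id INTEGER PRIMARY KEY AUTOINCREMENT,
        url TEXT NOT NULL, title TEXT NOT NULL,
        price REAL NOT NULL, currency TEXT NOT NULL DEFAULT 'GBP',
        scraped_at TEXT NOT NULL
    )
""")

# Two runs for the same product: £51.77 yesterday, £47.82 today
conn.executemany(
    "INSERT INTO price_history (url, title, price, currency, scraped_at) VALUES (?, ?, ?, ?, ?)",
    [
        ("https://example.com/book", "A Light in the Attic", 51.77, "GBP", "2026-05-03T00:00:00"),
        ("https://example.com/book", "A Light in the Attic", 47.82, "GBP", "2026-05-04T00:00:00"),
    ],
)

latest = conn.execute(
    "SELECT price FROM price_history WHERE url = ? ORDER BY scraped_at DESC LIMIT 1",
    ("https://example.com/book",),
).fetchone()
previous = conn.execute(
    "SELECT price FROM price_history WHERE url = ? ORDER BY scraped_at DESC LIMIT 1 OFFSET 1",
    ("https://example.com/book",),
).fetchone()
print(latest[0], previous[0])  # 47.82 51.77
```

ISO 8601 timestamps sort correctly as plain strings, which is why ORDER BY scraped_at DESC works without a date type.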


Step 3: Detect Price Changes

The diff engine compares the current price to the previous one and decides whether the change is worth alerting on. Small fluctuations (rounding differences, currency conversion noise) should not trigger alerts.

Threshold-Based Detection

def detect_changes(products, db_path=DB_PATH, min_pct=1.0, min_abs=0.50):
    changes = []

    for product in products:
        # Baseline is the most recent stored price. The monitor diffs before
        # saving, so the newest row in the DB is the previous run's price.
        prev = get_latest_price(product["url"], db_path)
        if prev is None:
            continue

        old_price, old_date = prev
        new_price = product["price"]

        if old_price == 0:
            continue

        diff = new_price - old_price
        pct_change = (diff / old_price) * 100

        if abs(pct_change) >= min_pct or abs(diff) >= min_abs:
            changes.append({
                "title": product["title"],
                "url": product["url"],
                "old_price": old_price,
                "new_price": new_price,
                "diff": round(diff, 2),
                "pct_change": round(pct_change, 2),
                "direction": "dropped" if diff < 0 else "increased",
            })

    return changes

The function takes two threshold parameters: min_pct (minimum percentage change) and min_abs (minimum absolute change in the price's currency). A change triggers an alert only if it exceeds at least one of these thresholds. This filters out noise like a book going from £51.77 to £51.78.

Example Output

When a price changes, the function returns something like:

[
  {
    "title": "A Light in the Attic",
    "url": "https://books.toscrape.com/catalogue/a-light-in-the-attic_1000/index.html",
    "old_price": 51.77,
    "new_price": 47.82,
    "diff": -3.95,
    "pct_change": -7.63,
    "direction": "dropped"
  }
]

Step 4: Send Alerts

When the diff engine finds changes worth reporting, we need to send a notification. Slack is the simplest option for developer-focused alerting. Email works as a fallback.

Slack Webhook

Create an incoming webhook in your Slack workspace. You'll get a URL like https://hooks.slack.com/services/T00/B00/xxx.

import os
import requests as http_client

# Read the webhook from the environment so the secret stays out of the code
SLACK_WEBHOOK_URL = os.environ.get(
    "SLACK_WEBHOOK_URL",
    "https://hooks.slack.com/services/YOUR/WEBHOOK/URL",
)

def alert_slack(changes, webhook_url=SLACK_WEBHOOK_URL):
    if not changes:
        return

    lines = ["*Price changes detected:*\n"]
    for c in changes:
        emoji = ":chart_with_downwards_trend:" if c["direction"] == "dropped" else ":chart_with_upwards_trend:"
        lines.append(
            f'{emoji} *{c["title"]}*\n'
            f'  {c["old_price"]:.2f} → {c["new_price"]:.2f} ({c["pct_change"]:+.1f}%)\n'
            f'  <{c["url"]}|View product>'
        )

    payload = {"text": "\n".join(lines)}
    resp = http_client.post(webhook_url, json=payload)
    resp.raise_for_status()

Email via SMTP

If your team prefers email, or you want a backup channel:

import smtplib
from email.mime.text import MIMEText

def alert_email(changes, to_addr, from_addr, smtp_host="smtp.gmail.com", smtp_port=587, password=""):
    if not changes:
        return

    lines = ["Price changes detected:\n"]
    for c in changes:
        arrow = "↓" if c["direction"] == "dropped" else "↑"
        lines.append(
            f'{arrow} {c["title"]}: {c["old_price"]:.2f} → {c["new_price"]:.2f} ({c["pct_change"]:+.1f}%)'
        )

    body = "\n".join(lines)
    msg = MIMEText(body)
    msg["Subject"] = f"Price Monitor: {len(changes)} change(s) detected"
    msg["From"] = from_addr
    msg["To"] = to_addr

    with smtplib.SMTP(smtp_host, smtp_port) as server:
        server.starttls()
        server.login(from_addr, password)
        server.send_message(msg)

Pro tip: For Gmail, use an App Password instead of your real password. Two-factor authentication blocks regular password logins for SMTP.

Daily Digest vs. Instant Alerts

You don't always want an alert for every change. Two approaches:

| Mode         | When to use                                                                      | Implementation                                            |
|--------------|----------------------------------------------------------------------------------|-----------------------------------------------------------|
| Instant      | Price drops you want to catch immediately (deal hunting, competitor undercutting) | Alert inside the monitoring loop, per run                 |
| Daily digest | Routine monitoring where a summary is enough                                      | Collect changes into a file, send one alert at end of day |

For this guide, we use instant alerts. To build a digest, write changes to a JSON file during each run, then have a separate cron job at 9am that reads the file, sends one combined alert, and clears the file.
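A minimal digest sketch, assuming a pending_changes.json buffer file in the working directory (the filename and helper names are illustrative, not part of the modules above):

```python
import json
import os

PENDING_FILE = "pending_changes.json"  # hypothetical buffer file

def buffer_changes(changes):
    """Append this run's changes to the pending file instead of alerting."""
    pending = []
    if os.path.exists(PENDING_FILE):
        with open(PENDING_FILE) as f:
            pending = json.load(f)
    pending.extend(changes)
    with open(PENDING_FILE, "w") as f:
        json.dump(pending, f)

def flush_digest(send):
    """Run from a separate 9am cron job: send one combined alert, then clear."""
    if not os.path.exists(PENDING_FILE):
        return 0
    with open(PENDING_FILE) as f:
        pending = json.load(f)
    if pending:
        send(pending)  # e.g. pass alert_slack here
    os.remove(PENDING_FILE)
    return len(pending)
```

The monitoring loop calls buffer_changes instead of alert_slack; a second cron entry runs flush_digest once a day.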


Step 5: Schedule the Bot

A price monitor that only runs when you remember to run it is barely better than checking prices manually. Let's automate it.

The Main Script

First, wire everything together in monitor.py:

from scraper import scrape_products
from storage import init_db, save_prices
from diff import detect_changes
from alerter import alert_slack

def run_monitor():
    print("Starting price monitor...")
    init_db()

    products = scrape_products()
    print(f"Scraped {len(products)} products")

    changes = detect_changes(products)
    if changes:
        print(f"Found {len(changes)} price change(s)")
        alert_slack(changes)
    else:
        print("No price changes detected")

    save_prices(products)
    print("Prices saved. Done.")

if __name__ == "__main__":
    run_monitor()

Notice the order: detect changes before saving the new prices. If you save first, the new price becomes the "previous" price and the diff engine has nothing to compare against.

Option 1: Cron (Linux/macOS)

Add a cron job that runs every 6 hours:

crontab -e

Add this line:

0 */6 * * * cd /home/you/price-monitor && /home/you/price-monitor/venv/bin/python monitor.py >> /home/you/price-monitor/monitor.log 2>&1

Pro tip: Always use the full path to the Python binary inside your virtual environment. Cron jobs don't load your shell profile, so python or python3 might point to the system Python that doesn't have your packages installed.

Option 2: GitHub Actions (Free)

For a free, hosted solution, use GitHub Actions. Push your code to a repository and add this workflow:

name: Price Monitor
on:
  schedule:
    - cron: "0 */6 * * *"
  workflow_dispatch:

jobs:
  monitor:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - run: pip install requests beautifulsoup4 lxml
      - run: python monitor.py
        env:
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}

The workflow_dispatch trigger lets you run it manually from the GitHub UI for testing.

One caveat with GitHub Actions: the SQLite database file won't persist between runs unless you commit it back to the repo or use an artifact. For a persistent database, either use a hosted database (PostgreSQL on Railway, Supabase, or Neon) or run the bot on a VPS where the file stays on disk.


Handling Anti-Bot Defenses on Real Stores

books.toscrape.com is designed for scraping practice. Real stores like Amazon, eBay, Shopify shops, and AliExpress have defenses that block simple HTTP requests.

What Blocks Your Bot

| Defense              | What happens                               | Frequency               |
|----------------------|--------------------------------------------|-------------------------|
| Rate limiting        | 429 status code after too many requests    | Very common             |
| User-Agent checks    | 403 if the UA looks like a bot             | Common                  |
| JavaScript rendering | Prices load via JS, invisible to Requests  | Common on modern stores |
| CAPTCHAs             | Human verification challenge               | Amazon, Walmart, others |
| IP blocking          | Your IP gets banned after repeated requests | After sustained scraping |

Quick Fixes

Rotate User-Agents: Don't send the same UA on every request.

import random

USER_AGENTS = [
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/125.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.5 Safari/605.1.15",
]

def get_headers():
    return {"User-Agent": random.choice(USER_AGENTS)}

Add delays between requests: Don't hammer the server.

import time

def polite_scrape(urls, delay=2.0):
    results = []
    for url in urls:
        response = requests.get(url, headers=get_headers())
        results.append(response)
        time.sleep(delay)
    return results

When to Use Browserbeam

When a store renders prices with JavaScript (React, Vue, Angular frontends) or deploys advanced anti-bot measures, Requests + BeautifulSoup won't work. Prices show up as empty elements because the HTML arrives before the JavaScript executes.

Browserbeam runs a real browser in the cloud, renders the JavaScript, and returns structured data. Here's the same price extraction using the Python SDK:

from browserbeam import Browserbeam

client = Browserbeam(api_key="YOUR_API_KEY")

session = client.sessions.create(
    url="https://books.toscrape.com",
    proxy="residential",
)

result = session.extract(
    products=[{
        "_parent": "article.product_pod",
        "title": "h3 a >> text",
        "price": ".price_color >> text",
    }]
)

for product in result.extraction["products"]:
    print(f'{product["title"]}: {product["price"]}')

session.close()

Two lines change from the Requests version: the session creation (which handles the browser, proxy, and anti-bot) and the extraction (which uses CSS selectors in a schema instead of manual parsing). The rest of your pipeline (storage, diff, alerter) stays exactly the same.

Don't have an API key yet? Create a free Browserbeam account. You get 5,000 credits, no credit card required.

A Note on robots.txt and Terms of Service

Before monitoring prices on any site, check the site's robots.txt file and terms of service. Many sites allow automated access to public product pages but restrict the rate or prohibit commercial resale of the data. This guide is for educational purposes. Respect the sites you scrape: keep your request rate reasonable, don't overload servers, and follow the site's posted rules.
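You can automate the robots.txt check with the standard library's urllib.robotparser. A small sketch that parses rules locally (the sample rules here are made up for illustration):

```python
from urllib.robotparser import RobotFileParser

def allowed(robots_txt: str, user_agent: str, url: str) -> bool:
    """Return True if the given robots.txt text permits user_agent to fetch url."""
    parser = RobotFileParser()
    parser.parse(robots_txt.splitlines())
    return parser.can_fetch(user_agent, url)

# Hypothetical rules: everything under /admin/ is off limits
rules = """\
User-agent: *
Disallow: /admin/
"""
print(allowed(rules, "price-monitor", "https://example.com/catalogue/page-1.html"))  # True
print(allowed(rules, "price-monitor", "https://example.com/admin/settings"))         # False
```

In production you'd point the parser at the live file with parser.set_url("https://example.com/robots.txt") followed by parser.read().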


Real-World Patterns

The five-step bot we built works for basic price tracking. Here are patterns for more advanced scenarios.

Pattern 1: Multi-Site Price Comparison

Compare the same product across multiple stores to find the best deal:

TARGETS = [
    {
        "name": "Store A",
        "url": "https://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html",
        "price_selector": ".price_color",
    },
    {
        "name": "Store B",
        "url": "https://books.toscrape.com/catalogue/sapiens-a-brief-history-of-humankind_996/index.html",
        "price_selector": ".price_color",
    },
]

def compare_prices(targets):
    results = []
    for target in targets:
        resp = requests.get(target["url"], headers=get_headers())
        soup = BeautifulSoup(resp.content, "lxml")
        price_text = soup.select_one(target["price_selector"]).get_text(strip=True)
        price = parse_price(price_text)
        results.append({"store": target["name"], "price": price, "url": target["url"]})

    results.sort(key=lambda x: x["price"])
    return results

In production, you'd point each target at a different store selling the same product. The bot sorts by price and alerts you when the cheapest option changes.

Pattern 2: Stock + Price Combo Tracking

Sometimes a product's price doesn't change, but it goes out of stock. Track both:

def scrape_with_stock(url=BASE_URL):
    response = requests.get(url, headers=get_headers())
    response.raise_for_status()
    soup = BeautifulSoup(response.content, "lxml")
    products = []

    for article in soup.select("article.product_pod"):
        title = article.select_one("h3 a")["title"]
        price_text = article.select_one(".price_color").get_text(strip=True)
        price = parse_price(price_text)
        stock_text = article.select_one(".instock.availability")
        in_stock = stock_text is not None and "In stock" in stock_text.get_text()

        products.append({
            "title": title,
            "price": price,
            "currency": "GBP",
            "url": urljoin(url, article.select_one("h3 a")["href"]),
            "in_stock": in_stock,
        })

    return products

Add an in_stock boolean column to your SQLite table and alert when a product goes from in-stock to out-of-stock (or vice versa). This is especially useful for limited-edition items or products with unpredictable restocks.
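Transition detection is a one-line comparison once you have the previous run's value. A sketch (detect_stock_changes is a new helper, not part of the modules above; in practice you'd load previous_stock from SQLite):

```python
def detect_stock_changes(products, previous_stock):
    """Flag products whose in_stock flag flipped since the last run.

    previous_stock maps url -> bool from the prior run.
    """
    changes = []
    for p in products:
        prev = previous_stock.get(p["url"])
        if prev is not None and prev != p["in_stock"]:
            changes.append({
                "title": p["title"],
                "url": p["url"],
                "event": "back_in_stock" if p["in_stock"] else "out_of_stock",
            })
    return changes

# Synthetic example: product A just sold out, product B is unchanged
previous_stock = {"u1": True, "u2": True}
current = [
    {"title": "A", "url": "u1", "in_stock": False},
    {"title": "B", "url": "u2", "in_stock": True},
]
print(detect_stock_changes(current, previous_stock))
# [{'title': 'A', 'url': 'u1', 'event': 'out_of_stock'}]
```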

Pattern 3: Historical Price Chart

After collecting price data for a few weeks, visualize trends (this needs one extra dependency: pip install matplotlib):

import matplotlib.pyplot as plt
from storage import get_price_history

def plot_price_history(url, title="Price History"):
    rows = get_price_history(url, limit=90)
    if not rows:
        print("No data to plot")
        return

    prices = [r[0] for r in reversed(rows)]
    dates = [r[1][:10] for r in reversed(rows)]

    plt.figure(figsize=(10, 4))
    plt.plot(dates, prices, marker="o", markersize=3)
    plt.title(title)
    plt.xlabel("Date")
    plt.ylabel("Price")
    plt.xticks(rotation=45)
    plt.tight_layout()
    plt.savefig("price_history.png", dpi=150)
    plt.close()
    print("Chart saved to price_history.png")

This is the same idea behind tools like camelcamelcamel, which tracks Amazon price history. After a few weeks of data, you'll see patterns: some products have predictable sale cycles, others hold steady for months then drop.


Common Mistakes When Building Price Monitors

1. Parsing Prices as Strings

Storing "£51.77" instead of 51.77 makes comparison impossible. Always extract the numeric part and convert it to a float before saving. Handle commas used as thousands separators ("1,299.99" becomes 1299.99). A regex is more reliable than chaining .replace() calls, because sooner or later you'll hit a currency symbol you forgot to strip.

import re

def parse_price(text):
    match = re.search(r"[\d,]+\.?\d*", text)
    return float(match.group().replace(",", "")) if match else 0.0

2. Ignoring Currency

If you monitor products across different countries, a "price drop" might just be a currency conversion artifact. Store the currency alongside the price and only compare prices within the same currency.
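A simple guard is to skip any comparison where the currencies differ. A sketch you could plug into the diff loop before computing pct_change:

```python
def comparable(old, new):
    """Only compare prices recorded in the same currency."""
    return old["currency"] == new["currency"]

old = {"price": 51.77, "currency": "GBP"}
new = {"price": 55.40, "currency": "USD"}
print(comparable(old, new))  # False: a conversion artifact, not a price change
```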

3. Polling Too Aggressively

Checking prices every 5 minutes is unnecessary for most products and will get your IP blocked. For most e-commerce monitoring, every 4-6 hours is enough. Daily checks work for products that rarely change. Only poll more frequently during known sale events (Black Friday, Prime Day).

4. No De-duplication of Alerts

If a price drops and stays low, a naive bot alerts on every run because it keeps comparing the current price against a stale baseline (the original pre-drop price, or the historical minimum). The fix: always compare against the price from the previous run. A price that drops once then triggers exactly one alert.

5. Storing Only the Latest Price

Without history, you can't answer "what was the price last month?" or "is this the lowest price ever?". Always append new rows instead of updating existing ones. Disk is cheap. A year of hourly checks on 1,000 products is about 8.7 million rows, which SQLite handles without breaking a sweat.


Build vs Buy: When Off-the-Shelf Wins

A custom bot is great for developers. But if you're evaluating options for a team or a business with thousands of SKUs, here's how the options compare.

Price Monitoring Tool Comparison

| Feature         | Custom Bot (this guide)  | Keepa                     | Prisync          | Price2Spy        |
|-----------------|--------------------------|---------------------------|------------------|------------------|
| Price           | Free (your compute)      | Free tier + $19/mo        | From $99/mo      | From $24/mo      |
| Setup time      | 1-2 hours                | Minutes                   | Minutes          | Minutes          |
| Amazon support  | You build it             | Built-in                  | Built-in         | Built-in         |
| Custom sites    | Any site                 | Amazon only               | E-commerce focus | Any site         |
| Alert channels  | Anything you code        | Email, browser extension  | Email, dashboard | Email, dashboard |
| Historical data | Unlimited (your storage) | Up to 10 years for Amazon | Plan-dependent   | Plan-dependent   |
| Customization   | Full control             | Limited                   | Limited          | Moderate         |
| Maintenance     | You fix broken selectors | Managed                   | Managed          | Managed          |

Decision Matrix

Build your own when:

  • You need to monitor non-standard sites (internal tools, niche stores, sites behind login)
  • You want full control over storage, alerting, and integration
  • You have a developer who can maintain the scrapers
  • Budget is tight and you're tracking fewer than 500 products

Buy a tool when:

  • You need to monitor 5,000+ Amazon ASINs and want it done today
  • You need a team dashboard with role-based access
  • You don't have developer time to maintain scrapers
  • You need built-in competitor mapping and category matching

A hybrid approach also works: use a paid tool for Amazon (where anti-bot protection is strongest) and a custom bot for smaller competitors and niche sites where commercial tools don't have pre-built support.


Frequently Asked Questions

How often should a price monitoring bot run?

For most products, every 4-6 hours catches meaningful changes without overloading the target site. During major sales events (Black Friday, Prime Day), increase to every 1-2 hours. For slow-moving categories like books or furniture, once daily is enough. Match your polling frequency to how fast prices actually change on your target.

Can I scrape Amazon prices legally?

This is not legal advice. Amazon's Terms of Service prohibit automated access, but enforcement varies. Many price tracking tools (Keepa, camelcamelcamel) scrape Amazon at scale. If you're tracking a small number of products for personal use, the practical risk is low. For commercial use, consider using Amazon's Product Advertising API or a dedicated tool like Keepa that handles compliance. Check the web scraping with Python guide for more on ethical scraping practices.

What if the site renders prices with JavaScript?

Requests + BeautifulSoup only see the raw HTML before JavaScript runs. If prices load dynamically (React, Vue, Angular stores), you need a tool that runs a real browser. Browserbeam handles this with a cloud browser that renders the page fully before extracting data. Alternatively, Playwright or Selenium can run a local browser, but you'll manage the browser binary, memory, and crash recovery yourself.

How do I handle price monitoring for Amazon specifically?

Amazon uses aggressive anti-bot protection: CAPTCHAs, IP rate limiting, and JavaScript-rendered content. For reliable Amazon price tracking, use residential proxies and a real browser. The how to scrape Amazon guide covers the full approach with Browserbeam. For a simpler path, Keepa's browser extension and API track Amazon prices without any scraping code.

How many products can a SQLite-based bot handle?

SQLite handles millions of rows comfortably. A year of 6-hourly checks on 10,000 products produces about 14.6 million rows, well within SQLite's capacity. The bottleneck is scraping speed, not storage. If you're scraping 10,000 product pages, each taking 1-2 seconds, a full run takes 3-5 hours. At that point, consider parallelizing with threading or async HTTP, or upgrading to Browserbeam's concurrent sessions.
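A threading sketch for the parallel run, with the fetch function injectable so the same helper works with requests or any other client (fetch_all and the worker count are illustrative):

```python
from concurrent.futures import ThreadPoolExecutor

def fetch_all(urls, fetch, max_workers=8):
    """Fetch many URLs concurrently; results come back in input order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(fetch, urls))

# With a real fetcher this might look like:
#   pages = fetch_all(product_urls, lambda u: requests.get(u, headers=get_headers()))
```

Keep max_workers modest (and combine with per-request delays) so parallelism doesn't turn into the aggressive polling warned about earlier.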

Can I build a price monitoring dashboard?

Yes. Once your data is in SQLite (or PostgreSQL), any web framework can display it. Flask or Django for Python, Next.js or Express for JavaScript. The historical price data from get_price_history powers a simple line chart. Matplotlib works for static charts (we showed this in the Real-World Patterns section), and Chart.js or Recharts work for interactive web dashboards.

Does this work for Shopify stores?

Shopify stores often expose product data at yourstore.com/products.json, which returns structured JSON without any HTML parsing needed. Check if this endpoint exists on your target store. If it does, skip BeautifulSoup entirely and parse the JSON directly. If the store has disabled the JSON endpoint, fall back to HTML scraping with the selectors from this guide. The how to scrape Shopify stores guide covers both approaches in detail.
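The JSON route can look like this sketch. The payload shape below (a products list whose variants carry price strings) matches the common Shopify pattern, but verify it against your target store; flatten_shopify is a hypothetical helper:

```python
def flatten_shopify(payload):
    """Turn a Shopify-style products.json payload into flat product dicts."""
    products = []
    for p in payload.get("products", []):
        for v in p.get("variants", []):
            products.append({
                "title": f'{p["title"]} - {v["title"]}',
                "price": float(v["price"]),  # prices are serialized as strings
                "url": f'/products/{p["handle"]}',
            })
    return products

# Synthetic payload for illustration; a real one would come from
# requests.get("https://yourstore.com/products.json").json()
sample = {"products": [{"title": "Mug", "handle": "mug",
                        "variants": [{"title": "Blue", "price": "12.50"}]}]}
print(flatten_shopify(sample))
# [{'title': 'Mug - Blue', 'price': 12.5, 'url': '/products/mug'}]
```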

How is a price monitoring bot different from a price monitoring API?

A price monitoring bot is a script you run yourself. It does the scraping, storage, and alerting. A price monitoring API is a hosted service that does the scraping for you and returns structured data via an API. Browserbeam sits in between: it handles the browser and anti-bot challenges, but you control the extraction schema, storage, and alerting logic. Check our structured web scraping guide for how schema-based extraction works.


Start Building Your Price Monitor

You now have a complete price monitoring bot: a scraper that extracts product data, a SQLite database that stores price history, a diff engine that detects changes, alerters for Slack and email, and two scheduling options. Every piece is a standalone Python module you can swap or extend.

Start with the basic version against books.toscrape.com to make sure everything works end to end. Then point it at a real target: a product you've been watching on Amazon, a competitor on eBay or AliExpress, or a set of Shopify stores you want to compare.

For JavaScript-heavy sites that block simple HTTP requests, add Browserbeam's Python SDK as the scraper layer. The rest of the pipeline stays the same. For a more advanced version that uses GPT to analyze pricing trends and generate summaries, check the competitive intelligence agent guide.

What price will you track first?
