Why SERP Analysis Matters
Keyword metrics tell you the potential. SERP analysis tells you the reality.
A keyword might have 50,000 monthly searches, but if the first page is dominated by Wikipedia, major news sites, and a Featured Snippet, your chances of ranking look very different than if it's full of thin content and forum posts.
Google's search results have also changed dramatically. AI Overviews now appear for many queries. Discussion Boxes highlight Reddit and Quora threads. These features can push traditional organic results down the page - or create new opportunities if you understand how to leverage them.
In this tutorial, we'll build a SERP analysis script that:
- Detects which SERP features appear for a keyword
- Returns all organic results with their positions
- Identifies where discussions (Reddit, Quora, forums) rank
- Tracks changes over time with historical snapshots
SERP Features You Need to Know
Before we write code, let's understand what we're looking for. Google displays different features depending on the query:
AI Overview
AI-generated summary at the top. Appears for informational queries.
Discussion Box
Curated forum/Reddit threads. Major opportunity for visibility.
Featured Snippet
Direct answer box. Position zero - above organic results.
Local Pack
Map with local businesses. Appears for "near me" and local queries.
Video Results
YouTube thumbnails in results. Common for how-to queries.
People Also Ask
Expandable related questions. Good for content ideas.
Knowledge Panel
Entity information box on the right. For brands, people, places.
Image Pack
Row of images in results. Common for product and visual queries.
The Discussion Box Opportunity
Google's Discussion Box is relatively new and represents a significant shift. For commercial keywords like "best CRM software," Reddit threads now appear above traditional review sites. If discussions are ranking for your keywords, that's intelligence you can act on.
Your First SERP Analysis
Let's start with a simple script that analyzes the SERP for a single keyword:
```python
import requests

def analyze_serp(keyword, api_key):
    """
    Analyze the Google SERP for a keyword.

    Returns: SERP features, organic results, and discussion positions
    """
    url = "https://reddit-traffic-and-intelligence-api.p.rapidapi.com/api/v2/serp-analysis"
    headers = {
        "Content-Type": "application/json",
        "x-rapidapi-host": "reddit-traffic-and-intelligence-api.p.rapidapi.com",
        "x-rapidapi-key": api_key
    }
    payload = {
        "keyword": keyword,
        "include_features": True,
        "max_results": 10
    }
    response = requests.post(url, json=payload, headers=headers)
    return response.json()

# Try it out
if __name__ == "__main__":
    API_KEY = "your_api_key_here"
    result = analyze_serp("best crm software", API_KEY)

    # SERP Features
    print("=== SERP FEATURES ===")
    features = result['serp_features']
    for feature, present in features.items():
        status = "Yes" if present else "No"
        print(f"  {feature}: {status}")

    # Discussion stats
    print("\n=== DISCUSSIONS ===")
    print(f"  Count: {result['discussion_count']} discussions in top 10")
    print(f"  Positions: {result['discussion_positions']}")

    # Organic results
    print("\n=== TOP 5 ORGANIC RESULTS ===")
    for r in result['organic_results'][:5]:
        print(f"  {r['position']}. {r['domain']}")
        print(f"     {r['title'][:50]}...")
```
Run this and you'll see:
Sample Output
```
=== SERP FEATURES ===
  has_ai_overview: Yes
  has_discussion_box: Yes
  has_featured_snippet: No
  has_local_pack: No
  has_video_results: Yes
  has_people_also_ask: Yes
  has_knowledge_panel: No
  has_image_pack: No

=== DISCUSSIONS ===
  Count: 3 discussions in top 10
  Positions: [2, 5, 8]

=== TOP 5 ORGANIC RESULTS ===
  1. forbes.com
     Best CRM Software Of 2025...
  2. reddit.com
     Best CRM for small business? : r/smallbusiness...
  3. hubspot.com
     Free CRM Software & Tools for Your Whole Team...
  4. pcmag.com
     The Best CRM Software for 2025...
  5. reddit.com
     What CRM do you use and why? : r/sales...
```
Understanding the Response
The API returns three main pieces of data:
| Field | Description |
|---|---|
| `serp_features` | Object with boolean flags for each SERP feature (AI Overview, Discussion Box, etc.) |
| `organic_results` | Array of results with position, URL, domain, title, description, and type |
| `discussion_positions` | Array of positions where discussions appear (Reddit, Quora, forums) |
| `discussion_count` | Total number of discussion results in the SERP |
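Putting those fields together, a response for the payload above has roughly this shape. This is a representative sketch based on the fields documented here, not actual API output; the values are illustrative:

```python
# Representative response shape (illustrative values, not real API output)
response = {
    "serp_features": {
        "has_ai_overview": True,
        "has_discussion_box": True,
        "has_featured_snippet": False,
        # ... one boolean flag per feature
    },
    "organic_results": [
        {
            "position": 1,
            "url": "https://www.forbes.com/...",
            "domain": "forbes.com",
            "title": "Best CRM Software Of 2025",
            "description": "...",
            "type": "organic",
            "is_discussion": False,
        },
        # ... up to max_results entries
    ],
    "discussion_positions": [2, 5, 8],
    "discussion_count": 3,
}
```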
What Counts as a Discussion?
The API identifies discussions from these sources:
- reddit.com - Subreddit threads
- quora.com - Q&A threads
- stackoverflow.com - Technical discussions
- *.stackexchange.com - Stack Exchange network
- Forum domains - Sites with /forum/, /community/, /discuss/ URL patterns (see the sketch below)
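The classification happens server-side, but if you want a rough local equivalent, for example to post-process URLs from another source, a sketch might look like this. This is our own approximation of the rules above, not the API's actual logic:

```python
from urllib.parse import urlparse

# Our own approximation of discussion detection -- not the API's actual logic
DISCUSSION_DOMAINS = {"reddit.com", "quora.com", "stackoverflow.com"}
FORUM_PATH_HINTS = ("/forum/", "/community/", "/discuss/")

def looks_like_discussion(url):
    """Heuristic check for discussion-style URLs."""
    parsed = urlparse(url)
    host = parsed.netloc.lower().removeprefix("www.")  # Python 3.9+
    # Known discussion domains, including their subdomains
    if any(host == d or host.endswith("." + d) for d in DISCUSSION_DOMAINS):
        return True
    # Stack Exchange network
    if host.endswith(".stackexchange.com"):
        return True
    # Forum-style URL paths
    return any(hint in parsed.path.lower() for hint in FORUM_PATH_HINTS)

print(looks_like_discussion("https://www.reddit.com/r/sales/comments/abc/"))  # True
```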
Detecting SERP Feature Changes
SERP features come and go. A keyword might show an AI Overview today but not tomorrow. Here's how to track which features appear and alert on changes:
```python
import requests
import json
from datetime import datetime

def get_serp_features(keyword, api_key):
    """Get SERP features for a keyword."""
    url = "https://reddit-traffic-and-intelligence-api.p.rapidapi.com/api/v2/serp-analysis"
    headers = {
        "Content-Type": "application/json",
        "x-rapidapi-host": "reddit-traffic-and-intelligence-api.p.rapidapi.com",
        "x-rapidapi-key": api_key
    }
    response = requests.post(url, json={"keyword": keyword}, headers=headers)
    return response.json()

def compare_features(current, previous):
    """Compare two SERP feature snapshots and return changes."""
    changes = []
    for feature, is_present in current.items():
        was_present = previous.get(feature, False)
        if is_present and not was_present:
            changes.append(f"+ {feature} appeared")
        elif not is_present and was_present:
            changes.append(f"- {feature} disappeared")
    return changes

def track_keywords(keywords, api_key, history_file="serp_history.json"):
    """
    Track SERP features for multiple keywords.
    Compares against previous run and reports changes.
    """
    # Load previous data
    try:
        with open(history_file, 'r') as f:
            history = json.load(f)
    except FileNotFoundError:
        history = {}

    current_data = {}
    all_changes = []

    for keyword in keywords:
        print(f"Analyzing: {keyword}")
        result = get_serp_features(keyword, api_key)

        features = result.get('serp_features', {})
        discussion_count = result.get('discussion_count', 0)

        current_data[keyword] = {
            'features': features,
            'discussion_count': discussion_count,
            'discussion_positions': result.get('discussion_positions', []),
            'timestamp': datetime.now().isoformat()
        }

        # Compare with previous
        if keyword in history:
            prev = history[keyword]
            changes = compare_features(features, prev.get('features', {}))

            # Check discussion count changes
            prev_count = prev.get('discussion_count', 0)
            if discussion_count != prev_count:
                changes.append(f"Discussions: {prev_count} -> {discussion_count}")

            if changes:
                all_changes.append({'keyword': keyword, 'changes': changes})

    # Save current data
    with open(history_file, 'w') as f:
        json.dump(current_data, f, indent=2)

    return all_changes

# Example usage
if __name__ == "__main__":
    API_KEY = "your_api_key_here"

    keywords = [
        "best crm software",
        "project management tools",
        "email marketing platform"
    ]

    changes = track_keywords(keywords, API_KEY)

    if changes:
        print("\n=== SERP CHANGES DETECTED ===")
        for item in changes:
            print(f"\n{item['keyword']}:")
            for change in item['changes']:
                print(f"  {change}")
    else:
        print("\nNo changes detected since last run.")
```
Scheduling Tip
Run this script daily with a cron job or Task Scheduler. SERP features can change quickly, especially for competitive keywords. Weekly tracking at minimum catches major shifts.
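For example, on Linux or macOS you could schedule the tracker with a crontab entry like the one below. This assumes the tracking script above is saved as track_serps.py; the paths and schedule are placeholders to adapt:

```bash
# crontab -e: run the SERP tracker every morning at 07:00
0 7 * * * /usr/bin/python3 /path/to/track_serps.py >> /path/to/serp_tracking.log 2>&1
```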
Finding Keywords with Discussion Opportunities
If discussions are ranking for a keyword, that's a signal. It means Google thinks user-generated content is relevant - and those discussions are getting traffic. Here's how to find keywords where discussions have strong presence:
```python
import requests
import time

def analyze_serp(keyword, api_key):
    """Get SERP analysis for a keyword."""
    url = "https://reddit-traffic-and-intelligence-api.p.rapidapi.com/api/v2/serp-analysis"
    headers = {
        "Content-Type": "application/json",
        "x-rapidapi-host": "reddit-traffic-and-intelligence-api.p.rapidapi.com",
        "x-rapidapi-key": api_key
    }
    response = requests.post(url, json={"keyword": keyword}, headers=headers)
    return response.json()

def find_discussion_opportunities(keywords, api_key):
    """
    Find keywords where discussions rank prominently.
    Returns keywords sorted by discussion opportunity.
    """
    results = []

    for keyword in keywords:
        print(f"Checking: {keyword}")
        data = analyze_serp(keyword, api_key)

        features = data.get('serp_features', {})
        positions = data.get('discussion_positions', [])
        count = data.get('discussion_count', 0)

        # Calculate opportunity score
        score = 0

        # Discussion Box is highest value
        if features.get('has_discussion_box'):
            score += 50

        # Top 3 discussion = high value
        top_3_discussions = len([p for p in positions if p <= 3])
        score += top_3_discussions * 20

        # Any discussions in top 10
        score += count * 5

        if score > 0:
            results.append({
                'keyword': keyword,
                'score': score,
                'has_discussion_box': features.get('has_discussion_box', False),
                'discussion_count': count,
                'discussion_positions': positions
            })

        time.sleep(0.5)  # Rate limiting

    # Sort by opportunity score
    return sorted(results, key=lambda x: x['score'], reverse=True)

# Example usage
if __name__ == "__main__":
    API_KEY = "your_api_key_here"

    keywords = [
        "best crm for startups",
        "hubspot vs salesforce",
        "simple crm software",
        "crm for small business",
        "free crm tools",
        "crm implementation",
        "what is crm",
        "crm best practices"
    ]

    opportunities = find_discussion_opportunities(keywords, API_KEY)

    print("\n=== DISCUSSION OPPORTUNITIES (by score) ===\n")
    for opp in opportunities:
        box = "[Discussion Box]" if opp['has_discussion_box'] else ""
        print(f"{opp['keyword']}")
        print(f"  Score: {opp['score']} | Discussions: {opp['discussion_count']} {box}")
        print(f"  Positions: {opp['discussion_positions']}\n")
```
Sample Output
```
=== DISCUSSION OPPORTUNITIES (by score) ===

hubspot vs salesforce
  Score: 85 | Discussions: 3 [Discussion Box]
  Positions: [2, 4, 7]

best crm for startups
  Score: 80 | Discussions: 2 [Discussion Box]
  Positions: [2, 6]

crm for small business
  Score: 35 | Discussions: 3
  Positions: [3, 5, 9]

simple crm software
  Score: 5 | Discussions: 1
  Positions: [5]
```
What This Tells You
Keywords with high discussion scores are where real users are influencing the SERP. For "hubspot vs salesforce," Reddit threads at positions 2, 4, and 7 are capturing a meaningful share of clicks. Under the scoring above, that SERP earns 85 points: 50 for the Discussion Box, 20 for the thread in the top 3, and 5 for each of the three discussions. That's traffic going to user opinions rather than vendor marketing.
Complete SERP Analysis Script
Here's a complete, reusable class that combines everything:
""" SERP Analysis Script Analyze Google search results, detect features, and track discussion positions. """ import requests import json import time from datetime import datetime class SERPAnalysis: def __init__(self, api_key): self.api_key = api_key self.base_url = "https://reddit-traffic-and-intelligence-api.p.rapidapi.com" self.headers = { "Content-Type": "application/json", "x-rapidapi-host": "reddit-traffic-and-intelligence-api.p.rapidapi.com", "x-rapidapi-key": api_key } def analyze(self, keyword, max_results=10): """ Analyze the SERP for a keyword. Args: keyword: Search term to analyze max_results: Number of organic results (1-20) Returns: Dict with features, organic results, and discussion data """ payload = { "keyword": keyword, "include_features": True, "max_results": max_results } response = requests.post( f"{self.base_url}/api/v2/serp-analysis", json=payload, headers=self.headers ) return response.json() def analyze_batch(self, keywords, max_results=10): """Analyze multiple keywords with rate limiting.""" results = {} for keyword in keywords: results[keyword] = self.analyze(keyword, max_results) time.sleep(0.5) # Be nice to the API return results def get_features_summary(self, keyword): """Get a simple summary of SERP features.""" data = self.analyze(keyword) features = data.get('serp_features', {}) active = [f for f, v in features.items() if v] return { 'keyword': keyword, 'active_features': active, 'feature_count': len(active), 'has_discussions': data.get('discussion_count', 0) > 0 } def find_discussion_keywords(self, keywords, min_score=20): """ Find keywords with strong discussion presence. Args: keywords: List of keywords to check min_score: Minimum opportunity score to include Returns: List of keywords with discussion opportunities """ opportunities = [] for keyword in keywords: data = self.analyze(keyword) features = data.get('serp_features', {}) positions = data.get('discussion_positions', []) count = data.get('discussion_count', 0) # Score calculation score = 0 if features.get('has_discussion_box'): score += 50 score += len([p for p in positions if p <= 3]) * 20 score += count * 5 if score >= min_score: opportunities.append({ 'keyword': keyword, 'score': score, 'has_discussion_box': features.get('has_discussion_box', False), 'discussion_count': count, 'positions': positions }) time.sleep(0.5) return sorted(opportunities, key=lambda x: x['score'], reverse=True) def compare_snapshots(self, current, previous): """Compare two SERP snapshots and return differences.""" changes = { 'features_added': [], 'features_removed': [], 'discussion_change': None, 'ranking_changes': [] } curr_features = current.get('serp_features', {}) prev_features = previous.get('serp_features', {}) for feature, is_present in curr_features.items(): was_present = prev_features.get(feature, False) if is_present and not was_present: changes['features_added'].append(feature) elif not is_present and was_present: changes['features_removed'].append(feature) curr_disc = current.get('discussion_count', 0) prev_disc = previous.get('discussion_count', 0) if curr_disc != prev_disc: changes['discussion_change'] = {'from': prev_disc, 'to': curr_disc} return changes def export_analysis(self, keywords, filename=None): """Export SERP analysis for multiple keywords to JSON.""" if not filename: timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") filename = f"serp_analysis_{timestamp}.json" results = { 'generated': datetime.now().isoformat(), 'keywords': {} } for keyword in keywords: print(f"Analyzing: {keyword}") 
results['keywords'][keyword] = self.analyze(keyword) time.sleep(0.5) with open(filename, 'w') as f: json.dump(results, f, indent=2) print(f"Exported to {filename}") return filename def print_report(self, keyword): """Print a formatted SERP report for a keyword.""" data = self.analyze(keyword) print(f"\n{'='*60}") print(f"SERP ANALYSIS: {keyword}") print(f"{'='*60}") # Features print("\nSERP FEATURES:") features = data.get('serp_features', {}) for feature, present in features.items(): icon = "[x]" if present else "[ ]" print(f" {icon} {feature.replace('has_', '').replace('_', ' ').title()}") # Discussions disc_count = data.get('discussion_count', 0) disc_pos = data.get('discussion_positions', []) print(f"\nDISCUSSIONS: {disc_count} in top 10") if disc_pos: print(f" Positions: {disc_pos}") # Top results print("\nTOP 5 RESULTS:") for r in data.get('organic_results', [])[:5]: disc_marker = " [DISCUSSION]" if r.get('is_discussion') else "" print(f" {r['position']}. {r['domain']}{disc_marker}") # Example usage if __name__ == "__main__": API_KEY = "your_api_key_here" serp = SERPAnalysis(API_KEY) # Single keyword report serp.print_report("best project management software") # Find discussion opportunities print("\n\n=== DISCUSSION OPPORTUNITIES ===") keywords = [ "best crm software", "hubspot alternatives", "salesforce vs hubspot", "crm for small business" ] opportunities = serp.find_discussion_keywords(keywords) for opp in opportunities: box = "+ Discussion Box" if opp['has_discussion_box'] else "" print(f"\n{opp['keyword']} (score: {opp['score']})") print(f" {opp['discussion_count']} discussions at positions {opp['positions']} {box}")
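One method the example usage above doesn't exercise is compare_snapshots. A minimal sketch of wiring it up to a file previously written by export_analysis (the filename here is hypothetical):

```python
import json

serp = SERPAnalysis("your_api_key_here")

# Load a snapshot previously written by export_analysis()
# (filename is hypothetical -- use whatever export_analysis produced)
with open("serp_analysis_20250101_070000.json") as f:
    previous = json.load(f)["keywords"]["best crm software"]

# Fetch a fresh snapshot and diff the two
current = serp.analyze("best crm software")
changes = serp.compare_snapshots(current, previous)

if changes["features_added"] or changes["features_removed"] or changes["discussion_change"]:
    print("SERP shifted:", changes)
else:
    print("No feature changes.")
```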
Try It With Your Keywords
See what SERP features appear for your target keywords. The free tier includes 15 API calls to test.
What's Next
Now you can see where discussions rank. But which discussions? How much traffic are they getting? What are people actually saying about your competitors in those threads?
In Tutorial 3, we'll go deeper: discover the actual discussion threads, estimate their traffic, and extract sentiment analysis for any brand mentioned. This is where SERP analysis turns into competitive intelligence.