The Problem with Manual Keyword Research
If you've done keyword research, you know the drill: type a keyword into a tool, write down the metrics, repeat. Maybe you copy everything into a spreadsheet. For 10 keywords, that's fine. For 500? That's your entire afternoon.
Python can do this in minutes. And once you've written the script, you can run it again whenever you need updated data - no manual work required.
In this tutorial, we'll build a keyword research script that:
- Analyzes keywords in batches (up to 10 at a time)
- Returns search volume, CPC, and difficulty scores
- Classifies search intent (informational, commercial, etc.)
- Finds related keywords you might have missed
- Exports everything to CSV for further analysis
What You'll Need
Before we start:
- Python 3.7+ installed
- The `requests` library (`pip install requests`)
- Optional: `pandas` for data analysis (`pip install pandas`)
- An API key from RapidAPI (free tier available)
Your First Keyword Analysis
Let's start simple - analyze a single keyword and see what we get back.
```python
import requests

def analyze_keyword(keyword, api_key):
    """
    Get metrics for a single keyword.

    Returns: volume, CPC, difficulty, and search intent
    """
    url = "https://reddit-traffic-and-intelligence-api.p.rapidapi.com/api/v2/keyword-metrics"
    headers = {
        "Content-Type": "application/json",
        "x-rapidapi-host": "reddit-traffic-and-intelligence-api.p.rapidapi.com",
        "x-rapidapi-key": api_key
    }
    payload = {
        "keywords": [keyword]
    }

    response = requests.post(url, json=payload, headers=headers)
    return response.json()

# Try it out
if __name__ == "__main__":
    API_KEY = "your_api_key_here"
    result = analyze_keyword("project management software", API_KEY)

    for kw in result['keywords']:
        print(f"Keyword: {kw['keyword']}")
        print(f"  Volume: {kw['volume']:,}/month")
        print(f"  CPC: ${kw['cpc']}")
        print(f"  Difficulty: {kw['keyword_difficulty']}/100")
        print(f"  Intent: {kw['search_intent']}")
```
Run this and you'll see:
Sample Output
```
Keyword: project management software
  Volume: 74,000/month
  CPC: $15.23
  Difficulty: 72/100
  Intent: commercial
```
Understanding the Metrics
Before we go further, let's understand what each metric tells us:
| Metric | What It Means | How to Use It |
|---|---|---|
| `volume` | Monthly searches on Google | Higher = more potential traffic, but usually more competition |
| `cpc` | What advertisers pay per click | High CPC = commercial value. $10+ usually means money keywords |
| `keyword_difficulty` | How hard to rank (0-100) | Under 30 = easier wins. Over 70 = need strong domain authority |
| `search_intent` | What the searcher wants | Match your content type to intent (see below) |
Search Intent Types
The API classifies intent into four categories:
- Informational - Looking to learn ("what is project management")
- Navigational - Looking for a specific site ("asana login")
- Commercial - Researching before buying ("best project management software")
- Transactional - Ready to buy ("asana pricing", "buy monday.com")
Quick Win Strategy
Look for keywords with: difficulty under 40, volume over 1,000, and commercial or transactional intent. These are your best opportunities for content that converts.
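In code, that filter is just a few lines. Here's a minimal sketch, assuming `results` is a list of keyword dicts with the fields shown above (we'll build a function that produces exactly that list in the next section):

```python
def quick_wins(results, max_difficulty=40, min_volume=1000):
    """Filter analyzed keywords down to quick-win opportunities."""
    target_intents = {"commercial", "transactional"}
    return [
        kw for kw in results
        if kw["keyword_difficulty"] < max_difficulty
        and kw["volume"] > min_volume
        and kw["search_intent"] in target_intents
    ]
```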
Batch Analysis: Multiple Keywords at Once
Analyzing keywords one at a time is slow. The API accepts up to 10 keywords per request, so let's use that.
```python
import requests
import time

def analyze_keywords_batch(keywords, api_key):
    """
    Analyze multiple keywords in batches of 10.

    Args:
        keywords: List of keywords to analyze
        api_key: Your RapidAPI key

    Returns:
        List of keyword metrics
    """
    url = "https://reddit-traffic-and-intelligence-api.p.rapidapi.com/api/v2/keyword-metrics"
    headers = {
        "Content-Type": "application/json",
        "x-rapidapi-host": "reddit-traffic-and-intelligence-api.p.rapidapi.com",
        "x-rapidapi-key": api_key
    }

    all_results = []

    # Process in batches of 10
    for i in range(0, len(keywords), 10):
        batch = keywords[i:i+10]
        print(f"Analyzing batch {i//10 + 1}: {len(batch)} keywords...")

        payload = {"keywords": batch}
        response = requests.post(url, json=payload, headers=headers)
        data = response.json()

        if data.get('status') == 'success':
            all_results.extend(data['keywords'])

        # Small delay between batches to be nice to the API
        if i + 10 < len(keywords):
            time.sleep(1)

    return all_results

# Example: analyze a list of keywords
if __name__ == "__main__":
    API_KEY = "your_api_key_here"

    my_keywords = [
        "project management software",
        "task management app",
        "team collaboration tools",
        "agile project management",
        "kanban board software",
        "free project management",
        "project tracking software",
        "monday vs asana",
        "best project management app",
        "project management for small teams"
    ]

    results = analyze_keywords_batch(my_keywords, API_KEY)

    # Sort by volume (highest first)
    results.sort(key=lambda x: x['volume'], reverse=True)

    print("\n--- RESULTS (sorted by volume) ---\n")
    for kw in results:
        print(f"{kw['keyword']}")
        print(f"  Vol: {kw['volume']:,} | CPC: ${kw['cpc']} | Diff: {kw['keyword_difficulty']} | {kw['search_intent']}\n")
```
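One caveat: the script above silently skips a batch if a request fails. For longer runs you may want retries on rate limits and transient server errors. Here's a minimal sketch, assuming the API uses standard HTTP status codes:

```python
import time
import requests

def post_with_retry(url, payload, headers, retries=3, backoff=2):
    """POST with exponential backoff on rate limits (429) and server errors (5xx)."""
    for attempt in range(retries):
        response = requests.post(url, json=payload, headers=headers, timeout=30)
        if response.status_code == 429 or response.status_code >= 500:
            time.sleep(backoff ** attempt)  # wait 1s, 2s, 4s...
            continue
        response.raise_for_status()  # raise immediately on other client errors
        return response.json()
    response.raise_for_status()  # retries exhausted; surface the last error
```

Swap the `requests.post(...)` call inside the batch loop for `post_with_retry(...)` and failed batches will retry instead of disappearing.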
Finding Related Keywords
One of the most useful features: the API can suggest related keywords you might not have thought of. These come from Google's "related searches" and can uncover opportunities you'd otherwise miss.
```python
import requests

def get_keyword_with_related(keyword, api_key, max_related=10):
    """
    Get metrics for a keyword plus related keyword suggestions.

    Args:
        keyword: Seed keyword to analyze
        api_key: Your RapidAPI key
        max_related: How many related keywords to return (1-20)

    Returns:
        Keyword metrics plus related keywords with their metrics
    """
    url = "https://reddit-traffic-and-intelligence-api.p.rapidapi.com/api/v2/keyword-metrics"
    headers = {
        "Content-Type": "application/json",
        "x-rapidapi-host": "reddit-traffic-and-intelligence-api.p.rapidapi.com",
        "x-rapidapi-key": api_key
    }
    payload = {
        "keywords": [keyword],
        "include_related": True,
        "max_related": max_related
    }

    response = requests.post(url, json=payload, headers=headers)
    return response.json()

# Example usage
if __name__ == "__main__":
    API_KEY = "your_api_key_here"

    result = get_keyword_with_related("email marketing software", API_KEY, max_related=10)

    for kw in result['keywords']:
        print(f"=== {kw['keyword'].upper()} ===")
        print(f"Volume: {kw['volume']:,} | CPC: ${kw['cpc']} | Difficulty: {kw['keyword_difficulty']}\n")

        related = kw.get('related_keywords', [])
        if related:
            print("Related keywords:")
            for r in related:
                print(f"  - {r['keyword']}")
                print(f"    Vol: {r['volume']:,} | Diff: {r['keyword_difficulty']} | {r['search_intent']}")
```
Sample Output
```
=== EMAIL MARKETING SOFTWARE ===
Volume: 49,500 | CPC: $12.87 | Difficulty: 68

Related keywords:
  - email marketing platforms
    Vol: 18,100 | Diff: 62 | commercial
  - best email marketing service
    Vol: 12,100 | Diff: 58 | commercial
  - email automation software
    Vol: 8,100 | Diff: 45 | commercial
  - free email marketing tools
    Vol: 6,600 | Diff: 38 | commercial
  - email campaign software
    Vol: 4,400 | Diff: 52 | commercial
```
Finding Hidden Opportunities
Notice how "email automation software" has lower difficulty (45) than the seed keyword (68)? That's the kind of opportunity related keywords reveal. Similar intent, less competition.
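You can surface these automatically instead of eyeballing the output. Here's a minimal sketch, assuming the response shape returned by `get_keyword_with_related()` above:

```python
def easier_than_seed(result):
    """Yield (seed, related) pairs where the related keyword is easier to rank for."""
    for kw in result["keywords"]:
        for related in kw.get("related_keywords", []):
            if related["keyword_difficulty"] < kw["keyword_difficulty"]:
                yield kw["keyword"], related

# Example: print every related keyword that beats its seed on difficulty
for seed, related in easier_than_seed(result):
    print(f"{related['keyword']} (diff {related['keyword_difficulty']}) "
          f"is easier than '{seed}'")
```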
Exporting to CSV
Data in Python is useful, but sometimes you need it in a spreadsheet for filtering, sharing, or further analysis. Here's how to export everything to CSV.
```python
import requests
import csv
from datetime import datetime

def export_keywords_to_csv(keywords, api_key, filename=None):
    """
    Analyze keywords and export results to CSV.

    Args:
        keywords: List of keywords to analyze
        api_key: Your RapidAPI key
        filename: Output filename (auto-generated if not provided)

    Returns:
        Path to the created CSV file
    """
    # Generate filename if not provided
    if not filename:
        timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
        filename = f"keyword_research_{timestamp}.csv"

    url = "https://reddit-traffic-and-intelligence-api.p.rapidapi.com/api/v2/keyword-metrics"
    headers = {
        "Content-Type": "application/json",
        "x-rapidapi-host": "reddit-traffic-and-intelligence-api.p.rapidapi.com",
        "x-rapidapi-key": api_key
    }

    # Analyze keywords (in batches of 10)
    all_results = []
    for i in range(0, len(keywords), 10):
        batch = keywords[i:i+10]
        payload = {"keywords": batch}
        response = requests.post(url, json=payload, headers=headers)
        data = response.json()
        if data.get('status') == 'success':
            all_results.extend(data['keywords'])

    # Write to CSV
    with open(filename, 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)

        # Header row
        writer.writerow([
            'Keyword', 'Volume', 'CPC', 'Difficulty',
            'Intent', 'Opportunity Score'
        ])

        # Data rows
        for kw in all_results:
            # Calculate a simple opportunity score
            # Higher volume + lower difficulty = better opportunity
            volume = kw.get('volume', 0)
            difficulty = kw.get('keyword_difficulty', 100)
            opportunity = round(volume / max(difficulty, 1), 1)

            writer.writerow([
                kw['keyword'],
                kw['volume'],
                kw['cpc'],
                kw['keyword_difficulty'],
                kw['search_intent'],
                opportunity
            ])

    print(f"Exported {len(all_results)} keywords to {filename}")
    return filename

# Example usage
if __name__ == "__main__":
    API_KEY = "your_api_key_here"

    # Your keyword list
    keywords = [
        "crm software",
        "best crm for small business",
        "free crm",
        "crm comparison",
        "hubspot alternatives",
        "salesforce competitors",
        "simple crm",
        "crm for startups"
    ]

    export_keywords_to_csv(keywords, API_KEY)
```
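If you installed the optional `pandas`, the exported CSV is easy to slice further. A quick sketch (the filename below is a placeholder; use whatever `export_keywords_to_csv()` returned):

```python
import pandas as pd

df = pd.read_csv("keyword_research_20250101_120000.csv")  # placeholder filename

# Keywords under difficulty 40 with at least 1,000 searches, best opportunities first
easy_wins = df[(df["Difficulty"] < 40) & (df["Volume"] >= 1000)]
print(easy_wins.sort_values("Opportunity Score", ascending=False).head(10))
```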
Complete Keyword Research Script
Here's everything combined into a reusable class that you can drop into any project:
""" Keyword Research Automation Script Analyze keywords with volume, CPC, difficulty, and related suggestions. """ import requests import csv import time from datetime import datetime class KeywordResearch: def __init__(self, api_key): self.api_key = api_key self.base_url = "https://reddit-traffic-and-intelligence-api.p.rapidapi.com" self.headers = { "Content-Type": "application/json", "x-rapidapi-host": "reddit-traffic-and-intelligence-api.p.rapidapi.com", "x-rapidapi-key": api_key } def analyze(self, keywords, include_related=False, max_related=5): """ Analyze one or more keywords. Args: keywords: Single keyword (str) or list of keywords include_related: Whether to fetch related keywords max_related: Number of related keywords per seed (1-20) Returns: List of keyword data dicts """ # Handle single keyword if isinstance(keywords, str): keywords = [keywords] all_results = [] # Process in batches of 10 for i in range(0, len(keywords), 10): batch = keywords[i:i+10] payload = { "keywords": batch, "include_related": include_related, "max_related": max_related } response = requests.post( f"{self.base_url}/api/v2/keyword-metrics", json=payload, headers=self.headers ) data = response.json() if data.get('status') == 'success': all_results.extend(data['keywords']) # Rate limiting if i + 10 < len(keywords): time.sleep(1) return all_results def find_opportunities(self, keywords, max_difficulty=50, min_volume=500): """ Find low-competition keyword opportunities. Args: keywords: Keywords to analyze max_difficulty: Maximum difficulty score (0-100) min_volume: Minimum monthly volume Returns: List of opportunity keywords, sorted by potential """ results = self.analyze(keywords) opportunities = [ kw for kw in results if kw['keyword_difficulty'] <= max_difficulty and kw['volume'] >= min_volume ] # Sort by opportunity score (volume / difficulty) for kw in opportunities: kw['opportunity_score'] = round( kw['volume'] / max(kw['keyword_difficulty'], 1), 1 ) return sorted(opportunities, key=lambda x: x['opportunity_score'], reverse=True) def expand_seed_keywords(self, seed_keywords, max_related=10): """ Expand a list of seed keywords with related suggestions. 
Args: seed_keywords: Starting keywords max_related: Related keywords per seed Returns: Dict with seeds and all discovered keywords """ results = self.analyze(seed_keywords, include_related=True, max_related=max_related) all_keywords = [] for kw in results: all_keywords.append(kw) for related in kw.get('related_keywords', []): all_keywords.append(related) return { 'seed_count': len(seed_keywords), 'total_keywords': len(all_keywords), 'keywords': all_keywords } def export_csv(self, keywords, filename=None, include_related=False): """Export keyword analysis to CSV.""" if not filename: timestamp = datetime.now().strftime("%Y%m%d_%H%M%S") filename = f"keywords_{timestamp}.csv" results = self.analyze(keywords, include_related=include_related) # Flatten related keywords into the main list all_keywords = [] for kw in results: all_keywords.append({ 'keyword': kw['keyword'], 'volume': kw['volume'], 'cpc': kw['cpc'], 'difficulty': kw['keyword_difficulty'], 'intent': kw['search_intent'], 'source': 'seed' }) for related in kw.get('related_keywords', []): all_keywords.append({ 'keyword': related['keyword'], 'volume': related['volume'], 'cpc': related['cpc'], 'difficulty': related['keyword_difficulty'], 'intent': related['search_intent'], 'source': f"related:{kw['keyword']}" }) with open(filename, 'w', newline='', encoding='utf-8') as f: writer = csv.DictWriter(f, fieldnames=['keyword', 'volume', 'cpc', 'difficulty', 'intent', 'source']) writer.writeheader() writer.writerows(all_keywords) print(f"Exported {len(all_keywords)} keywords to {filename}") return filename # Example usage if __name__ == "__main__": API_KEY = "your_api_key_here" kr = KeywordResearch(API_KEY) # Basic analysis print("=== BASIC ANALYSIS ===") results = kr.analyze(["seo software", "seo tools"]) for kw in results: print(f"{kw['keyword']}: {kw['volume']:,} vol, {kw['keyword_difficulty']} diff") # Find opportunities print("\n=== LOW-COMPETITION OPPORTUNITIES ===") opportunities = kr.find_opportunities( ["seo audit", "seo checklist", "technical seo", "local seo"], max_difficulty=50, min_volume=1000 ) for kw in opportunities[:5]: print(f"{kw['keyword']}: score {kw['opportunity_score']}") # Expand with related keywords print("\n=== EXPANDED KEYWORD LIST ===") expanded = kr.expand_seed_keywords(["email marketing"], max_related=5) print(f"Started with {expanded['seed_count']} seed, found {expanded['total_keywords']} total") # Export to CSV print("\n=== EXPORT ===") kr.export_csv(["content marketing", "content strategy"], include_related=True)
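A natural way to chain these pieces: expand a few seeds into a larger candidate list, screen the candidates for opportunities, then export the winners. A sketch using only the methods defined above (note that `find_opportunities` re-analyzes the candidates, so this costs extra API calls):

```python
kr = KeywordResearch("your_api_key_here")

# 1. Expand a handful of seeds into a larger candidate list
expanded = kr.expand_seed_keywords(["project management"], max_related=10)
candidates = [kw["keyword"] for kw in expanded["keywords"]]

# 2. Screen the candidates for low-competition opportunities
opportunities = kr.find_opportunities(candidates, max_difficulty=40, min_volume=1000)

# 3. Export the winners to CSV
kr.export_csv([kw["keyword"] for kw in opportunities])
```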
Try It With Your Keywords
The free tier includes 15 API calls - enough to analyze 150 keywords and test with your actual keyword list.
What's Next
Now that you can pull keyword metrics programmatically, the next step is understanding what's actually ranking for those keywords. In Tutorial 2, we'll build a SERP analysis script that shows you exactly what Google displays - including which positions have discussions, forums, and user-generated content.