Research Methodology Framework
Phase 1: Research Design (Week 1-2)
Step 1: Define Research Objectives
Clear Objectives Framework:
```python
def define_research_objectives():
    """
    Define clear, specific, and measurable research objectives.
    """
    objectives = {
        'primary_objective': {
            'statement': 'What do you want to learn?',
            'example': 'Understand how AI answers pricing queries in B2B SaaS',
            'success_criteria': 'Identify citation patterns and optimization opportunities'
        },
        'secondary_objectives': {
            'statement': 'What additional insights will be valuable?',
            'examples': [
                'Compare performance across AI platforms',
                'Analyze content characteristics driving citations',
                'Identify competitive opportunities'
            ]
        },
        'target_audience': {
            # Nested per audience so 'examples' is not a duplicate dict key
            'primary': {
                'statement': 'Who will use this research?',
                'examples': ['Marketing leaders', 'Content teams', 'Strategy executives']
            },
            'secondary': {
                'statement': 'Who else might benefit?',
                'examples': ['Industry analysts', 'Media', 'Academic researchers']
            }
        },
        'business_value': {
            'statement': 'How will this research create value?',
            'examples': [
                'Inform content strategy',
                'Identify optimization opportunities',
                'Establish thought leadership',
                'Generate leads and visibility'
            ]
        }
    }
    return objectives
```
Step 2: Develop Research Questions
Research Question Framework:
- Descriptive Questions (What is happening?)
  - "What is the average citation rate for [query type]?"
  - "How many sources does AI typically cite?"
- Comparative Questions (How do things compare?)
  - "How does citation rate vary by content type?"
  - "Which AI platforms cite our content most frequently?"
- Explanatory Questions (Why is this happening?)
  - "What content characteristics drive higher citation rates?"
  - "Why do certain queries have higher AI answer rates?"
- Prescriptive Questions (What should we do?)
  - "What content optimizations will improve citation rates?"
  - "How should we structure content for AI extraction?"
Step 3: Determine Research Scope
Scope Definition Matrix:
| Scope Dimension | Narrow Scope | Medium Scope | Broad Scope |
|---|---|---|---|
| Query Sample | 100-200 queries | 500-1,000 queries | 1,500+ queries |
| Time Period | 1 month | 3 months | 6-12 months |
| Platforms | 1 platform | 2-3 platforms | 4+ platforms |
| Industries | Single industry | 2-3 industries | 5+ industries |
| Content Types | Single type | 2-3 types | 4+ types |
Recommended First Study Scope:
- Query Sample: 500-750 queries
- Time Period: 3 months
- Platforms: 2-3 major platforms (Google AI, Bing Copilot, Perplexity)
- Industries: Your industry plus 1-2 related industries for context
- Content Types: 2-3 most relevant to your audience
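The recommended scope can also be pinned down in code so it travels with your analysis scripts. This is a minimal sketch: `ResearchScope` and its field names are illustrative, not part of any library, and the industry and content-type values are placeholders you would replace with your own.

```python
from dataclasses import dataclass, field

@dataclass
class ResearchScope:
    """Illustrative container for the scope decisions in the matrix above."""
    sample_size: int
    months: int
    platforms: list = field(default_factory=list)
    industries: list = field(default_factory=list)
    content_types: list = field(default_factory=list)

# Recommended first-study scope, following the guidance above
first_study = ResearchScope(
    sample_size=600,  # within the recommended 500-750 query range
    months=3,
    platforms=['Google AI', 'Bing Copilot', 'Perplexity'],
    industries=['B2B SaaS', 'MarTech'],  # your industry plus related ones (placeholder values)
    content_types=['blog posts', 'comparison pages'],  # placeholder values
)

assert 500 <= first_study.sample_size <= 750
```

Freezing the scope as a single object makes it easy to check later analysis steps against the boundaries you committed to up front.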
Phase 2: Data Collection (Week 3-5)
Step 1: Query Selection
Query Selection Framework:
```python
def select_queries(research_scope):
    """
    Select representative queries for analysis.
    """
    query_categories = {
        'commercial_queries': {
            'subcategories': [
                'pricing inquiries',
                'product comparisons',
                'feature questions',
                'vendor selection',
                'buying guides'
            ],
            'weight': 0.40
        },
        'informational_queries': {
            'subcategories': [
                'how-to questions',
                'best practices',
                'industry trends',
                'methodology questions'
            ],
            'weight': 0.35
        },
        'navigational_queries': {
            'subcategories': [
                'brand searches',
                'product-specific searches',
                'category searches'
            ],
            'weight': 0.25
        }
    }
    # sample_queries (defined elsewhere) draws a weighted sample across categories
    queries = sample_queries(query_categories, research_scope.sample_size)
    return queries
```
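The `sample_queries` helper is left undefined above. A minimal weighted-sampling sketch, under the assumption that it allocates the sample across categories by weight and spreads each category's allocation over its subcategories, could look like this (the signature and field names are assumptions, not a fixed API):

```python
import random

def sample_queries(query_categories, sample_size, seed=42):
    """Allocate the sample across categories by weight, then assign
    each slot a subcategory round-robin. In a real study each slot
    would be filled with an actual query from keyword research."""
    rng = random.Random(seed)  # seeded for reproducible sampling
    sample = []
    for name, spec in query_categories.items():
        n = round(sample_size * spec['weight'])
        subs = spec['subcategories']
        for i in range(n):
            sample.append({'category': name, 'subcategory': subs[i % len(subs)]})
    rng.shuffle(sample)
    return sample

# Small usage example with the weights from the framework above
queries = sample_queries({
    'commercial_queries': {'subcategories': ['pricing inquiries'], 'weight': 0.40},
    'informational_queries': {'subcategories': ['how-to questions'], 'weight': 0.35},
    'navigational_queries': {'subcategories': ['brand searches'], 'weight': 0.25},
}, sample_size=100)
assert len(queries) == 100
```

Because the weights sum to 1.0, rounding each allocation independently preserves the total here; with other weight sets you may need to reconcile rounding drift against the target sample size.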
Query Selection Best Practices:
- Representative distribution: Reflect real query volume distribution
- Variety within categories: Include different query variations
- Business relevance: Focus on queries important to your business
- Search volume consideration: Balance high-volume and long-tail queries
Step 2: Platform Testing
Testing Protocol:
- Prepare Testing Environment
  - Clear browser cookies between queries
  - Use consistent location settings
  - Document testing conditions
- Execute Query Testing
  - Test each query across all target platforms
  - Document AI-generated answers
  - Capture source citations
  - Record answer characteristics
- Data Collection Checklist:

Data Collection Template:
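Since the template itself is not shown above, here is one hedged sketch of what a per-query record might capture, mirroring the protocol items (all field names are illustrative):

```python
def new_collection_record(query, platform):
    """One blank row per (query, platform) test, matching the protocol above."""
    return {
        'query': query,
        'platform': platform,
        'test_date': None,            # filled in at execution time
        'location_setting': None,     # documented testing condition
        'ai_answer_text': None,       # the AI-generated answer, verbatim
        'cited_sources': [],          # list of cited URLs or domains
        'answer_characteristics': {   # recorded per answer
            'word_count': None,
            'format': None,           # e.g. paragraph, list, table
        },
    }

# Example: one record for a single query/platform pair
record = new_collection_record('best B2B SaaS pricing tools', 'Perplexity')
assert record['cited_sources'] == []
```

Keeping one record per query-platform pair makes cross-platform comparison a straightforward group-by at analysis time.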