Why this method works
Traditional B2B databases like Apollo and Sales Navigator miss 90%+ of local/service-based businesses because:
- Many don't have LinkedIn company pages
- Employees don't list these companies on their profiles
- Example: Apollo found only 1,300 UK clinics vs. 19,220 using this method
Results from this approach:
- 32% LinkedIn connection rate
- 24% reply rate
- 31 positive responses from ~1,000 outreach attempts
- 35 positive replies from 1,500 email campaigns
Tools required
- Apify Google Maps Scraper - to get data from Google Maps
- Clay - data enrichment and workflow
- n8n - automation workflow
- ChatGPT - generating city lists
- Lead Magic - personal email finder
- HeyReach - LinkedIn outreach
- Instantly - email outreach
Part 1: Getting Google Maps data
Step 1: Generate your location list
Get major cities in your target country:
- Go to ChatGPT
- Prompt: "List all major cities in [YOUR COUNTRY]"
- Copy the list
Step 2: Set up Apify to get data from Google Maps
Basic configuration:
- Choose category search (not search terms)
- Select all relevant categories for your niche meticulously
- Example: For clinics - select every clinic-related category
- Set location to City + Country format (not just country)
- Better results: "London, United Kingdom"
- Worse results: "United Kingdom"
Why use categories? Captures businesses that don't use standard keywords in their names.
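The manual console setup above can also be driven programmatically. A minimal sketch of triggering one scraper run per city through Apify's REST API -- the actor slug and input field names here are assumptions, so copy the exact JSON from a run you configured manually in the Apify console rather than trusting these names:

```python
# Sketch: build the request for one Apify Google Maps scraper run per city.
# ACTOR_ID and the payload field names are illustrative -- use the JSON you
# copied from your own configured run.
import json

APIFY_TOKEN = "YOUR_APIFY_TOKEN"  # placeholder
ACTOR_ID = "compass~crawler-google-places"  # assumed Google Maps scraper slug

def build_run_request(city: str, country: str) -> tuple:
    """Return the run URL and input payload for one city."""
    url = f"https://api.apify.com/v2/acts/{ACTOR_ID}/runs?token={APIFY_TOKEN}"
    payload = {
        # "City, Country" gives better coverage than country alone
        "locationQuery": f"{city}, {country}",
        # illustrative: the categories you ticked in the console
        "categoryFilterWords": ["physiotherapist", "medical clinic"],
        "maxCrawledPlacesPerSearch": 500,
    }
    return url, payload

url, payload = build_run_request("London", "United Kingdom")
print(json.dumps(payload, indent=2))
```

Sending `payload` as a POST body to `url` starts the run; Clay's HTTP API column does exactly this per row.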
Part 2: Automate getting data in bulk (Optional)
Manual option: Takes ~1 hour to copy/paste locations individually
Automated option using Clay + n8n:
Step 3: Set up Clay table
- Import your city list from ChatGPT into Clay
- Create HTTP API column to trigger Apify runs
- Get your Apify settings:
- Configure one scraper run manually in Apify
- Click "JSON" at the top
- Copy the entire JSON object
- Paste into Clay HTTP API column
- Change only the location field to dynamically reference your city column
- Run in batches (based on your Apify plan limits)
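The Clay HTTP API column body ends up looking like the JSON you copied from Apify, with only the location swapped for a column reference. A hedged illustration (field names and the `{{City}}` token are placeholders -- paste your actual copied JSON and insert your own city column):

```json
{
  "locationQuery": "{{City}}, United Kingdom",
  "categoryFilterWords": ["physiotherapist", "medical clinic"],
  "maxCrawledPlacesPerSearch": 500
}
```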
Step 4: Build n8n automation
Purpose: Consolidate all scraper runs into one file
- Create webhook to receive data from Clay
- Add "Get Run Data" node using the Dataset ID from Clay
- Find API endpoint: Apify → API → API endpoints
- Send consolidated data back to Clay
Why this workflow?
- Clay processes row-by-row (perfect for multiple locations)
- n8n handles the data consolidation
- Automatic data aggregation from all scraper runs
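What the n8n workflow does can be sketched in a few lines: pull the items from each run's dataset, then merge and deduplicate them. The dataset-items endpoint is real Apify API v2; the `placeId`/`title` field names are assumptions about the scraper's output:

```python
# Sketch of the n8n consolidation step: fetch each run's dataset and
# merge all items into one deduplicated list.
import json
import urllib.request

def fetch_dataset_items(dataset_id: str, token: str) -> list:
    """Download all items of one Apify dataset as JSON."""
    url = (f"https://api.apify.com/v2/datasets/{dataset_id}"
           f"/items?token={token}&format=json")
    with urllib.request.urlopen(url) as resp:
        return json.load(resp)

def consolidate(runs: list) -> list:
    """Flatten per-city result lists, dropping duplicate places."""
    seen, merged = set(), []
    for items in runs:
        for item in items:
            key = item.get("placeId") or item.get("title")  # assumed fields
            if key not in seen:
                seen.add(key)
                merged.append(item)
    return merged

# Cities overlap: the same clinic can appear in two runs
demo = consolidate([[{"placeId": "a"}], [{"placeId": "a"}, {"placeId": "b"}]])
print(len(demo))  # 2 -- the duplicate is dropped
```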
Part 3: Qualify and filter your list
Step 5: Filter out unqualified leads
Critical qualification prompts in Clay:
1. Exclude irrelevant business types
Example: NHS vs. Private Clinics
- Search their website
- Return: "NHS" or "Private"
- Filter out NHS (government-linked, lower budget)
2. Identify relevant software/tools they use
Check booking page URLs for:
- Cliniko (clinic management software)
- E-prescriber
- Other relevant tools
Why this matters: software usage is a positive buying signal for your offer.
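The software check is often a simple domain match, since hosted booking tools live on recognizable subdomains (Cliniko pages sit on `cliniko.com`, for example). A minimal sketch -- the domain list is an assumption to extend for your niche:

```python
# Sketch: infer the booking software from the booking page URL.
# The KNOWN_TOOLS mapping is an example list, not exhaustive.
KNOWN_TOOLS = {
    "cliniko.com": "Cliniko",
    "janeapp.com": "Jane",  # assumed additional example
}

def detect_software(booking_url: str):
    """Return the tool name if a known domain appears in the URL."""
    url = booking_url.lower()
    for domain, tool in KNOWN_TOOLS.items():
        if domain in url:
            return tool
    return None

print(detect_software("https://my-clinic.au1.cliniko.com/bookings"))  # Cliniko
print(detect_software("https://example.com/book"))  # None
```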
Step 6: Find contact emails
Use Clay to scrape:
- Generic email addresses (info@, contact@)
- Useful for small clinics with single contact points
- Fallback option when personal emails aren't available
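Scraping generic inboxes is a one-regex job once you have the page HTML. A sketch, assuming you only want the generic prefixes and not personal addresses:

```python
# Sketch: extract generic inbox addresses (info@, contact@, ...) from HTML
# as the fallback contact point for small clinics.
import re

GENERIC_PREFIXES = ("info", "contact", "hello", "enquiries")

def find_generic_emails(html: str) -> list:
    emails = re.findall(r"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}", html)
    # Keep only addresses whose local part is a generic prefix
    return sorted({e.lower() for e in emails
                   if e.split("@")[0].lower() in GENERIC_PREFIXES})

page = '<a href="mailto:Info@myclinic.co.uk">Email us</a> or dr.smith@myclinic.co.uk'
print(find_generic_emails(page))  # ['info@myclinic.co.uk']
```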
Part 4: Extract decision makers
Step 7: Scrape website for Staff Names
The money prompt - adapt this for your niche:
Visit this company website. Check the About Us page and Team page. Return a list of [DECISION MAKERS] with:
- Full name
- Job title
- Contact email (if available)
Focus on: [YOUR TARGET ROLES - e.g., doctors, clinic managers]
Results example with this prompt: Found 41 doctors at one clinic vs. 2 on Apollo
This data lives on websites but not LinkedIn/Apollo - completely untapped.
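If you run this prompt through your own API key rather than Clay credits (see the cost note in Step 10), the request body looks roughly like this. Note the browsing is done by Clay's web-enabled AI column -- a plain chat-completions call cannot visit the URL itself, so this sketch only shows the prompt shape; the model name is a placeholder:

```python
# Sketch: the "money prompt" packaged as a chat-completions payload.
# Model name and role list are placeholders; in Clay, a web-enabled AI
# column supplies the page content that a bare API call cannot fetch.
import json

def build_staff_prompt(website: str, roles: str) -> dict:
    prompt = (
        f"Visit this company website: {website}. "
        "Check the About Us page and Team page. "
        f"Return a JSON list of {roles} with full name, job title, "
        "and contact email if available."
    )
    return {
        "model": "gpt-4o-mini",  # placeholder model
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_staff_prompt("https://example-clinic.co.uk",
                             "doctors and clinic managers")
print(json.dumps(payload, indent=2))
```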
Step 8: Transfer these results to new Clay table
Use "Write to Other Table" function:
- Create new blank table
- In original table: Add "Write to Other Table" column
- Select your doctor/staff list (JSON array)
- Check "Select from a list" - this creates one row per person
- Map ALL data points you need:
- Full name from JSON
- Job title from JSON
- Email from JSON
- Company name (from parent table)
- Company website (from parent table)
- All other relevant fields
⚠️ Common mistake: Assuming all data transfers automatically - you must manually map each field.
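What "Write to Other Table" with "Select from a list" does, in effect: one parent row holding a JSON array of people becomes one child row per person, and every parent field you need must be carried across explicitly. A sketch with illustrative field names:

```python
# Sketch of the row explosion: each person in the staff array becomes a
# row, with the parent company fields mapped in manually -- mirroring the
# field mapping you must do in Clay.
def explode_staff(company_row: dict) -> list:
    parent_fields = {
        "company_name": company_row["company_name"],
        "company_website": company_row["company_website"],
    }
    return [{**person, **parent_fields} for person in company_row["staff"]]

row = {
    "company_name": "Example Clinic",
    "company_website": "example-clinic.co.uk",
    "staff": [
        {"full_name": "Dr. Rita Johnson", "job_title": "GP"},
        {"full_name": "Sam Lee", "job_title": "Clinic Manager"},
    ],
}
for person in explode_staff(row):
    print(person["full_name"], "-", person["company_name"])
```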
Part 5: Cleanse and enrich contact data
Step 9: Extract and format names
Problem: Full names include titles like "Dr. Rita Johnson"
Solution: Clay prompt:
Extract first name from this full name: [FULL NAME]
Rules:
- Remove titles (Dr., Mr., Mrs.)
- If only an initial is available (e.g., Dr. R. Johnson), return "Dr. Johnson"
- Return only the first name for personalization
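For patterns this predictable, a small deterministic function is cheaper than an LLM call. A sketch implementing the rules above (the title list and the "Dr." fallback are simplifications):

```python
# Sketch: deterministic version of the Step 9 name-cleaning rules.
import re

TITLES = r"(?:Dr|Mr|Mrs|Ms|Prof)\.?"

def first_name(full_name: str) -> str:
    stripped = re.sub(rf"^\s*{TITLES}\s+", "", full_name).strip()
    parts = stripped.split()
    # Only an initial left (e.g. "Dr. R. Johnson") -> fall back to
    # "Dr. <surname>" so the greeting still reads naturally.
    if parts and re.fullmatch(r"[A-Z]\.?", parts[0]):
        return f"Dr. {parts[-1]}"
    return parts[0] if parts else ""

print(first_name("Dr. Rita Johnson"))  # Rita
print(first_name("Dr. R. Johnson"))    # Dr. Johnson
print(first_name("Sam Lee"))           # Sam
```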
Step 10: Categorize and re-qualify roles
Second qualification layer prompt:
Review this data:
- Job title: [JOB TITLE]
- Full name: [FULL NAME]
- Category: [CATEGORY]
Is this person a [TARGET ROLE]? Yes/No
Aim for ~95% accuracy - still better than manual review at this scale
⚠️ Critical: Use your own API keys (OpenAI, Anthropic) instead of Clay credits - much cheaper
Step 11: Enrich with contact details
Waterfall enrichment strategy:
- Find LinkedIn URL first:
- Search prompt: "Find LinkedIn profile for [FIRST NAME] [JOB TITLE] at [COMPANY NAME]. Return only the URL, no other text."
- Personal email enrichment:
- Tool 1: [First personal email tool]
- Tool 2: [Backup personal email tool]
- Requires LinkedIn URL to work effectively
- Work email enrichment:
- Use Lead Magic or similar
- Expect low match rates for local businesses
- That's why personal emails are critical
- Fallback options:
- Generic company emails from Step 6
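The waterfall in code form: try each source in order and keep the first hit. The provider functions are stubs -- in Clay each is an enrichment column (the two personal-email tools, Lead Magic, then the Step 6 generic email):

```python
# Sketch: waterfall enrichment -- return the first email any provider
# yields. Providers here are stubs standing in for the real tools.
def waterfall(lead: dict, providers: list):
    for provider in providers:
        email = provider(lead)
        if email:
            return email
    return None

personal_tool_1 = lambda lead: None                    # no match
personal_tool_2 = lambda lead: None                    # backup, no match
work_email      = lambda lead: None                    # low local hit rate
generic_email   = lambda lead: lead.get("generic_email")  # Step 6 fallback

lead = {"full_name": "Rita Johnson", "generic_email": "info@myclinic.co.uk"}
print(waterfall(lead, [personal_tool_1, personal_tool_2,
                       work_email, generic_email]))  # info@myclinic.co.uk
```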
Part 6: Launch outreach campaigns
Step 12: Export to outreach tools
You now have:
- 30,000+ qualified leads (vs. 1,300 from Apollo)
- Personal emails
- Work emails (where available)
- Generic company emails
- Personalization variables (software used, role, etc.)
Add to campaigns:
- LinkedIn: HeyReach
- Email: Instantly
Step 13: Write targeted copy
Personalization variables available:
- First name
- Company name
- Software they use (Cliniko, etc.)
- Role/specialty
- Location
Keep copy relatively generic when data is limited, but use specific callouts when you have them (e.g., "I noticed you use Cliniko...")
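That conditional logic can be sketched as a simple template picker: use the specific callout only when the data point actually exists (the copy strings here are illustrative):

```python
# Sketch: choose the opening line based on which personalization
# variables survived enrichment. Copy text is an example only.
def opening_line(lead: dict) -> str:
    if lead.get("software"):
        return (f"Hi {lead['first_name']}, I noticed {lead['company']} "
                f"uses {lead['software']}...")
    return f"Hi {lead['first_name']}, I came across {lead['company']}..."

print(opening_line({"first_name": "Rita", "company": "Example Clinic",
                    "software": "Cliniko"}))
print(opening_line({"first_name": "Sam", "company": "Example Clinic"}))
```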
Expected results
Based on ~1,000-1,500 outreach attempts:
LinkedIn (HeyReach):
- 32% connection acceptance
- 24% reply rate
- 31 positive responses
Email (Instantly):
- 35 positive replies
- High personal email usage indicates data quality

