Your sales team needs to find all SaaS companies in San Francisco with 50 to 200 employees that are actively hiring engineers.
Manually, you'd spend hours clicking through LinkedIn, filtering results, copying company names into a spreadsheet, then researching each one individually.
With a company search API: 30 seconds. 200+ matching companies. Complete data on each including employee count, headquarters, industry, follower count, and open positions.
But LinkedIn's official API doesn't let you search for companies this way. You can manage your company page. You can post updates. But searching for "SaaS companies in SF with 50 to 200 employees"? LinkedIn says no.
Except thousands of sales teams, investors, and market researchers do it every day using unofficial APIs.
This guide shows you exactly how to search LinkedIn for companies programmatically. We'll cover real use cases with working code, advanced filtering techniques, and how to build automated company research systems.
Why Company Search Matters for B2B
Let's talk about what manual company research actually costs your team.
Scenario: You're building a target account list for Q2. You need 500 companies matching your ICP.
Your criteria:
- Industry: SaaS/Software
- Location: United States
- Company size: 51 to 500 employees
- Currently hiring (growth signal)
- Has active LinkedIn presence
The manual process:
1. LinkedIn search with filters
2. Scroll through results (slow loading, limited filters)
3. Click each company to verify details
4. Copy name, size, industry to spreadsheet
5. Check their jobs page for hiring signals
6. Research decision makers separately
7. Remove duplicates
8. QA the list

Time per qualified company: 10 to 15 minutes
Total time for 500 companies: 83 to 125 hours
Cost at $50/hour: $4,150 to $6,250

And you still have incomplete data, manual errors, and information that's already outdated.
The API approach:
```python
from linkdapi import LinkdAPI

client = LinkdAPI("your_api_key")

# Search with all criteria at once
results = client.search_companies(
    keyword="SaaS",
    geoUrn="103644278",  # United States
    companySize="51-200,201-500",
    hasJobs=True,
    count=50
)

# Get 500+ qualified companies in seconds
for company in results['data']['companies']:
    print(f"{company['name']} - {company['industry']} - {company['employeeCount']} employees")

# Time: 2 minutes
# Cost: ~$1.50 (500 companies)
# Quality: Fresh, accurate, structured
```

Savings: 83 to 125 hours reduced to 2 minutes. $4,150 to $6,250 reduced to $1.50.
Official API vs Unofficial API
LinkedIn's official Marketing Developer Platform has strict limitations on company data:
What you CANNOT do with LinkedIn's official API:
- Search for companies by criteria
- Get company data for companies you don't manage
- Filter companies by size, location, or industry
- Build prospect lists or target account lists
- Research competitors programmatically
What you CAN do (limited use cases):
- Manage your own company page
- Post updates to your company
- Get analytics for pages you admin
- Manage ad campaigns
For 99% of B2B use cases, like sales intelligence, market research, competitive analysis, and account-based marketing, you need an unofficial API.
LinkdAPI Company Search Methods
LinkdAPI provides multiple endpoints for finding and enriching company data. Here's how each works:
1. Company Search (Primary Discovery)
The /api/v1/search/companies endpoint lets you search with multiple filters:
```python
from linkdapi import LinkdAPI

client = LinkdAPI("your_api_key")

# Search for fintech companies in New York
results = client.search_companies(
    keyword="fintech",
    geoUrn="102221843",  # New York City
    companySize="51-200",
    hasJobs=True,
    count=25
)

print(f"Found {len(results['data']['companies'])} companies")

for company in results['data']['companies']:
    print(f"- {company['name']}: {company['headline']}")
```

Available Parameters:
| Parameter | Type | Description |
|---|---|---|
| keyword | string | Company name or keyword (required) |
| start | int | Pagination offset (default: 0) |
| count | int | Results per page (max: 50) |
| geoUrn | string | Geographic location filter |
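The `start` and `count` parameters give you offset pagination. As a sketch, you can precompute the offsets to walk with a small helper (pure Python; the commented-out API loop below it is a hypothetical usage, not verified against the service):

```python
def page_offsets(total, per_page=50):
    """Offsets to request when paging through `total` results, max 50 per page."""
    per_page = min(per_page, 50)  # the API caps count at 50
    return list(range(0, total, per_page))

print(page_offsets(120))  # [0, 50, 100]

# Hypothetical usage (sketch):
# companies = []
# for start in page_offsets(500):
#     page = client.search_companies(keyword="SaaS", start=start, count=50)
#     companies.extend(page['data']['companies'])
```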
Company Size Options:
- 1-10 (Startups)
- 11-50 (Small)
- 51-200 (Growing)
- 201-500 (Mid-size)
- 501-1000 (Large)
- 1001-5000 (Enterprise)
- 5001-10000 (Very large)
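If your source data has raw headcounts, you can map them onto these filter tokens before searching. A minimal sketch; the bucket strings mirror the list above, and the `10001+` fallback for anything larger is an assumption based on LinkedIn's standard size bands:

```python
# Upper bound of each bucket paired with its filter token.
SIZE_BUCKETS = [
    (10, "1-10"), (50, "11-50"), (200, "51-200"), (500, "201-500"),
    (1000, "501-1000"), (5000, "1001-5000"), (10000, "5001-10000"),
]

def size_bucket(employee_count):
    """Map a raw employee count to a companySize filter token."""
    for upper, token in SIZE_BUCKETS:
        if employee_count <= upper:
            return token
    return "10001+"  # assumed token for 10,001 and up

print(size_bucket(120))    # 51-200
print(size_bucket(42000))  # 10001+
```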
2. Company Name Lookup (Autocomplete)
The /api/v1/companies/name-lookup endpoint works like autocomplete. Great when you have partial names:
```python
# Search by partial name
results = client.company_name_lookup(query="strip")

# Returns: Stripe, Stripechat, Strip Mining Co, etc.
for company in results['data']:
    print(f"{company['name']} (ID: {company['id']})")
```

3. Get Company ID (Lightweight Lookup)
When you need just the numeric ID from a company's universal name (URL slug), use the lightweight /api/v1/companies/company/universal-name-to-id endpoint:
```python
import requests

response = requests.get(
    'https://linkdapi.com/api/v1/companies/company/universal-name-to-id',
    headers={'X-linkdapi-apikey': 'YOUR_API_KEY'},
    params={'universalName': 'google'}
)

result = response.json()
print(result)
```

Response:
```json
{
  "success": true,
  "statusCode": 200,
  "message": "Data retrieved successfully",
  "errors": null,
  "data": {
    "id": "1441",
    "universalName": "google"
  }
}
```

This is the most efficient way to get a company ID when you only have the LinkedIn URL or universal name.
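If you're starting from a full LinkedIn URL rather than the slug itself, the universal name is just the path segment after `/company/`. A small parsing helper (pure Python, no API call):

```python
from urllib.parse import urlparse

def universal_name_from_url(url):
    """Extract the universal name (URL slug) from a LinkedIn company URL."""
    parts = urlparse(url).path.strip('/').split('/')
    # Expect a path like /company/<universal-name>[/...]
    if len(parts) >= 2 and parts[0] == 'company':
        return parts[1]
    return None

print(universal_name_from_url('https://www.linkedin.com/company/google/'))          # google
print(universal_name_from_url('https://linkedin.com/company/notion-so/about/'))     # notion-so
```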
4. Company Details (Full Enrichment)
Once you have a company ID, get complete details with /api/v1/companies/company/info:
```python
# Get full company details
company = client.get_company_info(id="1441")  # Google

data = company['data']
print(f"Name: {data['name']}")
print(f"Industry: {data['industry']}")
print(f"Employee Count: {data['staffCount']}")
print(f"Headquarters: {data['headquarter']['city']}, {data['headquarter']['countryCode']}")
print(f"Website: {data['website']}")
print(f"Follower Count: {data['followerCount']}")
```

Fields returned:
id, name, universalName, description, tagline, website, linkedinURL, industry, staffCount, followerCount
5. Extended Company Info (V2)
For additional data like "People Also Follow" and affiliated companies, use /api/v1/companies/company/info-v2:
```python
# Get extended company info
extended = client.get_company_info_v2(id="1441")

data = extended['data']
print(f"People Also Follow: {data.get('peopleAlsoFollow', [])}")
print(f"Affiliated By Jobs: {data.get('affiliatedByJobs', [])}")
```

Use Case 1: Competitor Research
Scenario: You want to monitor all competitors in your space and track their growth signals.
```python
from linkdapi import LinkdAPI
import json

client = LinkdAPI("your_api_key")

def research_competitors(industry_keyword, location, size_range):
    """
    Find and analyze competitors in your space
    """
    # Search for companies
    results = client.search_companies(
        keyword=industry_keyword,
        geoUrn=location,
        companySize=size_range,
        count=50
    )

    competitors = []

    for company in results['data']['companies']:
        # Get detailed info for each
        details = client.get_company_info(id=company['id'])

        if details['success']:
            data = details['data']
            competitors.append({
                'name': data['name'],
                'id': data['id'],
                'website': data.get('website'),
                'employee_count': data.get('staffCount'),
                'follower_count': data.get('followerCount'),
                'industry': data.get('industry'),
                'headquarters': data.get('headquarter', {}).get('city', 'N/A'),
                'founded': data.get('founded'),
                'specialties': data.get('specialties', [])
            })

    return competitors

# Find competitors
competitors = research_competitors(
    industry_keyword="marketing automation",
    location="103644278",  # United States
    size_range="51-200,201-500"
)

# Analyze
print(f"Found {len(competitors)} competitors\n")

# Sort by follower count (brand strength indicator)
competitors.sort(key=lambda x: x['follower_count'] or 0, reverse=True)

print("Top 10 by LinkedIn followers:")
for i, comp in enumerate(competitors[:10], 1):
    print(f"{i}. {comp['name']}: {comp['follower_count']:,} followers, {comp['employee_count']} employees")

# Export for analysis
with open('competitor_analysis.json', 'w') as f:
    json.dump(competitors, f, indent=2)
```

Track Competitor Hiring (Growth Signals)
```python
def get_competitor_jobs(company_id):
    """
    Get job listings as growth signal
    """
    jobs = client.get_company_jobs(companyIDs=company_id, start=0)

    if jobs['success']:
        return {
            'total_jobs': jobs['data'].get('totalCount', 0),
            'jobs': jobs['data'].get('jobs', [])
        }
    return {'total_jobs': 0, 'jobs': []}

# Check each competitor's hiring
for comp in competitors[:10]:
    job_data = get_competitor_jobs(comp['id'])
    comp['open_jobs'] = job_data['total_jobs']

    # Categorize job types
    engineering_jobs = sum(1 for j in job_data['jobs']
                           if 'engineer' in j.get('title', '').lower())
    sales_jobs = sum(1 for j in job_data['jobs']
                     if 'sales' in j.get('title', '').lower())

    comp['engineering_jobs'] = engineering_jobs
    comp['sales_jobs'] = sales_jobs

# Who's scaling engineering?
scaling_eng = sorted(competitors[:10], key=lambda x: x.get('engineering_jobs', 0), reverse=True)
print("\nCompetitors scaling engineering:")
for comp in scaling_eng[:5]:
    print(f"  {comp['name']}: {comp.get('engineering_jobs', 0)} engineering roles")

# Who's scaling sales?
scaling_sales = sorted(competitors[:10], key=lambda x: x.get('sales_jobs', 0), reverse=True)
print("\nCompetitors scaling sales:")
for comp in scaling_sales[:5]:
    print(f"  {comp['name']}: {comp.get('sales_jobs', 0)} sales roles")
```

Use Case 2: Account Based Marketing (ABM)
Scenario: You need to build a target account list for ABM campaigns with specific criteria.
```python
from linkdapi import LinkdAPI
import csv

client = LinkdAPI("your_api_key")

# Define your ICP
ICP = {
    'industries': ['SaaS', 'fintech', 'healthtech'],
    'size_ranges': ['51-200', '201-500'],
    'locations': [
        ('103644278', 'United States'),
        ('101165590', 'United Kingdom'),
        ('101174742', 'Canada')
    ],
    'must_be_hiring': True
}

def build_target_account_list(icp):
    """
    Build ABM target account list matching ICP
    """
    all_companies = []

    for industry in icp['industries']:
        for geo_urn, geo_name in icp['locations']:
            print(f"Searching: {industry} in {geo_name}")

            results = client.search_companies(
                keyword=industry,
                geoUrn=geo_urn,
                companySize=','.join(icp['size_ranges']),
                hasJobs=icp['must_be_hiring'],
                count=50
            )

            if results['success']:
                for company in results['data'].get('companies', []):
                    company['search_industry'] = industry
                    company['search_location'] = geo_name
                    all_companies.append(company)

    # Remove duplicates
    seen = set()
    unique_companies = []
    for company in all_companies:
        if company['id'] not in seen:
            seen.add(company['id'])
            unique_companies.append(company)

    return unique_companies

# Build the list
target_accounts = build_target_account_list(ICP)
print(f"\nFound {len(target_accounts)} unique target accounts")

# Score accounts
def score_account(company):
    """
    Score account fit
    """
    score = 0

    # Size scoring
    employee_count = company.get('employeeCount', 0)
    if 51 <= employee_count <= 200:
        score += 10  # Sweet spot
    elif 201 <= employee_count <= 500:
        score += 8

    # Growth signals
    if company.get('hasJobs'):
        score += 5

    # Brand strength (followers)
    followers = company.get('followerCount', 0)
    if followers > 10000:
        score += 5
    elif followers > 5000:
        score += 3

    return score

# Score all accounts
for account in target_accounts:
    account['fit_score'] = score_account(account)

# Sort by score
target_accounts.sort(key=lambda x: x['fit_score'], reverse=True)

# Export to CSV for ABM platform
with open('abm_target_accounts.csv', 'w', newline='') as f:
    fieldnames = ['Company', 'Industry', 'Location', 'Employees', 'Followers', 'LinkedIn URL', 'Fit Score']
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()

    for account in target_accounts[:500]:  # Top 500
        writer.writerow({
            'Company': account['name'],
            'Industry': account.get('industry', 'N/A'),
            'Location': account.get('search_location', 'N/A'),
            'Employees': account.get('employeeCount', 'N/A'),
            'Followers': account.get('followerCount', 0),
            'LinkedIn URL': f"https://linkedin.com/company/{account.get('universalName', '')}",
            'Fit Score': account['fit_score']
        })

print("Exported top 500 accounts to abm_target_accounts.csv")
```

Use Case 3: Market Research and Sizing
Scenario: You're doing market research for investors or strategic planning. You need to understand market size and player landscape.
```python
from linkdapi import LinkdAPI
from collections import defaultdict

client = LinkdAPI("your_api_key")

def analyze_market(keyword, locations):
    """
    Analyze market size and segmentation
    """
    market_data = {
        'total_companies': 0,
        'by_size': defaultdict(int),
        'by_location': defaultdict(int),
        'with_jobs': 0,
        'companies': []
    }

    size_ranges = ['1-10', '11-50', '51-200', '201-500', '501-1000', '1001-5000', '5001-10000', '10001+']

    for geo_urn, geo_name in locations:
        for size in size_ranges:
            results = client.search_companies(
                keyword=keyword,
                geoUrn=geo_urn,
                companySize=size,
                count=50
            )

            if results['success']:
                items = results['data'].get('companies', [])
                count = len(items)

                market_data['total_companies'] += count
                market_data['by_size'][size] += count
                market_data['by_location'][geo_name] += count

                # Track companies with jobs
                for company in items:
                    if company.get('hasJobs'):
                        market_data['with_jobs'] += 1
                    market_data['companies'].append(company)

    return market_data

# Research the AI/ML market
locations = [
    ('103644278', 'United States'),
    ('101165590', 'United Kingdom'),
    ('101174742', 'Canada'),
    ('102713980', 'India')
]

market = analyze_market('artificial intelligence', locations)

print("=== AI/ML MARKET ANALYSIS ===\n")
print(f"Total Companies Found: {market['total_companies']}")
print(f"Companies Currently Hiring: {market['with_jobs']}")

print("\nBy Company Size:")
for size, count in sorted(market['by_size'].items()):
    pct = (count / market['total_companies'] * 100) if market['total_companies'] > 0 else 0
    print(f"  {size}: {count} companies ({pct:.1f}%)")

print("\nBy Location:")
for location, count in sorted(market['by_location'].items(), key=lambda x: x[1], reverse=True):
    pct = (count / market['total_companies'] * 100) if market['total_companies'] > 0 else 0
    print(f"  {location}: {count} companies ({pct:.1f}%)")

# Identify market leaders (by follower count)
leaders = sorted(market['companies'], key=lambda x: x.get('followerCount', 0), reverse=True)[:20]

print("\nTop 20 Market Leaders (by LinkedIn followers):")
for i, company in enumerate(leaders, 1):
    print(f"  {i}. {company['name']}: {company.get('followerCount', 0):,} followers")
```

Use Case 4: Finding Similar Companies
Scenario: You closed a deal with Company X. Now you want to find 50 companies just like them.
```python
from linkdapi import LinkdAPI

client = LinkdAPI("your_api_key")

def find_lookalike_companies(seed_company_name):
    """
    Find companies similar to a successful customer
    """
    # First, get the company ID
    id_result = client.get_company_universal_name_to_id(universalName=seed_company_name)

    if not id_result['success']:
        print(f"Company not found: {seed_company_name}")
        return []

    company_id = id_result['data']['id']

    # Get the seed company details for context
    seed_details = client.get_company_info(id=company_id)

    if seed_details['success']:
        seed = seed_details['data']
        print(f"Seed company: {seed['name']}")
        print(f"Industry: {seed.get('industry')}")
        print(f"Size: {seed.get('staffCount')} employees")
        print(f"Location: {seed.get('headquarter', {}).get('city')}\n")

    # Get similar companies
    similar = client.get_similar_companies(id=company_id)

    if similar['success']:
        return similar['data'].get('companies', [])

    return []

# Find companies similar to Notion
lookalikes = find_lookalike_companies('notion-so')

print(f"Found {len(lookalikes)} similar companies:\n")

for company in lookalikes[:20]:
    print(f"- {company['name']}")
    print(f"  Industry: {company.get('industry', 'N/A')}")
    print(f"  Employees: {company.get('staffCount', 'N/A')}")
    print()
```

Building a Lookalike List from Multiple Seeds
```python
from collections import defaultdict

def build_lookalike_list(seed_companies):
    """
    Build lookalike list from multiple successful customers
    """
    all_lookalikes = []

    for seed in seed_companies:
        print(f"Finding companies similar to {seed}...")
        similar = find_lookalike_companies(seed)
        all_lookalikes.extend(similar)

    # Remove duplicates and seed companies
    seen = set(seed_companies)
    unique_lookalikes = []

    for company in all_lookalikes:
        universal_name = company.get('universalName', '')
        if universal_name not in seen:
            seen.add(universal_name)
            unique_lookalikes.append(company)

    # Score by how many times they appeared (more appearances = closer match)
    appearance_count = defaultdict(int)
    for company in all_lookalikes:
        appearance_count[company.get('universalName')] += 1

    for company in unique_lookalikes:
        company['match_score'] = appearance_count.get(company.get('universalName'), 1)

    # Sort by match score
    unique_lookalikes.sort(key=lambda x: x['match_score'], reverse=True)

    return unique_lookalikes

# Your best customers
best_customers = ['notion-so', 'figma', 'linear', 'vercel', 'stripe']

lookalikes = build_lookalike_list(best_customers)

print("\n=== TOP 30 LOOKALIKE COMPANIES ===\n")
for i, company in enumerate(lookalikes[:30], 1):
    print(f"{i}. {company['name']} (Match score: {company['match_score']})")
```

Use Case 5: Enriching Existing Company Lists
Scenario: You have a CSV of company names. You need to enrich each with LinkedIn data.
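The example below hardcodes the names for clarity; in practice you'd load them from your CSV first. A minimal sketch (it assumes a `company` column header — adjust to your file):

```python
import csv
import io

def load_company_names(csv_text, column="company"):
    """Read non-empty company names from CSV text with a header row."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row[column].strip() for row in reader if row.get(column)]

sample = "company,notes\nStripe,payments\nNotion,docs\n"
print(load_company_names(sample))  # ['Stripe', 'Notion']
```

For a real file, pass `open('companies.csv').read()` or switch `io.StringIO` for a file handle.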
```python
from linkdapi import AsyncLinkdAPI
import asyncio
import csv

async def enrich_company_list(company_names):
    """
    Enrich a list of company names with LinkedIn data
    """
    enriched = []

    async with AsyncLinkdAPI("your_api_key") as client:
        for name in company_names:
            # Try to find the company
            lookup = await client.company_name_lookup(query=name)

            if lookup['success'] and lookup['data']:
                # Get the best match (first result)
                match = lookup['data'][0]
                company_id = match['id']

                # Get full details
                details = await client.get_company_info(id=company_id)

                if details['success']:
                    data = details['data']
                    enriched.append({
                        'original_name': name,
                        'matched_name': data['name'],
                        'linkedin_id': company_id,
                        'linkedin_url': f"https://linkedin.com/company/{data.get('universalName', '')}",
                        'website': data.get('website'),
                        'industry': data.get('industry'),
                        'employee_count': data.get('staffCount'),
                        'follower_count': data.get('followerCount'),
                        'headquarters_city': data.get('headquarter', {}).get('city'),
                        'headquarters_country': data.get('headquarter', {}).get('countryCode'),
                        'founded': data.get('founded'),
                        'description': data.get('description', '')[:200]
                    })
                    print(f"✓ Enriched: {name} → {data['name']}")
                else:
                    enriched.append({'original_name': name, 'matched_name': 'NOT_FOUND'})
                    print(f"✗ No details: {name}")
            else:
                enriched.append({'original_name': name, 'matched_name': 'NOT_FOUND'})
                print(f"✗ Not found: {name}")

    return enriched

# Read your company list
company_names = [
    "Stripe",
    "Notion",
    "Figma",
    "Linear",
    "Vercel",
    "Supabase",
    "Railway",
    "Resend"
]

# Enrich
enriched_companies = asyncio.run(enrich_company_list(company_names))

# Export enriched data
with open('enriched_companies.csv', 'w', newline='') as f:
    fieldnames = ['original_name', 'matched_name', 'linkedin_id', 'linkedin_url',
                  'website', 'industry', 'employee_count', 'follower_count',
                  'headquarters_city', 'headquarters_country', 'founded', 'description']
    writer = csv.DictWriter(f, fieldnames=fieldnames)
    writer.writeheader()
    writer.writerows(enriched_companies)

print(f"\nExported {len(enriched_companies)} companies to enriched_companies.csv")
```
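The enrichment flow in Use Case 5 takes the first lookup result as the best match. When your source names are messy, a fuzzier pick helps. A sketch using stdlib `difflib` to score candidates against the query (the candidate dicts mirror the name-lookup response shape; the 0.6 cutoff is an arbitrary choice to tune):

```python
from difflib import SequenceMatcher

def best_match(query, candidates, cutoff=0.6):
    """Pick the candidate whose name is most similar to the query, or None."""
    def score(candidate):
        return SequenceMatcher(None, query.lower(), candidate['name'].lower()).ratio()

    if not candidates:
        return None
    top = max(candidates, key=score)
    return top if score(top) >= cutoff else None

candidates = [
    {'name': 'Stripe', 'id': '1'},
    {'name': 'Stripechat', 'id': '2'},
    {'name': 'Strip Mining Co', 'id': '3'},
]
print(best_match('Stripe Inc', candidates)['name'])  # Stripe
print(best_match('zzzz', candidates))                # None (below cutoff)
```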
Use Case 6: Employee Distribution Analysis
Scenario: You want to understand a company's team structure before selling to them.
```python
from linkdapi import LinkdAPI

client = LinkdAPI("your_api_key")

def analyze_company_structure(company_name):
    """
    Analyze a company's employee distribution
    """
    # Get company ID
    id_result = client.get_company_universal_name_to_id(universalName=company_name)

    if not id_result['success']:
        print(f"Company not found: {company_name}")
        return None

    company_id = id_result['data']['id']

    # Get employee data
    emp_data = client.get_company_employees_data(id=company_id)

    if not emp_data['success']:
        print("Could not retrieve employee data")
        return None

    data = emp_data['data']

    print(f"=== {company_name.upper()} EMPLOYEE ANALYSIS ===\n")
    print(f"Total Employees: {data.get('totalEmployees', 'N/A')}\n")

    # By function
    if 'function' in data.get('distribution', {}):
        print("By Function:")
        functions = sorted(data['distribution']['function'].items(),
                           key=lambda x: x[1], reverse=True)
        for func, count in functions[:10]:
            print(f"  {func}: {count}")

    # By seniority
    if 'seniority' in data.get('distribution', {}):
        print("\nBy Seniority:")
        seniority = sorted(data['distribution']['seniority'].items(),
                           key=lambda x: x[1], reverse=True)
        for level, count in seniority:
            print(f"  {level}: {count}")

    # By location
    if 'geography' in data.get('distribution', {}):
        print("\nBy Location:")
        locations = sorted(data['distribution']['geography'].items(),
                           key=lambda x: x[1], reverse=True)
        for loc, count in locations[:10]:
            print(f"  {loc}: {count}")

    return data

# Analyze a target account
analyze_company_structure('stripe')
```

SDK Examples
Python SDK
Installation:
```shell
pip install linkdapi
```


