You know that feeling when you're building a product and you need to enrich user data, but every API you try either costs a fortune, returns stale data from 6 months ago, or requires you to deal with proxies, CAPTCHAs, and rate limits?
I've been there. Multiple times. And it sucks.
Here's what happened last time: I was building a CRM enrichment pipeline for a sales tool. Used one of those "enterprise" data providers everyone talks about. $2,000/month for 10,000 credits. Day one, I enriched 50 profiles. 23 had job titles from companies people left 4 months ago. One person was listed as "VP of Sales at Google" but had moved to a startup in February.
That's when I realized: most B2B data APIs are just fancy databases that get refreshed quarterly. They're not actually enriching your data with fresh information—they're matching it against old snapshots.
So I spent two weeks researching every B2B enrichment API I could find. Tested 11 different providers. Wrote comparison scripts. Checked data freshness. Compared pricing. And I found something that actually works.
Let me show you what makes a truly good B2B enrichment API, and how to use it to build production-grade tools.
What Actually Matters in a B2B Enrichment API
Before we dive into code, let's talk about what you should look for. Because most comparison articles just list features without telling you what matters in production.
1. Data Freshness (This Is Everything)
Static databases are dead. If your API returns data that's 3-6 months old, you're enriching with lies.
Here's a stat that should worry you: roughly 40% of B2B contacts change jobs every year. At that rate, if your database was last updated in Q2, around 20% of it is already wrong by Q4.
You need real-time data. Not "updated monthly." Not "refreshed quarterly." Real-time.
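That churn arithmetic is easy to sanity-check. Here's a quick back-of-the-envelope sketch, assuming job changes compound at a constant 40% annual rate (a simplification, but close enough to show how fast a snapshot decays):

```python
def stale_fraction(months: float, annual_churn: float = 0.40) -> float:
    """Expected fraction of records gone stale after `months`,
    assuming churn compounds at a constant annual rate."""
    return 1 - (1 - annual_churn) ** (months / 12)

for age in (3, 6, 12):
    print(f"Data {age} months old: ~{stale_fraction(age):.0%} already stale")
```

A quarterly refresh means your freshest batch is already ~12% wrong on day one of the next quarter.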
2. Developer Experience
Your API should have:
- Sync AND async support (not just sync)
- SDKs for Python, Node.js, Go (at minimum)
- Accurate response schemas (no "object" types everywhere)
- Batch processing built-in (enriching 1 profile at a time is insane)
- Proper error handling (with retry logic)
3. Pricing That Doesn't Make You Cry
Most "enterprise" APIs charge:
- $0.15 - $0.50 per enrichment
- Minimum commitments ($2K-$5K/month)
- Extra fees for "premium" fields like email
At that rate, enriching 50,000 profiles costs $7,500 - $25,000.
You need transparent pricing with no minimums and no hidden fees.
4. API Reliability
Your enrichment API should not:
- Return 503 errors during peak hours
- Throttle you unpredictably
- Require you to manage proxies
- Block your requests randomly
It should just work.
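Even with a reliable API, the occasional 429 or 503 happens; your client should absorb those instead of surfacing them. Here's a minimal, library-agnostic retry sketch with exponential backoff and jitter (the helper names are mine, not part of any SDK):

```python
import random
import time

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Exponential backoff with full jitter: uniform in [0, min(cap, base * 2^attempt))."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def call_with_retries(make_request, max_attempts: int = 4, retryable=(429, 502, 503)):
    """Invoke make_request() until it returns a status outside `retryable`,
    or attempts run out. `make_request` is any zero-arg callable returning an
    object with a .status_code attribute (e.g. a requests.Response).
    """
    for attempt in range(max_attempts):
        resp = make_request()
        if resp.status_code not in retryable:
            return resp
        if attempt < max_attempts - 1:
            # Jitter keeps many concurrent clients from retrying in lockstep
            time.sleep(backoff_delay(attempt))
    return resp  # still failing after max_attempts; let the caller decide
```

The jitter matters: without it, every client that got throttled at the same moment retries at the same moment, and you throttle yourself again.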
The API That Actually Delivers
After testing 11 different providers, one API stood out: LinkdAPI.
Not because it's perfect (no API is), but because it solves the problems that actually matter:
✅ Real-time data (0-24 hours fresh, not quarterly updates)
✅ Multi-language SDKs (Python sync/async, Node.js, Go)
✅ Transparent pricing (100 free credits, no credit card)
✅ Direct data access (not scraped from search engines)
✅ Production-ready (built-in retries, error handling, batch support)
Let's look at real code.
Getting Started (Python)
Installation
# Install the SDK
pip install linkdapi

Basic Profile Enrichment (Sync)
from linkdapi import LinkdAPI

# Initialize client
api = LinkdAPI("your_api_key")  # Get free key at linkdapi.com/signup

# Enrich a single profile
profile = api.get_profile_overview("ryanroslansky")

if profile['success']:
    data = profile['data']

    # Access rich profile data
    print(f"Name: {data['fullName']}")
    print(f"Headline: {data['headline']}")
    print(f"Location: {data['location']['fullLocation']}")
    print(f"Followers: {data['followerCount']}")
    print(f"Connections: {data['connectionsCount']}")

    # Current company info
    if data.get('CurrentPositions'):
        company = data['CurrentPositions'][0]
        print(f"Company: {company['name']}")
        print(f"Logo: {company['logoURL']}")

Response structure:
{
  "success": true,
  "statusCode": 200,
  "message": "Data retrieved successfully",
  "data": {
    "firstName": "Ryan",
    "lastName": "Roslansky",
    "fullName": "Ryan Roslansky",
    "headline": "CEO at LinkedIn",
    "followerCount": 887877,
    "connectionsCount": 8624,
    "premium": true,
    "creator": true,
    "location": {
      "countryCode": "US",
      "city": "San Francisco Bay Area",
      "fullLocation": "San Francisco Bay Area"
    },
    "CurrentPositions": [
      {
        "name": "LinkedIn",
        "url": "https://www.linkedin.com/company/linkedin/",
        "logoURL": "https://media.licdn.com/dms/image/..."
      }
    ],
    "profilePictureURL": "https://media.licdn.com/...",
    "urn": "ACoAAAAKXBwBikfbNJww68eYvcu2dqDYJhHbp4g"
  }
}

Notice the real data structure. No guessing at field names. No "object" types everywhere.
Async Processing (Because You're Not Enriching One Profile at a Time)
Here's the thing: if you're enriching profiles one by one, you're doing it wrong. Async/await lets you process hundreds concurrently.
Async Enrichment (Python)
import asyncio
from linkdapi import AsyncLinkdAPI

async def enrich_leads(usernames):
    """Enrich 100 profiles in ~4 seconds instead of ~200 seconds."""
    async with AsyncLinkdAPI("your_api_key") as api:
        # Fetch all profiles concurrently
        tasks = [api.get_profile_overview(username) for username in usernames]
        profiles = await asyncio.gather(*tasks, return_exceptions=True)

        enriched = []
        for i, profile in enumerate(profiles):
            if isinstance(profile, dict) and profile.get('success'):
                data = profile['data']
                enriched.append({
                    'username': usernames[i],
                    'name': data['fullName'],
                    'title': data['headline'],
                    'location': data['location']['fullLocation'],
                    'company': data['CurrentPositions'][0]['name'] if data.get('CurrentPositions') else None,
                    'followers': data['followerCount']
                })

        return enriched

# Process 100 leads in parallel
leads = ['ryanroslansky', 'satyanadella', 'jeffweiner08']  # ... up to 100 usernames
data = asyncio.run(enrich_leads(leads))

print(f"Enriched {len(data)} profiles")

Performance:
- Sync (sequential): 100 profiles × 2 seconds = 200 seconds
- Async (concurrent): 100 profiles ÷ 50 concurrent = ~4 seconds
50x faster.
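One caveat: launching hundreds of coroutines through a bare asyncio.gather can trip server-side rate limits. Capping in-flight requests with a semaphore keeps throughput high without bursting. Here's a small generic sketch (gather_bounded is my own helper, not part of any SDK):

```python
import asyncio

async def gather_bounded(coro_factories, limit: int = 50):
    """Run zero-arg coroutine factories with at most `limit` in flight at once."""
    sem = asyncio.Semaphore(limit)

    async def run_one(factory):
        async with sem:  # blocks until a slot frees up
            return await factory()

    return await asyncio.gather(
        *(run_one(f) for f in coro_factories), return_exceptions=True
    )

# Usage sketch: wrap each call in a factory so nothing starts until the
# semaphore admits it, e.g.
#   factories = [lambda u=u: api.get_profile_overview(u) for u in usernames]
#   results = asyncio.run(gather_bounded(factories, limit=50))
```

The factories (rather than ready-made coroutines) matter: a coroutine created up front is already scheduled to run; a factory defers the request until a semaphore slot opens.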
Node.js / TypeScript
For JavaScript developers, the Node SDK is equally powerful.
Installation
npm install linkdapi
# or
yarn add linkdapi

Basic Usage (ESM)
import { LinkdAPI } from 'linkdapi';

const api = new LinkdAPI({ apiKey: 'your_api_key' });

// Single profile enrichment
const profile = await api.getProfileOverview('ryanroslansky');

if (profile.success) {
  const { fullName, headline, location, followerCount } = profile.data;
  console.log(`${fullName} - ${headline}`);
  console.log(`Location: ${location.fullLocation}`);
  console.log(`Followers: ${followerCount}`);
}

Concurrent Enrichment (Node.js)
import { LinkdAPI } from 'linkdapi';

async function enrichLeads(usernames) {
  const api = new LinkdAPI({ apiKey: 'your_api_key' });

  // Fetch all profiles concurrently with Promise.all
  const profiles = await Promise.all(
    usernames.map(username =>
      api.getProfileOverview(username).catch(() => null)
    )
  );

  return profiles
    .filter(p => p && p.success)
    .map(p => ({
      name: p.data.fullName,
      title: p.data.headline,
      company: p.data.CurrentPositions?.[0]?.name,
      location: p.data.location.fullLocation
    }));
}

// Process 100 leads concurrently
const leads = ['ryanroslansky', 'satyanadella', /* ... */];
const enriched = await enrichLeads(leads);

console.log(`Enriched ${enriched.length} profiles`);

Go (For High-Performance Systems)
If you're building high-throughput systems, Go is perfect.
Installation
go get github.com/linkdapi/linkdapi-go-sdk

Basic Usage
package main

import (
    "fmt"
    "log"

    "github.com/linkdapi/linkdapi-go-sdk/linkdapi"
)

func main() {
    // Initialize client
    client := linkdapi.NewClient("your_api_key")
    defer client.Close()

    // Enrich profile
    profile, err := client.GetProfileOverview("ryanroslansky")
    if err != nil {
        log.Fatal(err)
    }

    if success, ok := profile["success"].(bool); ok && success {
        if data, ok := profile["data"].(map[string]interface{}); ok {
            fmt.Printf("Name: %v\n", data["fullName"])
            fmt.Printf("Headline: %v\n", data["headline"])
            // Checked assertion avoids a panic if location is missing
            if loc, ok := data["location"].(map[string]interface{}); ok {
                fmt.Printf("Location: %v\n", loc["fullLocation"])
            }
        }
    }
}

Concurrent Processing (Go)
package main

import (
    "fmt"
    "sync"

    "github.com/linkdapi/linkdapi-go-sdk/linkdapi"
)

func enrichLeads(usernames []string) []map[string]interface{} {
    client := linkdapi.NewClient("your_api_key")
    defer client.Close()

    var wg sync.WaitGroup
    results := make(chan map[string]interface{}, len(usernames))

    // Launch goroutines for concurrent enrichment
    for _, username := range usernames {
        wg.Add(1)
        go func(u string) {
            defer wg.Done()

            profile, err := client.GetProfileOverview(u)
            if err == nil {
                results <- profile
            }
        }(username)
    }

    // Wait for all goroutines, then close the channel
    go func() {
        wg.Wait()
        close(results)
    }()

    // Collect results
    var enriched []map[string]interface{}
    for profile := range results {
        if success, ok := profile["success"].(bool); ok && success {
            // Checked assertion avoids a panic on an unexpected payload
            if data, ok := profile["data"].(map[string]interface{}); ok {
                enriched = append(enriched, data)
            }
        }
    }

    return enriched
}

func main() {
    leads := []string{"ryanroslansky", "satyanadella", "jeffweiner08"}
    data := enrichLeads(leads)

    fmt.Printf("Enriched %d profiles\n", len(data))
}

Real-World Use Cases
Let's look at actual production scenarios.
Use Case 1: CRM Auto-Enrichment
Scenario: You have 5,000 contacts in your CRM. You want to enrich them with:
- Current job title
- Current company
- Profile picture
- Social metrics (followers, connections)
Solution:
import asyncio
from datetime import datetime

from linkdapi import AsyncLinkdAPI

async def enrich_crm_contacts(contacts):
    """
    Enrich 5,000 CRM contacts with fresh data.

    Args:
        contacts: List of dicts with 'id' and 'username' keys

    Returns:
        Enriched contact data
    """
    async with AsyncLinkdAPI("your_api_key") as api:
        # Process in batches of 100 to avoid overwhelming the API
        batch_size = 100
        all_enriched = []

        for i in range(0, len(contacts), batch_size):
            batch = contacts[i:i + batch_size]

            # Fetch batch concurrently
            tasks = [
                api.get_profile_overview(contact['username'])
                for contact in batch
            ]

            results = await asyncio.gather(*tasks, return_exceptions=True)

            # Process results
            for j, result in enumerate(results):
                if isinstance(result, dict) and result.get('success'):
                    data = result['data']

                    all_enriched.append({
                        'id': batch[j]['id'],
                        'name': data['fullName'],
                        'title': data['headline'],
                        'company': data['CurrentPositions'][0]['name'] if data.get('CurrentPositions') else None,
                        'location': data['location']['fullLocation'],
                        'profile_pic': data['profilePictureURL'],
                        'followers': data['followerCount'],
                        'connections': data['connectionsCount'],
                        'is_premium': data.get('premium', False),
                        'last_enriched': datetime.utcnow().isoformat()
                    })

            print(f"Processed {len(all_enriched)} / {len(contacts)}")

        return all_enriched

# Usage
crm_contacts = [
    {'id': 1, 'username': 'ryanroslansky'},
    {'id': 2, 'username': 'satyanadella'},
    # ... 5,000 contacts
]

enriched = asyncio.run(enrich_crm_contacts(crm_contacts))
print(f"✓ Enriched {len(enriched)} contacts")

Time: ~2-3 minutes for 5,000 contacts (vs. 2+ hours with sync)
Use Case 2: Company Intelligence
Scenario: You want to research a company — get employee count, recent hires, job openings, similar companies.
from linkdapi import LinkdAPI

api = LinkdAPI("your_api_key")

# Get company info by name
company = api.get_company_info(name="google")

if company['success']:
    comp_data = company['data']
    company_id = comp_data['id']

    print(f"Company: {comp_data['name']}")
    print(f"Industry: {comp_data.get('industry')}")

    # Get employee data
    employees = api.get_company_employees_data(company_id)
    if employees['success']:
        print(f"Employee count: {employees['data'].get('totalEmployees')}")

    # Get active jobs
    jobs = api.get_company_jobs([company_id], start=0)
    if jobs['success']:
        job_list = jobs['data'].get('jobs', [])
        print(f"Active jobs: {len(job_list)}")

        for job in job_list[:5]:
            print(f"  - {job.get('title')} ({job.get('location')})")

    # Find similar companies
    similar = api.get_similar_companies(company_id)
    if similar['success']:
        print("\nSimilar companies:")
        for comp in similar['data'].get('companies', [])[:5]:
            print(f"  - {comp.get('name')}")
Use Case 3: Lead Scoring with Enrichment
Scenario: You have inbound leads. You want to score them based on:
- Company size (employees)
- Job title seniority
- Industry
- Social proof (followers)
import asyncio
from linkdapi import AsyncLinkdAPI

def calculate_lead_score(profile_data):
    """Score a lead from 0-100 based on enriched data."""
    score = 0

    # Seniority score (0-40 points)
    title = profile_data.get('headline', '').lower()
    if any(word in title for word in ['ceo', 'founder', 'president']):
        score += 40
    elif any(word in title for word in ['vp', 'vice president', 'director']):
        score += 30
    elif any(word in title for word in ['manager', 'lead', 'head']):
        score += 20
    else:
        score += 10

    # Company quality (0-30 points)
    if profile_data.get('CurrentPositions'):
        company_name = profile_data['CurrentPositions'][0].get('name', '')
        # You'd check this against a list of target companies
        if company_name in ['Google', 'Microsoft', 'Amazon', 'Apple']:
            score += 30
        else:
            score += 15

    # Social proof (0-20 points)
    followers = profile_data.get('followerCount', 0)
    if followers > 10000:
        score += 20
    elif followers > 5000:
        score += 15
    elif followers > 1000:
        score += 10
    else:
        score += 5

    # Premium account (0-10 points)
    if profile_data.get('premium'):
        score += 10

    return min(score, 100)

async def score_leads(leads):
    """Enrich and score leads."""
    async with AsyncLinkdAPI("your_api_key") as api:
        # Enrich all leads
        tasks = [api.get_profile_overview(lead['username']) for lead in leads]
        profiles = await asyncio.gather(*tasks, return_exceptions=True)

        scored_leads = []
        for i, profile in enumerate(profiles):
            if isinstance(profile, dict) and profile.get('success'):
                data = profile['data']
                score = calculate_lead_score(data)

                scored_leads.append({
                    'lead_id': leads[i]['id'],
                    'name': data['fullName'],
                    'title': data['headline'],
                    'company': data['CurrentPositions'][0]['name'] if data.get('CurrentPositions') else None,
                    'score': score,
                    'priority': 'High' if score >= 70 else 'Medium' if score >= 40 else 'Low'
                })

        # Sort by score, best leads first
        scored_leads.sort(key=lambda x: x['score'], reverse=True)
        return scored_leads

# Usage
leads = [
    {'id': 1, 'username': 'ryanroslansky'},
    {'id': 2, 'username': 'satyanadella'},
    # ... more leads
]

scored = asyncio.run(score_leads(leads))

print("Top 10 leads:")
for lead in scored[:10]:
    print(f"{lead['name']} ({lead['company']}) - Score: {lead['score']} [{lead['priority']}]")


