If you're reading this, you're probably one of the thousands of developers who built data pipelines on Proxycurl, only to find out it shut down in July 2025.
No gradual sunset. No migration period. Just gone.
Now you need a replacement that actually works and won't disappear on you six months from now.
This guide covers everything you need to migrate from Proxycurl to LinkdAPI. Complete endpoint mapping, SDK migration, code examples, and common pitfalls to avoid.
Let's get into it.
Why Proxycurl Shut Down (And Why It Matters)
On January 24, 2025, LinkedIn filed a federal lawsuit against Proxycurl, its parent company Nubela Pte. Ltd., and its founder Steven Goh. The allegations were serious:
- Hundreds of thousands of fake LinkedIn accounts created to scrape data
- Millions of LinkedIn member profiles scraped without authorization
- Direct violations of LinkedIn's User Agreement
By July 2025, Proxycurl was done. CEO Steven Goh announced the shutdown, stating there was simply no point in fighting the lawsuit.
Here's why this matters for your migration decision:
The fake accounts problem. Proxycurl's method involved creating fake LinkedIn accounts to access data behind login walls. This is fundamentally different from scraping publicly available data. When LinkedIn's AI detection improved, Proxycurl had to create accounts faster than LinkedIn could block them. A losing battle.
The precedent is set. LinkedIn isn't backing down. They filed another lawsuit against ProAPIs in October 2025. Any provider using fake accounts is operating on borrowed time.
This isn't fearmongering. This is the reality of the LinkedIn data landscape in 2026.
Why LinkdAPI is Different
1. Transparent Data Collection
LinkdAPI scrapes publicly available data. The same data visible when you're not logged into LinkedIn. No fake accounts. No login simulation. No gray areas.
2. API Compatibility
LinkdAPI's endpoint structure is similar enough to Proxycurl that migration isn't a complete rewrite. You'll need to change some parameter names and adjust for response format differences, but the core concepts map cleanly.
3. Simple Pricing
Proxycurl's pricing was complicated. Different endpoints cost different credits. Search endpoints had base costs PLUS per result costs. Annual contracts with monthly payments that you couldn't cancel.
LinkdAPI is simpler:
- 100 free credits on signup (no credit card required)
- Most endpoints cost 1 credit (including full profile)
- Credits never expire
- No annual contracts
- Tiered pricing that gets cheaper as you scale
4. Native SDK Support
SDKs for Python, Node.js, and Go. The Python SDK includes an async client that handles batch processing efficiently. Up to 40x faster than synchronous requests.
5. Performance
Under 200ms response times with 99.9% uptime. Real time data on every request, not cached or stale profiles.
Key Differences You Need to Know
Before diving into the endpoint mapping, here are the fundamental differences that affect your migration:
Authentication Header
This trips up everyone.
Proxycurl:
Authorization: Bearer YOUR_API_KEY
LinkdAPI:
X-linkdapi-apikey: YOUR_API_KEY
That's X-linkdapi-apikey, not X-Api-Key or Authorization. Get this wrong and you'll get 401s all day.
Base URL
Proxycurl: https://nubela.co/proxycurl
LinkdAPI: https://linkdapi.com
Profile Identifiers
This is the biggest conceptual difference.
Proxycurl used full LinkedIn URLs as identifiers:
linkedin_profile_url=https://www.linkedin.com/in/williamhgates/
LinkdAPI uses usernames and URNs:
username=williamhgates
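If your pipeline already stores full Proxycurl-style profile URLs, a small helper (illustrative, not part of either SDK) can reduce them to the username LinkdAPI expects:

```python
from urllib.parse import urlparse

def linkedin_url_to_username(url: str) -> str:
    """Extract the username from a LinkedIn profile URL.

    'https://www.linkedin.com/in/williamhgates/' -> 'williamhgates'
    """
    path = urlparse(url).path              # e.g. '/in/williamhgates/'
    parts = [p for p in path.split('/') if p]
    if len(parts) >= 2 and parts[0] == 'in':
        return parts[1]
    raise ValueError(f"Not a LinkedIn profile URL: {url}")
```

Run this once over your stored identifiers during migration so the rest of your code only ever sees usernames.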
For endpoints that require URN, use the lightweight /api/v1/profile/username-to-urn endpoint:
```json
{
  "success": true,
  "statusCode": 200,
  "message": "Data retrieved successfully",
  "errors": null,
  "data": {
    "urn": "ACoAAAAKXBwBikfbNJww68eYvcu2dqDYJhHbp4g",
    "username": "ryanroslansky"
  }
}
```
This endpoint is optimized for URN lookups. It returns only what you need without extra data.
Company Identifiers
For endpoints that require a company ID, use the lightweight /api/v1/companies/company/universal-name-to-id endpoint:
```json
{
  "success": true,
  "statusCode": 200,
  "message": "Data retrieved successfully",
  "errors": null,
  "data": {
    "id": "1441",
    "universalName": "google"
  }
}
```
Response Envelope
Proxycurl returned data directly:
```json
{
  "first_name": "Bill",
  "last_name": "Gates",
  ...
}
```
LinkdAPI wraps responses in a standard envelope:
```json
{
  "success": true,
  "statusCode": 200,
  "message": "Data retrieved successfully",
  "errors": null,
  "data": {
    "firstName": "Bill",
    "lastName": "Gates",
    ...
  }
}
```
Always check success before accessing data. This envelope makes error handling much cleaner.
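A small helper (a sketch, not part of the SDK) keeps that success check in one place:

```python
def unwrap(payload: dict) -> dict:
    """Return the 'data' portion of a LinkdAPI envelope, or raise on failure."""
    if not payload.get("success"):
        raise RuntimeError(
            f"LinkdAPI error {payload.get('statusCode')}: {payload.get('message')}"
        )
    return payload["data"]
```

Call it on every parsed response so downstream code never touches the envelope directly.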
Field Naming Convention
Proxycurl: snake_case (e.g., first_name, company_linkedin_url)
LinkdAPI: camelCase (e.g., firstName, companyLink)
Plan for a find-and-replace session in your codebase.
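For data you've already stored, a mechanical converter handles the regular renames. Note that fields like company_linkedin_url mapping to companyLink don't follow the rule, so treat this as a sketch that covers only the predictable cases:

```python
def snake_to_camel(name: str) -> str:
    """'first_name' -> 'firstName'."""
    head, *rest = name.split("_")
    return head + "".join(word.capitalize() for word in rest)

def convert_keys(obj):
    """Recursively rename snake_case dict keys to camelCase."""
    if isinstance(obj, dict):
        return {snake_to_camel(k): convert_keys(v) for k, v in obj.items()}
    if isinstance(obj, list):
        return [convert_keys(v) for v in obj]
    return obj
```

Irregular renames still need an explicit mapping table on top of this.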
Complete Endpoint Mapping Reference
Here's the full mapping table. Bookmark this. You'll reference it constantly during migration.
Profile Endpoints
| Proxycurl Endpoint | LinkdAPI Equivalent | Notes |
|---|---|---|
| GET /api/v2/linkedin | GET /api/v1/profile/full | Full profile in one request (1 credit) |
| GET /api/linkedin/profile/resolve | GET /api/v1/profile/overview | Basic lookup by username |
| N/A | GET /api/v1/profile/username-to-urn | Lightweight URN lookup |
Company Endpoints
| Proxycurl Endpoint | LinkdAPI Equivalent | Notes |
|---|---|---|
| GET /api/linkedin/company | GET /api/v1/companies/company/info | Company details |
| GET /api/linkedin/company/resolve | GET /api/v1/companies/name-lookup | Search by name |
| N/A | GET /api/v1/companies/company/universal-name-to-id | Lightweight ID lookup |
Job Endpoints
| Proxycurl Endpoint | LinkdAPI Equivalent | Notes |
|---|---|---|
| GET /api/v2/linkedin/company/job | GET /api/v1/jobs/search | V1 job search |
| GET /api/linkedin/job | GET /api/v1/jobs/job/details | Job details (open jobs) |
| N/A | GET /api/v1/jobs/job/details-v2 | Job details (all statuses), includes closed jobs |
| N/A | | Similar jobs |
Search Endpoints
| Proxycurl Endpoint | LinkdAPI Equivalent | Notes |
|---|---|---|
| GET /api/search/person | GET /api/v1/search/people | Find professionals |
| GET /api/search/company | GET /api/v1/search/companies | Find companies |
| Post search | GET /api/v1/search/posts | Search LinkedIn posts |
| Advanced job search | GET /api/v1/search/jobs | V2 job search with more filters |
Content Endpoints
| Proxycurl Endpoint | LinkdAPI Equivalent | Notes |
|---|---|---|
| Person's posts | GET /api/v1/posts/all | All posts by profile |
| Post details | GET /api/v1/posts/info | Single post info |
| Post comments | GET /api/v1/posts/comments | Comments on a post |
| Post likes | GET /api/v1/posts/likes | Reactions on a post |
Lookup/Utility Endpoints
| Proxycurl Endpoint | LinkdAPI Equivalent | Notes |
|---|---|---|
| Geo location lookup | GET /api/v1/geos/name-lookup | Get geoUrn for filters |
| School lookup | GET /api/v1/search/schools | Find school IDs |
| Industry lookup | GET /api/v1/g/industry-lookup | Industry identifiers |
| Skills lookup | GET /api/v1/g/title-skills-lookup | Title/skill IDs |
Authentication Migration
Basic Setup
Proxycurl (old way):
```python
import requests

headers = {
    'Authorization': f'Bearer {PROXYCURL_API_KEY}'
}

response = requests.get(
    'https://nubela.co/proxycurl/api/v2/linkedin',
    params={'url': 'https://www.linkedin.com/in/williamhgates/'},
    headers=headers
)
```
LinkdAPI (new way):
```python
import requests

headers = {
    'X-linkdapi-apikey': LINKDAPI_API_KEY
}

response = requests.get(
    'https://linkdapi.com/api/v1/profile/full',
    params={'username': 'williamhgates'},
    headers=headers
)

data = response.json()
if data['success']:
    profile = data['data']
else:
    print(f"Error: {data['message']}")
```
Environment Variable Naming
Update your .env file:
Before:
PROXYCURL_API_KEY=your_key_here
After:
LINKDAPI_KEY=your_key_here
Getting Your API Key
- Sign up at https://linkdapi.com/signup
- You get 100 free credits immediately with no credit card required
- Copy your API key from the dashboard
- Set it as the LINKDAPI_KEY environment variable
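One way to fail fast when the key is missing at startup (load_api_key is an illustrative helper, not part of the SDK):

```python
import os

def load_api_key(var: str = "LINKDAPI_KEY") -> str:
    """Read the API key from the environment, failing loudly if it's unset."""
    key = os.environ.get(var)
    if not key:
        raise RuntimeError(f"Set the {var} environment variable")
    return key
```

Failing at import time beats discovering a missing key via a 401 mid-pipeline.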
SDK Migration (Python, Node.js, Go)
Python
Install:
```bash
pip install linkdapi
```
Proxycurl (old):
```python
from proxycurl.asyncio import Proxycurl
import asyncio

proxycurl = Proxycurl()

person = asyncio.run(proxycurl.linkedin.person.get(
    linkedin_profile_url='https://www.linkedin.com/in/williamhgates/'
))
```
LinkdAPI (new) Synchronous:
```python
from linkdapi import LinkdAPI

client = LinkdAPI("YOUR_API_KEY")

# Get basic profile
profile = client.get_profile_overview("williamhgates")
print(f"Name: {profile['data']['fullName']}")
print(f"URN: {profile['data']['urn']}")

# Get full profile (1 credit)
full_profile = client.get_full_profile(username="williamhgates")
```
LinkdAPI (new) Async (recommended for batch):
```python
from linkdapi import AsyncLinkdAPI
import asyncio

async def enrich_profiles(usernames):
    async with AsyncLinkdAPI("YOUR_API_KEY") as api:
        # Get URNs first using the lightweight endpoint
        tasks = [api.get_profile_urn(u) for u in usernames]
        urn_results = await asyncio.gather(*tasks)

        # Extract URNs for detailed enrichment
        urns = [r['data']['urn'] for r in urn_results if r['success']]

        # Fetch detailed data in parallel
        detail_tasks = []
        for urn in urns:
            detail_tasks.extend([
                api.get_full_experience(urn),
                api.get_skills(urn),
                api.get_education(urn)
            ])

        results = await asyncio.gather(*detail_tasks)
        return results

# Run
usernames = ["williamhgates", "satyanadella", "ryanroslansky"]
enriched = asyncio.run(enrich_profiles(usernames))
```
The async client is up to 40x faster for batch operations. If you're processing more than a few profiles, always use async.
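The gather pattern fires every request at once. If you need to cap how many are in flight (for example, to stay inside rate limits), a semaphore-bounded variant works with any async client; bounded_gather and fetch_one below are illustrative helpers, not SDK methods:

```python
import asyncio

async def bounded_gather(items, fetch_one, limit=10):
    """Run fetch_one over items with at most `limit` calls in flight at once."""
    sem = asyncio.Semaphore(limit)

    async def guarded(item):
        async with sem:
            return await fetch_one(item)

    # gather preserves input order in its results
    return await asyncio.gather(*(guarded(i) for i in items))
```

Pass any coroutine-returning function as fetch_one, e.g. a lambda wrapping the SDK's URN lookup.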
Node.js
Install:
```bash
npm install linkdapi
```
Proxycurl (old):
```javascript
const axios = require('axios');

const response = await axios.get('https://nubela.co/proxycurl/api/v2/linkedin', {
  headers: { 'Authorization': `Bearer ${API_KEY}` },
  params: { url: 'https://www.linkedin.com/in/williamhgates/' }
});
```
LinkdAPI (new):
```javascript
import { LinkdAPI } from 'linkdapi';

const api = new LinkdAPI({ apiKey: 'YOUR_API_KEY' });

// Get profile
const overview = await api.getProfileOverview('williamhgates');
console.log(overview.data.fullName);

// Get company info
const company = await api.getCompanyInfo({ name: 'google' });
console.log(company.data.employeeCount);

// Batch with Promise.all
const companyId = company.data.id;
const [employees, similar, jobs] = await Promise.all([
  api.getCompanyEmployeesData(companyId),
  api.getSimilarCompanies(companyId),
  api.getCompanyJobs([companyId]),
]);
```
Go
Install:
```bash
go get github.com/linkdapi/linkdapi-go-sdk
```
Usage:
```go
package main

import (
	"fmt"

	"github.com/linkdapi/linkdapi-go-sdk/linkdapi"
)

func main() {
	client := linkdapi.NewClient("YOUR_API_KEY")
	defer client.Close()

	// Get profile
	profile, err := client.GetProfileOverview("williamhgates")
	if err != nil {
		panic(err)
	}
	fmt.Println(profile.Data.FullName)

	// Search people
	params := linkdapi.PeopleSearchParams{
		Keyword: "VP Sales",
		Title:   "VP Sales",
		GeoUrn:  "103644278", // San Francisco
		Start:   0,
	}
	results, _ := client.SearchPeople(params)

	for _, person := range results.Data.People {
		fmt.Printf("%s at %s\n", person.FullName, person.Headline)
	}
}
```
Code Examples: Before and After
Example 1: Basic Profile Enrichment
Proxycurl (before):
```python
import requests

def get_profile(linkedin_url):
    response = requests.get(
        'https://nubela.co/proxycurl/api/v2/linkedin',
        headers={'Authorization': f'Bearer {API_KEY}'},
        params={
            'url': linkedin_url,
            'fallback_to_cache': 'on-error',
            'skills': 'include',
            'personal_email': 'include'
        }
    )
    return response.json()

profile = get_profile('https://www.linkedin.com/in/williamhgates/')
print(profile['first_name'])
print(profile['occupation'])
```
LinkdAPI (after):
```python
from linkdapi import LinkdAPI

client = LinkdAPI(API_KEY)

def get_profile(username):
    # Full profile in one call (1 credit)
    result = client.get_full_profile(username=username)

    if not result['success']:
        raise Exception(result['message'])

    return result['data']

profile = get_profile('williamhgates')
print(profile['firstName'])
print(profile['headline'])
```
Example 2: Company Research
Proxycurl (before):
```python
import requests

def research_company(company_url):
    # Get company info
    company = requests.get(
        'https://nubela.co/proxycurl/api/linkedin/company',
        headers={'Authorization': f'Bearer {API_KEY}'},
        params={'url': company_url}
    ).json()

    # Get employee count
    employees = requests.get(
        'https://nubela.co/proxycurl/api/linkedin/company/employees/',
        headers={'Authorization': f'Bearer {API_KEY}'},
        params={'url': company_url}
    ).json()

    return company, employees
```
LinkdAPI (after):
```python
import requests
from linkdapi import LinkdAPI

client = LinkdAPI(API_KEY)

def research_company(company_name):
    # Get company ID using the lightweight endpoint
    id_result = requests.get(
        'https://linkdapi.com/api/v1/companies/company/universal-name-to-id',
        headers={'X-linkdapi-apikey': API_KEY},
        params={'universalName': company_name}
    ).json()

    company_id = id_result['data']['id']

    # Get company info
    info = client.get_company_info(id=company_id)

    # Get employee data
    employees = client.get_company_employees_data(company_id)

    return info['data'], employees['data']

company, employees = research_company('google')
print(f"Company: {company['name']}")
print(f"Total employees: {employees['totalEmployees']}")
print(f"Distribution by function: {employees['distribution']['function']}")
```
Example 3: People Search with Location Filter
Proxycurl (before):
```python
import requests

def search_vps_in_sf():
    response = requests.get(
        'https://nubela.co/proxycurl/api/search/person',
        headers={'Authorization': f'Bearer {API_KEY}'},
        params={
            'country': 'US',
            'city': 'San Francisco',
            'current_role_title': 'VP Sales',
            'page_size': 10
        }
    )
    return response.json()
```
LinkdAPI (after):
```python
from linkdapi import LinkdAPI

client = LinkdAPI(API_KEY)

def search_vps_in_sf():
    # First, get the geoUrn for San Francisco
    geo_result = client.geo_name_lookup('San Francisco')
    geo_urn = geo_result['data'][0]['urn']

    # Now search
    results = client.search_people(
        keyword='VP Sales',
        title='VP Sales',
        geoUrn=geo_urn,
        start=0,
        count=10
    )

    return results['data']['people']

vps = search_vps_in_sf()
for person in vps:
    print(f"{person['fullName']} at {person['headline']}")
```
Example 4: Job Market Intelligence
Proxycurl (before):
```python
import requests

def find_remote_engineering_jobs():
    response = requests.get(
        'https://nubela.co/proxycurl/api/v2/linkedin/company/job',
        headers={'Authorization': f'Bearer {API_KEY}'},
        params={
            'job_type': 'full-time',
            'search_id': 'COMPANY_SEARCH_ID',
            'when': 'past-week'
        }
    )
    return response.json()
```
LinkdAPI (after):
```python
import requests

# V2 search has more filters
def find_remote_engineering_jobs():
    results = requests.get(
        'https://linkdapi.com/api/v1/search/jobs',
        headers={'X-linkdapi-apikey': API_KEY},
        params={
            'keyword': 'Software Engineer',
            'workplaceTypes': 'remote',
            'datePosted': '1week',
            'easyApply': 'true',
            'start': 0
        }
    )
    return results.json()['data']

# Or use V1 for simpler queries
def find_jobs_v1():
    results = requests.get(
        'https://linkdapi.com/api/v1/jobs/search',
        headers={'X-linkdapi-apikey': API_KEY},
        params={
            'keyword': 'Marketing Manager',
            'location': 'London',
            'timePosted': '1week',
            'workArrangement': 'hybrid'
        }
    )
    return results.json()['data']
```
Example 5: Content Research (Posts)
LinkdAPI:
```python
import requests
from linkdapi import LinkdAPI

client = LinkdAPI(API_KEY)

def get_thought_leader_content(username):
    # Get profile URN first using the lightweight endpoint
    urn_result = requests.get(
        'https://linkdapi.com/api/v1/profile/username-to-urn',
        headers={'X-linkdapi-apikey': API_KEY},
        params={'username': username}
    ).json()

    urn = urn_result['data']['urn']

    # Get all their posts
    posts = requests.get(
        'https://linkdapi.com/api/v1/posts/all',
        headers={'X-linkdapi-apikey': API_KEY},
        params={'urn': urn, 'start': 0}
    ).json()

    # Get featured/pinned posts
    featured = requests.get(
        'https://linkdapi.com/api/v1/posts/featured',
        headers={'X-linkdapi-apikey': API_KEY},
        params={'urn': urn}
    ).json()

    return posts['data'], featured['data']

# Search posts by keyword
def search_ai_posts():
    results = requests.get(
        'https://linkdapi.com/api/v1/search/posts',
        headers={'X-linkdapi-apikey': API_KEY},
        params={
            'keyword': 'AI marketing',
            'sortBy': 'date_posted',
            'datePosted': 'past-week'
        }
    ).json()
    return results['data']
```
Start building with 100 free credits
Access profiles, companies, jobs, and more through our reliable, high-performance API. No credit card required.
Handling Response Format Differences
The Response Envelope
Every LinkdAPI response follows this structure:
```json
{
  "success": true,
  "statusCode": 200,
  "message": "Data retrieved successfully",
  "errors": null,
  "data": { ... }
}
```
Always check success first:
```python
import requests

# AuthenticationError, NotFoundError, RateLimitError, and APIError are
# application-defined exception classes, not part of any library.
def safe_request(endpoint, params):
    response = requests.get(
        f'https://linkdapi.com{endpoint}',
        headers={'X-linkdapi-apikey': API_KEY},
        params=params
    )

    result = response.json()

    if not result['success']:
        # Handle error
        status = result['statusCode']
        message = result['message']

        if status == 401:
            raise AuthenticationError("Invalid API key")
        elif status == 404:
            raise NotFoundError(f"Resource not found: {message}")
        elif status == 429:
            raise RateLimitError("Rate limited, retry with backoff")
        else:
            raise APIError(f"Error {status}: {message}")

    return result['data']
```
Field Name Mapping
Quick reference for common field name changes:
| Proxycurl Field | LinkdAPI Field |
|---|---|
| first_name | firstName |