The $10K/Month Opportunity: LinkedIn lead generation tools are crushing it in 2025. Companies are paying $99-499/month for tools that enrich LinkedIn profiles with complete data. The market is massive, the demand is real, and building one is... well, let me show you exactly how.
This comprehensive guide reveals three approaches to building a LinkedIn lead generation tool:
- The DIY Selenium Approach (6 months, $50K+ cost)
- The Requests + BeautifulSoup Approach (3 months, constant maintenance)
- The LinkdAPI Approach (2 days, production-ready)
By the end, you'll understand why approach #3 is how successful products are built in 2025.
Spoiler: The difference between success and failure isn't your coding skills—it's choosing the right foundation.
What We're Building
Product: LinkedIn Lead Enrichment Tool
Core Features:
- Extract complete LinkedIn profiles by username
- Enrich CRM contacts with LinkedIn data
- Search for leads by criteria
- Export enriched data to CSV/databases
- Real-time enrichment API
Tech Stack:
- Backend: Python (Flask/FastAPI)
- Frontend: React
- Database: PostgreSQL
- Queue: Redis (for bulk processing)
- API: LinkdAPI (spoiler: this is the secret sauce)
Target Users:
- Sales teams enriching leads
- Recruiters sourcing candidates
- Agencies building lead lists
- SaaS products needing LinkedIn data
Approach #1: The DIY Selenium Way (Don't Do This)
Let's start with how most developers THINK they should build it.
The Idea
"I'll use Selenium to automate a real browser, log in with a LinkedIn account, navigate to profiles, and scrape the HTML."
Implementation
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
import time
import random

class LinkedInScraperSelenium:
    def __init__(self, email, password):
        self.email = email
        self.password = password
        self.driver = None

    def login(self):
        """Login to LinkedIn using Selenium"""
        self.driver = webdriver.Chrome()
        self.driver.get("https://www.linkedin.com/login")

        # Wait for login form
        time.sleep(2)

        # Enter credentials
        email_field = self.driver.find_element(By.ID, "username")
        email_field.send_keys(self.email)

        password_field = self.driver.find_element(By.ID, "password")
        password_field.send_keys(self.password)

        # Click login
        login_button = self.driver.find_element(By.CSS_SELECTOR, "button[type='submit']")
        login_button.click()

        # Wait for dashboard
        time.sleep(5)

        # Handle 2FA manually (ugh)
        input("Complete 2FA if prompted, then press Enter...")

    def scrape_profile(self, username):
        """Scrape a profile by username"""
        profile_url = f"https://www.linkedin.com/in/{username}/"
        self.driver.get(profile_url)

        # Random delay to avoid detection
        time.sleep(random.uniform(3, 7))

        # Scroll to load dynamic content
        self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
        time.sleep(2)

        # Click "Show more" buttons
        try:
            show_more_buttons = self.driver.find_elements(By.CLASS_NAME, "inline-show-more-text__button")
            for button in show_more_buttons:
                try:
                    button.click()
                    time.sleep(1)
                except:
                    pass
        except:
            pass

        # Get HTML
        html = self.driver.page_source
        soup = BeautifulSoup(html, 'html.parser')

        # Now the nightmare begins: parsing the HTML
        profile_data = {}

        # Try to find name (LinkedIn changes classes constantly)
        try:
            name_element = soup.find('h1', class_='text-heading-xlarge inline t-24 v-align-middle break-words')
            profile_data['name'] = name_element.text.strip() if name_element else None
        except:
            profile_data['name'] = None

        # Try to find headline (good luck with this selector)
        try:
            headline = soup.find('div', class_='text-body-medium break-words')
            profile_data['headline'] = headline.text.strip() if headline else None
        except:
            profile_data['headline'] = None

        # Try to find experience... oh boy
        try:
            experience_section = soup.find('div', {'id': 'experience'})
            # ... 50+ more lines of fragile parsing code
        except:
            profile_data['experience'] = []

        return profile_data

    def close(self):
        if self.driver:
            self.driver.quit()

# Usage
scraper = LinkedInScraperSelenium("[email protected]", "your-password")
scraper.login()
profile = scraper.scrape_profile("ryanroslansky")
print(profile)
scraper.close()
```

The Problems (Why This Fails)
1. Account Ban Risk
- LinkedIn detects Selenium
- Your account gets banned in days/weeks
- Need constant new accounts
- Violates LinkedIn ToS
2. Incredibly Slow
- 15-30 seconds per profile
- Can't parallelize (too risky)
- Maximum: 240 profiles/hour
- Daily maximum: ~2,000 profiles (before ban)
3. Constantly Breaks
- LinkedIn changes HTML structure weekly
- CSS classes change randomly
- Need constant updates
- 10-20 hours/month maintenance
4. CAPTCHA Hell
- Random CAPTCHAs
- Phone verification
- Email verification
- Manual intervention required
5. Infrastructure Costs
- Proxies: $500/month minimum
- Residential IPs: $2,000/month
- CAPTCHA solving: $300/month
- Total: $2,800/month just to TRY
6. Development Time
- Initial build: 3-4 weeks
- HTML parsing: 2 weeks
- Error handling: 2 weeks
- Testing: 1 week
- Total: 8-10 weeks (2.5 months)
7. The Real Cost
- Development: $15,000-25,000
- Infrastructure: $2,800/month
- Maintenance: $2,000/month
- Year 1 Total: $72,600-82,600 (development plus 12 months of infrastructure and maintenance)
- Ongoing: $4,800/month
Verdict: ❌ Don't waste your time. There's a better way.
Approach #2: The Requests + BeautifulSoup Way (Also Don't Do This)
"Okay, so Selenium is slow. What if I use requests directly and parse the HTML?"
The Idea
Send HTTP requests to LinkedIn, grab HTML, parse it. Faster than Selenium, right?
Implementation
```python
import requests
from bs4 import BeautifulSoup
import json
import re

class LinkedInScraperRequests:
    def __init__(self):
        self.session = requests.Session()
        self.session.headers.update({
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36',
            'Accept-Language': 'en-US,en;q=0.9',
            # Add 20+ more headers to look human
        })
        self.cookies = None

    def login(self, email, password):
        """Login via requests"""
        # Get CSRF token
        login_page = self.session.get('https://www.linkedin.com/login')
        soup = BeautifulSoup(login_page.text, 'html.parser')
        csrf = soup.find('input', {'name': 'loginCsrfParam'})['value']

        # Submit login
        login_data = {
            'session_key': email,
            'session_password': password,
            'loginCsrfParam': csrf
        }

        response = self.session.post(
            'https://www.linkedin.com/checkpoint/lg/login-submit',
            data=login_data
        )

        if response.status_code != 200:
            raise Exception("Login failed")

        self.cookies = self.session.cookies

    def scrape_profile(self, username):
        """Scrape profile HTML"""
        url = f"https://www.linkedin.com/in/{username}/"

        response = self.session.get(url)

        if response.status_code == 999:
            # Rate limited!
            raise Exception("Rate limited - need to wait")

        soup = BeautifulSoup(response.text, 'html.parser')

        # Now parse the nightmare HTML
        # LinkedIn uses dynamic IDs, changing classes, and React
        # This is nearly impossible to maintain

        # Try to extract data from various possible structures
        profile = {}

        # Method 1: Look for JSON-LD
        json_ld = soup.find('script', {'type': 'application/ld+json'})
        if json_ld:
            try:
                data = json.loads(json_ld.string)
                profile['name'] = data.get('name')
                profile['headline'] = data.get('description')
            except:
                pass

        # Method 2: Look for React props (changes constantly)
        scripts = soup.find_all('script')
        for script in scripts:
            if script.string and 'voyagerIdentity' in script.string:
                # Try to extract JSON from JavaScript
                # This is fragile as hell
                try:
                    match = re.search(r'{"data":.*?}}', script.string)
                    if match:
                        data = json.loads(match.group())
                        # Parse deeply nested structure
                        # ... 100+ lines of complex parsing
                except:
                    pass

        # Method 3: Fallback to HTML parsing (unreliable)
        # ... another 200 lines of fragile code

        return profile

# Usage
scraper = LinkedInScraperRequests()
scraper.login("[email protected]", "your-password")
profile = scraper.scrape_profile("ryanroslansky")
```

The Problems (Still Bad)
1. Still Need LinkedIn Account
- Account ban risk remains
- Cookie management nightmare
- Session expiration handling
2. HTML Parsing Hell
- LinkedIn uses React (client-side rendering)
- Most data is in embedded JSON
- Structure changes weekly
- Regex hell
3. Rate Limiting
- Get blocked after 50-100 requests
- Need proxy rotation
- Residential proxies required
- Expensive
4. Missing Data
- Can't get all fields (many are lazy-loaded)
- Missing engagement data
- No access to API-only fields
5. Maintenance Nightmare
- Breaks every 1-2 weeks
- Need dedicated developer
- 20+ hours/month fixing
Cost:
- Development: $10,000-15,000
- Proxies: $2,000/month
- Maintenance: $2,000/month
- Year 1 Total: $58,000-63,000
Verdict: ❌ Still a terrible idea. Keep reading.
Approach #3: The LinkdAPI Way (THE Right Way)
Now let me show you how professionals build this in 2025.
The Magic: LinkdAPI's Full Profile Endpoint
Instead of scraping HTML, parsing React, managing cookies, and maintaining fragile code...
What if you could get EVERYTHING in ONE API call?
LinkdAPI provides BOTH sync and async clients - use async for production scale:
```python
from linkdapi import AsyncLinkdAPI
import asyncio

async def enrich_profile():
    # Initialize async client (for production scale)
    client = AsyncLinkdAPI("your_api_key")

    # Get COMPLETE profile in ONE call
    profile = await client.get_full_profile("ryanroslansky")

    # That's it. You're done.
    return profile

# Run it
profile = asyncio.run(enrich_profile())
```

Why async? Process hundreds of profiles concurrently. Sync is available too for simple scripts, but async is how you scale to production.
What You Get (Real Response)
```json
{
  "success": true,
  "statusCode": 200,
  "message": "Data retrieved successfully",
  "data": {
    "id": 678940,
    "urn": "ACoAAAAKXBwBikfbNJww68eYvcu2dqDYJhHbp4g",
    "username": "ryanroslansky",
    "firstName": "Ryan",
    "lastName": "Roslansky",
    "isCreator": true,
    "isPremium": true,
    "profilePicture": "https://media.licdn.com/dms/image/...",
    "profilePictures": [
      {"url": "...", "width": 100, "height": 100},
      {"url": "...", "width": 200, "height": 200},
      {"url": "...", "width": 400, "height": 400},
      {"url": "...", "width": 800, "height": 800}
    ],
    "backgroundImage": [
      {"width": 800, "height": 200, "url": "..."},
      {"width": 1280, "height": 320, "url": "..."}
    ],
    "summary": "As CEO of LinkedIn and EVP of Microsoft...",
    "headline": "CEO at LinkedIn",
    "geo": {
      "country": "Stati Uniti d'America",
      "city": "San Francisco",
      "full": "San Francisco",
      "countryCode": "us"
    },
    "languages": [
      {"name": "English", "proficiency": "NATIVE_OR_BILINGUAL"},
      {"name": "Spanish", "proficiency": "FULL_PROFESSIONAL"}
    ],
    "educations": [],
    "position": [
      {
        "companyId": 1337,
        "companyName": "LinkedIn",
        "companyUsername": "linkedin",
        "companyURL": "https://www.linkedin.com/company/linkedin/",
        "companyLogo": "https://media.licdn.com/dms/image/...",
        "companyIndustry": "Software",
        "companyStaffCountRange": "10001 - 0",
        "title": "Chief Executive Officer",
        "location": "San Francisco Bay Area",
        "description": "",
        "employmentType": "A tempo pieno",
        "start": {"year": 2020, "month": 6, "day": 0},
        "end": {"year": 0, "month": 0, "day": 0}
      }
    ],
    "fullPositions": [...],  // Complete work history
    "skills": [
      {"name": "Microsoft Copilot", "passedSkillAssessment": false},
      {"name": "Product Management", "passedSkillAssessment": false}
    ],
    "courses": [],
    "certifications": [],
    "projects": {},
    "publications": [],
    "volunteering": [],
    "supportedLocales": [{"country": "US", "language": "en"}]
  }
}
```

ONE API call. EVERYTHING. Clean, structured, ready to use.
Building the Lead Generation Tool (The Right Way)
Now let's build a production-ready LinkedIn lead generation tool in 2 days.
Why AsyncLinkdAPI for Production
LinkdAPI provides TWO clients:
```python
# Sync client - perfect for simple scripts
from linkdapi import LinkdAPI
client = LinkdAPI("your_api_key")
profile = client.get_full_profile("username")

# Async client - perfect for production scale
from linkdapi import AsyncLinkdAPI
client = AsyncLinkdAPI("your_api_key")
profile = await client.get_full_profile("username")
```

When to use Async:
- ✅ Building web APIs (FastAPI, Flask)
- ✅ Processing bulk data (100+ profiles)
- ✅ Real-time applications
- ✅ High-throughput systems
When to use Sync:
- ✅ Simple one-off scripts
- ✅ Jupyter notebooks
- ✅ Quick data exploration
- ✅ Learning/testing
Performance Comparison:
```python
# Sequential (sync, or async used badly)
# 100 profiles × 0.5 seconds = 50 seconds

# Concurrent (AsyncLinkdAPI used properly)
# 100 profiles in parallel ≈ 0.5 seconds of wall-clock time
```

That's up to 100x faster. This is why we use async for production.
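You can sanity-check that math without touching the real API. This sketch simulates the request latency with `asyncio.sleep`; the `fetch_profile` coroutine here is a stand-in for an API call, not part of the linkdapi library:

```python
import asyncio
import time

async def fetch_profile(username: str) -> dict:
    # Stand-in for an API call: simulate ~0.5s of network latency
    await asyncio.sleep(0.5)
    return {"username": username}

async def main() -> float:
    usernames = [f"user{i}" for i in range(100)]

    start = time.perf_counter()
    # All 100 "requests" share one event loop and wait concurrently
    profiles = await asyncio.gather(*(fetch_profile(u) for u in usernames))
    elapsed = time.perf_counter() - start

    print(f"{len(profiles)} profiles in {elapsed:.2f}s")  # ~0.5s, not ~50s
    return elapsed

elapsed = asyncio.run(main())
```

Swap the simulated sleep for a real `get_full_profile` call and the shape is the same; in practice throughput is bounded by your plan's rate limits rather than by the event loop.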
Architecture
```
┌─────────────┐
│  Frontend   │  (React)
│  Dashboard  │
└─────┬───────┘
      │
      ▼
┌─────────────┐
│   FastAPI   │  ← async/await
│   Backend   │
└─────┬───────┘
      │
      ├─────────────────┐
      │                 │
      ▼                 ▼
┌─────────────┐  ┌──────────────┐
│ PostgreSQL  │  │ AsyncLinkdAPI│  ← 100x faster
│  Database   │  │   (async)    │
└─────────────┘  └──────┬───────┘
                        │
                        ▼
                 ┌─────────────┐
                 │    Redis    │  (Queue for bulk)
                 │    Queue    │
                 └─────────────┘
```

Day 1: Backend (6 hours)
Step 1: Setup (30 minutes)
```bash
# Create project
mkdir linkedin-lead-gen
cd linkedin-lead-gen

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install fastapi uvicorn linkdapi sqlalchemy psycopg2-binary redis celery
```

Step 2: Core Enrichment Service (2 hours)
```python
# services/enrichment_service.py

from linkdapi import AsyncLinkdAPI
from typing import Dict, Optional, List
from datetime import datetime
import os
import asyncio

class LinkedInEnrichmentService:
    def __init__(self):
        # Use AsyncLinkdAPI for production scale
        # Note: LinkdAPI also provides a sync client for simple scripts
        self.client = AsyncLinkdAPI(os.getenv("LINKDAPI_KEY"))

    async def enrich_profile(self, username: str) -> Dict:
        """
        Enrich a LinkedIn profile using the KILLER full profile endpoint

        This ONE call gets you:
        - Basic info (name, headline, location)
        - Complete work experience with descriptions
        - Full education history
        - All skills
        - Languages, certifications, courses
        - Profile pictures (all sizes)
        - Background image
        - And MORE

        Compare this to Selenium: ONE call vs 30 seconds of scraping
        """
        try:
            # THE MAGIC LINE (async for concurrent processing)
            profile = await self.client.get_full_profile(username)

            # Data is already clean, structured, and ready to use
            return {
                "success": True,
                "data": profile,
                "source": "linkedin",
                "method": "linkdapi"
            }

        except Exception as e:
            return {
                "success": False,
                "error": str(e)
            }

    async def enrich_bulk(self, usernames: List[str]) -> List[Dict]:
        """
        Enrich multiple profiles CONCURRENTLY

        This is where async SHINES:
        - Process 100 profiles in parallel
        - Roughly the same wall-clock time as processing 1 profile
        - Up to 100x faster than sequential processing
        """
        # Create tasks for all profiles
        tasks = [self.enrich_profile(username) for username in usernames]

        # Execute ALL concurrently
        results = await asyncio.gather(*tasks)

        return results

    async def enrich_bulk_batched(self, usernames: List[str], batch_size: int = 50) -> List[Dict]:
        """
        Enrich profiles in batches for rate limit control

        Process 1000s of profiles efficiently:
        - Batch size 50: process 50 at a time
        - Prevents overwhelming the API
        - Still 50x faster than sequential
        """
        results = []

        for i in range(0, len(usernames), batch_size):
            batch = usernames[i:i + batch_size]
            batch_results = await self.enrich_bulk(batch)
            results.extend(batch_results)

            # Small delay between batches if needed
            if i + batch_size < len(usernames):
                await asyncio.sleep(0.1)

        return results

    def extract_key_fields(self, profile: Dict) -> Dict:
        """
        Extract and format key fields for CRM integration
        """
        data = profile.get('data', {})

        return {
            # Basic Info
            "first_name": data.get('firstName'),
            "last_name": data.get('lastName'),
            "full_name": f"{data.get('firstName', '')} {data.get('lastName', '')}".strip(),
            "headline": data.get('headline'),
            "summary": data.get('summary'),

            # Location
            "city": data.get('geo', {}).get('city'),
            "country": data.get('geo', {}).get('country'),
            "location_full": data.get('geo', {}).get('full'),

            # Current Position
            "current_company": data.get('position', [{}])[0].get('companyName') if data.get('position') else None,
            "current_title": data.get('position', [{}])[0].get('title') if data.get('position') else None,
            "company_industry": data.get('position', [{}])[0].get('companyIndustry') if data.get('position') else None,

            # Social Proof
            "is_creator": data.get('isCreator'),
            "is_premium": data.get('isPremium'),

            # Experience
            "total_positions": len(data.get('fullPositions', [])),
            "years_experience": self.calculate_years_experience(data.get('fullPositions', [])),

            # Skills
            "skills": [skill.get('name') for skill in data.get('skills', [])],
            "total_skills": len(data.get('skills', [])),

            # Languages
            "languages": [lang.get('name') for lang in data.get('languages', [])],

            # Images
            "profile_picture": data.get('profilePicture'),
            "profile_pictures": data.get('profilePictures', []),

            # LinkedIn URLs
            "linkedin_url": f"https://www.linkedin.com/in/{data.get('username')}/",
            "username": data.get('username'),

            # Raw data for advanced use
            "raw_profile": data
        }

    def calculate_years_experience(self, positions: list) -> int:
        """Calculate total years of experience"""
        total_months = 0
        # Open-ended roles (end year 0) run to the current year
        current_year = datetime.now().year
        for position in positions:
            start = position.get('start', {})
            end = position.get('end', {})

            start_year = start.get('year', 0)
            end_year = end.get('year', 0) if end.get('year', 0) > 0 else current_year

            if start_year > 0:
                years = end_year - start_year
                total_months += years * 12

        return total_months // 12
```

Step 3: FastAPI Backend (2 hours)
```python
# main.py

from fastapi import FastAPI, HTTPException, BackgroundTasks
from fastapi.middleware.cors import CORSMiddleware
from fastapi.responses import StreamingResponse
from pydantic import BaseModel
from typing import List, Optional
from services.enrichment_service import LinkedInEnrichmentService
import uvicorn
import asyncio
import json

app = FastAPI(
    title="LinkedIn Lead Generation API",
    description="Powered by LinkdAPI's AsyncLinkdAPI client for production scale",
    version="1.0.0"
)

# CORS
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# Initialize enrichment service
enrichment_service = LinkedInEnrichmentService()

# Request models
class EnrichRequest(BaseModel):
    username: str

class BulkEnrichRequest(BaseModel):
    usernames: List[str]
    batch_size: Optional[int] = 50

# Routes
@app.post("/api/enrich")
async def enrich_profile(request: EnrichRequest):
    """
    Enrich a single LinkedIn profile using async

    Example:
    POST /api/enrich
    {"username": "ryanroslansky"}
    """
    result = await enrichment_service.enrich_profile(request.username)

    if not result['success']:
        raise HTTPException(status_code=400, detail=result['error'])

    # Extract key fields (pass the raw API response, which is what
    # extract_key_fields expects - it unwraps the 'data' envelope itself)
    enriched = enrichment_service.extract_key_fields(result['data'])

    return {
        "success": True,
        "data": enriched
    }

@app.post("/api/enrich/bulk")
async def enrich_bulk(request: BulkEnrichRequest):
    """
    Enrich multiple profiles CONCURRENTLY using AsyncLinkdAPI

    This is the POWER of async:
    - 100 profiles? Takes roughly the same time as 1 profile
    - All processed in parallel
    - Up to 100x faster than sequential

    Example:
    POST /api/enrich/bulk
    {
      "usernames": ["ryanroslansky", "williamhgates", "jeffweiner08"],
      "batch_size": 50
    }
    """
    # Use batched processing for large lists
    results = await enrichment_service.enrich_bulk_batched(
        request.usernames,
        batch_size=request.batch_size
    )

    enriched_results = []
    for result in results:
        if result['success']:
            enriched = enrichment_service.extract_key_fields(result['data'])
            enriched_results.append(enriched)

    return {
        "success": True,
        "count": len(enriched_results),
        "total_requested": len(request.usernames),
        "data": enriched_results
    }

@app.post("/api/enrich/stream")
async def enrich_stream(request: BulkEnrichRequest):
    """
    Stream enriched profiles as they complete

    Perfect for real-time UI updates:
    - Start receiving results immediately
    - Don't wait for entire batch
    - Show progress in real-time
    """
    async def generate():
        for username in request.usernames:
            result = await enrichment_service.enrich_profile(username)
            if result['success']:
                enriched = enrichment_service.extract_key_fields(result['data'])
                yield f"data: {json.dumps(enriched)}\n\n"

    return StreamingResponse(
        generate(),
        media_type="text/event-stream"
    )

@app.get("/api/health")
async def health_check():
    """Health check endpoint"""
    return {"status": "healthy", "client": "AsyncLinkdAPI"}

if __name__ == "__main__":
    uvicorn.run(app, host="0.0.0.0", port=8000)
```
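The feature list at the top promises CSV export, but the backend above never implements it. Here's a minimal sketch, assuming the flattened dict shape returned by `extract_key_fields` (the `leads_to_csv` helper name is my own, not part of the API):

```python
import csv
import io

def leads_to_csv(leads: list[dict]) -> str:
    """Flatten enriched leads into CSV text for download or file export."""
    columns = [
        "username", "full_name", "headline", "current_company",
        "current_title", "location_full", "years_experience", "skills",
    ]
    buffer = io.StringIO()
    writer = csv.DictWriter(buffer, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    for lead in leads:
        row = {key: lead.get(key) for key in columns}
        # Join list fields (e.g. skills) into a single spreadsheet cell
        if isinstance(row.get("skills"), list):
            row["skills"] = "; ".join(row["skills"])
        writer.writerow(row)
    return buffer.getvalue()

# Example with one enriched lead
csv_text = leads_to_csv([
    {"username": "ryanroslansky", "full_name": "Ryan Roslansky",
     "headline": "CEO at LinkedIn", "current_company": "LinkedIn",
     "current_title": "Chief Executive Officer", "location_full": "San Francisco",
     "years_experience": 5, "skills": ["Product Management"]},
])
print(csv_text)
```

Wired into FastAPI, this would return the string via a `Response` with `media_type="text/csv"` and a `Content-Disposition` header.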
Step 4: Database Models (1.5 hours)
```python
# models/lead.py

from sqlalchemy import Column, Integer, String, Boolean, JSON, DateTime, Text
from sqlalchemy.ext.declarative import declarative_base
from datetime import datetime

Base = declarative_base()

class Lead(Base):
    __tablename__ = "leads"

    id = Column(Integer, primary_key=True, index=True)

    # LinkedIn Info
    username = Column(String, unique=True, index=True)
    linkedin_url = Column(String)

    # Basic Info
    first_name = Column(String)
    last_name = Column(String)
    full_name = Column(String, index=True)
    headline = Column(String)
    summary = Column(Text)

    # Location
    city = Column(String)
    country = Column(String)
    location_full = Column(String)

    # Current Position
    current_company = Column(String, index=True)
    current_title = Column(String, index=True)
    company_industry = Column(String)

    # Experience
    total_positions = Column(Integer)
    years_experience = Column(Integer)

    # Skills & Languages
    skills = Column(JSON)     # Array of skill names
    languages = Column(JSON)  # Array of language names

    # Social Proof
    is_creator = Column(Boolean)
    is_premium = Column(Boolean)

    # Images
    profile_picture = Column(String)

    # Raw Data
    raw_profile = Column(JSON)  # Complete LinkdAPI response

    # Metadata
    created_at = Column(DateTime, default=datetime.utcnow)
    updated_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow)
    enrichment_status = Column(String, default="completed")

    def to_dict(self):
        return {
            "id": self.id,
            "username": self.username,
            "full_name": self.full_name,
            "headline": self.headline,
            "current_company": self.current_company,
            "current_title": self.current_title,
            "location": self.location_full,
            "years_experience": self.years_experience,
            "skills": self.skills,
            "profile_picture": self.profile_picture,
            "linkedin_url": self.linkedin_url
        }
```

Day 2: Frontend + Deployment (8 hours)
Step 1: React Dashboard (4 hours)
```jsx
// src/App.jsx

import React, { useState } from 'react';
import axios from 'axios';

function App() {
  const [username, setUsername] = useState('');
  const [loading, setLoading] = useState(false);
  const [result, setResult] = useState(null);
  const [error, setError] = useState(null);

  const enrichProfile = async () => {
    if (!username.trim()) return;

    setLoading(true);
    setError(null);
    setResult(null);

    try {
      const response = await axios.post('http://localhost:8000/api/enrich', {
        username: username
      });

      setResult(response.data.data);
    } catch (err) {
      setError(err.response?.data?.detail || 'Enrichment failed');
    } finally {
      setLoading(false);
    }
  };

  return (
    <div className="min-h-screen bg-gray-50 py-12 px-4">
      <div className="max-w-4xl mx-auto">
        <h1 className="text-4xl font-bold text-center mb-8">
          LinkedIn Lead Enrichment
        </h1>
        <p className="text-center text-gray-600 mb-8">
          Powered by LinkdAPI - Get complete profile data in seconds
        </p>

        <div className="bg-white rounded-lg shadow-lg p-8 mb-8">
          <div className="flex gap-4">
            <input
              type="text"
              value={username}
              onChange={(e) => setUsername(e.target.value)}
              placeholder="Enter LinkedIn username (e.g., ryanroslansky)"
              className="flex-1 px-4 py-3 border border-gray-300 rounded-lg focus:ring-2 focus:ring-blue-500 focus:border-transparent"
              onKeyPress={(e) => e.key === 'Enter' && enrichProfile()}
            />
            <button
              onClick={enrichProfile}
              disabled={loading}
              className="px-8 py-3 bg-blue-600 text-white rounded-lg hover:bg-blue-700 disabled:bg-gray-400 disabled:cursor-not-allowed transition"
            >
              {loading ? 'Enriching...' : 'Enrich'}
            </button>
          </div>
        </div>

        {error && (
          <div className="bg-red-50 border border-red-200 rounded-lg p-4 mb-8">
            <p className="text-red-800">{error}</p>
          </div>
        )}

        {result && (
          <div className="bg-white rounded-lg shadow-lg p-8">
            <div className="flex items-start gap-6 mb-6">
              <img
                src={result.profile_picture}
                alt={result.full_name}
                className="w-24 h-24 rounded-full"
              />
              <div>
                <h2 className="text-2xl font-bold mb-2">{result.full_name}</h2>
                <p className="text-lg text-gray-700 mb-2">{result.headline}</p>
                <p className="text-gray-600">{result.location_full}</p>
              </div>
            </div>

            <div className="grid grid-cols-2 gap-6">
              <div>
                <h3 className="font-semibold text-gray-900 mb-2">Current Position</h3>
                <p className="text-gray-700">{result.current_title}</p>
                <p className="text-gray-600">{result.current_company}</p>
              </div>

              <div>
                <h3 className="font-semibold text-gray-900 mb-2">Experience</h3>
                <p className="text-gray-700">{result.years_experience} years</p>
                <p className="text-gray-600">{result.total_positions} positions</p>
              </div>

              <div>
                <h3 className="font-semibold text-gray-900 mb-2">Skills ({result.total_skills})</h3>
                <div className="flex flex-wrap gap-2">
                  {result.skills.slice(0, 10).map((skill, idx) => (
                    <span
                      key={idx}
                      className="px-3 py-1 bg-blue-100 text-blue-800 rounded-full text-sm"
                    >
                      {skill}
                    </span>
                  ))}
                </div>
              </div>

              <div>
                <h3 className="font-semibold text-gray-900 mb-2">Languages</h3>
                <div className="flex flex-wrap gap-2">
                  {result.languages.map((lang, idx) => (
                    <span
                      key={idx}
                      className="px-3 py-1 bg-green-100 text-green-800 rounded-full text-sm"
                    >
                      {lang}
                    </span>
                  ))}
                </div>
              </div>
            </div>

            <div className="mt-6 pt-6 border-t">
              <h3 className="font-semibold text-gray-900 mb-2">Summary</h3>
              <p className="text-gray-700 leading-relaxed">{result.summary}</p>
            </div>

            <div className="mt-6">
              <a
                href={result.linkedin_url}
                target="_blank"
                rel="noopener noreferrer"
                className="inline-block px-6 py-2 bg-blue-600 text-white rounded-lg hover:bg-blue-700 transition"
              >
                View LinkedIn Profile →
              </a>
            </div>
          </div>
        )}
      </div>
    </div>
  );
}

export default App;
```
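Day 2 budgets time for deployment, but the walkthrough stops at the dashboard. As a hedged sketch, the FastAPI backend can be containerized along these lines (the `requirements.txt` file and the Python base image version are assumptions, not from the setup above, which installed packages directly with pip):

```dockerfile
FROM python:3.12-slim

WORKDIR /app

# Install dependencies first to leverage Docker layer caching
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

COPY . .

# The enrichment service reads the API key from this env var
ENV LINKDAPI_KEY=""

EXPOSE 8000
CMD ["uvicorn", "main:app", "--host", "0.0.0.0", "--port", "8000"]
```

Build and run with `docker build -t lead-gen .` then `docker run -p 8000:8000 -e LINKDAPI_KEY=your_api_key lead-gen`; PostgreSQL and Redis would typically sit alongside it in a docker-compose file.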


