LinkedIn has evolved into the world's premier platform for professional content, with millions of posts published daily by thought leaders, companies, and professionals. Learning how to scrape LinkedIn posts unlocks powerful insights for competitive intelligence, content strategy, lead generation, and market research. However, extracting post data comes with challenges—from account bans and anti-bot detection to legal concerns and technical complexity.
This comprehensive guide will show you everything you need to know about LinkedIn post scraping, from understanding why it matters to discovering the most effective tools and methods. We'll explore DIY approaches, discuss their limitations, and reveal why modern developers are switching to LinkdAPI—the most advanced solution for extracting LinkedIn posts at scale.
Why Scraping LinkedIn Posts Matters
Scraping LinkedIn posts provides invaluable business intelligence that can transform your marketing, sales, and competitive strategies. Here's why extracting post data has become essential for data-driven organizations:
1. Competitive Intelligence and Market Analysis
LinkedIn insights scraping allows you to monitor competitors' content strategies in real-time:
- Track competitor messaging: Understand what resonates with their audience
- Analyze engagement patterns: Identify which post types (text, images, videos, polls) generate the most interaction
- Monitor product launches: Get early signals of new features or offerings
- Benchmark performance: Compare your engagement metrics against industry leaders
- Identify trending topics: Discover what subjects are gaining traction in your industry
2. Lead Generation and Prospect Research
Scraping LinkedIn for leads through post engagement data is highly effective:
- Identify engaged prospects: Find people who actively comment on industry-related posts
- Discover decision-makers: Track executives sharing thought leadership content
- Monitor job changes: Detect career transitions through congratulatory post comments
- Build targeted lists: Create prospect databases based on content interests
- Warm lead identification: Find prospects already discussing your product category
3. Content Strategy and Inspiration
Extract LinkedIn posts to fuel your own content creation:
- Discover viral content patterns: Analyze which headlines and formats drive engagement
- Identify content gaps: Find topics your competitors haven't covered
- Track hashtag performance: Understand which tags amplify reach
- Optimal posting times: Determine when your audience is most active
- Format effectiveness: Learn whether carousels, videos, or articles perform best
4. Influencer and Partnership Discovery
LinkedIn post automation speeds up influencer research:
- Find micro-influencers: Identify niche experts with engaged audiences
- Track engagement rates: Measure authentic influence beyond follower counts
- Partnership opportunities: Discover potential collaboration partners
- Brand advocates: Identify users naturally promoting your brand
- Content collaboration: Find creators for partnership campaigns
5. Sentiment Analysis and Brand Monitoring
A LinkedIn data scraper also supports reputation management:
- Monitor brand mentions: Track when your company is discussed in posts or comments
- Sentiment tracking: Analyze positive, negative, and neutral sentiment
- Crisis detection: Get early warnings of potential PR issues
- Customer feedback: Gather product feedback shared publicly
- Industry sentiment: Understand overall mood toward your market sector
6. Recruitment and Talent Intelligence
LinkedIn data collection serves HR and recruiting teams:
- Identify passive candidates: Find professionals sharing industry expertise
- Company culture insights: Analyze employee posts about workplace satisfaction
- Skill trends: Track which skills are frequently discussed
- Competitor hiring: Monitor job change announcements
- Talent mapping: Build databases of potential future hires
Is It Legal to Scrape LinkedIn Posts?
Before you scrape LinkedIn content, understanding the legal landscape is crucial. Post scraping exists in a complex legal and ethical space:
The Legal Framework
Public Data Scraping Precedents:
✅ hiQ Labs v. LinkedIn (2019-2022)
- The U.S. Court of Appeals for the Ninth Circuit ruled that scraping publicly available data doesn't violate the Computer Fraud and Abuse Act (CFAA)
- Accessing public information without bypassing authentication is generally legal
- After the Supreme Court vacated and remanded the case in light of Van Buren (2021), the Ninth Circuit reaffirmed its ruling in 2022
✅ Public vs. Private Data
- Posts visible without login are generally considered public information
- Data that doesn't require authentication is typically fair game
- U.S. courts have generally upheld the right to access public web data
LinkedIn's Terms of Service
⚠️ Platform Restrictions:
- LinkedIn's User Agreement explicitly prohibits automated scraping
- Using LinkedIn accounts for scraping violates Terms of Service
- LinkedIn reserves the right to ban accounts and pursue legal action
- Automated data collection using accounts risks permanent suspension
Data Protection Regulations
🔒 Compliance Requirements:
GDPR (Europe):
- Requires lawful basis for processing personal data
- Users have rights to access, deletion, and portability
- Organizations must document data processing activities
- Fines up to €20 million or 4% of global revenue
CCPA (California):
- California consumers have data rights
- Businesses must disclose data collection practices
- Consumers can opt-out of data sales
- Penalties for non-compliance
Best Practices for Legal Scraping
To scrape LinkedIn posts responsibly:
- Only access public data: Don't attempt to scrape private profiles or content behind authentication
- Avoid using LinkedIn accounts: Account-based scraping violates ToS and risks bans
- Respect robots.txt: While not legally binding, it shows good faith
- Implement rate limiting: Don't overwhelm LinkedIn's servers
- Have legitimate business purposes: Use data for lawful business intelligence, not harassment or spam
- Comply with data regulations: Ensure GDPR, CCPA, and local law compliance
- Provide opt-out mechanisms: Allow individuals to request data removal
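Two of these practices, honoring robots.txt and rate limiting, can be sketched with Python's standard library alone. The rules and delay below are illustrative assumptions, not LinkedIn's actual robots.txt:

```python
import time
from urllib.robotparser import RobotFileParser

# Illustrative robots.txt rules, parsed inline; in practice you would
# call rp.set_url("https://example.com/robots.txt") and rp.read()
rp = RobotFileParser()
rp.parse([
    "User-agent: *",
    "Disallow: /private/",
])

def polite_fetch(urls, fetch, delay_seconds=2.0):
    """Fetch only robots.txt-allowed URLs, pausing between requests."""
    results = []
    for url in urls:
        if not rp.can_fetch("*", url):
            continue  # skip disallowed paths as a good-faith gesture
        results.append(fetch(url))
        time.sleep(delay_seconds)  # basic rate limiting
    return results
```

While robots.txt isn't legally binding, a gate like this documents good faith and keeps request volume predictable.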
The Risks of Account-Based Scraping
Traditional methods using LinkedIn accounts carry severe risks:
- Permanent account bans with no appeal process
- Loss of professional network and connections
- IP address blacklisting affecting your organization
- Legal liability for ToS violations
- Reputation damage if exposed publicly
Methods to Scrape LinkedIn Posts
Let's explore various approaches to extract LinkedIn posts, from manual techniques to modern API solutions.
Method 1: Manual Copy-Paste (Not Scalable)
The most basic approach involves manually saving post content.
How it works:
- Visit LinkedIn profiles or company pages
- Scroll through posts
- Copy post text, author info, and metrics
- Paste into spreadsheets
Pros:
- No technical skills needed
- Free (except time cost)
- Zero ban risk
Cons:
- ⏱️ Extremely time-consuming (5-10 minutes per post)
- 📊 High error rate and inconsistency
- 📈 Impossible to scale (100 posts = 10+ hours)
- 💤 Soul-crushingly tedious
- 📉 Can't capture engagement data over time
Verdict: Only viable for analyzing 5-10 posts. Useless for any serious competitive intelligence.
Method 2: Browser Extensions
Chrome extensions offer basic LinkedIn scraping tools functionality:
- Instant Data Scraper: General-purpose extension
- Data Miner: Template-based scraping
- LinkedHelper: LinkedIn-specific automation
How it works:
- Install browser extension
- Visit LinkedIn posts
- Activate scraping extension
- Export to CSV or JSON
Pros:
- No coding required
- Works in browser
- Quick setup
Cons:
- ⚠️ Uses your LinkedIn account (ban risk)
- 🐌 Very limited scalability
- 🔒 Breaks when LinkedIn updates UI
- 💰 Many require paid subscriptions ($30-100/month)
- 📉 Limited data fields
- ⏰ Manual triggering required
Verdict: Suitable only for occasional small-scale scraping (<50 posts), but risky.
Method 3: Python with Selenium (DIY Approach)
Developers attempting to build a LinkedIn post extractor with Selenium face significant challenges.
Basic LinkedIn Post Scraper with Selenium
```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.chrome.options import Options
import time


class LinkedInPostScraper:
    def __init__(self, email, password):
        self.email = email
        self.password = password
        self.driver = None

    def setup_driver(self):
        """Initialize Chrome with stealth settings"""
        chrome_options = Options()
        chrome_options.add_argument('--disable-blink-features=AutomationControlled')
        chrome_options.add_argument('--user-agent=Mozilla/5.0 (Windows NT 10.0; Win64; x64)')
        chrome_options.add_experimental_option("excludeSwitches", ["enable-automation"])

        self.driver = webdriver.Chrome(options=chrome_options)

    def login(self):
        """Login to LinkedIn (HIGH BAN RISK!)"""
        self.driver.get('https://www.linkedin.com/login')
        time.sleep(2)

        # Enter credentials
        self.driver.find_element(By.ID, 'username').send_keys(self.email)
        self.driver.find_element(By.ID, 'password').send_keys(self.password)
        self.driver.find_element(By.CSS_SELECTOR, 'button[type="submit"]').click()

        time.sleep(5)

        # Check for CAPTCHA
        if 'checkpoint' in self.driver.current_url:
            print("⚠️ CAPTCHA detected! Manual intervention required.")
            input("Complete CAPTCHA and press Enter...")

    def scrape_profile_posts(self, profile_url):
        """
        Scrape posts from a profile.
        WARNING: Extremely fragile! Breaks constantly!
        """
        self.driver.get(f"{profile_url}/recent-activity/all/")
        time.sleep(3)

        posts = []

        # Scroll to load more posts (lazy loading)
        scroll_pause = 2
        last_height = self.driver.execute_script("return document.body.scrollHeight")

        for _ in range(5):  # Scroll 5 times
            self.driver.execute_script("window.scrollTo(0, document.body.scrollHeight);")
            time.sleep(scroll_pause)

            new_height = self.driver.execute_script("return document.body.scrollHeight")
            if new_height == last_height:
                break
            last_height = new_height

        # Try to extract post data (CSS selectors change constantly!)
        try:
            post_elements = self.driver.find_elements(
                By.CSS_SELECTOR,
                'div.feed-shared-update-v2'
            )

            for post_elem in post_elements[:10]:  # Limit to first 10
                try:
                    # Extract post text (selectors are fragile!)
                    post_text = post_elem.find_element(
                        By.CSS_SELECTOR,
                        'span.break-words'
                    ).text

                    # Try to get engagement metrics (often fails)
                    try:
                        reactions = post_elem.find_element(
                            By.CSS_SELECTOR,
                            'span.social-details-social-counts__reactions-count'
                        ).text
                    except Exception:
                        reactions = "0"

                    posts.append({
                        'text': post_text[:200],  # Truncate
                        'reactions': reactions
                    })

                except Exception:
                    continue

            return posts

        except Exception as e:
            print(f"Error scraping posts: {e}")
            return []

    def close(self):
        if self.driver:
            self.driver.quit()


# Usage (DON'T DO THIS - WILL GET YOUR ACCOUNT BANNED!)
"""
scraper = LinkedInPostScraper('[email protected]', 'your_password')
scraper.setup_driver()
scraper.login()

posts = scraper.scrape_profile_posts('https://www.linkedin.com/in/ryanroslansky')
print(f"Scraped {len(posts)} posts")

scraper.close()
"""
```
Method 4: Python BeautifulSoup with Requests
Attempting to scrape LinkedIn content without a browser:
```python
import requests
from bs4 import BeautifulSoup


class LinkedInPostRequestsScraper:
    def __init__(self, li_at_cookie):
        """Initialize with LinkedIn session cookie"""
        self.session = requests.Session()
        self.session.cookies.set('li_at', li_at_cookie)
        self.session.headers.update({
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
            'Accept': 'text/html,application/xhtml+xml'
        })

    def get_posts(self, profile_username):
        """Attempt to fetch posts (usually fails)"""
        url = f'https://www.linkedin.com/in/{profile_username}/recent-activity/all/'

        try:
            response = self.session.get(url, timeout=10)

            if response.status_code == 429:
                print("⚠️ Rate limited!")
                return None
            elif response.status_code == 999:
                print("⚠️ LinkedIn detected automation!")
                return None

            # Try to parse HTML (good luck with this!)
            soup = BeautifulSoup(response.text, 'html.parser')

            # LinkedIn's post HTML structure is incredibly complex
            # and changes constantly. This approach rarely works.

            return []

        except Exception as e:
            print(f"Request failed: {e}")
            return None


# Usage (also risky and unreliable)
"""
scraper = LinkedInPostRequestsScraper('your_li_at_cookie')
posts = scraper.get_posts('ryanroslansky')
"""
```
The Critical Problems with DIY Post Scraping
While the above examples show technical feasibility, they reveal fundamental issues making DIY web scraping LinkedIn posts unsustainable:
⛔ Problem 1: Account Bans (The #1 Risk)
The harsh reality: LinkedIn's anti-scraping systems are sophisticated:
- Permanent account bans with zero warnings or appeals
- Loss of your entire professional network (connections, messages, posts, history)
- IP blacklisting affecting your whole company
- No distinction between personal and business accounts
Real story: "I built a LinkedIn post scraper for market research. It worked great for 3 days. On day 4, my account with 1,500+ connections was permanently banned. LinkedIn support wouldn't even review my case."
🍪 Problem 2: Cookie and Session Hell
Managing authentication is a nightmare:
- Cookies expire randomly (sometimes hourly)
- LinkedIn detects automation through behavioral analysis
- Multi-factor authentication breaks automation entirely
- Geographic inconsistencies trigger security alerts
- Session management requires constant monitoring
🤖 Problem 3: CAPTCHA Gauntlet
LinkedIn deploys aggressive CAPTCHA challenges:
- reCAPTCHA v3 scoring on every request
- Visual challenges require human solving
- CAPTCHA solving services cost $1-3 per 1,000 solves
- Frequent interruptions halt entire pipelines
- Detection patterns from repeated solving
🐌 Problem 4: Pathetically Slow Performance
Selenium-based scraping is glacially slow:
- Browser startup: 5-10 seconds
- Page load: 3-5 seconds per profile
- Scroll waiting: 2-3 seconds per scroll to load lazy content
- Anti-detection delays: 3-5 seconds between actions
- Post parsing: 1-2 seconds per post
Reality check:
- Scraping 100 posts this way: 45-60 minutes
- At scale (10,000 posts): 75-100 hours of continuous operation
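Those reality-check figures follow directly from the implied per-post cost; a quick sanity check:

```python
# Derive per-post cost from the figures above, then scale to 10,000 posts
posts_measured = 100
minutes_low, minutes_high = 45, 60

sec_per_post_low = minutes_low * 60 / posts_measured    # 27s per post
sec_per_post_high = minutes_high * 60 / posts_measured  # 36s per post

hours_low = 10_000 * sec_per_post_low / 3600
hours_high = 10_000 * sec_per_post_high / 3600
print(f"~{sec_per_post_low:.0f}-{sec_per_post_high:.0f}s per post, "
      f"{hours_low:.0f}-{hours_high:.0f} hours for 10,000 posts")
```

At 27-36 seconds per post, nearly all of it waiting on page loads and anti-detection sleeps, there is no way to make browser automation fast.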
🔧 Problem 5: Constant Maintenance Nightmare
LinkedIn updates constantly:
- HTML structure changes monthly
- CSS selectors get refactored without notice
- JavaScript rendering adds layers of complexity
- A/B testing means different users see different layouts
- Developer time: 60%+ spent fixing broken code
💸 Problem 6: Hidden Infrastructure Costs
Building production LinkedIn scraping tools requires:
- Residential proxies: $10-20 per GB ($200-500/month for moderate use)
- CAPTCHA solving: $50-150/month for moderate volume
- Cloud servers: $50-200/month for headless browsers
- Monitoring: $20-50/month for uptime tracking
- Storage: $20-100/month for databases
- Developer time: $5,000-15,000 for initial build + ongoing maintenance
Total monthly cost: $500-2,500+ for a medium-scale operation
📊 Problem 7: Incomplete and Inconsistent Data
DIY scrapers struggle with:
- Missing engagement metrics: Comments and shares often don't load
- Truncated text: "See more" links require clicking
- Media handling: Images and videos need separate extraction
- Comment threads: Nested replies are difficult to capture
- Reaction breakdowns: LinkedIn hides detailed reaction data
- Timestamp parsing: Various formats create inconsistencies
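The timestamp issue alone forces every DIY scraper to carry normalization glue. A minimal sketch, assuming relative strings like "2h" or "3d" (the exact formats LinkedIn renders vary and change):

```python
from datetime import datetime, timedelta, timezone

# Map relative-timestamp suffixes to timedelta keywords (assumed format)
_UNITS = {"m": "minutes", "h": "hours", "d": "days", "w": "weeks"}

def parse_relative_timestamp(raw, now=None):
    """Turn strings like '2h' or '3d' into absolute UTC datetimes."""
    now = now or datetime.now(timezone.utc)
    value, unit = int(raw[:-1]), raw[-1]
    if unit not in _UNITS:
        raise ValueError(f"Unrecognized timestamp format: {raw!r}")
    return now - timedelta(**{_UNITS[unit]: value})
```

Even this toy version breaks the moment LinkedIn renders "3 days ago" instead of "3d", which is exactly the maintenance trap described above.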
⚖️ Problem 8: Legal and Compliance Exposure
Account-based scraping creates legal risks:
- Terms of Service violations are actionable
- CFAA implications in the United States
- GDPR non-compliance risks massive fines
- Data breach liability if scraped data is compromised
- LinkedIn's legal precedent of suing scrapers
The Modern Solution: LinkdAPI for Effortless Post Scraping
After experiencing the pain of DIY scraping, the solution is clear: use LinkdAPI—the most advanced unofficial LinkedIn API with comprehensive post scraping capabilities.
Why LinkdAPI is the Best LinkedIn Post Scraper
LinkdAPI offers complete post data extraction through direct LinkedIn API endpoints—the same APIs powering LinkedIn's mobile and web apps.
✅ No Account or Authentication Required
Complete freedom:
- No LinkedIn account needed
- No cookie extraction or management
- No session maintenance
- No login flows or 2FA
- Zero account ban risk
✅ Zero CAPTCHA Challenges
Uninterrupted operation:
- Built-in CAPTCHA bypassing
- No manual intervention
- No third-party services
- No pipeline delays
- Runs 24/7 smoothly
✅ Blazing Fast Performance
Real speed at scale:
- Response times: 300-900ms average per post
- No browser overhead: Direct API calls
- Concurrent extraction: Process multiple profiles simultaneously
- Bulk operations: Extract thousands of posts in minutes
Performance comparison:
- Selenium scraper: ~60 seconds per profile (5-10 posts)
- LinkdAPI: ~2-3 seconds for 10 posts
That's a 20-30x speed improvement!
✅ Comprehensive Post Data
Extract everything:
- Complete post text (including "see more" content)
- All engagement metrics (reactions, comments, shares)
- Detailed reaction breakdowns (Like, Love, Support, Celebrate, Insightful, Curious)
- Comment threads with full text and author data
- Post attachments (images, videos, documents)
- Hashtags and mentions
- Post visibility and timestamp
- Author information
- Reshare and repost data
✅ Multiple Post Scraping Methods
Flexible data access:
- Featured posts: Get pinned/featured posts from any profile
- All posts: Extract complete post history
- Post details: Get comprehensive info for specific posts
- Post comments: Retrieve full comment threads
- Post likes: Extract users who reacted
- Bulk extraction: Process multiple profiles concurrently
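These access methods also compose in async code. As a sketch, using the client methods shown in the examples below and a hypothetical helper name, a post's details, comments, and reactions can be fetched concurrently:

```python
import asyncio

async def collect_post_bundle(client, post_urn):
    """Hypothetical helper: fan out one post's detail, comment, and
    reaction lookups concurrently instead of awaiting them in sequence."""
    info, comments, likes = await asyncio.gather(
        client.get_post_info(post_urn),
        client.get_post_comments(urn=post_urn, start=0, count=20),
        client.get_post_likes(urn=post_urn, start=0),
    )
    return {"info": info, "comments": comments, "likes": likes}
```

Because the three requests are independent, `asyncio.gather` cuts each post's wall-clock time to roughly that of the slowest single call.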
✅ Clean, Structured JSON
Developer-friendly structured formatting:
```json
{
  "success": true,
  "statusCode": 200,
  "message": "Data retrieved successfully",
  "errors": null,
  "data": {
    "posts": [
      {
        "text": "Excited to announce our new product launch...",
        "url": "https://www.linkedin.com/feed/update/urn:li:activity:7392580420552273920",
        "urn": "urn:li:activity:7392580420552273920",
        "author": {
          "name": "Ryan Roslansky",
          "headline": "CEO at LinkedIn",
          "urn": "ACoAAAHT6VYBz_yVpo-jE-Or8WNKjArXMd-S6KM",
          "id": "urn:li:member:30665046",
          "url": "https://www.linkedin.com/in/ryanroslansky",
          "profilePictureURL": "https://media.licdn.com/dms/image/..."
        },
        "postedAt": {
          "timestamp": 1762528519762,
          "fullDate": "2025-11-07 15:15:19.00 +0000 UTC",
          "relativeDay": "1 day"
        },
        "edited": false,
        "engagements": {
          "totalReactions": 1543,
          "commentsCount": 87,
          "repostsCount": 234,
          "reactions": [
            {
              "reactionType": "LIKE",
              "reactionCount": 890
            },
            {
              "reactionType": "EMPATHY",
              "reactionCount": 234
            },
            {
              "reactionType": "PRAISE",
              "reactionCount": 156
            }
          ]
        },
        "mediaContent": null,
        "resharedPostContent": null
      }
    ]
  }
}
```
Getting Started with LinkdAPI Post Scraping
Let's explore practical examples using LinkdAPI's async implementation—the recommended approach.
Installation
```shell
pip install linkdapi
```
Basic Post Extraction (Async - Recommended)
```python
import asyncio
from linkdapi import AsyncLinkdAPI

async def extract_featured_posts():
    """
    Extract featured (pinned) posts from a profile.
    Docs: https://linkdapi.com/docs?endpoint=/api/v1/posts/featured&folder=Posts
    """
    client = AsyncLinkdAPI("your_api_key")

    try:
        # Get featured posts - returns array directly in data
        response = await client.get_featured_posts("ryanroslansky")
        featured_posts = response.get('data', [])

        print(f"Found {len(featured_posts)} featured posts\n")

        for post in featured_posts:
            print(f"Title: {post.get('title', 'No title')}")
            print(f"Text: {post['text'][:100]}...")
            print(f"Type: {post['type']}")
            print(f"Likes: {post['likesCount']}")
            print(f"Comments: {post['commentsCount']}")
            if post.get('mediaContent'):
                print(f"Media: {len(post['mediaContent'])} items")
            print("-" * 80)

        return featured_posts

    except Exception as e:
        print(f"Error: {e}")
        return None

# Run
asyncio.run(extract_featured_posts())
```
Extracting All Posts from a Profile
```python
import asyncio
from linkdapi import AsyncLinkdAPI

async def extract_all_posts(profile_urn, max_posts=50):
    """
    Extract all posts from a profile with pagination.
    Docs: https://linkdapi.com/docs?endpoint=/api/v1/posts/all&folder=Posts
    """
    client = AsyncLinkdAPI("your_api_key")

    try:
        all_posts = []
        cursor = ''
        start = 0

        while len(all_posts) < max_posts:
            # Get batch of posts
            response = await client.get_all_posts(
                urn=profile_urn,
                cursor=cursor,
                start=start
            )

            # Extract posts from response
            data = response.get('data', {})
            posts = data.get('posts', [])

            if not posts:
                break

            all_posts.extend(posts)
            print(f"Extracted {len(all_posts)} posts so far...")

            # Update cursor for pagination
            cursor = data.get('cursor', '')
            start += len(posts)

            # Check if we got a cursor for next page
            if not cursor:
                break

            # Rate limiting
            await asyncio.sleep(0.5)

        print(f"\n✓ Total posts extracted: {len(all_posts)}")

        # Analyze engagement
        if all_posts:
            avg_reactions = sum(p['engagements']['totalReactions'] for p in all_posts) / len(all_posts)
            avg_comments = sum(p['engagements']['commentsCount'] for p in all_posts) / len(all_posts)

            print(f"Average reactions per post: {avg_reactions:.1f}")
            print(f"Average comments per post: {avg_comments:.1f}")

        return all_posts

    except Exception as e:
        print(f"Error: {e}")
        return []

# Usage - first get profile URN, then extract posts
async def main():
    client = AsyncLinkdAPI("your_api_key")

    # Get profile to obtain URN
    profile = await client.get_profile_overview("ryanroslansky")
    profile_urn = profile['profileUrn']

    # Extract all posts
    posts = await extract_all_posts(profile_urn, max_posts=100)

asyncio.run(main())
```
Extracting Post Details and Comments
```python
import asyncio
from linkdapi import AsyncLinkdAPI

async def extract_post_with_comments(post_urn):
    """
    Extract detailed post information including full comment threads.
    Docs: https://linkdapi.com/docs?endpoint=/api/v1/posts/info&folder=Posts
    Docs: https://linkdapi.com/docs?endpoint=/api/v1/posts/comments&folder=Posts
    """
    client = AsyncLinkdAPI("your_api_key")

    try:
        # Get post details
        post_response = await client.get_post_info(post_urn)
        post_info = post_response.get('data', {}).get('post', {})

        print("=== POST DETAILS ===")
        print(f"Author: {post_info['author']['name']}")
        print(f"Posted: {post_info['postedAt']['fullDate']}")
        print(f"Text: {post_info['text'][:200]}...")
        print("\nEngagement:")
        print(f"  Reactions: {post_info['engagements']['totalReactions']}")
        print(f"  Comments: {post_info['engagements']['commentsCount']}")
        print(f"  Reposts: {post_info['engagements']['repostsCount']}")

        # Get detailed reactions breakdown
        if post_info['engagements'].get('reactions'):
            print("\nReaction Breakdown:")
            for reaction in post_info['engagements']['reactions']:
                print(f"  {reaction['reactionType']}: {reaction['reactionCount']}")

        # Get comments
        comments_response = await client.get_post_comments(
            urn=post_urn,
            start=0,
            count=20  # Get first 20 comments
        )

        comments = comments_response.get('data', {}).get('comments', [])

        print(f"\n=== COMMENTS ({len(comments)}) ===")

        for i, comment in enumerate(comments, 1):
            author = comment.get('author', {})
            print(f"\n{i}. {author.get('name', 'Unknown')}")
            print(f"   {comment.get('comment', '')[:100]}...")
            print(f"   Likes: {comment.get('engagements', {}).get('totalReactions', 0)}")
            print(f"   Posted: {comment.get('createdAt', 'Unknown')}")

        return {
            'post': post_info,
            'comments': comments
        }

    except Exception as e:
        print(f"Error: {e}")
        return None

# Usage
asyncio.run(extract_post_with_comments("urn:li:activity:7259185939057438720"))
```
Bulk Post Extraction from Multiple Profiles
```python
import asyncio
from linkdapi import AsyncLinkdAPI
import json

async def bulk_extract_posts(usernames, posts_per_profile=20):
    """
    Extract posts from multiple profiles concurrently.
    This is the RECOMMENDED high-performance approach.
    """
    client = AsyncLinkdAPI("your_api_key")

    async def extract_profile_posts(username):
        """Helper to extract posts from one profile"""
        try:
            # Get profile URN
            profile = await client.get_profile_overview(username)
            urn = profile['profileUrn']

            # Get posts
            response = await client.get_all_posts(urn=urn, start=0)
            posts = response.get('data', {}).get('posts', [])

            # Limit to specified number
            posts = posts[:posts_per_profile]

            print(f"✓ Extracted {len(posts)} posts from {profile['firstName']} {profile['lastName']}")

            return {
                'username': username,
                'profile': {
                    'name': f"{profile['firstName']} {profile['lastName']}",
                    'headline': profile['headline']
                },
                'posts': posts
            }

        except Exception as e:
            print(f"✗ Failed {username}: {e}")
            return None

    # Execute all extractions concurrently
    tasks = [extract_profile_posts(username) for username in usernames]
    results = await asyncio.gather(*tasks)

    # Filter out failures
    successful = [r for r in results if r is not None]

    # Calculate statistics
    total_posts = sum(len(r['posts']) for r in successful)

    print("\n=== BULK EXTRACTION COMPLETE ===")
    print(f"Profiles processed: {len(successful)}/{len(usernames)}")
    print(f"Total posts extracted: {total_posts}")

    return successful

# Example: Extract posts from multiple thought leaders
usernames = [
    "ryanroslansky",
    "jeffweiner08",
    "satyanadella",
    "sundarpichai",
    "tim-cook",
    # Add hundreds more...
]

results = asyncio.run(bulk_extract_posts(usernames, posts_per_profile=10))

# Save to file
with open('linkedin_posts_bulk.json', 'w') as f:
    json.dump(results, f, indent=2)

print("✓ Results saved to linkedin_posts_bulk.json")
```
Extracting Post Likes (Who Reacted)
```python
import asyncio
from linkdapi import AsyncLinkdAPI

async def extract_post_likers(post_urn):
    """
    Extract users who reacted to a post.
    Docs: https://linkdapi.com/docs?endpoint=/api/v1/posts/likes&folder=Posts
    """
    client = AsyncLinkdAPI("your_api_key")

    try:
        # Get users who liked the post
        response = await client.get_post_likes(urn=post_urn, start=0)
        data = response.get('data', {})
        likers = data.get('likes', [])
        total_likes = data.get('totalLikes', 0)

        print(f"Found {total_likes} total reactions")
        print(f"Showing {len(likers)} reactions\n")

        for liker in likers[:20]:  # Show first 20
            actor = liker.get('actor', {})
            print(f"- {actor.get('name', 'Unknown')}")
            print(f"  {actor.get('headline', '')}")
            print(f"  Reaction: {liker.get('reactionType', 'LIKE')}")
            print()

        return likers

    except Exception as e:
        print(f"Error: {e}")
        return []

# Usage
asyncio.run(extract_post_likers("urn:li:activity:7259185939057438720"))
```


