How to fully automate LinkedIn lead search at any scale

Are you looking for a solution to integrate LinkedIn leads into your sales process automatically? In this article, I'll show you how to leverage our API to scrape leads from LinkedIn Sales Navigator searches programmatically—without using your own LinkedIn account, without providing any cookies, and without requiring a browser extension. All you need to provide are the search criteria.

Code demonstration

For instance, let's say you need a lead list with the following criteria:

  • Job title includes: owner, founder, or director
  • Location: United States
  • Industry: Software Development
  • Company headcount range: 11-200
  • For more filters, check out the employee search filters page

Below is how you can do it easily in Python, step by step.

import requests
import time

# Replace this with your actual API key
api_key = "YOUR_API_KEY"

# API base URL
api_host = "api.infoplug.io"

Step 1: Initiate the Search

def initiate_search():
    url = f"https://{api_host}/search-leads"
    payload = {
        "geo_codes": [103644278],  # United States
        "title_keywords": ["Owner", "Founder", "Director"],
        "industry_codes": [4],  # Software Development
        "company_headcounts": ["11-50", "51-200"],
        "limit": 50  # set max number of results to be scraped
    }
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }

    response = requests.post(url, json=payload, headers=headers)
    response_data = response.json()
    print("Initiate Search Response:", response_data)
    return response_data.get("request_id")
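
A quick note on robustness: the function above parses the JSON body regardless of the HTTP status, so an invalid API key or a rejected payload only surfaces later as a missing request_id. If you prefer failures to be loud, you can wrap the call in a small helper such as the hypothetical post_json below (a convenience wrapper of my own, not part of the API), which raises on 4xx/5xx responses before parsing:

def post_json(url, payload):
    # hypothetical helper: POST JSON and fail loudly on HTTP errors
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json"
    }
    response = requests.post(url, json=payload, headers=headers)
    response.raise_for_status()  # raises requests.HTTPError on 4xx/5xx
    return response.json()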

Step 2: Check Search Progress

def check_search_status(request_id):
    url = f"https://{api_host}/check-search-status"
    querystring = {"request_id": request_id}
    headers = {
        "Authorization": f"Bearer {api_key}"
    }

    response = requests.get(url, headers=headers, params=querystring)
    response_data = response.json()
    print("Check Search Status Response:", response_data)
    return response_data.get("status")  # pending, processing, or done
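
In Step 4 below, the script polls this endpoint every 30 seconds until the status becomes done. If you would rather cap the wait instead of polling indefinitely, a bounded variant could look like the sketch below; the 30-second interval and 30-minute ceiling are arbitrary choices of mine, not API requirements:

def wait_until_done(request_id, poll_interval=30, max_wait=1800):
    # poll until the search is done or max_wait seconds have elapsed
    waited = 0
    while waited < max_wait:
        if check_search_status(request_id) == "done":
            return True
        time.sleep(poll_interval)
        waited += poll_interval
    return False  # timed out; the search may still complete later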

Step 3: Retrieve Search Results

def get_search_results(request_id):
    url = f"https://{api_host}/get-search-results"
    headers = {
        "Authorization": f"Bearer {api_key}"
    }

    all_results = []
    page = 1

    while True:
        querystring = {"request_id": request_id, "page": page}
        response = requests.get(url, headers=headers, params=querystring)
        response_data = response.json()

        if response_data.get("data"):
            all_results.extend(response_data.get("data"))
            page += 1
        else:
            # no more results
            break

    return all_results

Step 4: Put It All Together

def main():
    # initiate search
    request_id = initiate_search()
    if not request_id:
        print("Failed to initiate search")
        return

    # monitor search progress
    while True:
        status = check_search_status(request_id)
        if status == "done":
            # search completed
            break
        # sleep for a while before polling again
        time.sleep(30)

    # ready to fetch results
    results = get_search_results(request_id)
    print(f"Total Results Fetched: {len(results)}")
    print("Results:", results)


if __name__ == "__main__":
    main()
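
Once the results are in memory, you will likely want to persist them or push them into your CRM. The exact fields of each lead depend on the API response, so the snippet below makes no assumptions about field names; it simply writes whatever keys come back into a CSV file. A minimal sketch:

import csv

def save_results_to_csv(results, filename="leads.csv"):
    # assumes each result is a flat dict; nested fields would need extra handling
    if not results:
        return
    fieldnames = sorted({key for result in results for key in result})
    with open(filename, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=fieldnames)
        writer.writeheader()
        writer.writerows(results)

You could call save_results_to_csv(results) at the end of main() once the results have been fetched.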

Prerequisites

Before running the script, ensure you have:

  1. At least a PRO plan to use the search feature.
  2. Python installed on your system.
  3. The requests library installed. You can install it using:
pip install requests

How much does it cost?

Each stage of the search process costs a different number of credits:

  1. Search initiation: 50 credits. Even if the search returns no results, you'll still be charged 50 credits.
  2. Status monitoring: free of charge. You can check it as frequently as needed until the job is complete.
  3. Result fetching: each result costs 1 credit.

For example, if your search yields 1,000 results, the total credit cost would be:

50 credits + (1,000 results × 1 credit) = 1,050 credits.
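
If you want to estimate the spend before launching a search, the arithmetic is simple enough to fold into the script. The helper below is just a convenience for this tutorial, not an API call:

def estimate_credits(num_results, initiation_cost=50, cost_per_result=1):
    # 50 credits to start the search, plus 1 credit per fetched result
    return initiation_cost + num_results * cost_per_result

print(estimate_credits(1000))  # 1050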