## TL;DR
Get your API token from Settings. Use the Python or JavaScript SDK to start runs and fetch results. Every actor has a REST API endpoint. Runs can be synchronous (wait for results) or asynchronous (check status later).
## Why Use the API?
The web console is great for testing. For production use, you want the API. It lets you:
- Trigger runs from your own applications
- Automate data collection pipelines
- Integrate with your existing tools
- Build custom dashboards and alerts
## Step 1: Get Your API Token
Your API token authenticates requests. Keep it secret like a password.
1. Log in to console.apify.com
2. Go to Settings in the left sidebar
3. Click Integrations
4. Copy your Personal API token
**Warning:** Never share your API token or commit it to public repositories. Use environment variables to store it.
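One way to keep the token out of your code is to read it from an environment variable at startup. A minimal sketch (the variable name `APIFY_TOKEN` is a common convention, not a requirement; any name works):

```python
import os

# Read the token from the environment instead of hard-coding it.
token = os.environ.get("APIFY_TOKEN", "")
if not token:
    print("APIFY_TOKEN is not set; export it before running")
```

You can then pass `token` to the client constructor instead of a literal string, as in the Step 3 examples.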
## Step 2: Install the SDK

### Python

```bash
pip install apify-client
```

### JavaScript / Node.js

```bash
npm install apify-client
```
## Step 3: Run an Actor
### Python Example

```python
from apify_client import ApifyClient

# Initialize the client
client = ApifyClient("YOUR_API_TOKEN")

# Run the Google Maps Scraper
run_input = {
    "searchStringsArray": ["coffee shops in Seattle"],
    "maxCrawledPlaces": 100,
}
run = client.actor("compass/crawler-google-places").call(run_input=run_input)

# Fetch results
dataset = client.dataset(run["defaultDatasetId"])
for item in dataset.iterate_items():
    print(item["title"], item["address"])
```
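If you want the results in a file rather than on stdout, the items come back as plain dicts, so the standard library's `csv` module is enough. A sketch (it assumes the `title` and `address` fields from the example above; other fields are simply ignored):

```python
import csv
import io

def items_to_csv(items, fields=("title", "address")):
    """Render scraped items as CSV text, keeping only the given fields."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(fields), extrasaction="ignore")
    writer.writeheader()
    writer.writerows(items)
    return buf.getvalue()

sample = [{"title": "Cafe Vita", "address": "1005 E Pike St", "rating": 4.5}]
print(items_to_csv(sample))
```

Pass `dataset.iterate_items()` directly as `items` to export a real run.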
### JavaScript Example

```javascript
import { ApifyClient } from 'apify-client';

const client = new ApifyClient({ token: 'YOUR_API_TOKEN' });

const input = {
    searchStringsArray: ['coffee shops in Seattle'],
    maxCrawledPlaces: 100
};
const run = await client.actor('compass/crawler-google-places').call(input);

const { items } = await client.dataset(run.defaultDatasetId).listItems();
items.forEach(item => console.log(item.title, item.address));
```
## Synchronous vs Asynchronous Runs
| Mode | How It Works | Best For |
|---|---|---|
| Synchronous | Wait for run to finish, get results immediately | Small jobs, real-time needs |
| Asynchronous | Start run, check status later, fetch results when done | Large jobs, background processing |
### Asynchronous Example (Python)

```python
from apify_client import ApifyClient

client = ApifyClient("YOUR_API_TOKEN")

# Start the run without waiting
run = client.actor("compass/crawler-google-places").start(
    run_input={"searchStringsArray": ["hotels in Miami"], "maxCrawledPlaces": 500}
)
print(f"Run started with ID: {run['id']}")

# Later, check status and get results
run_info = client.run(run["id"]).get()
if run_info["status"] == "SUCCEEDED":
    dataset = client.dataset(run_info["defaultDatasetId"])
    items = list(dataset.iterate_items())
    print(f"Got {len(items)} results")
```
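In practice you usually poll the status in a loop rather than checking it once. A small helper sketch; `get_status` stands in for the `client.run(run_id).get()["status"]` call so the loop itself is easy to test, and the status names mirror Apify's documented run states (verify them against your account's actual runs):

```python
import time

# Statuses after which a run will not change again (assumed from Apify's docs).
TERMINAL_STATUSES = {"SUCCEEDED", "FAILED", "ABORTED", "TIMED-OUT"}

def wait_for_run(get_status, poll_interval=5.0, timeout=600.0):
    """Call get_status() until it returns a terminal status or we time out."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(poll_interval)
    raise TimeoutError("run did not finish within the timeout")
```

Usage with the client above: `status = wait_for_run(lambda: client.run(run["id"]).get()["status"])`.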
## REST API Endpoints

If you prefer raw HTTP requests, every actor has REST endpoints.

### Start a Run

```http
POST https://api.apify.com/v2/acts/{actorId}/runs
Authorization: Bearer YOUR_API_TOKEN
Content-Type: application/json

{
    "searchStringsArray": ["restaurants in NYC"],
    "maxCrawledPlaces": 50
}
```

### Get Run Status

```http
GET https://api.apify.com/v2/actor-runs/{runId}
Authorization: Bearer YOUR_API_TOKEN
```

### Get Dataset Items

```http
GET https://api.apify.com/v2/datasets/{datasetId}/items
Authorization: Bearer YOUR_API_TOKEN
```
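The same "start a run" call can be made from Python without the SDK. A sketch using only the standard library; it builds the request but does not send it. Note the actor ID format: when an ID contains a slash, as in `compass/crawler-google-places`, the slash is conventionally written as `~` in the URL path (check the API docs for your actor's exact ID):

```python
import json
import urllib.request

actor_id = "compass~crawler-google-places"  # "/" in the ID becomes "~" in the URL
url = f"https://api.apify.com/v2/acts/{actor_id}/runs"
body = json.dumps({
    "searchStringsArray": ["restaurants in NYC"],
    "maxCrawledPlaces": 50,
}).encode("utf-8")

req = urllib.request.Request(
    url,
    data=body,
    headers={
        "Authorization": "Bearer YOUR_API_TOKEN",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req) would actually start the run.
```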
## Webhooks
Instead of polling for status, you can receive a webhook when a run finishes.
1. Go to the actor page
2. Click the "Integrations" tab
3. Add a webhook URL
4. Select the "Run succeeded" event
Apify will POST to your URL with run details when the job completes.
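On your side, the webhook is just an HTTP POST with a JSON body. A minimal receiver sketch; the `eventType` value and `resource` field names below follow Apify's webhook payload format, but verify them against a real delivery before relying on them:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def dataset_id_from_webhook(body: bytes) -> str:
    """Return the dataset ID for a successful run, or "" for other events."""
    payload = json.loads(body)
    if payload.get("eventType") == "ACTOR.RUN.SUCCEEDED":
        return payload.get("resource", {}).get("defaultDatasetId", "")
    return ""

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        dataset_id = dataset_id_from_webhook(self.rfile.read(length))
        if dataset_id:
            print("Run finished; fetch results from dataset", dataset_id)
        self.send_response(200)
        self.end_headers()

# HTTPServer(("", 8080), WebhookHandler).serve_forever()  # would start listening
```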
## Common Questions

**Q: What is the rate limit?**
A: 200 requests per second per token. Most use cases stay well under this limit.

**Q: Can I run multiple actors in parallel?**
A: Yes. Start multiple async runs. Your plan determines how many can run concurrently (5 for Starter, 25 for Scale).

**Q: How long are results stored?**
A: Depends on your plan. Free: 7 days. Starter: 14 days. Scale: 30 days. Download results before they expire.