Claude Code + Salesforce API: Query and Update Records With AI in 2026
By Kushal Magar · April 26, 2026 · 14 min read
Key Takeaway
Claude Code can query and update Salesforce records via the REST API, run composite requests to batch up to 25 operations into a single API call, and process 50,000+ records using Bulk API v2 — all without a middleware layer. The key steps are: create a Connected App, authenticate via OAuth 2.0, then let Claude Code write and execute SOQL queries, CRUD calls, and asynchronous bulk jobs.
Most teams interact with Salesforce through clicks and dashboards. Claude Code gives you a faster path: describe what you need in plain language and let the AI construct the API call, handle authentication, parse the response, and write the result back — all in one session.
This guide covers the Claude Code Salesforce API workflow end-to-end. It starts with OAuth setup, walks through SOQL queries and CRUD operations, explains when to use composite requests versus the Bulk API, and ends with a pattern for building automated data pipelines that run on a schedule without human intervention. If you have read the Claude Code Salesforce integration guide, this is the deeper API layer that sits underneath that setup.
TL;DR
- Authentication: Create a Salesforce Connected App with OAuth 2.0. Use JWT Bearer or Username-Password flow for unattended scripts.
- SOQL queries: Ask Claude Code to write and run SOQL — it handles the REST endpoint, response pagination, and data extraction.
- CRUD: REST API handles create, read, update, and delete on any standard or custom object.
- Composite requests: Bundle up to 25 API calls into one HTTP request. Cuts API limit usage by up to 90%.
- Bulk API v2: For 10,000+ records — async jobs, CSV upload, no per-record API calls.
- Pipelines: Claude Code writes the full extract-transform-load script. Schedule it with GitHub Actions or cron.
- Enrichment: SyncGTM writes enriched data back to Salesforce — no additional pipeline code required.
What Is the Salesforce REST API?
The Salesforce REST API is a set of HTTP endpoints that expose your entire org — contacts, leads, accounts, opportunities, custom objects, metadata, and more — over standard JSON requests. Enterprise and Unlimited Edition orgs get a base of 100,000 API calls per day, plus per-license allocations; add-ons raise that limit further.
The REST API is the right tool for most GTM automation: querying account lists, updating field values, creating tasks after prospect activity, or pushing enrichment data into records. It works over any HTTP client — which means Claude Code can use it directly in a Python or Node.js script without installing a Salesforce-specific library.
When to use which API:
| API | Best for | Limit |
|---|---|---|
| REST API | Single records, small batches, real-time lookups | 100,000+/day (Enterprise) |
| Composite API | Batching up to 25 REST calls into 1 HTTP request | Counts as 1 API call |
| Bulk API v2 | 10,000+ record operations, async jobs | Separate bulk quota |
| SOQL (via REST) | Structured queries across related objects | 2,000 rows per page |
According to Salesforce's official REST API documentation, the REST API supports all Salesforce objects and uses standard OAuth 2.0 for authentication — making it straightforward for Claude Code to consume without org-specific SDK setup.
Step 1: Authentication — OAuth 2.0 Connected App
Every Salesforce API call requires an OAuth 2.0 access token. The token proves your script has permission to act on behalf of a Salesforce user. Claude Code can guide you through the entire setup — just say "set up Salesforce OAuth 2.0 authentication for a server-side script" and it will walk you through the steps.
Create a Connected App
- In Salesforce Setup, search for App Manager and click New Connected App.
- Enable OAuth Settings. Set the callback URL to https://login.salesforce.com/services/oauth2/success for testing.
- Add scopes: api, refresh_token, and offline_access.
- Save. Note the Consumer Key (Client ID) and Consumer Secret.
Get an Access Token
For unattended server scripts, use the Username-Password flow. Claude Code can generate this request for you:
curl -X POST https://login.salesforce.com/services/oauth2/token \
  -d "grant_type=password" \
  -d "client_id=YOUR_CONSUMER_KEY" \
  -d "client_secret=YOUR_CONSUMER_SECRET" \
  -d "username=YOUR_USERNAME" \
  -d "password=YOUR_PASSWORD+YOUR_SECURITY_TOKEN"
The response includes an access_token and instance_url. Store both in environment variables — never in source code.
Security note:
The Username-Password flow requires either IP allowlisting or the security token appended to the password. For production pipelines, use the JWT Bearer flow instead — it does not require a password in the request and is not affected by MFA.
Step 2: Run SOQL Queries With Claude Code
SOQL (Salesforce Object Query Language) is SQL-like but Salesforce-specific. It lets you query records across related objects, filter by any field, and paginate through large result sets. Claude Code writes accurate SOQL without needing a reference guide — just describe what records you want.
Example: Query Open Opportunities Over $50,000
import requests
import os
access_token = os.environ["SF_ACCESS_TOKEN"]
instance_url = os.environ["SF_INSTANCE_URL"]
soql = """
SELECT Id, Name, StageName, Amount, Account.Name, Owner.Name
FROM Opportunity
WHERE IsClosed = false
AND Amount > 50000
ORDER BY Amount DESC
LIMIT 200
"""
response = requests.get(
f"{instance_url}/services/data/v59.0/query",
headers={"Authorization": f"Bearer {access_token}"},
params={"q": soql}
)
data = response.json()
print(f"Found {data['totalSize']} opportunities")
Handling Pagination
Salesforce returns up to 2,000 records per query page. If done is false in the response, fetch the next page using nextRecordsUrl. Claude Code writes the pagination loop automatically — just ask for it.
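The pagination loop itself is a few lines. The sketch below injects a fetch function (a stand-in for an authenticated `requests.get` against `instance_url`) so the loop can be demonstrated without a live org; the stubbed two-page result set is purely illustrative:

```python
def fetch_all_records(fetch_page, soql):
    """Follow nextRecordsUrl until `done` is true, collecting every record."""
    page = fetch_page(f"/services/data/v59.0/query?q={soql}")
    records = list(page["records"])
    while not page["done"]:
        # Salesforce returns nextRecordsUrl as a relative path
        page = fetch_page(page["nextRecordsUrl"])
        records.extend(page["records"])
    return records

# Demo: a stub simulating a 2,001-record result split across two pages
pages = {
    "/services/data/v59.0/query?q=SELECT Id FROM Contact": {
        "done": False,
        "nextRecordsUrl": "/services/data/v59.0/query/01g-2000",
        "records": [{"Id": f"003{i:015d}"} for i in range(2000)],
    },
    "/services/data/v59.0/query/01g-2000": {
        "done": True,
        "records": [{"Id": "003END"}],
    },
}
all_records = fetch_all_records(pages.__getitem__, "SELECT Id FROM Contact")
print(len(all_records))  # 2001
```

In a real script, `fetch_page` would be a small wrapper that calls `requests.get(f"{instance_url}{path}", headers=...)` and returns `response.json()`.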
For complex cross-object queries — like pulling contacts with their account's industry and the most recent opportunity stage — Claude Code handles the relationship query syntax that most developers look up each time:
SELECT Id, FirstName, LastName, Email,
Account.Name, Account.Industry, Account.AnnualRevenue,
(SELECT Name, StageName, CloseDate FROM Opportunities
WHERE IsClosed = false ORDER BY CloseDate ASC LIMIT 1)
FROM Contact
WHERE Account.Industry = 'Technology'
AND Email != null
LIMIT 500
Step 3: CRUD Operations via REST API
The Salesforce REST API follows standard HTTP verbs. Claude Code uses GET to read, POST to create, PATCH to update, and DELETE to remove records. The endpoint pattern is always:
{instance_url}/services/data/v59.0/sobjects/{ObjectName}/{recordId}
Create a Lead
import requests, os, json
headers = {
"Authorization": f"Bearer {os.environ['SF_ACCESS_TOKEN']}",
"Content-Type": "application/json"
}
lead_data = {
"FirstName": "Alex",
"LastName": "Kim",
"Email": "alex.kim@example.com",
"Company": "Acme Corp",
"LeadSource": "Web",
"Status": "Open - Not Contacted"
}
response = requests.post(
f"{os.environ['SF_INSTANCE_URL']}/services/data/v59.0/sobjects/Lead/",
headers=headers,
data=json.dumps(lead_data)
)
print(response.json())  # Returns {"id": "00Q...", "success": true}
Update an Opportunity Stage
opp_id = "006XXXXXXXXXXXXXXX"
update_data = {"StageName": "Proposal/Price Quote", "Probability": 60}
response = requests.patch(
f"{os.environ['SF_INSTANCE_URL']}/services/data/v59.0/sobjects/Opportunity/{opp_id}",
headers=headers,
data=json.dumps(update_data)
)
# 204 No Content = success
Upsert by External ID
Use a PATCH request with an external ID field to upsert — create if not exists, update if found. This is the right pattern for sync pipelines where you don't want duplicates:
# Upsert Contact by external CRM ID
external_id_value = "CRM-12345"
response = requests.patch(
f"{os.environ['SF_INSTANCE_URL']}/services/data/v59.0/sobjects/Contact/External_CRM_ID__c/{external_id_value}",
headers=headers,
data=json.dumps({"Phone": "+1-555-0100", "Title": "VP Sales"})
)
Step 4: Composite Requests — Batch Multiple API Calls
Composite requests are one of the most under-used Salesforce API features. A single composite request bundles up to 25 subrequests into one HTTP call and counts as a single API call against your daily limit. Each subrequest can reference the output of a previous one — making it possible to create a Contact and immediately link it to an Account in the same atomic request.
Why Composite Requests Matter for GTM Pipelines
- API limit savings: 25 updates = 1 API call instead of 25.
- Atomic execution: Set allOrNone: true to roll back all subrequests if one fails.
- Chained references: Use @{referenceId.id} to pass a created record's ID to a subsequent subrequest.
Example: Create Account, Then Create Contact Linked to It
composite_body = {
"allOrNone": True,
"compositeRequest": [
{
"method": "POST",
"url": "/services/data/v59.0/sobjects/Account/",
"referenceId": "newAccount",
"body": {
"Name": "New Corp",
"Industry": "Technology",
"AnnualRevenue": 5000000
}
},
{
"method": "POST",
"url": "/services/data/v59.0/sobjects/Contact/",
"referenceId": "newContact",
"body": {
"FirstName": "Sam",
"LastName": "Torres",
"Email": "sam@newcorp.com",
"AccountId": "@{newAccount.id}"
}
}
]
}
response = requests.post(
f"{os.environ['SF_INSTANCE_URL']}/services/data/v59.0/composite/",
headers=headers,
data=json.dumps(composite_body)
)
results = response.json()["compositeResponse"]
print(f"Account: {results[0]['body']['id']}")
print(f"Contact: {results[1]['body']['id']}")
Claude Code generates this pattern reliably. Tell it "create a composite request that creates an account, then creates a contact under that account" and it writes the chained reference syntax correctly on the first try.
Step 5: Salesforce Bulk API for Large Data Sets
The Bulk API v2 handles large-scale operations asynchronously. You submit a CSV file, Salesforce processes the job in the background, and you poll for completion. There is no per-record API call — the whole batch counts as a handful of management calls regardless of record count.
According to Salesforce Bulk API v2 documentation, a single bulk job can process up to 100 million records — though practical limits depend on org size and data volume.
Bulk Upsert Workflow
- Create the job: POST to /services/data/v59.0/jobs/ingest/ with object, operation (upsert), and external ID field.
- Upload CSV data: PUT the CSV to the job's batches endpoint.
- Close the job: PATCH the job state to UploadComplete.
- Poll for status: GET the job until state is JobComplete.
- Retrieve results: GET the successfulResults and failedResults CSVs.
import requests, os, json, time, csv
from io import StringIO
base = os.environ["SF_INSTANCE_URL"]
token = os.environ["SF_ACCESS_TOKEN"]
auth_headers = {"Authorization": f"Bearer {token}"}
# 1. Create job
job = requests.post(
f"{base}/services/data/v59.0/jobs/ingest/",
headers={**auth_headers, "Content-Type": "application/json"},
data=json.dumps({
"object": "Contact",
"operation": "upsert",
"externalIdFieldName": "External_CRM_ID__c",
"contentType": "CSV",
"lineEnding": "LF"
})
).json()
job_id = job["id"]
# 2. Upload CSV
csv_data = "External_CRM_ID__c,Phone,Title\nCRM-001,+15550101,Director\nCRM-002,+15550102,VP"
requests.put(
f"{base}/services/data/v59.0/jobs/ingest/{job_id}/batches",
headers={**auth_headers, "Content-Type": "text/csv"},
data=csv_data
)
# 3. Close job
requests.patch(
f"{base}/services/data/v59.0/jobs/ingest/{job_id}",
headers={**auth_headers, "Content-Type": "application/json"},
data=json.dumps({"state": "UploadComplete"})
)
# 4. Poll for completion
while True:
status = requests.get(
f"{base}/services/data/v59.0/jobs/ingest/{job_id}",
headers=auth_headers
).json()
if status["state"] in ("JobComplete", "Failed", "Aborted"):
break
time.sleep(5)
print(f"Processed: {status.get('numberRecordsProcessed', 0)}")
print(f"Failed: {status.get('numberRecordsFailed', 0)}")
Claude Code writes this entire script when you say "write a Salesforce Bulk API v2 upsert job for Contacts using External_CRM_ID__c as the external ID." It handles the polling loop, error retrieval, and CSV parsing without prompting.
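When the job finishes, per-record failures come back as a CSV from GET /services/data/v59.0/jobs/ingest/{job_id}/failedResults/, with sf__Id and sf__Error columns prepended to your original fields. A minimal parser — the sample payload below is hypothetical, standing in for a real response:

```python
import csv
from io import StringIO

def parse_failed_results(csv_text):
    """Return (row_dict, error_message) pairs from a failedResults CSV."""
    rows = csv.DictReader(StringIO(csv_text))
    return [(row, row["sf__Error"]) for row in rows]

# Hypothetical failedResults body for the upsert job above
sample = (
    '"sf__Id","sf__Error","External_CRM_ID__c","Phone","Title"\n'
    '"","DUPLICATE_VALUE:duplicate value found","CRM-002","+15550102","VP"\n'
)
failures = parse_failed_results(sample)
for row, err in failures:
    print(row["External_CRM_ID__c"], err)
```

Logging these pairs (or re-queuing them for a retry run) is usually enough to make a bulk pipeline self-auditing.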
Step 6: Automated Data Pipelines With Claude Code
A Salesforce data pipeline extracts records via SOQL, transforms them (enrichment, deduplication, standardization), and writes the result back via REST or Bulk API. Claude Code writes all three stages and can schedule the pipeline using a simple cron expression or GitHub Actions workflow.
Pipeline Pattern: Nightly Contact Enrichment
- Extract: SOQL query fetches all Contacts updated in the last 24 hours with missing phone or title fields.
- Enrich: Each contact is passed to an enrichment API (e.g., SyncGTM or a waterfall provider) to fill missing fields.
- Transform: Standardize phone format, normalize company names, remove duplicates.
- Load: Composite requests update up to 25 records per call — or Bulk API v2 for batches over 200.
- Log: Write a CSV report of successful and failed updates to S3 or a shared Drive folder.
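The load step's composite batching can be sketched as a pure function that turns (record ID, fields) pairs into request bodies of at most 25 subrequests each. The Contact object, referenceId naming, and allOrNone choice here are illustrative assumptions, not a fixed convention:

```python
def build_composite_batches(updates, api_version="v59.0", batch_size=25):
    """Split (record_id, fields) pairs into composite bodies of <=25 subrequests."""
    batches = []
    for start in range(0, len(updates), batch_size):
        chunk = updates[start:start + batch_size]
        batches.append({
            "allOrNone": False,  # let the rest of the batch succeed if one record fails
            "compositeRequest": [
                {
                    "method": "PATCH",
                    "url": f"/services/data/{api_version}/sobjects/Contact/{rid}",
                    "referenceId": f"upd{start + i}",
                    "body": fields,
                }
                for i, (rid, fields) in enumerate(chunk)
            ],
        })
    return batches

# 60 pending updates -> 3 composite calls (25 + 25 + 10) instead of 60 REST calls
updates = [(f"003{i:015d}", {"Phone": "+15550100"}) for i in range(60)]
batches = build_composite_batches(updates)
print(len(batches))  # 3
```

Each body is then POSTed to `{instance_url}/services/data/v59.0/composite/` exactly as in Step 4.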
Schedule With GitHub Actions
# .github/workflows/sf-enrichment.yml
name: Nightly Salesforce Enrichment
on:
schedule:
- cron: "0 2 * * *" # 2am UTC daily
jobs:
enrich:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v4
- uses: actions/setup-python@v5
with:
python-version: "3.12"
- run: pip install requests python-dotenv
- run: python pipelines/sf_contact_enrichment.py
env:
SF_ACCESS_TOKEN: ${{ secrets.SF_ACCESS_TOKEN }}
SF_INSTANCE_URL: ${{ secrets.SF_INSTANCE_URL }}
SYNCGTM_API_KEY: ${{ secrets.SYNCGTM_API_KEY }}
Claude Code writes the full sf_contact_enrichment.py and the GitHub Actions YAML in one session. The pipeline runs at 2am daily, processes stale records, and logs results — all without human intervention.
For teams not on GitHub Actions, the same script runs on any cron scheduler: Pipedream, n8n, AWS Lambda, or a local crontab. See the best Salesforce enrichment tools guide for a comparison of enrichment providers that support API-based pipelines.
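On a plain crontab, the same schedule is a single entry (the script path and log location below are hypothetical):

```shell
# crontab -e — run the enrichment pipeline at 2:00 every day
0 2 * * * cd /opt/pipelines && /usr/bin/python3 sf_contact_enrichment.py >> enrich.log 2>&1
```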
Error Handling and Rate Limits
Two things will break your pipeline if you ignore them: API limit exhaustion and authentication token expiry. Claude Code handles both when you ask it to write "production-grade" code.
Common Salesforce API Errors
| HTTP Status | Error Code | Fix |
|---|---|---|
| 401 | INVALID_SESSION_ID | Re-authenticate. Access tokens expire after ~2 hours. |
| 403 | REQUEST_LIMIT_EXCEEDED | Check your daily API limit. Add exponential backoff. |
| 404 | NOT_FOUND | Record ID is wrong or the object API name is incorrect. |
| 400 | INVALID_FIELD | Field API name is wrong, or the field is hidden from the integration user by field-level security. |
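For REQUEST_LIMIT_EXCEEDED and other transient failures, exponential backoff is the standard fix. A minimal generic wrapper — the injectable `sleep` parameter and the RuntimeError-based demo are illustrative choices, not a Salesforce convention:

```python
import time

def with_backoff(call, retries=5, base_delay=1.0, sleep=time.sleep):
    """Retry `call` with exponential backoff: 1s, 2s, 4s, ... between attempts."""
    for attempt in range(retries):
        try:
            return call()
        except RuntimeError:
            if attempt == retries - 1:
                raise  # out of retries — surface the error
            sleep(base_delay * (2 ** attempt))

# Demo: a call that fails twice (think REQUEST_LIMIT_EXCEEDED), then succeeds
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("REQUEST_LIMIT_EXCEEDED")
    return "ok"

delays = []
result = with_backoff(flaky, sleep=delays.append)
print(result, delays)  # ok [1.0, 2.0]
```

In a real pipeline you would wrap the HTTP call, raising on a 403 REQUEST_LIMIT_EXCEEDED response so the wrapper retries it.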
Token Refresh Pattern
For long-running pipelines, build a token refresh wrapper. Claude Code writes a Python class with an auto-refresh method — it re-authenticates when a 401 is received and retries the request automatically:
class SalesforceClient:
def __init__(self):
self.access_token = None
self.instance_url = None
self.authenticate()
def authenticate(self):
resp = requests.post(
"https://login.salesforce.com/services/oauth2/token",
data={
"grant_type": "password",
"client_id": os.environ["SF_CLIENT_ID"],
"client_secret": os.environ["SF_CLIENT_SECRET"],
"username": os.environ["SF_USERNAME"],
"password": os.environ["SF_PASSWORD"] + os.environ["SF_TOKEN"]
}
)
data = resp.json()
self.access_token = data["access_token"]
self.instance_url = data["instance_url"]
def request(self, method, path, **kwargs):
headers = kwargs.pop("headers", {})
headers["Authorization"] = f"Bearer {self.access_token}"
resp = getattr(requests, method)(
f"{self.instance_url}{path}", headers=headers, **kwargs
)
if resp.status_code == 401:
self.authenticate()
headers["Authorization"] = f"Bearer {self.access_token}"
resp = getattr(requests, method)(
f"{self.instance_url}{path}", headers=headers, **kwargs
)
return resp
Enrich Salesforce Records With SyncGTM
The pipeline above handles the mechanics — extract, transform, write back. The enrichment step is where data quality actually improves. SyncGTM connects to 50+ B2B data providers in a waterfall sequence, returning the best available phone, email, title, and firmographic data for each contact or account.
Instead of building your own waterfall logic in the pipeline script, you call the SyncGTM API once per record. SyncGTM handles the cascade — checking Apollo, FullEnrich, RocketReach, and others in sequence until a verified result is found. The waterfall enrichment pattern consistently achieves 80–90% hit rates versus 40–60% for single-provider lookups.
SyncGTM + Salesforce pipeline in three steps:
- Query Salesforce for contacts with an empty Phone or MobilePhone.
- Pass each contact to SyncGTM's enrichment endpoint with the LinkedIn URL or email as the lookup key.
- Write the enriched phone, title, and company data back via Salesforce composite requests.
Claude Code can write the entire pipeline — including the SyncGTM API call and the Salesforce write-back — in a single session. See the sign-up enrichment workflow guide for a real-world example of this pattern applied to new lead enrichment.
For teams already using Claude Code GTM skills, SyncGTM's MCP server exposes enrichment as a tool call — so Claude Code can enrich and write back to Salesforce in a single agentic workflow without switching contexts.
Final Verdict
The Salesforce REST API is well-documented but tedious to work with manually. Claude Code removes the friction: it writes correct SOQL queries, handles OAuth token management, builds composite requests with chained references, and produces working Bulk API v2 jobs — all from plain-language descriptions.
The practical result: a GTM engineer can build a production-grade Salesforce data pipeline in a day instead of a week. A sales ops analyst with basic Python knowledge can query and update records without waiting for IT. And enrichment workflows that used to require a Zapier subscription and three hours of setup now run as version-controlled Python scripts on a $0/month cron scheduler.
Start with authentication and a SOQL query. Add composite requests once you have more than 10 records to update in a run. Move to Bulk API v2 when your nightly batch exceeds 1,000 records. Layer in SyncGTM for enrichment to keep Salesforce data complete without manual lookups.
Ready to automate your Salesforce data pipeline?
SyncGTM connects Claude Code to 50+ enrichment providers and writes data directly to Salesforce — no custom integration code required. Start free, no credit card needed.
