🛡️

Content Moderation at Scale

Protect your community without building a full moderation team. The moderate_content and pii_detect task types review text, images, profiles, and files — flagging violations with confidence scores and explanations.

💬

Text Moderation

Detect hate speech, harassment, spam, misinformation, and policy violations in posts and comments.

🖼️

Image Review

Flag NSFW content, violence, fake documents, and copyright violations in uploads.

👤

Profile Verification

Review user profiles for fake identities, impersonation, and fraudulent listings.

🔒

PII Detection

Find and redact Social Security numbers, credit card numbers, passport numbers, and other personal data from documents.

⚖️

Policy Enforcement

Apply custom content policies with AI + human escalation for borderline cases.

📊

Moderation Analytics

Track violation trends, category breakdowns, and false-positive rates over time.
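The pii_detect task type handles redaction server-side. Purely as a local illustration of the idea, a minimal redaction pass for two common PII formats might look like the sketch below (the patterns, function name, and placeholder format are assumptions, not part of the API):

```python
import re

# Hypothetical local sketch — the pii_detect task does this server-side.
# Patterns cover two common formats only: US SSNs and 16-digit card numbers.
PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b\d{4}[- ]?\d{4}[- ]?\d{4}[- ]?\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each matched PII span with a labeled placeholder."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[REDACTED:{label}]", text)
    return text

print(redact("SSN 123-45-6789, card 4111 1111 1111 1111"))
# SSN [REDACTED:ssn], card [REDACTED:credit_card]
```

Real-world PII is far messier than two regexes (names, addresses, free-form dates), which is exactly why the hosted task exists.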

API Example

curl -X POST https://api.crowdsorcerer.dev/v1/tasks \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "type": "moderate_content",
    "input": {
      "content": "User-generated text or image URL here...",
      "content_type": "text",
      "policies": ["hate_speech", "harassment", "spam", "adult"],
      "threshold": 0.7
    }
  }'

# Example response:
{
  "verdict": "flag",
  "confidence": 0.92,
  "violations": [
    { "policy": "harassment", "confidence": 0.92, "excerpt": "..." }
  ],
  "recommended_action": "remove"
}
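A response like this maps directly to an enforcement step. A minimal sketch, assuming the field names from the example above (`verdict`, `recommended_action`); the return values are placeholders for whatever your moderation pipeline does, not API calls:

```python
# Sketch of acting on a moderate_content response. The outcome strings
# ("removed", "hidden", "published") are placeholders for your own handlers.
def apply_verdict(response: dict) -> str:
    verdict = response.get("verdict")
    action = response.get("recommended_action")
    if verdict == "flag" and action == "remove":
        return "removed"    # e.g. delete the post and notify the author
    if verdict == "flag":
        return "hidden"     # keep it out of feeds while someone reviews it
    return "published"      # no violation found

example = {
    "verdict": "flag",
    "confidence": 0.92,
    "violations": [{"policy": "harassment", "confidence": 0.92}],
    "recommended_action": "remove",
}
print(apply_verdict(example))  # removed
```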

🤝 Human-in-the-loop

Set a confidence threshold. Items below it are automatically routed to our 18-type worker network for human review — with full audit trails, appeals workflow, and mediator escalation for disputes.
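The routing rule above is simple to express client-side. A sketch assuming the same response shape as the API example, with a plain list standing in for whatever review queue or ticketing system you use:

```python
# Route low-confidence verdicts to human review; act automatically on the rest.
# The threshold value and the queue are illustrative — in practice the routing
# happens in the worker network once you set the threshold on the task.
REVIEW_THRESHOLD = 0.7

def route(response: dict, review_queue: list) -> str:
    if response["confidence"] < REVIEW_THRESHOLD:
        review_queue.append(response)  # a human moderator makes the call
        return "human_review"
    return "auto"

queue = []
print(route({"verdict": "flag", "confidence": 0.55}, queue))     # human_review
print(route({"verdict": "approve", "confidence": 0.98}, queue))  # auto
print(len(queue))  # 1
```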

Scale your moderation safely

500 free credits on signup. No credit card required.