Developer · 10 min read · May 5, 2026

Handling False Positives in Fraud Detection Without Losing Real Users

False positives are the hidden cost of fraud prevention. Learn how to set score thresholds, build review queues, implement allowlists, and create feedback loops that improve accuracy over time.

The False Positive Problem

Every fraud detection system makes mistakes. A legitimate user signs up from a VPN because they value privacy. A new employee uses a freshly created company email address. A developer testing your product uses a plus-addressed Gmail account. Each of these can trigger fraud signals, and if your system is too aggressive, these real users get blocked.

False positives are the silent killer of fraud prevention programs. Block too many legitimate users and you will hear about it fast: angry support tickets, declining conversion rates, or your CEO asking why signups dropped 30% after you "improved security."

The goal is not zero fraud. The goal is catching the maximum amount of fraud while maintaining a false positive rate that your business can tolerate. This article covers practical strategies for getting that balance right.

Understanding Your Tolerance

Before tuning anything, you need to define what a false positive costs your business versus what a false negative (missed fraud) costs.

Consider two scenarios:

  • Scenario A (E-commerce promo abuse): A fraudulent signup costs you a $10 coupon. A blocked legitimate user costs you a potential $200 lifetime value. Your false positive tolerance should be very low. Err on the side of letting suspicious signups through.
  • Scenario B (AI token abuse): A fraudulent signup costs you $50 in stolen compute. A blocked legitimate user costs you maybe $15 in average revenue. You can afford a slightly higher false positive rate because the fraud cost outweighs the false positive cost.

Most teams never do this math explicitly, and they end up with thresholds based on gut feeling. Run the numbers for your specific product.
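The math above can be sketched in a few lines. This is an illustrative cost model, not anything from BigShield; the dollar figures mirror the two scenarios, and the names are made up for this example:

```typescript
// Illustrative cost model: when is blocking cheaper than allowing?
// All dollar values are hypothetical, mirroring Scenarios A and B above.
interface CostModel {
  fraudCost: number;          // cost of letting one fraudulent signup through
  falsePositiveCost: number;  // cost of blocking one legitimate user
}

// Expected cost of each action, given an estimated probability of fraud
function expectedCost(pFraud: number, model: CostModel) {
  return {
    allow: pFraud * model.fraudCost,               // you pay only if it was fraud
    block: (1 - pFraud) * model.falsePositiveCost, // you pay only if it was legit
  };
}

// Blocking is rational once p * fraudCost > (1 - p) * falsePositiveCost,
// i.e. p > falsePositiveCost / (falsePositiveCost + fraudCost)
function breakEvenFraudProbability(model: CostModel): number {
  return model.falsePositiveCost / (model.falsePositiveCost + model.fraudCost);
}

// Scenario A: $10 coupon fraud vs. $200 lifetime value
const scenarioA = breakEvenFraudProbability({ fraudCost: 10, falsePositiveCost: 200 }); // ≈ 0.95
// Scenario B: $50 stolen compute vs. $15 average revenue
const scenarioB = breakEvenFraudProbability({ fraudCost: 50, falsePositiveCost: 15 });  // ≈ 0.23
```

The break-even points make the asymmetry concrete: in Scenario A you should only block when you are roughly 95% sure a signup is fraudulent, while in Scenario B blocking pays off at about 23% confidence.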

Setting Score Thresholds

BigShield returns a score from 0 to 100. The simplest approach is a single threshold: block everything below X, allow everything above. But a single threshold forces a binary decision on what is inherently a spectrum.

A better approach uses three (or more) tiers:

interface ThresholdConfig {
  blockBelow: number;      // Hard deny
  reviewBelow: number;     // Allow with restrictions
  trustAbove: number;      // Full access
}

// Conservative (fewer false positives, more fraud gets through)
const conservative: ThresholdConfig = {
  blockBelow: 20,
  reviewBelow: 50,
  trustAbove: 50,
};

// Balanced (good starting point for most apps)
const balanced: ThresholdConfig = {
  blockBelow: 30,
  reviewBelow: 70,
  trustAbove: 70,
};

// Aggressive (more false positives, less fraud gets through)
const aggressive: ThresholdConfig = {
  blockBelow: 40,
  reviewBelow: 80,
  trustAbove: 80,
};

function decideAction(score: number, config: ThresholdConfig) {
  if (score < config.blockBelow) return 'block';
  if (score < config.reviewBelow) return 'review';
  // In the configs above, reviewBelow === trustAbove, so anything reaching
  // this point gets full access. Raise trustAbove above reviewBelow if you
  // want an "allow but monitor" band between review and full trust.
  return 'allow';
}

Start with the balanced configuration. After two weeks of data, look at how many signups fall into each bucket and what percentage of "review" cases turn out to be legitimate. Adjust from there.
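That two-week check is easy to automate if you log each signup's score. A minimal sketch, assuming you can pull the logged scores into an array (how you store them is up to you):

```typescript
// Sketch: bucket a period's signup scores against a threshold config to see
// how many land in each tier. Assumes scores were logged per signup.
function bucketScores(
  scores: number[],
  config: { blockBelow: number; reviewBelow: number }
) {
  const counts = { block: 0, review: 0, allow: 0 };
  for (const score of scores) {
    if (score < config.blockBelow) counts.block++;
    else if (score < config.reviewBelow) counts.review++;
    else counts.allow++;
  }
  const total = scores.length || 1; // avoid divide-by-zero on empty data
  return {
    counts,
    blockRate: counts.block / total,
    reviewRate: counts.review / total,
  };
}
```

Run it against each candidate config over the same data to see how moving a threshold would have shifted real signups between tiers before you change anything in production.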

Building a Manual Review Queue

The review tier is where false positive management actually happens. Users in this band get through the door but with reduced access until a human (or automated process) confirms them.

Here is a practical implementation:

// lib/review-queue.ts
import { db } from '@/lib/database';
import { notifySlack } from '@/lib/slack';    // your own Slack webhook helper
import { bigshield } from '@/lib/bigshield';  // your initialized BigShield client

interface ReviewItem {
  userId: string;
  email: string;
  score: number;
  signals: string[];
  createdAt: Date;
  status: 'pending' | 'approved' | 'rejected';
  reviewedBy?: string;
  reviewedAt?: Date;
  notes?: string;
}

export async function addToReviewQueue(
  userId: string,
  email: string,
  score: number,
  signals: string[]
): Promise<void> {
  await db.insert('review_queue', {
    userId,
    email,
    score,
    signals: JSON.stringify(signals),
    createdAt: new Date(),
    status: 'pending',
  });

  // Notify the team
  await notifySlack({
    channel: '#fraud-review',
    text: `New review: ${email} (score: ${score}). Top signals: ${signals.slice(0, 3).join(', ')}`,
  });
}

export async function approveUser(
  userId: string,
  reviewerId: string,
  notes?: string
): Promise<void> {
  await db.update('review_queue', {
    where: { userId },
    set: {
      status: 'approved',
      reviewedBy: reviewerId,
      reviewedAt: new Date(),
      notes,
    },
  });

  // Upgrade user access
  await db.update('users', {
    where: { id: userId },
    set: { tier: 'standard', verified: true },
  });

  // Send BigShield feedback for model improvement
  await bigshield.feedback({
    email: (await db.get('users', userId)).email,
    actual: 'legitimate',
  });
}

export async function rejectUser(
  userId: string,
  reviewerId: string,
  reason: string
): Promise<void> {
  await db.update('review_queue', {
    where: { userId },
    set: {
      status: 'rejected',
      reviewedBy: reviewerId,
      reviewedAt: new Date(),
      notes: reason,
    },
  });

  // Suspend the account
  await db.update('users', {
    where: { id: userId },
    set: { suspended: true, suspendReason: reason },
  });
}

Auto-Approval Rules

Not every review item needs human eyes. You can auto-approve users who complete certain verification steps:

// Cron job or webhook handler
export async function autoReviewPending(): Promise<void> {
  const pending = await db.query('review_queue', {
    where: { status: 'pending' },
    olderThan: '24h',
  });

  for (const item of pending) {
    const user = await db.get('users', item.userId);

    // Auto-approve if the user verified their email AND has real activity
    if (user.emailVerified && user.sessionsCount >= 2) {
      await approveUser(item.userId, 'system:auto-review', 'Auto-approved: email verified + real engagement');
      continue;
    }

    // Auto-reject if no activity after 48 hours
    const age = Date.now() - item.createdAt.getTime();
    if (age > 48 * 60 * 60 * 1000 && user.sessionsCount === 0) {
      await rejectUser(item.userId, 'system:auto-review', 'Auto-rejected: no activity after 48h');
    }
  }
}

Implementing Allowlists

Some users will always look suspicious to automated systems. A CTO signing up from a corporate VPN with a brand-new company domain. A security researcher using Tor. An enterprise client whose IT policies trigger every network signal. Allowlists give you an escape valve.

Domain Allowlists

If you sell to enterprises, their domains should skip fraud checks (or use a much lower threshold):

const TRUSTED_DOMAINS = new Set([
  'bigcorp.com',
  'enterprise-customer.io',
  'partner-company.dev',
]);

async function validateWithAllowlist(
  email: string,
  ip: string
): Promise<ValidationDecision> {
  const domain = email.split('@')[1]?.toLowerCase();

  // Trusted domains get automatic approval
  if (domain && TRUSTED_DOMAINS.has(domain)) {
    return {
      allowed: true,
      requiresReview: false,
      score: 100,
      source: 'allowlist',
    };
  }

  // Everyone else goes through normal validation
  return validateWithBigShield(email, ip);
}

Dynamic Allowlists From Feedback

Rather than manually maintaining a static list, build an allowlist that learns from your review queue outcomes:

// After enough approvals from a domain, trust it
async function updateDynamicAllowlist(): Promise<void> {
  const domainStats = await db.query(`
    SELECT
      split_part(email, '@', 2) as domain,
      COUNT(*) FILTER (WHERE status = 'approved') as approved,
      COUNT(*) FILTER (WHERE status = 'rejected') as rejected
    FROM review_queue
    WHERE reviewed_at > NOW() - INTERVAL '90 days'
    GROUP BY domain
    HAVING COUNT(*) >= 5
  `);

  for (const stat of domainStats) {
    const approvalRate = stat.approved / (stat.approved + stat.rejected);
    if (approvalRate >= 0.95 && stat.approved >= 10) {
      await addToDynamicAllowlist(stat.domain);
    }
  }
}

Building Feedback Loops

The most important thing you can do for long-term accuracy is close the feedback loop. When BigShield flags someone who turns out to be legitimate, or misses someone who turns out to be fraudulent, that information needs to flow back into the system.

Explicit Feedback

When your team reviews an account, send the outcome to BigShield:

// After manual review
await bigshield.feedback({
  email: user.email,
  validationId: originalValidation.id,
  actual: 'legitimate', // or 'fraudulent'
  notes: 'Verified via phone call with user',
});

This feedback is used to improve BigShield's models. Over time, similar patterns become less likely to trigger false positives.

Implicit Feedback

You can also derive feedback from user behavior without manual review:

// Weekly job: analyze user behavior for implicit signals
async function generateImplicitFeedback(): Promise<void> {
  // Users who paid = definitely legitimate
  const paidUsers = await db.query(`
    SELECT u.email, v.validation_id
    FROM users u
    JOIN validations v ON v.email = u.email
    WHERE u.has_paid = true
    AND v.score < 70
    AND v.feedback_sent = false
  `);

  for (const user of paidUsers) {
    await bigshield.feedback({
      email: user.email,
      validationId: user.validation_id,
      actual: 'legitimate',
      notes: 'Implicit: user converted to paid',
    });
    // Mark the validation so next week's run does not resend this feedback
    await db.update('validations', {
      where: { validationId: user.validation_id },
      set: { feedbackSent: true },
    });
  }

  // Users who were suspended for abuse = definitely fraudulent
  const suspendedUsers = await db.query(`
    SELECT u.email, v.validation_id
    FROM users u
    JOIN validations v ON v.email = u.email
    WHERE u.suspended = true
    AND u.suspend_reason LIKE '%abuse%'
    AND v.score >= 30
    AND v.feedback_sent = false
  `);

  for (const user of suspendedUsers) {
    await bigshield.feedback({
      email: user.email,
      validationId: user.validation_id,
      actual: 'fraudulent',
      notes: 'Implicit: suspended for abuse',
    });
    // Mark the validation so next week's run does not resend this feedback
    await db.update('validations', {
      where: { validationId: user.validation_id },
      set: { feedbackSent: true },
    });
  }
}

Measuring False Positive Rate

You cannot improve what you do not measure. Track these metrics weekly:

  • Block rate: Percentage of total signups blocked. If this exceeds 20%, investigate whether your thresholds are too aggressive.
  • Review approval rate: Of accounts sent to review, what percentage were approved? If it is above 50%, your review threshold might be too low (you are reviewing too many legitimate users).
  • Support ticket rate: How many "I cannot sign up" support tickets per week? Track this before and after threshold changes.
  • Fraud escape rate: Of accounts that were allowed through, what percentage later showed fraudulent behavior? This measures false negatives.

// Weekly metrics calculation
async function calculateFraudMetrics(weekStart: Date) {
  const weekEnd = new Date(weekStart.getTime() + 7 * 24 * 60 * 60 * 1000);

  const totalSignups = await db.count('signups', {
    createdAt: { gte: weekStart, lt: weekEnd },
  });

  const blocked = await db.count('signups', {
    createdAt: { gte: weekStart, lt: weekEnd },
    status: 'blocked',
  });

  const reviewed = await db.count('review_queue', {
    createdAt: { gte: weekStart, lt: weekEnd },
  });

  const reviewApproved = await db.count('review_queue', {
    createdAt: { gte: weekStart, lt: weekEnd },
    status: 'approved',
  });

  const laterSuspended = await db.count('users', {
    createdAt: { gte: weekStart, lt: weekEnd },
    suspended: true,
    suspendReason: { like: '%fraud%' },
  });

  return {
    blockRate: totalSignups > 0 ? blocked / totalSignups : 0,
    reviewApprovalRate: reviewed > 0 ? reviewApproved / reviewed : 0,
    fraudEscapeRate: (totalSignups - blocked) > 0
      ? laterSuspended / (totalSignups - blocked)
      : 0,
  };
}
}

Practical Tips for Reducing False Positives

Based on patterns we see across BigShield customers, here are specific adjustments that reduce false positives without significantly increasing fraud:

1. Do Not Block VPN Users. Review Them.

A surprising number of legitimate users, especially developers and privacy-conscious individuals, use VPNs daily. Blocking all VPN traffic has one of the highest false positive rates of any single signal. Instead, use VPN detection as one input into the composite score, which is exactly how BigShield handles it.
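To illustrate the difference between "block on VPN" and "VPN is one input," here is a toy composite score. The signal names and weights are hypothetical, chosen only to show the shape of the approach, and are not BigShield's actual model:

```typescript
// Hypothetical composite score: VPN use nudges the score down rather than
// forcing a block. Signal names and weights are illustrative only.
interface Signals {
  isVpn: boolean;
  isDisposableEmail: boolean;
  isDatacenterIp: boolean;
}

function compositeScore(base: number, signals: Signals): number {
  let score = base;
  if (signals.isVpn) score -= 10;              // mild penalty: common among legit users
  if (signals.isDisposableEmail) score -= 30;  // strong penalty
  if (signals.isDatacenterIp) score -= 25;
  return Math.max(0, Math.min(100, score));    // clamp to the 0-100 range
}

// A VPN user with an otherwise clean profile stays in the allow/review range
const vpnOnly = compositeScore(80, {
  isVpn: true,
  isDisposableEmail: false,
  isDatacenterIp: false,
}); // 70
```

A lone VPN signal moves the score from 80 to 70, which is still above a balanced review threshold, while the same VPN signal stacked with a disposable email and a datacenter IP drops the score into block territory.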

2. Give New Domains a Probation Period, Not a Block

Domains less than 30 days old are suspicious, but new companies launch every day. Route new-domain signups to the review tier rather than blocking them outright.
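A sketch of that routing, using the balanced thresholds from earlier in this article. The `getDomainAgeDays` helper is hypothetical (it could be backed by WHOIS or a domain-intelligence feed):

```typescript
// Route young domains to review instead of blocking them outright.
// getDomainAgeDays is a hypothetical lookup (e.g. WHOIS-backed).
async function routeByDomainAge(
  email: string,
  score: number,
  getDomainAgeDays: (domain: string) => Promise<number | null>
): Promise<'block' | 'review' | 'allow'> {
  const domain = email.split('@')[1]?.toLowerCase();
  const ageDays = domain ? await getDomainAgeDays(domain) : null;

  // Young domain: cap the outcome at 'review'. Only hard-block if the
  // composite score is already in the block range on its own.
  if (ageDays !== null && ageDays < 30) {
    return score < 30 ? 'block' : 'review';
  }
  return score < 30 ? 'block' : score < 70 ? 'review' : 'allow';
}
```

A high-scoring signup from a five-day-old domain lands in the review queue, where the auto-approval rules described above can clear it once the user verifies their email and shows real activity.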

3. Weight Behavioral Signals Higher Than Static Ones

An email from a disposable domain is a static signal. The signup being completed in 1.2 seconds with a pasted email address from a datacenter IP is a behavioral cluster. Behavioral signals have a lower false positive rate because they measure what the user is doing, not just who they claim to be.
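One way to express a behavioral cluster in code, with hypothetical signal names and thresholds: require several behaviors to co-occur before the cluster fires, since any single behavior on its own is common among legitimate users.

```typescript
// Sketch: a behavioral cluster fires only when multiple behaviors co-occur,
// which keeps its false positive rate low. Names and thresholds are illustrative.
interface BehaviorSignals {
  formFillMs: number;       // time from page load to form submit
  emailWasPasted: boolean;  // email field filled via paste, not typed
  fromDatacenterIp: boolean;
}

function behavioralClusterFired(b: BehaviorSignals): boolean {
  const behaviors = [
    b.formFillMs < 2000, // superhumanly fast signup
    b.emailWasPasted,
    b.fromDatacenterIp,
  ];
  // Require at least two of three: e.g. pasting alone is normal behavior
  // for anyone using a password manager.
  return behaviors.filter(Boolean).length >= 2;
}
```

The 1.2-second pasted-email datacenter signup from the example above trips all three checks, while a password-manager user who merely pastes their email trips only one and passes cleanly.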

4. Implement a Self-Service Appeal Process

When you block a signup, offer a clear path to resolution. A simple "Having trouble signing up? Contact us" link catches false positives before they become lost customers or angry social media posts.

5. Review Your Thresholds Monthly

Fraud patterns shift. Your user base evolves. Thresholds that were perfect three months ago might be too aggressive (or too loose) today. Set a monthly calendar reminder to review your fraud metrics and adjust.

The Architecture Perspective

If you are interested in how this review queue fits into a broader fraud detection architecture, our guide on the architecture of a fraud detection platform covers the full system design, from signal collection to scoring to action.

For the foundational concepts behind integrating email validation into your application, the developer's guide to email validation walks through the basics.

Finding Your Balance

Perfect fraud detection does not exist. Every system operates on a tradeoff curve between security and accessibility. The strategies in this article (tiered thresholds, review queues, allowlists, and feedback loops) give you the tools to find the right point on that curve for your business.

BigShield is built to make this balance easier. The composite scoring, individual signal transparency, and feedback API give you granular control over how aggressively you filter without forcing a binary block-or-allow decision. Start with sensible defaults, measure everything, and iterate. Try it at bigshield.app.

Ready to stop fake signups?

BigShield validates emails with 20+ signals in under 200ms. Start for free, no credit card required.

Get Started Free
