Proxy Guide

What Proxy Pool Size Actually Means

Pool size is the most cited proxy metric and the least informative one in isolation. The number of IPs in a pool tells you how many addresses exist. It tells you nothing about how many are usable for your specific target.

In practice

  • Larger pools distribute customer traffic across more IPs — lower per-IP request concentration ✔
  • More IPs in rotation slows per-IP reputation accumulation ✔
  • Raw pool count includes inactive, flagged, and geo-restricted IPs ✗
  • Pool size says nothing about subnet cleanness or IP freshness ✗
  • A small clean pool outperforms a large contaminated one on hardened targets ✗

The relevant metric is not how many IPs are in the pool. It's how many are clean on your specific target — and no provider publishes that number.

Overview

Proxy providers lead with pool size in marketing because it's the only metric that's easy to state and impossible to verify. '72 million residential IPs' is a number the prospect cannot audit and the provider cannot invalidate. It persists in marketing copy regardless of how many of those IPs are currently active, currently clean, or available in the geography the customer actually needs.

The number that matters for a specific workload is how many IPs in that pool pass the target's detection layer without a challenge. That number is specific to the target, the proxy type, and the current state of the IP stock. No provider publishes it. The only way to measure it is to test.
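That test-it-yourself measurement can be sketched in a few lines. The challenge statuses and body markers below are illustrative assumptions, not any target's actual detection signature, and the probe outcomes are recorded samples standing in for live proxied requests:

```python
# Sketch: estimate the clean-pool fraction for one target by probing.
# CHALLENGE_STATUSES and CHALLENGE_MARKERS are illustrative guesses at
# what a detection layer returns; tune them per target.

CHALLENGE_STATUSES = {403, 407, 429}
CHALLENGE_MARKERS = ("captcha", "unusual traffic", "access denied")

def is_challenged(status: int, body: str) -> bool:
    """Classify one response as a detection-layer challenge."""
    if status in CHALLENGE_STATUSES:
        return True
    lowered = body.lower()
    return any(marker in lowered for marker in CHALLENGE_MARKERS)

def clean_fraction(outcomes):
    """outcomes: iterable of (status, body) pairs from probe requests."""
    results = [not is_challenged(status, body) for status, body in outcomes]
    return sum(results) / len(results) if results else 0.0

# Recorded probe results standing in for live requests through the pool:
probes = [(200, "<html>product page</html>"),
          (429, "slow down"),
          (200, "please solve this CAPTCHA")]
print(clean_fraction(probes))  # 1 of 3 probes passed cleanly
```

Running a probe set like this through a sample of the provider's pool, against the actual target, yields the clean-pool fraction no marketing page will state.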

How to think about it

Total pool count is the headline number: every IP address the provider has ever registered or enrolled. It includes IPs in cooldown after abuse detection, IPs in geographic regions with no current customer demand, IPs belonging to enrolled devices that are currently offline, and IPs in subnets the provider has retired from active rotation due to reputation degradation. The total count is a ceiling — the operational pool is smaller.

Active pool count is the subset available at any given moment. For residential peer networks, it fluctuates continuously with device online status — a pool of '72 million enrolled devices' may have 4 to 8 million devices active at any given hour depending on time zones and device usage patterns. For datacenter pools, the active count is stable but still lower than total count due to cooling policies and subnet rotation schedules.

Clean pool count — the subset that passes a specific target's detection layer — is neither published nor constant. It degrades as customer traffic accumulates reputation signals on the IPs. It improves as the provider refreshes the pool with new allocations and retires contaminated ones. The gap between total pool size and clean pool size on a hardened target is typically larger than provider marketing implies.

How it works

Providers manage pool quality through IP rotation policies: IPs that receive challenge or block responses from targets are cooled — removed from active rotation for a period — before being re-used. The cooling window varies by provider and pool type. Short cooling windows recycle IPs back into rotation before their reputation has meaningfully recovered. Long cooling windows maintain pool quality but reduce the active pool available to customers at any moment.
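The cooling policy above can be sketched as a small rotation manager. The design here — a round-robin active list plus a fixed cooldown window — is an assumed illustration, not any provider's actual implementation:

```python
import time
from collections import deque

class CoolingPool:
    """Minimal sketch of a cooldown-based rotation policy: IPs that draw
    a challenge are parked for cooldown_s seconds before re-entering
    rotation. A short cooldown_s recycles IPs before reputation recovers;
    a long one preserves quality but shrinks the active pool."""

    def __init__(self, ips, cooldown_s=900.0):
        self.active = deque(ips)      # IPs available for rotation
        self.cooling = []             # (release_time, ip) pairs
        self.cooldown_s = cooldown_s

    def _release_due(self, now):
        still_cooling = []
        for release_at, ip in self.cooling:
            if release_at <= now:
                self.active.append(ip)   # cooldown elapsed; back in rotation
            else:
                still_cooling.append((release_at, ip))
        self.cooling = still_cooling

    def acquire(self, now=None):
        now = time.monotonic() if now is None else now
        self._release_due(now)
        if not self.active:
            return None                  # active pool exhausted
        ip = self.active.popleft()
        self.active.append(ip)           # round-robin rotation
        return ip

    def report_challenge(self, ip, now=None):
        now = time.monotonic() if now is None else now
        if ip in self.active:
            self.active.remove(ip)
            self.cooling.append((now + self.cooldown_s, ip))
```

The tradeoff from the text is visible in the parameters: raising `cooldown_s` directly reduces how many IPs `acquire` can hand out at any moment.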

Geographic distribution within the pool is rarely uniform. A '100 million IP' residential pool may have 60 million IPs in the US, 20 million in Western Europe, and 20 million distributed across the rest of the world. For geo-targeted workloads in regions with thin pool coverage — Southeast Asia, Latin America, the Middle East — the effective pool is a fraction of the headline count. Geo-specific pool depth is a more operationally relevant number than total pool size for any workload with geographic targeting requirements.

Concurrency — how many simultaneous connections the pool supports — is a separate constraint from IP count. A pool with 10 million IPs and a 500K concurrency limit cannot support 600K simultaneous connections regardless of how many IPs are available. For high-concurrency scraping operations, the concurrency ceiling is often the binding constraint before pool size becomes relevant.
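That ceiling is a client-side constraint to respect, not just a provider-side limit to hit. A semaphore keeps in-flight requests at or under the limit regardless of pool size; the values below are scaled down for illustration, and the sleep stands in for a proxied request:

```python
import asyncio

CONCURRENCY_LIMIT = 5  # stand-in for a provider's connection ceiling

async def fetch_via_proxy(sem, counter):
    async with sem:    # blocks once CONCURRENCY_LIMIT slots are taken
        counter["now"] += 1
        counter["peak"] = max(counter["peak"], counter["now"])
        await asyncio.sleep(0.01)   # placeholder for the proxied request
        counter["now"] -= 1

async def main(n_requests=20):
    sem = asyncio.Semaphore(CONCURRENCY_LIMIT)
    counter = {"now": 0, "peak": 0}
    await asyncio.gather(*(fetch_via_proxy(sem, counter)
                           for _ in range(n_requests)))
    return counter["peak"]

peak = asyncio.run(main())
print(peak)   # never exceeds CONCURRENCY_LIMIT
```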

Where it breaks

On hardened targets — major e-commerce platforms, social networks, search engines — block decisions are based on IP reputation signals that pool size doesn't influence. A pool of 100 million contaminated residential IPs performs worse than a pool of 500K fresh ones on a target that queries a thorough IP intelligence database. The target isn't counting how many IPs the provider has; it's evaluating the specific IP that made the current request.

For geo-targeted workloads, total pool size becomes irrelevant when the required geographic segment has low pool depth. The workload effectively runs within the regional subset — which may be orders of magnitude smaller than the headline number.


For residential peer networks specifically, the active pool varies by time of day — and the provider's pool size figure is a maximum, not a floor. Running high-volume workloads during off-peak hours in the target region may exhaust the available active pool even when the total enrolled count is large.

In context

Pool quality — the fraction of the active pool that clears the target's detection layer — is more predictive of operational success than total pool size. Quality is hard to measure without testing but has proxies: provider subnet rotation policies, IP cooling window length, and whether the provider offers separate pool segments for different use cases (scraping vs multi-accounting vs ad verification). Providers that segment by use case reduce cross-contamination, which preserves quality.

Pool depth in the required geography is the relevant size metric for geo-targeted workloads. A provider with 500K ISP proxy IPs uniformly distributed across 50 countries serves geo-targeted workloads better than a provider with 10 million residential IPs concentrated in the US and UK. The comparison is not total size — it's active IPs available in the specific target region.
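The comparison above is mechanical once regional depth numbers are in hand. This sketch uses hypothetical providers and figures; the point is which number drives the decision:

```python
# Two hypothetical providers: A wins on headline size, B on regional depth.
provider_a = {"US": 6_000_000, "UK": 3_800_000, "DE": 150_000, "SG": 50_000}
provider_b = {"US": 120_000, "UK": 110_000, "DE": 100_000, "SG": 95_000}

def effective_pool(provider, region):
    """Active IPs in the region the workload actually targets."""
    return provider.get(region, 0)

region = "SG"   # a thin-coverage region for provider A
total_a, total_b = sum(provider_a.values()), sum(provider_b.values())
print(total_a > total_b)   # True: A wins on headline size
print(effective_pool(provider_b, region) >
      effective_pool(provider_a, region))   # True: B wins where it counts
```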

Rotation speed — how frequently the provider cycles active IPs through the pool — matters as much as pool size for workloads that accumulate per-IP reputation quickly. A smaller pool that rotates IPs into cooldown aggressively can maintain higher average cleanness than a larger pool with slow rotation. Rotation policy is rarely disclosed but can be inferred from provider documentation on 'fresh IP' availability and rotation interval settings.

Choose your path

Pool size is a starting filter, not a selection criterion. Eliminate providers with pools too small to support the required concurrency or geographic coverage. Within the remaining set, pool quality, rotation policy, and geo-depth are the variables that determine operational outcomes — and those require testing against the actual target, not reading a marketing page.

  • Geo-targeted workload → ask for regional pool depth, not total pool size
  • High-concurrency scraping → verify concurrency limits before pool size
  • Hardened target → test block rate directly; pool size claim is not predictive
  • Shared pool contamination is a problem → evaluate dedicated IPs or use-case-segmented pools
  • Residential peer network for time-sensitive work → verify active pool size during target hours, not enrolled device total
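The "starting filter" step above can be expressed as a simple elimination pass. Field names and the depth threshold below are illustrative assumptions; survivors still require live testing against the actual target:

```python
# Sketch: eliminate providers that fail hard constraints before any
# quality testing. Thresholds are illustrative, not recommendations.

def passes_starting_filter(provider, need_region, need_concurrency,
                           min_regional_depth=10_000):
    return (provider["regional_depth"].get(need_region, 0) >= min_regional_depth
            and provider["max_concurrency"] >= need_concurrency)

providers = [
    {"name": "BigPool",  "regional_depth": {"US": 5_000_000, "BR": 2_000},
     "max_concurrency": 1_000_000},
    {"name": "GeoDepth", "regional_depth": {"US": 300_000, "BR": 80_000},
     "max_concurrency": 200_000},
]

shortlist = [p["name"] for p in providers
             if passes_starting_filter(p, need_region="BR",
                                       need_concurrency=50_000)]
print(shortlist)   # ['GeoDepth'] — headline size alone didn't qualify BigPool
```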