Bandwidth vs Throughput on VPS
Bandwidth is the ceiling providers advertise; throughput is what the application actually moves — and the gap between them is where most network capacity surprises live.
Overview
A VPS is provisioned with a 1Gbps network port. A file transfer test delivers 80Mbps. Nothing is wrong with the network port — it is genuinely connected at 1Gbps. The constraint is somewhere else: the number of parallel connections, the TCP window size, the distance to the destination, the destination server's own capacity. Network capacity is not a single number. Bandwidth is the ceiling. Throughput is what reaches the floor.
How to think about it
Bandwidth describes the capacity of the network connection — the maximum rate at which data can theoretically flow. A 1Gbps port can transfer up to 1 gigabit per second. This number appears on VPS pricing pages because it's the provider's infrastructure commitment. It does not describe what any specific transfer will achieve.
Throughput is the actual data transfer rate achieved for a specific transfer between specific endpoints under specific conditions. Throughput is bounded by bandwidth but determined by a longer list of variables: network distance, routing path, TCP behavior, packet loss rate, the receiving end's capacity, and whether the connection is shared with other traffic. Throughput is the number that matters for application performance.
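The distinction is easy to quantify. A minimal sketch (the 500MB/50-second figures are illustrative, chosen to reproduce the 80Mbps scenario above):

```python
def throughput_mbps(bytes_transferred: int, seconds: float) -> float:
    """Achieved throughput in megabits per second for a measured transfer."""
    return bytes_transferred * 8 / seconds / 1e6

# A 500 MB transfer that took 50 seconds on a 1 Gbps port:
achieved = throughput_mbps(500_000_000, 50.0)  # 80.0 Mbps
utilization = achieved / 1000  # fraction of the 1 Gbps ceiling actually used
```

The port's bandwidth never appears in the throughput calculation — it only caps the result, which is the point of the distinction.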
Monthly bandwidth allocation is the third number — the total volume of data the VPS can transfer in a billing period before overage charges apply. This is a volume measurement, not a speed measurement. A 1TB monthly allocation at 1Gbps port speed can be exhausted in about 2.2 hours of full-speed transfer — or spread across a month of moderate traffic. Volume and speed are independent dimensions of the same resource.
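The 2.2-hour figure follows directly from unit conversion. A sketch of the arithmetic, assuming decimal units (1 TB = 10^12 bytes), which is how providers typically bill:

```python
def hours_to_exhaust(allocation_tb: float, port_gbps: float) -> float:
    """Hours of sustained full-speed transfer that consume a monthly allocation."""
    bits = allocation_tb * 1e12 * 8        # allocation as bits
    seconds = bits / (port_gbps * 1e9)     # time at full port speed
    return seconds / 3600

hours_to_exhaust(1.0, 1.0)  # ≈ 2.22 hours for 1 TB at 1 Gbps
```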
How it works
Network distance is the most fundamental constraint on throughput for connections between geographically distant points. The speed of light in fiber limits how quickly a TCP acknowledgment can travel from sender to receiver and back. A server in Frankfurt and a client in Sydney have a round-trip time of approximately 280ms regardless of available bandwidth. TCP throughput for a single connection is bounded by the window size divided by the round-trip time; the bandwidth-delay product — bandwidth × round-trip time — is the window size needed to keep the path full. Overcoming this requires either parallelism — many simultaneous connections — or protocols designed for high-latency paths.
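Both quantities above are simple products. A sketch using the Frankfurt–Sydney path and a 64 KiB window (a common default receive window before scaling kicks in — an illustrative assumption, not a universal value):

```python
def bdp_bytes(bandwidth_bps: float, rtt_s: float) -> float:
    """Bandwidth-delay product: bytes in flight needed to fill the pipe."""
    return bandwidth_bps * rtt_s / 8

def window_limited_throughput_mbps(window_bytes: float, rtt_s: float) -> float:
    """Single-connection TCP ceiling: window size divided by round-trip time."""
    return window_bytes * 8 / rtt_s / 1e6

# Frankfurt-Sydney, 280 ms RTT:
window_limited_throughput_mbps(65536, 0.280)  # ~1.87 Mbps on a 1 Gbps port
bdp_bytes(1e9, 0.280)                         # 35 MB of window needed to fill 1 Gbps
```

The gap between 1.87 Mbps and 1 Gbps is the reason parallel connections and window scaling matter on long paths.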
Shared bandwidth at the provider level is a less-discussed constraint. A VPS's 1Gbps port connects to switches and uplinks that are shared across many servers in the datacenter. During periods of high aggregate traffic, the available bandwidth per server may be lower than the advertised port speed. Premium providers invest in sufficient uplink capacity to avoid this saturation. Budget providers may not.
Packet loss has a disproportionate effect on TCP throughput. TCP interprets packet loss as a signal of congestion and reduces its transmission rate in response. A path with 1% packet loss can produce TCP throughput that is a fraction of the available bandwidth. For applications serving users over paths with variable quality — mobile networks, certain international routes — packet loss is a more significant constraint than raw bandwidth.
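The "fraction of available bandwidth" claim can be made concrete with the Mathis model, a widely used approximation for loss-limited TCP throughput (throughput ≈ (MSS / RTT) × C / √p, with C ≈ 1.22 for Reno-style congestion control). A sketch with illustrative path parameters:

```python
import math

def mathis_throughput_mbps(mss_bytes: int, rtt_s: float, loss_rate: float) -> float:
    """Mathis model estimate of loss-limited TCP throughput (C ~= 1.22 for Reno)."""
    return (mss_bytes * 8 / rtt_s) * (1.22 / math.sqrt(loss_rate)) / 1e6

# 1460-byte MSS, 80 ms RTT, 1% packet loss:
mathis_throughput_mbps(1460, 0.080, 0.01)  # ~1.78 Mbps, regardless of port speed
```

Modern congestion controllers such as BBR behave differently under loss, so treat this as an order-of-magnitude estimate, not a prediction.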
Where it breaks
Bandwidth figures from providers describe the local port capacity — the connection between the server and the datacenter's internal network. They don't describe the quality of the provider's upstream peering, the capacity of the routes to specific destinations, or the performance of the path to users in specific geographic regions. A provider with excellent local bandwidth and poor upstream peering delivers fast local transfers and slow user-facing responses.
In context
Metered bandwidth models charge for actual transfer volume beyond an included allocation. The risk is bill shock — a traffic spike from a viral post, a large file download, or a misconfigured crawler generates egress charges that weren't in the budget. The benefit is that steady-state costs are predictable, and low-traffic workloads pay only for what they use.
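The bill-shock risk is just the overage formula applied to an unexpected month. A sketch with a hypothetical $10/TB overage rate (actual rates vary widely by provider):

```python
def egress_bill(used_tb: float, included_tb: float, rate_per_tb: float) -> float:
    """Overage charge under a metered model: pay only for volume past the allocation."""
    return max(0.0, used_tb - included_tb) * rate_per_tb

# Steady month vs. a viral spike, hypothetical $10/TB overage:
egress_bill(0.8, 1.0, 10.0)   # $0 -- within the included allocation
egress_bill(14.0, 1.0, 10.0)  # $130 -- the spike's unbudgeted cost
```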
Unmetered bandwidth models advertise no usage caps, typically with a port speed limit instead. What 'unmetered' means in practice varies: some providers genuinely allow sustained full-port-speed transfer indefinitely; others enforce soft limits through acceptable use policies or traffic shaping at the switch level when aggregate datacenter traffic is high. The unmetered label requires reading the terms to understand what it actually permits.
For workloads that serve media, large file downloads, or high-volume API responses to users outside the provider's network, egress costs and throughput quality to specific regions matter more than port speed. For internal workloads or applications serving users close to the datacenter, the bandwidth allocation model matters more. These are different evaluations of the same spec.
From understanding to decision
Estimating monthly transfer volume — average request size multiplied by expected request count — converts the bandwidth question from a spec comparison to a cost calculation. For applications serving large assets to geographically distributed users, throughput quality to specific regions and egress pricing matter as much as port speed. For internal tools and low-traffic applications, the bandwidth question is nearly irrelevant.
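The volume estimate described above is a single multiplication. A sketch with illustrative numbers (2 MB average response, 100k requests/day):

```python
def monthly_volume_tb(avg_response_bytes: float, requests_per_day: float) -> float:
    """Estimated monthly egress: average response size x request count x 30 days."""
    return avg_response_bytes * requests_per_day * 30 / 1e12

monthly_volume_tb(2e6, 100_000)  # 6.0 TB/month
```

Comparing that figure against a plan's included allocation turns the spec-sheet question into a concrete cost question.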
© 2026 Softplorer