RAID Calculator – Capacity, Overhead & Fault Tolerance

Estimate usable capacity, redundancy overhead, and resilience for common RAID levels. Private by design—everything runs in your browser.

Inputs

Vendors use decimal units (1 TB = 1000 GB). Results show both GB and TB.

Results

Results will appear here after you calculate.

Understanding Results

  • Usable Capacity – Space available after parity/mirroring.
  • Overhead – Space reserved for redundancy.
  • Fault Tolerance – How many drives can fail without data loss (worst-case).
  • Efficiency – Usable / Raw capacity.

Note: Real-world results depend on filesystem formatting, controller behavior, and manufacturer units.
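The capacity math behind these definitions can be sketched in a few lines. This is an illustrative example, not the calculator's actual code; the function name, the `groups` parameter, and the n-way-mirror treatment of RAID 1 are assumptions.

```python
# Hypothetical sketch of per-level capacity math (decimal GB, identical drives).
# Names and the `groups` parameter are illustrative, not this tool's real code.
def raid_capacity(level: str, drive_gb: float, n: int, groups: int = 1) -> dict:
    """Return usable capacity, overhead, and efficiency for n identical drives.

    `groups` is the number of parity groups and only matters for RAID 50/60.
    """
    raw = drive_gb * n
    if level == "0":
        usable = raw                          # striping only, no redundancy
    elif level == "1":
        usable = drive_gb                     # n-way mirror keeps one copy's worth
    elif level == "5":
        usable = drive_gb * (n - 1)           # one drive's worth of parity
    elif level == "6":
        usable = drive_gb * (n - 2)           # two drives' worth of parity
    elif level == "10":
        usable = raw / 2                      # mirrored pairs halve raw capacity
    elif level == "50":
        usable = drive_gb * (n - groups)      # one parity drive per group
    elif level == "60":
        usable = drive_gb * (n - 2 * groups)  # two parity drives per group
    else:
        raise ValueError(f"unknown RAID level: {level}")
    return {"usable_gb": usable,
            "overhead_gb": raw - usable,
            "efficiency": usable / raw}
```

For example, four 4 TB drives in RAID 5 yield 12 000 GB usable at 75% efficiency; RAID 6 on the same drives yields 8 000 GB at 50%.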

Frequently Asked Questions

Do mixed drive sizes work?

Yes, but capacity in each redundancy set is limited by its smallest drive. This calculator conservatively treats every drive as the size of the smallest one.
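The smallest-drive convention amounts to one line of logic. A minimal sketch, with a hypothetical function name:

```python
# Illustrative sketch of the smallest-drive convention (name is hypothetical).
def effective_drive_gb(drive_sizes_gb: list[float]) -> float:
    """Mixed sizes: treat every member as the size of the smallest drive."""
    return min(drive_sizes_gb)

# A 4 TB + 6 TB + 8 TB RAID 5 set is treated like three 4 TB drives,
# so usable capacity is 4000 * (3 - 1) = 8000 GB.
```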

What are the minimum drive counts?

RAID 0: 2 · RAID 1: 2 · RAID 5: 3 · RAID 6: 4 · RAID 10: 4 (even) · RAID 50: ≥ 2 groups of ≥ 3 each · RAID 60: ≥ 2 groups of ≥ 4 each.
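These minimums can be checked programmatically. A hedged sketch of such validation, not the calculator's actual code (`validate`, `MINIMUMS`, and `groups` are illustrative names):

```python
# Sketch of minimum-drive-count validation; names are illustrative.
MINIMUMS = {"0": 2, "1": 2, "5": 3, "6": 4, "10": 4}

def validate(level: str, n: int, groups: int = 2) -> None:
    """Raise ValueError if the drive count is invalid for the RAID level."""
    if level in MINIMUMS and n < MINIMUMS[level]:
        raise ValueError(f"RAID {level} needs at least {MINIMUMS[level]} drives")
    if level == "10" and n % 2 != 0:
        raise ValueError("RAID 10 needs an even number of drives")
    if level == "50" and (groups < 2 or n < 3 * groups or n % groups != 0):
        raise ValueError("RAID 50 needs >= 2 equal groups of >= 3 drives each")
    if level == "60" and (groups < 2 or n < 4 * groups or n % groups != 0):
        raise ValueError("RAID 60 needs >= 2 equal groups of >= 4 drives each")
```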

Why is RAID 10 “1+” for fault tolerance?

Each mirrored pair can lose at most one drive; in the best case several drives can fail (one per pair), but in the worst case only one failure is tolerated if two failures land in the same pair.
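The best/worst-case range described above is easy to express directly. An illustrative sketch (the function name is hypothetical):

```python
# Sketch of RAID 10 fault-tolerance bounds (illustrative name).
def raid10_fault_tolerance(n_drives: int) -> tuple[int, int]:
    """Return (worst_case, best_case) survivable drive failures.

    Worst case: the second failure lands in an already-degraded pair -> 1.
    Best case: exactly one failure per mirrored pair -> n/2.
    """
    pairs = n_drives // 2
    return (1, pairs)
```

An 8-drive RAID 10 is therefore guaranteed to survive one failure, and can survive up to four if each failure hits a different pair.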

Is my data private?

Yes—calculations run entirely in your browser.

RAID Best Practices & Practical Standards

RAID improves availability and/or performance, but it is not a backup. Use the guidance below to plan safer, faster arrays.

1) Data Protection Basics

  • Follow 3-2-1 backups: 3 copies of data, on 2 different media, with 1 off-site/immutable copy.
  • Use a UPS to protect against write-hole corruption and sudden power loss. Prefer controllers or NVMe devices with power-loss protection.
  • Test restores, not just backups: Verify integrity and practice a restore workflow periodically.

2) Picking a RAID Level

  • Large SATA/NL-SAS drives (≥ 8 TB): prefer RAID 6 / RAIDZ2 or better; rebuilds take a long time, and the risk of hitting an unrecoverable read error (URE) grows with drive size.
  • Performance + safety: RAID 10 (striped mirrors) gives excellent IOPS and simple failure domains.
  • Capacity with resilience: RAID 6 (or RAID 60) balances usable space and fault tolerance for big pools.
  • Avoid RAID 0 for important data—zero fault tolerance.

3) Drive & Layout Considerations

  • Match technologies: Don’t mix SMR (shingled) and CMR (conventional) drives in the same set.
  • Use similar models/firmware to minimize variance during rebuilds.
  • Sector size alignment: Keep 4Kn/512e consistent; align partitions to 1 MiB boundaries.
  • Hot spares: Keep at least one global hot spare for big enclosures.
  • Cooling & vibration: Ensure good airflow and isolation.

4) Controller / Filesystem Settings

  • Write cache safety: Pair write-back cache with BBU/flash-backed cache—or disable volatile write-back.
  • TLER/ERC: Enable time-limited error recovery (TLER/ERC) on HDDs behind hardware RAID so a slow sector doesn’t cause the controller to drop the drive.
  • TRIM/Discard for SSD arrays: Enable periodic TRIM and leave 10–20% over-provisioning (OP).
  • Checksums & scrubs: Prefer filesystems with end-to-end checksums; schedule scrubs.

5) Rebuild Strategy

  • Prioritize resiliency during rebuilds: Temporarily throttle background jobs and raise rebuild priority if possible.
  • Keep cold spares on site when hot spares aren’t feasible.
  • Plan headroom: Leave 10–20% free capacity.

Minimums & sanity checks
  • RAID 0: ≥2 drives · RAID 1: 2 drives · RAID 5: ≥3 drives · RAID 6: ≥4 drives
  • RAID 10: ≥4 drives (even) · RAID 50: ≥2 groups of ≥3 · RAID 60: ≥2 groups of ≥4
  • Avoid mixing 4Kn and 512e; keep firmware families consistent.

Remember: RAID increases availability—but only backups protect against deletion, ransomware, or site loss.
