RAID Calculator – Capacity, Overhead & Fault Tolerance
Enter your drives below to estimate usable capacity, redundancy overhead, and resilience for common RAID levels. Everything runs in your browser.
Tip: Vendors use decimal units (1 TB = 1000 GB). Results show both GB and TB.
Results
Understanding Results
This calculator estimates the usable capacity and redundancy overhead for common RAID levels.
- Usable Capacity – Space available after parity/mirroring.
- Overhead – Space reserved for redundancy.
- Fault Tolerance – How many drives can fail without data loss (worst-case).
- Efficiency – Usable / Raw capacity.
Note: Real-world results depend on filesystem formatting, controller behavior, and manufacturer units.
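The arithmetic behind these four figures can be sketched in a few lines. This is an illustrative model only, not this site's actual code; `raid_usable` is a hypothetical helper, and drive sizes are in decimal TB.

```python
# Illustrative sketch of the capacity math behind the results above.
# `raid_usable` is a hypothetical helper, not this calculator's real code.

def raid_usable(level: str, drives: list[float], groups: int = 1) -> float:
    """Usable capacity in TB; the smallest drive governs every member."""
    n, d = len(drives), min(drives)
    if level == "0":   return n * d                 # striping: no redundancy
    if level == "1":   return d                     # mirroring: one copy usable
    if level == "5":   return (n - 1) * d           # one drive of parity
    if level == "6":   return (n - 2) * d           # two drives of parity
    if level == "10":  return (n // 2) * d          # half the drives are mirrors
    if level == "50":  return (n - groups) * d      # one parity drive per group
    if level == "60":  return (n - 2 * groups) * d  # two parity drives per group
    raise ValueError(f"unsupported level: {level}")

drives = [4.0, 4.0, 4.0, 6.0]          # mixed sizes: treated as 4 x 4 TB
raw = len(drives) * min(drives)
usable = raid_usable("6", drives)
print(f"usable={usable} TB  overhead={raw - usable} TB  "
      f"efficiency={usable / raw:.0%}")
# → usable=8.0 TB  overhead=8.0 TB  efficiency=50%
```

Overhead is simply raw minus usable, and efficiency is their ratio; note how the 6 TB drive contributes only 4 TB, per the mixed-size rule described below.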
Frequently Asked Questions
Do mixed drive sizes work?
Yes, but arrays are limited by the smallest drive in each redundancy set. We conservatively use the smallest drive across all disks.
What are the minimum drive counts?
RAID 0: 2 · RAID 1: 2 · RAID 5: 3 · RAID 6: 4 · RAID 10: 4 (even) · RAID 50: ≥ 2 groups of ≥ 3 each · RAID 60: ≥ 2 groups of ≥ 4 each.
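These minimums can be checked programmatically before committing drives to an array. The sketch below mirrors the table above; `check_layout` is a hypothetical validation helper, and RAID 50/60 are validated per group since their rules depend on the group layout.

```python
# Hedged sketch: validating minimum drive counts before building an array.
# `check_layout` is a hypothetical helper mirroring the table above.

MINIMUMS = {"0": 2, "1": 2, "5": 3, "6": 4}

def check_layout(level: str, n_drives: int, groups: int = 1) -> bool:
    if level == "10":
        return n_drives >= 4 and n_drives % 2 == 0   # even count required
    if level in ("50", "60"):
        min_per = 3 if level == "50" else 4          # per-group minimum
        return (groups >= 2 and n_drives % groups == 0
                and n_drives // groups >= min_per)
    return n_drives >= MINIMUMS[level]

print(check_layout("5", 3))      # True
print(check_layout("10", 5))     # False: odd drive count
print(check_layout("60", 8, 2))  # True: 2 groups of 4
```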
Why is RAID 10 “1+” for fault tolerance?
Each mirrored pair can lose at most one drive. In the best case, several drives can fail (one per pair) without data loss; in the worst case, a second failure lands in the same pair and destroys the array. Only one failure is therefore guaranteed to be tolerated.
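A small enumeration makes the "1+" concrete. This sketch assumes a hypothetical 6-drive RAID 10 laid out as three mirrored pairs and checks every possible failure combination.

```python
# Why RAID 10 tolerance is "1+": enumerate failures in a hypothetical
# 6-drive array laid out as 3 mirrored pairs.
from itertools import combinations

pairs = [(0, 1), (2, 3), (4, 5)]   # drive indices grouped into mirrors

def survives(failed: set) -> bool:
    # Data is lost only if some pair loses both of its drives.
    return not any(a in failed and b in failed for a, b in pairs)

for k in (1, 2, 3):
    outcomes = [survives(set(c)) for c in combinations(range(6), k)]
    print(f"{k} failures: worst case "
          f"{'OK' if all(outcomes) else 'LOSS'}, best case "
          f"{'OK' if any(outcomes) else 'LOSS'}")
```

One failure is always survivable; two or three failures survive only when each lands in a different mirror, which is exactly the best-case/worst-case asymmetry described above.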
Is my data private?
Yes—calculations run entirely in your browser.
RAID Best Practices & Practical Standards
RAID improves availability and/or performance, but it is not a backup. Use the guidance below to plan safer, faster arrays.
1) Data Protection Basics
- Follow 3-2-1 backups: 3 copies of data, on 2 different media, with 1 off-site/immutable copy.
- Use a UPS to protect against write-hole corruption and sudden power loss. Prefer controllers or NVMe devices with power-loss protection.
- Test restores, not just backups: Verify integrity and practice a restore workflow periodically.
2) Picking a RAID Level
- Large SATA/NL-SAS drives (≥ 8 TB): prefer RAID 6 / RAIDZ2 or better; rebuilds are long and URE risk increases with size.
- Performance + safety: RAID 10 (striped mirrors) gives excellent IOPS and simple failure domains.
- Capacity with resilience: RAID 6 (or RAID 60) balances usable space and fault tolerance for big pools.
- Avoid RAID 0 for important data—zero fault tolerance.
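The URE risk mentioned above can be quantified with a back-of-the-envelope model: the chance of reading every bit of the surviving drives without an unrecoverable read error. This assumes independent errors at the drive's rated URE rate (a simplification; real drives cluster errors), but it shows why single parity gets risky at large capacities.

```python
# Back-of-the-envelope model: probability of completing a rebuild without
# hitting an unrecoverable read error (URE). Assumes independent errors
# at the rated URE rate — a simplification, but it shows the trend.

def rebuild_success(drive_tb: float, drives_read: int,
                    ure_per_bit: float = 1e-14) -> float:
    bits_read = drives_read * drive_tb * 1e12 * 8   # decimal TB → bits
    return (1 - ure_per_bit) ** bits_read

# Rebuilding one failed member of a 4 x 8 TB RAID 5 reads the 3 survivors:
p5 = rebuild_success(8, 3)
print(f"RAID 5, 4 x 8 TB: {p5:.0%} chance of a clean rebuild")
# → roughly 15% with these assumptions; RAID 6's second parity drive
#   can repair such an error mid-rebuild, which is why it is preferred
#   for large drives.
```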
3) Drive & Layout Considerations
- Match technologies: Don’t mix SMR with CMR in the same set; SMR can degrade rebuilds and sustained writes.
- Use similar models/firmware to minimize performance variance and weird edge cases during rebuilds.
- Sector size alignment: Keep 4Kn/512e consistent across members; align partitions to 1 MiB boundaries to avoid RMW penalties.
- Hot spares: Keep at least one global hot spare for big enclosures to shorten time-to-recovery.
- Cooling & vibration: High temps and chassis vibration increase failure rates—ensure good airflow and isolation.
4) Controller / Filesystem Settings
- Write cache safety: If using a RAID controller write cache, pair it with BBU/flash-backed cache; otherwise disable volatile write-back.
- TLER/ERC: For HDDs behind hardware RAID, enable short error recovery (e.g., TLER/ERC) to prevent long drive timeouts.
- TRIM/Discard for SSD arrays: Enable periodic TRIM and leave 10–20% over-provisioning for endurance.
- Checksums & scrubs: Use filesystems with end-to-end checksums (e.g., ZFS, Btrfs) and schedule regular scrubs/patrol-reads.
5) Scrubbing, Patrol Reads & Monitoring
- Monthly scrubs (or quarterly for very large pools) help surface latent errors before a failure forces a rebuild.
- SMART monitoring: Watch Reallocated/Pending sectors, UDMA CRC errors, temperature, and sudden changes in these metrics.
- Alerts on failures/degrades: Email/Slack/Syslog notifications reduce Mean-Time-to-Repair (MTTR).
6) Rebuild Strategy
- Prioritize resiliency during rebuilds: Temporarily throttle background jobs, increase rebuild priority/IO if possible, and avoid heavy workloads.
- Keep cold spares on site if hot spares aren’t feasible—hours matter when degraded.
- Plan capacity headroom: Leave 10–20% free to maintain performance and reduce write amplification during rebuilds.
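A rough duration estimate shows why hours matter. This sketch assumes a sustained sequential rebuild rate; real rebuilds slow under concurrent workload, which is exactly why throttling background jobs helps.

```python
# Rough rebuild-duration estimate. Assumes an uninterrupted sequential
# rebuild rate; concurrent workload can stretch this considerably.

def rebuild_hours(drive_tb: float, rate_mb_s: float) -> float:
    return drive_tb * 1e6 / rate_mb_s / 3600   # decimal TB → MB → hours

print(f"{rebuild_hours(16, 150):.0f} h")   # 16 TB member at 150 MB/s
# → about 30 h — more than a day of degraded operation per failed drive.
```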
7) Practical “Rules of Thumb”
Minimums & sanity checks
- RAID 0: ≥2 drives · RAID 1: 2 drives · RAID 5: ≥3 drives · RAID 6: ≥4 drives
- RAID 10: ≥4 drives (even count) · RAID 50: ≥2 groups of ≥3 · RAID 60: ≥2 groups of ≥4
- Avoid mixing 4Kn and 512e members in the same set; keep firmware/drive families consistent.
Remember: RAID increases availability—but only backups protect against deletion, ransomware, or site loss.
8) Light-Touch Standards & References (Practical)
There isn’t a single universal RAID “law,” but common practice aligns with:
- Controller vendor guidance (e.g., minimum drives, cache/BBU requirements, patrol read/scrub cadence).
- Filesystem vendor guidance (ZFS/Btrfs docs for scrub frequency, checksum/repair behavior, recommended redundancy levels).
- Enterprise norms such as using dual-parity for large nearline HDD pools, verified backups (3-2-1), and UPS-backed write caches.