WindowsServerDocs/storage/storage-spaces/drive-symmetry-considerations.md (+3 −3)
@@ -60,13 +60,13 @@ To see why this happens, consider the following simplified illustration. Each co
As drawn, Server 1 (10 TB) and Server 2 (10 TB) are full. Server 3 has larger drives, therefore its total capacity is larger (15 TB). However, to store more three-way mirror data on Server 3 would require copies on Server 1 and Server 2 too, which are already full. The remaining 5 TB capacity on Server 3 can't be used – it's *stranded* capacity.
- :::image type="content" source="media/drive-symmetry-considerations/size-asymmetry-3n-stranded.png" alt-text="Three-way mirror, three servers, stranded capacity." lightbox="media/drive-symmetry-considerations/size-asymmetry-3n-stranded.png":::
+ :::image type="content" source="media/drive-symmetry-considerations/size-asymmetry-3n-stranded.png" alt-text="Three-way mirror, three servers, stranded capacity.":::
### Optimal placement
Conversely, with four servers of 10 TB, 10 TB, 10 TB, and 15 TB capacity and three-way mirror resiliency, it's possible to validly place copies in a way that uses all available capacity, as drawn. Whenever this is possible, the Storage Spaces Direct allocator finds and uses the optimal placement, leaving no stranded capacity.
- :::image type="content" source="media/drive-symmetry-considerations/size-asymmetry-4n-no-stranded.png" alt-text="Three-way mirror, four servers, no stranded capacity." lightbox="media/drive-symmetry-considerations/size-asymmetry-4n-no-stranded.png":::
+ :::image type="content" source="media/drive-symmetry-considerations/size-asymmetry-4n-no-stranded.png" alt-text="Three-way mirror, four servers, no stranded capacity.":::
The number of servers, the resiliency, the severity of the capacity imbalance, and other factors affect whether there's stranded capacity. **The most prudent general rule is to assume that only capacity available in every server is guaranteed to be usable.**
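That rule can be checked against a live pool. A minimal sketch, not part of the changed article, using standard Storage module cmdlets and assuming cache drives report a `Usage` of `Journal`; the figure is raw capacity before resiliency:

```powershell
# Estimate guaranteed-usable raw capacity as
# (smallest per-server raw capacity) x (number of servers).
$perNode = Get-StorageNode | ForEach-Object {
    $bytes = ($_ | Get-PhysicalDisk -PhysicallyConnected |
              Where-Object Usage -ne 'Journal' |   # skip cache drives (assumption)
              Measure-Object -Property Size -Sum).Sum
    [pscustomobject]@{ Node = $_.Name; RawBytes = $bytes }
}
$guaranteed = ($perNode | Measure-Object -Property RawBytes -Minimum).Minimum * @($perNode).Count
'{0:N1} TB of raw capacity is guaranteed usable' -f ($guaranteed / 1TB)
```

Depending on the cluster, the `Get-StorageNode` output may need filtering to one entry per server.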
@@ -76,7 +76,7 @@ Storage Spaces Direct can also withstand a cache imbalance across drives and acr
Using cache drives of different sizes may not improve cache performance uniformly or predictably: only IO to [drive bindings](cache.md#server-side-architecture) with larger cache drives may see improved performance. Storage Spaces Direct distributes IO evenly across bindings and doesn't discriminate based on cache-to-capacity ratio.
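To see how cache and capacity drives are spread across servers when reasoning about the imbalance scenarios above, a hedged sketch (again assuming cache drives report a `Usage` of `Journal`):

```powershell
# Count cache (Journal) and capacity drives attached to each node.
Get-StorageNode | ForEach-Object {
    $disks = $_ | Get-PhysicalDisk -PhysicallyConnected
    [pscustomobject]@{
        Node     = $_.Name
        Cache    = @($disks | Where-Object Usage -eq 'Journal').Count
        Capacity = @($disks | Where-Object Usage -ne 'Journal').Count
    }
} | Format-Table
```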
WindowsServerDocs/storage/storage-spaces/manage-volumes.md (+2 −2)
@@ -5,7 +5,7 @@ ms.topic: how-to
author: robinharwood
ms.author: roharwoo
ms.reviewer: jgerend
- ms.date: 07/10/2024
+ ms.date: 02/11/2025
---
# Manage volumes in Azure Stack HCI and Windows Server
@@ -57,7 +57,7 @@ Before you expand a volume, make sure you have enough capacity in the storage po
In Storage Spaces Direct, every volume is composed of several stacked objects: the cluster shared volume (CSV), which is a volume; the partition; the disk, which is a virtual disk; and one or more storage tiers (if applicable). To resize a volume, you need to resize several of these objects.
- :::image type="content" source="media/manage-volumes/volumes-in-smapi.png" alt-text="Diagram shows the layers of a volume, including cluster shard volume, volume, partition, disk, virtual disk, and storage tiers." lightbox="media/manage-volumes/volumes-in-smapi.png":::
+ :::image type="content" source="media/manage-volumes/volumes-in-smapi.png" alt-text="Diagram shows the layers of a volume, including cluster shard volume, volume, partition, disk, virtual disk, and storage tiers.":::
To familiarize yourself with them, try running the `Get-` cmdlet with the corresponding noun in PowerShell.
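A quick illustration of that suggestion, walking the layers named above; `Volume1` is a hypothetical friendly name, and the cmdlets are the standard FailoverClusters and Storage cmdlets:

```powershell
Get-ClusterSharedVolume                              # CSV layer
Get-Volume -FriendlyName 'Volume1'                   # the volume itself
Get-VirtualDisk -FriendlyName 'Volume1' |
    Get-Disk | Get-Partition                         # partition on the (virtual) disk
Get-VirtualDisk -FriendlyName 'Volume1'              # the virtual disk
Get-StorageTier -FriendlyName 'Volume1*'             # storage tiers, if any
```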
WindowsServerDocs/storage/storage-spaces/plan-volumes.md (+7 −7)
@@ -4,7 +4,7 @@ description: How to plan storage volumes on Azure Stack HCI and Windows Server c
author: robinharwood
ms.author: roharwoo
ms.topic: conceptual
- ms.date: 02/22/2024
+ ms.date: 02/11/2025
---
# Plan volumes on Azure Stack HCI and Windows Server clusters
@@ -23,7 +23,7 @@ Volumes are where you put the files your workloads need, such as VHD or VHDX fil
>[!NOTE]
> We use the term "volume" to refer jointly to the volume and the virtual disk under it, including functionality provided by other built-in Windows features such as Cluster Shared Volumes (CSV) and ReFS. Understanding these implementation-level distinctions is not necessary to plan and deploy Storage Spaces Direct successfully.
- :::image type="content" source="media/plan-volumes/what-are-volumes.png" alt-text="Diagram shows three folders labeled as volumes each associated with a virtual disk labeled as volumes, all associated with a common storage pool of disks." lightbox="media/plan-volumes/what-are-volumes.png":::
+ :::image type="content" source="media/plan-volumes/what-are-volumes.png" alt-text="Diagram shows three folders labeled as volumes each associated with a virtual disk labeled as volumes, all associated with a common storage pool of disks.":::
All volumes are accessible by all servers in the cluster at the same time. Once created, they show up at **C:\ClusterStorage\\** on all servers.
@@ -57,7 +57,7 @@ With two servers in the cluster, you can use two-way mirroring or you can use ne
Two-way mirroring keeps two copies of all data, one copy on the drives in each server. Its storage efficiency is 50 percent; to write 1 TB of data, you need at least 2 TB of physical storage capacity in the storage pool. Two-way mirroring can safely tolerate one hardware failure at a time (one server or drive).
- :::image type="content" source="media/plan-volumes/two-way-mirror.png" alt-text="Diagram shows volumes labeled data and copy connected by circular arrows and both volumes are associated with a bank of disks in servers." lightbox="media/plan-volumes/two-way-mirror.png":::
+ :::image type="content" source="media/plan-volumes/two-way-mirror.png" alt-text="Diagram shows volumes labeled data and copy connected by circular arrows and both volumes are associated with a bank of disks in servers.":::
Nested resiliency provides data resiliency between servers with two-way mirroring, then adds resiliency within a server with two-way mirroring or mirror-accelerated parity. Nesting provides data resilience even when one server is restarting or unavailable. Its storage efficiency is 25 percent with nested two-way mirroring and around 35-40 percent for nested mirror-accelerated parity. Nested resiliency can safely tolerate two hardware failures at a time (two drives, or a server and a drive on the remaining server). Because of this added data resilience, we recommend using nested resiliency on production deployments of two-server clusters. For more info, see [Nested resiliency](/windows-server/storage/storage-spaces/nested-resiliency).
@@ -67,15 +67,15 @@ Nested resiliency provides data resiliency between servers with two-way mirrorin
With three servers, you should use three-way mirroring for better fault tolerance and performance. Three-way mirroring keeps three copies of all data, one copy on the drives in each server. Its storage efficiency is 33.3 percent – to write 1 TB of data, you need at least 3 TB of physical storage capacity in the storage pool. Three-way mirroring can safely tolerate [at least two hardware problems (drive or server) at a time](/windows-server/storage/storage-spaces/storage-spaces-fault-tolerance#examples). If two nodes become unavailable, the storage pool loses quorum, since 2/3 of the disks aren't available, and the virtual disks are inaccessible. However, a node can be down and one or more disks on another node can fail and the virtual disks remain online. For example, if you're rebooting one server when suddenly another drive or server fails, all data remains safe and continuously accessible.
- :::image type="content" source="media/plan-volumes/three-way-mirror.png" alt-text="Diagram shows a volume labeled data and two labeled copy connected by circular arrows with each volume associated with a server containing physical disks." lightbox="media/plan-volumes/three-way-mirror.png":::
+ :::image type="content" source="media/plan-volumes/three-way-mirror.png" alt-text="Diagram shows a volume labeled data and two labeled copy connected by circular arrows with each volume associated with a server containing physical disks.":::
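For reference, a minimal sketch of creating such a volume (names and size are placeholders; with three or more servers, `Mirror` produces a three-way mirror by default):

```powershell
# Create a 1 TB three-way mirror volume on the Storage Spaces Direct pool.
New-Volume -StoragePoolFriendlyName 'S2D*' `
           -FriendlyName 'Volume1' `
           -FileSystem CSVFS_ReFS `
           -ResiliencySettingName Mirror `
           -Size 1TB
```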
### With four or more servers
With four or more servers, you can choose for each volume whether to use three-way mirroring, dual parity (often called "erasure coding"), or mix the two with mirror-accelerated parity.
Dual parity provides the same fault tolerance as three-way mirroring but with better storage efficiency. With four servers, its storage efficiency is 50.0 percent; to store 2 TB of data, you need 4 TB of physical storage capacity in the storage pool. This increases to 66.7 percent storage efficiency with seven servers, and continues up to 80.0 percent storage efficiency. The tradeoff is that parity encoding is more compute-intensive, which can limit its performance.
- :::image type="content" source="media/plan-volumes/dual-parity.png" alt-text="Diagram shows two volumes labeled data and two labeled parity connected by circular arrows with each volume associated with a server containing physical disks." lightbox="media/plan-volumes/dual-parity.png":::
+ :::image type="content" source="media/plan-volumes/dual-parity.png" alt-text="Diagram shows two volumes labeled data and two labeled parity connected by circular arrows with each volume associated with a server containing physical disks.":::
Which resiliency type to use depends on the performance and capacity requirements for your environment. Here's a table that summarizes the performance and storage efficiency of each resiliency type.
@@ -129,15 +129,15 @@ Size is distinct from volume's *footprint*, the total physical storage capacity
The footprints of your volumes need to fit in the storage pool.
- :::image type="content" source="media/plan-volumes/size-versus-footprint.png" alt-text="Diagram shows a 2 TB volume compared to a 6 TB footprint in the storage pool with a multiplier of three specified." lightbox="media/plan-volumes/size-versus-footprint.png":::
+ :::image type="content" source="media/plan-volumes/size-versus-footprint.png" alt-text="Diagram shows a 2 TB volume compared to a 6 TB footprint in the storage pool with a multiplier of three specified.":::
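The footprint relationship can be stated directly as footprint = size ÷ storage efficiency. A small sketch using the efficiencies cited earlier in the article (example numbers only, not output from a real pool):

```powershell
# Footprint = volume size / storage efficiency.
$efficiency = @{ TwoWayMirror = 0.50; ThreeWayMirror = 1/3; DualParityFourServers = 0.50 }
$sizeTB = 2
foreach ($kind in $efficiency.Keys) {
    '{0}: {1} TB volume -> {2:N1} TB footprint' -f $kind, $sizeTB, ($sizeTB / $efficiency[$kind])
}
```

A 2 TB three-way mirror volume works out to a 6 TB footprint, matching the diagram above.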
### Reserve capacity
Leaving some capacity in the storage pool unallocated gives volumes space to repair "in-place" after drives fail, improving data safety and performance. If there's sufficient capacity, an immediate, in-place, parallel repair can restore volumes to full resiliency even before the failed drives are replaced. This happens automatically.
We recommend reserving the equivalent of one capacity drive per server, up to 4 drives. You may reserve more at your discretion, but this minimum recommendation guarantees an immediate, in-place, parallel repair can succeed after the failure of any drive.
- :::image type="content" source="media/plan-volumes/reserve.png" alt-text="Diagram shows a volume associated with several disks in a storage pool and unassociated disks marked as reserve." lightbox="media/plan-volumes/reserve.png":::
+ :::image type="content" source="media/plan-volumes/reserve.png" alt-text="Diagram shows a volume associated with several disks in a storage pool and unassociated disks marked as reserve.":::
For example, if you have 2 servers and you're using 1 TB capacity drives, set aside 2 x 1 = 2 TB of the pool as reserve. If you have 3 servers and 1 TB capacity drives, set aside 3 x 1 = 3 TB as reserve. If you have 4 or more servers and 1 TB capacity drives, set aside 4 x 1 = 4 TB as reserve.
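The recommendation (one capacity drive per server, capped at four drives) is easy to compute. A hedged sketch, assuming all capacity drives are the same size and that cache drives report a `Usage` of `Journal`:

```powershell
# Reserve = (number of servers, capped at 4) x size of one capacity drive.
$servers   = (Get-StorageNode | Measure-Object).Count   # may need filtering to one entry per node
$driveSize = (Get-PhysicalDisk | Where-Object Usage -ne 'Journal' |
              Measure-Object -Property Size -Maximum).Maximum
$reserve   = [Math]::Min($servers, 4) * $driveSize
'Set aside about {0:N1} TB of the pool as reserve' -f ($reserve / 1TB)
```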
WindowsServerDocs/storage/storage-spaces/storage-spaces-direct-overview.md (+5 −5)
@@ -4,7 +4,7 @@ ms.author: cosdar
manager: dongill
ms.topic: article
author: cosmosdarwin
- ms.date: 01/03/2025
+ ms.date: 02/11/2025
ms.assetid: 8bd0d09a-0421-40a4-b752-40ecb5350ffd
description: An overview of Storage Spaces Direct, a feature of Windows Server and Azure Stack HCI that enables you to cluster servers with internal storage into a software-defined storage solution.
---
@@ -45,7 +45,7 @@ In these volumes, you can place your files, such as .vhd and .vhdx for VMs. You
The following section describes the features and components of a Storage Spaces Direct stack.
- :::image type="content" source="media/storage-spaces-direct-overview/converged-full-stack.png" alt-text="Storage Spaces Direct Stack." lightbox="media/storage-spaces-direct-overview/converged-full-stack.png":::
+ :::image type="content" source="media/storage-spaces-direct-overview/converged-full-stack.png" alt-text="Storage Spaces Direct Stack.":::
**Networking Hardware.** Storage Spaces Direct uses SMB3, including SMB Direct and SMB Multichannel, over Ethernet to communicate between servers. We strongly recommend using 10+ GbE with remote-direct memory access (RDMA), either iWARP or RoCE.
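Since the context here highlights the SMB3 and RDMA networking requirement, a small sketch for checking it on a node with built-in NetAdapter and SmbShare cmdlets (read-only; interpret the output for your hardware):

```powershell
Get-NetAdapterRdma                      # which adapters have RDMA enabled
Get-SmbClientNetworkInterface           # RDMA/RSS capability as SMB Multichannel sees it
Get-SmbMultichannelConnection           # active SMB Multichannel connections
```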
@@ -102,13 +102,13 @@ Storage Spaces Direct supports the following two deployment options:
In a hyperconverged deployment, you use a single cluster for both compute and storage. The hyperconverged deployment option runs Hyper-V virtual machines or SQL Server databases directly on the servers providing the storage—storing their files on the local volumes. This eliminates the need to configure file server access and permissions, which in turn reduces hardware costs for small-to-medium business and remote or branch office deployments. To deploy Storage Spaces Direct on Windows Server, see [Deploy Storage Spaces Direct on Windows Server](/windows-server/storage/storage-spaces/deploy-storage-spaces-direct). To deploy Storage Spaces Direct as part of Azure Stack HCI, see [What is the deployment process for Azure Stack HCI?](/azure-stack/hci/deploy/operating-system)
- :::image type="content" source="media/storage-spaces-direct-overview/hyper-converged-minimal.png" alt-text="[Storage Spaces Direct serves storage to Hyper-V VMs in the same cluster.]" lightbox="media/storage-spaces-direct-overview/hyper-converged-minimal.png":::
+ :::image type="content" source="media/storage-spaces-direct-overview/hyper-converged-minimal.png" alt-text="Storage Spaces Direct serves storage to Hyper-V VMs in the same cluster.":::
### Converged deployment
In a converged deployment, you use separate clusters for storage and compute. The converged deployment option, also known as 'disaggregated,' layers a Scale-out File Server (SoFS) atop Storage Spaces Direct to provide network-attached storage over SMB3 file shares. This allows for scaling compute and workload independently from the storage cluster, essential for larger-scale deployments such as Hyper-V IaaS (Infrastructure as a Service) for service providers and enterprises.
- :::image type="content" source="media/storage-spaces-direct-overview/converged-minimal.png" alt-text="Storage Spaces Direct serves storage using the Scale-Out File Server feature to Hyper-V VMs in another server or cluster." lightbox="media/storage-spaces-direct-overview/converged-minimal.png":::
+ :::image type="content" source="media/storage-spaces-direct-overview/converged-minimal.png" alt-text="Storage Spaces Direct serves storage using the Scale-Out File Server feature to Hyper-V VMs in another server or cluster.":::
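As a companion to the converged description above, a minimal sketch of publishing a CSV-backed folder through an SMB share once a Scale-Out File Server role already exists (share name, path, and account are placeholders, and the folder must already exist):

```powershell
# Expose a folder on a cluster shared volume as a continuously available SMB share.
New-SmbShare -Name 'VMStore' `
             -Path 'C:\ClusterStorage\Volume1\Shares\VMStore' `
             -FullAccess 'CONTOSO\Hyper-V-Hosts' `
             -ContinuouslyAvailable $true
```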
## Manage and monitor
@@ -145,7 +145,7 @@ There are [over 10,000 clusters](https://techcommunity.microsoft.com/t5/storage-
Visit [Microsoft.com/HCI](https://www.microsoft.com/hci) to read their stories.
- :::image type="content" source="media/storage-spaces-direct-overview/customer-stories.png" alt-text="Grid of customer logos." link="https://azure.microsoft.com/products/azure-stack/hci/" lightbox="media/storage-spaces-direct-overview/customer-stories.png":::
+ <!-->:::image type="content" source="media/storage-spaces-direct-overview/customer-stories.png" alt-text="Grid of customer logos." link="https://azure.microsoft.com/products/azure-stack/hci/":::-->
includes/create-volumes-with-nested-resiliency.md (+7 −4)
@@ -2,7 +2,7 @@
author: ManikaDhiman
ms.author: v-manidhiman
ms.topic: include
- ms.date: 01/23/2025
+ ms.date: 02/11/2025
---
You can use familiar storage cmdlets in PowerShell to create volumes with nested resiliency, as described in the following section.
@@ -12,21 +12,24 @@ You can use familiar storage cmdlets in PowerShell to create volumes with nested
Windows Server 2019 requires you to create new storage tier templates using the `New-StorageTier` cmdlet before creating volumes. You only need to do this once, and then every new volume you create can reference these templates.
> [!NOTE]
- > If you're running Windows Server 2022, Azure Stack HCI21H2, or Azure Stack HCI 20H2, you can skip this step.
+ > If you're running Windows Server 2022, Azure Stack HCI, version 21H2, or Azure Stack HCI, version 20H2, you can skip this step.
Specify the `-MediaType` of your capacity drives and, optionally, the `-FriendlyName` of your choice. Don't modify other parameters.
For example, if your capacity drives are hard disk drives (HDD), launch PowerShell as Administrator and run the following cmdlets.
If your capacity drives are solid-state drives (SSD), set the `-MediaType` to `SSD` instead and change the `-FriendlyName` to `*OnSSD`. Don't modify other parameters.
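The `New-StorageTier` commands themselves are collapsed out of this diff view. As an illustration only — not the hidden lines — a nested two-way mirror tier on HDD capacity drives has roughly this shape; the friendly name is the conventional one and `-NumberOfDataCopies 4` reflects two copies per server across a two-server cluster:

```powershell
# Illustrative sketch; see the nested resiliency article for the exact tier definitions.
New-StorageTier -StoragePoolFriendlyName 'S2D*' `
                -FriendlyName 'NestedMirrorOnHDD' `
                -ResiliencySettingName Mirror `
                -MediaType HDD `
                -NumberOfDataCopies 4
```

As the context line above notes, an SSD-based pool would use `-MediaType SSD` and a `*OnSSD` friendly name instead.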
> [!TIP]
@@ -64,7 +67,7 @@ Volumes that use nested resiliency appear in [Windows Admin Center](/windows-ser
### Optional: Extend to cache drives
- With its default settings, nested resiliency protects against the loss of multiple capacity drives at the same time, or one server and one capacity drive at the same time. To extend this protection to [cache drives](/azure-stack/hci/concepts/cache), there's another consideration: because cache drives often provide read and write caching for multiple capacity drives, the only way to ensure you can tolerate the loss of a cache drive when the other server is down is to not cache writes, but that impacts performance.
+ With its default settings, nested resiliency protects against the loss of multiple capacity drives at the same time, or one server and one capacity drive at the same time. To extend this protection to [cache drives](../WindowsServerDocs/storage/storage-spaces/cache.md), there's another consideration: because cache drives often provide read and write caching for multiple capacity drives, the only way to ensure you can tolerate the loss of a cache drive when the other server is down is to not cache writes, but that impacts performance.
To address this scenario, Storage Spaces Direct offers the option to automatically disable write caching when one server in a two-server cluster is down, and then re-enable write caching once the server is back up. To allow routine restarts without performance impact, write caching isn't disabled until the server has been down for 30 minutes. Once write caching is disabled, the contents of the write cache are written to capacity devices. After this, the server can tolerate a failed cache device in the online server, though reads from the cache might be delayed or fail if a cache device fails.