
Commit c978cdf

Addressed review feedback

1 parent ce62235 commit c978cdf

6 files changed: +26 −23 lines

WindowsServerDocs/storage/storage-spaces/cache.md

+2 −2

@@ -1,10 +1,10 @@
 ---
 title: Understanding the storage pool cache in Azure Stack HCI and Windows Server clusters
-description: How read and write caching works to accelerate performance in Storage Spaces Direct.
+description: How to use read and write caching to accelerate performance in Storage Spaces Direct.
 author: robinharwood
 ms.author: roharwoo
 ms.topic: conceptual
-ms.date: 02/10/2025
+ms.date: 02/11/2025
 ---

 # Understanding the storage pool cache

WindowsServerDocs/storage/storage-spaces/drive-symmetry-considerations.md

+3 −3

@@ -60,13 +60,13 @@ To see why this happens, consider the following simplified illustration. Each co

 As drawn, Server 1 (10 TB) and Server 2 (10 TB) are full. Server 3 has larger drives, therefore its total capacity is larger (15 TB). However, to store more three-way mirror data on Server 3 would require copies on Server 1 and Server 2 too, which are already full. The remaining 5 TB capacity on Server 3 can't be used – it's *stranded* capacity.

-:::image type="content" source="media/drive-symmetry-considerations/size-asymmetry-3n-stranded.png" alt-text="Three-way mirror, three servers, stranded capacity." lightbox="media/drive-symmetry-considerations/size-asymmetry-3n-stranded.png":::
+:::image type="content" source="media/drive-symmetry-considerations/size-asymmetry-3n-stranded.png" alt-text="Three-way mirror, three servers, stranded capacity.":::

 ### Optimal placement

 Conversely, with four servers of 10 TB, 10 TB, 10 TB, and 15 TB capacity and three-way mirror resiliency, it's possible to validly place copies in a way that uses all available capacity, as drawn. Whenever this is possible, the Storage Spaces Direct allocator finds and uses the optimal placement, leaving no stranded capacity.

-:::image type="content" source="media/drive-symmetry-considerations/size-asymmetry-4n-no-stranded.png" alt-text="Three-way mirror, four servers, no stranded capacity." lightbox="media/drive-symmetry-considerations/size-asymmetry-4n-no-stranded.png":::
+:::image type="content" source="media/drive-symmetry-considerations/size-asymmetry-4n-no-stranded.png" alt-text="Three-way mirror, four servers, no stranded capacity.":::

 The number of servers, the resiliency, the severity of the capacity imbalance, and other factors affect whether there's stranded capacity. **The most prudent general rule is to assume that only capacity available in every server is guaranteed to be usable.**

@@ -76,7 +76,7 @@ Storage Spaces Direct can also withstand a cache imbalance across drives and acr

 Using cache drives of different sizes may not improve cache performance uniformly or predictably: only IO to [drive bindings](cache.md#server-side-architecture) with larger cache drives may see improved performance. Storage Spaces Direct distributes IO evenly across bindings and doesn't discriminate based on cache-to-capacity ratio.

-:::image type="content" source="media/drive-symmetry-considerations/cache-asymmetry.png" alt-text="Cache imbalance." lightbox="media/drive-symmetry-considerations/cache-asymmetry.png":::
+:::image type="content" source="media/drive-symmetry-considerations/cache-asymmetry.png" alt-text="Cache imbalance.":::

 > [!TIP]
 > See [Understanding the storage pool cache](cache.md) to learn more about cache bindings.

WindowsServerDocs/storage/storage-spaces/manage-volumes.md

+2 −2

@@ -5,7 +5,7 @@ ms.topic: how-to
 author: robinharwood
 ms.author: roharwoo
 ms.reviewer: jgerend
-ms.date: 07/10/2024
+ms.date: 02/11/2025
 ---

 # Manage volumes in Azure Stack HCI and Windows Server

@@ -57,7 +57,7 @@ Before you expand a volume, make sure you have enough capacity in the storage po

 In Storage Spaces Direct, every volume is composed of several stacked objects: the cluster shared volume (CSV), which is a volume; the partition; the disk, which is a virtual disk; and one or more storage tiers (if applicable). To resize a volume, you need to resize several of these objects.

-:::image type="content" source="media/manage-volumes/volumes-in-smapi.png" alt-text="Diagram shows the layers of a volume, including cluster shard volume, volume, partition, disk, virtual disk, and storage tiers." lightbox="media/manage-volumes/volumes-in-smapi.png":::
+:::image type="content" source="media/manage-volumes/volumes-in-smapi.png" alt-text="Diagram shows the layers of a volume, including cluster shared volume, volume, partition, disk, virtual disk, and storage tiers.":::

 To familiarize yourself with them, try running the `Get-` cmdlet with the corresponding noun in PowerShell.
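
For example, a quick sketch of what that looks like on a Storage Spaces Direct node (assuming the Storage and FailoverClusters PowerShell modules are available):

```powershell
# One Get- cmdlet per layer of the volume stack
Get-ClusterSharedVolume   # the cluster shared volume (CSV)
Get-Volume                # the volume
Get-Partition             # the partition
Get-Disk                  # the disk
Get-VirtualDisk           # the virtual disk
Get-StorageTier           # the storage tiers, if applicable
```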

WindowsServerDocs/storage/storage-spaces/plan-volumes.md

+7 −7

@@ -4,7 +4,7 @@ description: How to plan storage volumes on Azure Stack HCI and Windows Server c
 author: robinharwood
 ms.author: roharwoo
 ms.topic: conceptual
-ms.date: 02/22/2024
+ms.date: 02/11/2025
 ---

 # Plan volumes on Azure Stack HCI and Windows Server clusters

@@ -23,7 +23,7 @@ Volumes are where you put the files your workloads need, such as VHD or VHDX fil
 >[!NOTE]
 > We use the term "volume" to refer jointly to the volume and the virtual disk under it, including functionality provided by other built-in Windows features such as Cluster Shared Volumes (CSV) and ReFS. Understanding these implementation-level distinctions is not necessary to plan and deploy Storage Spaces Direct successfully.

-:::image type="content" source="media/plan-volumes/what-are-volumes.png" alt-text="Diagram shows three folders labeled as volumes each associated with a virtual disk labeled as volumes, all associated with a common storage pool of disks." lightbox="media/plan-volumes/what-are-volumes.png":::
+:::image type="content" source="media/plan-volumes/what-are-volumes.png" alt-text="Diagram shows three folders labeled as volumes each associated with a virtual disk labeled as volumes, all associated with a common storage pool of disks.":::

 All volumes are accessible by all servers in the cluster at the same time. Once created, they show up at **C:\ClusterStorage\\** on all servers.

@@ -57,7 +57,7 @@ With two servers in the cluster, you can use two-way mirroring or you can use ne

 Two-way mirroring keeps two copies of all data, one copy on the drives in each server. Its storage efficiency is 50 percent; to write 1 TB of data, you need at least 2 TB of physical storage capacity in the storage pool. Two-way mirroring can safely tolerate one hardware failure at a time (one server or drive).

-:::image type="content" source="media/plan-volumes/two-way-mirror.png" alt-text="Diagram shows volumes labeled data and copy connected by circular arrows and both volumes are associated with a bank of disks in servers." lightbox="media/plan-volumes/two-way-mirror.png":::
+:::image type="content" source="media/plan-volumes/two-way-mirror.png" alt-text="Diagram shows volumes labeled data and copy connected by circular arrows and both volumes are associated with a bank of disks in servers.":::
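
As a concrete illustration, here's a minimal PowerShell sketch of creating a two-way mirror volume (the pool name `S2D*` and the volume name are assumptions; adjust for your deployment):

```powershell
# 1 TB two-way mirror volume: 2 TB footprint in the pool
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume01" `
    -FileSystem CSVFS_ReFS -Size 1TB `
    -ResiliencySettingName Mirror -PhysicalDiskRedundancy 1
```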

 Nested resiliency provides data resiliency between servers with two-way mirroring, then adds resiliency within a server with two-way mirroring or mirror-accelerated parity. Nesting provides data resilience even when one server is restarting or unavailable. Its storage efficiency is 25 percent with nested two-way mirroring and around 35-40 percent for nested mirror-accelerated parity. Nested resiliency can safely tolerate two hardware failures at a time (two drives, or a server and a drive on the remaining server). Because of this added data resilience, we recommend using nested resiliency on production deployments of two-server clusters. For more info, see [Nested resiliency](/windows-server/storage/storage-spaces/nested-resiliency).

@@ -67,15 +67,15 @@ Nested resiliency provides data resiliency between servers with two-way mirrorin

 With three servers, you should use three-way mirroring for better fault tolerance and performance. Three-way mirroring keeps three copies of all data, one copy on the drives in each server. Its storage efficiency is 33.3 percent – to write 1 TB of data, you need at least 3 TB of physical storage capacity in the storage pool. Three-way mirroring can safely tolerate [at least two hardware problems (drive or server) at a time](/windows-server/storage/storage-spaces/storage-spaces-fault-tolerance#examples). If two nodes become unavailable, the storage pool loses quorum, since 2/3 of the disks aren't available, and the virtual disks are inaccessible. However, a node can be down and one or more disks on another node can fail and the virtual disks remain online. For example, if you're rebooting one server when suddenly another drive or server fails, all data remains safe and continuously accessible.

-:::image type="content" source="media/plan-volumes/three-way-mirror.png" alt-text="Diagram shows a volume labeled data and two labeled copy connected by circular arrows with each volume associated with a server containing physical disks." lightbox="media/plan-volumes/three-way-mirror.png":::
+:::image type="content" source="media/plan-volumes/three-way-mirror.png" alt-text="Diagram shows a volume labeled data and two labeled copy connected by circular arrows with each volume associated with a server containing physical disks.":::
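
The three-way mirror equivalent is the same sketch with one more copy (same assumed pool and naming as above):

```powershell
# 1 TB three-way mirror volume: 3 TB footprint in the pool
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume02" `
    -FileSystem CSVFS_ReFS -Size 1TB `
    -ResiliencySettingName Mirror -PhysicalDiskRedundancy 2
```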

 ### With four or more servers

 With four or more servers, you can choose for each volume whether to use three-way mirroring, dual parity (often called "erasure coding"), or to mix the two with mirror-accelerated parity.

 Dual parity provides the same fault tolerance as three-way mirroring but with better storage efficiency. With four servers, its storage efficiency is 50.0 percent; to store 2 TB of data, you need 4 TB of physical storage capacity in the storage pool. This increases to 66.7 percent storage efficiency with seven servers, and continues up to 80.0 percent storage efficiency. The tradeoff is that parity encoding is more compute-intensive, which can limit its performance.

-:::image type="content" source="media/plan-volumes/dual-parity.png" alt-text="Diagram shows two volumes labeled data and two labeled parity connected by circular arrows with each volume associated with a server containing physical disks." lightbox="media/plan-volumes/dual-parity.png":::
+:::image type="content" source="media/plan-volumes/dual-parity.png" alt-text="Diagram shows two volumes labeled data and two labeled parity connected by circular arrows with each volume associated with a server containing physical disks.":::
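
A dual parity volume follows the same pattern (illustrative names again; dual parity requires four or more servers):

```powershell
# 1 TB dual parity volume: 2 TB footprint with four servers
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume03" `
    -FileSystem CSVFS_ReFS -Size 1TB `
    -ResiliencySettingName Parity -PhysicalDiskRedundancy 2
```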

 Which resiliency type to use depends on the performance and capacity requirements for your environment. Here's a table that summarizes the performance and storage efficiency of each resiliency type.

@@ -129,15 +129,15 @@ Size is distinct from volume's *footprint*, the total physical storage capacity

 The footprints of your volumes need to fit in the storage pool.

-:::image type="content" source="media/plan-volumes/size-versus-footprint.png" alt-text="Diagram shows a 2 TB volume compared to a 6 TB footprint in the storage pool with a multiplier of three specified." lightbox="media/plan-volumes/size-versus-footprint.png":::
+:::image type="content" source="media/plan-volumes/size-versus-footprint.png" alt-text="Diagram shows a 2 TB volume compared to a 6 TB footprint in the storage pool with a multiplier of three specified.":::
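
To sanity-check that a planned footprint fits, you can compare it against the pool's total and already-allocated capacity; a quick sketch (pool name assumed):

```powershell
# Total versus allocated capacity of the storage pool
Get-StoragePool -FriendlyName "S2D*" |
    Select-Object FriendlyName, Size, AllocatedSize
```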

 ### Reserve capacity

 Leaving some capacity in the storage pool unallocated gives volumes space to repair "in-place" after drives fail, improving data safety and performance. If there's sufficient capacity, an immediate, in-place, parallel repair can restore volumes to full resiliency even before the failed drives are replaced. This happens automatically.

 We recommend reserving the equivalent of one capacity drive per server, up to 4 drives. You may reserve more at your discretion, but this minimum recommendation guarantees an immediate, in-place, parallel repair can succeed after the failure of any drive.

-:::image type="content" source="media/plan-volumes/reserve.png" alt-text="Diagram shows a volume associated with several disks in a storage pool and unassociated disks marked as reserve." lightbox="media/plan-volumes/reserve.png":::
+:::image type="content" source="media/plan-volumes/reserve.png" alt-text="Diagram shows a volume associated with several disks in a storage pool and unassociated disks marked as reserve.":::

 For example, if you have 2 servers and you're using 1 TB capacity drives, set aside 2 x 1 = 2 TB of the pool as reserve. If you have 3 servers and 1 TB capacity drives, set aside 3 x 1 = 3 TB as reserve. If you have 4 or more servers and 1 TB capacity drives, set aside 4 x 1 = 4 TB as reserve.

WindowsServerDocs/storage/storage-spaces/storage-spaces-direct-overview.md

+5 −5

@@ -4,7 +4,7 @@ ms.author: cosdar
 manager: dongill
 ms.topic: article
 author: cosmosdarwin
-ms.date: 01/03/2025
+ms.date: 02/11/2025
 ms.assetid: 8bd0d09a-0421-40a4-b752-40ecb5350ffd
 description: An overview of Storage Spaces Direct, a feature of Windows Server and Azure Stack HCI that enables you to cluster servers with internal storage into a software-defined storage solution.
 ---

@@ -45,7 +45,7 @@ In these volumes, you can place your files, such as .vhd and .vhdx for VMs. You

 The following section describes the features and components of a Storage Spaces Direct stack.

-:::image type="content" source="media/storage-spaces-direct-overview/converged-full-stack.png" alt-text="Storage Spaces Direct Stack." lightbox="media/storage-spaces-direct-overview/converged-full-stack.png":::
+:::image type="content" source="media/storage-spaces-direct-overview/converged-full-stack.png" alt-text="Storage Spaces Direct Stack.":::

 **Networking Hardware.** Storage Spaces Direct uses SMB3, including SMB Direct and SMB Multichannel, over Ethernet to communicate between servers. We strongly recommend using 10+ GbE with remote direct memory access (RDMA), either iWARP or RoCE.

@@ -102,13 +102,13 @@ Storage Spaces Direct supports the following two deployment options:

 In a hyperconverged deployment, you use a single cluster for both compute and storage. The hyperconverged deployment option runs Hyper-V virtual machines or SQL Server databases directly on the servers providing the storage, storing their files on the local volumes. This eliminates the need to configure file server access and permissions, which in turn reduces hardware costs for small-to-medium business and remote or branch office deployments. To deploy Storage Spaces Direct on Windows Server, see [Deploy Storage Spaces Direct on Windows Server](/windows-server/storage/storage-spaces/deploy-storage-spaces-direct). To deploy Storage Spaces Direct as part of Azure Stack HCI, see [What is the deployment process for Azure Stack HCI?](/azure-stack/hci/deploy/operating-system)

-:::image type="content" source="media/storage-spaces-direct-overview/hyper-converged-minimal.png" alt-text="[Storage Spaces Direct serves storage to Hyper-V VMs in the same cluster.]" lightbox="media/storage-spaces-direct-overview/hyper-converged-minimal.png":::
+:::image type="content" source="media/storage-spaces-direct-overview/hyper-converged-minimal.png" alt-text="Storage Spaces Direct serves storage to Hyper-V VMs in the same cluster.":::

 ### Converged deployment

 In a converged deployment, you use separate clusters for storage and compute. The converged deployment option, also known as 'disaggregated,' layers a Scale-out File Server (SoFS) atop Storage Spaces Direct to provide network-attached storage over SMB3 file shares. This allows for scaling compute and workload independently from the storage cluster, which is essential for larger-scale deployments such as Hyper-V IaaS (Infrastructure as a Service) for service providers and enterprises.

-:::image type="content" source="media/storage-spaces-direct-overview/converged-minimal.png" alt-text="Storage Spaces Direct serves storage using the Scale-Out File Server feature to Hyper-V VMs in another server or cluster." lightbox="media/storage-spaces-direct-overview/converged-minimal.png":::
+:::image type="content" source="media/storage-spaces-direct-overview/converged-minimal.png" alt-text="Storage Spaces Direct serves storage using the Scale-Out File Server feature to Hyper-V VMs in another server or cluster.":::

 ## Manage and monitor

@@ -145,7 +145,7 @@ There are [over 10,000 clusters](https://techcommunity.microsoft.com/t5/storage-

 Visit [Microsoft.com/HCI](https://www.microsoft.com/hci) to read their stories.

-:::image type="content" source="media/storage-spaces-direct-overview/customer-stories.png" alt-text="Grid of customer logos." link="https://azure.microsoft.com/products/azure-stack/hci/" lightbox="media/storage-spaces-direct-overview/customer-stories.png":::
+<!-- :::image type="content" source="media/storage-spaces-direct-overview/customer-stories.png" alt-text="Grid of customer logos." link="https://azure.microsoft.com/products/azure-stack/hci/"::: -->

 ## Additional references

includes/create-volumes-with-nested-resiliency.md

+7 −4

@@ -2,7 +2,7 @@
 author: ManikaDhiman
 ms.author: v-manidhiman
 ms.topic: include
-ms.date: 01/23/2025
+ms.date: 02/11/2025
 ---

 You can use familiar storage cmdlets in PowerShell to create volumes with nested resiliency, as described in the following section.

@@ -12,21 +12,24 @@ You can use familiar storage cmdlets in PowerShell to create volumes with nested

 Windows Server 2019 requires you to create new storage tier templates using the `New-StorageTier` cmdlet before creating volumes. You only need to do this once, and then every new volume you create can reference these templates.

 > [!NOTE]
-> If you're running Windows Server 2022, Azure Stack HCI 21H2, or Azure Stack HCI 20H2, you can skip this step.
+> If you're running Windows Server 2022, Azure Stack HCI, version 21H2, or Azure Stack HCI, version 20H2, you can skip this step.

 Specify the `-MediaType` of your capacity drives and, optionally, the `-FriendlyName` of your choice. Don't modify other parameters.

 For example, if your capacity drives are hard disk drives (HDD), launch PowerShell as Administrator and run the following cmdlets.

 To create a NestedMirror tier:

-```PowerShell
+```powershell
 New-StorageTier -StoragePoolFriendlyName S2D* -FriendlyName NestedMirrorOnHDD -ResiliencySettingName Mirror -MediaType HDD -NumberOfDataCopies 4
 ```
+
 To create a NestedParity tier:
+
 ```powershell
 New-StorageTier -StoragePoolFriendlyName S2D* -FriendlyName NestedParityOnHDD -ResiliencySettingName Parity -MediaType HDD -NumberOfDataCopies 2 -PhysicalDiskRedundancy 1 -NumberOfGroups 1 -FaultDomainAwareness StorageScaleUnit -ColumnIsolation PhysicalDisk
 ```
+
 If your capacity drives are solid-state drives (SSD), set the `-MediaType` to `SSD` instead and change the `-FriendlyName` to `*OnSSD`. Don't modify other parameters.
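
For instance, the NestedMirror tier template on SSD capacity drives would look like this (a sketch derived directly from the HDD example above by applying the substitution just described):

```powershell
New-StorageTier -StoragePoolFriendlyName S2D* -FriendlyName NestedMirrorOnSSD -ResiliencySettingName Mirror -MediaType SSD -NumberOfDataCopies 4
```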

 > [!TIP]

@@ -64,7 +67,7 @@ Volumes that use nested resiliency appear in [Windows Admin Center](/windows-ser

 ### Optional: Extend to cache drives

-With its default settings, nested resiliency protects against the loss of multiple capacity drives at the same time, or one server and one capacity drive at the same time. To extend this protection to [cache drives](/azure-stack/hci/concepts/cache), there's another consideration: because cache drives often provide read and write caching for multiple capacity drives, the only way to ensure you can tolerate the loss of a cache drive when the other server is down is to not cache writes, but that impacts performance.
+With its default settings, nested resiliency protects against the loss of multiple capacity drives at the same time, or one server and one capacity drive at the same time. To extend this protection to [cache drives](../WindowsServerDocs/storage/storage-spaces/cache.md), there's another consideration: because cache drives often provide read and write caching for multiple capacity drives, the only way to ensure you can tolerate the loss of a cache drive when the other server is down is to not cache writes, but that impacts performance.

 To address this scenario, Storage Spaces Direct offers the option to automatically disable write caching when one server in a two-server cluster is down, and then re-enable write caching once the server is back up. To allow routine restarts without performance impact, write caching isn't disabled until the server has been down for 30 minutes. Once write caching is disabled, the contents of the write cache are written to capacity devices. After this, the server can tolerate a failed cache device in the online server, though reads from the cache might be delayed or fail if a cache device fails.
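
Enabling that behavior is a single health setting; a hedged sketch follows (the setting name is an assumption recalled from the nested resiliency guidance, not confirmed by this diff, so verify it against the current docs before use):

```powershell
# Assumed setting name; confirm in the nested resiliency documentation
Get-StorageSubSystem Cluster* |
    Set-StorageHealthSetting -Name "System.Storage.NestedResiliency.DisableWriteCacheOnNodeDown.Enabled" -Value "True"
```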
