Commit a24492b

Merge pull request #781 from filecoin-project/phi/add-deprecation-notice-cluster
chore: add deprecated notice to Raft Cluster
2 parents: ef1e3c1 + a4e2ec7

File tree

1 file changed: +5 −138 lines
---
title: "Lotus Node Clusters (Deprecated)"
description: "Lotus node clusters were an experimental feature introduced in version 1.19.0 that provided redundant Lotus node cluster raft consensus, but this feature has been removed as of v1.27.0."
draft: false
aliases:
- /lotus/manage/chain-management
weight: 325
toc: true
---

Version 1.19.0 introduced redundant Lotus node cluster raft consensus in order to maintain consistent state for nonces and messages being published in the event of a Lotus node failure.
{{< alert icon="warning" >}}
A minimum of three Lotus nodes is required to enable and use Lotus node clusters.
{{< /alert >}}
### Configure the original Lotus node

**This document assumes that the reader is already fully operational with at least one Lotus node and miner instance.**

1. Stop both the miner and daemon instances.
2. Browse to your `/.lotus` repo folder and edit the `config.toml` file, changing `[API] ListenAddress` and `[Libp2p] ListenAddresses`:
```toml
[API]
ListenAddress = "/ip4/127.0.0.1/tcp/4567/http"
```
```toml
[Libp2p]
ListenAddresses = ["/ip4/0.0.0.0/tcp/2222", "/ip6/::/tcp/2222"]
```
### Configure the second Lotus node

1. Create a new repo folder for the second node instance, such as `/.lotus-2`.
2. In a new terminal session, set the Lotus path for the second Lotus node with `LOTUS_PATH=/home/username/.lotus-2`.
3. Initialize the new node by importing a [lightweight snapshot](https://lotus.filecoin.io/lotus/manage/chain-management/#lightweight-snapshot) and wait until it has fully synced.
4. Stop the second Lotus node and edit the `/.lotus-2/config.toml` file, changing `[API] ListenAddress` and `[Libp2p] ListenAddresses`:
```toml
[API]
ListenAddress = "/ip4/127.0.0.1/tcp/5678/http"
```
```toml
[Libp2p]
ListenAddresses = ["/ip4/0.0.0.0/tcp/3333", "/ip6/::/tcp/3333"]
```
5. Restart the second node and import the Lotus wallet keys from the original node into the second node.
### Configure the third Lotus node

1. Create a new repo folder for the third node instance, such as `/.lotus-3`.
2. In a new terminal session, set the Lotus path for the third Lotus node with `LOTUS_PATH=/home/username/.lotus-3`.
3. Initialize the new node by importing a [lightweight snapshot](https://lotus.filecoin.io/lotus/manage/chain-management/#lightweight-snapshot) and wait until it has fully synced.
4. Stop the third Lotus node and edit the `/.lotus-3/config.toml` file, changing `[API] ListenAddress` and `[Libp2p] ListenAddresses`:
```toml
[API]
ListenAddress = "/ip4/127.0.0.1/tcp/6789/http"
```
```toml
[Libp2p]
ListenAddresses = ["/ip4/0.0.0.0/tcp/4444", "/ip6/::/tcp/4444"]
```
5. Restart the third node and import the Lotus wallet keys from the original node into the third node.
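The wallet-key migration mentioned in step 5 can be sketched as below. This is an outline only, not a verified procedure: `<address>` is a placeholder for an address shown by `lotus wallet list`, the paths are the example repo folders from above, and the exact export/import behavior should be confirmed against your Lotus version.

```shell
# Sketch only -- the <address> placeholder makes this non-runnable as-is.
# On the original node (default LOTUS_PATH), export the key material:
lotus wallet export <address> > wallet.key

# Point the CLI at each new repo and import the key there:
LOTUS_PATH=/home/username/.lotus-2 lotus wallet import wallet.key
LOTUS_PATH=/home/username/.lotus-3 lotus wallet import wallet.key
```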
### Configuring Raft consensus / redundant chain nodes

1. There is a new section in the Lotus node's `config.toml` file called `[Cluster]`. If you don't see this section in your own `config.toml`, run `lotus config default` and copy the new section across.
2. While all three nodes and your miner are running, configure the `config.toml` for all three nodes as below. You can get the multiaddress for your nodes from the output of `lotus net listen` on each of the three daemons:
```toml
[Cluster]
ClusterModeEnabled = true
InitPeersetMultiAddr = ["/ip4/127.0.0.1/tcp/2222/p2p/12D3KooWHVawzGL5SG58rS1Ti8m3G8fA9NwEWkfnz1AcRLWq1deF","/ip4/127.0.0.1/tcp/3333/p2p/12D3KooWB2ikW3gvaQiwfdnD8HrFAqBd2Y54gdykLTFybUQsYrBG","/ip4/127.0.0.1/tcp/4444/p2p/12D3KooWHxNgWfmiJGf6sFXbjQhnBHudsXGz9WAuZB1H4LLwxx7V"]
```
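Each `InitPeersetMultiAddr` entry is the node's libp2p listen address with `/p2p/<peer-id>` appended. A minimal sketch of how one entry is assembled; the peer ID here is the sample value from the block above, and on a real cluster it comes from that node's `lotus net listen` output:

```shell
#!/bin/sh
# Assemble one InitPeersetMultiAddr entry from host, libp2p port, and peer ID.
host="127.0.0.1"
port="2222"    # the [Libp2p] port of the first node
peer_id="12D3KooWHVawzGL5SG58rS1Ti8m3G8fA9NwEWkfnz1AcRLWq1deF"    # sample value

entry="/ip4/${host}/tcp/${port}/p2p/${peer_id}"
echo "$entry"
# → /ip4/127.0.0.1/tcp/2222/p2p/12D3KooWHVawzGL5SG58rS1Ti8m3G8fA9NwEWkfnz1AcRLWq1deF
```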
3. On the `lotus-miner`, unset any `LOTUS_PATH` environment variables and add the full-node API info for all three daemons: `export FULLNODE_API_INFO=<node0_info>,<node1_info>,<node2_info>`. You can get the API token for each node with `lotus auth api-info --perm admin`. The format for each node's info is `<api_token>:/ip4/<lotus_daemon_ip>/tcp/<lotus_daemon_port>/http`, for example:
```
FULLNODE_API_INFO=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZWFkIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.T_meWfWV-F_pX19EPZ1p0uLaRmX3kpE_KFE7nXx9ENs:/ip4/127.0.0.1/tcp/4567/http,eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZWFkIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.lIygxCSIqdSeVvN73aVIme9mRdjOunFsn5eb8K8Q5R8:/ip4/127.0.0.1/tcp/5678/http,eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJBbGxvdyI6WyJyZWFkIiwid3JpdGUiLCJzaWduIiwiYWRtaW4iXX0.arVqeW93VujWC5JlIoumfbRFiHk8BtROp9rsdZPEaVk:/ip4/127.0.0.1/tcp/6789/http
```
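Since the three-entry string is unwieldy, it can help to build `FULLNODE_API_INFO` up per node before exporting it. A sketch with placeholder tokens; the real JWTs come from `lotus auth api-info --perm admin` on each node:

```shell
#!/bin/sh
# token0/token1/token2 are placeholders -- substitute the JWTs printed by
# `lotus auth api-info --perm admin` on each node.
node0="token0:/ip4/127.0.0.1/tcp/4567/http"
node1="token1:/ip4/127.0.0.1/tcp/5678/http"
node2="token2:/ip4/127.0.0.1/tcp/6789/http"

# The miner reads all three entries, comma-separated.
export FULLNODE_API_INFO="${node0},${node1},${node2}"
echo "$FULLNODE_API_INFO"
# → token0:/ip4/127.0.0.1/tcp/4567/http,token1:/ip4/127.0.0.1/tcp/5678/http,token2:/ip4/127.0.0.1/tcp/6789/http
```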
4. Restart all daemon and miner instances: stop the `lotus-miner` first, followed by the three nodes.
5. Start all three nodes, followed by the `lotus-miner`.
6. You are now running raft consensus through node clustering.
7. You can check that the cluster is running, and determine the current node leader, by running `./lotus-shed rpc --version v1 RaftLeader`.
### Cluster config options

You can tune the cluster to your own requirements in the `config.toml` of the three nodes by editing the `[Cluster]` section.
```toml
[Cluster]
# EXPERIMENTAL. Config to enable node cluster with raft consensus
#
# type: bool
# env var: LOTUS_CLUSTER_CLUSTERMODEENABLED
#ClusterModeEnabled = false

# A folder to store Raft's data.
#
# type: string
# env var: LOTUS_CLUSTER_DATAFOLDER
#DataFolder = ""

# InitPeersetMultiAddr provides the list of initial cluster peers for new Raft
# peers (with no prior state). It is ignored when Raft was already
# initialized or when starting in staging mode.
#
# type: []string
# env var: LOTUS_CLUSTER_INITPEERSETMULTIADDR
#InitPeersetMultiAddr = []

# LeaderTimeout specifies how long to wait for a leader before
# failing an operation.
#
# type: Duration
# env var: LOTUS_CLUSTER_WAITFORLEADERTIMEOUT
#WaitForLeaderTimeout = "15s"

# NetworkTimeout specifies how long before a Raft network
# operation is timed out.
#
# type: Duration
# env var: LOTUS_CLUSTER_NETWORKTIMEOUT
#NetworkTimeout = "1m40s"

# CommitRetries specifies how many times we retry a failed commit until
# we give up.
#
# type: int
# env var: LOTUS_CLUSTER_COMMITRETRIES
#CommitRetries = 1

# How long to wait between retries.
#
# type: Duration
# env var: LOTUS_CLUSTER_COMMITRETRYDELAY
#CommitRetryDelay = "200ms"

# BackupsRotate specifies the maximum number of Raft's DataFolder
# copies that we keep as backups (renaming) after cleanup.
#
# type: int
# env var: LOTUS_CLUSTER_BACKUPSROTATE
#BackupsRotate = 6

# Tracing enables propagation of contexts across binary boundaries.
#
# type: bool
# env var: LOTUS_CLUSTER_TRACING
#Tracing = false
```

{{< alert icon="warning" >}}
**DEPRECATED FEATURE**: Lotus node clusters were an experimental feature that was removed from the codebase in version 1.27.0. The information above is kept for historical reference only.
{{< /alert >}}
