PBM-1118 Added support of custom shard names for remapping #121

Merged 2 commits on Jul 13, 2023
6 changes: 3 additions & 3 deletions docs/features/selective-backup.md
@@ -28,20 +28,20 @@ During the restore, the reverse process occurs:
* A `pbm-agent` on each shard restores only the specified databases/collections and replays the oplog that relates only to the specified namespaces. The operations for other namespaces are ignored.
* On the config server replica set, the `pbm-agent` restores the router configuration only for the specified sharded collections. The router configuration for other databases, collections and chunks remains intact.

The restore for sharded timeseries collections is not supported.
The restore for sharded time series collections is not supported.

Note that selective backups and restores operate only with data and router configuration. The cluster configuration and topology-related settings are ignored. Therefore, we recommend restoring the databases/collections in the same environment.
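
To make the workflow above concrete, a selective backup and restore could look like the following sketch. The namespace `mydb.payments` and the backup name are placeholders, and the `--ns` flag is the selective backup/restore option; adjust both to your environment.

```
# Back up a single namespace (database.collection) only.
pbm backup --ns=mydb.payments

# Restore only that namespace from the chosen snapshot; data and router
# configuration for all other namespaces are left untouched.
pbm restore 2023-07-13T10:00:00Z --ns=mydb.payments
```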

### Implementation specifics

During the selective restore, the primary shard for a database is set to the state it had during the backup. For example, the primary shard for the database "Staff" during backup was A. After you restore the "Staff" database, the primary shard will be set to A even if you moved the primary from A to B before the restore. All non sharded collections will be restored on A; however, they will not be deleted from B. You must take needed actions (cleanup or move the primary back to B) to maintain them.
During the selective restore, the primary shard for a database is set to the state it had during the backup. For example, the primary shard for the database "Staff" during backup was A. After you restore the "Staff" database, the primary shard will be set to A even if you moved the primary from A to B before the restore. All non-sharded collections will be restored on A; however, they will not be deleted from B. You must take needed actions (cleanup or move the primary back to B) to maintain them.
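
For the "Staff" example above, one way to move the primary back to shard B after the restore is sketched below. The connection string and shard name are placeholders, and this is the standard MongoDB `movePrimary` admin command rather than a PBM feature.

```
# Connect to a mongos router after the selective restore completes and
# move the primary shard for the "Staff" database back to shard B.
mongosh "mongodb://mongos.example.net:27017" --eval \
  'db.adminCommand({ movePrimary: "Staff", to: "B" })'
```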


## Known limitations of selective backups and restores

1. Only **logical** backups and restores are supported.
2. Selective backups and restores are supported in sharded clusters for non-sharded collections starting with version 2.0.3. Sharded collections are supported starting with version 2.1.0.
3. Sharded timeseries collections are not supported.
3. Sharded time series collections are not supported.
4. Multiple namespaces are not yet supported for selective backups. However, you can specify several namespaces for the restore (e.g., restore several collections of a database); see the sketch after this list.
5. Multi-collection transactions are not yet supported for selective restore.
6. System collections in ``admin``, ``config``, and ``local`` databases cannot be backed up and restored selectively. You must make a full backup and restore to include them.
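
As mentioned in item 4, a restore can cover several namespaces at once. A minimal sketch, assuming the comma-separated `--ns` syntax and placeholder backup and collection names:

```
# Restore two collections of the same database from one logical backup;
# namespaces that are not listed are not touched.
pbm restore 2023-07-13T10:00:00Z --ns=mydb.orders,mydb.customers
```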
4 changes: 3 additions & 1 deletion docs/usage/restore.md
@@ -265,6 +265,7 @@ To restore a backup, use the [`pbm restore`](../reference/pbm-commands.md#pbm-re
```

This is the expected behavior of periodic checks upon database start. During the restore, the `config.system.sessions` collection is dropped, but Percona Server for MongoDB eventually recreates it. This is a normal procedure; no action is required on your end.

2. Resync the backup list from the storage.
3. Start the balancer and the `mongos` node.
4. As the general recommendation, make a new base backup to renew the starting point for subsequent incremental backups.
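
A rough sketch of steps 2 and 3, assuming `pbm config --force-resync` as the resync command; host names, the config replica set name, and the log path are placeholders:

```
# Step 2: resync the backup list from the remote storage.
pbm config --force-resync

# Step 3: start the mongos router again, then re-enable the balancer through it.
mongos --configdb cfgRS/cfg0.example.net:27019 --fork --logpath /var/log/mongos.log
mongosh "mongodb://localhost:27017" --eval 'sh.startBalancer()'
```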
@@ -279,7 +280,8 @@ To restore a backup from one environment to another, ensure the following:

## Restoring into a cluster / replica set with a different name

Starting with version 1.8.0, you can restore *logical backups* into a new environment that has the same or more number of shards and these shards have different replica set names.
Starting with version 1.8.0, you can restore **logical backups** into a new environment that has the same or a greater number of shards, even if those shards have different replica set names.
Starting with version 2.2.0, you can restore environments that have [custom shard names](https://www.mongodb.com/docs/manual/reference/command/addShard/#mongodb-dbcommand-dbcmd.addShard).

Starting with version 2.2.0, you can restore *physical/incremental* backups into a new environment with different replica set names. Note that **the number of shards must be the same** as in the environment where you made the backup.
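
Putting the notes above together, a remapped restore into a target cluster whose shards were added with custom names could look like the sketch below. The replica set and shard names are invented, and the `--replset-remapping` flag is assumed to take the `to_name=from_name` format; verify the exact syntax for your PBM version.

```
# On the target cluster, the shard was added with a custom name, e.g.:
#   db.adminCommand({ addShard: "rsNew1/node1.example.net:27018", name: "shard-eu" })

# Restore a logical backup, mapping the replica set name used in the backup
# (rsOld1) to the replica set name of the target environment (rsNew1).
pbm restore 2023-07-13T10:00:00Z --replset-remapping="rsNew1=rsOld1"
```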

2 changes: 1 addition & 1 deletion styles/Vocab/Percona/accept.txt
@@ -14,7 +14,7 @@ Percona XtraDB Cluster
Percona XtraBackup
Percona Toolkit
Sysbench
Ooplog
[Oo]plog
PITR
pitr
namespace