low latency light overlays: k-peer-disjoint updates, scaling the p2p with bounded peer-updates #193

emhane commented Jun 7, 2023

Duplicate of ethereum/trin#750; if anybody wants to work on this, that's great. Before digging into it, it may be worth taking a second look at rust-libp2p's kbuckets, since they also build on the same Parity code, to see whether any parts previously left out are now relevant to add.

Last year I started working on this in enr-bank-discv5, while researching Felix's topics spec. The topics spec, like Trin, uses KBucket overlays: basically subsets of the discv5 network stored with respect to a different order, or in other words, different partial views of the same network.
When the overlays overlap, ENR or connectivity updates that happen on one layer need to be visible on all overlays for the overlays to scale efficiently. For example, we can optimise memory usage and latency when running more than one portal, like state and history, in the same instance as different top-level tasks; but even running just one portal on top of discv5 very probably means a big overlap of peers between the discv5 DHT and the portal DHT.
Currently, when a peer goes offline or updates their ENR, each overlay has to find this out anew by trial and error. This latency - 2 RTT for a PING-PONG ENR update, and 2 seconds for the default discv5 query peer timeout upon unresponsiveness - can be reduced to once per peer: a constant bound, k = 1, on per-peer updates! Right now the per-peer updates are bounded by the number of overlays. That takes a toll when we want to increase query parallelism and add more virtualisation layers - portals of different services (not random) - on top of the discv5 DHT, which is secured by its random unstructured structure. So I suggest that, in terms of DHT entries, we flatten all the layers of DHTs out into one big pancake of k-bucket entries, with links from each entry to the DHT layers that include the entry, as opposed to copies of the entry.
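To make the pancake idea concrete, here is a minimal standalone sketch - the names EntryBank and Entry, and the overlay labels, are made up for illustration, not an API proposal:

use std::collections::{HashMap, HashSet};
use std::sync::{Arc, RwLock};

type NodeId = [u8; 32];

// Stand-in for the real Enr type, just for this sketch.
struct Entry {
    enr: String,
    connected: bool,
}

// The "pancake": every peer's entry exists exactly once, and each overlay
// (discv5, state portal, history portal, ...) holds links into the shared
// bank instead of its own copy.
struct EntryBank {
    entries: HashMap<NodeId, Arc<RwLock<Entry>>>,
    // Which DHT layers include each entry - links, not copies.
    layers: HashMap<NodeId, HashSet<&'static str>>,
}

impl EntryBank {
    // A connectivity update is applied once and is immediately visible on
    // every overlay that links to the entry: k = 1 update per peer.
    fn mark_disconnected(&self, id: &NodeId) {
        if let Some(entry) = self.entries.get(id) {
            entry.write().unwrap().connected = false;
        }
    }
}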

I have 2 different approaches in mind:

  • If we don't have a memory constraint, we can add some Discv5Event variants to know when a peer's session is disconnected
SessionDisconnected(NodeId),

and when a peer's ENR is updated

EnrUpdated { enr: Enr, replaced: Option<Enr> },

We emit these events from service.rs here and here respectively, and also here, after first separating the logic into connection update, value update, and both updated at once (emitting 2 events).

All Discv5Events right now clone the ENR to pass it up thread-safely (moving it) to the app running discv5, which then stores the ENR in the app's k-buckets - if discv5 had space for the ENR in its own k-buckets, it has also stored the ENR before passing it up.
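For illustration, this is roughly how an overlay task could consume the proposed events via discv5's existing event_stream() channel - SessionDisconnected and EnrUpdated are the new variants suggested above (not in discv5 today), and overlay_buckets with its methods is a made-up handle:

let mut event_stream = discv5.event_stream().await.unwrap();
tokio::spawn(async move {
    while let Some(event) = event_stream.recv().await {
        match event {
            // Proposed variant: evict or demote the peer on this overlay
            // right away, instead of waiting out its own 2 s query timeout.
            Discv5Event::SessionDisconnected(node_id) => {
                overlay_buckets.mark_disconnected(&node_id);
            }
            // Proposed variant: refresh the stored ENR without spending
            // 2 RTT on a PING-PONG round trip of our own.
            Discv5Event::EnrUpdated { enr, replaced: _ } => {
                overlay_buckets.update_enr(enr);
            }
            _ => {}
        }
    }
});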

  • If we have a memory constraint: we use thread-safe references and locks. ENRs take up to 300 bytes in memory. The following code snippets originate from kbucket/bucket.rs. How we lock the Node depends on whether we want to update the connection status and the ENR independently. Locking the whole Node:
pub struct KBucket<TNodeId, TVal: Eq> {
    /// The nodes contained in the bucket.
    nodes: ArrayVec<Arc<RwLock<Node<TNodeId, TVal>>>, MAX_NODES_PER_BUCKET>,
    // ...
}

or, locking the connection status and the ENR independently:

pub struct Node<TNodeId, TVal: Eq> {
    /// The key of the node, identifying the peer.
    pub key: Key<TNodeId>,
    /// The associated value.
    pub value: Arc<RwLock<TVal>>,
    /// The status of the node.
    pub status: Arc<RwLock<NodeStatus>>,
}

Or, if we can't make up our minds, given our current understanding of how we will use overlapping DHT overlays in the future, we make the whole node generic (definitely the best idea!), with traits, so that little of the existing code needs to change (i.e. calls to methods of the NodeStatus and Node types already in the code will still work):

pub trait AsNodeStatus {...}

pub trait AsNode<TNodeId, TVal: Eq, TStatus: AsNodeStatus> {...}

pub struct KBucket<TNode: AsNode<...>> {
    /// The nodes contained in the bucket.
    nodes: ArrayVec<TNode, MAX_NODES_PER_BUCKET>,
    // ...
}
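One possible shape for those traits - the method names here are my assumptions, chosen to mirror the fields of the existing Node and NodeStatus types (Key is the existing kbucket key type) so that current call sites keep compiling:

pub trait AsNodeStatus: Clone {
    /// Whether the peer is currently considered connected.
    fn is_connected(&self) -> bool;
}

pub trait AsNode<TNodeId, TVal: Eq, TStatus: AsNodeStatus> {
    /// The key of the node, identifying the peer.
    fn key(&self) -> &Key<TNodeId>;
    /// Read the associated value; an implementation backed by
    /// Arc<RwLock<TVal>> would hand out a lock guard instead.
    fn value(&self) -> &TVal;
    /// Read and update the connection status.
    fn status(&self) -> TStatus;
    fn set_status(&mut self, status: TStatus);
}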

This way each entry exists only once in memory, but can be mutated by any overlay. Any overlay can mutate any entry with just a read lock on the shared collection of all entries (the pancake), and that change happens on all overlays, including the discv5 DHT: check out my playground example, and see the sketch below. This works well with a RwLock, which allows multiple concurrent reads. Without this lock on each entry, disconnecting a peer in discv5 based on some portal-specific criterion requires waiting on a write lock here.
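A minimal standalone sketch of that locking pattern, with a stand-in Status type instead of the crate's NodeStatus:

use std::sync::{Arc, RwLock};

// Stand-in for the crate's NodeStatus, just for this example.
#[derive(Clone, Copy, Debug, PartialEq)]
enum Status {
    Connected,
    Disconnected,
}

fn disconnect_peer(pancake: &RwLock<Vec<Arc<RwLock<Status>>>>, index: usize) {
    // Any number of overlays can hold the outer read lock concurrently...
    let entries = pancake.read().unwrap();
    // ...and exclusive access is needed only on the one entry being changed,
    // so the update becomes visible to every overlay that links to it.
    *entries[index].write().unwrap() = Status::Disconnected;
}

fn main() {
    let pancake = RwLock::new(vec![Arc::new(RwLock::new(Status::Connected))]);
    disconnect_peer(&pancake, 0);
    assert_eq!(*pancake.read().unwrap()[0].read().unwrap(), Status::Disconnected);
}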

Why would we have a memory constraint? The more efficiently portals, or any other discv5 DHT overlays, handle memory, the larger we can make the k-buckets, which in turn means we gain more from increasing query parallelisation to tackle latency. A memory-efficient way to increase k-bucket size is to handle this interval in at least two parts when allocating memory for nodes: for example, all buckets up to log2distance 200 initialise with the current MAX_NODES_PER_BUCKET = 16, ArrayVec<Node<TNodeId, TVal>, MAX_NODES_PER_BUCKET>, and all buckets from log2distance 200 to 256 initialise allocating for 3 times as many nodes, ArrayVec<Node<TNodeId, TVal>, MAX_NODES_PER_BUCKET * 3>. Tbh, the odds that even one single peer will land in the k-buckets at approx. log2distance 1-100 are so low that imo we should change the data structure from ArrayVec to one that doesn't allocate memory for that interval initially, but dynamically allows it to allocate up to a MAX_NODES_PER_BUCKET limit. If no such data structure exists that gives the option of either fixed or dynamic memory allocation, we could use an enum

enum Nodes<T> {
    Fixed(ArrayVec<T, MAX_NODES_PER_BUCKET>),
    Dynamic(Vec<T>),
}

and create a trait AsArrayVec with all the methods of ArrayVec that kbucket/bucket.rs calls, then implement that trait for the enum Nodes, to allow easy compatibility with Parity's existing k-buckets code (i.e. without changing the existing method calls), as sketched below.
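A hedged sketch of that, with two representative methods forwarded - real code would forward every ArrayVec method that kbucket/bucket.rs actually calls:

use arrayvec::ArrayVec;

const MAX_NODES_PER_BUCKET: usize = 16;

// The Nodes enum from above, repeated so this sketch stands alone.
enum Nodes<T> {
    Fixed(ArrayVec<T, MAX_NODES_PER_BUCKET>),
    Dynamic(Vec<T>),
}

trait AsArrayVec<T> {
    fn len(&self) -> usize;
    fn try_push(&mut self, item: T) -> Result<(), T>;
}

impl<T> AsArrayVec<T> for Nodes<T> {
    fn len(&self) -> usize {
        match self {
            Nodes::Fixed(v) => v.len(),
            Nodes::Dynamic(v) => v.len(),
        }
    }

    fn try_push(&mut self, item: T) -> Result<(), T> {
        match self {
            // Fixed variant: capacity is baked in at compile time.
            Nodes::Fixed(v) => v.try_push(item).map_err(|e| e.element()),
            // Dynamic variant: allocates lazily but enforces the same cap.
            Nodes::Dynamic(v) => {
                if v.len() < MAX_NODES_PER_BUCKET {
                    v.push(item);
                    Ok(())
                } else {
                    Err(item)
                }
            }
        }
    }
}

Buckets in the dense far range (e.g. above log2distance 200) would then initialise as Fixed, while the practically empty low-distance buckets initialise as Dynamic(Vec::new()), allocating nothing until a peer actually lands there.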

(A related PR would be to remove this event, which is not used anywhere in the code, as this method and its return type are sync.)

@emhane emhane changed the title low latency light overlays: k-overlay-disjoint updates, scaling the p2p with bounded peer-updates low latency light overlays: k-peer-disjoint updates, scaling the p2p with bounded peer-updates Jul 8, 2023