Replies: 11 comments 2 replies
-
Make some changes on top of frp?
-
May I ask what proxy solutions are currently available for research and modification?
-
Update progress: Proxy is essentially a load balancing problem, and eBPF is a suitable implementation. In addition, it is more suitable to use libbpf in C++ rather than cilium/ebpf in Go, as there are significant differences between the two.
-
WebRTC needs to provide an SDP, but the RS (real servers) behind the LB are not exposed, so it seems there is no way to solve this.
-
eBPF is too complicated. We currently implement HLS like this:
Publish: pub -> lb1 -> srs -> pub hook -> go server -> save pub pod IP to MySQL
Play: play -> lb2 -> go server -> get pod IP -> get m3u8/ts from SRS pod -> return
Disadvantage: dependency on the center; depending on the business, adding a Go server makes it a somewhat complex system. Advantage: simple, sufficient for short-term low traffic; add a local cache of pod IPs.
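A rough sketch of the "pub hook -> go server -> save pub pod IP to MySQL" step described above, assuming the SRS HTTP callback posts app/stream fields and that the hook request's remote address is the SRS pod; the endpoint path, DSN, and table name are made up for illustration:

```go
package main

import (
	"database/sql"
	"encoding/json"
	"log"
	"net"
	"net/http"

	_ "github.com/go-sql-driver/mysql" // MySQL driver registered for database/sql
)

// srsHook is the subset of the SRS HTTP-callback body used here.
type srsHook struct {
	App    string `json:"app"`
	Stream string `json:"stream"`
}

func main() {
	// DSN and table name are placeholders.
	db, err := sql.Open("mysql", "user:pass@tcp(mysql:3306)/live")
	if err != nil {
		log.Fatal(err)
	}

	// SRS posts on_publish here; the request's remote address is the SRS pod that
	// accepted the publish, so we persist stream -> pod IP for the play path.
	http.HandleFunc("/hooks/on_publish", func(w http.ResponseWriter, r *http.Request) {
		var h srsHook
		if err := json.NewDecoder(r.Body).Decode(&h); err != nil {
			http.Error(w, err.Error(), http.StatusBadRequest)
			return
		}
		podIP, _, _ := net.SplitHostPort(r.RemoteAddr)
		if _, err := db.Exec(
			"REPLACE INTO stream_pod (app, stream, pod_ip) VALUES (?, ?, ?)",
			h.App, h.Stream, podIP,
		); err != nil {
			http.Error(w, err.Error(), http.StatusInternalServerError)
			return
		}
		w.Write([]byte("0")) // SRS treats a "0" body as success
	})

	log.Fatal(http.ListenAndServe(":8080", nil))
}
```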
-
For the business scenario of converting RTMP streams to WebRTC streams, since the origin server itself has state, it cannot be deployed on K8s. Currently, our business can only be deployed directly on physical machines. Can we consider solving the state issue of the origin server by adding this proxy layer? Since the proxy layer is stateless, it can be easily deployed through a Deployment, exposing a unified load balancer and domain name externally. The UDP load balancer does not modify the source IP and source port of the UDP packet (https://exampleloadbalancer.com/nlbudp_detail.html), so the proxy can use this tuple to determine which backend SRS the packet should be forwarded to. @winlinvip, can it be used in this way?
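A minimal sketch of this idea, assuming the UDP load balancer preserves the client's source IP and port so the proxy can pin each tuple to one backend SRS; the backend addresses and ports are placeholders:

```go
package main

import (
	"hash/fnv"
	"log"
	"net"
	"sync"
)

// Backend SRS pods; in practice these would come from service discovery.
var backends = []string{"10.0.0.11:8000", "10.0.0.12:8000"}

func hashKey(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// pump relays packets from the backend back to the original client.
func pump(up *net.UDPConn, down *net.UDPConn, client *net.UDPAddr) {
	buf := make([]byte, 1500)
	for {
		n, err := up.Read(buf)
		if err != nil {
			return
		}
		down.WriteToUDP(buf[:n], client)
	}
}

func main() {
	laddr, _ := net.ResolveUDPAddr("udp", ":8000")
	conn, err := net.ListenUDP("udp", laddr)
	if err != nil {
		log.Fatal(err)
	}

	var mu sync.Mutex
	sessions := map[string]*net.UDPConn{} // "clientIP:port" -> connection to its backend

	buf := make([]byte, 1500)
	for {
		n, client, err := conn.ReadFromUDP(buf)
		if err != nil {
			continue
		}
		key := client.String() // the source tuple, preserved by the UDP LB
		mu.Lock()
		up := sessions[key]
		if up == nil {
			// First packet from this tuple: pick a backend; hashing the tuple keeps it sticky.
			raddr, _ := net.ResolveUDPAddr("udp", backends[int(hashKey(key))%len(backends)])
			up, err = net.DialUDP("udp", nil, raddr)
			if err != nil {
				mu.Unlock()
				continue
			}
			sessions[key] = up
			go pump(up, conn, client)
		}
		mu.Unlock()
		up.Write(buf[:n]) // forward the client packet to its pinned backend SRS
	}
}
```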
-
@qiantaossx The proxy is designed to recognize the content of the request, determine which stream the client is requesting, and then forward it to the backend server. Therefore, it can be used in front of the origin servers of all protocols.

Additionally, regarding the issue of the origin server having state: it is actually not possible to achieve complete statelessness, but the problem can be avoided through retrying. For example, if the origin server goes down, retrying the push stream will select a new origin server, and retrying the pull stream can also locate this new origin server. This way, both the origin server and the proxy can be deployed without state, making operation and maintenance very simple.

PS: The origin server cluster is actually stateless as well. It is just that the current configuration requires a single address, so the origin cluster is typically considered stateful. However, if we improve it to obtain the other origin server addresses from an API, it can also be considered stateless.

PS: The preliminary plan is to implement the proxy in SRS 6.0, and development is expected to start in early 2023. If you are interested, you are welcome to participate.
-
Recently, I have been looking at livekit, whose implementation can be used as a reference. The pod finds its own external address (through NAT) and then provides this address in the SDP, thus enabling distributed deployment of the WebRTC origin servers in the cluster.
-
We have confirmed that we will implement a Proxy within the SRS Stack, and we have also experimented with SRT (#3869) and the architecture, which works within the Proxy just like the other protocols. The reason for not implementing it within SRS is that the Proxy will definitely use Redis to achieve statelessness, and the Proxy does not require strong streaming-protocol processing capabilities; it mostly forwards UDP or TCP packets, so Go is the most suitable language for this purpose. The SRS Stack is already implemented in Go, so enhancing it with a Proxy also covers scenarios that require scaling out a single SRS Stack instance.
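As a rough illustration of how a stateless proxy might keep the stream-to-origin mapping in Redis; the key layout, TTL, and addresses here are assumptions, not the actual SRS Proxy design:

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/redis/go-redis/v9"
)

// Registry keeps the stream -> origin mapping in Redis, so any proxy instance
// can answer "which origin owns this stream?" without local state.
type Registry struct{ rdb *redis.Client }

// Resolve returns the origin that owns the stream. If nobody owns it yet, it
// atomically claims it for the candidate origin (first publisher wins).
func (r *Registry) Resolve(ctx context.Context, stream, candidate string) (string, error) {
	key := "proxy:stream:" + stream // key layout is an assumption
	ok, err := r.rdb.SetNX(ctx, key, candidate, 30*time.Second).Result()
	if err != nil {
		return "", err
	}
	if ok {
		return candidate, nil // we claimed the stream for our candidate origin
	}
	return r.rdb.Get(ctx, key).Result() // another proxy claimed it; follow that origin
}

func main() {
	reg := &Registry{rdb: redis.NewClient(&redis.Options{Addr: "redis:6379"})}
	origin, err := reg.Resolve(context.Background(), "live/livestream", "origin-1:1935")
	fmt.Println(origin, err)
}
```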
-
Today, I looked at the cluster architecture of Redis, which is divided into several types:

Single Node Redis
Standalone architecture, refer to the link. Start Redis:
Test and verify:
The architecture used in Oryx is a standalone Redis setup, because Oryx itself is designed as a single-node architecture.

Master Slave Redis
To expand read capacity, a configuration of one Master and multiple Slaves can be used, as referenced in the link. First, start the Master:
Obtain the IP of the Master, then start the Slave service:
Test and verify that data is written to the Master and read from the Slave:
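The original shell commands are not reproduced here; as a client-side illustration of the same write-to-master, read-from-slave split, using the go-redis client with placeholder container addresses:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	master := redis.NewClient(&redis.Options{Addr: "redis-master:6379"})
	replica := redis.NewClient(&redis.Options{Addr: "redis-slave:6379"})

	// Write to the master; replication pushes the value to the slaves.
	if err := master.Set(ctx, "k", "v", 0).Err(); err != nil {
		panic(err)
	}

	// Read from the replica, which serves only read traffic.
	v, err := replica.Get(ctx, "k").Result()
	if err != nil {
		panic(err)
	}
	fmt.Println(v)
}
```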
This architecture merely enhances read capacity and is suitable for scenarios with little writing but a substantial amount of reading.

Sentinel for Master Slave Redis
When the Master goes down, Sentinel can promote a Slave to Master, achieving automatic failover. For reference, see the link. First, start the Master and Slave services:
Next, create a Sentinel:
Next, create another Sentinel:
Retrieve the master information:
Turn off the Master:
View the Sentinel logs:
You are still able to access Redis, because a slave has been switched to master:
Restart the master container, and notice that it has become a Slave.
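A small client-side sketch of what the Sentinel setup above buys you: the go-redis failover client asks the Sentinels for the current master, so the promotion is transparent to the application. Addresses and the master name assume a typical setup:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	// The failover client asks the Sentinels who the current master of "mymaster" is,
	// so a promoted slave is picked up automatically after failover.
	rdb := redis.NewFailoverClient(&redis.FailoverOptions{
		MasterName:    "mymaster",
		SentinelAddrs: []string{"sentinel-1:26379", "sentinel-2:26379"},
	})

	if err := rdb.Set(ctx, "k", "v", 0).Err(); err != nil {
		panic(err)
	}
	fmt.Println(rdb.Get(ctx, "k").Val())
}
```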
Note: Although a Slave can be promoted to Master, starting a Slave directly without the Master being up will fail.

Twemproxy Proxy for Redis
The server side is sharded by a proxy, with multiple Redis servers connected behind the proxy. A simple example of this architecture is Twemproxy; for reference, see the link. To simplify deployment, Twemproxy is connected directly to the Redis masters. First, create two masters:
Create a Twemproxy image:
Generate a configuration file and start the service:
Verify the outcome:
It can be observed that if the master1 service is stopped, the key can no longer be retrieved.
Thus, the logic of this proxy is to lock onto the target service based on the key k.

Cluster for Redis
Redis supports clustering with a mesh topology where instances are interconnected. For reference, consult the official documentation and this article. For the cluster protocol, refer to this link. For an understanding of the principles behind Redis clustering, refer to the Gossip protocol. Start at least six nodes, with three acting as Master nodes and each having one Slave node as backup:
Check if the service has started properly:
Create a cluster with the first three nodes as Master nodes and the subsequent three nodes as Slave nodes:
View the cluster:
Set up a key-value pair; this requires executing within the container because it returns the container's IP, which is similar to a 302 response:
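As a client-side note, a cluster-aware client such as go-redis follows these MOVED/ASK redirects automatically; a minimal sketch, with node addresses as placeholders for the containers above:

```go
package main

import (
	"context"
	"fmt"

	"github.com/redis/go-redis/v9"
)

func main() {
	ctx := context.Background()
	rdb := redis.NewClusterClient(&redis.ClusterOptions{
		Addrs: []string{"node-1:6379", "node-2:6379", "node-3:6379"},
	})

	// The client hashes the key to a slot, contacts the owning node, and transparently
	// retries on MOVED/ASK redirects, so no manual "follow the 302" step is needed.
	if err := rdb.Set(ctx, "k", "v", 0).Err(); err != nil {
		panic(err)
	}
	fmt.Println(rdb.Get(ctx, "k").Val())
}
```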
Note: When executing with the

Consistent Hashing
To balance keys across a group of servers, you can use a hash algorithm to assign them to servers. To avoid key migration when servers are added or removed, you can expand the base to uint32, making the server positions fixed. To avoid load imbalance, you can add virtual nodes, that is, replicate the same node multiple times. Refer to the provided link for more details.

Hash Slot
Hash Slots assign a range of hash values to each server. For example, in a cluster with 3 nodes, Node A holds hash slots 0 to 5500, Node B holds slots 5501 to 11000, and Node C holds slots 11001 to 16383. Refer to the provided link for more details. Compared with Consistent Hashing, Hash Slots provide a more straightforward way to distribute the load, while Consistent Hashing is more complex to implement. When a node is removed, its data is reassigned to the next node under Consistent Hashing, but with Hash Slots the data can be flexibly reassigned to multiple other nodes. A toy example follows.
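A toy Go illustration of the Hash Slot idea above; real Redis hashes keys with CRC16 into 16384 slots, and CRC32 is used here only as a stand-in:

```go
package main

import (
	"fmt"
	"hash/crc32"
)

const totalSlots = 16384 // Redis Cluster always uses 16384 hash slots

// Slot ranges per node, mirroring the 3-node example: A 0-5500, B 5501-11000, C 11001-16383.
var ranges = []struct {
	node string
	last int
}{
	{"NodeA", 5500},
	{"NodeB", 11000},
	{"NodeC", 16383},
}

// nodeFor hashes the key to a slot and returns the node that owns that slot.
// Real Redis uses CRC16 of the key; CRC32 is used here only as a stand-in.
func nodeFor(key string) string {
	slot := int(crc32.ChecksumIEEE([]byte(key))) % totalSlots
	for _, r := range ranges {
		if slot <= r.last {
			return r.node
		}
	}
	return ranges[len(ranges)-1].node
}

func main() {
	for _, k := range []string{"live/livestream", "k", "user:42"} {
		fmt.Println(k, "->", nodeFor(k))
	}
}
```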
-
Proxy server: #4158
SRS heartbeat register for proxy server: #4171
Document: https://ossrs.io/lts/en-us/docs/v7/doc/origin-cluster
-
Both cluster and proxy work for system load balancing, in short, to serve a large set of connections or clients, but there are some differences.
A cluster works as a whole: a set of servers in a cluster works like one server. So it supports a large number of streams, and each stream supports a lot of connections. For example, a cluster can support 100k streams and 100m viewers, effectively unlimited system capacity.
A proxy works as an agent of the SRS origin servers: it proxies streams to a specific set of origin servers, and always proxies a given stream to the same server. A proxy doesn't extend system capacity, since it forwards every UDP or TCP packet, but it helps the media server with load balancing.
The proxy is designed to make the origin server stateless, to build a cluster from isolated stateless origin servers, and it can also be part of a cluster.
Architecture
The proxy works with SRS origin servers; the stream flow works like this:
LB is a load balancer, such as a K8s Service or a cloud load balancer. Generally, the LB binds a public internet IP for clients to connect to.
The proxy is stateless, so you're able to deploy a set of proxy servers, for example, using a K8s Deployment to deploy a set of proxies.
When a client requests a stream from the proxy server, the proxy looks up the stream in its infrastructure and forwards it to the specific origin server, so that each stream is served by a specific backend server. For a fresh stream, the proxy randomly picks one origin server the first time, as sketched below.
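A minimal sketch of that routing rule, with the stream table kept in memory purely for illustration; the real proxy would back it with its service discovery or Redis:

```go
package main

import (
	"fmt"
	"math/rand"
	"sync"
)

// Router pins each stream to one origin server.
type Router struct {
	mu      sync.Mutex
	origins []string          // known origin servers, e.g. from service discovery
	streams map[string]string // stream URL -> origin address
}

func NewRouter(origins []string) *Router {
	return &Router{origins: origins, streams: map[string]string{}}
}

// Route returns the origin that should serve the stream: an existing stream keeps
// its origin, a fresh stream gets a randomly picked one.
func (r *Router) Route(stream string) string {
	r.mu.Lock()
	defer r.mu.Unlock()
	if origin, ok := r.streams[stream]; ok {
		return origin // every client of this stream lands on the same backend
	}
	origin := r.origins[rand.Intn(len(r.origins))]
	r.streams[stream] = origin
	return origin
}

func main() {
	r := NewRouter([]string{"origin-1:1935", "origin-2:1935"})
	// Both calls return the same origin, because the stream was pinned on first use.
	fmt.Println(r.Route("live/livestream"), r.Route("live/livestream"))
}
```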
Service Discovery
Regarding service discovery, it involves two aspects: how the proxy discovers the origin servers and how the proxy obtains stream information from other proxy servers.
There are two types of service discovery mechanisms: one that caters to simple scenarios like demos or small clusters, and another that provides stability for large-scale online products.
In this architecture, both the proxy and origin servers, as well as the service manager, utilize the same HTTP API for service discovery, and the configuration file shares a consistent format across all components.
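A hedged sketch of the origin-side heartbeat idea: the origin periodically registers itself with the proxy over HTTP. The endpoint, payload, and interval below are illustrative only and not the actual API from #4171:

```go
package main

import (
	"bytes"
	"encoding/json"
	"log"
	"net/http"
	"time"
)

// register is the hypothetical heartbeat payload an origin sends to the proxy.
type register struct {
	IP   string `json:"ip"`
	Port int    `json:"port"`
}

func main() {
	body, _ := json.Marshal(register{IP: "10.0.0.21", Port: 1935})
	for range time.Tick(10 * time.Second) {
		resp, err := http.Post("http://proxy:8080/register", "application/json",
			bytes.NewReader(body))
		if err != nil {
			log.Println("register failed, will retry on next tick:", err)
			continue
		}
		resp.Body.Close() // the proxy would mark this origin alive until heartbeats stop
	}
}
```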
Proxy of Proxies
Edge is actually another type of proxy, but with the origin IP configured, while the origin pod IP is variable, not fixed. The proxy doesn't need the origin IP configured, because it relies on Redis or another service discovery mechanism, so the proxy can also serve as the upstream server for edge. In this situation, the proxy acts like a K8s Service of origin servers for the edge servers.
With this architecture, we're able to support a huge set of streams and viewers without the origin cluster, which is stateful and complex. Please note that edge only works for live streaming, such as RTMP/FLV. Other edge types also work well; for example, an HLS edge cluster works with the proxy, from which NGINX fetches and caches HLS. Once WebRTC supports cascading, it will also work with the proxy.
Proxy Mirrors
To extend per-stream concurrency, the proxy can mirror streams to multiple origin servers, enabling you to play a stream from different origin servers, for fault tolerance and to scale out cluster capacity.
For example, an RTC origin server can serve about 300 to 700 players per stream; you can use the proxy to mirror the same stream to 3 origin servers, enabling 900 to 2100 players, as sketched below.
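A speculative sketch of how viewers of one mirrored stream could be spread across its mirrors; the mirror list and the hash-by-client-address policy are assumptions, not the actual proxy behavior:

```go
package main

import (
	"fmt"
	"hash/fnv"
)

// mirrorsFor would return the origin servers that carry a mirror of the stream;
// here it is hard-coded for illustration.
func mirrorsFor(stream string) []string {
	return []string{"origin-1:8000", "origin-2:8000", "origin-3:8000"}
}

// pickMirror spreads viewers of one stream across its mirrors by hashing the client
// address, so 3 mirrors of a 300-700 viewer WebRTC origin give roughly 900-2100 viewers.
func pickMirror(stream, clientAddr string) string {
	mirrors := mirrorsFor(stream)
	h := fnv.New32a()
	h.Write([]byte(clientAddr))
	return mirrors[int(h.Sum32())%len(mirrors)]
}

func main() {
	fmt.Println(pickMirror("live/livestream", "203.0.113.7:40000"))
}
```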
Limitation
The limitation of the proxy is the number of viewers per stream, which should never exceed a single server's capacity, because the proxy always forwards a given stream to the same backend server. For example, SRS supports about 5k viewers for RTMP/FLV and about 500 viewers for WebRTC; please test the capacity with srs-bench.
It's the responsibility of a cluster to support a large set of viewers, such as the Edge Cluster for RTMP/FLV or the HLS Cluster for HLS. WebRTC doesn't support clustering yet; please read the WebRTC wiki.
For most use scenarios, the proxy is a much simpler and still useful capability for load balancing, because there is a set of streams to serve but not too many, and a set of viewers for each stream but not too many. For example, for a system with 1k streams and 1k viewers per stream, the total number of connections is no more than 1m.
The proxy also works together with a cluster; for example, if you have 1k streams and each stream has 100k FLV viewers, the architecture is like this:
Keep in mind that the proxy is designed for the origin server, so there should always be a proxy in front of an origin server, even when an edge server pulls the stream from it. The proxy is a solution similar to the origin cluster, but it's much simpler and works for all protocols, like RTMP/FLV/HLS/WebRTC/SRT, while the origin cluster only works for RTMP.
Notes
The proxy should be written in Go or C++, stable and simple. It should be C++, because we might use eBPF.
It's possible to forward IP packets directly in the kernel, so the proxy uses less CPU.
The proxy is much simpler than a cluster, because both the proxy and the origin server are stateless and can be deployed with a K8s Deployment.