
PLI dilution #3422

Open · wants to merge 1 commit into 0.x

Conversation

@natikaltura commented Sep 8, 2024

Fix: overflow of PLI requests to Publishers (especially with simulcast)

This is a PR for the v0.14.3 branch (see PR# for the multistream branch).

We observed a significant issue while using the Janus Media Server on both the v0.14.3 branch and the v1.2.3 multistream branch. When VP8 simulcast is enabled and a Publisher has a group of viewers, the Publisher sometimes receives dozens of Picture Loss Indication (PLI) requests per second. Although these PLI requests don't directly lead to full frame retransmissions or additional outgoing network overhead for the Publisher, they do trigger a lot of processing, resulting in CPU spikes on the Publisher.

Additional Context
While this PR addresses the overflow of PLI requests, a related issue remains unresolved. Specifically, when Chrome is used for desktop sharing, the generated video streams often operate at a very low frame rate (0.2–0.5 FPS). This low frame rate causes Janus to generate excessive PLIs. The issue arises primarily from the janus_rtp_simulcasting_context_process_rtp() function in rtp.c, which flags that the Publisher requires a PLI (->need_pli) whenever no packets have been received on the stream for more than 250,000 microseconds.
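For readers unfamiliar with that code path, here is a minimal sketch of the timeout check described above. The struct and function names are illustrative only, not the actual janus_rtp_simulcasting_context API; only the 250,000-microsecond threshold is taken from the source.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical simplification of the check in
 * janus_rtp_simulcasting_context_process_rtp() (rtp.c): if no packet has
 * been seen on the current substream for more than 250ms, flag that a
 * PLI (keyframe request) is needed. */
#define SIM_TIMEOUT_US 250000 /* 250,000 microseconds */

typedef struct sim_ctx {
	int64_t last_relayed_us; /* time the last packet was relayed */
	int need_pli;            /* set when a keyframe request is due */
} sim_ctx;

static void sim_check_timeout(sim_ctx *ctx, int64_t now_us) {
	if(ctx->last_relayed_us > 0 && (now_us - ctx->last_relayed_us) > SIM_TIMEOUT_US) {
		/* A low-fps source (e.g., 0.2-0.5 FPS desktop sharing) exceeds
		 * this gap on nearly every check, causing a PLI flood. */
		ctx->need_pli = 1;
	}
}
```

With a 0.5 FPS source, the inter-frame gap is ~2 seconds, so this condition fires repeatedly between frames, which is the flood described above.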

As a temporary fix, we applied a patch that identifies desktop sharing by checking for the “_desktop” string constant we add to the Publisher's display property. However, a correct solution might be to add a lowFps property for Publishers, which would allow for more precise logic to handle such cases effectively.

Thank you in advance for your input.
Nati Baratz

tmatth (Contributor) commented Sep 8, 2024

@natikaltura you should change the target branch to 0.x

@natikaltura natikaltura changed the base branch from master to 0.x September 8, 2024 17:17
@natikaltura natikaltura marked this pull request as ready for review September 8, 2024 17:19
lminiero (Member) commented Sep 9, 2024

@natikaltura thanks for the PRs! Answering here even though some considerations will apply to #3423 as well (where there are more changes due to the multiple streams).

I think the patches can be simplified, since you're adding a new volatile that's probably not needed. The main problem is that the fence we currently have doesn't seem to be working well with the multistream nature of Janus (the same function is invoked by multiple threads), and mostly because we only update fir_latest at the end of the function instead of the very beginning. Moving it earlier would IMO do the same thing that the new sending_pli thing you added tries to do.
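The fence idea in the paragraph above can be sketched as follows. This is an assumed simplification, not the actual janus_videoroom code (the names publisher, maybe_send_pli and the single-threaded timestamp update are hypothetical; the real code would need the volatile/atomic handling being discussed): the key point is that fir_latest is updated before sending, so requests arriving inside the window are diluted.

```c
#include <stdint.h>

#define PLI_MIN_INTERVAL_US 100000 /* the current 100ms fence */

typedef struct publisher {
	int64_t fir_latest; /* time the last PLI was actually sent */
} publisher;

/* Returns 1 if a PLI should go out, 0 if this request is diluted.
 * Updating fir_latest at the very start of the path (rather than after
 * the send) is what closes most of the race window between threads. */
static int maybe_send_pli(publisher *p, int64_t now_us) {
	if(p->fir_latest > 0 && (now_us - p->fir_latest) < PLI_MIN_INTERVAL_US)
		return 0;           /* too soon since the last PLI: drop */
	p->fir_latest = now_us; /* update BEFORE doing the actual send */
	/* ... the equivalent of gateway->send_pli() would happen here ... */
	return 1;
}
```

In the real multi-threaded code the timestamp check-and-set would need an atomic compare-and-exchange to be fully race-free, but moving the update earlier already shrinks the window dramatically.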

I don't like the change of the threshold from 100ms to 1 second. It will make responsiveness for new users (and simulcast changes) awful in some cases. While we can agree that 100ms may be too short (and probably the cause of your ~10 PLIs per second to the publisher), I don't see 1s as the answer, especially with the fence fix from the paragraph above.

Coming to multistream, I don't see the point of adding a new function. I'd just add a new property to the existing function to dictate a potential different behaviour (e.g., -1 does what it did before, a positive integer only pokes the specific stream). Not even sure why we need two? If we're starting to do drill down PLIs, why not only do that?
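The single-function design suggested above could look roughly like this. All names here are hypothetical, not the actual janus_videoroom multistream API: one helper takes a stream index, where -1 preserves the old behaviour (poke every video stream) and a non-negative index only pokes that stream.

```c
#include <stddef.h>

typedef struct stream {
	int is_video;       /* only video streams can take a PLI */
	int pli_requested;  /* set when a keyframe request is queued */
} stream;

/* index == -1: legacy behaviour, request a PLI on all video streams;
 * index >= 0: drill-down, only poke that specific (video) stream. */
static void request_pli(stream *streams, size_t count, int index) {
	if(index >= 0) {
		if((size_t)index < count && streams[index].is_video)
			streams[index].pli_requested = 1;
		return;
	}
	for(size_t i = 0; i < count; i++) {
		if(streams[i].is_video)
			streams[i].pli_requested = 1;
	}
}
```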

While this PR addresses the overflow of PLI requests, a related issue remains unresolved. Specifically, when Chrome is used for desktop sharing, the generated video streams often operate at a very low frame rate (0.2–0.5 FPS). This low frame rate causes Janus to generate excessive PLIs.

As you noticed, that's because the simulcast code defaults to 250ms, but that's a (reasonable) default. You can override that with the fallback property when subscribing or using configure. As such, I don't see this as an issue, unless by lowFps you mean give a way to the publisher to specify the fallback that should be used as default for that specific stream, so that subscribers don't have to do it themselves.

natikaltura (Author) commented Sep 9, 2024

@natikaltura thanks for the PRs! Answering here even though some considerations will apply to #3423 as well (where there are more changes due to the multiple streams).

@lminiero Thanks a lot for the comments!

I’ll fix both PRs after getting your inputs about the following:

I think the patches can be simplified, since you're adding a new volatile that's probably not needed. The main problem is that the fence we currently have doesn't seem to be working well with the multistream nature of Janus (the same function is invoked by multiple threads), and mostly because we only update fir_latest at the end of the function instead of the very beginning. Moving it earlier would IMO do the same thing that the new sending_pli thing you added tries to do.

Thanks! I'll remove the sending_pli volatile from the 0.x stream since it seems unnecessary with the other two PR changes:

  1. Moving update_fir_latest to the beginning
  2. Calling janus_videoroom_reqpli() in janus_videoroom_relay_rtp_packet() instead of directly calling gateway->send_pli()

I originally copied the sending_pli volatile from the multistream implementation.
Do you think it’s needed in the main branch? If so I'd be happy to include the removal fix in the second PR.

I don't like the change of the threshold from 100ms to 1 second. It will make responsiveness for new users (and simulcast changes) awful in some cases. While we can agree that 100ms may be too short (and probably the cause of your ~10 PLIs per second to the publisher), I don't see 1s as the answer, especially with the fence fix from the paragraph above.

Just to emphasize the need for this fix, I have logs showing 200 or more PLIs per second (for each layer), especially with screen sharing on challenging networks :)

  1. I took the 1-second threshold from the main (multistream) branch, where it is used in the janus_videoroom_rtcp_pli_send() function in janus_videoroom.c. Do you think this could be an issue?
  2. Would 500ms make more sense, or is that still too high? And let's keep in mind that it's sent for each layer.

Coming to multistream, I don't see the point of adding a new function. I'd just add a new property to the existing function to dictate a potential different behaviour (e.g., -1 does what it did before, a positive integer only pokes the specific stream). Not even sure why we need two? If we're starting to do drill down PLIs, why not only do that?

Sure. I will merge it into one function.

While this PR addresses the overflow of PLI requests, a related issue remains unresolved. Specifically, when Chrome is used for desktop sharing, the generated video streams often operate at a very low frame rate (0.2–0.5 FPS). This low frame rate causes Janus to generate excessive PLIs.

As you noticed, that's because the simulcast code defaults to 250ms, but that's a (reasonable) default. You can override that with the fallback property when subscribing or using configure. As such, I don't see this as an issue, unless by lowFps you mean give a way to the publisher to specify the fallback that should be used as default for that specific stream, so that subscribers don't have to do it themselves.

Thanks! I realize I missed the full picture...
Nevertheless, we implemented our patch on the Publisher side, so perhaps the best practice would be to add the fallback property for Publishers as well, settable when publishing or via configure.
If you find it useful for other users, we will try to implement it in the PRs.

@natikaltura natikaltura closed this Sep 9, 2024
@natikaltura natikaltura reopened this Sep 9, 2024