presence: Fix database purge of activewatchers (clustering, no fallback2db)
When clustering sharing_tags were added to presence, they were added to
the fallback2db "on" case only.
There are a couple of dimensions with differing behaviours:
```
+-------------+------------+------------+
| clustering: | fallback2- | fallback2- |
|             |  -db = on  | -db = off  |
+-------------+------------+------------+
| - no        | OK         | OK         |
| - tagless   | PR-2519    | PR-2519    |
| - active    | OK         | this       |
+-------------+------------+------------+
```
The non-OK cells above refer to the activewatchers table getting
filled up with stale/expired items.
fallback2db on or off (with "on", the module also falls back to
querying the DB for records not found in memory):
```
modparam("presence", "fallback2db", 0) # or 1=on
```
The no-clustering case:
```
handle_subscribe();
```
The tagless case:
```
modparam("presence", "cluster_id", 1)
modparam("clusterer", "my_node_id", 2)
handle_subscribe();
```
The active case:
```
modparam("presence", "cluster_id", 1)
modparam("clusterer", "my_node_id", 2)
modparam("clusterer", "sharing_tag", "node2/1=active")
handle_subscribe("0", "node2");
```
Whereas PR OpenSIPS#2519 fixes the tagless case, this PR fixes the
fallback2db=0 case by writing the sharing_tag to the database so the
records can be found and cleaned up.
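Putting the pieces together, here is a minimal sketch of the
configuration this fix targets: an active sharing tag with fallback2db
disabled. The modparams come from the snippets above; the route block
and the `is_method()` check are illustrative scaffolding, not part of
this PR.
```
# Active sharing tag, fallback2db off -- the previously broken combination
modparam("presence", "fallback2db", 0)
modparam("presence", "cluster_id", 1)
modparam("clusterer", "my_node_id", 2)
modparam("clusterer", "sharing_tag", "node2/1=active")

route {
    if (is_method("SUBSCRIBE")) {
        # With this fix, the "node2" sharing tag is also written to the
        # DB record, so the periodic cleanup can match and purge it once
        # it expires.
        handle_subscribe("0", "node2");
        exit;
    }
}
```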
(Sidenote: subscriptions that ended with a timeout or a 481 *would* get
cleaned up. That behaviour makes sense in all cases: if a subscription
hits an error before its expiry, purging it from the DB immediately is
the right call; and if the periodic cleanup had already removed the
record, the immediate purge is a harmless no-op.)