WARN conduwuit_api::client::sync::v3::joined: timeline for newly joined room is empty spam (caused by Nheko?) #1362
I am not sure what causes it exactly, but it is happening basically constantly (with both the current latest and main docker tags) for me, and it is using 2 CPU cores presumably for that (99% of log messages are that one).
The configuration is quite vanilla.
Unfortunately, the log message does not even say which room it is for.
I think it might be caused by Nheko (see attached graph): while I was traveling and only using Element X, I didn't see it. It might also just be coincidence, though.
Now that I have killed my clients and restarted continuwuity, it still uses 100% of one core, but the log messages no longer occur.
On an unrelated note, it would be good to have some visibility into what is going on in the release build: I have no idea what is consuming the CPU, as I am only in a handful of federated, low-traffic rooms.
Usually that log line appears as a consequence of the same race conditions that cause !779, but it doesn't really need to be a warning in release builds, especially as it's not actionable by homeserver admins. It will be demoted to a debug-only warning in the next release.
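Until that demotion ships, a workaround (not from this thread; it assumes continuwuity honors tracing's `EnvFilter` directive syntax via `RUST_LOG` or its `log` config option) would be to raise the level threshold for just that module, whose target name comes from the warning itself:

```shell
# Sketch, assuming EnvFilter-style log filtering: keep info-level logging
# globally, but only show errors from the sync module that emits this warning.
export RUST_LOG="info,conduwuit_api::client::sync=error"
echo "$RUST_LOG"
```

The same directive string should work wherever the server accepts an EnvFilter expression.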
Thanks, good to know.
Is there some way to figure out what is using the CPU (if it is not related to this message)? It has gotten worse over the last couple of releases, and the number of rooms and their traffic haven't really changed.
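One generic way to answer this (not from the thread; it assumes `perf` is installed, root access, and that the server binary is named `conduwuit`, which is a guess) is to sample the process with `perf top`:

```shell
# Sketch: live-sample which functions the running server spends CPU time in.
# "conduwuit" is an assumed process name; adjust to your deployment.
perf top -p "$(pidof conduwuit)"
```

Debug symbols in the build make the output far more readable; a stripped release binary will mostly show raw addresses.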
This message is directly related to elevated CPU usage; check how many requests per second you're getting to `/_matrix/client/v3/sync`.

About 10 rps to the sync endpoint if I have a client active, and far fewer if not. But even without a client running, the CPU usage stays at 100% of a single core (it spikes higher if there is serious traffic going on).
There's some federation traffic, but not a lot.
Network traffic for the whole node is <200 kB/s, and there is almost no I/O, in case that helps (most of the network traffic is not related to Continuwuity).
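A quick way to get the sync request rate the maintainer asked about (a sketch: the log lines below are made-up examples, and real reverse-proxy log formats will differ) is to count matching lines in an access log:

```shell
# Sketch: count requests hitting the sync endpoint in an access log.
# The here-doc stands in for a real reverse-proxy log file.
count=$(grep -c '/_matrix/client/v3/sync' <<'EOF'
GET /_matrix/client/v3/sync?timeout=30000 HTTP/1.1
GET /_matrix/client/v3/sync?timeout=30000 HTTP/1.1
GET /_matrix/client/versions HTTP/1.1
EOF
)
echo "sync requests: $count"   # prints "sync requests: 2"
```

Dividing such a count over a fixed log time window gives the requests-per-second figure.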
FWIW, I checked with 0.5.1: if Nheko clients are connected it uses CPU, but otherwise not. With :main or :latest (as of yesterday) there is 100% single-core CPU usage, so there is some regression in that range; I'm not quite sure what.
Even without clients it eventually started using all the CPU, so perhaps the version was just a red herring.
Fun fact: the current main-maxpedf build still works entirely fine after I limited it to 0.2 cores (so a fifth or less of what it wanted to use). No idea what exactly it is doing, as neither network nor I/O is much involved.
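The core limit described above can be applied with Docker's standard `--cpus` flag (a sketch: the container name `continuwuity` is an assumption; substitute your own):

```shell
# Sketch: cap an already-running container at 0.2 CPU cores.
docker update --cpus="0.2" continuwuity
# Or set the limit at start time:
# docker run -d --cpus="0.2" --name continuwuity <image>:main
```

`--cpus="0.2"` throttles the container via the kernel's CFS quota rather than pinning it to a core.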
Ah, it looks like current main fixes that too; the culprit was presumably federated presence. Sigh. (f458f6ab76)