WARN conduwuit_api::client::sync::v3::joined: timeline for newly joined room is empty spam (caused by Nheko?) #1362

Closed
opened 2026-02-12 16:54:19 +00:00 by fingon · 8 comments

I am not sure what causes it exactly, but it happens basically constantly (with both the current latest and main Docker tags), and it is using 2 CPU cores, presumably for that (99% of log messages are this one).

The configuration is quite vanilla:

```
-e CONDUIT_SERVER_NAME=fingon.iki.fi \
-e CONDUIT_ALLOW_REGISTRATION=false \
-e CONDUIT_ALLOW_FEDERATION=true \
-e CONDUIT_DATABASE_PATH=/var/lib/matrix-conduit/ \
-e CONDUIT_PORT=6167 \
-e CONDUIT_ADDRESS=0.0.0.0 \
-e CONDUIT_CONFIG="" \
-e CONDUIT_NEW_USER_DISPLAYNAME_SUFFIX="" \
-e CONDUIT_ROCKSDB_DIRECT_IO=false \
-e CONDUWUIT_ALLOW_DEVICE_NAME_FEDERATION=false \
-e CONDUWUIT_WELL_KNOWN__SERVER=matrix.fingon.iki.fi:443 \
-e CONDUWUIT_WELL_KNOWN__CLIENT=https://matrix.fingon.iki.fi \
-v {DATA_DIR}:/var/lib/matrix-conduit/ \
-p 16167:6167
```

The unfortunate thing is that the log message does not even say which room it is for.

I think it _might_ be caused by Nheko (see attached graph), as I didn't see it while I was traveling and only using Element X. It might also just be a coincidence, though.

Now that I have killed my clients and restarted Continuwuity, it still uses 100% of one core, but the log messages no longer occur.

On an unrelated note, it would be good to have some visibility into what is going on in the release build - I have no idea what is consuming the CPU, and I am only in a handful of federated low-traffic rooms.

Owner

Usually that log line appears as a consequence of the same race conditions that cause !779, but it doesn't really need to be a warning in release builds, especially as it's not really actionable by homeserver admins. It'll be demoted to only warn in debug mode in the next release.

Author

Thanks, good to know.

Is there some way to figure out what is using the CPU (if it is not related to this message)? It has gotten worse over the last couple of releases and the number of rooms and their traffic hasn't really changed.

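One way to see where the time goes might be `perf`; note that "conduwuit" as the process name here is an assumption, and inside a container this needs extra privileges (e.g. `CAP_SYS_ADMIN` or a relaxed `perf_event_paranoid`):

```shell
# Live view of the hottest functions in the running process
# (process name "conduwuit" is an assumption, adjust as needed)
perf top -p "$(pidof conduwuit)"

# Or record 30 seconds of call-graph samples and inspect offline
perf record -g -p "$(pidof conduwuit)" -- sleep 30 && perf report
```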
Owner

This message is directly related to elevated CPU usage - check how many requests per second you're getting to `/_matrix/client/v3/sync`

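A quick way to get that number from container logs might be the following sketch, assuming the container is named `continuwuity` and request paths appear in the log output (both are assumptions, not from this thread):

```shell
#!/bin/sh
# Estimate /sync requests per second over a 60-second log window.
# Reads log lines on stdin, counts lines mentioning the sync endpoint,
# and divides by the window length.
sync_rps() {
  grep -c '/_matrix/client/v3/sync' | awk '{printf "%.2f\n", $1 / 60}'
}

# Usage (container name "continuwuity" is an assumption):
#   docker logs --since 60s continuwuity 2>&1 | sync_rps
```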
Author

About 10 rps to the sync endpoint if I have a client active, and far fewer if not. But even without a client running, CPU usage stays at 100% of a single core (it spikes higher when there is serious traffic going on).

There's some federation traffic but not a lot.

Author

Network traffic for the whole node is <200 kB/s, and there is almost no I/O, in case that helps (most of the network traffic is unrelated to Continuwuity).

Author

FWIW, I checked with 0.5.1. If Nheko clients are connected it uses CPU, but otherwise not. With :main or :latest (as of yesterday) there is 100% single-core CPU usage, so there is some regression in that range; not quite sure what.

Author

Even without clients it eventually started using all the CPU, so perhaps the version was just a red herring.

Fun fact: current main-maxpedf still works entirely fine after I limited it to 0.2 cores (1/5 or less of what it wanted to use). No idea what exactly it is doing, as network and I/O are not much involved.

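For reference, that kind of cap can be applied to a running container with Docker's standard CPU limit flag; the container name `continuwuity` here is an assumption:

```shell
# Cap the running container at 0.2 CPU cores
# (container name "continuwuity" is an assumption, adjust as needed)
docker update --cpus=0.2 continuwuity

# Or set it at start time:
#   docker run --cpus=0.2 ...
```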
Author

Ah, looks like current main fixes that too - the culprit was presumably federated presence. Sigh. ( https://forgejo.ellis.link/continuwuation/continuwuity/commit/f458f6ab763e71480bc85964f77aaba8a866b191 )
