Implement room v12 #943

Merged
nex merged 27 commits from hydra/public into main 2025-09-17 20:46:04 +00:00
Owner

Does not yet work! Currently, state resolution does not correctly resolve conflicting states. Everything else appears to work as expected, so stateres will be fixed soon, then we should be clear for takeoff.

Also: a lot of things currently accept a nullable room ID when they really don't need to. This will need tidying up before merge. Some authentication checks have also been disabled temporarily, but nothing important.

A lot of things are tagged with `TODO(hydra)`; those need resolving before merge. External contributors should PR to the `hydra/public` branch, *not* `main`.


This PR should be squash merged.

nex added this to the 0.5.0-old milestone 2025-08-25 02:14:32 +00:00
nex self-assigned this 2025-08-25 02:14:32 +00:00
Author
Owner

log of how the current state res is misbehaving (notice that the power levels are resolved correctly but somehow never make it into the final state map)

nex added the due date 2025-09-01 2025-08-26 00:29:01 +00:00
Cargo.toml Outdated
@ -354,2 +354,3 @@
#branch = "conduwuit-changes"
rev = "b753738047d1f443aca870896ef27ecaacf027da"
#rev = "b753738047d1f443aca870896ef27ecaacf027da"
path = "../ruwuma/crates/ruma"
Author
Owner

This needs to point at the latest commit in continuwuation/ruwuma#28

nex marked this conversation as resolved
@ -905,0 +906,4 @@
.rooms
.state
.mutex
.lock(&event.room_id_or_hash())
Author
Owner

We forbid redacting the create event so it should be safe to unwrap the room_id itself here

nex marked this conversation as resolved
@ -916,3 +922,3 @@
},
event.sender(),
event.room_id(),
Some(&event.room_id_or_hash()),
Author
Owner

We forbid redacting the create event so it should be safe to unwrap the room_id itself here

nex marked this conversation as resolved
@ -550,12 +550,20 @@ async fn join_room_by_id_helper_remote(
.iter()
.stream()
.then(|pdu| {
debug!(?pdu, "Validating send_join response room_state event");
Author
Owner

This debug call needs removing or dropping to a `trace`
nex marked this conversation as resolved
@ -564,3 +572,3 @@
},
};
debug!(event_id = ?event_id.clone(), "Adding PDU outlier for send_join response room_state event");
Author
Owner

This debug call needs removing or dropping to a `trace`
nex marked this conversation as resolved
@ -567,2 +574,4 @@
debug!(event_id = ?event_id.clone(), "Adding PDU outlier for send_join response room_state event");
services.rooms.outlier.add_pdu_outlier(&event_id, &value);
if let Some(state_key) = &pdu.state_key {
debug!(?state_key, "Creating shortstatekey for state event in send_join response");
Author
Owner

This debug call needs removing or dropping to a `trace`
nex marked this conversation as resolved
@ -73,3 +73,1 @@
|| services
.moderation
.is_remote_server_forbidden(room_id.server_name().expect("legacy room mxid"))
|| (room_id.server_name().is_some()
Author
Owner

This chain is ugly; it should be rewritten into a variable to produce something like

```rs
if services.rooms.metadata.is_banned(room_id).await || legacy_room_is_banned
```
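For illustration, the suggested hoist could look like this, with plain values standing in for the real service calls (`room_is_banned`, `room_server`, and `forbidden_servers` are hypothetical stand-ins for the async `services.rooms.metadata` / `services.moderation` lookups):

```rust
/// Sketch of the refactor: hoist the ugly chain into a named intermediate
/// so the final condition reads as prose.
fn is_banned(room_is_banned: bool, room_server: Option<&str>, forbidden_servers: &[&str]) -> bool {
    // Legacy room MXIDs carry a server name; hash-based v12 room IDs do not.
    let legacy_room_is_banned =
        room_server.is_some_and(|s| forbidden_servers.contains(&s));
    room_is_banned || legacy_room_is_banned
}
```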
nex marked this conversation as resolved
@ -126,1 +81,4 @@
};
let room_features = RoomVersion::new(&room_version)?;
let room_id: Option<OwnedRoomId> = match room_features.room_ids_as_hashes {
Author
Owner

should probably be an `if`/`else` instead of a `match`
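A minimal sketch of the suggested shape, with a plain `bool` and `String` standing in for `room_features.room_ids_as_hashes` and the owned room-ID type:

```rust
/// `if`/`else` reads more naturally than a two-arm `match` on a bool.
fn initial_room_id(room_ids_as_hashes: bool, mxid: &str) -> Option<String> {
    if room_ids_as_hashes {
        // v12: the room ID is derived from the create event's hash later.
        None
    } else {
        Some(mxid.to_owned())
    }
}
```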
nex marked this conversation as resolved
@ -158,1 +158,4 @@
);
if room_version == V12 {
// TODO(hydra): v12 rooms cannot be federated until they are stable.
content.insert("m.federate".into(), false.into());
Author
Owner

This needs removing before un-WIPing the PR

nex marked this conversation as resolved
@ -172,1 +176,4 @@
content.insert("room_version".into(), json!(room_version.as_str()).try_into()?);
if room_version == V12 {
// TODO(hydra): v12 rooms cannot be federated until they are stable.
content.insert("m.federate".into(), false.into());
Author
Owner

This also needs removing before un-WIPing the PR

nex marked this conversation as resolved
@ -177,0 +186,4 @@
| Some(room_id) => services.rooms.state.mutex.lock(&room_id).await,
| None => {
let temp_room_id = RoomId::new(services.globals.server_name());
debug_info!("Locking temporary room state mutex for {temp_room_id}");
Author
Owner

This should be a `trace`
nex marked this conversation as resolved
@ -191,3 +209,4 @@
)
.boxed()
.await?;
debug!("Created room create event with ID {}", create_event_id);
Author
Owner

This should be a `trace`
nex marked this conversation as resolved
@ -194,0 +214,4 @@
| Some(room_id) => room_id,
| None => {
let as_room_id = create_event_id.as_str().replace('$', "!");
debug_info!("Creating room with v12 room ID {as_room_id}");
Author
Owner

This should be a `trace` or at least `debug`
nex marked this conversation as resolved
@ -236,2 +267,4 @@
}
let mut creators: Vec<OwnedUserId> = vec![sender_user.to_owned()];
if let Some(additional_creators) = create_content.get("additional_creators") {
Author
Owner

holy nesting

Author
Owner

this is probably fine shush

nex marked this conversation as resolved
@ -238,0 +280,4 @@
}
}
if !(RoomVersion::new(&room_version)?).explicitly_privilege_room_creators {
creators.clear();
Author
Owner

We just did all of that work for no reason?

nex marked this conversation as resolved
@ -138,6 +138,7 @@ async fn handle(
pdus: impl Stream<Item = Pdu> + Send,
edus: impl Stream<Item = Edu> + Send,
) -> Result<ResolvedMap> {
// TODO(hydra): Does having no room ID break this?
Author
Owner

No, it does not, as far as I am aware

nex marked this conversation as resolved
@ -186,6 +187,7 @@ async fn handle_room(
.lock(&room_id)
.await;
// TODO(hydra): We might be missing a room ID
Author
Owner

We are not

nex marked this conversation as resolved
@ -19,3 +19,3 @@
/// Experimental, partially supported room versions
pub const UNSTABLE_ROOM_VERSIONS: &[RoomVersionId] =
&[RoomVersionId::V3, RoomVersionId::V4, RoomVersionId::V5];
&[RoomVersionId::V3, RoomVersionId::V4, RoomVersionId::V5, RoomVersionId::V12];
Author
Owner

more commentary than a review, but I would like to keep V12 as "unstable" for a couple more software versions until we can be sure we've ironed out any explosive bugs

@ -172,0 +173,4 @@
/// The `RoomId` or hash of this event.
/// This should only be preferred over room_id() if the event is a v12
/// create event.
fn room_id_or_hash(&self) -> OwnedRoomId;
Author
Owner
	/// This should only be preferred over room_id() if the event is a v12
	/// create event.

I believe some validation should be performed to ensure someone doesn't try to pull a fast one by sending a non-create event without a room ID. This function should explode if that happens, not just carry along.

Author
Owner

I wonder if placing the burden of not holding it wrong should fall upon the holder, not it
@ -36,0 +35,4 @@
if filter
.not_rooms
.iter()
.any(is_equal_to!(event.room_id().unwrap()))
Author
Owner

not convinced it's safe to unwrap() here

nex marked this conversation as resolved
@ -38,3 +42,3 @@
if let Some(rooms) = filter.rooms.as_ref() {
if !rooms.iter().any(is_equal_to!(event.room_id())) {
if !rooms.iter().any(is_equal_to!(event.room_id().unwrap())) {
Author
Owner

not convinced it's safe to unwrap() here

nex marked this conversation as resolved
@ -114,0 +117,4 @@
if let Some(room_id) = &self.room_id {
room_id.clone()
} else {
let constructed_hash = "!".to_owned() + &self.event_id.as_str()[1..];
Author
Owner

`self.event_id.as_str().replace('$', "!")` anyone?
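Both spellings agree on well-formed event IDs, since a hash-based event ID contains `$` only as the leading sigil; a quick sketch over plain strings:

```rust
/// Swap the leading `$` sigil for `!` to derive the v12 room ID.
/// Assumes `$` appears only as the leading sigil (true for hash-based IDs).
fn room_id_from_event_id(event_id: &str) -> String {
    event_id.replace('$', "!")
}

/// The index-based spelling from the diff produces the same result.
fn room_id_by_indexing(event_id: &str) -> String {
    "!".to_owned() + &event_id[1..]
}
```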
nex marked this conversation as resolved
@ -114,0 +119,4 @@
} else {
let constructed_hash = "!".to_owned() + &self.event_id.as_str()[1..];
RoomId::parse(&constructed_hash)
.expect("event ID can be indexed")
Author
Owner

indexed?

nex marked this conversation as resolved
@ -167,0 +182,4 @@
if let Some(room_id) = &self.room_id {
room_id.clone()
} else {
let constructed_hash = "!".to_owned() + &self.event_id.as_str()[1..];
Author
Owner

`self.event_id.as_str().replace('$', "!")`
nex marked this conversation as resolved
@ -66,3 +74,1 @@
(StateEventType::RoomMember, sender.as_str().into()),
(StateEventType::RoomCreate, StateKey::new()),
];
let mut auth_types = if room_version.room_ids_as_hashes {
Author
Owner

In a previous commit before going public this was similar to

```rs
if room_version.room_ids_as_hashes {
    auth_events.pop()
}
```

and after reading this diff, I prefer that, even if it's less declarative
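A sketch of that earlier shape, with the auth-event keys simplified to string tuples (names illustrative, not the real `StateEventType`/`StateKey` types):

```rust
/// Build the base auth-event keys, dropping the create-event reference
/// for room versions where room IDs are hashes.
fn auth_types(room_ids_as_hashes: bool) -> Vec<(&'static str, &'static str)> {
    let mut auth_types = vec![
        ("m.room.power_levels", ""),
        ("m.room.member", "@sender:example.org"),
        ("m.room.create", ""),
    ];
    if room_ids_as_hashes {
        // v12: the create event is referenced via the room ID itself,
        // so it must not appear in auth_events.
        auth_types.pop();
    }
    auth_types
}
```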
@ -205,3 +219,3 @@
}
if !room_version.use_room_create_sender {
// TODO(hydra): If the create event has a room_id, reject
Author
Owner

This is no longer a TODO

nex marked this conversation as resolved
@ -207,1 +221,3 @@
if !room_version.use_room_create_sender {
// TODO(hydra): If the create event has a room_id, reject
if room_version.room_ids_as_hashes && incoming_event.room_id().is_some() {
warn!("this room version does not support room IDs in m.room.create");
Author
Owner

Not a very good error message, `received create event that incorrectly claimed a room ID` is likely better
nex marked this conversation as resolved
@ -216,6 +238,8 @@ where
return Ok(true);
}
// NOTE(hydra): We must have an event ID from this point forward.
Author
Owner

... yeah? `s/event/room/`?
nex marked this conversation as resolved
@ -255,3 +276,1 @@
return Ok(false);
},
| Some(e) => e,
// TODO(hydra): Re-enable <v12 checks
Author
Owner

This is done at line 323 now, this TODO no longer applies (and the commented code can be removed)

nex marked this conversation as resolved
@ -258,0 +303,4 @@
let expected_room_id = match room_version.room_ids_as_hashes {
// If the room version uses hashes, we replace the create event's event ID leading sigil
// with !
| true => OwnedRoomId::try_from(room_create_event.event_id().as_str().replace('$', "!"))
Author
Owner

Is the `room_id_or_hash()` helper not available here?
nex marked this conversation as resolved
@ -278,2 +350,2 @@
return Ok(false);
}
// removed as part of Hydra.
// TODO: reintroduce this for <v12 lol
Author
Owner

Again, line 323. Remove this

nex marked this conversation as resolved
@ -399,0 +474,4 @@
{
warn!(
"room_id of incoming event ({}) does not match room_id of m.room.create event ({})",
sender_member_event.room_id_or_hash(),
Author
Owner

room_id().unwrap() is fine on this line

nex marked this conversation as resolved
@ -442,0 +531,4 @@
creators
.iter()
.any(|c| c.deserialize().is_ok_and(|c| c == *sender))
}) {
Author
Owner

What an ugly chain

nex marked this conversation as resolved
@ -442,0 +535,4 @@
trace!("privileging room creator or additional creator");
// This user is the room creator or an additional creator, give them max power
// level
sender_power_level = Int::MAX;
Author
Owner

technically, Int::MAX can occur in real power levels - perhaps push it to Int::MAX+1?

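Note that `Int::MAX + 1` would overflow the integer type, so one alternative is a level type with a creator variant strictly above every representable integer; a hypothetical sketch:

```rust
use std::cmp::Ordering;

/// Power level that can sit strictly above any integer a room can contain.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum PowerLevel {
    Value(i64),
    /// v12 room creators: above every representable level, so a real
    /// power-levels event can never tie with them (unlike `i64::MAX`).
    Infinite,
}

impl PartialOrd for PowerLevel {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for PowerLevel {
    fn cmp(&self, other: &Self) -> Ordering {
        use PowerLevel::*;
        match (self, other) {
            (Infinite, Infinite) => Ordering::Equal,
            (Infinite, Value(_)) => Ordering::Greater,
            (Value(_), Infinite) => Ordering::Less,
            (Value(a), Value(b)) => a.cmp(b),
        }
    }
}
```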
nex marked this conversation as resolved
@ -588,0 +702,4 @@
sender_power = Some(&Int::MAX);
}
if creators.contains(target_user) {
target_power = Some(&Int::MAX);
Author
Owner

Same as earlier - `Int::MAX` can theoretically appear in real power levels so this is unsafe
nex marked this conversation as resolved
@ -614,2 +735,4 @@
(int!(0), int!(0))
};
if creators.contains(user_for_join_auth) {
auth_user_pl = Int::MAX;
Author
Owner

again, needs to be higher

nex marked this conversation as resolved
@ -622,6 +746,7 @@ where
Ok(match target_membership {
| MembershipState::Join => {
debug!("starting target_membership=join check");
Author
Owner

This should be `trace`, not `debug`
nex marked this conversation as resolved
@ -634,3 +759,3 @@
if prev_event_is_create_event && no_more_prev_events {
let is_creator = if room_version.use_room_create_sender {
debug!("checking if sender is a room creator for initial membership event");
Author
Owner

This should be `trace`, not `debug`
nex marked this conversation as resolved
@ -647,10 +775,15 @@ where
};
if is_creator {
debug!("sender is room creator, allowing join");
Author
Owner

This should be `trace`, not `debug`
nex marked this conversation as resolved
@ -650,2 +778,4 @@
debug!("sender is room creator, allowing join");
return Ok(true);
}
debug!("sender is not room creator, proceeding with normal auth checks");
Author
Owner

This should be `trace`, not `debug`
nex marked this conversation as resolved
@ -691,0 +817,4 @@
warn!("Join rule is knock_restricted but membership does not allow join");
false
},
| JoinRule::Restricted(_) | JoinRule::KnockRestricted(_) =>
Author
Owner

This match statement, while clearer than the code it replaces, is still not clear to a passing reader.

@ -692,3 +833,4 @@
},
| MembershipState::Invite => {
// If content has third_party_invite key
debug!("starting target_membership=invite check");
Author
Owner

This should be `trace`, not `debug`
nex marked this conversation as resolved
@ -94,1 +94,4 @@
{
use RoomVersionId::*;
let stateres_version = match room_version {
| V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 | V11 => 2.0,
Author
Owner

These floats should be enums

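A sketch of the enum shape, with a plain integer standing in for `RoomVersionId` (names hypothetical):

```rust
/// Replace the bare float (2.0 / 2.1) with a dedicated enum.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum StateResVersion {
    V2,
    V2_1,
}

/// Stand-in for the match on `RoomVersionId`; v12+ uses state res v2.1.
fn stateres_version(room_version: u32) -> StateResVersion {
    match room_version {
        2..=11 => StateResVersion::V2,
        _ => StateResVersion::V2_1,
    }
}
```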
@ -111,0 +117,4 @@
calculate_conflicted_subgraph(&conflicting, event_fetch)
.await
.ok_or_else(|| {
Error::InvalidPdu("Failed to calculate conflicted subgraph".to_owned())
Author
Owner

can this error actually happen? If so, why does it swallow the actual error?

nex marked this conversation as resolved
@ -251,2 +273,4 @@
}
/// Calculate the conflicted subgraph
async fn calculate_conflicted_subgraph<F, Fut, E>(
Author
Owner

This function is the code manifestation of a tin of sardines. Whitespace isn't illegal

nex marked this conversation as resolved
@ -288,2 +288,4 @@
let mut notify = None;
let mut tweaks = Vec::new();
if event.room_id().is_none() {
// TODO(hydra): does this matter?
Author
Owner

I don't think this does, this would only happen if `event` is a create event, which nobody is setting push rules for
@ -196,3 +196,3 @@
},
| Ok(pdu) => {
if pdu.room_id != room_id {
if pdu.room_id.is_some() && pdu.room_id != Some(room_id.to_owned()) {
Author
Owner

`is_some_and`?
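The suggested `is_some_and` folds the presence check and the comparison into one expression; a sketch over plain strings standing in for the room-ID types:

```rust
/// Equivalent to `pdu_room_id.is_some() && pdu_room_id != Some(expected)`,
/// without cloning the expected ID into an `Option`.
fn room_id_mismatch(pdu_room_id: Option<&str>, expected: &str) -> bool {
    pdu_room_id.is_some_and(|id| id != expected)
}
```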
nex marked this conversation as resolved
@ -59,2 +59,4 @@
is_timeline_event: bool,
) -> Result<Option<RawPduId>> {
if room_id.is_empty() {
// TODO(hydra): Room IDs should be calculated before this function is called
Author
Owner

Not entirely sure what to do with this information, past nexy. Should this just be an `unreachable!()`?
nex marked this conversation as resolved
@ -36,3 +38,3 @@
.await?;
if self.services.admin.is_admin_room(pdu.room_id()).await {
let room_id = pdu.room_id_or_hash();
Author
Owner

Check if it's a create event first.

nex marked this conversation as resolved
@ -39,0 +40,4 @@
let room_id = pdu.room_id_or_hash();
trace!("Checking if room {room_id} is an admin room");
if self.services.admin.is_admin_room(&room_id).await {
trace!("Room {room_id} is an admin room, checking PDU for admin room restrictions");
Author
Owner

These logs just aren't necessary

nex marked this conversation as resolved
@ -96,0 +107,4 @@
if room_features.room_ids_as_hashes {
// bootstrap shortid for room
debug_warn!(%room_id, "Bootstrapping shortid for room");
self.services
Author
Owner

This was needed during dev, but is it still needed?

Author
Owner

idk but im keeping it

nex marked this conversation as resolved
@ -68,2 +71,4 @@
};
let room_version = RoomVersion::new(&room_version_id).expect("room version is supported");
// TODO(hydra): Only create events can lack a room ID.
Author
Owner

Okay? Why is this here

nex marked this conversation as resolved
@ -155,0 +199,4 @@
})?,
),
};
let create_event = match &pdu.kind {
Author
Owner

what?

Author
Owner

i dont even know, it's fine probably

nex marked this conversation as resolved
@ -22,3 +23,4 @@
return Err!(BadServerResponse("Failed to determine keys required to verify: {e}"));
},
};
debug!(?required, "Keys required to verify event");
Author
Owner

`trace` not debug
nex marked this conversation as resolved
@ -70,6 +73,7 @@ pub async fn get_verify_key(
let notary_only = self.services.server.config.only_query_trusted_key_servers;
if let Some(result) = self.verify_keys_for(origin).await.remove(key_id) {
debug!("Found key in cache");
Author
Owner

trace not debug

nex marked this conversation as resolved
@ -119,10 +120,12 @@ pub async fn required_keys_exist(
) -> bool {
use ruma::signatures::required_keys;
debug!(?object, "Checking required keys exist");
Author
Owner

trace not debug

nex marked this conversation as resolved
@ -123,3 +126,3 @@
return false;
};
debug!(?required_keys, "Required keys to verify event");
Author
Owner

trace not debug

nex marked this conversation as resolved
@ -28,18 +30,25 @@ pub async fn validate_and_add_event_id_no_fetch(
pdu: &RawJsonValue,
room_version: &RoomVersionId,
) -> Result<(OwnedEventId, CanonicalJsonObject)> {
debug!(?pdu, "Validating PDU without fetching keys");
Author
Owner

trace not debug

nex marked this conversation as resolved
@ -30,2 +32,4 @@
) -> Result<(OwnedEventId, CanonicalJsonObject)> {
debug!(?pdu, "Validating PDU without fetching keys");
let (event_id, mut value) = gen_event_id_canonical_json(pdu, room_version)?;
debug!(event_id = event_id.as_str(), "Generated event ID, checking required keys");
Author
Owner

in what world would this be anything more than a trace? why is this even here?

nex marked this conversation as resolved
@ -35,3 +42,3 @@
)));
}
debug!("All required keys exist, verifying event");
Author
Owner

trace not debug

nex marked this conversation as resolved
@ -40,3 +48,4 @@
"Event {event_id} failed verification: {e:?}"
)));
}
debug!("Event verified successfully");
Author
Owner

trace

nex marked this conversation as resolved
nex modified the milestone from 0.5.0-old to v0.5.0-rc.8 2025-08-26 23:17:59 +00:00
nex modified the due date from 2025-09-01 to 2025-09-14 2025-08-26 23:18:18 +00:00
nex force-pushed hydra/public from 56d869c941
Some checks failed
Release Docker Image / define-variables (push) Successful in 3s
Checks / Prek / Pre-commit & Formatting (push) Successful in 22s
Checks / Rust / Format (push) Failing after 41s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 15s
Documentation / Build and Deploy Documentation (pull_request) Successful in 49s
Checks / Rust / Clippy (push) Failing after 1m30s
Checks / Rust / Cargo Test (push) Failing after 1m41s
Release Docker Image / build-image (linux/amd64, release, linux-amd64, base) (push) Failing after 5m8s
Release Docker Image / build-image (linux/arm64, release, linux-arm64, base) (push) Failing after 5m15s
Release Docker Image / merge (push) Has been skipped
to 327fa02cd9
Some checks failed
Release Docker Image / define-variables (push) Successful in 3s
Documentation / Build and Deploy Documentation (pull_request) Successful in 1m6s
Checks / Prek / Pre-commit & Formatting (push) Failing after 1m31s
Release Docker Image / build-image (linux/arm64, release, linux-arm64, base) (push) Has been cancelled
Release Docker Image / build-image (linux/amd64, release, linux-amd64, base) (push) Has been cancelled
Checks / Prek / Clippy and Cargo Tests (push) Failing after 9m4s
Release Docker Image / merge (push) Has been cancelled
2025-08-30 15:55:36 +00:00
@ -196,8 +220,11 @@ where
// Add unconflicted state to the resolved state
// We priorities the unconflicting state
resolved_state.extend(clean);
resolved_state.extend(resolved_control); // TODO(hydra): this feels disgusting and wrong but it allows
Author
Owner

no this is fine please read the comment literally 8 lines above this

Author
Owner

Right now, state res v2.1 is functional; however, there's now a major complication: for some reason, the "first" PDU in federated rooms is not being persisted as the first one, which means the server currently will do one remote backfill request, and then **never backfill again**. I don't know why this happens; you can probably see my futile debugging attempts in d221491377b7e170e8ace28336be79e9de290306.

I still advise against running this in a production deployment, as getting backfill in rooms that encounter this bug is nigh impossible, but if you're as bad at following instructions as me, then at least now it won't catastrophically explode.
Author
Owner

Oh yeah I also think this has broken backfill in <v12 rooms too, but haven't had the chance to actually test that yet

Author
Owner

Interestingly, calling `/messages` with a `limit` of 100 (or just more events than are in the room), everything returns as expected. But the `end` value never changes, which either causes a request loop, or causes clients to abort loading further messages?

https://forgejo.ellis.link/continuwuation/continuwuity/src/commit/9e62e66ae4a46f3e742644a86c9c699eb5625f47/src/api/client/message.rs#L175

`let next_token = events.last().map(at!(0));`

this guy smells weird
Jade left a comment
Owner

Notes to self, got to `src/service/rooms/timeline/backfill.rs`
@ -172,0 +173,4 @@
/// The `RoomId` or hash of this event.
/// This should only be preferred over room_id() if the event is a v12
/// create event.
fn room_id_or_hash(&self) -> OwnedRoomId;
Owner

Should these even be separate methods? Need to look at the underlying code here, but we should only ever be getting the hash from this method from a v12 create event, getting that hash is infallible, and we should never be taking the ID as a field on a v12 create event. Leaving these as separate methods, and one as an `Option`, feels bad.

normal event -> field is always there
<v12 create -> field is always there
v12 create -> hash is infallible

any situation in which we can't get a room ID is where we've got bad data, and should have been rejected before we get to this point
Author
Owner

I stated in an earlier self-review that one of the room ID getters should enforce only calculating it based on the event ID if there is

  1. no existing room ID in the PDU
  2. the PDU type is a valid m.room.create event

But never got around to it.
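A hypothetical sketch of that single infallible getter, using a simplified `Pdu` stand-in rather than the real `Event` trait:

```rust
/// Minimal stand-in for a PDU; the real code works through the `Event` trait.
struct Pdu {
    event_id: String,
    kind: String,
    room_id: Option<String>,
}

impl Pdu {
    /// One infallible getter, per the review: a missing `room_id` field is
    /// only legal on a v12 `m.room.create` event, where the ID is derived
    /// from the event hash; anything else is bad data that should have been
    /// rejected earlier, so explode rather than carry along.
    fn room_id(&self) -> String {
        match &self.room_id {
            Some(id) => id.clone(),
            None if self.kind == "m.room.create" => self.event_id.replace('$', "!"),
            None => panic!("non-create event without a room_id"),
        }
    }
}
```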
@ -114,0 +117,4 @@
if let Some(room_id) = &self.room_id {
room_id.clone()
} else {
let constructed_hash = self.event_id.as_str().replace('$', "!");
Owner

feels like a footgun, see prior comment

@ -633,7 +759,10 @@ where
let no_more_prev_events = prev_events.next().is_none();
if prev_event_is_create_event && no_more_prev_events {
let is_creator = if room_version.use_room_create_sender {
Owner

Need to look at the spec to see what's actually supposed to happen here

@ -46,1 +46,4 @@
#[derive(Deserialize)]
struct RoomCreateContentFields {
room_version: Option<Raw<RoomVersionId>>,
Owner

Should this really be option? Or default to a specific version?

@ -47,0 +47,4 @@
#[derive(Deserialize)]
struct RoomCreateContentFields {
room_version: Option<Raw<RoomVersionId>>,
creator: Option<Raw<IgnoredAny>>,
Owner

Also thinking about this being an enum in ruwuma; will have to look at what it's used for
@ -193,2 +200,2 @@
warn!("servername of room ID does not match servername of sender");
return Ok(false);
if incoming_event.room_id().is_some() {
let Some(room_id_server_name) = incoming_event.room_id().unwrap().server_name()
Owner

Feels like should be a match instead

@ -205,3 +220,3 @@
}
if !room_version.use_room_create_sender {
// TODO(hydra): If the create event has a room_id, reject
Owner

todo

@ -258,0 +280,4 @@
// // Room was either v11 with no create event, or v12+ room
// if incoming_event.room_id().is_some() {
// // invalid v11
// warn!("no m.room.create event found in claimed state");
Owner

todo

@ -258,0 +296,4 @@
from_json_str(room_create_event.content().get())?;
if room_create_content
.room_version
.is_some_and(|v| v.deserialize().is_err())
Owner

if is none??

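The reviewer's point is that `is_some_and` silently passes a missing field; whether that is correct depends on whether an absent `room_version` should be rejected or defaulted. A sketch contrasting the two behaviours, with `Option<Result<(), ()>>` standing in for the `Option<Raw<RoomVersionId>>` field:

```rust
/// The check under review (simplified): rejects only when the field is
/// present *and* fails to deserialize; a missing field sails through.
fn rejects_is_some_and(room_version: Option<Result<(), ()>>) -> bool {
    room_version.is_some_and(|v| v.is_err())
}

/// If an absent field should also be rejected, `map_or` covers both cases.
fn rejects_including_none(room_version: Option<Result<(), ()>>) -> bool {
    room_version.map_or(true, |v| v.is_err())
}
```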
@ -262,0 +323,4 @@
// If the create event is referenced in the event's auth events, and this is a
// v12 room, reject
let claims_create_event = incoming_event
Owner

need to check the spec here

@ -262,0 +326,4 @@
let claims_create_event = incoming_event
.auth_events()
.any(|id| id == room_create_event.event_id());
if room_version.room_ids_as_hashes && claims_create_event {
Owner

need to look at room version here, field vs assoc const
@ -277,3 +351,1 @@
warn!("no m.room.create event in auth events");
return Ok(false);
}
// removed as part of Hydra.
Owner

Don't we do this earlier?

@ -442,0 +536,4 @@
trace!("privileging room creator or additional creator");
// This user is the room creator or an additional creator, give them max power
// level
sender_power_level = Int::MAX;
Owner

TODO: I think int max can be set as a valid PL, so we need infinity or something that can't be

@ -588,0 +703,4 @@
sender_power = Some(&Int::MAX);
}
if creators.contains(target_user) {
target_power = Some(&Int::MAX);
Owner

TODO

Also why is this so different to prev one?

@ -688,3 +797,1 @@
// If the join_rule is public, allow.
// Otherwise, reject.
join_rules == JoinRule::Public
match join_rules {
Owner

Uhh let me come back to this

@ -95,0 +95,4 @@
use RoomVersionId::*;
let stateres_version = match room_version {
| V2 | V3 | V4 | V5 | V6 | V7 | V8 | V9 | V10 | V11 => 2.0,
| _ => 2.1,
Owner

Compare to ruma using switches for each room version. Also using a number here is wrong, enum instead at least

@ -251,2 +278,4 @@
}
/// Calculate the conflicted subgraph
async fn calculate_conflicted_subgraph<F, Fut, E>(
Owner

Come back to
Author
Owner

This was just lifted from conduit, asyncified and stripped, I don't *think* there's anything to come back to really
@ -571,0 +662,4 @@
"Failed to parse create event ID from room ID/hash: {e}"
))
})?;
let create_event = fetch_event(create_event_id.into())
Owner

Check to see if this accidentally adds to the timeline and breaks backfill
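For reference on the room ID/hash parsing in this hunk: under MSC4291 a v12 room ID is the create event's event ID with the `$` sigil swapped for `!`, so recovering the create event ID is a sigil swap. A hypothetical, dependency-free sketch (the real code goes through ruma's typed IDs):

```rust
// Hypothetical sketch: derive a v12 create event ID from its room ID.
// MSC4291 room IDs have no `:server` part; they are just `!` + the
// base64url hash that also forms the create event's `$`-prefixed ID.
fn create_event_id_from_room_id(room_id: &str) -> Result<String, String> {
    match room_id.strip_prefix('!') {
        | Some(hash) if !hash.is_empty() && !hash.contains(':') => Ok(format!("${hash}")),
        // v11-and-below room IDs keep a `:server` part and are not
        // derived from the create event, so there is nothing to parse.
        | _ => Err(format!("not a v12 room ID: {room_id}")),
    }
}

fn main() {
    assert_eq!(
        create_event_id_from_room_id("!abc123HashXYZ").unwrap(),
        "$abc123HashXYZ"
    );
    assert!(create_event_id_from_room_id("!room:example.org").is_err());
}
```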
@ -79,3 +79,3 @@
/// instead of the event content's `creator` field.
///
/// See: [MSC2175](https://github.com/matrix-org/matrix-spec-proposals/pull/2175) for more information.
/// See: [MSC2175](https://github.com/matrix-org/matrix-spec-proposals/pull/2175)
Owner

what went on here
@ -83,1 +89,4 @@
///
/// See: [MSC4291](https://github.com/matrix-org/matrix-spec-proposals/pull/4291)
pub room_ids_as_hashes: bool,
}
Owner

OK, so no assoc consts, but state res flags probably belong here.
@ -27,3 +27,3 @@
pub async fn create_admin_room(services: &Services) -> Result {
let room_id = RoomId::new(services.globals.server_name());
let room_version = &services.config.default_room_version;
let room_version = &RoomVersionId::V11;
Owner

what
Author
Owner

I couldn't be bothered testing if the admin room remains both functional and secure with v12 room IDs lol
@ -485,3 +491,3 @@
// Prevent unescaped !admin from being used outside of the admin room
if is_public_prefix && !self.is_admin_room(event.room_id()).await {
if event.room_id().is_some()
Owner

thinking emoji
@ -290,0 +290,4 @@
if event.room_id().is_none() {
// TODO(hydra): does this matter?
return Ok(());
}
Owner

come back to - maybe check syn
@ -196,3 +196,3 @@
},
| Ok(pdu) => {
if pdu.room_id != room_id {
if pdu.room_id.is_some() && pdu.room_id != Some(room_id.to_owned()) {
Owner

what happens for v12 here
@ -20,0 +26,4 @@
} else {
// v12 rooms might have no room_id in the create event. We'll need to check the
// content.room_version
let content = value
Owner

come back to
@ -220,3 +266,1 @@
// fail open
warn!("Failed to check event with policy server: {e}");
},
// TODO(hydra): Skip this check for create events (why didnt we do this
Owner

because we don't check state events anyway, silly
Author
Owner

fogor
Author
Owner

blocking: we do not serialise the m.room.create event in v12 rooms correctly, resulting in conflicting hashes (`room_id` needs to be skipped if it is None)
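For the serialisation bug above, the standard serde fix is to omit the field entirely when it is `None` rather than emitting `null`, i.e. `#[serde(skip_serializing_if = "Option::is_none")]` on `room_id`. A dependency-free toy showing the intended wire shape (the function and struct-less layout here are hypothetical, not the real PDU type):

```rust
// Toy stand-in for PDU serialisation: when room_id is None (v12 create
// events), the key must be absent from the canonical JSON, not null,
// or servers compute different content hashes for the same event.
fn serialise_create(room_id: Option<&str>) -> String {
    let mut fields = vec![r#""type":"m.room.create""#.to_string()];
    if let Some(id) = room_id {
        fields.push(format!(r#""room_id":"{id}""#));
    }
    format!("{{{}}}", fields.join(","))
}

fn main() {
    // v12: no room_id key at all.
    assert_eq!(serialise_create(None), r#"{"type":"m.room.create"}"#);
    // v11 and below: room_id serialised as usual.
    assert_eq!(
        serialise_create(Some("!room:example.org")),
        r#"{"type":"m.room.create","room_id":"!room:example.org"}"#
    );
}
```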
nex force-pushed hydra/public from ef42638f3c
Some checks failed
Checks / Prek / Pre-commit & Formatting (push) Successful in 2m54s
Release Docker Image / define-variables (push) Successful in 3s
Documentation / Build and Deploy Documentation (pull_request) Successful in 30s
Checks / Prek / Clippy and Cargo Tests (push) Failing after 9m20s
Release Docker Image / build-image (linux/arm64, release, linux-arm64, base) (push) Successful in 14m59s
Release Docker Image / build-image (linux/amd64, release, linux-amd64, base) (push) Successful in 14m50s
Release Docker Image / merge (push) Successful in 12s
to c647aed572
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 28s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 1m19s
Release Docker Image / define-variables (pull_request) Successful in 1s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 2m16s
Release Docker Image / build-image (linux/arm64, release, linux-arm64, base) (pull_request) Successful in 3m43s
Release Docker Image / build-image (linux/amd64, release, linux-amd64, base) (pull_request) Failing after 9m1s
Release Docker Image / merge (pull_request) Has been skipped
2025-09-10 15:56:47 +00:00
Compare
nex force-pushed hydra/public from c647aed572
to 87b76d31a1
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 35s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 40s
Release Docker Image / define-variables (pull_request) Successful in 2s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 1m48s
Release Docker Image / build-image (linux/amd64, release, linux-amd64, base) (pull_request) Failing after 1m37s
Release Docker Image / build-image (linux/arm64, release, linux-arm64, base) (pull_request) Failing after 1m31s
Release Docker Image / merge (pull_request) Has been skipped
2025-09-10 15:58:26 +00:00
Compare
nex force-pushed hydra/public from 87b76d31a1
to 719fbee11a
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 24s
Release Docker Image / define-variables (pull_request) Successful in 3s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 37s
Release Docker Image / Build linux-amd64-haswell (pull_request) Successful in 2m49s
Release Docker Image / Build linux-amd64 (pull_request) Successful in 3m3s
Release Docker Image / Build linux-arm64 (pull_request) Successful in 1m32s
Release Docker Image / Create Haswell Manifest (pull_request) Successful in 7s
Release Docker Image / Create Multi-arch Manifest (pull_request) Successful in 8s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 5m28s
2025-09-11 15:55:21 +00:00
Compare
chore(hydra): Bump ruwuma
Some checks failed
Release Docker Image / define-variables (pull_request) Successful in 9s
Documentation / Build and Deploy Documentation (pull_request) Successful in 34s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 36s
Release Docker Image / Build linux-amd64-haswell (pull_request) Successful in 2m22s
Release Docker Image / Build linux-amd64 (pull_request) Successful in 2m39s
Release Docker Image / Build linux-arm64 (pull_request) Successful in 2m32s
Release Docker Image / Create Haswell Manifest (pull_request) Successful in 8s
Release Docker Image / Create Multi-arch Manifest (pull_request) Successful in 9s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 4m48s
94eb0f1400
fix(hydra): Don't serialise null room IDs in PDUs
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 47s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 38s
Release Docker Image / define-variables (pull_request) Successful in 4s
Release Docker Image / Build linux-amd64 (pull_request) Successful in 6m47s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 11m58s
Release Docker Image / Build linux-arm64 (pull_request) Successful in 6m56s
Release Docker Image / Build linux-amd64-haswell (pull_request) Successful in 23m36s
Release Docker Image / Create Haswell Manifest (pull_request) Successful in 14s
Release Docker Image / Create Multi-arch Manifest (pull_request) Successful in 10s
9dc336b5e8
fix(hydra): Stop enforcing unfederated v12 rooms
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 40s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 49s
Release Docker Image / define-variables (pull_request) Successful in 3s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 11m18s
Release Docker Image / Build linux-amd64 (pull_request) Successful in 4m59s
Release Docker Image / Build linux-arm64 (pull_request) Successful in 5m28s
Release Docker Image / Build linux-amd64-haswell (pull_request) Successful in 27m54s
Release Docker Image / Create Haswell Manifest (pull_request) Successful in 8s
Release Docker Image / Create Multi-arch Manifest (pull_request) Successful in 7s
cc58e5d749
Author
Owner

This damn thing finally works as expected (as far as I can tell) 🎉
fix(hydra): Tackle most open code review comments
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 27s
Release Docker Image / define-variables (pull_request) Successful in 2s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 39s
Release Docker Image / Build linux-amd64 (pull_request) Successful in 5m8s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 9m17s
Release Docker Image / Build linux-arm64 (pull_request) Successful in 4m7s
Release Docker Image / Build linux-amd64-haswell (pull_request) Successful in 25m38s
Release Docker Image / Create Haswell Manifest (pull_request) Successful in 9s
Release Docker Image / Create Multi-arch Manifest (pull_request) Successful in 8s
aa3e996574
fix(hydra): Incorrect "auth event for incorrect room"
Some checks failed
Release Docker Image / define-variables (pull_request) Successful in 10s
Documentation / Build and Deploy Documentation (pull_request) Successful in 28s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 38s
Release Docker Image / Build linux-amd64 (pull_request) Successful in 6m7s
Release Docker Image / Build linux-arm64 (pull_request) Successful in 6m3s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 8m8s
Release Docker Image / Build linux-amd64-haswell (pull_request) Successful in 14m40s
Release Docker Image / Create Haswell Manifest (pull_request) Successful in 7s
Release Docker Image / Create Multi-arch Manifest (pull_request) Successful in 7s
245a32675a
fix(hydra): Idk maybe this will fix it
Some checks failed
Release Docker Image / define-variables (pull_request) Successful in 10s
Documentation / Build and Deploy Documentation (pull_request) Successful in 30s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 38s
Release Docker Image / Build linux-amd64 (pull_request) Successful in 5m37s
Release Docker Image / Build linux-arm64 (pull_request) Successful in 6m0s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 8m37s
Release Docker Image / Build linux-amd64-haswell (pull_request) Successful in 17m0s
Release Docker Image / Create Haswell Manifest (pull_request) Successful in 7s
Release Docker Image / Create Multi-arch Manifest (pull_request) Successful in 8s
7f0871546d
fix(hydra): Own it
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 31s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 35s
Release Docker Image / define-variables (pull_request) Successful in 2s
Release Docker Image / Build linux-amd64 (pull_request) Successful in 4m54s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 7m14s
Release Docker Image / Build linux-arm64 (pull_request) Successful in 3m59s
Release Docker Image / Build linux-amd64-haswell (pull_request) Successful in 18m57s
Release Docker Image / Create Haswell Manifest (pull_request) Successful in 6s
Release Docker Image / Create Multi-arch Manifest (pull_request) Successful in 7s
1288cc8794
Author
Owner

This PR also bumps the database version btw, so if you deploy it you can't go back. It also breaks v11 and below, so don't deploy it yet 👍
fix(hydra): Fix v11 room support
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 44s
Release Docker Image / define-variables (pull_request) Successful in 3s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 1m9s
Release Docker Image / Build linux-amd64 (pull_request) Successful in 6m7s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 9m56s
Release Docker Image / Build linux-arm64 (pull_request) Successful in 5m8s
Release Docker Image / Build linux-amd64-haswell (pull_request) Successful in 17m0s
Release Docker Image / Create Haswell Manifest (pull_request) Successful in 38s
Release Docker Image / Create Multi-arch Manifest (pull_request) Successful in 15s
e8df221dd9
Author
Owner

Some commits don't seem to have made it into the PR but they're in the branch, so I'll manually merge this later to be safe.

In an effort to not delay the release any longer than necessary, I'm gonna un-WIP this and wait for an approval. There's definitely still bugs and quirks in this, but I haven't encountered anything explosive since the v11 fixes, so followup bugfix patches are probably the way to go.
nex changed title from WIP: Implement room v12 to Implement room v12 2025-09-15 14:31:05 +00:00
nex force-pushed hydra/public from f7432b5b0b
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 35s
Release Docker Image / define-variables (pull_request) Successful in 23s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 56s
Checks / Prek / Clippy and Cargo Tests (pull_request) Has been cancelled
Release Docker Image / Create Haswell Manifest (pull_request) Has been cancelled
Release Docker Image / Create Multi-arch Manifest (pull_request) Has been cancelled
Release Docker Image / Build linux-amd64-haswell (pull_request) Has been cancelled
Release Docker Image / Build linux-amd64 (pull_request) Has been cancelled
Release Docker Image / Build linux-arm64 (pull_request) Has been cancelled
to ebbf80dabd
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 34s
Release Docker Image / define-variables (pull_request) Successful in 27s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 57s
Release Docker Image / Create Multi-arch Manifest (pull_request) Has been skipped
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 7m42s
Release Docker Image / Build linux-amd64 (pull_request) Failing after 3m23s
Release Docker Image / Build linux-amd64-haswell (pull_request) Failing after 3m19s
Release Docker Image / Build linux-arm64 (pull_request) Failing after 3m19s
Release Docker Image / Create Haswell Manifest (pull_request) Has been skipped
2025-09-15 14:44:23 +00:00
Compare
nex force-pushed hydra/public from ebbf80dabd
to ea2cbd228c
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 26s
Release Docker Image / define-variables (pull_request) Successful in 12s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 37s
Release Docker Image / Build linux-amd64 (pull_request) Failing after 3m23s
Release Docker Image / Build linux-amd64-haswell (pull_request) Failing after 3m18s
Release Docker Image / Build linux-arm64 (pull_request) Failing after 3m11s
Release Docker Image / Create Haswell Manifest (pull_request) Has been skipped
Release Docker Image / Create Multi-arch Manifest (pull_request) Has been skipped
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 5m54s
2025-09-16 10:56:28 +00:00
Compare
nex force-pushed hydra/public from ea2cbd228c
to b3ec94d324
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 25s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 37s
Release Docker Image / Build linux-arm64 (release) (pull_request) Successful in 3m35s
Release Docker Image / Build linux-amd64 (release) (pull_request) Successful in 4m2s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 5m34s
Release Docker Image / Create Multi-arch Release Manifest (pull_request) Successful in 11s
Release Docker Image / Build linux-arm64 (max-perf) (pull_request) Successful in 2m55s
Release Docker Image / Build linux-amd64 (max-perf) (pull_request) Successful in 3m29s
Release Docker Image / Create Max-Perf Manifest (pull_request) Successful in 3m19s
2025-09-16 11:11:15 +00:00
Compare
chore(hydra): Bump ruwuma *correctly*
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 28s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 36s
Release Docker Image / Build linux-amd64 (release) (pull_request) Successful in 2m38s
Release Docker Image / Build linux-arm64 (release) (pull_request) Successful in 2m40s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 6m9s
Release Docker Image / Create Multi-arch Release Manifest (pull_request) Successful in 10s
Release Docker Image / Build linux-amd64 (max-perf) (pull_request) Successful in 1m47s
Release Docker Image / Build linux-arm64 (max-perf) (pull_request) Successful in 1m49s
Release Docker Image / Create Max-Perf Manifest (pull_request) Successful in 10s
c12aa4c50e
fix(hydra): Fix rocksdb compile errors
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 34s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 42s
Release Docker Image / Build linux-amd64 (release) (pull_request) Successful in 3m53s
Release Docker Image / Build linux-arm64 (release) (pull_request) Successful in 3m52s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 5m39s
Release Docker Image / Create Multi-arch Release Manifest (pull_request) Successful in 16s
Release Docker Image / Build linux-amd64 (max-perf) (pull_request) Successful in 2m30s
Release Docker Image / Build linux-arm64 (max-perf) (pull_request) Successful in 2m39s
Release Docker Image / Create Max-Perf Manifest (pull_request) Successful in 10s
ee183ed38f
nex force-pushed hydra/public from ee183ed38f
to 2162c4712f
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 39s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 38s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 3m16s
Release Docker Image / Build linux-amd64 (release) (pull_request) Successful in 5m23s
Release Docker Image / Build linux-arm64 (release) (pull_request) Successful in 5m13s
Release Docker Image / Create Multi-arch Release Manifest (pull_request) Successful in 11s
Release Docker Image / Build linux-arm64 (max-perf) (pull_request) Successful in 21m21s
Release Docker Image / Build linux-amd64 (max-perf) (pull_request) Successful in 22m21s
Release Docker Image / Create Max-Perf Manifest (pull_request) Successful in 10s
2025-09-16 11:43:25 +00:00
Compare
Author
Owner

don't merge yet, the stateres v2.1 check at `src/core/matrix/state_res/mod.rs:223:5` and co really should be an enum instead of a *float comparison*
fix(hydra): Use an enum instead of a float for stateres version
Some checks failed
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 2m6s
Documentation / Build and Deploy Documentation (pull_request) Successful in 3m42s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 11m0s
Release Docker Image / Build linux-amd64 (release) (pull_request) Successful in 6m29s
Release Docker Image / Build linux-arm64 (release) (pull_request) Successful in 6m48s
Release Docker Image / Create Multi-arch Release Manifest (pull_request) Successful in 14s
Release Docker Image / Build linux-arm64 (max-perf) (pull_request) Successful in 14m28s
Release Docker Image / Build linux-amd64 (max-perf) (pull_request) Successful in 14m36s
Release Docker Image / Create Max-Perf Manifest (pull_request) Successful in 11s
0d84744e07
Author
Owner

Giving it a final burn test, then will be merging into main shortly.
nex force-pushed hydra/public from 0d84744e07
to 446db274a3
Some checks failed
Documentation / Build and Deploy Documentation (pull_request) Successful in 29s
Checks / Prek / Pre-commit & Formatting (pull_request) Successful in 56s
Release Docker Image / Build linux-amd64 (release) (pull_request) Successful in 7m1s
Release Docker Image / Build linux-arm64 (release) (pull_request) Successful in 7m11s
Release Docker Image / Create Multi-arch Release Manifest (pull_request) Successful in 11s
Checks / Prek / Clippy and Cargo Tests (pull_request) Failing after 9m44s
Release Docker Image / Create Max-Perf Manifest (pull_request) Has been cancelled
Release Docker Image / Build linux-arm64 (max-perf) (pull_request) Has been cancelled
Release Docker Image / Build linux-amd64 (max-perf) (pull_request) Has been cancelled
2025-09-17 20:32:53 +00:00
Compare
nex merged commit 7e4071c117 into main 2025-09-17 20:46:04 +00:00
nex referenced this pull request from a commit 2025-09-17 20:46:05 +00:00
Author
Owner

note to maintainers: do not delete the related branch yet.
2 participants

Due date: 2025-09-14

Reference: continuwuation/continuwuity!943