Commit 05cb825

Auto merge of #68914 - nnethercote:speed-up-SipHasher128, r=<try>
Speed up `SipHasher128`.

The current code in `SipHasher128::short_write` is inefficient. It uses
`u8to64_le` (which is complex and slow) to extract just the right number of
bytes of the input into a u64 and pad the result with zeroes. It then
left-shifts that value in order to bitwise-OR it with `self.tail`.

For example, imagine we have a u32 input 0xIIHH_GGFF and only need three bytes
to fill up `self.tail`. The current code uses `u8to64_le` to construct
0x0000_0000_00HH_GGFF, which is just 0xIIHH_GGFF with the 0xII removed and
zero-extended to a u64. The code then left-shifts that value by five bytes --
discarding the 0x00 byte that replaced the 0xII byte! -- to give
0xHHGG_FF00_0000_0000. It then ORs that value with `self.tail`.

There's a much simpler way to do it: zero-extend to u64 first, then left-shift.
E.g. 0xIIHH_GGFF is zero-extended to 0x0000_0000_IIHH_GGFF, and then
left-shifted to 0xHHGG_FF00_0000_0000. We don't have to take time to exclude
the unneeded 0xII byte, because it just gets shifted out anyway! It also avoids
multiple occurrences of `unsafe`.

There's a similar story with the setting of `self.tail` at the method's end.
The current code uses `u8to64_le` to extract the remaining part of the input,
but the same effect can be achieved more quickly with a right shift on the
zero-extended input.

This commit changes `SipHasher128` to use the simpler shift-based approach. The
code is also smaller, which means that `short_write` is now inlined where
previously it wasn't, which makes things faster again. This gives big speed-ups
for all incremental builds, especially "baseline" incremental builds.

r? @michaelwoerister
2 parents 442ae7f + e606fe7 commit 05cb825
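
The zero-extend-then-shift argument above can be checked in isolation. Below is a minimal standalone sketch, not part of the commit: the concrete input 0x4433_2211 stands in for the symbolic 0xIIHH_GGFF, and `ntail` is assumed to be 5, so three more bytes are needed to fill the tail.

fn main() {
    let input: u32 = 0x4433_2211; // 0x44 plays the role of the unneeded 0xII byte
    let ntail: usize = 5; // bytes already buffered in the tail
    let needed: usize = 8 - ntail; // 3 bytes still needed to fill the tail

    // Old approach: extract only the low `needed` bytes, then left-shift.
    let extracted = (input as u64) & ((1u64 << (8 * needed)) - 1); // 0x0000_0000_0033_2211
    let old_or = extracted << (8 * ntail); // 0x3322_1100_0000_0000

    // New approach: zero-extend the whole input, then left-shift. The
    // unneeded 0x44 byte is simply shifted out of the u64.
    let zero_extended = input as u64; // 0x0000_0000_4433_2211
    let new_or = zero_extended << (8 * ntail); // 0x3322_1100_0000_0000
    assert_eq!(old_or, new_or);

    // Setting the new tail: a right shift on the zero-extended input
    // yields the leftover byte directly.
    let new_tail = zero_extended >> (8 * needed); // 0x0000_0000_0000_0044
    assert_eq!(new_tail, 0x44);
}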

File tree: 1 file changed, +51 -39 lines

src/librustc_data_structures/sip128.rs

@@ -4,7 +4,6 @@ use std::cmp;
 use std::hash::Hasher;
 use std::mem;
 use std::ptr;
-use std::slice;
 
 #[cfg(test)]
 mod tests;
@@ -122,42 +121,55 @@ impl SipHasher128 {
         self.state.v1 ^= 0xee;
     }
 
-    // Specialized write function that is only valid for buffers with len <= 8.
-    // It's used to force inlining of write_u8 and write_usize, those would normally be inlined
-    // except for composite types (that includes slices and str hashing because of delimiter).
-    // Without this extra push the compiler is very reluctant to inline delimiter writes,
-    // degrading performance substantially for the most common use cases.
+    // Specialized write function for values with size <= 8.
     #[inline]
-    fn short_write(&mut self, msg: &[u8]) {
-        debug_assert!(msg.len() <= 8);
-        let length = msg.len();
-        self.length += length;
+    fn short_write<T>(&mut self, _x: T, x: u64) {
+        let size = mem::size_of::<T>();
+        self.length += size;
+
+        // The original number must be zero-extended, not sign-extended.
+        debug_assert!(if size < 8 { x >> (8 * size) == 0 } else { true });
 
+        // The number of bytes needed to fill `self.tail`.
         let needed = 8 - self.ntail;
-        let fill = cmp::min(length, needed);
-        if fill == 8 {
-            self.tail = unsafe { load_int_le!(msg, 0, u64) };
-        } else {
-            self.tail |= unsafe { u8to64_le(msg, 0, fill) } << (8 * self.ntail);
-            if length < needed {
-                self.ntail += length;
-                return;
-            }
+
+        // Imagine that `self.tail` is 0x0000_00EE_DDCC_BBAA, `self.ntail`
+        // is 5, and therefore `needed` is 3.
+        //
+        // - Scenario 1, `self.write_u8(0xFF)`: we have already zero-extended
+        //   the input to 0x0000_0000_0000_00FF. We now left-shift it five
+        //   bytes, giving 0x0000_FF00_0000_0000. We then bitwise-OR that value
+        //   into `self.tail`, resulting in 0x0000_FFEE_DDCC_BBAA.
+        //   (Zero-extension of the original input is critical in this scenario
+        //   because we don't want the high two bytes of `self.tail` to be
+        //   touched by the bitwise-OR.) `self.tail` is not yet full, so we
+        //   return early, after updating `self.ntail` to 6.
+        //
+        // - Scenario 2, `self.write_u32(0xIIHH_GGFF)`: we have zero-extended the
+        //   input to 0x0000_0000_IIHH_GGFF. We now left-shift it five bytes,
+        //   giving 0xHHGG_FF00_0000_0000. We then bitwise-OR that value into
+        //   `self.tail`, resulting in 0xHHGG_FFEE_DDCC_BBAA. `self.tail` is
+        //   now full, and we can use it to update `self.state`.
+        //
+        self.tail |= x << (8 * self.ntail);
+        if size < needed {
+            self.ntail += size;
+            return;
         }
+
+        // `self.tail` is full, process it.
         self.state.v3 ^= self.tail;
         Sip24Rounds::c_rounds(&mut self.state);
         self.state.v0 ^= self.tail;
 
-        // Buffered tail is now flushed, process new input.
-        self.ntail = length - needed;
-        self.tail = unsafe { u8to64_le(msg, needed, self.ntail) };
-    }
-
-    #[inline(always)]
-    fn short_write_gen<T>(&mut self, x: T) {
-        let bytes =
-            unsafe { slice::from_raw_parts(&x as *const T as *const u8, mem::size_of::<T>()) };
-        self.short_write(bytes);
+        // Continuing scenario 2: we have one byte left over from the input. We
+        // set `self.ntail` to 1 and `self.tail` to `(0x0000_0000_IIHH_GGFF >>
+        // 8*3)`, which is 0x0000_0000_0000_00II.
+        //
+        // The `if` is needed to avoid shifting by 64 bits, which Rust
+        // complains about.
+        self.ntail = size - needed;
+        self.tail = if needed < 8 { x >> (8 * needed) } else { 0 };
     }
 
     #[inline]
@@ -182,52 +194,52 @@ impl SipHasher128 {
 impl Hasher for SipHasher128 {
     #[inline]
     fn write_u8(&mut self, i: u8) {
-        self.short_write_gen(i);
+        self.short_write(i, i as u64);
     }
 
     #[inline]
     fn write_u16(&mut self, i: u16) {
-        self.short_write_gen(i);
+        self.short_write(i, i as u64);
     }
 
     #[inline]
     fn write_u32(&mut self, i: u32) {
-        self.short_write_gen(i);
+        self.short_write(i, i as u64);
    }
 
     #[inline]
     fn write_u64(&mut self, i: u64) {
-        self.short_write_gen(i);
+        self.short_write(i, i as u64);
    }
 
     #[inline]
     fn write_usize(&mut self, i: usize) {
-        self.short_write_gen(i);
+        self.short_write(i, i as u64);
    }
 
     #[inline]
     fn write_i8(&mut self, i: i8) {
-        self.short_write_gen(i);
+        self.short_write(i, i as u8 as u64);
    }
 
     #[inline]
     fn write_i16(&mut self, i: i16) {
-        self.short_write_gen(i);
+        self.short_write(i, i as u16 as u64);
    }
 
     #[inline]
     fn write_i32(&mut self, i: i32) {
-        self.short_write_gen(i);
+        self.short_write(i, i as u32 as u64);
    }
 
     #[inline]
     fn write_i64(&mut self, i: i64) {
-        self.short_write_gen(i);
+        self.short_write(i, i as i64 as u64);
    }
 
     #[inline]
     fn write_isize(&mut self, i: isize) {
-        self.short_write_gen(i);
+        self.short_write(i, i as usize as u64);
    }
 
     #[inline]
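
One subtlety visible in the signed `write_i*` methods above: each value is cast through the unsigned type of the same width before widening to u64 (`i as u8 as u64` rather than `i as u64`), because a direct cast from a signed type would sign-extend and violate `short_write`'s zero-extension requirement. A small standalone check of that property, not from the commit:

fn main() {
    let i: i8 = -1;

    // Casting straight to u64 sign-extends: every bit is set.
    assert_eq!(i as u64, 0xFFFF_FFFF_FFFF_FFFF);

    // Casting through u8 first zero-extends: only the low byte is set,
    // which is what `short_write(i, i as u8 as u64)` relies on.
    assert_eq!(i as u8 as u64, 0xFF);

    // This is exactly the property that the debug_assert in `short_write` checks.
    let size = std::mem::size_of::<i8>();
    let x = i as u8 as u64;
    assert!(if size < 8 { x >> (8 * size) == 0 } else { true });
}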

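For completeness, the two scenarios spelled out in the comments inside `short_write` can be replayed with concrete numbers. This toy walkthrough uses hypothetical values (0x4433_2211 again standing in for 0xIIHH_GGFF) and tracks only the `tail`/`ntail` bookkeeping, omitting the SipHash rounds:

fn main() {
    let mut tail: u64 = 0x0000_00EE_DDCC_BBAA;
    let mut ntail: usize = 5;

    // Scenario 1: write_u8(0xFF). One byte is not enough to fill the tail.
    let x: u64 = 0xFF; // already zero-extended
    let size = 1;
    let needed = 8 - ntail; // 3
    tail |= x << (8 * ntail);
    assert_eq!(tail, 0x0000_FFEE_DDCC_BBAA);
    assert!(size < needed);
    ntail += size; // short_write would return early here
    assert_eq!(ntail, 6);

    // Scenario 2: starting again from the original state, write a u32.
    tail = 0x0000_00EE_DDCC_BBAA;
    ntail = 5;
    let x: u64 = 0x4433_2211; // zero-extended u32 input
    let size = 4;
    let needed = 8 - ntail; // 3
    tail |= x << (8 * ntail);
    assert_eq!(tail, 0x3322_11EE_DDCC_BBAA); // full; would now be mixed into the state

    // One byte (0x44) is left over and becomes the new tail.
    ntail = size - needed;
    tail = if needed < 8 { x >> (8 * needed) } else { 0 };
    assert_eq!((ntail, tail), (1, 0x44));
}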