
Commit dbbd7b9

Authored Oct 12, 2024
Unrolled build for rust-lang#131065
Rollup merge of rust-lang#131065 - Voultapher:port-sort-test-suite, r=thomcc

Port sort-research-rs test suite to Rust stdlib tests

This PR is a followup to rust-lang#124032. It replaces the tests that cover the various sort functions in the standard library with a test-suite developed as part of https://github.com/Voultapher/sort-research-rs.

The current tests suffer from a couple of problems:

- They don't cover important real-world patterns that the implementations take advantage of and execute special code for.
- The input lengths tested miss out on code paths. For example, important safety-property tests never reach the quicksort part of the implementation.
- The miri side is often limited to `len <= 20`, which means it very thoroughly tests the insertion sort, which accounts for 19 out of 1.5k LoC.
- They are split between core and alloc, causing code duplication and uneven coverage.
- ~~The randomness is tied to a caller location, wasting the space exploration capabilities of randomized testing.~~ The randomness is not repeatable, as it relies on `std::hash::RandomState::new().build_hasher()`.

Most of these issues existed before rust-lang#124032, but they are intensified by it. One thing that is new and requires additional testing is that the new sort implementations specialize based on type properties. For example, `Freeze` and non-`Freeze` types execute different code paths.

Effectively there are three dimensions that matter:

- Input type
- Input length
- Input pattern

The ported test-suite tests various properties along all three dimensions, greatly improving test coverage. It side-steps the miri issue by preferring sampled approaches. For example, the test that checks whether the set of elements is still the original one after a panic doesn't do so for every single possible panic opportunity; rather, it picks one at random and performs this test across a range of input lengths, which varies the panic point across them. This allows regular execution to easily test inputs of length 10k, and miri execution up to 100, which covers significantly more code. The randomness used is tied to a fixed - but random per process execution - seed. This allows for fully repeatable tests and fuzzer-like exploration across multiple runs.

Structure-wise, the tests were previously found in the core integration tests for `sort_unstable` and the alloc unit tests for `sort`. The new test-suite was developed as a purely black-box approach, which makes integration testing the better place, because it can't accidentally rely on internal access. Because unwinding support is required, the tests can't be in core, even if the implementation is, so they are now part of the alloc integration tests. Are there architectures that can only build and test core and not alloc? If so, do such platforms require sort testing? For what it's worth, the current implementation state passes miri `--target mips64-unknown-linux-gnuabi64`, which is big endian.

The test-suite also contains tests for properties that were and are provided by the current and previous implementations, and likely relied upon by users, but that weren't tested. For example, `self_cmp` tests that the two parameters `a` and `b` passed into the comparison function are never references to the same object, which, if the user is sorting for example a `&mut [Mutex<i32>]`, could otherwise lead to a deadlock.

Instead of using the hashed caller location as the rand seed, the suite uses the seconds since the unix epoch divided by 10, which given the timestamps in CI log output should be reasonably easy to reproduce, but also allows fuzzer-like space exploration across runs.

---

Test run-time changes:

Setup:

```
Linux 6.10
rustc 1.83.0-nightly (f79a912 2024-09-18)
AMD Ryzen 9 5900X 12-Core Processor (Zen 3 micro-architecture)
CPU boost enabled.
```

master: e9df22f

Before core integration tests:

```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/coretests-219cbd0308a49e2f
  Time (mean ± σ):     869.6 ms ±  21.1 ms    [User: 1327.6 ms, System: 95.1 ms]
  Range (min … max):   845.4 ms … 917.0 ms    10 runs

# MIRIFLAGS="-Zmiri-disable-isolation" to get real time
$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/core
finished in 738.44s
```

After core integration tests:

```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/coretests-219cbd0308a49e2f
  Time (mean ± σ):     865.1 ms ±  14.7 ms    [User: 1283.5 ms, System: 88.4 ms]
  Range (min … max):   836.2 ms … 885.7 ms    10 runs

$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/core
finished in 752.35s
```

Before alloc unit tests:

```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/alloc-19c15e6e8565aa54
  Time (mean ± σ):     295.0 ms ±   9.9 ms    [User: 719.6 ms, System: 35.3 ms]
  Range (min … max):   284.9 ms … 319.3 ms    10 runs

$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/alloc
finished in 322.75s
```

After alloc unit tests:

```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/alloc-19c15e6e8565aa54
  Time (mean ± σ):      97.4 ms ±   4.1 ms    [User: 297.7 ms, System: 28.6 ms]
  Range (min … max):    92.3 ms … 109.2 ms    27 runs

$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/alloc
finished in 309.18s
```

Before alloc integration tests:

```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/alloctests-439e7300c61a8046
  Time (mean ± σ):     103.2 ms ±   1.7 ms    [User: 135.7 ms, System: 39.4 ms]
  Range (min … max):    99.7 ms … 107.3 ms    28 runs

$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/alloc
finished in 231.35s
```

After alloc integration tests:

```
$ LD_LIBRARY_PATH=build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/ hyperfine build/x86_64-unknown-linux-gnu/stage0-std/x86_64-unknown-linux-gnu/release/deps/alloctests-439e7300c61a8046
  Time (mean ± σ):     379.8 ms ±   4.7 ms    [User: 4620.5 ms, System: 1157.2 ms]
  Range (min … max):   373.6 ms … 386.9 ms    10 runs

$ MIRIFLAGS="-Zmiri-disable-isolation" ./x.py miri library/alloc
finished in 449.24s
```

In my opinion the results don't change iterative library development or CI execution in meaningful ways. For example, the library doc-tests currently take ~66s and incremental compilation takes 10+ seconds. However, I only have limited knowledge of the various local development workflows that exist, and I might be missing one that is significantly impacted by this change.
2 parents 1bc403d + 71bb0e7 commit dbbd7b9

File tree

10 files changed: +1955 -434 lines
 

library/alloc/src/slice.rs

+10 -14

```diff
@@ -19,20 +19,6 @@ use core::cmp::Ordering::{self, Less};
 use core::mem::{self, MaybeUninit};
 #[cfg(not(no_global_oom_handling))]
 use core::ptr;
-#[cfg(not(no_global_oom_handling))]
-use core::slice::sort;
-
-use crate::alloc::Allocator;
-#[cfg(not(no_global_oom_handling))]
-use crate::alloc::Global;
-#[cfg(not(no_global_oom_handling))]
-use crate::borrow::ToOwned;
-use crate::boxed::Box;
-use crate::vec::Vec;
-
-#[cfg(test)]
-mod tests;
-
 #[unstable(feature = "array_chunks", issue = "74985")]
 pub use core::slice::ArrayChunks;
 #[unstable(feature = "array_chunks", issue = "74985")]
@@ -43,6 +29,8 @@ pub use core::slice::ArrayWindows;
 pub use core::slice::EscapeAscii;
 #[stable(feature = "slice_get_slice", since = "1.28.0")]
 pub use core::slice::SliceIndex;
+#[cfg(not(no_global_oom_handling))]
+use core::slice::sort;
 #[stable(feature = "slice_group_by", since = "1.77.0")]
 pub use core::slice::{ChunkBy, ChunkByMut};
 #[stable(feature = "rust1", since = "1.0.0")]
@@ -83,6 +71,14 @@ pub use hack::into_vec;
 #[cfg(test)]
 pub use hack::to_vec;
 
+use crate::alloc::Allocator;
+#[cfg(not(no_global_oom_handling))]
+use crate::alloc::Global;
+#[cfg(not(no_global_oom_handling))]
+use crate::borrow::ToOwned;
+use crate::boxed::Box;
+use crate::vec::Vec;
+
 // HACK(japaric): With cfg(test) `impl [T]` is not available, these three
 // functions are actually methods that are in `impl [T]` but not in
 // `core::slice::SliceExt` - we need to supply these functions for the
```

library/alloc/src/slice/tests.rs

-369
This file was deleted.

library/alloc/tests/lib.rs

+2

```diff
@@ -40,6 +40,7 @@
 #![feature(local_waker)]
 #![feature(vec_pop_if)]
 #![feature(unique_rc_arc)]
+#![feature(macro_metavar_expr_concat)]
 #![allow(internal_features)]
 #![deny(fuzzy_provenance_casts)]
 #![deny(unsafe_op_in_unsafe_fn)]
@@ -59,6 +60,7 @@ mod heap;
 mod linked_list;
 mod rc;
 mod slice;
+mod sort;
 mod str;
 mod string;
 mod task;
```

library/alloc/tests/sort/ffi_types.rs

+82 (new file)

```rust
use std::cmp::Ordering;

// Very large stack value.
#[repr(C)]
#[derive(PartialEq, Eq, Debug, Clone)]
pub struct FFIOneKibiByte {
    values: [i64; 128],
}

impl FFIOneKibiByte {
    pub fn new(val: i32) -> Self {
        let mut values = [0i64; 128];
        let mut val_i64 = val as i64;

        for elem in &mut values {
            *elem = val_i64;
            val_i64 = std::hint::black_box(val_i64 + 1);
        }
        Self { values }
    }

    fn as_i64(&self) -> i64 {
        self.values[11] + self.values[55] + self.values[77]
    }
}

impl PartialOrd for FFIOneKibiByte {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

impl Ord for FFIOneKibiByte {
    fn cmp(&self, other: &Self) -> Ordering {
        self.as_i64().cmp(&other.as_i64())
    }
}

// 16 byte stack value, with more expensive comparison.
#[repr(C)]
#[derive(PartialEq, Debug, Clone, Copy)]
pub struct F128 {
    x: f64,
    y: f64,
}

impl F128 {
    pub fn new(val: i32) -> Self {
        let val_f = (val as f64) + (i32::MAX as f64) + 10.0;

        let x = val_f + 0.1;
        let y = val_f.log(4.1);

        assert!(y < x);
        assert!(x.is_normal() && y.is_normal());

        Self { x, y }
    }
}

// This is kind of hacky, but we know we only have normal comparable floats in there.
impl Eq for F128 {}

impl PartialOrd for F128 {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

// Goal is similar code-gen between Rust and C++
// - Rust https://godbolt.org/z/3YM3xenPP
// - C++ https://godbolt.org/z/178M6j1zz
impl Ord for F128 {
    fn cmp(&self, other: &Self) -> Ordering {
        // Simulate expensive comparison function.
        let this_div = self.x / self.y;
        let other_div = other.x / other.y;

        // SAFETY: We checked in the ctor that both are normal.
        unsafe { this_div.partial_cmp(&other_div).unwrap_unchecked() }
    }
}
```
library/alloc/tests/sort/known_good_stable_sort.rs

+192 (new file)

```rust
// This module implements a known good stable sort implementation that helps provide better error
// messages when the correctness tests fail, we can't use the stdlib sort functions because we are
// testing them for correctness.
//
// Based on https://github.com/voultapher/tiny-sort-rs.

use alloc::alloc::{Layout, alloc, dealloc};
use std::{mem, ptr};

/// Sort `v` preserving initial order of equal elements.
///
/// - Guaranteed O(N * log(N)) worst case perf
/// - No adaptiveness
/// - Branch miss-prediction not affected by outcome of comparison function
/// - Uses `v.len()` auxiliary memory.
///
/// If `T: Ord` does not implement a total order the resulting order is
/// unspecified. All original elements will remain in `v` and any possible modifications via
/// interior mutability will be observable. Same is true if `T: Ord` panics.
///
/// Panics if allocating the auxiliary memory fails.
#[inline(always)]
pub fn sort<T: Ord>(v: &mut [T]) {
    stable_sort(v, |a, b| a.lt(b))
}

#[inline(always)]
fn stable_sort<T, F: FnMut(&T, &T) -> bool>(v: &mut [T], mut is_less: F) {
    if mem::size_of::<T>() == 0 {
        return;
    }

    let len = v.len();

    // Inline the check for len < 2. This happens a lot, instrumenting the Rust compiler suggests
    // len < 2 accounts for 94% of its calls to `slice::sort`.
    if len < 2 {
        return;
    }

    // SAFETY: We checked that len is > 0 and that T is not a ZST.
    unsafe {
        mergesort_main(v, &mut is_less);
    }
}

/// The core logic should not be inlined.
///
/// SAFETY: The caller has to ensure that len is > 0 and that T is not a ZST.
#[inline(never)]
unsafe fn mergesort_main<T, F: FnMut(&T, &T) -> bool>(v: &mut [T], is_less: &mut F) {
    // While it would be nice to have a merge implementation that only requires N / 2 auxiliary
    // memory. Doing so would make the merge implementation significantly more complex and

    // SAFETY: See function safety description.
    let buf = unsafe { BufGuard::new(v.len()) };

    // SAFETY: `scratch` has space for `v.len()` writes. And does not alias `v`.
    unsafe {
        mergesort_core(v, buf.buf_ptr.as_ptr(), is_less);
    }
}

/// Tiny recursive top-down merge sort optimized for binary size. It has no adaptiveness whatsoever,
/// no run detection, etc.
///
/// Buffer as pointed to by `scratch` must have space for `v.len()` writes. And must not alias `v`.
#[inline(always)]
unsafe fn mergesort_core<T, F: FnMut(&T, &T) -> bool>(
    v: &mut [T],
    scratch_ptr: *mut T,
    is_less: &mut F,
) {
    let len = v.len();

    if len > 2 {
        // SAFETY: `mid` is guaranteed in-bounds. And caller has to ensure that `scratch_ptr` can
        // hold `v.len()` values.
        unsafe {
            let mid = len / 2;
            // Sort the left half recursively.
            mergesort_core(v.get_unchecked_mut(..mid), scratch_ptr, is_less);
            // Sort the right half recursively.
            mergesort_core(v.get_unchecked_mut(mid..), scratch_ptr, is_less);
            // Combine the two halves.
            merge(v, scratch_ptr, is_less, mid);
        }
    } else if len == 2 {
        if is_less(&v[1], &v[0]) {
            v.swap(0, 1);
        }
    }
}

/// Branchless merge function.
///
/// SAFETY: The caller must ensure that `scratch_ptr` is valid for `v.len()` writes. And that mid is
/// in-bounds.
#[inline(always)]
unsafe fn merge<T, F>(v: &mut [T], scratch_ptr: *mut T, is_less: &mut F, mid: usize)
where
    F: FnMut(&T, &T) -> bool,
{
    let len = v.len();
    debug_assert!(mid > 0 && mid < len);

    // Indexes to track the positions while merging.
    let mut l = 0;
    let mut r = mid;

    // SAFETY: No matter what the result of is_less is we check that l and r remain in-bounds and if
    // is_less panics the original elements remain in `v`.
    unsafe {
        let arr_ptr = v.as_ptr();

        for i in 0..len {
            let left_ptr = arr_ptr.add(l);
            let right_ptr = arr_ptr.add(r);

            let is_lt = !is_less(&*right_ptr, &*left_ptr);
            let copy_ptr = if is_lt { left_ptr } else { right_ptr };
            ptr::copy_nonoverlapping(copy_ptr, scratch_ptr.add(i), 1);

            l += is_lt as usize;
            r += !is_lt as usize;

            // As long as neither side is exhausted merge left and right elements.
            if ((l == mid) as u8 + (r == len) as u8) != 0 {
                break;
            }
        }

        // The left or right side is exhausted, drain the right side in one go.
        let copy_ptr = if l == mid { arr_ptr.add(r) } else { arr_ptr.add(l) };
        let i = l + (r - mid);
        ptr::copy_nonoverlapping(copy_ptr, scratch_ptr.add(i), len - i);

        // Now that scratch_ptr holds the full merged content, write it back on-top of v.
        ptr::copy_nonoverlapping(scratch_ptr, v.as_mut_ptr(), len);
    }
}

// SAFETY: The caller has to ensure that Option is Some, UB otherwise.
unsafe fn unwrap_unchecked<T>(opt_val: Option<T>) -> T {
    match opt_val {
        Some(val) => val,
        None => {
            // SAFETY: See function safety description.
            unsafe {
                core::hint::unreachable_unchecked();
            }
        }
    }
}

// Extremely basic versions of Vec.
// Their use is super limited and by having the code here, it allows reuse between the sort
// implementations.
struct BufGuard<T> {
    buf_ptr: ptr::NonNull<T>,
    capacity: usize,
}

impl<T> BufGuard<T> {
    // SAFETY: The caller has to ensure that len is not 0 and that T is not a ZST.
    unsafe fn new(len: usize) -> Self {
        debug_assert!(len > 0 && mem::size_of::<T>() > 0);

        // SAFETY: See function safety description.
        let layout = unsafe { unwrap_unchecked(Layout::array::<T>(len).ok()) };

        // SAFETY: We checked that T is not a ZST.
        let buf_ptr = unsafe { alloc(layout) as *mut T };

        if buf_ptr.is_null() {
            panic!("allocation failure");
        }

        Self { buf_ptr: ptr::NonNull::new(buf_ptr).unwrap(), capacity: len }
    }
}

impl<T> Drop for BufGuard<T> {
    fn drop(&mut self) {
        // SAFETY: We checked that T is not a ZST.
        unsafe {
            dealloc(self.buf_ptr.as_ptr() as *mut u8, Layout::array::<T>(self.capacity).unwrap());
        }
    }
}
```

library/alloc/tests/sort/mod.rs

+17 (new file)

```rust
pub trait Sort {
    fn name() -> String;

    fn sort<T>(v: &mut [T])
    where
        T: Ord;

    fn sort_by<T, F>(v: &mut [T], compare: F)
    where
        F: FnMut(&T, &T) -> std::cmp::Ordering;
}

mod ffi_types;
mod known_good_stable_sort;
mod patterns;
mod tests;
mod zipf;
```

library/alloc/tests/sort/patterns.rs

+211 (new file)

```rust
use std::env;
use std::hash::Hash;
use std::str::FromStr;
use std::sync::OnceLock;

use rand::prelude::*;
use rand_xorshift::XorShiftRng;

use crate::sort::zipf::ZipfDistribution;

/// Provides a set of patterns useful for testing and benchmarking sorting algorithms.
/// Currently limited to i32 values.

// --- Public ---

pub fn random(len: usize) -> Vec<i32> {
    //     .
    // : . : :
    // :.:::.::

    random_vec(len)
}

pub fn random_uniform<R>(len: usize, range: R) -> Vec<i32>
where
    R: Into<rand::distributions::Uniform<i32>> + Hash,
{
    // :.:.:.::

    let mut rng: XorShiftRng = rand::SeedableRng::seed_from_u64(get_or_init_rand_seed());

    // Abstracting over ranges in Rust :(
    let dist: rand::distributions::Uniform<i32> = range.into();
    (0..len).map(|_| dist.sample(&mut rng)).collect()
}

pub fn random_zipf(len: usize, exponent: f64) -> Vec<i32> {
    // https://en.wikipedia.org/wiki/Zipf's_law

    let mut rng: XorShiftRng = rand::SeedableRng::seed_from_u64(get_or_init_rand_seed());

    // Abstracting over ranges in Rust :(
    let dist = ZipfDistribution::new(len, exponent).unwrap();
    (0..len).map(|_| dist.sample(&mut rng) as i32).collect()
}

pub fn random_sorted(len: usize, sorted_percent: f64) -> Vec<i32> {
    //     .:
    //   .:::. :
    // .::::::.::
    // [----][--]
    //  ^      ^
    //  |      |
    // sorted  |
    //     unsorted

    // Simulate pre-existing sorted slice, where len - sorted_percent are the new unsorted values
    // and part of the overall distribution.
    let mut v = random_vec(len);
    let sorted_len = ((len as f64) * (sorted_percent / 100.0)).round() as usize;

    v[0..sorted_len].sort_unstable();

    v
}

pub fn all_equal(len: usize) -> Vec<i32> {
    // ......
    // ::::::

    (0..len).map(|_| 66).collect::<Vec<_>>()
}

pub fn ascending(len: usize) -> Vec<i32> {
    //     .:
    //   .:::
    // .:::::

    (0..len as i32).collect::<Vec<_>>()
}

pub fn descending(len: usize) -> Vec<i32> {
    // :.
    // :::.
    // :::::.

    (0..len as i32).rev().collect::<Vec<_>>()
}

pub fn saw_mixed(len: usize, saw_count: usize) -> Vec<i32> {
    // :.  :.    .::.    .:
    // :::.:::..::::::..:::

    if len == 0 {
        return Vec::new();
    }

    let mut vals = random_vec(len);
    let chunks_size = len / saw_count.max(1);
    let saw_directions = random_uniform((len / chunks_size) + 1, 0..=1);

    for (i, chunk) in vals.chunks_mut(chunks_size).enumerate() {
        if saw_directions[i] == 0 {
            chunk.sort_unstable();
        } else if saw_directions[i] == 1 {
            chunk.sort_unstable_by_key(|&e| std::cmp::Reverse(e));
        } else {
            unreachable!();
        }
    }

    vals
}

pub fn saw_mixed_range(len: usize, range: std::ops::Range<usize>) -> Vec<i32> {
    //     :.
    // :.  :::.    .::.      .:
    // :::.:::::..::::::..:.:::

    // ascending and descending randomly picked, with length in `range`.

    if len == 0 {
        return Vec::new();
    }

    let mut vals = random_vec(len);

    let max_chunks = len / range.start;
    let saw_directions = random_uniform(max_chunks + 1, 0..=1);
    let chunk_sizes = random_uniform(max_chunks + 1, (range.start as i32)..(range.end as i32));

    let mut i = 0;
    let mut l = 0;
    while l < len {
        let chunk_size = chunk_sizes[i] as usize;
        let chunk_end = std::cmp::min(l + chunk_size, len);
        let chunk = &mut vals[l..chunk_end];

        if saw_directions[i] == 0 {
            chunk.sort_unstable();
        } else if saw_directions[i] == 1 {
            chunk.sort_unstable_by_key(|&e| std::cmp::Reverse(e));
        } else {
            unreachable!();
        }

        i += 1;
        l += chunk_size;
    }

    vals
}

pub fn pipe_organ(len: usize) -> Vec<i32> {
    //   .:.
    // .:::::.

    let mut vals = random_vec(len);

    let first_half = &mut vals[0..(len / 2)];
    first_half.sort_unstable();

    let second_half = &mut vals[(len / 2)..len];
    second_half.sort_unstable_by_key(|&e| std::cmp::Reverse(e));

    vals
}

pub fn get_or_init_rand_seed() -> u64 {
    *SEED_VALUE.get_or_init(|| {
        env::var("OVERRIDE_SEED")
            .ok()
            .map(|seed| u64::from_str(&seed).unwrap())
            .unwrap_or_else(rand_root_seed)
    })
}

// --- Private ---

static SEED_VALUE: OnceLock<u64> = OnceLock::new();

#[cfg(not(miri))]
fn rand_root_seed() -> u64 {
    // Other test code hashes `panic::Location::caller()` and constructs a seed from that, in these
    // tests we want to have a fuzzer like exploration of the test space, if we used the same caller
    // based construction we would always test the same.
    //
    // Instead we use the seconds since UNIX epoch / 10, given CI log output this value should be
    // reasonably easy to re-construct.

    use std::time::{SystemTime, UNIX_EPOCH};

    let epoch_seconds = SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_secs();

    epoch_seconds / 10
}

#[cfg(miri)]
fn rand_root_seed() -> u64 {
    // Miri is usually run with isolation, which gives us repeatability but also permutations based
    // on other code that runs before.
    use core::hash::{BuildHasher, Hash, Hasher};
    let mut hasher = std::hash::RandomState::new().build_hasher();
    core::panic::Location::caller().hash(&mut hasher);
    hasher.finish()
}

fn random_vec(len: usize) -> Vec<i32> {
    let mut rng: XorShiftRng = rand::SeedableRng::seed_from_u64(get_or_init_rand_seed());
    (0..len).map(|_| rng.gen::<i32>()).collect()
}
```

library/alloc/tests/sort/tests.rs

+1,233
Large diffs are not rendered by default.

library/alloc/tests/sort/zipf.rs

+208 (new file)

```rust
// This module implements a Zipfian distribution generator.
//
// Based on https://github.com/jonhoo/rust-zipf.

use rand::Rng;

/// Random number generator that generates Zipf-distributed random numbers using rejection
/// inversion.
#[derive(Clone, Copy)]
pub struct ZipfDistribution {
    /// Number of elements
    num_elements: f64,
    /// Exponent parameter of the distribution
    exponent: f64,
    /// `hIntegral(1.5) - 1`
    h_integral_x1: f64,
    /// `hIntegral(num_elements + 0.5)`
    h_integral_num_elements: f64,
    /// `2 - hIntegralInverse(hIntegral(2.5) - h(2))`
    s: f64,
}

impl ZipfDistribution {
    /// Creates a new [Zipf-distributed](https://en.wikipedia.org/wiki/Zipf's_law)
    /// random number generator.
    ///
    /// Note that both the number of elements and the exponent must be greater than 0.
    pub fn new(num_elements: usize, exponent: f64) -> Result<Self, ()> {
        if num_elements == 0 {
            return Err(());
        }
        if exponent <= 0f64 {
            return Err(());
        }

        let z = ZipfDistribution {
            num_elements: num_elements as f64,
            exponent,
            h_integral_x1: ZipfDistribution::h_integral(1.5, exponent) - 1f64,
            h_integral_num_elements: ZipfDistribution::h_integral(
                num_elements as f64 + 0.5,
                exponent,
            ),
            s: 2f64
                - ZipfDistribution::h_integral_inv(
                    ZipfDistribution::h_integral(2.5, exponent)
                        - ZipfDistribution::h(2f64, exponent),
                    exponent,
                ),
        };

        // populate cache

        Ok(z)
    }
}

impl ZipfDistribution {
    fn next<R: Rng + ?Sized>(&self, rng: &mut R) -> usize {
        // The paper describes an algorithm for exponents larger than 1 (Algorithm ZRI).
        //
        // The original method uses
        //   H(x) = (v + x)^(1 - q) / (1 - q)
        // as the integral of the hat function.
        //
        // This function is undefined for q = 1, which is the reason for the limitation of the
        // exponent.
        //
        // If instead the integral function
        //   H(x) = ((v + x)^(1 - q) - 1) / (1 - q)
        // is used, for which a meaningful limit exists for q = 1, the method works for all
        // positive exponents.
        //
        // The following implementation uses v = 0 and generates integral numbers in the range
        // [1, num_elements]. This is different to the original method where v is defined to
        // be positive and numbers are taken from [0, i_max]. This explains why the implementation
        // looks slightly different.

        let hnum = self.h_integral_num_elements;

        loop {
            use std::cmp;
            let u: f64 = hnum + rng.gen::<f64>() * (self.h_integral_x1 - hnum);
            // u is uniformly distributed in (h_integral_x1, h_integral_num_elements]

            let x: f64 = ZipfDistribution::h_integral_inv(u, self.exponent);

            // Limit k to the range [1, num_elements] if it would be outside
            // due to numerical inaccuracies.
            let k64 = x.max(1.0).min(self.num_elements);
            // float -> integer rounds towards zero, so we add 0.5
            // to prevent bias towards k == 1
            let k = cmp::max(1, (k64 + 0.5) as usize);

            // Here, the distribution of k is given by:
            //
            //   P(k = 1) = C * (hIntegral(1.5) - h_integral_x1) = C
            //   P(k = m) = C * (hIntegral(m + 1/2) - hIntegral(m - 1/2)) for m >= 2
            //
            // where C = 1 / (h_integral_num_elements - h_integral_x1)
            if k64 - x <= self.s
                || u >= ZipfDistribution::h_integral(k64 + 0.5, self.exponent)
                    - ZipfDistribution::h(k64, self.exponent)
            {
                // Case k = 1:
                //
                // The right inequality is always true, because replacing k by 1 gives
                // u >= hIntegral(1.5) - h(1) = h_integral_x1 and u is taken from
                // (h_integral_x1, h_integral_num_elements].
                //
                // Therefore, the acceptance rate for k = 1 is P(accepted | k = 1) = 1
                // and the probability that 1 is returned as random value is
                // P(k = 1 and accepted) = P(accepted | k = 1) * P(k = 1) = C = C / 1^exponent
                //
                // Case k >= 2:
                //
                // The left inequality (k - x <= s) is just a short cut
                // to avoid the more expensive evaluation of the right inequality
                // (u >= hIntegral(k + 0.5) - h(k)) in many cases.
                //
                // If the left inequality is true, the right inequality is also true:
                // Theorem 2 in the paper is valid for all positive exponents, because
                // the requirements h'(x) = -exponent/x^(exponent + 1) < 0 and
                // (-1/hInverse'(x))'' = (1+1/exponent) * x^(1/exponent-1) >= 0
                // are both fulfilled.
                // Therefore, f(x) = x - hIntegralInverse(hIntegral(x + 0.5) - h(x))
                // is a non-decreasing function. If k - x <= s holds,
                // k - x <= s + f(k) - f(2) is obviously also true which is equivalent to
                // -x <= -hIntegralInverse(hIntegral(k + 0.5) - h(k)),
                // -hIntegralInverse(u) <= -hIntegralInverse(hIntegral(k + 0.5) - h(k)),
                // and finally u >= hIntegral(k + 0.5) - h(k).
                //
                // Hence, the right inequality determines the acceptance rate:
                // P(accepted | k = m) = h(m) / (hIntegrated(m+1/2) - hIntegrated(m-1/2))
                // The probability that m is returned is given by
                // P(k = m and accepted) = P(accepted | k = m) * P(k = m)
                //                       = C * h(m) = C / m^exponent.
                //
                // In both cases the probabilities are proportional to the probability mass
                // function of the Zipf distribution.

                return k;
            }
        }
    }
}

impl rand::distributions::Distribution<usize> for ZipfDistribution {
    fn sample<R: Rng + ?Sized>(&self, rng: &mut R) -> usize {
        self.next(rng)
    }
}

use std::fmt;
impl fmt::Debug for ZipfDistribution {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> Result<(), fmt::Error> {
        f.debug_struct("ZipfDistribution")
            .field("e", &self.exponent)
            .field("n", &self.num_elements)
            .finish()
    }
}

impl ZipfDistribution {
    /// Computes `H(x)`, defined as
    ///
    /// - `(x^(1 - exponent) - 1) / (1 - exponent)`, if `exponent != 1`
    /// - `log(x)`, if `exponent == 1`
    ///
    /// `H(x)` is an integral function of `h(x)`, the derivative of `H(x)` is `h(x)`.
    fn h_integral(x: f64, exponent: f64) -> f64 {
        let log_x = x.ln();
        helper2((1f64 - exponent) * log_x) * log_x
    }

    /// Computes `h(x) = 1 / x^exponent`
    fn h(x: f64, exponent: f64) -> f64 {
        (-exponent * x.ln()).exp()
    }

    /// The inverse function of `H(x)`.
    /// Returns the `y` for which `H(y) = x`.
    fn h_integral_inv(x: f64, exponent: f64) -> f64 {
        let mut t: f64 = x * (1f64 - exponent);
        if t < -1f64 {
            // Limit value to the range [-1, +inf).
            // t could be smaller than -1 in some rare cases due to numerical errors.
            t = -1f64;
        }
        (helper1(t) * x).exp()
    }
}

/// Helper function that calculates `log(1 + x) / x`.
/// A Taylor series expansion is used, if x is close to 0.
fn helper1(x: f64) -> f64 {
    if x.abs() > 1e-8 { x.ln_1p() / x } else { 1f64 - x * (0.5 - x * (1.0 / 3.0 - 0.25 * x)) }
}

/// Helper function to calculate `(exp(x) - 1) / x`.
/// A Taylor series expansion is used, if x is close to 0.
fn helper2(x: f64) -> f64 {
    if x.abs() > 1e-8 {
        x.exp_m1() / x
    } else {
        1f64 + x * 0.5 * (1f64 + x * 1.0 / 3.0 * (1f64 + 0.25 * x))
    }
}
```

library/core/tests/slice.rs

-51

```diff
@@ -1800,57 +1800,6 @@ fn brute_force_rotate_test_1() {
     }
 }
 
-#[test]
-#[cfg(not(target_arch = "wasm32"))]
-fn sort_unstable() {
-    use rand::Rng;
-
-    // Miri is too slow (but still need to `chain` to make the types match)
-    let lens = if cfg!(miri) { (2..20).chain(0..0) } else { (2..25).chain(500..510) };
-    let rounds = if cfg!(miri) { 1 } else { 100 };
-
-    let mut v = [0; 600];
-    let mut tmp = [0; 600];
-    let mut rng = crate::test_rng();
-
-    for len in lens {
-        let v = &mut v[0..len];
-        let tmp = &mut tmp[0..len];
-
-        for &modulus in &[5, 10, 100, 1000] {
-            for _ in 0..rounds {
-                for i in 0..len {
-                    v[i] = rng.gen::<i32>() % modulus;
-                }
-
-                // Sort in default order.
-                tmp.copy_from_slice(v);
-                tmp.sort_unstable();
-                assert!(tmp.windows(2).all(|w| w[0] <= w[1]));
-
-                // Sort in ascending order.
-                tmp.copy_from_slice(v);
-                tmp.sort_unstable_by(|a, b| a.cmp(b));
-                assert!(tmp.windows(2).all(|w| w[0] <= w[1]));
-
-                // Sort in descending order.
-                tmp.copy_from_slice(v);
-                tmp.sort_unstable_by(|a, b| b.cmp(a));
-                assert!(tmp.windows(2).all(|w| w[0] >= w[1]));
-            }
-        }
-    }
-
-    // Should not panic.
-    [0i32; 0].sort_unstable();
-    [(); 10].sort_unstable();
-    [(); 100].sort_unstable();
-
-    let mut v = [0xDEADBEEFu64];
-    v.sort_unstable();
-    assert!(v == [0xDEADBEEF]);
-}
-
 #[test]
 #[cfg(not(target_arch = "wasm32"))]
 #[cfg_attr(miri, ignore)] // Miri is too slow
```
