Commit f6fd305
Rollup merge of #129674 - matthewpipie:rc-arc-new-cyclic-in, r=dtolnay
Add `new_cyclic_in` for `Rc` and `Arc`

Currently, `new_cyclic_in` does not exist for `Rc` and `Arc`. This is an oversight according to rust-lang/wg-allocators#132. This PR adds `new_cyclic_in` for `Rc` and `Arc`. The implementation is almost exactly the same as `new_cyclic`, with some small differences to make it allocator-specific. `new_cyclic`'s implementation has been replaced with a call to `new_cyclic_in(data_fn, Global)`.

Remaining questions:

* ~~Is requiring `Allocator` to be `Clone` OK? According to rust-lang/wg-allocators#88, allocators should be cheap to clone. I'm just hesitant to add unnecessary constraints, though I don't see an obvious workaround for this function, since many of the functions called in `new_cyclic_in` expect an owned `Allocator`. I see `Allocator::by_ref()` as an option, but that doesn't work when creating `Weak { ptr: init_ptr, alloc: alloc.clone() }`, because the type of the `Weak` then becomes `Weak<T, &A>`, which is incompatible.~~ Fixed, thank you @zakarumych! This PR no longer requires the allocator to be `Clone`.
* Currently, `new_cyclic_in`'s documentation is almost entirely copy-pasted from `new_cyclic`, with minor tweaks to make it more accurate (e.g. `Rc<T>` -> `Rc<T, A>`). The example section is removed to mitigate redundancy and instead redirects to `new_cyclic`. Is this appropriate?
* ~~The comments in `new_cyclic_in` (and much of the implementation) are also copy-pasted from `new_cyclic`. Would it be better to make a helper method `new_cyclic_in_internal` that both functions call, with either `Global` or the custom allocator? I'm not sure if that's even possible, since the internal method would have to return `Arc<T, Global>`, and I don't know if it's possible to "downcast" that to an `Arc<T>`. Maybe `transmute` would work here?~~ Done, thanks @zakarumych.
* `Arc::new_cyclic` is `#[inline]`, but `Rc::new_cyclic` is not. Which is preferred?
* Nit: does it matter where in the impl block `new_cyclic_in` is defined?
2 parents 8b36ecb + 6750f04 commit f6fd305
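For context, a minimal usage sketch of the API this commit adds (not part of the commit itself; it assumes a nightly toolchain with `allocator_api`, and the `Node` type is hypothetical):

```rust
#![feature(allocator_api)]

use std::alloc::Global;
use std::rc::{Rc, Weak};

// Hypothetical self-referential type: it stores a weak pointer to itself,
// so the cycle does not keep the allocation alive forever.
struct Node {
    me: Weak<Node, Global>,
}

fn main() {
    // `new_cyclic_in` hands the closure a `&Weak<T, A>` before the `Rc` is
    // fully constructed, so the weak pointer can be stored inside `T`.
    let node: Rc<Node, Global> = Rc::new_cyclic_in(|me| Node { me: me.clone() }, Global);

    // Once `new_cyclic_in` has returned, upgrading the stored weak succeeds.
    assert!(node.me.upgrade().is_some());
}
```

Note that while `new_cyclic_in` itself no longer requires `A: Clone`, cloning the `Weak` inside the closure still does, since each handle needs its own allocator.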

File tree

2 files changed: +172 −84

library/alloc/src/rc.rs (+79 −36)

```diff
@@ -460,42 +460,7 @@ impl<T> Rc<T> {
     where
         F: FnOnce(&Weak<T>) -> T,
     {
-        // Construct the inner in the "uninitialized" state with a single
-        // weak reference.
-        let uninit_ptr: NonNull<_> = Box::leak(Box::new(RcBox {
-            strong: Cell::new(0),
-            weak: Cell::new(1),
-            value: mem::MaybeUninit::<T>::uninit(),
-        }))
-        .into();
-
-        let init_ptr: NonNull<RcBox<T>> = uninit_ptr.cast();
-
-        let weak = Weak { ptr: init_ptr, alloc: Global };
-
-        // It's important we don't give up ownership of the weak pointer, or
-        // else the memory might be freed by the time `data_fn` returns. If
-        // we really wanted to pass ownership, we could create an additional
-        // weak pointer for ourselves, but this would result in additional
-        // updates to the weak reference count which might not be necessary
-        // otherwise.
-        let data = data_fn(&weak);
-
-        let strong = unsafe {
-            let inner = init_ptr.as_ptr();
-            ptr::write(ptr::addr_of_mut!((*inner).value), data);
-
-            let prev_value = (*inner).strong.get();
-            debug_assert_eq!(prev_value, 0, "No prior strong references should exist");
-            (*inner).strong.set(1);
-
-            Rc::from_inner(init_ptr)
-        };
-
-        // Strong references should collectively own a shared weak reference,
-        // so don't run the destructor for our old weak reference.
-        mem::forget(weak);
-        strong
+        Self::new_cyclic_in(data_fn, Global)
     }
 
     /// Constructs a new `Rc` with uninitialized contents.
@@ -762,6 +727,84 @@ impl<T, A: Allocator> Rc<T, A> {
         }
     }
 
+    /// Constructs a new `Rc<T, A>` in the given allocator while giving you a `Weak<T, A>` to the allocation,
+    /// to allow you to construct a `T` which holds a weak pointer to itself.
+    ///
+    /// Generally, a structure circularly referencing itself, either directly or
+    /// indirectly, should not hold a strong reference to itself to prevent a memory leak.
+    /// Using this function, you get access to the weak pointer during the
+    /// initialization of `T`, before the `Rc<T, A>` is created, such that you can
+    /// clone and store it inside the `T`.
+    ///
+    /// `new_cyclic_in` first allocates the managed allocation for the `Rc<T, A>`,
+    /// then calls your closure, giving it a `Weak<T, A>` to this allocation,
+    /// and only afterwards completes the construction of the `Rc<T, A>` by placing
+    /// the `T` returned from your closure into the allocation.
+    ///
+    /// Since the new `Rc<T, A>` is not fully-constructed until `Rc<T, A>::new_cyclic_in`
+    /// returns, calling [`upgrade`] on the weak reference inside your closure will
+    /// fail and result in a `None` value.
+    ///
+    /// # Panics
+    ///
+    /// If `data_fn` panics, the panic is propagated to the caller, and the
+    /// temporary [`Weak<T, A>`] is dropped normally.
+    ///
+    /// # Examples
+    ///
+    /// See [`new_cyclic`].
+    ///
+    /// [`new_cyclic`]: Rc::new_cyclic
+    /// [`upgrade`]: Weak::upgrade
+    #[cfg(not(no_global_oom_handling))]
+    #[unstable(feature = "allocator_api", issue = "32838")]
+    pub fn new_cyclic_in<F>(data_fn: F, alloc: A) -> Rc<T, A>
+    where
+        F: FnOnce(&Weak<T, A>) -> T,
+    {
+        // Construct the inner in the "uninitialized" state with a single
+        // weak reference.
+        let (uninit_raw_ptr, alloc) = Box::into_raw_with_allocator(Box::new_in(
+            RcBox {
+                strong: Cell::new(0),
+                weak: Cell::new(1),
+                value: mem::MaybeUninit::<T>::uninit(),
+            },
+            alloc,
+        ));
+        let uninit_ptr: NonNull<_> = (unsafe { &mut *uninit_raw_ptr }).into();
+        let init_ptr: NonNull<RcBox<T>> = uninit_ptr.cast();
+
+        let weak = Weak { ptr: init_ptr, alloc: alloc };
+
+        // It's important we don't give up ownership of the weak pointer, or
+        // else the memory might be freed by the time `data_fn` returns. If
+        // we really wanted to pass ownership, we could create an additional
+        // weak pointer for ourselves, but this would result in additional
+        // updates to the weak reference count which might not be necessary
+        // otherwise.
+        let data = data_fn(&weak);
+
+        let strong = unsafe {
+            let inner = init_ptr.as_ptr();
+            ptr::write(ptr::addr_of_mut!((*inner).value), data);
+
+            let prev_value = (*inner).strong.get();
+            debug_assert_eq!(prev_value, 0, "No prior strong references should exist");
+            (*inner).strong.set(1);
+
+            // Strong references should collectively own a shared weak reference,
+            // so don't run the destructor for our old weak reference.
+            // Calling into_raw_with_allocator has the double effect of giving us back the allocator,
+            // and forgetting the weak reference.
+            let alloc = weak.into_raw_with_allocator().1;
+
+            Rc::from_inner_in(init_ptr, alloc)
+        };
+
+        strong
+    }
+
     /// Constructs a new `Rc<T>` in the provided allocator, returning an error if the allocation
     /// fails
     ///
```
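The doc comment above points to `new_cyclic` for examples. As a quick illustration of the documented behavior (my own sketch, not from the commit, again assuming nightly with `allocator_api`): inside `data_fn` the strong count is still zero, so `upgrade` returns `None`, and it only succeeds after construction completes.

```rust
#![feature(allocator_api)]

use std::alloc::Global;
use std::rc::{Rc, Weak};

fn main() {
    let rc: Rc<u32, Global> = Rc::new_cyclic_in(
        |weak: &Weak<u32, Global>| {
            // The value is not initialized yet, so the strong count is 0
            // and the upgrade fails.
            assert!(weak.upgrade().is_none());
            42
        },
        Global,
    );
    // After `new_cyclic_in` returns, the strong count is 1.
    assert_eq!(*rc, 42);
}
```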

library/alloc/src/sync.rs (+93 −48)

```diff
@@ -450,54 +450,7 @@ impl<T> Arc<T> {
     where
         F: FnOnce(&Weak<T>) -> T,
     {
-        // Construct the inner in the "uninitialized" state with a single
-        // weak reference.
-        let uninit_ptr: NonNull<_> = Box::leak(Box::new(ArcInner {
-            strong: atomic::AtomicUsize::new(0),
-            weak: atomic::AtomicUsize::new(1),
-            data: mem::MaybeUninit::<T>::uninit(),
-        }))
-        .into();
-        let init_ptr: NonNull<ArcInner<T>> = uninit_ptr.cast();
-
-        let weak = Weak { ptr: init_ptr, alloc: Global };
-
-        // It's important we don't give up ownership of the weak pointer, or
-        // else the memory might be freed by the time `data_fn` returns. If
-        // we really wanted to pass ownership, we could create an additional
-        // weak pointer for ourselves, but this would result in additional
-        // updates to the weak reference count which might not be necessary
-        // otherwise.
-        let data = data_fn(&weak);
-
-        // Now we can properly initialize the inner value and turn our weak
-        // reference into a strong reference.
-        let strong = unsafe {
-            let inner = init_ptr.as_ptr();
-            ptr::write(ptr::addr_of_mut!((*inner).data), data);
-
-            // The above write to the data field must be visible to any threads which
-            // observe a non-zero strong count. Therefore we need at least "Release" ordering
-            // in order to synchronize with the `compare_exchange_weak` in `Weak::upgrade`.
-            //
-            // "Acquire" ordering is not required. When considering the possible behaviours
-            // of `data_fn` we only need to look at what it could do with a reference to a
-            // non-upgradeable `Weak`:
-            // - It can *clone* the `Weak`, increasing the weak reference count.
-            // - It can drop those clones, decreasing the weak reference count (but never to zero).
-            //
-            // These side effects do not impact us in any way, and no other side effects are
-            // possible with safe code alone.
-            let prev_value = (*inner).strong.fetch_add(1, Release);
-            debug_assert_eq!(prev_value, 0, "No prior strong references should exist");
-
-            Arc::from_inner(init_ptr)
-        };
-
-        // Strong references should collectively own a shared weak reference,
-        // so don't run the destructor for our old weak reference.
-        mem::forget(weak);
-        strong
+        Self::new_cyclic_in(data_fn, Global)
     }
 
     /// Constructs a new `Arc` with uninitialized contents.
@@ -781,6 +734,98 @@ impl<T, A: Allocator> Arc<T, A> {
         }
     }
 
+    /// Constructs a new `Arc<T, A>` in the given allocator while giving you a `Weak<T, A>` to the allocation,
+    /// to allow you to construct a `T` which holds a weak pointer to itself.
+    ///
+    /// Generally, a structure circularly referencing itself, either directly or
+    /// indirectly, should not hold a strong reference to itself to prevent a memory leak.
+    /// Using this function, you get access to the weak pointer during the
+    /// initialization of `T`, before the `Arc<T, A>` is created, such that you can
+    /// clone and store it inside the `T`.
+    ///
+    /// `new_cyclic_in` first allocates the managed allocation for the `Arc<T, A>`,
+    /// then calls your closure, giving it a `Weak<T, A>` to this allocation,
+    /// and only afterwards completes the construction of the `Arc<T, A>` by placing
+    /// the `T` returned from your closure into the allocation.
+    ///
+    /// Since the new `Arc<T, A>` is not fully-constructed until `Arc<T, A>::new_cyclic_in`
+    /// returns, calling [`upgrade`] on the weak reference inside your closure will
+    /// fail and result in a `None` value.
+    ///
+    /// # Panics
+    ///
+    /// If `data_fn` panics, the panic is propagated to the caller, and the
+    /// temporary [`Weak<T>`] is dropped normally.
+    ///
+    /// # Example
+    ///
+    /// See [`new_cyclic`]
+    ///
+    /// [`new_cyclic`]: Arc::new_cyclic
+    /// [`upgrade`]: Weak::upgrade
+    #[cfg(not(no_global_oom_handling))]
+    #[inline]
+    #[unstable(feature = "allocator_api", issue = "32838")]
+    pub fn new_cyclic_in<F>(data_fn: F, alloc: A) -> Arc<T, A>
+    where
+        F: FnOnce(&Weak<T, A>) -> T,
+    {
+        // Construct the inner in the "uninitialized" state with a single
+        // weak reference.
+        let (uninit_raw_ptr, alloc) = Box::into_raw_with_allocator(Box::new_in(
+            ArcInner {
+                strong: atomic::AtomicUsize::new(0),
+                weak: atomic::AtomicUsize::new(1),
+                data: mem::MaybeUninit::<T>::uninit(),
+            },
+            alloc,
+        ));
+        let uninit_ptr: NonNull<_> = (unsafe { &mut *uninit_raw_ptr }).into();
+        let init_ptr: NonNull<ArcInner<T>> = uninit_ptr.cast();
+
+        let weak = Weak { ptr: init_ptr, alloc: alloc };
+
+        // It's important we don't give up ownership of the weak pointer, or
+        // else the memory might be freed by the time `data_fn` returns. If
+        // we really wanted to pass ownership, we could create an additional
+        // weak pointer for ourselves, but this would result in additional
+        // updates to the weak reference count which might not be necessary
+        // otherwise.
+        let data = data_fn(&weak);
+
+        // Now we can properly initialize the inner value and turn our weak
+        // reference into a strong reference.
+        let strong = unsafe {
+            let inner = init_ptr.as_ptr();
+            ptr::write(ptr::addr_of_mut!((*inner).data), data);
+
+            // The above write to the data field must be visible to any threads which
+            // observe a non-zero strong count. Therefore we need at least "Release" ordering
+            // in order to synchronize with the `compare_exchange_weak` in `Weak::upgrade`.
+            //
+            // "Acquire" ordering is not required. When considering the possible behaviours
+            // of `data_fn` we only need to look at what it could do with a reference to a
+            // non-upgradeable `Weak`:
+            // - It can *clone* the `Weak`, increasing the weak reference count.
+            // - It can drop those clones, decreasing the weak reference count (but never to zero).
+            //
+            // These side effects do not impact us in any way, and no other side effects are
+            // possible with safe code alone.
+            let prev_value = (*inner).strong.fetch_add(1, Release);
+            debug_assert_eq!(prev_value, 0, "No prior strong references should exist");
+
+            // Strong references should collectively own a shared weak reference,
+            // so don't run the destructor for our old weak reference.
+            // Calling into_raw_with_allocator has the double effect of giving us back the allocator,
+            // and forgetting the weak reference.
+            let alloc = weak.into_raw_with_allocator().1;
+
+            Arc::from_inner_in(init_ptr, alloc)
+        };
+
+        strong
+    }
+
     /// Constructs a new `Pin<Arc<T, A>>` in the provided allocator. If `T` does not implement `Unpin`,
     /// then `data` will be pinned in memory and unable to be moved.
     #[cfg(not(no_global_oom_handling))]
```