Custom alloc, using stable Rust compilers, and more #402
@wedsonaf This disables Rust Binder in the CI.
Two changes from a self-review:
@wedsonaf Let's do that then, since anyhow I have to improve a couple of things here.
I am slightly worried about the custom `alloc`. I am also worried about the extensive use of unstable features in `alloc`.
@bjorn3 Before one of the technical calls on `alloc`, we discussed this and the possible approaches, and decided to go for a middle ground. On one hand, kernel folks wanted to keep `alloc` in-tree.

The middle ground was agreeing on keeping a subset of `alloc` in-tree. By doing this, the kernel can go a bit faster now, and Rust can slowly incorporate and discuss the changes as needed.
Some thoughts:

- Is it necessary for all files to carry SPDX identifiers? Can we, for example, document somewhere that all `alloc` files are MIT or Apache-2.0 so we can keep the files unchanged?
- I actually find `-Dmissing_docs` not very helpful for dev. Perhaps it's better to warn about missing docs in the Makefile and only reject them in CI (like many other warnings).
- You also want to add `no_global_oom_handling` to `Rc::new` and `Arc::new`, as they use the non-fallible `box` syntax. You cannot actually use them because they'll trigger an error when monomorphized, but it's better to just remove them.
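The point about removing infallible constructors can be made concrete by sketching what the fallible path looks like. This is an illustration only, not the kernel's actual API (`try_new_boxed` is a hypothetical name): instead of invoking the global OOM handler, allocation failure is surfaced as an `Err`.

```rust
use std::alloc::{alloc, Layout};

// Hypothetical fallible counterpart to `Box::new` (illustration only, not
// the kernel's API). Assumes `T` is not zero-sized; returns `Err(())`
// instead of aborting when the allocator fails.
fn try_new_boxed<T>(value: T) -> Result<Box<T>, ()> {
    let layout = Layout::new::<T>();
    assert!(layout.size() != 0, "sketch does not handle zero-sized types");
    unsafe {
        let ptr = alloc(layout) as *mut T;
        if ptr.is_null() {
            return Err(()); // allocation failed: report it, don't abort
        }
        ptr.write(value);
        Ok(Box::from_raw(ptr))
    }
}

fn main() {
    let b = try_new_boxed(5i32).expect("allocation failed");
    assert_eq!(*b, 5);
}
```

Under `no_global_oom_handling`, only constructors shaped like this can remain; the infallible `new()`s have no error path to take.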
AFAIK, they are required (or worse, a bigger text with the licence like in some third-party files etc.). In any case, it is better to avoid that fight now, i.e. we can ask lawyers later. It is not a big deal anyway, and can be mechanically ignored in e.g. a script.
It is true that people will complain sooner rather than later, so it is best to change it. My thinking here was that the warning was fairly easy/stable, and that most driver developers will not need many of them.
Indeed.
Found in Rust-for-Linux/linux#402. Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
alloc: `RawVec<T, A>::shrink` can be in `no_global_oom_handling`. Found in Rust-for-Linux/linux#402. Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
This brings the `alloc` crate in-tree. The code comes from Rust 1.54.0-beta.1, i.e. commit `bf62f4de3`. Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
In preparation for enabling `no_global_oom_handling` for `alloc`, we need to add some new methods. They are all marked as: #[stable(feature = "kernel", since = "1.0.0")] for easy identification. Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
In preparation for enabling `no_global_oom_handling` for `alloc`, we need to stop using methods that will disappear when enabling the configuration option. Instead, we use the new methods we just added. Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
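The new `try_*` methods follow one pattern: do the fallible reservation first, then perform only writes that cannot allocate. A stable-Rust sketch of that shape, mimicked as a hypothetical extension trait on std's `Vec` via `Vec::try_reserve` (the kernel's real methods live inside its in-tree `alloc` and carry the `#[stable(feature = "kernel", since = "1.0.0")]` attribute, which only the standard library itself may use):

```rust
use std::collections::TryReserveError;

// Illustrative sketch only: the kernel's `try_extend_from_slice` lives
// inside its in-tree `alloc`; here we mimic the pattern on std's Vec.
trait TryExtendFromSlice<T: Clone> {
    fn try_extend_from_slice(&mut self, other: &[T]) -> Result<(), TryReserveError>;
}

impl<T: Clone> TryExtendFromSlice<T> for Vec<T> {
    fn try_extend_from_slice(&mut self, other: &[T]) -> Result<(), TryReserveError> {
        // Fallible step: reserve all the needed space up front...
        self.try_reserve(other.len())?;
        // ...so the copy below cannot trigger a (panicking) allocation.
        self.extend_from_slice(other);
        Ok(())
    }
}

fn main() {
    let mut v = vec![1, 2];
    v.try_extend_from_slice(&[3, 4]).unwrap();
    assert_eq!(v, [1, 2, 3, 4]);
}
```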
@bjorn3 @nbdd0121 Please take another look at b4a8689 and 487d757 (you may want to check the difference with the previous time... Now it is closer to what should be implemented upstream, like Gary asked, but it is quite a lot more code now). Also, please see the `alloc` diff vs. v1 below.
The `alloc` diff vs. v1:

diff --git a/rust/alloc/raw_vec.rs b/rust/alloc/raw_vec.rs
index 011c22ac0d9..629dbd3927d 100644
--- a/rust/alloc/raw_vec.rs
+++ b/rust/alloc/raw_vec.rs
@@ -248,7 +248,7 @@ impl<T, A: Allocator> RawVec<T, A> {
};
let ptr = match result {
Ok(ptr) => ptr,
- Err(_) => return Err(TryReserveError::AllocError{ layout, non_exhaustive: () }),
+ Err(_) => return Err(TryReserveError::AllocError { layout, non_exhaustive: () }),
};
Ok(Self {
@@ -515,7 +515,6 @@ impl<T, A: Allocator> RawVec<T, A> {
Ok(())
}
- // TODO: Why this was marked as `#[cfg(not(no_global_oom_handling))]`?
fn shrink(&mut self, amount: usize) -> Result<(), TryReserveError> {
assert!(amount <= self.capacity(), "Tried to shrink to a larger capacity");
diff --git a/rust/alloc/rc.rs b/rust/alloc/rc.rs
index 34a3d7eb6d2..7344cd9a449 100644
--- a/rust/alloc/rc.rs
+++ b/rust/alloc/rc.rs
@@ -264,6 +264,7 @@ use core::marker::{self, PhantomData, Unpin, Unsize};
use core::mem::size_of_val;
use core::mem::{self, align_of_val_raw, forget};
use core::ops::{CoerceUnsized, Deref, DispatchFromDyn, Receiver};
+#[cfg(not(no_global_oom_handling))]
use core::pin::Pin;
use core::ptr::{self, NonNull};
#[cfg(not(no_global_oom_handling))]
@@ -348,6 +349,7 @@ impl<T> Rc<T> {
///
/// let five = Rc::new(5);
/// ```
+ #[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn new(value: T) -> Rc<T> {
// There is an implicit weak pointer owned by all the strong
@@ -383,6 +385,7 @@ impl<T> Rc<T> {
/// }
/// }
/// ```
+ #[cfg(not(no_global_oom_handling))]
#[unstable(feature = "arc_new_cyclic", issue = "75861")]
pub fn new_cyclic(data_fn: impl FnOnce(&Weak<T>) -> T) -> Rc<T> {
// Construct the inner in the "uninitialized" state with a single
@@ -579,6 +582,7 @@ impl<T> Rc<T> {
}
/// Constructs a new `Pin<Rc<T>>`. If `T` does not implement `Unpin`, then
/// `value` will be pinned in memory and unable to be moved.
+ #[cfg(not(no_global_oom_handling))]
#[stable(feature = "pin", since = "1.33.0")]
pub fn pin(value: T) -> Pin<Rc<T>> {
unsafe { Pin::new_unchecked(Rc::new(value)) }
@@ -1475,6 +1479,7 @@ impl<T: ?Sized> Clone for Rc<T> {
}
}
+#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Default> Default for Rc<T> {
/// Creates a new `Rc<T>`, with the `Default` value for `T`.
@@ -1733,6 +1738,7 @@ impl<T: ?Sized> fmt::Pointer for Rc<T> {
}
}
+#[cfg(not(no_global_oom_handling))]
#[stable(feature = "from_for_ptrs", since = "1.6.0")]
impl<T> From<T> for Rc<T> {
/// Converts a generic type `T` into a `Rc<T>`
diff --git a/rust/alloc/slice.rs b/rust/alloc/slice.rs
index cad2b4ee2a2..455d1be60c1 100644
--- a/rust/alloc/slice.rs
+++ b/rust/alloc/slice.rs
@@ -155,6 +155,7 @@ mod hack {
use core::alloc::Allocator;
use crate::boxed::Box;
+ use crate::collections::TryReserveError;
use crate::vec::Vec;
// We shouldn't add inline attribute to this since this is used in
@@ -174,6 +175,11 @@ mod hack {
T::to_vec(s, alloc)
}
+ #[inline]
+ pub fn try_to_vec<T: TryConvertVec, A: Allocator>(s: &[T], alloc: A) -> Result<Vec<T, A>, TryReserveError> {
+ T::try_to_vec(s, alloc)
+ }
+
#[cfg(not(no_global_oom_handling))]
pub trait ConvertVec {
fn to_vec<A: Allocator>(s: &[Self], alloc: A) -> Vec<Self, A>
@@ -181,6 +187,12 @@ mod hack {
Self: Sized;
}
+ pub trait TryConvertVec {
+ fn try_to_vec<A: Allocator>(s: &[Self], alloc: A) -> Result<Vec<Self, A>, TryReserveError>
+ where
+ Self: Sized;
+ }
+
#[cfg(not(no_global_oom_handling))]
impl<T: Clone> ConvertVec for T {
#[inline]
@@ -233,6 +245,42 @@ mod hack {
v
}
}
+
+ impl<T: Clone> TryConvertVec for T {
+ #[inline]
+ default fn try_to_vec<A: Allocator>(s: &[Self], alloc: A) -> Result<Vec<Self, A>, TryReserveError> {
+ struct DropGuard<'a, T, A: Allocator> {
+ vec: &'a mut Vec<T, A>,
+ num_init: usize,
+ }
+ impl<'a, T, A: Allocator> Drop for DropGuard<'a, T, A> {
+ #[inline]
+ fn drop(&mut self) {
+ // SAFETY:
+ // items were marked initialized in the loop below
+ unsafe {
+ self.vec.set_len(self.num_init);
+ }
+ }
+ }
+ let mut vec = Vec::try_with_capacity_in(s.len(), alloc)?;
+ let mut guard = DropGuard { vec: &mut vec, num_init: 0 };
+ let slots = guard.vec.spare_capacity_mut();
+ // .take(slots.len()) is necessary for LLVM to remove bounds checks
+ // and has better codegen than zip.
+ for (i, b) in s.iter().enumerate().take(slots.len()) {
+ guard.num_init = i;
+ slots[i].write(b.clone());
+ }
+ core::mem::forget(guard);
+ // SAFETY:
+ // the vec was allocated and initialized above to at least this length.
+ unsafe {
+ vec.set_len(s.len());
+ }
+ Ok(vec)
+ }
+ }
}
#[lang = "slice_alloc"]
@@ -533,15 +581,8 @@ impl<T> [T] {
where
T: Clone,
{
- let mut v = Vec::try_with_capacity_in(self.len(), alloc)?;
- // SAFETY:
- // allocated above with the capacity of `self`, and initialize to `self.len()` in
- // ptr::copy_to_non_overlapping below.
- unsafe {
- self.as_ptr().copy_to_nonoverlapping(v.as_mut_ptr(), self.len());
- v.set_len(self.len());
- }
- Ok(v)
+ // N.B., see the `hack` module in this file for more details.
+ hack::try_to_vec(self, alloc)
}
/// Converts `self` into a vector without clones or allocation.
diff --git a/rust/alloc/sync.rs b/rust/alloc/sync.rs
index b87d1f24875..1f4e446806c 100644
--- a/rust/alloc/sync.rs
+++ b/rust/alloc/sync.rs
@@ -21,6 +21,7 @@ use core::marker::{PhantomData, Unpin, Unsize};
use core::mem::size_of_val;
use core::mem::{self, align_of_val_raw};
use core::ops::{CoerceUnsized, Deref, DispatchFromDyn, Receiver};
+#[cfg(not(no_global_oom_handling))]
use core::pin::Pin;
use core::ptr::{self, NonNull};
#[cfg(not(no_global_oom_handling))]
@@ -35,10 +36,10 @@ use crate::alloc::{box_free, WriteCloneIntoRaw};
use crate::alloc::{AllocError, Allocator, Global, Layout};
use crate::borrow::{Cow, ToOwned};
use crate::boxed::Box;
+use crate::collections::TryReserveError;
use crate::rc::is_dangling;
#[cfg(not(no_global_oom_handling))]
use crate::string::String;
-#[cfg(not(no_global_oom_handling))]
use crate::vec::Vec;
#[cfg(test)]
@@ -334,6 +335,7 @@ impl<T> Arc<T> {
///
/// let five = Arc::new(5);
/// ```
+ #[cfg(not(no_global_oom_handling))]
#[inline]
#[stable(feature = "rust1", since = "1.0.0")]
pub fn new(data: T) -> Arc<T> {
@@ -367,6 +369,7 @@ impl<T> Arc<T> {
/// me: me.clone(),
/// });
/// ```
+ #[cfg(not(no_global_oom_handling))]
#[inline]
#[unstable(feature = "arc_new_cyclic", issue = "75861")]
pub fn new_cyclic(data_fn: impl FnOnce(&Weak<T>) -> T) -> Arc<T> {
@@ -487,6 +490,7 @@ impl<T> Arc<T> {
/// Constructs a new `Pin<Arc<T>>`. If `T` does not implement `Unpin`, then
/// `data` will be pinned in memory and unable to be moved.
+ #[cfg(not(no_global_oom_handling))]
#[stable(feature = "pin", since = "1.33.0")]
pub fn pin(data: T) -> Pin<Arc<T>> {
unsafe { Pin::new_unchecked(Arc::new(data)) }
@@ -1184,6 +1188,18 @@ impl<T> Arc<[T]> {
}
}
+ /// Tries to allocate an `ArcInner<[T]>` with the given length.
+ unsafe fn try_allocate_for_slice(len: usize) -> Result<*mut ArcInner<[T]>, TryReserveError> {
+ unsafe {
+ let layout = Layout::array::<T>(len)?;
+ Self::try_allocate_for_layout(
+ layout,
+ |l| Global.allocate(l),
+ |mem| ptr::slice_from_raw_parts_mut(mem as *mut T, len) as *mut ArcInner<[T]>,
+ ).map_err(|_| TryReserveError::AllocError { layout, non_exhaustive: () })
+ }
+ }
+
/// Copy elements from slice into newly allocated Arc<\[T\]>
///
/// Unsafe because the caller must either take ownership or bind `T: Copy`.
@@ -1198,6 +1214,19 @@ impl<T> Arc<[T]> {
}
}
+ /// Tries to copy elements from slice into newly allocated Arc<\[T\]>
+ ///
+ /// Unsafe because the caller must either take ownership or bind `T: Copy`.
+ unsafe fn try_copy_from_slice(v: &[T]) -> Result<Arc<[T]>, TryReserveError> {
+ unsafe {
+ let ptr = Self::try_allocate_for_slice(v.len())?;
+
+ ptr::copy_nonoverlapping(v.as_ptr(), &mut (*ptr).data as *mut [T] as *mut T, v.len());
+
+ Ok(Self::from_ptr(ptr))
+ }
+ }
+
/// Constructs an `Arc<[T]>` from an iterator known to be of a certain size.
///
/// Behavior is undefined should the size be wrong.
@@ -2276,6 +2305,7 @@ impl<T: ?Sized> fmt::Pointer for Arc<T> {
}
}
+#[cfg(not(no_global_oom_handling))]
#[stable(feature = "rust1", since = "1.0.0")]
impl<T: Default> Default for Arc<T> {
/// Creates a new `Arc<T>`, with the `Default` value for `T`.
@@ -2300,6 +2330,7 @@ impl<T: ?Sized + Hash> Hash for Arc<T> {
}
}
+#[cfg(not(no_global_oom_handling))]
#[stable(feature = "from_for_ptrs", since = "1.6.0")]
impl<T> From<T> for Arc<T> {
fn from(t: T) -> Self {
@@ -2409,6 +2440,32 @@ impl<T> From<Vec<T>> for Arc<[T]> {
}
}
+// Avoid `error: specializing impl repeats parameter` implementing `TryFrom`.
+impl<T> Arc<[T]> {
+ /// Tries to allocate a reference-counted slice and move `v`'s items into it.
+ ///
+ /// # Example
+ ///
+ /// ```
+ /// # use std::sync::Arc;
+ /// let unique: Vec<i32> = vec![1, 2, 3];
+ /// let shared: Arc<[i32]> = Arc::try_from(unique).unwrap();
+ /// assert_eq!(&[1, 2, 3], &shared[..]);
+ /// ```
+ #[stable(feature = "kernel", since = "1.0.0")]
+ #[inline]
+ pub fn try_from_vec(mut v: Vec<T>) -> Result<Self, TryReserveError> {
+ unsafe {
+ let arc = Arc::try_copy_from_slice(&v)?;
+
+ // Allow the Vec to free its memory, but not destroy its contents
+ v.set_len(0);
+
+ Ok(arc)
+ }
+ }
+}
+
#[stable(feature = "shared_from_cow", since = "1.45.0")]
impl<'a, B> From<Cow<'a, B>> for Arc<B>
where
diff --git a/rust/alloc/vec/mod.rs b/rust/alloc/vec/mod.rs
index 1af036959e3..2abffb93e49 100644
--- a/rust/alloc/vec/mod.rs
+++ b/rust/alloc/vec/mod.rs
@@ -120,10 +120,8 @@ use self::spec_from_elem::SpecFromElem;
#[cfg(not(no_global_oom_handling))]
mod spec_from_elem;
-#[cfg(not(no_global_oom_handling))]
use self::set_len_on_drop::SetLenOnDrop;
-#[cfg(not(no_global_oom_handling))]
mod set_len_on_drop;
#[cfg(not(no_global_oom_handling))]
@@ -147,7 +145,8 @@ mod spec_from_iter;
#[cfg(not(no_global_oom_handling))]
use self::spec_extend::SpecExtend;
-#[cfg(not(no_global_oom_handling))]
+use self::spec_extend::TrySpecExtend;
+
mod spec_extend;
/// A contiguous growable array type, written as `Vec<T>` and pronounced 'vector'.
@@ -1834,7 +1833,7 @@ impl<T, A: Allocator> Vec<T, A> {
///
/// ```
/// let mut vec = vec![1, 2];
- /// vec.try_push(3)?;
+ /// vec.try_push(3).unwrap();
/// assert_eq!(vec, [1, 2, 3]);
/// ```
#[inline]
@@ -1910,6 +1909,17 @@ impl<T, A: Allocator> Vec<T, A> {
self.len += count;
}
+ /// Tries to append elements to `Self` from other buffer.
+ #[inline]
+ unsafe fn try_append_elements(&mut self, other: *const [T]) -> Result<(), TryReserveError> {
+ let count = unsafe { (*other).len() };
+ self.try_reserve(count)?;
+ let len = self.len();
+ unsafe { ptr::copy_nonoverlapping(other as *const T, self.as_mut_ptr().add(len), count) };
+ self.len += count;
+ Ok(())
+ }
+
/// Creates a draining iterator that removes the specified range in the vector
/// and yields the removed items.
///
@@ -2340,22 +2350,12 @@ impl<T: Clone, A: Allocator> Vec<T, A> {
pub fn try_resize(&mut self, new_len: usize, value: T) -> Result<(), TryReserveError> {
let len = self.len();
- if new_len <= len {
+ if new_len > len {
+ self.try_extend_with(new_len - len, ExtendElement(value))
+ } else {
self.truncate(new_len);
- return Ok(());
- }
-
- let additional = new_len - len;
-
- // TODO: another method to have the option for `try_reserve_exact`?
- self.try_reserve(additional)?;
-
- // TODO: be smarter.
- for _ in 0..additional {
- self.try_push(value.clone())?;
+ Ok(())
}
-
- Ok(())
}
/// Clones and appends all elements in a slice to the `Vec`.
@@ -2404,15 +2404,7 @@ impl<T: Clone, A: Allocator> Vec<T, A> {
/// [`extend`]: Vec::extend
#[stable(feature = "kernel", since = "1.0.0")]
pub fn try_extend_from_slice(&mut self, other: &[T]) -> Result<(), TryReserveError> {
- // TODO: another method to have the option for `try_reserve_exact`?
- self.try_reserve(other.len())?;
-
- // TODO: be smarter. Also avoids bringing the `spec_extend` module.
- for value in other {
- self.try_push(value.clone())?;
- }
-
- Ok(())
+ self.try_spec_extend(other.iter())
}
/// Copies elements from `src` range to the end of the vector.
@@ -2514,6 +2506,36 @@ impl<T, A: Allocator> Vec<T, A> {
// len set by scope guard
}
}
+
+ /// Try to extend the vector by `n` values, using the given generator.
+ fn try_extend_with<E: ExtendWith<T>>(&mut self, n: usize, mut value: E) -> Result<(), TryReserveError> {
+ self.try_reserve(n)?;
+
+ unsafe {
+ let mut ptr = self.as_mut_ptr().add(self.len());
+ // Use SetLenOnDrop to work around bug where compiler
+ // may not realize the store through `ptr` through self.set_len()
+ // don't alias.
+ let mut local_len = SetLenOnDrop::new(&mut self.len);
+
+ // Write all elements except the last one
+ for _ in 1..n {
+ ptr::write(ptr, value.next());
+ ptr = ptr.offset(1);
+ // Increment the length in every step in case next() panics
+ local_len.increment_len(1);
+ }
+
+ if n > 0 {
+ // We can write the last element directly without cloning needlessly
+ ptr::write(ptr, value.last());
+ local_len.increment_len(1);
+ }
+
+ // len set by scope guard
+ Ok(())
+ }
+ }
}
impl<T: PartialEq, A: Allocator> Vec<T, A> {
@@ -2814,6 +2836,32 @@ impl<T, A: Allocator> Vec<T, A> {
}
}
+ // leaf method to which various SpecFrom/SpecExtend implementations delegate when
+ // they have no further optimizations to apply
+ fn try_extend_desugared<I: Iterator<Item = T>>(&mut self, mut iterator: I) -> Result<(), TryReserveError> {
+ // This is the case for a general iterator.
+ //
+ // This function should be the moral equivalent of:
+ //
+ // for item in iterator {
+ // self.push(item);
+ // }
+ while let Some(element) = iterator.next() {
+ let len = self.len();
+ if len == self.capacity() {
+ let (lower, _) = iterator.size_hint();
+ self.try_reserve(lower.saturating_add(1))?;
+ }
+ unsafe {
+ ptr::write(self.as_mut_ptr().add(len), element);
+ // NB can't overflow since we would have had to alloc the address space
+ self.set_len(len + 1);
+ }
+ }
+
+ Ok(())
+ }
+
/// Creates a splicing iterator that replaces the specified range in the vector
/// with the given `replace_with` iterator and yields the removed items.
/// `replace_with` does not need to be the same length as `range`.
diff --git a/rust/alloc/vec/set_len_on_drop.rs b/rust/alloc/vec/set_len_on_drop.rs
new file mode 100644
index 00000000000..448bf5076a0
--- /dev/null
+++ b/rust/alloc/vec/set_len_on_drop.rs
@@ -0,0 +1,30 @@
+// SPDX-License-Identifier: Apache-2.0 OR MIT
+
+// Set the length of the vec when the `SetLenOnDrop` value goes out of scope.
+//
+// The idea is: The length field in SetLenOnDrop is a local variable
+// that the optimizer will see does not alias with any stores through the Vec's data
+// pointer. This is a workaround for alias analysis issue #32155
+pub(super) struct SetLenOnDrop<'a> {
+ len: &'a mut usize,
+ local_len: usize,
+}
+
+impl<'a> SetLenOnDrop<'a> {
+ #[inline]
+ pub(super) fn new(len: &'a mut usize) -> Self {
+ SetLenOnDrop { local_len: *len, len }
+ }
+
+ #[inline]
+ pub(super) fn increment_len(&mut self, increment: usize) {
+ self.local_len += increment;
+ }
+}
+
+impl Drop for SetLenOnDrop<'_> {
+ #[inline]
+ fn drop(&mut self) {
+ *self.len = self.local_len;
+ }
+}
diff --git a/rust/alloc/vec/spec_extend.rs b/rust/alloc/vec/spec_extend.rs
new file mode 100644
index 00000000000..5a64c7ce239
--- /dev/null
+++ b/rust/alloc/vec/spec_extend.rs
@@ -0,0 +1,170 @@
+// SPDX-License-Identifier: Apache-2.0 OR MIT
+
+use crate::alloc::Allocator;
+use crate::vec::TryReserveError;
+use core::iter::TrustedLen;
+use core::ptr::{self};
+use core::slice::{self};
+
+use super::{IntoIter, SetLenOnDrop, Vec};
+
+// Specialization trait used for Vec::extend
+#[cfg(not(no_global_oom_handling))]
+pub(super) trait SpecExtend<T, I> {
+ fn spec_extend(&mut self, iter: I);
+}
+
+// Specialization trait used for Vec::try_extend
+pub(super) trait TrySpecExtend<T, I> {
+ fn try_spec_extend(&mut self, iter: I) -> Result<(), TryReserveError>;
+}
+
+#[cfg(not(no_global_oom_handling))]
+impl<T, I, A: Allocator> SpecExtend<T, I> for Vec<T, A>
+where
+ I: Iterator<Item = T>,
+{
+ default fn spec_extend(&mut self, iter: I) {
+ self.extend_desugared(iter)
+ }
+}
+
+impl<T, I, A: Allocator> TrySpecExtend<T, I> for Vec<T, A>
+where
+ I: Iterator<Item = T>,
+{
+ default fn try_spec_extend(&mut self, iter: I) -> Result<(), TryReserveError> {
+ self.try_extend_desugared(iter)
+ }
+}
+
+#[cfg(not(no_global_oom_handling))]
+impl<T, I, A: Allocator> SpecExtend<T, I> for Vec<T, A>
+where
+ I: TrustedLen<Item = T>,
+{
+ default fn spec_extend(&mut self, iterator: I) {
+ // This is the case for a TrustedLen iterator.
+ let (low, high) = iterator.size_hint();
+ if let Some(additional) = high {
+ debug_assert_eq!(
+ low,
+ additional,
+ "TrustedLen iterator's size hint is not exact: {:?}",
+ (low, high)
+ );
+ self.reserve(additional);
+ unsafe {
+ let mut ptr = self.as_mut_ptr().add(self.len());
+ let mut local_len = SetLenOnDrop::new(&mut self.len);
+ iterator.for_each(move |element| {
+ ptr::write(ptr, element);
+ ptr = ptr.offset(1);
+ // NB can't overflow since we would have had to alloc the address space
+ local_len.increment_len(1);
+ });
+ }
+ } else {
+ // Per TrustedLen contract a `None` upper bound means that the iterator length
+ // truly exceeds usize::MAX, which would eventually lead to a capacity overflow anyway.
+ // Since the other branch already panics eagerly (via `reserve()`) we do the same here.
+ // This avoids additional codegen for a fallback code path which would eventually
+ // panic anyway.
+ panic!("capacity overflow");
+ }
+ }
+}
+
+impl<T, I, A: Allocator> TrySpecExtend<T, I> for Vec<T, A>
+where
+ I: TrustedLen<Item = T>,
+{
+ default fn try_spec_extend(&mut self, iterator: I) -> Result<(), TryReserveError> {
+ // This is the case for a TrustedLen iterator.
+ let (low, high) = iterator.size_hint();
+ if let Some(additional) = high {
+ debug_assert_eq!(
+ low,
+ additional,
+ "TrustedLen iterator's size hint is not exact: {:?}",
+ (low, high)
+ );
+ self.try_reserve(additional)?;
+ unsafe {
+ let mut ptr = self.as_mut_ptr().add(self.len());
+ let mut local_len = SetLenOnDrop::new(&mut self.len);
+ iterator.for_each(move |element| {
+ ptr::write(ptr, element);
+ ptr = ptr.offset(1);
+ // NB can't overflow since we would have had to alloc the address space
+ local_len.increment_len(1);
+ });
+ }
+ Ok(())
+ } else {
+ Err(TryReserveError::CapacityOverflow)
+ }
+ }
+}
+
+#[cfg(not(no_global_oom_handling))]
+impl<T, A: Allocator> SpecExtend<T, IntoIter<T>> for Vec<T, A> {
+ fn spec_extend(&mut self, mut iterator: IntoIter<T>) {
+ unsafe {
+ self.append_elements(iterator.as_slice() as _);
+ }
+ iterator.ptr = iterator.end;
+ }
+}
+
+impl<T, A: Allocator> TrySpecExtend<T, IntoIter<T>> for Vec<T, A> {
+ fn try_spec_extend(&mut self, mut iterator: IntoIter<T>) -> Result<(), TryReserveError> {
+ unsafe {
+ self.try_append_elements(iterator.as_slice() as _)?;
+ }
+ iterator.ptr = iterator.end;
+ Ok(())
+ }
+}
+
+#[cfg(not(no_global_oom_handling))]
+impl<'a, T: 'a, I, A: Allocator + 'a> SpecExtend<&'a T, I> for Vec<T, A>
+where
+ I: Iterator<Item = &'a T>,
+ T: Clone,
+{
+ default fn spec_extend(&mut self, iterator: I) {
+ self.spec_extend(iterator.cloned())
+ }
+}
+
+impl<'a, T: 'a, I, A: Allocator + 'a> TrySpecExtend<&'a T, I> for Vec<T, A>
+where
+ I: Iterator<Item = &'a T>,
+ T: Clone,
+{
+ default fn try_spec_extend(&mut self, iterator: I) -> Result<(), TryReserveError> {
+ self.try_spec_extend(iterator.cloned())
+ }
+}
+
+#[cfg(not(no_global_oom_handling))]
+impl<'a, T: 'a, A: Allocator + 'a> SpecExtend<&'a T, slice::Iter<'a, T>> for Vec<T, A>
+where
+ T: Copy,
+{
+ fn spec_extend(&mut self, iterator: slice::Iter<'a, T>) {
+ let slice = iterator.as_slice();
+ unsafe { self.append_elements(slice) };
+ }
+}
+
+impl<'a, T: 'a, A: Allocator + 'a> TrySpecExtend<&'a T, slice::Iter<'a, T>> for Vec<T, A>
+where
+ T: Copy,
+{
+ fn try_spec_extend(&mut self, iterator: slice::Iter<'a, T>) -> Result<(), TryReserveError> {
+ let slice = iterator.as_slice();
+ unsafe { self.try_append_elements(slice) }
+ }
+}
Also: I ran the binder tests against a build with these changes and they all pass.
Thanks for checking! I will leave this open for a few hours in case bjorn/Gary want to check again. |
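The `DropGuard` inside `try_to_vec` in the diff above is worth seeing in isolation: it keeps a partially initialized `Vec` safe if a `clone()` panics mid-loop, by setting the length to the number of elements actually written. A standalone stable-Rust sketch of the same idea, substituting std's `Vec::try_reserve_exact` for the in-tree `try_with_capacity_in`:

```rust
use std::collections::TryReserveError;

// Sketch of the DropGuard pattern from `try_to_vec`, on std's Vec.
fn try_to_vec_clone<T: Clone>(s: &[T]) -> Result<Vec<T>, TryReserveError> {
    let mut vec: Vec<T> = Vec::new();
    vec.try_reserve_exact(s.len())?; // the only fallible step

    // On panic in `clone()`, Drop sets the length to what was initialized,
    // so those elements are dropped and the rest stay untouched.
    struct DropGuard<'a, T> {
        vec: &'a mut Vec<T>,
        num_init: usize,
    }
    impl<T> Drop for DropGuard<'_, T> {
        fn drop(&mut self) {
            // SAFETY: `num_init` elements were initialized in the loop below.
            unsafe { self.vec.set_len(self.num_init) }
        }
    }

    let mut guard = DropGuard { vec: &mut vec, num_init: 0 };
    let slots = guard.vec.spare_capacity_mut();
    for (i, item) in s.iter().enumerate().take(slots.len()) {
        slots[i].write(item.clone());
        guard.num_init = i + 1;
    }
    std::mem::forget(guard); // success: defuse the guard
    // SAFETY: all `s.len()` elements were initialized above.
    unsafe { vec.set_len(s.len()) };
    Ok(vec)
}

fn main() {
    let v = try_to_vec_clone(&[1, 2, 3]).unwrap();
    assert_eq!(v, [1, 2, 3]);
}
```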
I haven't reviewed the new commits as thoroughly, but I don't see any obvious problems.
In preparation for enabling `no_global_oom_handling` for `alloc`, we need to stop using methods that will disappear when enabling the configuration option. Instead, we use the new methods we just added. Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Now both `alloc` and users are prepared to be compiled with `no_global_oom_handling`. Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Sadly, `rustfmt` does not respect `RUSTC_BOOTSTRAP=1` yet, so we will have to avoid them for the moment. Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Now it is needed since we use `objtree` etc. in the target. Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
Thanks a lot everyone, I am pushing this now.
Review of
alloc: `no_global_oom_handling`: disable `new()`s, `pin()`s, etc. They are infallible, and could not be actually used because they will trigger an error when monomorphized, but it is better to just remove them. Link: Rust-for-Linux/linux#402 Suggested-by: Gary Guo <gary@garyguo.net> Signed-off-by: Miguel Ojeda <ojeda@kernel.org>
This solves several points:

- No more panicking allocations. Brings `alloc` with `no_global_oom_handling` enabled, plus adds a few new functions we needed (see next point). Closes the panicking-`alloc` issue #199.
- Allows us to customize `alloc` as needed by having a "miniversion" of it in-tree. We will treat `alloc` differently (e.g. no re-formatting done for it, and I will make @ksquirrel flag changes to `alloc`). It carries the changes to `alloc` that were needed to replace a few panicking calls.
- We can now use stable Rust compilers instead of nightlies.
- Avoids disabling unittests and doctests for `alloc`. I am using Rust's blessed `-Zbuild-std` way, so it should be pretty maintainable. Once we can avoid the `test` library, we can simplify all this.

It also cleans up some warnings and a few other things that were discovered by using the beta. Some related issues found while working with the beta have been opened too:

- `rustup` wanted features & bugfixes #396.
- `rustfmt` wanted features & bugfixes #398.

This is best reviewed commit by commit. In particular, I would like reviews on the functions I added to `alloc`, thus summoning @alex @bjorn3 @nbdd0121. Take a look at the commit "rust: alloc: add some `try_*` methods we need" for them.

Special thanks to @Ericson2314 for quickly introducing `no_global_oom_handling` in upstream `alloc`, which simplified my life, and to @Mark-Simulacrum for answering some questions on beta releases (and working on pushing forward the first 1.54 beta, which had a few blockers/hiccups, more than usual).
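The general-iterator fallback (`try_extend_desugared` in the diff above) can likewise be sketched against std's `Vec`: whenever capacity runs out, grow fallibly by the iterator's remaining lower size hint, and propagate the error instead of panicking.

```rust
use std::collections::TryReserveError;

// Sketch of the `try_extend_desugared` idea on std's Vec: the moral
// equivalent of `for item in iter { v.push(item) }`, but fallible.
fn try_extend<T, I: Iterator<Item = T>>(
    v: &mut Vec<T>,
    mut iter: I,
) -> Result<(), TryReserveError> {
    while let Some(element) = iter.next() {
        if v.len() == v.capacity() {
            // Fallible growth, sized by the iterator's remaining lower bound.
            let (lower, _) = iter.size_hint();
            v.try_reserve(lower.saturating_add(1))?;
        }
        // Capacity was ensured above, so this push cannot reallocate.
        v.push(element);
    }
    Ok(())
}

fn main() {
    let mut v = vec![1];
    try_extend(&mut v, 2..=4).unwrap();
    assert_eq!(v, [1, 2, 3, 4]);
}
```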