Rust: Tracking issue for specialization (RFC 1210)

Created on 23 Feb 2016  ·  236 Comments  ·  Source: rust-lang/rust

This is a tracking issue for specialization (rust-lang/rfcs#1210).

Major implementation steps:

  • [x] Land https://github.com/rust-lang/rust/pull/30652 =)
  • [ ] Restrictions around lifetime dispatch (currently a soundness hole)
  • [ ] default impl (https://github.com/rust-lang/rust/issues/37653)
  • [ ] Integration with associated consts
  • [ ] Bounds not always properly enforced (https://github.com/rust-lang/rust/issues/33017)
  • [ ] Should we permit empty impls if parent has no default members? https://github.com/rust-lang/rust/issues/48444
  • [ ] implement "always applicable" impls https://github.com/rust-lang/rust/issues/48538
  • [ ] describe and test the precise cycle conditions around creating the specialization graph (see e.g. this comment, which noted that we have some very careful logic here today)

Unresolved questions from the RFC:

  • Should associated types be specializable at all?
  • When should projection reveal a default type? Never during typeck? Or when monomorphic?
  • Should default trait items be considered default (i.e. specializable)?
  • Should we have default impl (where all items are default) or partial impl (where default is opt-in); see https://github.com/rust-lang/rust/issues/37653#issuecomment-616116577 for some relevant examples of where default impl is limiting.
  • How should we deal with lifetime dispatchability?

Note that the specialization feature as currently implemented is unsound: it can cause undefined behavior without any unsafe code. min_specialization avoids most of the pitfalls.

A-specialization A-traits B-RFC-approved B-RFC-implemented B-unstable C-tracking-issue F-specialization T-lang

Most helpful comment

I've been using the min_specialization feature in an experimental library I've been developing, so I thought I'd share my experiences. The goal is to use specialization in its simplest form: to have some narrow cases with faster implementations than the general case. In particular, cryptographic algorithms in the general case run in constant time, but if all the inputs are marked Public, a specialized version runs in faster variable time (if they are public, we don't care about leaking information about them via execution time). Additionally, some algorithms are faster depending on whether the elliptic curve point is normalized or not. To get this to work we start with

#![feature(rustc_attrs, min_specialization)]

Then, if you need to make a _specialization predicate_ trait as explained in maximally minimal specialization, you mark the trait declaration with #[rustc_specialization_trait].

All my specialization is done in this file and here's an example of a specialization predicate trait.

The feature works and does exactly what I need. This is obviously using a rustc-internal marker and is therefore prone to breaking without warning.

The only negative bit of feedback is that I don't feel the default keyword makes sense. Essentially what default means right now is: "this impl is specializable, so interpret impls that cover a subset of this one as specializations of it rather than as conflicting impls". The problem is it leads to very weird looking code:

https://github.com/LLFourn/secp256kfun/blob/6766b60c02c99ca24f816801fe876fed79643c3a/secp256kfun/src/op.rs#L196-L206

Here the second impl is specializing the first, but it is also default. The meaning of default seems to be lost. If you look at the rest of the impls, it is quite hard to figure out which impls are specializing which. Furthermore, when I made an erroneous impl that overlapped with an existing one, it was often hard to figure out where I went wrong.

It seems to me this would be simpler if everything was specializable and when you specialize something you declare precisely which impl you are specializing. Transforming the example in the RFC into what I had in mind:

impl<A, T> Extend<A, T> for Vec<A> where T: IntoIterator<Item=A>
{
    // no need for default
    fn extend(&mut self, iterable: T) {
        ...
    }
}

// We declare explicitly which impl we are specializing repeating all type bounds etc
specialize impl<A, T> Extend<A, T> for Vec<A> where T: IntoIterator<Item=A>
    // And then we declare explicitly how we are making this impl narrower with ‘when’.
    // i.e. This impl is like the first except replace all occurrences of ‘T’ with ‘&'a [A]’
    when<'a> T = &'a [A]
{
    fn extend(&mut self, iterable: &'a [A]) {
        ...
    }
}

All 236 comments

Some additional open questions:

  • Should we revisit the orphan rules in the light of specialization? Are there ways to make things more flexible now?
  • Should we extend the "chain rule" in the RFC to something more expressive, like the so-called "lattice rule"?
  • Related to both of the above, how does negative reasoning fit into the story? Can we recover the negative reasoning we need by a clever enough use of specialization/orphan rules, or should we make it more first-class?

I am not sure that specialization changes the orphan rules:

  • The "linking" orphan rules must stay the same, because otherwise you would not have safe linking.
  • I don't think the "future compatibility" orphan rules should change. Adding a non-specializable impl under you would still be a breaking change.

Worse than that, the "future compatibility" orphan rules keep cross-crate specialization under pretty heavy control. Without them, default-impls leaving their methods open becomes much worse.

I never liked explicit negative reasoning. I think the total negative reasoning specialization provides is a nice compromise.

Should this impl be allowed with specialization as implemented? Or am I missing something?
http://is.gd/3Ul0pe

Same with this one, would have expected it to compile: http://is.gd/RyFIEl

Looks like there are some quirks in determining overlap when associated types are involved. This compiles: http://is.gd/JBPzIX, while this effectively identical code doesn't: http://is.gd/0ksLPX

Here's a piece of code I expected to compile with specialization:

http://is.gd/3BNbfK

#![feature(specialization)]

use std::str::FromStr;

struct Error;

trait Simple<'a> {
    fn do_something(s: &'a str) -> Result<Self, Error>;
}

impl<'a> Simple<'a> for &'a str {
    fn do_something(s: &'a str) -> Result<Self, Error> {
        Ok(s)
    }
}

impl<'a, T: FromStr> Simple<'a> for T {
    fn do_something(s: &'a str) -> Result<Self, Error> {
        T::from_str(s).map_err(|_| Error)
    }
}

fn main() {
    // Do nothing. Just type check.
}

Compilation fails with the compiler citing implementation conflicts. Note that &str doesn't implement FromStr, so there shouldn't be a conflict.

@sgrif

I had time to look at the first two examples. Here are my notes.

Example 1

First case, you have:

  • FromSqlRow<ST, DB> for T where T: FromSql<ST, DB>
  • FromSqlRow<(ST, SU), DB> for (T, U) where T: FromSqlRow<ST, DB>, U: FromSqlRow<SU, DB>,

The problem is that these impls overlap but neither is more specific than the other:

  • You can potentially have a T: FromSql<ST, DB> where T is not a pair (so it matches the first impl but not the second).
  • You can potentially have a (T, U) where:

    • T: FromSqlRow<ST, DB>,

    • U: FromSqlRow<SU, DB>, but _not_

    • (T, U): FromSql<(ST, SU), DB>

    • (so the second impl matches, but not the first)

  • The two impls overlap because you can have a (T, U) such that:

    • T: FromSqlRow<ST, DB>

    • U: FromSqlRow<SU, DB>

    • (T, U): FromSql<(ST, SU), DB>

This is the kind of situation that lattice impls would allow -- you'd have to write a third impl for the overlapping case, and say what it should do. Alternatively, negative trait impls might give you a way to rule out overlap or otherwise tweak which matches are possible.
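In sketch form, the intersection impl that the lattice rule would require might look like the following (hypothetical: no current rule accepts this overlap, so treat it as pseudocode):

```
// Hypothetical intersection impl under the lattice rule: it applies
// exactly when BOTH of the earlier impls match, and thereby
// disambiguates the overlapping case.
impl<T, U, ST, SU, DB> FromSqlRow<(ST, SU), DB> for (T, U)
where
    T: FromSqlRow<ST, DB>,
    U: FromSqlRow<SU, DB>,
    (T, U): FromSql<(ST, SU), DB>,
{
    // decide here which behavior the overlap should get
}
```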

Example 2

You have:

  • Queryable<ST, DB> for T where T: FromSqlRow<ST, DB>
  • Queryable<Nullable<ST>, DB> for Option<T> where T: Queryable<ST, DB>

These overlap because you can have Option<T> where:

  • T: Queryable<ST, DB>
  • Option<T>: FromSqlRow<Nullable<ST>, DB>

But neither impl is more specific:

  • You can have a T such that T: FromSqlRow<ST, DB> but T is not an Option<U> (matches first impl but not second)
  • You can have an Option<T> such that T: Queryable<ST, DB> but not Option<T>: FromSqlRow<Nullable<ST>, DB>

@SergioBenitez

Compilation fails with the compiler citing implementation conflicts. Note that &str doesn't implement FromStr, so there shouldn't be a conflict.

The problem is that the compiler is conservatively assuming that &str might come to implement FromStr in the future. That may seem silly for this example, but in general, we add new impls all the time, and we want to protect downstream code from breaking when we add those impls.

This is a conservative choice, and is something we might want to relax over time. You can get the background here: http://smallcultfollowing.com/babysteps/blog/2015/01/14/little-orphan-impls/

Thank you for clarifying those two cases. It makes complete sense now


@aturon

The problem is that the compiler is conservatively assuming that &str might come to implement FromStr in the future. That may seem silly for this example, but in general, we add new impls all the time, and we want to protect downstream code from breaking when we add those impls.

Isn't this exactly what specialization is trying to address? With specialization, I would expect that even if an implementation of FromStr for &str were added in the future, the direct implementation of the Simple trait for &str would take precedence.

@SergioBenitez you need to put default fn in the more general impl. Your example isn't specializable.

I think "default" trait items being automatically considered default sounds confusing. You might want parametricity for a trait, as in Haskell, alongside easier impls. Also, you cannot easily grep for them as you can for default. It's not hard to both type the default keyword and give a default implementation, but as is, they cannot be separated. Also, if one wants to clarify the language, these "default" trait items could be renamed "trait proposed" items in documentation.

Note from #32999 (comment): if we do go with the lattice rule (or allow negative constraints), the "use an intermediate trait" trick to prevent further specialization of something will no longer work.

@Stebalien

Why won't it work? The trick limits the specialization to a private trait. You can't specialize the private trait if you can't access it.
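A minimal stable-Rust sketch of that private-trait gating (hypothetical names; this shows the sealing mechanics only, not specialization itself):

```rust
mod private {
    // `pub` so it can appear as a supertrait below, but the module
    // itself is private, so code outside this crate cannot name
    // `Sealed` and therefore cannot implement `Public` for new types.
    pub trait Sealed {}
    impl Sealed for u32 {}
}

// Only types implementing the inaccessible `Sealed` trait can
// implement `Public`.
pub trait Public: private::Sealed {
    fn describe(&self) -> String;
}

impl Public for u32 {
    fn describe(&self) -> String {
        format!("sealed impl for u32: {}", self)
    }
}

fn main() {
    assert_eq!(4u32.describe(), "sealed impl for u32: 4");
    println!("ok");
}
```

The same module-privacy mechanism is what keeps a specialization-limiting helper trait out of reach of downstream crates, as long as the helper trait really is in a private module.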

@arielb1 Ah. Good point. In my case, the trait isn't private.

I don't think the "externals can't specialize because of orphan forward-compatibility + coherence rules" reasoning is particularly interesting or useful, especially when we don't commit to our specific coherence rules.

Is there a way to access an overridden default impl? If so, this could aid in constructing tests. See Design By Contract and libhoare.

Allowing projection of default associated types during type-checking will allow enforcing type inequality at compile-time: https://gist.github.com/7c081574958d22f89d434a97b626b1e4

#![feature(specialization)]

pub trait NotSame {}

pub struct True;
pub struct False;

pub trait Sameness {
    type Same;
}

mod internal {
    pub trait PrivSameness {
        type Same;
    }
}

use internal::PrivSameness;

impl<A, B> Sameness for (A, B) {
    type Same = <Self as PrivSameness>::Same;
}

impl<A, B> PrivSameness for (A, B) {
    default type Same = False;
}
impl<A> PrivSameness for (A, A) {
    type Same = True;
}

impl<A, B> NotSame for (A, B) where (A, B): Sameness<Same=False> {}

fn not_same<A, B>() where (A, B): NotSame {}

fn main() {
    // would compile
    not_same::<i32, f32>();

    // would not compile
    // not_same::<i32, i32>();
}

edited per @burdges' comment

Just fyi @rphmeier one should probably avoid is.gd because it does not resolve for Tor users due to using CloudFlare. GitHub works fine with full URLs. And play.rust-lang.org works fine over Tor.

@burdges FWIW play.rust-lang.org itself uses is.gd for its "Shorten" button.

It can probably be changed, though: https://github.com/rust-lang/rust-playpen/blob/9777ef59b/static/web.js#L333

Used like this (https://is.gd/Ux6FNs):

#![feature(specialization)]
pub trait Foo {}
pub trait Bar: Foo {}
pub trait Baz: Foo {}

pub trait Trait {
    type Item;
}

struct Staff<T> { }

impl<T: Foo> Trait for Staff<T> {
    default type Item = i32;
}

impl<T: Foo + Bar> Trait for Staff<T> {
    type Item = i64;
}

impl<T: Foo + Baz> Trait for Staff<T> {
    type Item = f64;
}

fn main() {
    let _ = Staff { };
}

Error :

error: conflicting implementations of trait `Trait` for type `Staff<_>`: [--explain E0119]
  --> <anon>:20:1
20 |> impl<T: Foo + Baz> Trait for Staff<T> {
   |> ^
note: conflicting implementation is here:
  --> <anon>:16:1
16 |> impl<T: Foo + Bar> Trait for Staff<T> {
   |> ^

error: aborting due to previous error

Does the specialization feature support this, and is there another way to implement this currently?

@zitsen

These impls are not allowed by the current specialization design, because neither T: Foo + Bar nor T: Foo + Baz is more specialized than the other. That is, if you have some T: Foo + Bar + Baz, it's not clear which impl should "win".

We have some thoughts on a more expressive system that would allow you to _also_ give an impl for T: Foo + Bar + Baz and thus disambiguate, but that hasn't been fully proposed yet.

If negative trait bounds trait Baz: !Bar ever land, that could also be used with specialization to prove that the sets of types that implement Bar and those that implement Baz are distinct and individually specializable.

Seems @rphmeier 's reply is what I exactly want, impls for T: Foo + Bar + Baz would also help.

Just ignore this, I still have something to do with my case, and always exciting for the specialization and other features landing.

Thanks @aturon @rphmeier .

I've been playing around with specialization lately, and I came across this weird case:

#![feature(specialization)]

trait Marker {
    type Mark;
}

trait Foo { fn foo(&self); }

struct Fizz;

impl Marker for Fizz {
    type Mark = ();
}

impl Foo for Fizz {
    fn foo(&self) { println!("Fizz!"); }
}

impl<T> Foo for T
    where T: Marker, T::Mark: Foo
{
    default fn foo(&self) { println!("Has Foo marker!"); }
}

struct Buzz;

impl Marker for Buzz {
    type Mark = Fizz;
}

fn main() {
    Fizz.foo();
    Buzz.foo();
}

Compiler output:

error: conflicting implementations of trait `Foo` for type `Fizz`: [--explain E0119]
  --> <anon>:19:1
19 |> impl<T> Foo for T
   |> ^
note: conflicting implementation is here:
  --> <anon>:15:1
15 |> impl Foo for Fizz {
   |> ^

playpen

I believe that the above _should_ compile, and there are two interesting variations that actually do work as intended:

1) Removing the where T::Mark: Foo bound:

impl<T> Foo for T
    where T: Marker //, T::Mark: Foo
{
    // ...
}

playpen

2) Adding a "trait bound alias":

trait FooMarker { }
impl<T> FooMarker for T where T: Marker, T::Mark: Foo { }

impl<T> Foo for T where T: FooMarker {
    // ...
}

playpen

(Which _doesn't_ work if Marker is defined in a separate crate (!), see this example repo)

I also believe that this issue might be related to #20400 somehow

EDIT: I've opened an issue about this: #36587
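The helper-trait half of variation 2 compiles on stable Rust even without specialization, since no overlapping impls are involved; a self-contained sketch (names mirror the example above):

```rust
trait Marker {
    type Mark;
}

trait Foo {
    fn foo(&self) -> &'static str;
}

struct Fizz;
impl Marker for Fizz {
    type Mark = ();
}
impl Foo for Fizz {
    fn foo(&self) -> &'static str {
        "Fizz!"
    }
}

struct Buzz;
impl Marker for Buzz {
    type Mark = Fizz;
}

// The "trait bound alias": `T: FooMarker` holds exactly when both
// `T: Marker` and `T::Mark: Foo` hold.
trait FooMarker {}
impl<T> FooMarker for T
where
    T: Marker,
    T::Mark: Foo,
{
}

fn has_foo_marker<T: FooMarker>() -> bool {
    true
}

fn main() {
    // Buzz::Mark = Fizz, and Fizz: Foo, so the alias holds for Buzz.
    assert!(has_foo_marker::<Buzz>());
    // has_foo_marker::<Fizz>() would not compile: Fizz::Mark = (),
    // and `()` does not implement `Foo`.
    println!("ok");
}
```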

I'm encountering an issue with specialization. Not sure if it's an implementation problem or a problem in the way specialization is specified.

use std::vec::IntoIter as VecIntoIter;

pub trait ClonableIterator: Iterator {
    type ClonableIter;

    fn clonable(self) -> Self::ClonableIter;
}

impl<T> ClonableIterator for T where T: Iterator {
    default type ClonableIter = VecIntoIter<T::Item>;

    default fn clonable(self) -> VecIntoIter<T::Item> {
        self.collect::<Vec<_>>().into_iter()
    }
}

impl<T> ClonableIterator for T where T: Iterator + Clone {
    type ClonableIter = T;

    #[inline]
    fn clonable(self) -> T {
        self
    }
}

(playpen)
(By the way, it would be nice if this code landed in the stdlib one day.)

This code fails with:

error: method `clonable` has an incompatible type for trait:
 expected associated type,
    found struct `std::vec::IntoIter` [--explain E0053]
  --> <anon>:14:5
   |>
14 |>     default fn clonable(self) -> VecIntoIter<T::Item> {
   |>     ^

Changing the return value to Self::ClonableIter gives the following error:

error: mismatched types [--explain E0308]
  --> <anon>:15:9
   |>
15 |>         self.collect::<Vec<_>>().into_iter()
   |>         ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected associated type, found struct `std::vec::IntoIter`
note: expected type `<T as ClonableIterator>::ClonableIter`
note:    found type `std::vec::IntoIter<<T as std::iter::Iterator>::Item>`

Apparently you can't refer to the concrete type of a defaulted associated type, which I find quite limiting.

@tomaka it should work, the RFC text has this:

impl<T> Example for T {
    default type Output = Box<T>;
    default fn generate(self) -> Box<T> { Box::new(self) }
}

impl Example for bool {
    type Output = bool;
    fn generate(self) -> bool { self }
}

(https://github.com/rust-lang/rfcs/blob/master/text/1210-impl-specialization.md#the-default-keyword)

Which seems similar enough to your case to be relevant.

@aatch that example doesn't seem to compile with the intuitive definition for the example trait: https://play.rust-lang.org/?gist=97ff3c2f7f3e50bd3aef000dbfa2ca4e&version=nightly&backtrace=0

The specialization code explicitly disallows this -- see #33481, which I initially thought was an error but turned out to be a diagnostics issue. My PRs to improve the diagnostics here went unnoticed, and I haven't kept them up to date with the latest master for quite some time.

@rphmeier the RFC text suggests that it should be allowed though, that example is copied from it.

I had a play with some code that could benefit from specialization. I strongly think we should go for the lattice rule rather than chaining - it feels natural and was the only way to get the flexibility I needed (afaict).

If we went for default on the impl as well as individual items, could we enforce that if any item is overridden then they all must be? That would allow us to reason based on the precise type of a default assoc type (for example) in the other items, which seems like a useful boost in expressivity.

Should the following be allowed? I want to specialize a type so that ArrayVec is Copy when its element type is Copy, and otherwise has a destructor. I'm trying to accomplish it by using an internal field that is replaced by specialization.

I hoped this would compile, i.e that it deduces the copyability of ArrayVec<A>'s fields from the field types that are selected by the A: Copy + Array bound (compilable snippet on playground).

impl<A: Copy + Array> Copy for ArrayVec<A>
    //where <A as Repr>::Data: Copy
{ }

The commented-out where clause is not wanted because it exposes a private type Repr in the public interface. (It also ICEs anyway).

Edit: I had forgotten I reported issue #33162 about this already, I'm sorry.

Follow up on my comment, my actual use case:

// Ideal version

trait Scannable {}

impl<T: FromStr> Scannable for T {}
impl<T: FromStr> Scannable for Result<T, ()> {}

// But this doesn't follow from the specialisation rules because Result: !FromStr
// Lattice rule would allow filling in that gap or negative reasoning would allow specifying it.

// Second attempt

trait FromResult {
    type Ok;
    fn from(r: Result<Self::Ok, ()>) -> Self;
}

impl<T> FromResult for T {
    default type Ok = T;
    default fn from(r: Result<T, ()>) -> Self {...} // error: can't assume Ok == T, could do this if we had `default impl`
}

impl<T> FromResult for Result<T, ()> {
    type Ok = T;
    default fn from(r: Result<T, ()>) -> Self { r }
}

fn scan_from_str<T: FromResult>(x: &str) -> T
    where <T as FromResult>::Ok: FromStr  // Doesn't hold for T: FromStr because of the default on T::Ok
{ ... }

// Can also add the FromStr bound to FromResult::Ok, but doesn't help

// Third attempt
trait FromResult<Ok> {
    fn from(r: Result<Ok, ()>) -> Self;
}

impl<T> FromResult<T> for T {
    default fn from(r: Result<Self, ()>) -> Self { ... }
}

impl<T> FromResult<T> for Result<T, ()> {
    fn from(r: Result<T, ()>) -> Self { r }
}


fn scan_from_str<U: FromStr, T: FromResult<U>>(x: &str) -> T { ... }

// Error because we can't infer that U == String
let mut x: Result<String, ()> = scan_from_str("dsfsf");

@tomaka @Aatch

The problem there is that you are not allowed to rely on the value of other default items. So when you have this impl:

impl<T> ClonableIterator for T where T: Iterator {
    default type ClonableIter = VecIntoIter<T::Item>;

    default fn clonable(self) -> VecIntoIter<T::Item> {
    //                           ^^^^^^^^^^^^^^^^^^^^
        self.collect::<Vec<_>>().into_iter()
    }
}

At the spot where I highlighted, clonable is relying on Self::ClonableIter, but because ClonableIter is declared as default, you can't do that. The concern is that someone might specialize and override ClonableIter but _not_ clonable.

We had talked about some possible answers here. One of them was to let you use default to group together items where, if you override one, you must override all:

impl<T> ClonableIterator for T where T: Iterator {
    default {
        type ClonableIter = VecIntoIter<T::Item>;
        fn clonable(self) -> VecIntoIter<T::Item> { ... }
    }
}

This is ok, but a bit "rightward-drift inducing". The default also looks like a naming scope, which it is not. There might be some simpler variant that just lets you toggle between "override-any" (as today) vs "override-all" (what you need).

We had also hoped we could get by with leveraging impl Trait. The idea is that this most often comes up, as is the case here, when you want to customize the return type of methods. So perhaps you could rewrite the trait to use impl Trait:

pub trait ClonableIterator: Iterator {
    fn clonable(self) -> impl Iterator;
}

This would effectively be a kind of shorthand when implemented for a default group containing the type and the fn. (I'm not sure if there'd be a way to do that purely in the impl though.)
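As a side note for later readers: return-position impl Trait in traits did eventually stabilize (Rust 1.75), so the signature of this shorthand can at least be written on stable today. A sketch without any specialization, providing only the general buffering impl (the `I::Item: Clone` bound is my assumption, needed so the buffered iterator itself is Clone):

```rust
// RPITIT lets the trait promise "some clonable iterator" without
// naming an associated type explicitly. Stable since Rust 1.75.
trait ClonableIterator: Iterator + Sized {
    fn clonable(self) -> impl Iterator<Item = Self::Item> + Clone;
}

// Without specialization we can only provide the general buffering
// version; a specialized identity impl for already-Clone iterators
// would still need the feature discussed in this thread.
impl<I: Iterator> ClonableIterator for I
where
    I::Item: Clone,
{
    fn clonable(self) -> impl Iterator<Item = Self::Item> + Clone {
        // Buffer into a Vec; vec::IntoIter is Clone when Item: Clone.
        self.collect::<Vec<_>>().into_iter()
    }
}

fn main() {
    let it = (0..3).clonable();
    let copy = it.clone();
    assert_eq!(it.collect::<Vec<_>>(), vec![0, 1, 2]);
    assert_eq!(copy.collect::<Vec<_>>(), vec![0, 1, 2]);
    println!("ok");
}
```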

PS, sorry for the long delay in answering your messages, which I see date from _July_.

While impl Trait does help, there is no RFC that has been accepted or implemented which allows it to be used with trait bodies in any form, so looking to it for this RFC feels a bit odd.

I'm interested in implementing the default impl feature (where all items are default).
Would you accept a contribution on that?

@giannicic Definitely! I'd be happy to help mentor the work as well.

Is there currently a conclusion on whether associated types should be specializable?

The following is a simplification of my use-case, demonstrating a need for specializable associated types.
I have a generic data structure, say Foo, which coordinates a collection of container trait objects (&trait::Property). The trait trait::Property is implemented by both Property<T> (backed by Vec<T>) and PropertyBits (backed by BitVec, a bit vector).
In generic methods on Foo, I would like to be able to determine the right underlying data structure for T via associated types, but this requires specialization to have a blanket impl for non-special cases as follows.

trait ContainerFor {
    type P: trait::Property;
}

impl<T> ContainerFor for T {
    default type P = Property<T>; // default to the `Vec`-based version
}

impl ContainerFor for bool {
    type P = PropertyBits; // specialize to optimize for space
}

impl Foo {
    fn add<T>(&mut self, name: &str) {
        self.add_trait_obj(name, Box::new(<T as ContainerFor>::P::new()));
    }
    fn get<T>(&mut self, name: &str) -> Option<&<T as ContainerFor>::P> {
        self.get_trait_obj(name).and_then(|prop| prop.downcast::<_>())
    }
}

Thanks @aturon !
Basically I'm doing the work by adding a new "defaultness" attribute to the ast::ItemKind::Impl struct (and then using the new attribute together with the impl item's "defaultness" attribute), but there is also a quick and easy possibility: setting default on all the impl items of the default impl during parsing. To me this isn't a "complete" solution, since we lose the information that the "defaultness" is related to the impl and not to each item of the impl. Additionally, if there is a plan to introduce a partial impl, the first solution would already provide an attribute that can store partial as well as default. Just to be sure and not waste time, what do you think?

@giannicic @aturon may I propose we create a specific issue to discuss default impl ?

Would the lattice rule allow me to, given:

trait Foo {}

trait A {}
trait B {}
trait C {}
// ...

add implementations of Foo for subsets of types that implement some combination of A, B, C, ...:

impl<T> Foo for T where T: A { ... }
impl<T> Foo for T where T: B { ... }
impl<T> Foo for T where T: A + B { ... }
impl<T> Foo for T where T: B + C { ... }
// ...

and allow me to "forbid" some combinations, e.g., that A + C should never happen:

impl<T> Foo for T where T: A + C = delete;

?

Context: I landed into wanting this when implementing an ApproxEqual(Shape, Shape) trait for different kinds of shapes (points, cubes, polygons, ...) where these are all traits. I had to work around this by refactoring this into different traits, e.g., ApproxEqualPoint(Point, Point), to avoid conflicting implementations.

@gnzlbg

and allow me to "forbid" some combinations, e.g., that A + C should never happen:

No, this is not something that the lattice rule would permit. That would be more the domain of "negative reasoning" in some shape or kind.

Context: I landed into wanting this when implementing an ApproxEqual(Shape, Shape) trait for different kinds of shapes (points, cubes, polygons, ...) where these are all traits. I had to work around this by refactoring this into different traits, e.g., ApproxEqualPoint(Point, Point), to avoid conflicting implementations.

So @withoutboats has been promoting the idea of "exclusion groups", where you can declare that a certain set of traits are mutually exclusive (i.e., you can implement at most one of them). I envision this as kind of being like an enum (i.e., the traits are all declared together). I like the idea of this, particularly as (I think!) it helps to avoid some of the more pernicious aspects of negative reasoning. But I feel like more thought is needed on this front -- and also a good writeup that tries to summarize all the "data" floating around about how to think about negative reasoning. Perhaps now that I've (mostly) wrapped up my HKT and specialization series I can think about that...

@nikomatsakis :

So @withoutboats has been promoting the idea of "exclusion groups", where you can declare that a certain set of traits are mutually exclusive (i.e., you can implement at most one of them). I envision this as kind of being like an enum (i.e., the traits are all declared together). I like the idea of this, particularly as (I think!) it helps to avoid some of the more pernicious aspects of negative reasoning. But I feel like more thought is needed on this front -- and also a good writeup that tries to summarize all the "data" floating around about how to think about negative reasoning. Perhaps now that I've (mostly) wrapped up my HKT and specialization series I can think about that...

I thought about exclusion groups while writing this (you mentioned them in the forums the other day), but I don't think they can work, since in this particular example not all trait implementations are exclusive. The most trivial example is the Point and Float traits: a Float _can_ be a 1D point, so ApproxEqualPoint(Point, Point) and ApproxEqualFloat(Float, Float) cannot be exclusive. There are other examples like Square and Polygon, or Box | Cube and AABB (axis-aligned bounding box), where the "trait hierarchy" actually needs more complex constraints.

No, this is not something that the lattice rule would permit. That would be more the domain of "negative reasoning" in some shape or kind.

I would at least be able to implement the particular case and put an unimplemented!() in it. That would be enough, but obviously I would like it more if the compiler would statically catch those cases in which I call a function with an unimplemented!() in it (and at this point, we are again in negative reasoning land).

@gnzlbg lattice specialization would allow you to make that impl panic, but the idea of doing that makes me :cry:.

The idea of "exclusion groups" is really just negative supertrait bounds. One thing we haven't explored too thoroughly is the notion of reverse polarity specialization - allowing you to write a specialized impl that is of reversed polarity to its less specialized impl. For example, in this case you would just write:

impl<T> !Foo for T where T: A + C { }

I'm not fully sure what the implications of allowing that are. I think it connects to the issues Niko's already highlighted about how specialization is sort of conflating code reuse with polymorphism right now.

With all this discussion of negative reasoning and negative impls, I feel compelled to bring up the old Haskell idea of "instance chains" again (paper, paper, GHC issue tracker, Rust pre-RFC), as a potential source of inspiration if nothing else.

Essentially the idea is that anywhere you can write a trait impl, you can also write any number of "else if clauses" specifying a different impl that should apply in case the previous one(s) did not, with an optional final "else clause" specifying a negative impl (that is, if none of the clauses for Trait apply, then !Trait applies).

@withoutboats

The idea of "exclusion groups" is really just negative supertrait bounds.

I think that would be enough for my use cases.

I think it connects to the issues Niko's already highlighted about how specialization is sort of conflating code reuse with polymorphism right now.

I don't know if these can be untangled. I want to have:

  • polymorphism: a single trait that abstracts different implementations of an operation for lots of different types,
  • code reuse: instead of implementing the operation for each type, I want to implement them for groups of types that implement some traits,
  • performance: be able to override an already existing implementation for a particular type, or for a subset of types, that has a more specific set of constraints than the already existing implementations,
  • productivity: be able to write and test my program incrementally, instead of having to add a lot of impls for it to compile.

Covering all cases is hard, but if the compiler forces me to cover all cases:

trait Foo {}
trait A {}
trait B {}

impl<T> Foo for T where T: A { ... }
impl<T> Foo for T where T: B { ... }
// impl<T> Foo for T where T: A + B { ... }  //< compiler: need to add this impl!

and also gives me negative impls:

impl<T> !Foo for T where T: A + B { }
impl<T> !Foo for T where T: _ { } // _ => all cases not explicitly covered yet

I would be able to incrementally add impls as I need them and also get nice compiler errors when I try to use a trait with a type for which there is no impl.

I'm not fully sure what the implications of allowing that are.

Niko mentioned that there are problems with negative reasoning. FWIW the only thing negative reasoning is used for in the example above is to state that the user knows that an impl for a particular case is required, but has explicitly decided not to provide an implementation for it.

I just hit #33017 and don't see it linked here yet. It is marked as a soundness hole so it would be good to track here.

For https://github.com/dtolnay/quote/issues/7 I need something similar to this example from the RFC which doesn't work yet. cc @tomaka @Aatch @rphmeier who commented about this earlier.

trait Example {
    type Output;
    fn generate(self) -> Self::Output;
}

impl<T> Example for T {
    default type Output = Box<T>;
    default fn generate(self) -> Box<T> { Box::new(self) }
}

impl Example for bool {
    type Output = bool;
    fn generate(self) -> bool { self }
}

I stumbled upon the following workaround which gives a way to express the same thing.

#![feature(specialization)]

use std::fmt::{self, Debug};

///////////////////////////////////////////////////////////////////////////////

trait Example: Output {
    fn generate(self) -> Self::Output;
}

/// In its own trait for reasons, presumably.
trait Output {
    type Output: Debug + Valid<Self>;
}

fn main() {
    // true
    println!("{:?}", Example::generate(true));

    // box("s")
    println!("{:?}", Example::generate("s"));
}

///////////////////////////////////////////////////////////////////////////////

/// Instead of `Box<T>` just so the "{:?}" in main() clearly shows the type.
struct MyBox<T: ?Sized>(Box<T>);

impl<T: ?Sized> Debug for MyBox<T>
    where T: Debug
{
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "box({:?})", self.0)
    }
}

///////////////////////////////////////////////////////////////////////////////

/// Return type of the impl containing `default fn`.
type DefaultOutput<T> = MyBox<T>;

impl Output for bool {
    type Output = bool;
}

impl<T> Example for T where T: Pass {
    default fn generate(self) -> Self::Output {
        T::pass({
            // This is the impl you wish you could write
            MyBox(Box::new(self))
        })
    }
}

impl Example for bool {
    fn generate(self) -> Self::Output {
        self
    }
}

///////////////////////////////////////////////////////////////////////////////
// Magic? Soundness exploit? Who knows?

impl<T: ?Sized> Output for T where T: Debug {
    default type Output = DefaultOutput<T>;
}

trait Valid<T: ?Sized> {
    fn valid(ret: DefaultOutput<T>) -> Self;
}

impl<T: ?Sized> Valid<T> for DefaultOutput<T> {
    fn valid(ret: DefaultOutput<T>) -> Self {
        ret
    }
}

impl<T> Valid<T> for T {
    fn valid(_: DefaultOutput<T>) -> Self {
        unreachable!()
    }
}

trait Pass: Debug {
    fn pass(ret: DefaultOutput<Self>) -> <Self as Output>::Output;
}

impl<T: ?Sized> Pass for T where T: Debug, <T as Output>::Output: Valid<T> {
    fn pass(ret: DefaultOutput<T>) -> <T as Output>::Output {
        <T as Output>::Output::valid(ret)
    }
}

I am still working on https://github.com/dtolnay/quote/issues/7 and needed a diamond pattern. Here is my solution. cc @zitsen who asked about this earlier and @aturon and @rphmeier who responded.

#![feature(specialization)]

/// Can't have these impls directly:
///
///  - impl<T> Trait for T
///  - impl<T> Trait for T where T: Clone
///  - impl<T> Trait for T where T: Default
///  - impl<T> Trait for T where T: Clone + Default
trait Trait {
    fn print(&self);
}

fn main() {
    struct A;
    A.print(); // "neither"

    #[derive(Clone)]
    struct B;
    B.print(); // "clone"

    #[derive(Default)]
    struct C;
    C.print(); // "default"

    #[derive(Clone, Default)]
    struct D;
    D.print(); // "clone + default"
}

trait IfClone: Clone { fn if_clone(&self); }
trait IfNotClone { fn if_not_clone(&self); }

impl<T> Trait for T {
    default fn print(&self) {
        self.if_not_clone();
    }
}

impl<T> Trait for T where T: Clone {
    fn print(&self) {
        self.if_clone();
    }
}

impl<T> IfClone for T where T: Clone {
    default fn if_clone(&self) {
        self.clone();
        println!("clone");
    }
}

impl<T> IfClone for T where T: Clone + Default {
    fn if_clone(&self) {
        self.clone();
        Self::default();
        println!("clone + default");
    }
}

impl<T> IfNotClone for T {
    default fn if_not_clone(&self) {
        println!("neither");
    }
}

impl<T> IfNotClone for T where T: Default {
    fn if_not_clone(&self) {
        Self::default();
        println!("default");
    }
}

Hit a bug (or at least unexpected behavior from my perspective) with specialization and type inference: #38167

These two impls should be expected to be valid with specialization, right? It seems to not be successfully picking it up.

impl<T, ST, DB> ToSql<Nullable<ST>, DB> for T where
    T: ToSql<ST, DB>,
    DB: Backend + HasSqlType<ST>,
    ST: NotNull,
{
    ...
}

impl<T, ST, DB> ToSql<Nullable<ST>, DB> for Option<T> where
    T: ToSql<ST, DB>,
    DB: Backend + HasSqlType<ST>,
    ST: NotNull,
{
    ...
}

I filed https://github.com/rust-lang/rust/issues/38516 for some unexpected behavior I ran into while working on building specialization into Serde. Similar to https://github.com/rust-lang/rust/issues/38167, this is a case where the program compiles without the specialized impl and when it is added there is a type error. cc @bluss who was concerned about this situation earlier.

What if we allowed specialization without the default keyword within a single crate, similar to how we allow negative reasoning within a single crate?

My main justification is this: "the iterators and vectors pattern." Sometimes, users want to implement something for all iterators and for vectors:

impl<I> Foo for I where I: Iterator<Item = u32> { ... }
impl Foo for Vec<u32> { ... }

(This is relevant to other situations than iterators and vectors, of course, this is just one example.)

Today this doesn't compile, and there is tsuris and gnashing of teeth. Specialization solves this problem:

default impl<I> Foo for I where I: Iterator<Item = u32> { ... }
impl Foo for Vec<u32> { ... }

But in solving this problem, you have added a public contract to your crate: it is possible to override the iterator impl of Foo. Maybe we don't want to force you to do that - hence, local specialization without default.
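(As an aside: the overlap itself, though not the specialization relationship, can be expressed on stable Rust today by making the impls formally disjoint with a marker type parameter. A sketch with made-up names:)

```rust
// Marker types select which impl applies; because `Foo<ViaIter>` and
// `Foo<ViaVec>` are distinct instantiations of the trait as far as
// coherence is concerned, the impls do not overlap.
struct ViaIter;
struct ViaVec;

trait Foo<Marker> {
    fn describe(&self) -> &'static str;
}

impl<I: Iterator<Item = u32>> Foo<ViaIter> for I {
    fn describe(&self) -> &'static str { "iterator" }
}

impl Foo<ViaVec> for Vec<u32> {
    fn describe(&self) -> &'static str { "vec" }
}
```

Since at most one impl applies to any given receiver, the marker is inferred at the call site; the cost is that every bound mentioning Foo now carries the extra parameter, which is exactly the kind of contortion specialization aims to remove.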


The question I suppose is, what exactly is the role of default. Requiring default was, I think, originally a gesture toward explicitness and self-documenting code. Just as Rust code is immutable by default, private by default, safe by default, it should also be final by default. However, because "non-finality" is a global property, I cannot specialize an item unless I let you specialize an item.

Requiring default was, I think, originally a gesture toward explicitness and self-documenting code. However [..] I cannot specialize an item unless I let you specialize an item.

Is that really so bad though? If you want to specialize an impl then maybe other people want to as well.

I worry because just thinking about this RFC is already giving me PTSD flashbacks of working in C++ codebases which use obscene amounts of overloading and inheritance and having no idea wtf is going on in any line of code which has a method call in it. I really appreciate the lengths that @aturon has gone to to make specialization explicit and self-documenting.

Is that really so bad though? If you want to specialize an impl then maybe other people want to as well.

If other people only "maybe" want to specialize it too, and if there are good cases where we wouldn't want them to, we shouldn't make it impossible to specify this. (a bit similar to encapsulation: you want to access some data and maybe some other people want to as well -- so you explicitly mark _this data_ public, instead of defaulting all data to be public.)

I worry because just thinking about this RFC is already giving me PTSD flashbacks ...

But how would disallowing this specification prevent these things from happening?

if there are good cases where we wouldn't want them to, we shouldn't make it impossible to specify this.

It's not necessarily a good idea to give users a power whenever they might have a good use case for it. Not if it also enables users to write confusing code.

But how would disallowing this specification prevent these things from happening?

Say you see foo.bar() and you want to look at what bar() does. Right now, if you find the method implemented on a matching type and it's not marked default, you know that it's the method definition you're looking for. With @withoutboats' proposal this will no longer be true - instead you'll never know for sure whether you're actually looking at the code which is getting executed.

instead you'll never know for sure whether you're actually looking at the code which is getting executed.

This is quite an exaggeration of the effect of allowing specialization of non-default impls for local types. If you are looking at a concrete impl, you know you are looking at the correct impl. And you have access to the entire source of this crate; you can determine if this impl is specialized or not significantly sooner than "never."

Meanwhile, even with default, the problem remains when an impl has not been finalized. If the correct impl is actually a default impl, you are in the same situation of being unsure whether this is the correct impl. And of course, if specialization is employed, this will quite commonly be the case (for example, it is the case today for nearly every impl of ToString).

In fact I do think this is a rather serious problem, but I'm not convinced that default solves it. What we need are better code navigation tools. Currently rustdoc takes a very much "best effort" approach when it comes to trait impls - it doesn't link to their source and it doesn't even list impls that are provided by blanket impls.

I'm not saying this change is a slam dunk by any means, but I think it's worth a more nuanced consideration.

It's not necessarily a good idea to give users a power whenever they might have a good use case for it. Not if it also enables users to write confusing code.

Exactly, I absolutely agree. I think I'm talking about a different "user" here, which is the user of crates you write. You don't want them to freely specialize traits in your crate (possibly affecting the behavior of your crate in a hacky way). On the other hand, we'd be giving more power to the "user" you're talking about, namely the crate author, but even without @withoutboats' proposal, you'd have to use "default" and run into the same problem.

I think default helps in the sense that if you want to simplify reading code then you can ask that nobody use default, or establish rigorous documentation rules for using it. At that point, you need only worry about the defaults from std, which presumably folks would better understand.

I recall the idea that documentation rules could be imposed on usages of specialization contributed to getting the specialization RFC approved.

@withoutboats am I correct in reading your motivation for loosening of default as you want a restricted form of default which means "overridable, but only in this crate" (i.e., pub(crate) but for default)? However, to keep things simple, you are proposing changing the semantics of omitting default, rather than adding gradations of default-ness?

Correct. Doing something like default(crate) seems like overkill.

A priori, I'd imagine one could simulate that through what the crate exports though, no? Are there any situations where you could not simply introduce a private helper trait with the default methods and call it from your own final impls? You want the user to use your defaults but not supply any of their own?

Correct. Doing something like default(crate) seems like overkill.

I disagree. I really want a restricted form of default. I have been meaning to propose it. My motivation is that sometimes intersection impls etc will force you to add default, but that doesn't mean you want to allow for arbitrary crates to change your behavior. Sorry, have a meeting, I can try to elaborate with an example in a bit.

@nikomatsakis I have the same motivation, what I'm proposing is that we just remove the default requirement to specialize in the same crate, as opposed to adding more levers. :-)

If by chance this non-exported default turns out to be the more common usage, then a #[default_export] feature would be easier to remember by analogy with #[macro_export]. An intermediate option might be allowing this export feature for pub use or pub mod lines.

Using the pub keyword would be better, since Macros 2.0 will support macros as normal items and use pub instead of #[macro_use]. Using pub to indicate visibility across the board would be a big win for its consistency.

@withoutboats regardless, I think sometimes you will want to specialize locally but not necessarily open the doors to everyone.

Using the pub keyword would be better

Having pub default fn mean "publicly export the defaultness of the fn" as opposed to affecting the visibility of the function itself would be super confusing to newcomers.

@jimmycuadra is that what you meant by using the pub keyword? I agree with @sgrif that it seems more confusing, and if we're going to allow you to scope defaultness explicitly, the same syntax we decide on for scoping visibility seems like the correct path.

Probably not pub default fn exactly, because that is ambiguous, as you both mention. I was just saying there's value in having pub universally mean "expose something otherwise private to the outside." There's probably some formulation of syntax involving pub that would be visually different so as not to be confused with making the function itself public.

Although it is a bit syntaxey, I would not oppose default(foo) working like pub(foo) - the symmetry between the two marginally outweighs the fiddliness of the syntax for me.

Bikeshed warning: have we considered calling it overridable instead of default? It's more literally descriptive, and overridable(foo) reads better to me than default(foo) - the latter suggests "this is the default within the scope of foo, but something else might be the default elsewhere", while the former says "this is overridable within the scope of foo", which is correct.

I think the first two questions are really: Is exporting or not exporting defaultness significantly more common? Should not exporting defaultness be the default behavior?

Yes case: Maximizing the similarity with exports elsewhere dictates something like pub mod mymodule default; and pub use mymodule::MyTrait default;, or maybe with overridable. If needed, you could export defaultness for only some methods with pub use MyModule::MyTrait::{methoda,methodb} default;

No case: You need to express privateness, not publicness, which differs considerably from anything else in Rust anyways, so now default(crate) becomes the normal way to control these exports.

Also, if exporting and not exporting defaultness are comparably common, then you guys can probably choose arbitrarily to be in either the yes or no case, so again just picking pub use MyModule::MyTrait::{methoda,methodb} default; works fine.

All these notations look compatible anyways. Another option might be some special impl that closed off the defaults, but that sounds complex and strange.

@burdges Do you have the labels "yes case" and "no case" backwards there, or am I misunderstanding what you're saying?

Yup, oops! Fixed!

We have impl<T> Borrow<T> for T where T: ?Sized so that a Borrow<T> bound can treat owned values as if they were borrowed.
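For reference, a minimal stable-Rust illustration of that blanket impl in action (char_count is a made-up name):

```rust
use std::borrow::Borrow;

// Accepts both owned and borrowed forms of a string: `String` via
// `impl Borrow<str> for String`, and `&str` via the blanket
// `impl<T: ?Sized> Borrow<T> for &T`.
fn char_count(s: impl Borrow<str>) -> usize {
    s.borrow().chars().count()
}
```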

I suppose we could use specialization to optimize away calls to clone from a Borrow<T>, yes?

pub trait CloneOrTake<T> {
    fn clone_or_take(self) -> T;
}

impl<B, T> CloneOrTake<T> for B where B: Borrow<T>, T: Clone {
    #[inline]
    default fn clone_or_take(self) -> T { self.borrow().clone() }
}
impl<T> CloneOrTake<T> for T {
    #[inline]
    fn clone_or_take(self) -> T { self }
}

I'd think this might make Borrow<T> usable in more situations. I dropped the T: ?Sized bound because one presumably needs Sized when returning T.

Another approach might be

pub trait ToOwnedFinal: ToOwned {
    fn to_owned_final(self) -> Self::Owned;
}

impl<B> ToOwnedFinal for B where B: ToOwned {
    #[inline]
    default fn to_owned_final(self) -> Self::Owned { self.to_owned() }
}
impl<T: ToOwned<Owned = T>> ToOwnedFinal for T {
    #[inline]
    fn to_owned_final(self) -> T { self }
}

We've made some possibly troubling discoveries today, you can read the IRC logs here: https://botbot.me/mozilla/rust-lang/

I'm not 100% confident about all of the conclusions we reached, especially since Niko's comments after the fact seem uplifting. For a little while it seemed a bit apocalyptic to me.

One thing I do feel fairly sure about is that requiring the default keyword cannot be made compatible with a guarantee that adding new default impls is always backward compatible. Here's the demonstration:

crate parent v 1.0.0

trait A { }
trait B { }
trait C {
    fn foo(&self);
}

impl<T> C for T where T: B {
    // No default, not specializable!
    fn foo(&self) { panic!() }
}

crate client (depends on parent)

extern crate parent;

struct Local;

impl parent::A for Local { }
impl parent::C for Local {
    fn foo(&self) { }
}

Local implements A and C but not B. If local implemented B, its impl of C would conflict with the non-specializable blanket impl of C for T where T: B.

crate parent v 1.1.0

// Same code as before, but add:
default impl<T> B for T where T: A { }

This impl has been added, and is a completely specializable impl, so we've said it's a non-breaking change. However, it creates a transitive implication - we already had "all B impl C (not specializable)"; by adding "all A impl B (specializable)", we've implicitly added the statement "all A impl C (not specializable)". Now the child crate cannot upgrade.
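The pre-1.1.0 state can be condensed into a single-crate sketch that compiles on stable Rust today; uncommenting the final impl reproduces the conflict that the new blanket impl introduces:

```rust
trait A {}
trait B {}
trait C {
    fn foo(&self);
}

// The non-specializable blanket impl from `parent` 1.0.0.
impl<T: B> C for T {
    fn foo(&self) { panic!() }
}

struct Local;
impl A for Local {}
// Fine today: `Local` does not implement `B`, so there is no overlap.
impl C for Local {
    fn foo(&self) {}
}

// The 1.1.0 addition. With this impl, `Local: A` implies `Local: B`,
// making the two `C` impls overlap: uncommenting it turns the code
// above into a coherence error (E0119).
// impl<T: A> B for T {}
```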


It might be the case that the idea of guaranteeing that adding specializable impls is not a breaking change is totally out the window, because Aaron showed (as you can see in the logs linked above) that you can write impls which make equivalent guarantees regarding defaultness. However, Niko's later comments suggest that such impls may be prohibited (or at least prohibitable) by the orphan rules.

So it's uncertain to me whether the 'impls are non-breaking' guarantee is salvageable, but it is certain that it is not compatible with explicit control over impl finality.

Is there any plan on allowing this?

struct Foo;

trait Bar {
    fn bar<T: Read>(stream: &T);
}

impl Bar for Foo {
    fn bar<T: Read>(stream: &T) {
        let stream = BufReader::new(stream);

        // Work with stream
    }

    fn bar<T: BufRead>(stream: &T) {
        // Work with stream
    }
}

So essentially a specialization for a template function which has a type parameter with a bound on A where the specialized version has a bound on B (which requires A).

@torkleyy not currently, but you can secretly do it by creating a trait which is implemented for both T: Read and T: BufRead, and putting the parts of your code you want to specialize in the impls of that trait. It doesn't even need to be visible in the public API.

Regarding the backwards compatibility issue, I think thanks to the orphan rules we can get away with these rules:

_An impl is backwards compatible to add unless:_

  • _The trait being impl'd is an auto trait._
  • _The receiver is a type parameter, and every trait in the impl previously existed._

That is, I think in all of the problematic examples the added impl is a blanket impl. We wanted to say that fully default blanket impls are also okay, but I think we just have to say that adding blanket impls of existing traits can be a breaking change.

The question is what guarantee do we want to make in the face of that - e.g. I think it would be a very nice property if at least a blanket impl can only be a breaking change based on the code in your crate, so you can review your crate and know with certainty whether or not you need to increment the major version.

@withoutboats

Regarding the backwards compatibility issue, I think thanks to the orphan rules we can get away with these rules:

_An impl is backwards compatible to add unless:_

  • _The trait being impl'd is an auto trait._
  • _The receiver is a type parameter, and every trait in the impl previously existed._

That is, I think in all of the problematic examples the added impl is a blanket impl. We wanted to say that fully default blanket impls are also okay, but I think we just have to say that adding blanket impls of existing traits can be a breaking change.

A week and many discussions later, this has unfortunately turned out not to be the case.

The results we've had are :crying_cat_face:, but I think what I wrote there is the same as your conclusion. Adding blanket impls is a breaking change, no matter what. But only blanket impls (and auto trait impls); as far as I know we've not found a case where a non-blanket impl could break downstream code (and that would be very bad).

I did think at one point that we might be able to relax the orphan rules so that you could implement traits for types like Vec<MyType>, but if we did that this situation would then play out in exactly the same way there:

//crate A

trait Foo { }

// new impl
// impl<T> Foo for Vec<T> { }
// crate B
extern crate A;

use A::Foo;

trait Bar {
    type Assoc;
}

// Sadly, this impl is not an orphan
impl<T> Bar for Vec<T> where Vec<T>: Foo {
    type Assoc = ();
}
// crate C

struct Baz;

// Therefore, this impl must remain an orphan
impl Bar for Vec<Baz> {
    type Assoc = bool;
}

@withoutboats Ah, I understood your two-bullet list as or rather than and, which it seems is what you meant?

@aturon Yea, I meant 'or' - those are the two cases where it is a breaking change. Any auto trait impl, no matter how concrete, is a breaking change because of the way we allow negative reasoning about them to propagate: https://is.gd/k4Xtlp

That is, unless it contains new names. AFAIK an impl that contains a new name is never breaking.

@withoutboats I wonder if we can/should restrict people relying on negative logic around auto-traits. That is, if we said that adding new impls of auto traits is a legal breaking change, we might then warn about impls that could be broken by an upstream crate adding Send. This would work best if we had:

  1. stable specialization, one could overcome the warnings by adding default in strategic places (much of the time);
  2. some form of explicit negative impls, so that types like Rc could declare their intention to never be Send -- but then we have those for auto traits, so we could take them into account.

I don't know; I think it depends on whether or not there's strong motivation. It seems especially unlikely that you'll realize a type could have an unsafe impl of Send/Sync after you've already released it; most of the time that it would be safe, you'll have written the type with the foreknowledge that it would be (because that's the point of the type).

I add unsafe impl Send/Sync after the fact all the time. Sometimes because I make it thread safe, sometimes because I realize the C API I'm interfacing with is fine to share across threads, and sometimes it's just because whether something should be Send/Sync isn't what I'm thinking about when I introduce a type.

I add them after the fact as well when binding C APIs - often because someone explicitly asks for those bounds so I then go through and check what the underlying library guarantees.

One thing I don't love about how specializing associated types works right now is that this pattern doesn't work:

trait Buffer: Read {
    type Buffered: BufRead;
    fn buffer(self) -> impl BufRead;
}

impl<T: Read> Buffer for T {
    default type Buffered = BufReader<T>;
    default fn buffer(self) -> BufReader<T> {
        BufReader::new(self)
    }
}

impl<T: BufRead> Buffer for T {
    type Buffered = Self;
    fn buffer(self) -> T {
        self
    }
}

This is because the current system requires that this impl would be valid:

impl Buffer for SomeRead {
    type Buffered = SomeBufRead;
    // no overriding of fn buffer, it no longer returns Self::Buffered
}

impl Trait in traits would relieve a lot of the desire for this sort of pattern, but I wonder if there isn't a better solution where the generic impl is valid but that specialization doesn't work because it introduces a type error?

@withoutboats Yeah, this is one of the main unresolved questions about the design (which I'd forgotten to bring up in recent discussions). There's a fair amount of discussion about this on the original RFC thread, but I'll try to write up a summary of the options/tradeoffs soon.

@aturon Is the current solution the most conservative (forward compatible with whatever we want to do) or is it a decision we have to make before stabilizing?

I personally think the only real solution to this problem that @withoutboats raised is to allow items to be "grouped" together when you specify the default tag. It's kind of the better-is-better solution, but I feel like the worse-is-better variant (overriding any means overriding all) is quite a bit worse. (But actually @withoutboats the way you wrote this code is confusing. I think in place of using impl BufRead as the return type of buffer, you meant Self::Buffered, right?)

In that case, the following would be permitted:

trait Buffer: Read {
    type Buffered: BufRead;
    fn buffer(self) -> Self::Buffered;
}

impl<T: Read> Buffer for T {
    default {
        type Buffered = BufReader<T>;
        fn buffer(self) -> BufReader<T> {
            BufReader::new(self)
        }
    }
}

impl<T: BufRead> Buffer for T {
    type Buffered = Self;
    fn buffer(self) -> T {
        self
    }
}

But perhaps we can infer these groupings? I've not given it much thought, but it seems that the fact that item defaults are "entangled" is visible from the trait definition.

But actually @withoutboats the way you wrote this code is confusing. I think in place of using impl BufRead as the return type of buffer, you meant Self::Buffered, right?

Yes, I had modified the solution to an impl Trait based one & then switched back but missed the return type in the trait.

Maybe something like the type system of this language may also be interesting, since it seems to be similar to Rusts, but with some features, that may solve the current problems.
(A <: B would in Rust be true when A is a struct and implements trait B, or when A is a trait, and generic implementations for objects of this trait exist, I think)

It seems there is an issue with the Display trait for specialization.
For instance, this example does not compile:

use std::fmt::Display;

pub trait Print {
    fn print(&self);
}

impl<T: Display> Print for T {
    default fn print(&self) {
        println!("Value: {}", self);
    }
}

impl Print for () {
    fn print(&self) {
        println!("No value");
    }
}

fn main() {
    "Hello, world!".print();
    ().print();
}

with the following error:

error[E0119]: conflicting implementations of trait `Print` for type `()`:
  --> src/main.rs:41:1
   |
35 |   impl<T: Display> Print for T {
   |  _- starting here...
36 | |     default fn print(&self) {
37 | |         println!("Value: {}", self);
38 | |     }
39 | | }
   | |_- ...ending here: first implementation here
40 | 
41 |   impl Print for () {
   |  _^ starting here...
42 | |     fn print(&self) {
43 | |         println!("No value");
44 | |     }
45 | | }
   | |_^ ...ending here: conflicting implementation for `()`

while this compiles:

pub trait Print {
    fn print(&self);
}

impl<T: Default> Print for T {
    default fn print(&self) {
    }
}

impl Print for () {
    fn print(&self) {
        println!("No value");
    }
}

fn main() {
    "Hello, world!".print();
    ().print();
}

Thanks for looking into this issue.

@antoyo are you sure that's because Display is special, or could it be because Display isn't implemented for tuples while Default is?

@shepmaster
I don't know if it is about Display, but the following works with a Custom trait not implemented for tuples:

pub trait Custom { }

impl<'a> Custom for &'a str { }

pub trait Print {
    fn print(&self);
}

impl<T: Custom> Print for T {
    default fn print(&self) {
    }
}

impl Print for () {
    fn print(&self) {
        println!("No value");
    }
}

fn main() {
    "Hello, world!".print();
    ().print();
}

By the way, here is the real thing that I want to achieve with specialization:

pub trait Emit<C, R> {
    fn emit(callback: C, value: Self) -> R;
}

impl<C: Fn(Self) -> R, R, T> Emit<C, R> for T {
    default fn emit(callback: C, value: Self) -> R {
        callback(value)
    }
}

impl<C> Emit<C, C> for () {
    fn emit(callback: C, _value: Self) -> C {
        callback
    }
}

I want to call a function by default, or return a value if the parameter would be unit.
I get the same error about conflicting implementations.
It is possible (or will this be possible) to do that with specialization?
If not, what are the alternatives?

Edit: I think I figured out why it does not compile:
T in for T is more general than () in for () so the first impl cannot be the specialization.
And C is more general than C: Fn(Self) -> R so the second impl cannot be the specialization.
Please tell me if I'm wrong.
But I still don't get why it does not work with the first example with Display.

This is currently the correct behavior.

In the Custom example, those impls do not overlap because of special local negative reasoning. Because the trait is from this crate, we can infer that (), which does not have an impl of Custom, does not overlap with T: Custom. No specialization necessary.

However, we do not perform this negative reasoning for traits that aren't from your crate. The standard library could add Display for () in the next release, and we don't want that to be a breaking change. We want libraries to have the freedom to make those kinds of changes. So even though () doesn't impl Display, we can't use that information in the overlap check.

But also, because () doesn't impl Display, it is not more specific than T: Display. This is why specialization does not work, whereas in the Default case, (): Default, therefore that impl is more specific than T: Default.

Impls like this one are sort of in 'limbo', where we can neither assume they overlap nor that they don't. We're trying to figure out a principled way to make this work, but it won't be in the first implementation of specialization; it will come later as a backwards-compatible extension to the feature.

I filed #40582 to track the lifetime-related soundness issue.

I had an issue trying to use specialization; I don't think it's quite the same as what @antoyo had. I filed it as a separate issue, #41140, but I can bring the example code from that into here if necessary.

@afonso360 No, a separate issue is fine.

As a general point: at this point further work on specialization is blocked on the work on Chalk, which should allow us to tackle soundness issues and is also likely to clear up the ICEs being hit today.

Can someone clarify if this is a bug, or something that is purposely forbidden? https://is.gd/pBvefi

@sgrif I believe the issue here is just that projection of default associated types is disallowed. Diagnostics could be better though: https://github.com/rust-lang/rust/issues/33481

Could you elaborate on why it is expected to be disallowed? We know that no more specific impl could be added, since it would violate the orphan rules.

This comment indicates it's necessary in some cases to ensure soundness (although I don't know why) and in others to force consumers of the interface to treat it as an abstract type: https://github.com/rust-lang/rust/blob/e5e664f/src/librustc/traits/project.rs#L41

Was anyone ever able to look at https://github.com/rust-lang/rust/issues/31844#issuecomment-266221638 ? Those impls should be valid with specialization as far as I can tell. I believe there is a bug that is preventing them.

@sgrif I believe the issue with your code there may be similar to the issue in https://github.com/rust-lang/rust/issues/31844#issuecomment-284235369 which @withoutboats explained in https://github.com/rust-lang/rust/issues/31844#issuecomment-284268302. That being said, based on @withoutboats's comment, it seems that the present local reasoning should allow your example to compile, but perhaps I'm mistaken as to what's expected to work.

As an aside, I tried to implement the following, unsuccessfully:

trait Optional<T> {
    fn into_option(self) -> Option<T>;
}

impl<R, T: Into<R>> Optional<R> for T {
    default fn into_option(self) -> Option<R> {
        Some(self.into())
    }
}

impl<R> Optional<R> for Option<R> {
    fn into_option(self) -> Option<R> {
        self
    }
}

I intuitively expected Option<R> to be more specific than <R, T: Into<R>> T, but of course, nothing prevents an impl<R> Into<R> for Option<R> in the future.

I'm not sure why this is disallowed, however. Even if an impl<R> Into<R> for Option<R> was added in the future, I would still expect Rust to choose the non-default implementation, so as far as I can see, allowing this code has no implication on forward-compatibility.

In all, I find specialization very frustrating to work with. Just about everything I expect to work doesn't. The only cases where I've had success with specialization are very simple ones, such as having two impls that cover T where T: A and T where T: A + B. I have a hard time getting other things to work, and the error messages don't indicate why attempts to specialize don't work. Of course, there's still a road ahead, so I don't expect very helpful error messages. But there seem to be quite a few cases where I really expect something to work (like above) but it just doesn't, and it's currently quite difficult for me to ascertain whether that's because I've misunderstood what's allowed (and more importantly, why), whether something is wrong, or whether something just hasn't been implemented yet. A nice overview of what's going on with this feature as it stands would be very helpful.

I'm not positive this is in the right place, but we ran into a problem on the users forum that I'd like to mention here.

The following code (which is adapted from the RFC here) does not compile on nightly:

#![feature(specialization)]

trait Example {
    type Output;
    fn generate(self) -> Self::Output;
}

default impl<T> Example for T {
    type Output = Box<T>;
    fn generate(self) -> Self::Output { Box::new(self) }
}

impl Example for bool {
    type Output = bool;
    fn generate(self) -> Self::Output { self }
}

This doesn't really seem like a glitch but more like a usability problem - if a hypothetical impl specialized only the associated type in the example above, the default impl of generate wouldn't typecheck.

Link to the thread here

@burns47 there is a confusing but useful workaround here: https://github.com/rust-lang/rust/issues/31844#issuecomment-263175793.

@dtolnay Not quite satisfactory - what if we're specializing on traits we don't own (and can't modify)? We shouldn't need to rewrite/refactor trait definitions to do this IMO.

Can anyone comment as to whether the code in the following issue is intentionally rejected? https://github.com/rust-lang/rust/issues/45542

Would specialization allow adding something like the following to libcore?

impl<T: Ord> Eq for T {}

impl<T: Ord> PartialEq for T {
    default fn eq(&self, other: &Self) -> bool {
        self.cmp(other) == Ordering::Equal
    }
}

impl<T: Ord> PartialOrd for T {
    default fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

This way you could implement Ord for your custom type and have Eq, PartialEq, and PartialOrd be automatically implemented.

Note that implementing Ord and simultaneously deriving PartialEq or PartialOrd is dangerous and can lead to very subtle bugs! With these default impls you would be less tempted to derive those traits, so the problem would be somewhat mitigated.
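To illustrate that hazard, here is a hypothetical example (the type and its fields are made up) where a derived PartialOrd silently disagrees with a hand-written Ord:

```rust
use std::cmp::Ordering;

// PartialOrd is derived (lexicographic over both fields), but Ord is
// written by hand to compare only the second field. The two orderings
// disagree, violating the contract that Ord and PartialOrd be consistent.
#[derive(PartialEq, Eq, PartialOrd)]
struct Pair(u32, u32);

impl Ord for Pair {
    fn cmp(&self, other: &Self) -> Ordering {
        self.1.cmp(&other.1) // ignores the first field
    }
}

fn main() {
    let a = Pair(0, 2);
    let b = Pair(1, 1);
    // Derived PartialOrd looks at the first field first: a < b.
    assert_eq!(a.partial_cmp(&b), Some(Ordering::Less));
    // Hand-written Ord looks only at the second field: a > b.
    assert_eq!(a.cmp(&b), Ordering::Greater);
}
```

Code that mixes `<` (PartialOrd) with sorting (Ord) on such a type will see two different orders, which is exactly the kind of subtle bug the blanket impls above would prevent.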


Alternatively, we modify derivation to take advantage of specialization. For example, writing #[derive(PartialOrd)] above struct Foo(String) could generate the following code:

impl PartialOrd for Foo {
    default fn partial_cmp(&self, other: &Foo) -> Option<Ordering> {
        self.0.partial_cmp(&other.0)
    }
}

impl PartialOrd for Foo where Foo: Ord {
    fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
        Some(self.cmp(other))
    }
}

This way the default impl gets used if Ord is not implemented. But if it is, then PartialOrd relies on Ord. Unfortunately, this doesn't compile: error[E0119]: conflicting implementations of trait `std::cmp::PartialOrd` for type `Foo`

@stjepang I certainly hope the blankets like that can be added -- impl<T:Copy> Clone for T too.

I think

impl<T: Ord> PartialEq for T

should be

impl<T, U> PartialEq<U> for T where T : PartialOrd<U>

because PartialOrd requires PartialEq and can provide it too.

Right now, one cannot really use associated types to constrain a specialization, both because they cannot be left unspecified and because they trigger unneeded recursion. See https://github.com/dhardy/rand/issues/18#issuecomment-358147645

Eventually, I'd love to see what I'm calling specialization groups with the syntax proposed by @nikomatsakis here https://github.com/rust-lang/rust/issues/31844#issuecomment-249355377 and independently by me. I'd like to write an RFC on that proposal later when we're closer to stabilizing specialization.

Just in case nobody saw it, this blog post covers a proposal to make specialization sound in the face of lifetime-based dispatch.

As copy closures were already stabilized in beta, developers have more motivation to stabilize specialization now. The reason is that Fn and FnOnce + Clone represent two overlapping sets of closures, and in many cases we need to implement traits for both of them.

Just figured out that the wording of RFC 2132 seems to imply that there are only 5 kinds of closures:

  • FnOnce (a move closure with all captured variables being neither Copy nor Clone)
  • FnOnce + Clone (a move closure with all captured variables being Clone)
  • FnOnce + Copy + Clone (a move closure with all captured variables being Copy and so Clone)
  • FnMut + FnOnce (a non-move closure with mutated captured variables)
  • Fn + FnMut + FnOnce + Copy + Clone (a non-move closure without mutated captured variables)

So if specialization is not available in the near future, maybe we should update our definition of the Fn traits so Fn does not overlap with FnOnce + Clone?

I understand that someone may have already implemented specific types that are Fn without Copy/Clone, but should this be deprecated? I think there is always a better way to do the same thing.

Is the following supposed to be allowed by specialization (note the absence of default) or is it a bug?

#![feature(specialization)]
mod ab {
    pub trait A {
        fn foo_a(&self) { println!("a"); }
    }

    pub trait B {
        fn foo_b(&self) { println!("b"); }
    }

    impl<T: A> B for T {
        fn foo_b(&self) { println!("ab"); }
    }

    impl<T: B> A for T {
        fn foo_a(&self) { println!("ba"); }
    }
}

use ab::B;

struct Foo;

impl B for Foo {}

fn main() {
    Foo.foo_b();
}

without specialization, this fails to build with:

error[E0119]: conflicting implementations of trait `ab::B` for type `Foo`:
  --> src/main.rs:24:1
   |
11 |     impl<T: A> B for T {
   |     ------------------ first implementation here
...
24 | impl B for Foo {}
   | ^^^^^^^^^^^^^^ conflicting implementation for `Foo`

@glandium what on earth is going on there? Nice example, here the playground link: https://play.rust-lang.org/?gist=fc7cf5145222c432e2bd8de1b0a425cd&version=nightly&mode=debug

Is it? There is no empty impl in my example.

@glandium

 impl B for Foo {}

@MoSal but that impl "isn't empty" since B adds a method with a default implementation.

@gnzlbg It is empty by definition. Nothing between the braces.


#![feature(specialization)]

use std::borrow::Borrow;

#[derive(Debug)]
struct Bla {
    bla: Vec<Option<i32>>
}

// Why is this a conflict ?
impl From<i32> for Bla {
    fn from(i: i32) -> Self {
        Bla { bla: vec![Some(i)] }
    }
}

impl<B: Borrow<[i32]>> From<B> for Bla {
    default fn from(b: B) -> Self {
        Bla { bla: b.borrow().iter().map(|&i| Some(i)).collect() }
    }
}

fn main() {
    let b : Bla = [1, 2, 3].into();
    println!("{:?}", b);
}

error[E0119]: conflicting implementations of trait `std::convert::From<i32>` for type `Bla`:
  --> src/main.rs:17:1
   |
11 | impl From<i32> for Bla {
   | ---------------------- first implementation here
...
17 | impl<B: Borrow<[i32]>> From<B> for Bla {
   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ conflicting implementation for `Bla`
   |
   = note: upstream crates may add new impl of trait `std::borrow::Borrow<[i32]>` for type `i32` in future versions

Wouldn't specialization prevent possible future conflicts?

Goodness me, this is a slow-moving feature! No progress in over two years, it seems (certainly according to the original post). Has the lang team abandoned this?

@alexreg see http://aturon.github.io/2018/04/05/sound-specialization/ for the latest development.

@alexreg It turns out soundness is _hard_. I believe there is some work on the "always applicable impls" idea currently happening, so there is progress. See https://github.com/rust-lang/rust/pull/49624. Also, I believe that the chalk working group is working on implementing the "always applicable impls" idea too, but I don't know how far that has gotten.

After a bit of wrangling, it seems it is possible to effectively implement intersection impls already via a hack using specialization and overlapping_marker_traits.

https://play.rust-lang.org/?gist=cb7244f41c040db41fc447d491031263&version=nightly&mode=debug

I tried to write a recursive specialized function to implement an equivalent to this C++ code:


C++ code

#include <cassert>
#include <vector>

template<typename T>
size_t count(T elem)
{
    return 1;
}

template<typename T>
size_t count(std::vector<T> vec)
{
    size_t n = 0;
    for (auto elem : vec)
    {
        n += count(elem);
    }
    return n;
}

int main()
{
    auto v1 = std::vector{1, 2, 3};
    assert(count(v1) == 3);

    auto v2 = std::vector{ std::vector{1, 2, 3}, std::vector{4, 5, 6} };
    assert(count(v2) == 6);

    return 0;
}


I tried this:


Rust code

#![feature(specialization)]

trait Count {
    fn count(self) -> usize;
}

default impl<T> Count for T {
    fn count(self) -> usize {
        1
    }
}

impl<T> Count for T
where
    T: IntoIterator,
    T::Item: Count,
{
    fn count(self) -> usize {
        let i = self.into_iter();

        i.map(|x| x.count()).sum()
    }
}

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(v.count(), 3);

    let v = vec![
        vec![1, 2, 3],
        vec![4, 5, 6],
    ];
    assert_eq!(v.count(), 6);
}


But I am getting an:

overflow evaluating the requirement `{integer}: Count`

I do not think that this should happen because impl<T> Count for T where T::Item: Count should not overflow.

EDIT: sorry, I just saw that this was already mentioned

@Boiethios Your use case works if you put default on the fn and not on the impl:

#![feature(specialization)]

trait Count {
    fn count(self) -> usize;
}

impl<T> Count for T {
    default fn count(self) -> usize {
        1
    }
}

impl<T> Count for T
where
    T: IntoIterator,
    T::Item: Count,
{
    fn count(self) -> usize {
        let i = self.into_iter();

        i.map(|x| x.count()).sum()
    }
}

fn main() {
    let v = vec![1, 2, 3];
    assert_eq!(v.count(), 3);

    let v = vec![vec![1, 2, 3], vec![4, 5, 6]];
    assert_eq!(v.count(), 6);
}

Has the soundness hole still not been fixed yet?

@alexreg I don't think so. See http://smallcultfollowing.com/babysteps/blog/2018/02/09/maximally-minimal-specialization-always-applicable-impls/

My guess is that everyone's focused on the edition right now...

Okay thanks... seems like this issue is dragging on forever, but fair enough. It's tough, I know. And attention is directed elsewhere right now unfortunately.

Can someone more concretely explain the rationale behind not allowing projections for default associated types in fully-monomorphic cases? I have a use case where I would like that functionality (in particular, it would be semantically incorrect for the trait to ever be invoked with types that weren't fully monomorphic), and if there's no soundness issue I don't completely understand why it's disallowed.

@pythonesque There's some discussion at https://github.com/rust-lang/rust/pull/42411

Ah, I understand, if it turns out that projection interacts badly with specialization in general. And it is indeed true that what I want is of a "negative reasoning" flavor (though closed traits would not really be sufficient).

Unfortunately, I'm not sure if there's really any way to do what I want without such a feature: I'd like to have an associated type that outputs "True" when two passed-in types implementing a particular trait are syntactically equal, and "False" when they aren't (with the "False" case triggering a more expensive trait search which can decide whether they are "semantically" equal). The only real alternative seems (to me) to be to just always do the expensive search, which is fine in theory, but it can be a lot more expensive.

(I could work around this if the trait were intended to be closed, by just enumerating every possible pair of constructors in the head position and having them output True or False; but it's intended to be open to extension outside the repository, so that can't possibly work, especially since implementations in two different user repositories wouldn't necessarily know about each other).

Anyway, maybe this is just an indication that what I want to do is a bad fit for the trait system and I should switch to some other mechanism, like macros :P

And it is indeed true that what I want is of a "negative reasoning" flavor (though closed traits would not really be sufficient).

An alternative to negative reasoning is requiring that a type implements only one trait of a closed set of traits, such that implementations with the other traits in the set cannot overlap (e.g. T implements one of { Float | Int | Bool | Ptr }).
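As a stable-Rust sketch of that idea (all names invented for illustration), membership in the closed set can at least be encoded through a single trait with an associated discriminant, so the "one of" choice is made exactly once per type:

```rust
// The closed set { Float | Int | Bool | Ptr } as a discriminant.
#[derive(Debug, PartialEq)]
enum Kind {
    Float,
    Int,
    Bool,
    Ptr,
}

// Because each type can have at most one impl of OneOf, it belongs to at
// most one member of the set. True mutual exclusion between *separate*
// traits is what Rust cannot currently express.
trait OneOf {
    const KIND: Kind;
}

impl OneOf for f64 {
    const KIND: Kind = Kind::Float;
}
impl OneOf for i64 {
    const KIND: Kind = Kind::Int;
}
impl OneOf for bool {
    const KIND: Kind = Kind::Bool;
}

fn kind_of<T: OneOf>() -> Kind {
    T::KIND
}

fn main() {
    assert_eq!(kind_of::<f64>(), Kind::Float);
    assert_eq!(kind_of::<bool>(), Kind::Bool);
}
```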

Even if there were a way to enforce that in Rust (which there isn't, AFAIK?), I do not think that would solve my problem. I would like users in different crates to be able to implement an arbitrary number of new constants, which should compare equal only to themselves and unequal to every other defined constant, including ones unknown at crate definition time. I don't see how any closed set of traits (or even set of families of traits) can accomplish that goal by itself: this is a problem that fundamentally can't be solved without looking directly at the types. The reason it would be workable with default projections is that you could default everything to "don't compare equal" and then implement equality of your new constant to itself in whatever crate you defined the constant in, which wouldn't run afoul of orphan rules because all the types in the trait implementation were in the same crate. If I wanted almost any such rule but equality, even this wouldn't work, but equality is good enough for me :)

On present nightly, this works:

trait Foo {}
trait Bar {}

impl<T: Bar> Foo for T {}
impl Foo for () {}

but even with specialization, and using nightly, this does not:

#![feature(specialization)]

trait Foo<F> {}
trait Bar<F> {}

default impl<F, T: Bar<F>> Foo<F> for T {}
impl<F> Foo<F> for () {}

Does this have a rationale or is it a bug?

@rmanoka Isn't this just the normal orphan rules? In the first case, no downstream crate could impl Bar for () so the compiler allows this, but in the second example, a downstream crate could impl Bar<CustomType> for () which would conflict with your default impl.

@Boscop In that scenario, the default impl should be overridden anyway by the non-default one below. For instance, if I had impl Bar<bool> for () {} added before the other impls, then I would expect it to work (as per the RFC / expectation). Isn't that correct?

Digging deeper along the lines of the counter-example you've mentioned, I realise (or believe) that the example satisfies the "always-applicable" test, and may be being worked on.

This issue probably depends on #45814.

Are there any plans to support trait bounds on the default that are not present in the specialisation?

As an example where this would be very useful: it would let you easily compose handling of different types by creating a generic Struct with an arbitrary Inner for the functionality that shouldn't be shared.

#![feature(specialization)]
trait Handler<M> {
    fn handle(&self, m:M);
}

struct Inner;
impl Handler<f64> for Inner {
    fn handle(&self, m : f64) {
        println!("inner got an f64={}", m);
    }
}

struct Struct<T>(T);
impl<T:Handler<M>, M:std::fmt::Debug> Handler<M> for Struct<T> {
    default fn handle(&self, m : M) {
        println!("got something else: {:?}", m);
        self.0.handle(m)
    }
}
impl<T> Handler<String> for Struct<T> {
    fn handle(&self, m : String) {
        println!("got a string={}", m);
    }
}
impl<T> Handler<u32> for Struct<T> {
    fn handle(&self, m : u32) {
        println!("got a u32={}", m);
    }
}

fn main() {
    let s = Struct(Inner);
    s.handle("hello".to_string());
    s.handle(5.0 as f64);
    s.handle(5 as u32);
}

Furthermore, in the example above, something odd that I've experienced - after removing the trait bound on the default Handler impl (and also self.0.handle(m)) the code compiles without issues. However, when you remove the implementation for u32, it seems to break the other trait deduction:

#![feature(specialization)]
trait Handler<M> {
    fn handle(&self, m:M);
}

struct Struct<T>(T);
impl<T, M:std::fmt::Debug> Handler<M> for Struct<T> {
    default fn handle(&self, m : M) {
        println!("got something else: {:?}", m);
    }
}
impl<T> Handler<String> for Struct<T> {
    fn handle(&self, m : String) {
        println!("got a string={}", m);
    }
}
// impl<T> Handler<u32> for Struct<T> {
//     fn handle(&self, m : u32) {
//         println!("got a u32={}", m);
//     }
// }
fn main() {
    let s = Struct(());
    s.handle("hello".to_string());
    s.handle(5.0 as f64);
}

Even though there is no code calling the handler for u32, the specialisation not being there causes the code to not compile.

Edit: this seems to be the same as the second problem ("However, when you remove the implementation for u32, it seems to break the other trait deduction") that Gladdy mentioned one post back.

With rustc 1.35.0-nightly (3de010678 2019-04-11), the following code gives an error:

#![feature(specialization)]
trait MyTrait<T> {
    fn print(&self, parameter: T);
}

struct Message;

impl<T> MyTrait<T> for Message {
    default fn print(&self, parameter: T) {}
}

impl MyTrait<u8> for Message {
    fn print(&self, parameter: u8) {}
}

fn main() {
    let message = Message;
    message.print(1_u16);
}

error:

error[E0308]: mismatched types
  --> src/main.rs:20:19
   |
18 |     message.print(1_u16);
   |                   ^^^^^ expected u8, found u16

However, the code compiles and works when I omit the impl MyTrait<u8> block:

#![feature(specialization)]
trait MyTrait<T> {
    fn print(&self, parameter: T);
}

struct Message;

impl<T> MyTrait<T> for Message {
    default fn print(&self, parameter: T) {}
}

/*
impl MyTrait<u8> for Message {
    fn print(&self, parameter: u8) {}
}
*/

fn main() {
    let message = Message;
    message.print(1_u16);
}

Is this by design, is this because the implementation is incomplete, or is this a bug?

Also, I would like to know if this use case for specialization (implementing traits with overlapping type parameters for a single concrete type as opposed to implementing the same trait for overlapping types) will be supported. Reading section "Defining the precedence rules" in RFC 1210, I think it would be supported, but the RFC does not give such examples and I don't know if we are still strictly following this RFC.

Report a weirdness:

trait MyTrait {}
impl<E: std::error::Error> MyTrait for E {}

struct Foo {}
impl MyTrait for Foo {}  // OK

// But this one is conflicting with error message:
//
//   "... note: upstream crates may add new impl of trait `std::error::Error` for type
//    std::boxed::Box<(dyn std::error::Error + 'static)>` in future versions"
//
// impl MyTrait for Box<dyn std::error::Error> {}

Why is Box<dyn std::error::Error> peculiar (avoiding the word "special") in this case? Even if it impls std::error::Error in the future, the impl MyTrait for Box<dyn std::error::Error> is still a valid specialization of impl<E: std::error::Error> MyTrait for E, no?

is still a valid specialization

In your case the impl<E: std::error::Error> MyTrait for E cannot be specialized, as it doesn't have any default methods.

@bjorn3 This looks like it should work, but it doesn't, even if you add in dummy methods:

in crate bar

pub trait Bar {}
impl<B: Bar> Bar for Box<B> {}

In crate foo

#![feature(specialization)]

use bar::*;

trait Trait {
    fn func(&self) {}
}

impl<E: Bar> Trait for E {
    default fn func(&self) {}
}

struct Foo;
impl Trait for Foo {}  // OK

impl Trait for Box<dyn Bar> {} // Error error[E0119]: conflicting implementations of trait

Note that if you change crate bar to

pub trait Bar {}
impl<B: ?Sized + Bar> Bar for Box<B> {}

Then crate foo compiles.

@bjorn3 Seems that we don't need a default method to specialize it (playground).

@KrishnaSannasi I cannot reproduce the "conflicting implementations" error in your example (playground).

Update: Oh, I see. The trait Bar must be from an upstream crate for the example to work.

@updogliu your example does not show specialization because Foo does not implement Error.

Am I programming too late tonight, or should this not cause a stack overflow?

#![feature(specialization)]
use std::fmt::Debug;

trait Print {
    fn print(self);
}

default impl<T> Print for [T; 1] where T: Debug {
    fn print(self) {
        println!("{:?}", self);
    }
}

impl<T> Print for [T; 1] where T: Debug + Clone {
    fn print(self) {
        println!("{:?}", self.clone());
    }
}

fn main() {
    let x = [0u8];
    x.print();
}

Playground link

Coarse-grained default impl blocks have always been doing very weird things for me; I would suggest trying the default fn fine-grained specialization syntax instead.

EDIT: Upon cross-checking the RFC, this is expected, as default impl actually does _not_ mean that all items in the impl block are defaulted. I find those semantics surprising to say the least.

Playground link

@HadrienG2 I actually have always used default fn in this project but this time I forgot the default keyword and the compiler suggested to add it to the impl. Hadn't seen the stack recursion issue before and wasn't sure if it was expected at this stage. Thanks for the suggestion, default fn works fine.

Looking at the original RFC, there is a section about specialization of inherent impls. Has anyone given that a try?

The approach proposed in the RFC might not work directly anymore, at least not for inherent const methods:

// This compiles correctly today:
#![feature(specialization)] 
use std::marker::PhantomData;
struct Foo<T>(PhantomData<T>);
impl<T> Foo<T> {
    default const fn foo() -> Self { Self(PhantomData) }
    // ^^ shouldn't default here error?
}
// ----
// Adding this fails:
impl<T: Copy> Foo<T> {
    const fn foo() -> Self { Self(PhantomData) }
}

The original RFC proposes lifting the method into a trait, implementing that for the type, and specializing the impl. I suppose that for const fn methods, those impls of the trait for the type would need to be const impls.

For anyone coming across this and curious about the status -- there were a couple of significant conceptual advances in 2018:
http://smallcultfollowing.com/babysteps/blog/2018/02/09/maximally-minimal-specialization-always-applicable-impls/
http://aturon.github.io/tech/2018/04/05/sound-specialization/

More recently, last month @nikomatsakis wrote (as an example, in another context; bold mine) that:

there was one key issue [in specialization] that never got satisfactorily resolved, a technical soundness concern around lifetimes and traits [...] Then, [those two posts linked above]. It seems like these ideas have basically solved the problem, but we’ve been busy in the meantime and haven’t had time to follow up.

Sounds hopeful though there's clearly still work to do.

(Posting this because I found this thread some weeks ago and had no idea of last year's progress, then more recently came across those posts by accident. There are comments above mentioning them, but GitHub makes it increasingly hard to see any but the first and last few comments on a long thread :cry: . It might be helpful if this update made it into the issue description.)

Hello everyone! Could someone tell me why this use case does not work? Bugs or expected behaviours?

See this example: impl A for i32 is ok, but impl A for () cannot be compiled on 1.39.0-nightly.

#![feature(specialization)]

trait A {
    fn a();
}

default impl <T: ToString> A for T {
    fn a() {}
}

impl A for i32 {
    fn a() {}
}

impl A for () {
    fn a() {}
}

compiling message:

error[E0119]: conflicting implementations of trait `A` for type `()`:
  --> src/lib.rs:16:1
   |
8  | default impl <T: ToString> A for T {
   | ---------------------------------- first implementation here
...
16 | impl A for () {
   | ^^^^^^^^^^^^^ conflicting implementation for `()`
   |
   = note: upstream crates may add new impl of trait `std::fmt::Display` for type `()` in future versions

@Hexilee Put default on the methods not the impl.

@KrishnaSannasi example 2

@zserik yes, I know. I don't think it has been implemented yet, or it was dropped. In any case it doesn't work now.

It obviously doesn't work now, but I think it should work.

I'm asking this here, because I haven't noticed this topic come up anywhere else - are there any plans to default-ify various standard library functions, similarly to how we've const-ified functions when it's been deemed safe to do so? The main reason I'm asking is that the default generic From and Into implementations (impl<T, U: From<T>> Into<U> for T and impl<T> From<T> for T) make it difficult to write comprehensive generic From and Into implementations downstream from core, and it would be nice if I could override those conversions in my own crates.

Even if we allow specialization for From/Into it wouldn't help generic impls because of the lattice problem.

@KrishnaSannasi I don't believe that's the case. For example, this code should work if From and Into were specializable, but doesn't because they aren't:

impl<M: Into<[S; 2]>, S> From<M> for GLVec2<S> {
    fn from(to_array: M) -> GLVec2<S> {
        unimplemented!()
    }
}
impl<M, S> Into<M> for GLVec2<S>
where
    [S; 2]: Into<M>,
{
    fn into(self) -> M {
        unimplemented!()
    }
}

pub struct GLVec2<S> {
    pub x: S,
    pub y: S,
}

That works if you convert From and Into into a custom trait that doesn't have those generic implementations: https://play.rust-lang.org/?version=stable&mode=debug&edition=2018&gist=cc126b016ff62643946aebc6bab88c98
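A minimal sketch of that workaround, using made-up trait names MyFrom/MyInto (the linked playground has the full version): because the custom traits lack core's blanket impl<T, U: From<T>> Into<U> for T, the downstream generic impls no longer conflict.

```rust
// Hypothetical stand-ins for From/Into with no blanket impl connecting them.
trait MyFrom<T> {
    fn my_from(t: T) -> Self;
}
trait MyInto<U> {
    fn my_into(self) -> U;
}

struct GLVec2<S> {
    x: S,
    y: S,
}

// Generic construction from anything convertible into an array...
impl<M: MyInto<[S; 2]>, S> MyFrom<M> for GLVec2<S> {
    fn my_from(m: M) -> Self {
        let [x, y] = m.my_into();
        GLVec2 { x, y }
    }
}

// ...and conversion back out. Neither impl clashes with a blanket impl,
// because there isn't one.
impl<S> MyInto<[S; 2]> for GLVec2<S> {
    fn my_into(self) -> [S; 2] {
        [self.x, self.y]
    }
}

impl<S> MyInto<[S; 2]> for [S; 2] {
    fn my_into(self) -> [S; 2] {
        self
    }
}

fn main() {
    let v: GLVec2<i32> = GLVec2::my_from([1, 2]);
    assert_eq!((v.x, v.y), (1, 2));
}
```

The cost, of course, is that nothing implementing the real From/Into automatically participates in these conversions.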

@Osspial Well, if you try to simulate it using a default impl, you will see the issue:

https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=e5b9da0eeca05d063e2605135a0b5ead

I will repeat: changing the From/Into impls to be default impls in the standard library will not make generic impls for Into possible (and it doesn't affect generic impls of From).

Hi, there is a serious bug in the current specialization implementation. I label it as a bug because even if that was an explicit design decision, it prevents us from using one of the most powerful specialization features, namely the possibility of creation of "opaque types" (this is not a formal name). This pattern is one of the most primitive building blocks in other languages providing type classes, like Haskell or Scala.

This pattern is simple – we can define structures like WithLabel or WithID which add some fields and methods to underlying structures, so for example if we create WithLabel<WithID<MyType>> then we will be able to get id, label and all fields / methods of MyType as well. Unfortunately, with the current implementation, it is not possible.

Below is an example code showing usage of this pattern. The commented-out code does not compile, while it should to make this pattern really useful:

#![feature(specialization)]

use std::ops::Deref;
use std::ops::DerefMut;

// =================
// === WithLabel ===
// =================

struct WithLabel<T>(String, T);

pub trait HasLabel {
    fn label(&self) -> &String;
}

impl<T> HasLabel for WithLabel<T> {
    fn label(&self) -> &String { 
        &self.0
    }
}

// THIS SHOULD COMPILE, BUT GETS REJECTED
// impl<T> HasLabel for T
// where T: Deref, <Self as Deref>::Target : HasLabel {
//     default fn label(&self) -> &String { 
//         self.deref().label() 
//     }
// }

impl<T> Deref for WithLabel<T> {
    type Target = T;
    fn deref(&self) -> &Self::Target {
        &self.1
    }
}

// ==============
// === WithID ===
// ==============

struct WithID<T>(i32, T);

pub trait HasID {
    fn id(&self) -> &i32;
}

impl<T> HasID for WithID<T> {
    fn id(&self) -> &i32 { 
        &self.0
    }
}

// THIS SHOULD COMPILE, BUT GETS REJECTED
// impl<T> HasID for T
// where T: Deref, <Self as Deref>::Target : HasID {
//     default fn id(&self) -> &i32 { 
//         self.deref().id() 
//     }
// }

impl<T> Deref for WithID<T> {
    type Target = T;
    fn deref(&self) -> &Self::Target {
        &self.1
    }
}

// =============
// === Usage ===
// =============

struct A(i32);

type X = WithLabel<WithID<A>>;

fn test<T: HasID + HasLabel> (t: T) {
    println!("{:?}", t.label());
    println!("{:?}", t.id());
}

fn main() {
    let v1 = WithLabel("label1".to_string(), WithID(0, A(1)));
    // test(v1); // THIS IS EXAMPLE USE CASE WHICH DOES NOT COMPILE
}

In order to make the test(v1) line working, we need to add such trait impl manually:

impl<T: HasID> HasID for WithLabel<T> {
    fn id(&self) -> &i32 { 
        self.deref().id()
    }
}

Of course, to make it complete, we would also need to add this trait impl:

impl<T: HasLabel> HasLabel for WithID<T> {
    fn label(&self) -> &String { 
        self.deref().label()
    }
}

And this is VERY BAD. For just 2 types it's simple. However, imagine that we've got 10 different opaque-type definitions which add different fields, like WithID, WithLabel, WithCallback, ... you name it. With the current behavior of specialization, we would need to define over 1000 different trait implementations! Adding an 11th such type would require us to add a ton of other implementations, for all other types. If the commented-out code were accepted, we would need only 10 trait implementations, and each new type would require only a single additional implementation.

I'm not sure how your code relates to specialization. Your argument (your initial code compiles but the commented test(v1); line doesn't compile without the manual impl you present) still applies if the first #![feature(specialization)] line is removed.

@qnighy The code should compile after uncommenting the impls HasLabel for T and HasID for T – they are using specialization. Currently, they are rejected (try uncommenting them in the code I provided!). Does it make sense now to you? 🙂

Let's consider three instances WithLabel<WithID<A>>, WithID<WithLabel<A>> and WithLabel<WithLabel<A>>. Then

  • the first impl covers WithLabel<WithID<A>> and WithLabel<WithLabel<A>>.
  • the second impl covers WithID<WithLabel<A>> and WithLabel<WithLabel<A>>.

Therefore the pair of impls doesn't satisfy the following clause from the RFC:

To ensure specialization is coherent, we will ensure that for any two impls I and J that overlap, we have either I < J or J < I. That is, one must be truly more specific than the other.

And it is a real problem in your case too because the HasLabel impl of WithLabel<WithLabel<A>> could be interpreted in two ways.

How we can cover this case is already discussed in the RFC too, and the conclusion is:

The limitations that the lattice rule addresses are fairly secondary to the main goals of specialization (as laid out in the Motivation), and so, since the lattice rule can be added later, the RFC sticks with the simple chain rule for now.

@qnighy, thanks for thinking about it.

And it is a real problem in your case too because the HasLabel impl of WithLabel<WithLabel<A>> could be interpreted in two ways.

This is true if we don't consider the impl<T> HasLabel for WithLabel<T> as more specialized than impl<T> HasLabel for T for the input WithLabel<WithLabel<A>>. The part of the RFC you pasted does indeed cover that; however, I believe this is a serious limitation, and I would ask for reconsidering support for this use case in the first release of this extension.

In the meantime, I was playing with negative trait impls, because they may actually resolve the points you covered. I created code which does not have the problems you describe (unless I'm missing something); however, it still does not compile. This time, I don't understand where the constraints mentioned in the error came from, as the resolution should not be ambiguous.

The good thing is that actually everything compiles now (including the specializations), but the test(v1) usage does not:

#![feature(specialization)]
#![feature(optin_builtin_traits)]

use std::ops::Deref;
use std::ops::DerefMut;

// =================
// === WithLabel ===
// =================

struct WithLabel<T>(String, T);

auto trait IsNotWithLabel {}
impl<T> !IsNotWithLabel for WithLabel<T> {}

pub trait HasLabel {
    fn label(&self) -> &String;
}

impl<T> HasLabel for WithLabel<T> {
    fn label(&self) -> &String { 
        &self.0
    }
}

impl<T> HasLabel for T
where T: Deref + IsNotWithLabel, <Self as Deref>::Target : HasLabel {
    default fn label(&self) -> &String { 
        self.deref().label() 
    }
}

impl<T> Deref for WithLabel<T> {
    type Target = T;
    fn deref(&self) -> &Self::Target {
        &self.1
    }
}

// ==============
// === WithID ===
// ==============

struct WithID<T>(i32, T);

pub trait HasID {
    fn id(&self) -> &i32;
}

impl<T> HasID for WithID<T> {
    fn id(&self) -> &i32 { 
        &self.0
    }
}

auto trait IsNotWithID {}
impl<T> !IsNotWithID for WithID<T> {}

impl<T> HasID for T
where T: Deref + IsNotWithID, <Self as Deref>::Target : HasID {
    default fn id(&self) -> &i32 { 
        self.deref().id() 
    }
}

impl<T> Deref for WithID<T> {
    type Target = T;
    fn deref(&self) -> &Self::Target {
        &self.1
    }
}

// =============
// === Usage ===
// =============

struct A(i32);

type X = WithLabel<WithID<A>>;

fn test<T: HasID + HasLabel> (t: T) {
    println!("{:?}", t.label());
    println!("{:?}", t.id());
}

fn main() {
    let v1 = WithLabel("label1".to_string(), WithID(0, A(1)));
    test(v1);
}

In the meantime, you can exploit RFC1268 overlapping_marker_traits to allow overlapping non-marker traits, but this hack requires three more traits (one for going through marker traits, two for re-acquiring erased data through specialization).

https://play.rust-lang.org/?version=nightly&mode=debug&edition=2018&gist=b66ee0021db73efaaa5d46edfb4f3990

@qnighy I've created a separate issue about this bug: https://github.com/rust-lang/rust/issues/66041

Ok, I just discovered that auto traits will never be a solution here, as (according to https://doc.rust-lang.org/nightly/unstable-book/language-features/optin-builtin-traits.html) they propagate to all the fields in a struct:

Auto traits, like Send or Sync in the standard library, are marker traits that are automatically implemented for every type, unless the type, or a type it contains, has explicitly opted out via a negative impl.

EDIT
@qnighy somehow I overlooked that you provided a link to the playground. ❤️ Thank you so much for it. It works and I am amazed by how hacky this solution is. Amazing that we are able to express that currently and I hope this possibility will not disappear in the future!

In such a situation, overlapping marker traits are the only hack we can use now, but I think it would be nice to allow in the future some kind of easier solution to express opaque types (as described in my previous post: https://github.com/rust-lang/rust/issues/31844#issuecomment-549023367).

A very simple example (a simplification of the above example) that fails:

trait Trait<T> {}
impl<T> Trait<T> for T {}
impl<T> Trait<()> for T {}

I don't believe this hits the problem identified with lattice rules; however, perhaps an overly simplistic solver thinks it does?

Without this, the current implementation is useless for my purposes. If the above were allowed, then I believe it would also be possible to implement From on wrapper types (though I'm unsure about Into).

For anybody who doesn't know yet: there is this awesome trick discovered by dtolnay that allows using (very limited) specialization on stable Rust

I'm not sure whether this has already been addressed, but traits with default implementations for their methods have to be redefined just so that they can be marked as default. Example:

trait Trait {
    fn test(&self) { println!("default implementation"); }
}

impl<T> Trait for T {
    // violates DRY principle
    default fn test(&self) { println!("default implementation"); }
}

I propose the following syntax to fix this (if it needs fixing):

impl<T> Trait for T {
    // delegates to the already existing default implementation
    default fn test(&self);
}

Moved to #68309

@jazzfool Please refile this as an issue (applies generally to everyone asking similar questions here) and cc me on that one.

Is there an approach for testing specialization? E.g. when writing a test that checks the correctness of a specialization you first need to know whether the specialization you're trying to test is actually applied instead of the default implementation.

@the8472 do you mean testing the compiler, or do you mean testing in your own code? You can certainly write unit tests that behave differently (i.e., call a fn and see if you get the specialized variant). Perhaps you are saying that the two variants are equivalent, except that one is supposed to be faster, and hence you're not sure how to test which version you are getting? In that case, I agree, I don't know how you can test that right now.

You could I suppose make some other trait with the same set of impls, but where the fns behave differently, just to reassure yourself.

Perhaps you are saying that the two variants are equivalent, except that one is supposed to be faster, and hence you're not sure how to test which version you are getting? In that case, I agree, I don't know how you can test that right now.

You can test that using a macro. I'm a bit rusty with my Rust, but something along these lines...

#[cfg(test)]
static mut SPECIALIZATION_TRIGGERED: bool = false;

#[cfg(test)]
macro_rules! specialization_trigger {
    // writing to a `static mut` requires an `unsafe` block
    () => { unsafe { SPECIALIZATION_TRIGGERED = true; } };
}

#[cfg(not(test))]
macro_rules! specialization_trigger {
    () => {};
}

Then use specialization_trigger!() in the specialized impl, and in tests use assert!(unsafe { SPECIALIZATION_TRIGGERED });

#[cfg(test)]
static mut SPECIALIZATION_TRIGGERED: bool = false;
...

You will want to use thread_local! { static VAR: Cell<bool> = Cell::new(false); } instead of static mut, because otherwise the variable could get set in one test-case thread and mistakenly be read from another thread. Also, remember to reset the variable at the beginning of each test; otherwise you will get a stale true from the previous test.

I have a question regarding the RFC text, hopefully this is a good place to ask.

In the reuse section, this example is given:

trait Add<Rhs=Self> {
    type Output;
    fn add(self, rhs: Rhs) -> Self::Output;
    fn add_assign(&mut self, rhs: Rhs);
}

// the `default` qualifier here means (1) not all items are implied
// and (2) those that are can be further specialized
default impl<T: Clone, Rhs> Add<Rhs> for T {
    fn add_assign(&mut self, rhs: Rhs) {
        let tmp = self.clone() + rhs;
        *self = tmp;
    }
}

I wonder how this is supposed to type-check, given that tmp has type Self::Output and nothing is known about this associated type. The RFC text does not seem to explain that, at least not anywhere near where the example is given.

Is there some mechanism here that is/was supposed to make that work?

Could that default be constrained where T: Add<Output = T>? Or is that a causality loop?

@RalfJung I agree that seems wrong.

I have a question about procedure: how meaningful is this issue and how meaningful is it for people to try out this feature? As I understand it the current implementation is unsound and incomplete and will likely be completely replaced by chalk or something else. If that's true, should we just de-implement this feature and others (e.g. GATs) until they can be properly redone?

Please don't de-implement. Broken, unsound and incomplete still allows experimentation.

If that's true, should we just de-implement this feature and others (e.g. GATs) until they can be properly redone?

Please don't, PyO3 (Python bindings library) currently depends on specialization. See https://github.com/PyO3/pyo3/issues/210

Doesn't a fair amount of the std depend on it as well? I thought I remembered seeing a lot of specialized internal implementations for vector- and string-related stuff. Not that that should prevent de-implementing, just that it wouldn't be as simple as removing the relevant sections from the type checker.

@Lucretiel yes, many useful optimizations (especially around iterators) depend on specialization, so it would be a huge perf regression to de-implement it.

For example, FusedIterator and TrustedLen are useless without specialization.

PyO3 (Python bindings library) currently depends on specialization

That's scary, because of the "unsound" parts. The standard library had critical soundness bugs due to using specialization wrong. How sure are you that you do not have the same bugs? Try using min_specialization instead, it is hopefully at least less unsound.

Maybe specialization should get a warning similar to const_generics saying "this feature is incomplete, unsound and broken, do not use in production".

many useful optimizations (especially around iterators) depend on specialization, so it would be a huge perf regression to de implement it.

These days they depend on min_specialization (see e.g. https://github.com/rust-lang/rust/pull/71321), which has the biggest soundness holes plugged.

@nikomatsakis

I agree that seems wrong.

Any idea what the intended code is? I first thought the default impl was intended to also set type Output = Self;, but that is actually impossible in the proposed RFC. So maybe the intention was to have an Output = T bound?

@RalfJung Any chance min_specialization could get documented? I feel it's more risky to use a completely undocumented feature on a crate than one that has known (and possibly unknown) soundness bugs. Neither is good, but at least the latter one isn't just compiler internals.

I couldn't find any mention of min_specialization from this tracking issue outside of the #71321 PR - and according to the Unstable book this is the tracking issue for that feature.

I don't know much about that feature either, I just saw the libstd soundness fixes. It got introduced in https://github.com/rust-lang/rust/pull/68970 which explains a few more things about it.

@matthewjasper would it make sense to document this a bit more and ask nightly users of feature(specialization) to migrate?

It seems like there should at least be a warning. It seems like this feature is blatantly broken and dangerous to use in its current state.

I'd think specialization could become a synonym for min_specialization, but add another unsound_specialization feature if necessary for existing projects, like PyO3 or whatever. It'd save anyone who only uses min_specialization considerable effort, but anyone else gets the error message, and can look up here the new name.

@RalfJung

Any idea what the intended code is?

Well, at some point, we had been considering a mode where defaults could rely on one another. So I imagine at that point the following would've worked:

default impl<T: Clone, Rhs> Add<Rhs> for T {
    type Output = T;

    fn add_assign(&mut self, rhs: Rhs) {
        let tmp = self.clone() + rhs;
        *self = tmp;
    }
}

The caveat was that if you override any member of the impl, then you had to override them all. We later backed off from this idea, and then kicked around various iterations, such as "default groups" (which would also work here), and ultimately didn't adopt any solution because we figured we could get to it later once we deal with the other, er, pressing problems (cc #71420).

Please don't, PyO3 (Python bindings library) currently depends on specialization. See PyO3/pyo3#210

PyO3 maintainer here - we're in favour of moving away from specialization so that we can get onto stable Rust. Will min_specialization be likely to be stabilized before the rest of specialization is complete?

I think there was some discussion of trying to stabilize min_specialization in the 2021 edition planning lang design meeting (it's on youtube; sorry, I'm on my phone, or i would try to find a link). I forgot what they said about it though

I think there was some discussion of trying to stabilize min_specialization in the 2021 edition planning lang design meeting (it's on youtube; sorry, I'm on my phone, or i would try to find a link). I forgot what they said about it though

I think this is the correct YouTube link: https://youtu.be/uDbs_1LXqus
(also on my phone)

Yep, that's it. Here is a link to the specific discussion: https://youtu.be/uDbs_1LXqus?t=2073

I've been using min_specialization in an experimental library I've been developing, so I thought I'd share my experiences. The goal is to use specialization in its simplest form: to have some narrow cases with faster implementations than the general case. In particular, cryptographic algorithms run in constant time in the general case, but if all the inputs are marked Public there is a specialized version that runs in faster variable time (because if they are public, we don't care about leaking info about them via execution time). Additionally, some algorithms are faster depending on whether the elliptic curve point is normalized or not. To get this to work we start with

#![feature(rustc_attrs, min_specialization)]

Then, if you need to make a _specialization predicate_ trait as explained in maximally minimal specialization, you mark the trait declaration with #[rustc_specialization_trait].

All my specialization is done in this file and here's an example of a specialization predicate trait.

The feature works and does exactly what I need. This is obviously using a rustc-internal marker and is therefore prone to breaking without warning.

The only negative bit of feedback is that I don't feel the default keyword makes sense. Essentially what default means right now is: "this impl is specializable, so interpret impls that cover a subset of this one as specializations of it rather than as conflicting impls". The problem is it leads to very weird-looking code:

https://github.com/LLFourn/secp256kfun/blob/6766b60c02c99ca24f816801fe876fed79643c3a/secp256kfun/src/op.rs#L196-L206

Here the second impl is specializing the first, but it is also default. The meaning of default seems to be lost. If you look at the rest of the impls, it is quite hard to figure out which impls are specializing which. Furthermore, when I made an erroneous impl that overlapped with an existing one, it was often hard to figure out where I went wrong.

It seems to me this would be simpler if everything were specializable and, when specializing, you declared precisely which impl you are specializing. Transforming the example in the RFC into what I had in mind:

impl<A, T> Extend<A, T> for Vec<A> where T: IntoIterator<Item=A>
{
    // no need for default
    fn extend(&mut self, iterable: T) {
        ...
    }
}

// We declare explicitly which impl we are specializing repeating all type bounds etc
specialize impl<A, T> Extend<A, T> for Vec<A> where T: IntoIterator<Item=A>
    // And then we declare explicitly how we are making this impl narrower with ‘when’.
    // i.e. This impl is like the first except replace all occurrences of ‘T’ with ‘&'a [A]’
    when<'a> T = &'a [A]
{
    fn extend(&mut self, iterable: &'a [A]) {
        ...
    }
}

Thanks for the feedback.

My comment here, specifically item 6, provides a concrete case in the standard library where it may be desirable to have a specialization that is only partly overridable: IndexSet would need a distinct Output type because IndexSet could be implemented without Index, but we probably do not want to allow the two types to coexist with different Output types. Since IndexSet could have a default implementation in terms of IndexMut, it would be reasonable to allow specialization of the index_set method without allowing specialization of Output.

I have a difficult time with videos, so I can't look up the linked video; however, I do have one question about min_specialization. As-is, there is a rustc_unsafe_specialization_marker attribute on traits like FusedIterator that provide optimization hints, so that they can be specialized on. @matthewjasper wrote:

This is unsound but we allow it in the short term because it can't cause use after frees with purely safe code in the same way as specializing on traits methods can.

I assume that the plan is to implement @aturon's proposal and add a specialization modality for traits such as these (where specialize(T: FusedIterator)). But currently, it appears that any code can specialize on these traits. If it's stabilized as-is, people could write stable specializations that depend on it, meaning that this unsoundness would be stabilized.

Should specialization on these traits also be limited to the standard library, then? Does the standard library derive enough benefit from being able to specialize on them?

If it's stabilized as-is, people could write stable specializations that depend on it, meaning that this unsoundness would be stabilized.

It is my understanding that min_specialization as-is is not intended for stabilization.

I would like to second having some sort of marker on specializing impls. There have been quite a few cases of code in rustc and the standard library not doing what it looks like because there's no way to know that specialization is actually happening:

An unnecessary specialization of Copy:
https://github.com/rust-lang/rust/pull/72707/files#diff-3afa644e1d09503658d661130df65f59L1955

A "Specialization" that isn't:
https://github.com/rust-lang/rust/pull/71321/files#diff-da456bd3af6d94a9693e625ff7303113L1589

An implementation generated by a macro unless a flag is passed overriding a default impl:
https://github.com/rust-lang/rust/pull/73851/files?file-filters%5B%5D=#diff-ebb36dd2ac01b28a3fff54a1382527ddR124

@matthewjasper the last link doesn't appear to link to any specific snippet.

I'm not sure if this is an explicit goal, but AIUI the fact that specializing impls aren't marked gives you a way to avoid breaking changes on blanket impls. A new default impl<T> Trait for T doesn't conflict with downstream impls -- those just become specializing.

Could it be a warning only to have it unmarked?

There have been quite a few cases of code in rustc and the standard library not doing what it looks like because there's no way to know that specialization is actually happening

My experience with java is similar (though not exactly analogous). It can be hard to find out which subclass of a class is actually running...

We'd want some marker on specializable impls too though, also for clarity when reading, right?

We could put the markers in both places, which then improves rustc error or warning messages because they now know if specialization is desired and can point to the other place if it exists.

If an upstream crate adds an impl then, aside from simply upgrading, a downstream crate could employ tricks that permit compiling against both the new and old version, not sure that's beneficial.

I think that the diff may be too large to show the change. It's pointing to this: https://github.com/rust-lang/rust/blob/fb818d4321dee29e1938c002c1ff79b0e7eaadff/src/librustc_span/def_id.rs#L124

Re: Blanket impls, they are breaking changes anyway:

  • They might partially overlap a downstream impl, which is not allowed
  • Coherence can assume their non-existence in more subtle ways (which is why reservation impls were added internally)
  • Specializing impls have to be always applicable, which either means:

    • We break peoples impls (what min_specialization does).

    • We require them to somehow annotate their trait bounds as being always applicable where necessary.

    • We make the always applicable change for them implicitly and potentially introduce subtle runtime bugs when the default impl now applies.

@cuviper actually, I feel like there were still edge cases around adding new blanket impls, even with specialization. I remember I was trying to figure out what it would take to permit us to add an impl<T: Copy> Clone for T { } impl. I wrote this blog post about it, in any case, but I can't remember now what my conclusion was.

Regardless, we could make it a lint warning to not have an #[override] annotation.

That said, if we could have the user declare which impls they are specializing (no idea how we would do that), it would simplify some things. Right now the compiler has to deduce the relationships between impls and that's always a bit tricky.

One of the pending items we have to do in the chalk project is to try and go back and spell out how specialization should be expressed there.

There have been quite a few cases of code in rustc and the standard library not doing what it looks like because there's no way to know that specialization is actually happening

My experience with java is similar (though not exactly analogous). It can be hard to find out which subclass of a class is actually running...

Back in May, I proposed an alternative to specialization on IRLO that doesn't actually rely on overlapping impls, but rather allows a single impl to where match on its type parameter:

impl<R, T> AddAssign<R> for T {
    fn add_assign(&mut self, rhs: R) where match T {
        T: AddAssignSpec<R> => self.add_assign(rhs),
        T: Add<R> + Copy => *self = *self + rhs,
        T: Add<R> + Clone => { let tmp = self.clone() + rhs; *self = tmp; }
    }
}

Crates downstream can then use such an impl to implement "specialization", because by convention such an impl for trait Trait would first match on types that implement another trait TraitSpec, and downstream types would be able to implement that trait to override the generic behavior:

// Crate upstream
pub trait Foo { fn foo(); }
pub trait FooSpec { fn foo(); }

impl<T> Foo for T {
    fn foo() where match T {
        T : FooSpec => T::foo(),
        _ => { println!("generic implementation") }
    }
}

fn foo<T : Foo>(t: T) {
    T::foo()
}

// crate downstream
struct A {}
struct B {}

impl upstream::FooSpec for A {
    fn foo() { println!("Specialized"); }
}

fn main() {
    upstream::foo(A {}); // prints "Specialized"
    upstream::foo(B {}); // prints "generic implementation"
}

This formulation gives upstream more control to choose the order of the applicable impls, and as this is part of the trait/function signature, it would appear in documentation. IMO this prevents "impl chasing" to find out which branch is actually applicable, as the resolution order is explicit.

This could maybe make the errors around lifetimes and type equality more apparent, as only upstream could meet them while implementing the specialization (since downstream only implements a "specialization trait").

Drawbacks to this formulation are that it is a very different route than the one in the RFC (which has been in implementation since 2016), and that at least some people on the thread expressed concerns that it would not be as expressive and/or intuitive as the current specialization feature (I find "matching on types" pretty intuitive, but I'm biased, as I proposed the formulation).

The match syntax might have another (syntactical) benefit: If it were at some point in the future extended with const-evaluated match guards then one wouldn't need to do type gymnastics to express bounds conditional on const expressions. E.g. one could apply specializations based on size_of, align_of, needs_drop or array sizes.

@dureuill thanks for the info! That is indeed an interesting idea. One concern I have is that it doesn't necessarily solve some of the other anticipated use cases for specialization, especially the "incrementally refining behavior" case as described by @aturon in this blog post. Still, worth keeping in mind.

@dureuill The idea is indeed interesting and may have a lot of potential, but an alternative isn't always an equivalent exchange.
The reason I don't think it is, is that one isn't given the opportunity to fully replace the more general implementation. Another problem might be that we don't actually have support for all the features present in the where-syntax RFC on which your suggestion depends.
The suggestion is intriguing, so maybe it could have its own RFC as a separate feature rather than as a competitor to specialization, because both would be useful and I see no reason why they can't live together.

@the8472 @nikomatsakis, @Dark-Legion : Thank you for the positive feedback! I try answering some of your remarks in the IRLO thread, since I don't want to be too noisy on the tracking issue (I'm sorry for each of you who expected news on specialization and just found my ramblings :flushed:).

I may open a separate RFC if I manage to write something publishable. Meanwhile, I'm very open to feedback on the linked IRLO thread. I added a longer example from aturon's blog post, so feel free to comment on that!

I'm also in favour of having some sort of marker on specializing impls.

The 2021 Edition approaches, which allows us to reserve further keywords (like specialize). Looking at the complexity and history of this feature, I don't think it will stabilize before the release of the 2021 Edition (feel free to prove me wrong) which means - in my opinion - playing around with (a) new keyword(s) is reasonable.

Otherwise, the only existing keyword that seems ... well.. suitable as marker, might be super?

Summary by reusing the example of @LLFourn from https://github.com/rust-lang/rust/issues/31844#issuecomment-639977601:

  • super (already reserved, but it could also be misinterpreted as alternative to default)
super impl<A, T> Extend<A, T> for Vec<A> where T: IntoIterator<Item=A>
  • specialize
specialize impl<A, T> Extend<A, T> for Vec<A> where T: IntoIterator<Item=A>
  • spec (short for specialize like impl is for implement) (valid concern raised by @ssokolow in https://github.com/rust-lang/rust/issues/31844#issuecomment-690980762)
spec impl<A, T> Extend<A, T> for Vec<A> where T: IntoIterator<Item=A>
  • override (already reserved, thanks @the8472 https://github.com/rust-lang/rust/issues/31844#issuecomment-691042082)
override impl<A, T> Extend<A, T> for Vec<A> where T: IntoIterator<Item=A>

already reserved keywords are to be found here

or spec (short for specialize like impl is for implement)

"spec" is already more familiar to people as a shorthand for "specification" (eg. "The HTML 5 spec") so I don't think it'd be a good shorthand for "specialize".

override is a reserved keyword, I assume it was intended for functions, so it might be usable for an impl block.

specialize is also locale-dependent – as an Australian it's specialise for me, so using 'spec' removes the locale ambiguity.

specialize is also locale-dependent – as an Australian it's specialise for me, so using 'spec' removes the locale ambiguity.

That works, except that 'spec' is a common abbreviation for specification, so I think using 'spec' to mean specialization would be confusing. Even if the words are spelled differently in Australia, everyone can still understand which word is intended whether it's spelled with a 'z' or an 's'.

As a Canadian, I have to say that specialize/specialise isn't the only word used in programming that varies with locale.

Here, we use "colour", but it always trips me up on the rare occasions when a programming language or library uses that instead of "color". For better or worse, American English is a bit of a de facto standard in API design and, with words like colour/color, choosing to favour one spelling over the other is unavoidable without getting really contrived.

Given how strongly I expect "spec" to mean "specification", I think this is another situation where we should just consider the American English spelling the least worst option.

It may be the de facto standard, but that doesn't mean it's okay to use it. I find myself doing imports like "use color as colour", for example. I always trip up on the s vs. z as well. I think, given Rust's positive attitude toward inclusivity and accessibility, it makes sense to choose language terms that are not locale-dependent, as small user frustrations like color/colour and s/z do build up.

I agree in principle. I'm just skeptical that, for this case, there's a locale-neutral choice which doesn't cause more problems than it solves.

As a non-English native speaker, I find it somewhat amusing that native English speakers would complain about extra u as being a barrier to inclusivity. Just imagine what it'd be like if everything was not spelled a bit weird, but written in an entirely different language.

Put differently: every term used in Rust is locale dependent.

For better or worse, things in Rust are spelled in US English. For many here this means working in their second or third language; for others it means having to adjust spelling a bit. This is what it takes to make a whole bunch of people work together. I think the benefit of trying to pick words that are spelled the same across many variants of English is marginal compared to picking a good and unambiguous term -- and spec is ambiguous, as pointed out above.

use special as the keyword?

Alternatively, make two keywords, specialize and specialise, and make them equivalent...

(Or you funny non-Americans can learn to spell real proper :us: 😂 )

I can't speak as to what most languages do, but CSS uses American English spellings for everything new. Anecdotally, American English seems to be used more frequently in programming as well.

@mark-i-m Unfortunately, that's a slippery slope leading to arguments that Rust should have alternative sets of keywords in every major language that learners might be coming from.

It also needlessly complicates the language to have multiple synonyms for keywords, since people and parsers expect only whitespace to vary like that.

(Not to mention that it would potentially touch off a push for equivalent synonyms in libraries, which would then necessitate rustdoc design and implementation work to keep them from being a net negative.)

Rather than arguing about which dialect of English we want this identifier to be in, perhaps we can compromise and put it in Hebrew instead?

@ssokolow while a slippery-slope argument is generally not a strong one, I do agree with you in this case. One could argue that multiple languages are fine, but there are at least two reasons why they're not:

  • Some words in different languages look the same but mean different things (can't come up with programming-related example right now, but a random example: a in Slovak is and in English)
  • People will have a huge trouble reading code in another language even if they know the language. I know from experience. (Long story short: I had a huge trouble understanding some texts with terms directly translated from English to my "mother language" at university education scam.)

Now working backwards: why should different dialects of English be preferred, if other languages are not? I don't see the point. Consistency (everything is US English) seems simplest, easiest to understand, and least error-prone.

All this being said, I'd be very happy with a "did you mean XXX?" error-message approach. Neutral words that don't have other problems are fine as well.

At least nobody needs to discuss football in code. ;)

Around 70% of native English speakers live in countries using US spelling.

Also..

"The -ize spelling is often incorrectly seen as an Americanism in Britain. It has been in use since the 15th century, predating -ise by over a century. -ize comes directly from Greek -ιζειν -izein and Latin -izāre, while -ise comes via French -iser. The Oxford English Dictionary (OED) recommends -ize and lists the -ise form as an alternative."

"Publications by Oxford University Press (OUP)—such as Henry Watson Fowler's A Dictionary of Modern English Usage, Hart's Rules, and The Oxford Guide to English Usage—also recommend -ize. However, Robert Allan's Pocket Fowler's Modern English Usage considers either spelling to be acceptable anywhere but the US."

ref. https://en.wikipedia.org/wiki/American_and_British_English_spelling_differences#-ise,_-ize_(-isation,_-ization)

It appears Spanish and Italian have a z or two, so not sure where the French gets -iser, maybe from German?

Can further bikeshedding around specific keywords and naming be moved to an internals thread? I'm following this issue for progress updates on the feature, and this discussion is starting to get a bit noisy.
