Go: proposal: spec: generic programming facilities

Created on 14 Apr 2016  ·  816 Comments  ·  Source: golang/go

This issue proposes that Go should support some form of generic programming.
It has the Go2 label, since for Go1.x the language is more or less done.

Accompanying this issue is a general generics proposal by @ianlancetaylor that includes four specific flawed proposals of generic programming mechanisms for Go.

The intent is not to add generics to Go at this time, but rather to show people what a complete proposal would look like. We hope this will be of help to anyone proposing similar language changes in the future.

Go2 LanguageChange NeedsInvestigation Proposal generics

Most helpful comment

Let me preemptively remind everybody of our https://golang.org/wiki/NoMeToo policy. The emoji party is above.

All 816 comments

CL https://golang.org/cl/22057 mentions this issue.

Let me preemptively remind everybody of our https://golang.org/wiki/NoMeToo policy. The emoji party is above.

There is the Summary of Go Generics Discussions, which tries to provide an overview of discussions from different places. It also provides some examples of how to solve problems where you would want to use generics.

There are two "requirements" in the linked proposal that may complicate the implementation and reduce type safety:

  • Define generic types based on types that are not known until they are instantiated.
  • Do not require an explicit relationship between the definition of a generic type or function and its use. That is, programs should not have to explicitly say type T implements generic G.

These requirements seem to exclude e.g. a system similar to Rust's trait system, where generic types are constrained by trait bounds. Why are these needed?

It becomes tempting to build generics into the standard library at a very low level, as in C++ std::basic_string<char, std::char_traits<char>, std::allocator<char>>. This has its benefits—otherwise nobody would do it—but it has wide-ranging and sometimes surprising effects, as in incomprehensible C++ error messages.

The problem in C++ arises from type checking generated code. There needs to be an additional type check before code generation. The C++ concepts proposal enables this by allowing the author of generic code to specify the requirements of a generic type. That way, compilation can fail type checking before code generation and simple error messages can be printed. The problem with C++ generics (without concepts) is that the generic code _is_ the specification of the generic type. That's what creates the incomprehensible error messages.

Generic code should not be the specification of a generic type.

@tamird It is an essential feature of Go's interface types that you can define a non-interface type T and later define an interface type I such that T implements I. See https://golang.org/doc/faq#implements_interface . It would be inconsistent if Go implemented a form of generics for which a generic type G could only be used with a type T that explicitly said "I can be used to implement G."

I'm not familiar with Rust, but I don't know of any language that requires T to explicitly state that it can be used to implement G. The two requirements you mention do not mean that G cannot impose requirements on T, just as I imposes requirements on T. The requirements just mean that G and T can be written independently. That is a highly desirable feature for generics, and I cannot imagine abandoning it.

@ianlancetaylor https://doc.rust-lang.org/book/traits.html explains Rust's traits. While I think they're a good model in general, they would be a bad fit for Go as it exists today.

@sbunce I also thought that concepts were the answer, and you can see the idea scattered through the various proposals before the last one. But it is discouraging that concepts were originally planned for what became C++11, and it is now 2016, and they are still controversial and not particularly close to being included in the C++ language.

Would there be value on the academic literature for any guidance on evaluating approaches?

The only paper I've read on the topic is Do developers benefit from generic types? (paywall sorry, you might google your way to a pdf download) which had the following to say

Consequently, a conservative interpretation of the experiment is that generic types can be considered as a tradeoff between the positive documentation characteristics and the negative extensibility characteristics. The exciting part of the study is that it showed a situation where the use of a (stronger) static type system had a negative impact on the development time while at the same time the expected benefit – the reduction of type error fixing time – did not appear. We think that such tasks could help in future experiments in identifying the impact of type systems.

I see that https://github.com/golang/go/issues/15295 also references Lightweight, flexible object-oriented generics.

If we were going to lean on academia to guide the decision I think it would be better to do an up front literature review, and probably decide early if we would weigh empirical studies differently from ones relying on proofs.

Please see: http://dl.acm.org/citation.cfm?id=2738008 by Barbara Liskov:

The support for generic programming in modern object-oriented programming languages is awkward and lacks desirable expressive power. We introduce an expressive genericity mechanism that adds expressive power and strengthens static checking, while remaining lightweight and simple in common use cases. Like type classes and concepts, the mechanism allows existing types to model type constraints retroactively. For expressive power, we expose models as named constructs that can be defined and selected explicitly to witness constraints; in common uses of genericity, however, types implicitly witness constraints without additional programmer effort.

I think what they did there is pretty cool - I'm sorry if this is the incorrect place to post, but I couldn't find a place to comment in /proposals and I didn't find an appropriate issue here.

It could be interesting to have one or more experimental transpilers - a compiler from Go-with-generics source code to Go 1.x.y source code.
I mean - there is too much talk and arguing for opinions, and no one is writing source code that _tries_ to implement _some kind_ of generics for Go.

Just to get knowledge and experience with Go and generics - to see what works and what doesn't.
If none of the Go generics solutions turn out to be really good, then: no generics for Go.

Can the proposal also include the implications on binary size and memory footprint? I would expect that there will be code duplication for each concrete value type so that compiler optimizations work on them. I hope for a guarantee that there will be no code duplication for concrete pointer types.

I offer a Pugh decision matrix. My criteria include perspicuity impacts (source complexity, size). I also force-ranked the criteria to determine their weights. Your own may vary, of course. I used "interfaces" as the default alternative and compared this to "copy/paste" generics, template-based generics (I had in mind something like how the D language works), and something I called runtime-instantiation-style generics. I'm sure this is a vast oversimplification. Nonetheless, it may spark some ideas on how to evaluate choices... this should be a public link to my Google Sheet, here

Pinging @yizhouzhang and @andrewcmyers so they can voice their opinions about genus like generics in Go. It sounds like it could be a good match :)

The generics design we came up with for Genus has static, modular type checking, does not require predeclaring that types implement some interface, and comes with reasonable performance. I would definitely look at it if you're thinking about generics for Go. It does seem like a good fit from my understanding of Go.

Here is a link to the paper that doesn't require ACM Digital Library access:
http://www.cs.cornell.edu/andru/papers/genus/

The Genus home page is here: http://www.cs.cornell.edu/projects/genus/

We haven't released the compiler publicly yet, but we are planning to do that fairly soon.

Happy to answer any questions people have.

In terms of @mandolyte's decision matrix, Genus scores a 17, tied for #1. I would add some more criteria to score, though. For example, modular type checking is important, as others such as @sbunce observed above, but template-based schemes lack it. The technical report for the Genus paper has a much larger table on page 34, comparing various generics designs.

I just went through the whole Summary of Go Generics document, which was a helpful summary of previous discussions. The generics mechanism in Genus does not, to my mind, suffer from the problems identified for C++, Java, or C#. Genus generics are reified, unlike in Java, so you can find out types at run time. You can also instantiate on primitive types, and you don't get implicit boxing in the places you really don't want it: arrays of T where T is a primitive. The type system is closest to Haskell and Rust -- actually a bit more powerful, but I think also intuitive. Primitive specialization ala C# is not currently supported in Genus but it could be. In most cases, specialization can be determined at link time, so true run-time code generation would not be required.

CL https://golang.org/cl/22163 mentions this issue.

A way to constrain generic types that doesn't require adding new language concepts: https://docs.google.com/document/d/1rX4huWffJ0y1ZjqEpPrDy-kk-m9zWfatgCluGRBQveQ/edit?usp=sharing.

Genus looks really cool and it's clearly an important advancement of the art, but I don't see how it would apply to Go. Does anyone have a sketch of how it would integrate with the Go type system/philosophy?

The issue is the go team is stonewalling attempts. The title clearly states the intentions of the go team. And if that wasn't enough to deter all takers, the features demanded of such a broad domain in the proposals by ian make it clear that if you want generics then they don't want you. It is asinine to even attempt dialog with the go team. To those looking for generics in go, I say fracture the language. Begin a new journey- many will follow. I've already seen some great work done in forks. Organize yourselves, rally around a cause

If anyone wants to try to work up a generics extension to Go based on the Genus design, we are happy to help. We don't know Go well enough to produce a design that harmonizes with the existing language. I think the first step would be a straw-man design proposal with worked-out examples.

@andrewcmyers hoping that @ianlancetaylor will work with you on that. Just having some examples to look at would help a lot.

I've read through the Genus paper. To the extent that I understand it, it seems nice for Java, but doesn't seem like a natural fit for Go.

One key aspect of Go is that when you write a Go program, most of what you write is code. This is different from C++ and Java, where much more of what you write is types. Genus seems to be mostly about types: you write constraints and models, rather than code. Go's type system is very very simple. Genus's type system is far more complex.

The ideas of retroactive modeling, while clearly useful for Java, do not seem to fit Go at all. People already use adapter types to match existing types to interfaces; nothing further should be needed when using generics.

It would be interesting to see these ideas applied to Go, but I'm not optimistic about the result.

I'm not a Go expert, but its type system doesn't seem any simpler than pre-generics Java. The type syntax is a bit lighter-weight in a nice way but the underlying complexity seems about the same.

In Genus, constraints are types but models are code. Models are adapters, but they adapt without adding a layer of actual wrapping. This is very useful when you want to, say, adapt an entire array of objects to a new interface. Retroactive modeling lets you treat the array as an array of objects satisfying the desired interface.

I wouldn't be surprised if it were more complicated than (pre-generics) Java's in a type theoretic sense, even though it's simpler to use in practice.

Relative complexity aside, they're different enough that Genus couldn't map 1:1. No subtyping seems like a big one.

If you're interested:

The briefest summary of the relevant philosophical/design differences I mentioned is contained in the following FAQ entries:

Unlike most languages, the Go spec is very short and clear about the relevant properties of the type system: start at https://golang.org/ref/spec#Constants and read straight through to the section titled "Blocks" (all of which is less than 11 pages printed).

Unlike Java and C# generics, the Genus generics mechanism is not based on subtyping. On the other hand, it seems to me that Go does have subtyping, but structural subtyping. That is also a good match for the Genus approach, which has a structural flavor rather than relying on predeclared relationships.

I don't believe that Go has structural subtyping.

While two types whose underlying type is identical can be substituted for one another (https://play.golang.org/p/cT15aQ-PFr), this does not extend to two types that share a common subset of fields (https://play.golang.org/p/KrC9_BDXuh).


Thanks, I was misinterpreting some of the language about when types implement interfaces. Actually, it looks to me as if Go interfaces, with a modest extension, could be used as Genus-style constraints.

That's exactly why I pinged you: Genus seems like a much better approach than Java/C#-style generics.

There were some ideas with regards to specializing on the interface types; e.g. the _package templates_ approach "proposals" 1 2 are examples of it.

tl;dr: the generic package with interface specialization would look like:

package set
type E interface { Equal(other E) bool }
type Set struct { items []E }
func (s *Set) Add(item E) { ... }

Version 1. with package scoped specialization:

package main
import items set[[E: *Item]]

type Item struct { ... }
func (a *Item) Equal(b *Item) bool { ... }

var xs items.Set
xs.Add(&Item{})

Version 2. the declaration scoped specialization:

package main
import set

type Item struct { ... }
func (a *Item) Equal(b *Item) bool { ... }

var xs set.Set[[E: *Item]]
xs.Add(&Item{})

The package-scoped generics will prevent people from significantly abusing the generics system, since the usage is limited to basic algorithms and data structures. It basically prevents building new language abstractions and functional code.

The declaration-scoped specialization has more possibilities, at the cost of making it more prone to abuse, and it is more verbose. But functional code would be possible, e.g.:

type E interface{}
func Reduce(zero E, items []E, fn func(a, b E) E) E { ... }

Reduce[[E: int]](0, []int{1,2,3,4}, func(a, b int)int { return a + b } )
// there are probably ways to have some aliases (or similar) to make it less verbose
alias ReduceInt Reduce[[E: int]]
func ReduceInt Reduce[[E: int]]

The interface specialization approach has interesting properties:

  • Already existing packages using interfaces would be specializable. e.g. I would be able to call sort.Sort[[Interface:MyItems]](...) and have the sorting work on the concrete type instead of interface (with potential gains from inlining).
  • Testing is simplified, I only have to assure that the generic code works with interfaces.
  • It's easy to state how it works. i.e. imagine that [[E: int]] replaces all declarations of E with int.

But, there are verbosity issues when working across packages:

package op
import "set"

type E interface{}
func Union(a, b set.Set[[set.E: E]]) set.Set[[set.E: E]] {
    result := set.New[[set.E: E]]()
    ...
}

_Of course, the whole thing is simpler to state than to implement. Internally there are probably tons of problems and ways how it could work._

_PS, to the grumblers on slow generics progress, I applaud the Go Team for spending more time on issues that have a bigger benefit to the community e.g. compiler/runtime bugs, SSA, GC, http2._

@egonelbre your point that package-level generics will prevent "abuse" is a really important one that I think most people overlook. That plus their relative semantic and syntactic simplicity (only the package and import constructs are affected) make them very attractive for Go.

@andrewcmyers interesting that you think Go interfaces work as Genus-style constraints. I would have thought they still have the problem that you can't express multi-type-parameter constraints with them.

One thing I just realized, however, is that in Go you can write an interface inline. So with the right syntax you could put the interface in scope of all the parameters and capture multi-parameter constraints:

type [V, E] Graph [V interface { Edges() E }, E interface { Endpoints() (V, V) }] ...

I think the bigger problem with interfaces as constraints is that methods are not as pervasive in Go as in Java. Built-in types do not have methods. There is no set of universal methods like those in java.lang.Object. Users don't typically define methods like Equals or HashCode on their types unless they specifically need to, because those methods don't qualify a type for use as map keys, or in any algorithm that needs equality.

(Equality in Go is an interesting story. The language gives your type "==" if it meets certain requirements (see https://golang.org/ref/spec#Logical_operators, search for "comparable"). Any type with "==" can serve as a map key. But if your type doesn't deserve "==", then there is nothing you can write that will make it work as a map key.)

Because methods aren't pervasive, and because there is no easy way to express properties of the built-in types (like what operators they work with), I suggested using code itself as the generic constraint mechanism. See the link in my comment of April 18, above. This proposal has its problems, but one nice feature is that generic numeric code could still use the usual operators, instead of cumbersome method calls.

The other way to go is to add methods to types that lack them. You can do this in the existing language in a much lighter way than in Java:

type Int int
func (i Int) Less(j Int) bool { return i < j }

The Int type "inherits" all the operators and other properties of int. Though you have to cast between the two to use Int and int together, which can be a pain.

Genus models could help here. But they would have to be kept very simple. I think @ianlancetaylor was too narrow in his characterization of Go as writing more code, fewer types. The general principle is that Go abhors complexity. We look at Java and C++ and are determined never to go there. (No offense.)

So one quick idea for a model-like feature would be: have the user write types like Int above, and in generic instantiations allow "int with Int", meaning use type int but treat it like Int. Then there is no overt language construct called model, with its keyword, inheritance semantics, and so on. I don't understand models well enough to know whether this is feasible, but it is more in the spirit of Go.

@jba We certainly agree with the principle of avoiding complexity. "As simple as possible but no simpler." I would probably leave some Genus features out of Go on those grounds, at least at first.

One of the nice things about the Genus approach is that it handles built-in types smoothly. Recall that primitive types in Java don't have methods, and Genus inherits this behavior. Instead, Genus treats primitive types _as if_ they had a fairly large suite of methods for the purpose of satisfying constraints. A hash table requires that its keys can be hashed and compared, but all the primitive types satisfy this constraint. So type instantiations like Map[int, boolean] are perfectly legal with no further fuss. There is no need to distinguish between two flavors of integers (int vs Int) to achieve this. However, if int were not equipped with enough operations for some uses, we would use a model almost exactly like the use of Int above.

Another thing worth mentioning is the idea of "natural models" in Genus. You ordinarily don't have to declare a model to use a generic type: if the type argument satisfies the constraint, a natural model is automatically generated. Our experience is that this is the usual case; declaring explicit, named models is normally not needed. But if a model were needed — for example, if you wanted to hash ints in a nonstandard way — then the syntax is similar to what you suggested: Map[int with fancyHash, boolean]. I would argue that Genus is syntactically light in normal use cases but with power in reserve when needed.

@egonelbre What you're proposing here looks like virtual types, which are supported by Scala. There is an ECOOP'97 paper by Kresten Krab Thorup, "Genericity in Java with virtual types", which explores this direction. We also developed mechanisms for virtual types and virtual classes in our work ("J&: nested intersection for scalable software composition", OOPSLA'06).

Since literal initializations are pervasive in Go, I had to wonder what a function literal would look like. I suspect that the code to handle this largely exists in Go generate, fix, and rename. Maybe it will inspire someone :-)

// the (generic) func type definition
type Sum64 func (X, Y) float64 {
    return float64(X) + float64(Y)
}

// instantiate one, positionally
i := 42
var j uint = 86
sum := &Sum64{i, j}

// instantiate one, by named parameter types
sum := &Sum64{X: int, Y: uint}

// now use it...
result := sum(i, j) // result is 128

Ian's proposal demands too much. We cannot possibly develop all the features at once; the work will exist in an unfinished state for many months.

In the meantime, the unfinished project cannot be called the official Go language until it is done, because that would risk fragmenting the ecosystem.

So the question is how to plan this.

Also, a huge part of the project would be developing the reference corpus:
the actual generic collections, algorithms, and other things, written in such a way that we all agree they are idiomatic, while using the new Go 2.0 features.

A possible syntax?

// Module defining generic type
module list(type t)

type node struct {
    next *node
    data t
}
// Module using generic type:
import (
    intlist "module/path/to/list" (int)
    funclist "module/path/to/list" (func (int) int)
)

l := intlist.New()
l.Insert(5)

@md2perpe, syntax is not the hard part of this issue. In fact, it is by far the easiest. Please see the discussion and linked documents above.

@md2perpe We have discussed parametrizing entire packages ("modules") as a route to genericity internally - it does seem to be a way to reduce syntactic overhead. But it has other issues; e.g., it's not clear how one would parametrize a package with types that are not package-level. But the idea may still be worth exploring in detail.

I'd like to share a perspective: In a parallel universe all Go function-signatures have always been constrained to mention only interface types, and instead of demand for generics today, there's one for a way to avoid the indirection associated with interface values. Think of how you'd solve that problem (without changing the language). I have some ideas.

@thwd So the library author would continue using interfaces, but without the type switches and type assertions needed today? And the library user would simply pass in concrete types, as if the library used those types as-is... and then the compiler would reconcile the two? And if it couldn't, it would state why (such as: the modulo operator was used in the library, but the user supplied a slice of something).

Am I close? :-)

@mandolyte yes! let's exchange emails as to not pollute this thread. You can reach me at "me at thwd dot me". Anyone else reading this who might be interested; shoot me an email and I'll add you to the thread.

It would be a great feature for the type system and the collection library.
A potential syntax:

type Element<T> struct {
    prev, next *Element<T>
    list *List<T>
    value T
}
type List<E> struct {
    root Element<E>
    len int
}

For interface

type Collection<E> interface {
    Size() int
    Add(e E) bool
}

With supertype or subtype bounds on a type parameter:

func contain(l List<parent E>, e E) bool
<V> func (c Collection<child E>)Map(fn func(e E) V) Collection

The above, as it would be written in Java:

boolean contain(List<? super E> l, E e);
<V> Collection map(Function<? extends E, V> mapFunc);

@leaxoy as said before, the syntax is not the hard part here. See discussion above.

Just be aware that the cost of interface is unbelievably huge.

Please elaborate on why you think the cost of interfaces is "unbelievably" large.
It shouldn't be worse than C++'s non-specialized virtual calls.

@minux I can't speak to the performance costs, but in relation to code quality: interface{} can't be verified at compile time, while generics can. In my opinion this is, in most cases, more important than the performance issues of using interface{}.

@xoviat

There's really no downside to this because the processing required for this doesn't slow the compiler down.

There are (at least) two downsides.

One is increased work for the linker: if the specializations for two types result in the same underlying machine code, we don't want to compile and link two copies of that code.

Another is that parameterized packages are less expressive than parameterized methods. (See the proposals linked from the first comment for detail.)

Is hyper type a good idea?

func getAddFunc (aType type) func(aType, aType) aType {
    return func(a, b aType) aType {
        return a+b
    }
}

Is hyper type a good idea?

What you are describing here is just type parameterization ala C++ (i.e., templates). It doesn't type-check in a modular way because there is no way to know that the type aType has a + operation from the given information. Constrained type parameterization as in CLU, Haskell, Java, Genus is the solution.

@golang101 I have a detailed proposal along those lines. I'll send a CL to add it to the list, but it's unlikely to be adopted.

CL https://golang.org/cl/38731 mentions this issue.

@andrewcmyers

It doesn't type-check in a modular way because there is no way to know that the type aType has a + operation from the given information.

Sure there is. That constraint is implicit in the definition of the function, and constraints of that form can be propagated to all of the (transitive) compile-time callers of getAddFunc.

The constraint isn't part of a Go _type_ — that is, it cannot be encoded in the type system of the run-time portion of the language — but that doesn't mean that it can't be evaluated in a modular fashion.

Added my proposal as 2016-09-compile-time-functions.md.

I do not expect that it will be adopted, but it can at least serve as an interesting reference point.

@bcmills I feel that compile time functions are a powerful idea, apart from any consideration of generics. For example, I wrote a sudoku solver that needs a popcount. To speed that up, I precalculated the popcounts for the various possible values and stored it as Go source. This is something one might do with go:generate. But if there were a compile time function, that lookup table could just as well be calculated at compile time, keeping the machine generated code from having to be committed to the repo. In general, any sort of memoizable mathematical function is a good fit for pre-made lookup tables with compile time functions.

More speculatively, one might also want to, e.g., download a protobuf definition from a canonical source and use that to build types at compile time. But maybe that's too much to be allowed to do at compile time?

I feel like compile time functions are too powerful and too weak at the same time: they are too flexible and can error out in strange ways / slow down compiling the way C++ templates do, but on the other hand they are too static and difficult to adapt to things like first-class functions.

For the second part, I don't see a way you can make something like a "slice of functions that process slices of a particular type and return one element", or in an ad-hoc syntax []func<T>([]T) T, which is very easy to do in essentially every statically typed functional language. What is really needed is values being able to take on parametric types, not some source-code level code generation.

@bunsim

For the second part, I don't see a way you can make something like a "slice of functions that process slices of a particular type and return one element",

If you're talking about a single type parameter, in my proposal that would be written:

const func SliceOfSelectors(T gotype) gotype { return []func([]T)T (type) }

If you're talking about mixing type parameters and value parameters, no, my proposal does not allow for that: part of the point of compile-time functions is to be able to operate on unboxed values, and the kind of run-time parametricity I think you're describing pretty much requires boxing of values.

Yup, but in my opinion that kind of thing that requires boxing should be allowed while keeping type-safety, perhaps with special syntax that indicates the "boxedness". A big part of adding "generics" is really to avoid the type-unsafety of interface{} even when the overhead of interface{} is not avoidable. (Perhaps only allow certain parametric type constructs with pointer and interface types that are "already" boxed? Java's Integer etc boxed objects are not completely a bad idea, though slices of value types are tricky)

I just feel like compile-time functions are very C++-like, and would be extremely disappointing for people like me expecting Go2 to have a modern parametric type system grounded in a sound type theory rather than a hack based on manipulating pieces of source code written in a language without generics.

@bcmills
What you propose will not be modular. If module A uses module B, which uses module C, which uses module D, a change to how a type parameter is used in D may need to propagate all the way back to A, even if the implementer of A has no idea that D is in the system. The loose coupling provided by module systems will be weakened, and software will be more brittle. This is one of the problems with C++ templates.

If, on the other hand, type signatures do capture the requirements on type parameters, as in languages like CLU, ML, Haskell, or Genus, a module can be compiled without any access to the internals of modules it depends on.

@bunsim

A big part of adding "generics" is really to avoid the type-unsafety of interface{} even when the overhead of interface{} is not avoidable.

"not avoidable" is relative. Note that the overhead of boxing is point # 3 in Russ's post from 2009 (https://research.swtch.com/generic).

expecting Go2 to have a modern parametric type system grounded in a sound type theory rather than a hack based on manipulating pieces of source code

A good "sound type theory" is descriptive, not prescriptive. My proposal in particular draws from second-order lambda calculus (along the lines of System F), where gotype stands for the kind type and the entire first-order type system is hoisted into the second-order ("compile-time") types.

It's also related to the modal type theory work of Davies, Pfenning, et al at CMU. For some background, I would start with A Modal Analysis of Staged Computation and Modal Types as Staging Specifications for Run-time Code Generation.

It's true that the underlying type theory in my proposal is less formally specified than in the academic literature, but that doesn't mean it isn't there.

@andrewcmyers

If module A uses module B, which uses module C, which uses module D, a change to how a type parameter is used in D may need to propagate all the way back to A, even if the implementer of A has no idea that D is in the system.

That is already true in Go today: if you look carefully, you'll note that the object files generated by the compiler for a given Go package include information on the portions of the transitive dependencies that affect the exported API.

The loose coupling provided by module systems will be weakened, and software will be more brittle.

I've heard that same argument used to advocate for exporting interface types rather than concrete types in Go APIs, and the reverse turns out to be more common: premature abstraction overconstrains the types and hinders extension of APIs. (For one such example, see #19584.) If you want to rely on this line of argument I think you need to provide some concrete examples.

This is one of the problems with C++ templates.

As I see it, the main problems with C++ templates are (in no particular order):

  • Excessive syntactic ambiguity.
    a. Ambiguity between type names and value names.
    b. Excessively broad support for operator overloading, leading to weakened ability to infer constraints from operator usage.
  • Over-reliance on overload resolution for metaprogramming (or, equivalently, ad-hoc evolution of metaprogramming support).
    a. Especially w.r.t. reference-collapsing rules.
  • Overly-broad application of the SFINAE principle, leading to very difficult-to-propagate constraints and far too many implicit conditionals in type definitions, leading to very difficult error reporting.
  • Overuse of token-pasting and textual inclusion (the C preprocessor) instead of AST substitution and higher-order compilation artifacts (which thankfully seems to be at least partly addressed with Modules).
  • Lack of good bootstrapping languages for C++ compilers, leading to poor error-reporting in long-lived compiler lineages (e.g. the GCC toolchain).
  • The doubling (and sometimes multiplication) of names resulting from mapping sets of operators onto differently-named "concepts" (rather than treating the operators themselves as the fundamental constraints).

I've been coding in C++ off and on for a decade now and I'm happy to discuss the deficiencies of C++ at length, but the fact that program dependencies are transitive has never been remotely near the top of my list of complaints.

On the other hand, needing to update a chain of O(N) dependencies just to add a single method to a type in module A and be able to use it in module D? That's the kind of problem that slows me down on a regular basis. Where parametricity and loose coupling conflict, I'll choose parametricity any day.

Still, I firmly believe that metaprogramming and parametric polymorphism should be separated, and C++'s confusion of them is the root cause of why C++ templates are annoying. Simply put, C++ attempts to implement a type-theory idea using essentially macros on steroids, which is very problematic since programmers like to think of templates as real parametric polymorphism and are hit by unexpected behavior. Compile-time functions are a great idea for metaprogramming and replacing the hack that's go generate, but I don't believe it should be the blessed way of doing generic programming.

"Real" parametric polymorphism helps loose coupling and shouldn't conflict with it. It should also be tightly integrated with the rest of the type system; for example it probably should be integrated into the current interface system, so that many usages of interface types could be rewritten into things like:

func <T io.Reader> ReadAll(in T)

which should avoid interface overhead (like Rust's usage), though in this case it's not very useful.

A better example might be the sort package, where you could have something like

func <T Comparable> Sort(slice []T)

where Comparable is simply a good old interface that types can implement. Sort can then be called on a slice of value types that implement Comparable, without boxing them into interface types.

@bcmills Transitive dependencies unconstrained by the type system are, in my view, at the core of some of your complaints about C++. Transitive dependencies are not so much of a problem if you control modules A, B, C, and D. In general, you are developing module A and may only be weakly aware that module D is down there, and conversely, the developer of D may be unaware of A. If module D now, without making any change to the declarations visible in D, starts using some new operator on a type parameter—or merely uses that type parameter as a type argument to a new module E with its own implicit constraints—those constraints will percolate to all clients, who may not be using type arguments satisfying the constraints. Nothing tells developer D they are blowing it. In effect, you've got a kind of global type inference, with all the difficulties of debugging that that entails.

I believe the approach we took in Genus [PLDI'15] is much better. Type parameters have explicit, but lightweight, constraints (I take your point about supporting operation constraints; CLU showed how to do that right all the way back in 1977). Genus type checking is fully modular. Generic code can either be compiled only once to optimize code space or specialized to particular type arguments for good performance.

@andrewcmyers

If module D now, without making any change to the declarations visible in D, starts using some new operator on a type parameter […] [clients] may not be using type arguments satisfying the constraints. Nothing tells developer D they are blowing it.

Sure, but that's already true for lots of implicit constraints in Go independent of any generic programming mechanism.

For example, a function may receive a parameter of interface type and initially call its methods sequentially. If that function later changes to call those methods concurrently (by spawning additional goroutines), the constraint "must be safe for concurrent use" is not reflected in the type system.

Similarly, the Go type system today does not specify constraints on variable lifetimes: some implementations of io.Writer erroneously assume they can keep a reference to the passed-in slice and read from it later (e.g. by doing the actual write asynchronously in a background goroutine), but that causes data races if the caller of Write attempts to reuse the same backing slice for a subsequent Write.

Or a function using a type-switch might take a different path if a method is added to one of the types in the switch.

Or a function checking for a particular error code might break if the function generating the error changes the way it reports that condition. (For example, see https://github.com/golang/go/issues/19647.)

Or a function checking for a particular error type might break if wrappers around the error are added or removed (as happened in the standard net package in Go 1.5).

Or the buffering on a channel exposed in an API may change, introducing deadlocks and/or races.

...and so on.

Go is not unusual in this regard: implicit constraints are ubiquitous in real-world programs.


If you try to capture all of the relevant constraints in explicit annotations, then you end up going in one of two directions.

In one direction, you build a complex, extremely comprehensive system of dependent types and annotations, and the annotations end up recapitulating a substantial portion of the code they annotate. As I hope you can clearly see, that direction is not at all in keeping with the design of the rest of the Go language: Go favors specification simplicity and code conciseness over comprehensive static typing.

In the other direction, the explicit annotations would cover only a subset of the relevant constraints for a given API. Now the annotations provide a false sense of security: the code can still break due to changes in implicit constraints, but the presence of explicit constraints misleads the developer into thinking that any "type-safe" change also maintains compatibility.


It's not obvious to me why that kind of API stability needs to be accomplished through explicit source code annotation: the sort of API stability you're describing can also be achieved (with less redundancy in the code) through source code analysis. For example, you could imagine having the api tool analyze the code and output a much richer set of constraints than can be expressed in the formal type system of the language, and giving the guru tool the ability to query the computed set of constraints for any given API function, method, or parameter.

@bcmills Aren't you making the perfect the enemy of the good? Yes, there are implicit constraints that are hard to capture in a type system. (And good modular design avoids introducing such implicit constraints when feasible.) It would be great to have an all-encompassing analysis that can statically check all the properties you want checked -- and provide clear, non-misleading explanations to programmers about where they are making mistakes. Even with the recent progress on automatic error diagnosis and localization, I'm not holding my breath. For one thing, analysis tools can only analyze the code you give them. Developers do not always have access to all the code that might link with theirs.

So where there are constraints that are easy to capture in a type system, why not give programmers the ability to write them down? We have 40 years of experience programming with statically constrained type parameters. This is a simple, intuitive static annotation that pays off.

Once you start building larger software that layers software modules, you start wanting to write comments explaining such implicit constraints anyway. Assuming there is a good, checkable way to express them, why not then let the compiler in on the joke so it can help you?

I note that some of your examples of other implicit constraints involve error handling. I think our lightweight static checking of exceptions [PLDI 2016] would address these examples.

@andrewcmyers

So where there are constraints that are easy to capture in a type system, why not give programmers the ability to write them down?
[…]
Once you start building larger software that layers software modules, you start wanting to write comments explaining such implicit constraints anyway. Assuming there is a good, checkable way to express them, why not then let the compiler in on the joke so it can help you?

I actually agree completely with this point, and I often use a similar argument in regards to memory management. (If you're going to have to document invariants on aliasing and retention of data anyway, why not enforce those invariants at compile-time?)

But I would take that argument one step further: the converse also holds! If you _don't_ need to write a comment for a constraint (because it is obvious in context to the humans who work with the code), why should you need to write that comment for the compiler? Regardless of my personal preferences, Go's use of garbage-collection and zero-values clearly indicate a bias toward "not requiring programmers to state obvious invariants". It may be the case that Genus-style modeling can express many of the constraints that would be expressed in comments, but how does it fare in terms of eliding the constraints that would also be elided in comments?

It seems to me that Genus-style models are more than just comments anyway: they actually change the semantics of the code in some cases, they don't just constrain it. Now we would have two different mechanisms — interfaces and type-models — for parameterizing behaviors. That would represent a major shift in the Go language: we have discovered some best practices for interfaces over time (such as "define interfaces on the consumer side") and it's not obvious that that experience would translate to such a radically different system, even neglecting Go 1 compatibility.

Furthermore, one of the excellent properties of Go is that its specification can be read (and largely understood) in an afternoon. It isn't obvious to me that a Genus-style system of constraints could be added to the Go language without complicating it substantially — I would be curious to see a concrete proposal for changes to the spec.

Here's an interesting data point for "metaprogramming". It would be nice for certain types in the sync and atomic packages — namely, atomic.Value and sync.Map — to support CompareAndSwap methods, but those only work for types that happen to be comparable. The rest of the atomic.Value and sync.Map APIs remain useful without those methods, so for that use-case we either need something like SFINAE (or other kinds of conditionally-defined APIs) or have to fall back to a more complex hierarchy of types.

I want to drop this creative syntax idea of using aboriginal syllabics.

@bcmills Can you explain more about these three points?

  1. Ambiguity between type names and value names.
  2. Excessively broad support for operator overloading.
  3. Over-reliance on overload resolution for metaprogramming.

@mahdix Sure.

  1. Ambiguity between type names and value names.

This article gives a good introduction. In order to parse a C++ program, you must know which names are types and which are values. When you parse a templated C++ program, you do not have that information available for members of the template parameters.

A similar issue arises in Go for composite literals, but the ambiguity is between values and field names rather than values and types. In this Go code:

const a = someValue
x := T{a: b}

is a here a literal field name, or is it the constant a being used as a map key or array index?

  2. Excessively broad support for operator overloading

Argument-dependent lookup is a good place to start. Overloads of operators in C++ can occur as methods on the receiver type or as free functions in any of several namespaces, and the rules for resolving those overloads are quite complex.

There are many ways to avoid that complexity, but the simplest (as Go currently does) is to disallow operator overloading entirely.

  3. Over-reliance on overload resolution for metaprogramming

The <type_traits> library is a good place to start. Check out the implementation in your friendly neighborhood libc++ to see how overload resolution comes into play.

If Go ever supports metaprogramming (and even that is very doubtful), I would not expect it to involve overload resolution as the fundamental operation for guarding conditional definitions.

@bcmills
As I've never used C++, could you shed some light as to where operator overloading via implementing predefined 'interfaces' stands in terms of complexity. Python and Kotlin are examples of this.

I think that ADL itself is a huge problem with C++ templates that went mostly unmentioned, because it forces the compiler to delay resolution of all names until instantiation time, and can result in very subtle bugs, in part because "ideal" and "lazy" compilers behave differently here and the standard permits it. The fact that it supports operator overloading is not really the worst part of it by far.

This proposal is based on templates; wouldn't a system for macro expansion be enough? I'm not talking about go generate or projects like gotemplate. I'm talking about something more like this:

macro MacroFoo(stmt ast.Statement) {
    ....
}

Macro could reduce the boilerplate and the use of reflection.

I think that C++ is a good enough example that generics shouldn't be based on templates or macros. Especially considering Go has stuff like anonymous functions that really can't be "instantiated" at compile-time except as an optimization.

@samadadi you can get your point across without saying "what is wrong with you people". Having said that, the argument of complexity has been brought up multiple times already.

Go is not the first language to try to achieve simplicity by omitting support for parametric polymorphism (generics), despite that feature becoming increasingly important over the past 40 years -- in my experience, it's a staple of second-semester programming courses.

The trouble with not having the feature in the language is that programmers end up resorting to workarounds that are even worse. For example, Go programmers often write code templates that are macro-expanded to produce the "real" code for various desired types. But the real programming language is the one you type, not the one the compiler sees. So this strategy effectively means you are using a (no longer standard) language that has all the brittleness and code bloat of C++ templates.

As noted on https://blog.golang.org/toward-go2 we need to provide "experience reports", so that need and design goals can be determined. Could you take a few minutes and document the macro cases you have observed?

Please keep this bug on topic and civil. And again, https://golang.org/wiki/NoMeToo. Please only comment if you have unique and constructive information to add.

@mandolyte It's very easy to find detailed explanations on the web advocating code generation as a (partial) substitute for generics:
https://appliedgo.net/generics/
https://www.calhoun.io/using-code-generation-to-survive-without-generics-in-go/
http://blog.ralch.com/tutorial/golang-code-generation-and-generics/

Clearly there are a lot of people out there taking this approach.

@andrewcmyers, there are some limitations as well as convenience caveats when using code generation. But generally, if you believe this approach is best (or good enough), I think the effort to allow similar generation from within the Go toolchain would be a blessing.

  • Compiler optimization may be a challenge in this case, but the runtime will be consistent, and code maintenance, user experience (simplicity), standard best practices, and unified code standards can all be preserved. Moreover, the whole toolchain stays the same, apart from debugging tools (profilers, step debuggers, etc.), which will see lines of code that were not written by the developer; but that's a little like stepping into ASM code while debugging, only it's readable code :).

Downside: there is no precedent (that I know of) for this approach inside the Go toolchain.

To sum it up: consider code generation as part of the build process. It shouldn't be too complicated, it is quite safe and runtime-optimized, and it keeps the language simple with only a very small change.

IMHO: it's a compromise easily achieved, at a low price.

To be clear, I don't consider macro-style code generation, whether done with gen, cpp, gofmt -r, or other macro/template tools, to be a good solution to the generics problem even if standardized. It has the same problems as C++ templates: code bloat, lack of modular type checking, and difficulty debugging. It gets worse as you start, as is natural, building generic code in terms of other generic code. To my mind, the advantages are limited: it would keep life relatively simple for the Go compiler writers and it does produce efficient code — unless there is instruction cache pressure, a frequent situation in modern software!

I think the point was rather that code generation is used to substitute for generics, so generics should seek to solve most of those use cases.


No doubt code generation is not a REAL solution, even if wrapped up with some in-language support to give it the look and feel of being "part of the language".

My point was that it is VERY cost effective.

Btw, if you look at some of the code-generation substitutes, you can easily see how they could have been much more readable and faster, and could have avoided some wrong concepts (e.g. iterating over arrays of pointers vs. values), had the language given them better tools for this.

And perhaps that's a better path to pursue in the short term, one that would not feel like a patch: before settling on the "best generics support that will also be idiomatic Go" (I believe some of the implementations above would take years to integrate fully), implement some sets of in-language functions that are needed anyhow (like a built-in deep copy for structures). That would make these code-generation solutions much more usable.

After reading through the generics proposals by @bcmills and @ianlancetaylor, I've made the following observations:

Compile-time Functions and First Class Types

I like the idea of compile-time evaluation, but I don't see the benefit of limiting it to pure functions. This proposal introduces the builtin gotype, but limits its use to const functions and any data types defined within function scope. From the perspective of a library user, instantiation is limited to constructor functions like "New", and leads to function signatures like this one:

const func New(K, V gotype, hashfn Hashfn(K), eqfn Eqfn(K)) func()*Hashmap(K, V, hashfn, eqfn)

The return type here can't be separated into a function type because we are limited to pure functions. Additionally, the signature defines two new "types" in the signature itself (K and V), which means that in order to parse a single parameter, we must parse the whole parameter list. This is fine for a compiler, but I wonder if it adds complexity to a package's public API.

Type Parameters in Go

Parameterized types allow for most of the use cases of generic programming, e.g the ability to define generic data structures and operations over different data types. The proposal exhaustively lists enhancements to the type-checker that would be needed to produce better compilation errors, faster compile times, and smaller binaries.

Under the section "Type Checker," the proposal also lists some useful type restrictions to speed up the process, like "Indexable", "Comparable", "Callable", "Composite", etc... What I don't understand is why not allow the user the specify their own type restrictions? The proposal states that

There are no restrictions on how parameterized types may be used in a parameterized function.

However, if the identifiers had more constraints tied to them, wouldn't that have the effect of assisting the compiler? Consider:

HashMap[Anything,Anything] // Compiler must always compare the implementation and usages to make sure this is valid.

vs

HashMap[Comparable,Anything] // Compiler can first filter out instantiations for incomparable types before running an exhaustive check.

Separating type constraints from type parameters and allowing user-defined constraints could also improve readability, making generic packages easier to understand. Interestingly, the flaws listed at the end of the proposal regarding the complexity of type deduction rules could actually be mitigated if those rules are explicitly defined by the user.

@smasher164

I like the idea of compile-time evaluation, but I don't see the benefit of limiting it to pure functions.

The benefit is that it makes separate compilation possible. If a compile-time function can modify global state, then the compiler must either have that state available, or journal the edits to it in such a way that the linker can sequence them at link time. If a compile-time function can modify local state, then we would need some way to track which state is local vs. global. Both add complexity, and it's not obvious that either would provide enough benefit to offset it.

@smasher164

What I don't understand is why not allow the user the specify their own type restrictions?

The type restrictions in that proposal correspond to operations in the syntax of the language. That reduces the surface area of the new features: there is no need to specify additional syntax for constraining types, because all of the syntactic constraints can be inferred from usage.

if the identifiers had more constraints tied to them, wouldn't that have the effect of assisting the compiler?

The language should be designed for its users, not for the compiler-writers.

there is no need to specify additional syntax for constraining types because all of the syntactic constraints can be inferred from usage.

This is the route C++ went down. It requires a global program analysis to identify the relevant usages. Code cannot be reasoned about by programmers in a modular fashion, and error messages are verbose and incomprehensible.

It can be so easy and lightweight to specify the operations needed. See CLU (1977) for an example.

@andrewcmyers

It requires a global program analysis to identify the relevant usages. Code cannot be reasoned about by programmers in a modular fashion,

That's using a particular definition of "modular", which I don't think is as universal as you seem to assume. Under the 2013 proposal, each function or type would have an unambiguous set of constraints inferred bottom-up from imported packages, in exactly the same way that the run-time (and run-time constraints) of non-parametric functions are derived bottom-up from call chains today.

You could presumably query the inferred constraints using guru or a similar tool, and it could answer those queries using local information from the exported package metadata.

and error messages are verbose and incomprehensible.

We have a couple of examples (GCC and MSVC) demonstrating that naively-generated error messages are incomprehensible. I think it's a stretch to assume that error messages for implicit constraints are intrinsically bad.

I think the biggest downside to inferred constraints is that they make it easy to use a type in a way that introduces a constraint without fully understanding it. In the best case, this just means that your users may run into unexpected compile-time failures, but in the worst case, this means you can break the package for consumers by introducing a new constraint inadvertently. Explicitly-specified constraints would avoid this.

I also personally don't feel that explicit constraints are out of line with the existing Go approach, since interfaces are explicit runtime type constraints, although they have limited expressivity.

We have a couple of examples (GCC and MSVC) demonstrating that naively-generated error messages are incomprehensible. I think it's a stretch to assume that error messages for implicit constraints are intrinsically bad.

The list of compilers on which non-local type inference - which is what you propose - results in bad error messages is quite a bit longer than that. It includes SML, OCaml, and GHC, where a lot of effort has already gone into improving their error messages and where there is at least some explicit module structure helping out. You might be able to do better, and if you come up with an algorithm for good error messages with the scheme you propose, you'll have a nice publication. As a starting point toward that algorithm, you might find our POPL 2014 and PLDI 2015 papers on error localization useful. They are more or less the state of the art.

because all of the syntactic constraints can be inferred from usage.

Doesn't that limit the breadth of type-checkable generic programs? For example, note that the type-params proposal doesn't specify an "Iterable" constraint. In the current language, this would correspond either to a slice or channel, but a composite type (say a linked list) wouldn't necessarily satisfy those requirements. Defining an interface like

type Iterable[T] interface {
    Next() T
}

helps the linked list case, but now the builtin slice and channel types must be extended to satisfy this interface.

A constraint that says "I accept the set of all types that are either Iterables, slices, or channels" seems like a win-win-win situation for the user, package author, and compiler implementer. The point I'm trying to make is that constraints are a superset of syntactically valid programs, and some may not make sense from a language perspective, but only from an API perspective.

The language should be designed for its users, not for the compiler-writers.

I agree, but maybe I should have phrased it differently. Improved compiler efficiency could be a side effect of user-defined constraints. The main benefit would be readability, since the user has a better idea of their API behavior than the compiler anyways. The tradeoff here is that generic programs would have to be slightly more explicit about what they accept.

What if instead of

type Iterable[T] interface {
    Next() T
}

we separated out the idea of "interfaces" from "constraints". Then we might have

type T generic

type Iterable class {
    Next() T
}

where "class" means a Haskell-style typeclass, not a Java-style class.

Having "typeclasses" separate from "interfaces" might help clear up some of the non-orthogonality of the two ideas. Then Sortable (ignoring sort.Interface) might look something like:

type T generic

type Comparable class {
    Less(a, b T) bool
}

type Sortable class {
    Next() Comparable
}

Here is some feedback to the "Type classes and concepts" section in Genus by @andrewcmyers and its applicability to Go.

This section addresses the limitations of type classes and concepts, stating

first, constraint satisfaction must be uniquely witnessed

I'm not sure I understand this limitation. Wouldn't tying a constraint to separate identifiers prevent it from being unique to a given type? It looks to me that the "where" clause in Genus essentially constructs a type/constraint from a given constraint, but this seems analogous to instantiating a variable from a given type. A constraint in this way resembles a kind.

Here's a dramatic simplification of constraint definitions, adapted to Go:

kind Any interface{} // accepts any type that satisfies interface{}.
type T Any // Declare a type of Any kind. Also binds it to an identifier.
kind Eq T == T // accepts any type for which equality is defined.

So a map declaration would appear as:

type Map[K Eq, V Any] struct {
}

where in Genus, it could look like:

type Map[K, V] where Eq[K], Any[V] struct {
}

and in the existing Type-Params proposal it would look like:

type Map[K,V] struct {
}

I think we can all agree that allowing constraints to leverage the existing type system can both remove overlap between features of the language, and make it easy to understand new ones.

and second, their models define how to adapt a single type, whereas in a language with subtyping, each adapted type in general represents all of its subtypes.

This limitation seems less pertinent to Go since the language already has good conversion rules between named/unnamed types and overlapping interfaces.

The given examples propose models as a solution, which seems to be a useful but not necessary feature for Go. If a library expects a type to implement http.Handler for example, and the user wants different behaviors depending on the context, writing adapters is simple:

type handlerFunc func(http.ResponseWriter, *http.Request)
func (f handlerFunc) ServeHTTP(w http.ResponseWriter, r *http.Request) { f(w, r) }

In fact, this is what the standard library does.

@smasher164

first, constraint satisfaction must be uniquely witnessed
I'm not sure I understand this limitation. Wouldn't tying a constraint to separate identifiers prevent it from being unique to a given type?

The idea is that in Genus you can satisfy the same constraint with the same type in more than one way, unlike in Haskell. For example, if you have a HashSet[T], you can write HashSet[String] to hash strings in the usual way but HashSet[String with CaseInsens] to hash and compare strings with the CaseInsens model, which presumably treats strings in a case-insensitive way. Genus actually distinguishes these two types; this might be overkill for Go. Even if the type system does not keep track of it, it still seems important to be able to override the default operations provided by a type.

kind Any interface{} // accepts any type that satisfies interface{}.
type T Any // Declare a type of Any kind. Also binds it to an identifier.
kind Eq T == T // accepts any type for which equality is defined.
type Map[K Eq, V Any] struct { ...
}

The moral equivalent of this in Genus would be:

constraint Any[T] {}
// Just use Any as if it were a type
constraint Eq[K] {
   boolean equals(K);
}
class Map[K, V] where Eq[K] { ... }

In Familia we would merely write:

interface Eq {
    boolean equals(This);
}
class Map[K where Eq, V] { ... }

Edit: retracting this in favor of a reflect-based solution as described in #4146. A generics-based solution as I described below grows linearly in the number of compositions. While a reflect-based solution will always have a performance handicap, it can optimize itself at runtime so that the handicap is constant regardless of the number of compositions.

This isn't a proposal but a potential use-case to consider when designing a proposal.

Two things are common in Go code today

  • wrapping an interface value to provide additional functionality (wrapping an http.ResponseWriter for a framework)
  • having optional methods that sometimes interface values have (like Temporary() bool on net.Error)

These are both good and useful but they don't mix. Once you've wrapped an interface, you've lost the ability to access any methods not defined on the wrapping type. That is, given

type MyError struct {
  error
  extraContext extraContextType
}
func (m MyError) Error() string {
  return fmt.Sprintf("%s: %s", m.extraContext, m.error)
}

If you wrap an error in that struct you hide any additional methods on the original error.

If you don't wrap the error in the struct, you can't provide the extra context.

Let's say that the accepted generic proposal let you define something like the following (arbitrary syntax which I tried to make intentionally ugly so no one will focus on it)

type MyError generic_over[E which_is_a_type_satisfying error] struct {
  E
  extraContext extraContextType
}
func (m MyError) Error() string {
  return fmt.Sprintf("%s: %s", m.extraContext, m.E)
}

By leveraging embedding we could embed any concrete type satisfying the error interface and both wrap it and have access to its other methods. Unfortunately this only gets us part way there.

What we really need here is to take an arbitrary value of the error interface and embed its dynamic type.

This immediately raises two concerns

  • the type would have to be created at runtime (likely needed by reflect anyway)
  • type creation would have to panic if the error value is nil

If those haven't soured you on the thought, you also need a mechanism to "leap" over the interface to its dynamic type, either by an annotation in the list of generic parameters to say "always instantiate on the dynamic type of interface values" or by some magic function that can only be called during type instantiation to unbox the interface so that its type and value can be correctly spliced in.

Without that you're just instantiating MyError on the error type itself not the dynamic type of the interface.

Let's say that we have a magic unbox function to pull out and (somehow) apply the information:

func wrap(ec extraContext, err error) error {
  if err == nil {
    return nil
  }
  return MyError{
    E: unbox(err),
    extraContext: ec,
  }
}

Now let's say that we have a non-nil error, err, whose dynamic type is *net.DNSError. Then this

wrapped := wrap(getExtraContext(), err)
// wrapped's dynamic type is a MyError embedding E=*net.DNSError
_, ok := wrapped.(net.Error)
fmt.Println(ok)

would print true. But if the dynamic type of err had been *os.PathError it would have printed false.

I hope the proposed semantic is clear given the obtuse syntax used in the demonstration.

I also hope there's a better way to solve that problem with less mechanism and ceremony, but I think that the above could work.

@jimmyfrasche If I'm understanding what you want, it's a wrapper-free adaptation mechanism. You want to be able to expand the set of operations a type offers without wrapping it in another object that hides the original. This is a functionality that Genus offers.

@andrewcmyers no.

Structs in Go allow embedding. If you add a field to a struct with a type but no name, it does two things: it creates a field with the same name as the type, and it allows transparent dispatch to any methods of that type. That sounds awfully like inheritance, but it's not. If you had a type T with a method Foo(), then the following are equivalent

type S struct {
  T
}

and

type S struct {
  T T
}
func (s S) Foo() {
  s.T.Foo()
}

(when Foo is called its "this" is always of type T).

You can also embed interfaces in structs. This gives the struct all the methods in the interface's contract (though you need to assign some dynamic value to the implicit field or it will cause a panic with the equivalent of a null pointer exception)

Go has interfaces that define a contract in terms of a type's methods. A value of any type that satisfies the contract can be boxed in a value of that interface. An interface value is a pointer to the internal type manifest (dynamic type) and a pointer to a value of that dynamic type (dynamic value). You can do type assertions on an interface value to (a) get the dynamic value, if you assert to its non-interface type, or (b) get a new interface value, if you assert to a different interface that the dynamic value also satisfies. It's common to use the latter to "feature test" an object to see if it supports optional methods. To reuse an earlier example, some errors have a Temporary() bool method, so you can check whether any error is temporary with:

func isTemp(err error) bool {
  if t, ok := err.(interface{ Temporary() bool}); ok {
    return t.Temporary()
  }
  return false
}

It's also common to wrap a type in another type to provide extra features. This works well with non-interface types. When you wrap an interface, though, you also hide any methods you don't know about, and you can't recover them with "feature test" type assertions: the wrapped type only exposes the required methods of the interface, even if its dynamic type has optional ones. Consider:

type A struct {}
func (A) Foo()
func (A) Bar()

type I interface {
  Foo()
}

type B struct {
  I
}

var i I = B{A{}}

You can't call Bar on i or even know that it exists unless you know that i's dynamic type is a B so you can unwrap it and get at the I field to type assert on that.

This causes real problems, especially dealing with common interfaces like error, or Reader.

If there were a way to lift the dynamic type and value out of an interface (in some safe, controlled manner), you could parameterize a new type with that, set the embedded field to the value, and return a new interface. Then you get a value that satisfies the original interface, has any enhanced functionality you want to add, but the rest of the methods of the original dynamic type are still there to be feature tested.

@jimmyfrasche Indeed. What Genus allows you to do is use one type to satisfy an "interface" contract without boxing it. The value still has its original type and its original operations. Further, the program can specify which operations the type should use to satisfy the contract -- by default, they are the operations the type provides, but the program can supply new ones if the type doesn't have the necessary operations. It can also replace the operations the type would use.

@jimmyfrasche @andrewcmyers For that use-case, see also https://github.com/golang/go/issues/4146#issuecomment-318200547.

@jimmyfrasche To me, it sounds like the key problem here is getting the dynamic type/value of a variable. Putting aside embedding, a simplified example would be

type MyError generic_over[E which_is_a_type_satisfying error] struct {
  e E
  extraContext extraContextType
}
func (m MyError) Error() string {
  return fmt.Sprintf("%s: %s", m.extraContext, m.e)
}

The value that is assigned to e needs to have a dynamic (or concrete) type of something like *net.DNSError, which implements error. Here are a couple of possible ways that a future language change might tackle this problem:

  1. Have a magical unbox-like function that uncovers a variable's dynamic value. This applies to any type that is not concrete, for example unions.
  2. If the language change supports type variables, provide a means to get the dynamic type of the variable. With type information, we can write the unbox function ourselves. For example,
func unbox(v T1) T2 {
    t := dynTypeOf(v)
    return v.(t)
}

wrap can be written in the same way as before, or as

func wrap(ec extraContext, err error) error {
  if err == nil {
    return nil
  }
  t := dynTypeOf(err)
  return MyError{
    e: err.(t),
    extraContext: ec,
  }
}
  3. If the language change supports type constraints, here's an alternative idea:
type E1 which_is_a_type_satisfying error
type E2 which_is_a_type_satisfying error

func wrap(ec extraContext, err E1) E2 {
  if err == nil {
    return nil
  }
  return MyError{
    e: err,
    extraContext: ec,
  }
}

In this example, we accept a value of any type that implements error. Any user of wrap that expects an error will receive one. However, the type of the e inside MyError is the same as that of the err that is passed in, which is not limited to an interface type. If one wanted the same behavior as 2,

var iface error = ...
wrap(getExtraContext(), unbox(iface))

Since no one else seems to have done it, I'd like to point out the very obvious "experience reports" for generics as called for by https://blog.golang.org/toward-go2.

The first is the built-in map type:

m := make(map[string]string)

The next is the built-in chan type:

c := make(chan bool)

Finally, the standard library is riddled with interface{} alternatives where generics would work more safely:

  • heap.Interface (https://golang.org/pkg/container/heap/#Interface)
  • list.Element (https://golang.org/pkg/container/list/#Element)
  • ring.Ring (https://golang.org/pkg/container/ring/#Ring)
  • sync.Pool (https://golang.org/pkg/sync/#Pool)
  • upcoming sync.Map (https://tip.golang.org/pkg/sync/#Map)
  • atomic.Value (https://golang.org/pkg/sync/atomic/#Value)

There may be others I'm missing. The point being, each of the above are where I would expect generics to be useful.

(Note: I am not including sort.Sort here because it is an excellent example of how interfaces can be used instead of generics.)

http://www.yinwang.org/blog-cn/2014/04/18/golang
I think generics are important; otherwise, we cannot handle similar types. Sometimes interfaces cannot solve the problem.

Simple syntax and a simple type system are the important pros of Go. If you add generics, the language will become an ugly mess like Scala or Haskell. Also this feature will attract pseudo-academic fanboys, who will eventually transform community values from "Let's get this done" to "Let's talk about CS theory and math". Avoid generics, it's a path to abyss.

@bxqgit please keep this civil. There's no need to insult anyone.

As for what the future will bring, we'll see, but I do know that while for 98% of my time I don't need generics, whenever I get to need them, I wish I could use them. How are they used vs how are they wrongfully used is a different discussion. Educating users should be part of the process.

@bxqgit
There are situations in which generics are needed, like generic data structures (trees, stacks, queues, ...) or generic functions (Map, Filter, Reduce, ...), and these things are unavoidable. Using interfaces instead of generics in these situations adds a huge amount of complexity for both the code writer and the code reader, and it also hurts run-time efficiency, so it would be much more rational to add generics to the language than to keep using interfaces and reflect to write complex and inefficient code.

@bxqgit Adding generics doesn't necessarily add complexity to the language; this can be achieved with simple syntax too. With generics, you are adding a variable compile-time type constraint, which is very useful with data structures, as @riwogo said.

The current interface system in Go is very useful, but it falls short when you need, for example, a general implementation of a list, which with interfaces requires a run-time type check. If you add generics, the generic type can be substituted at compile time with the actual type, making the run-time check unnecessary.

Also, remember, the people behind go, develop the language using what you call "CS theory and math", and are also the people that "are getting this done".

Also, remember, the people behind go, develop the language using what you call "CS theory and math", and are also the people that "are getting this done".

Personally I don't see much CS theory and math in Go language design. It's a fairly primitive language, which is good in my opinion. Also those people you are talking about decided to avoid generics and got things done. If it works just fine, why change anything? Generally, I think that constantly evolving and extending language's syntax is a bad practice. It only adds complexity which leads to chaos of Haskell and Scala.

Templates are complicated, but generics are simple.

Look at the functions SortInts, SortFloats, SortStrings in the sort package. Or SearchInts, SearchFloats, SearchStrings. Or the Len, Less, and Swap methods of byName in package io/ioutil. Pure boilerplate copying.

The copy and append functions exist because they make slices much more useful. Generics would mean that these functions are unnecessary. Generics would make it possible to write similar functions for maps and channels, not to mention user created data types. Granted, slices are the most important composite data type, and that’s why these functions were needed, but other data types are still useful.

My vote is no to generalized application generics, yes to more built-in generic functions like append and copy that work on multiple base types. Perhaps sort and search could be added for the collection types?

For my applications the only type missing is an unordered set (https://github.com/golang/go/issues/7088), I'd like this as a built-in type so it gets the generic typing like slice and map. Put the work into the compiler (benchmarking for each base type and a selected set of struct types then tuning for best performance) and keep additional annotation out of the application code.

smap built-in instead of sync.Map too please. From my experience using interface{} for runtime type safety is a design flaw. Compile-time type checking is a major reason to use Go at all.

@pciet

From my experience using interface{} for runtime type safety is a design flaw.

Can you just write a small (type safe) wrapper ?
https://play.golang.org/p/tG6hd-j5yx

@pierrre That wrapper is better than a reflect.TypeOf(item).AssignableTo(type) check. But writing your own type with map + sync.Mutex or sync.RWMutex is the same complexity without the type assertion that sync.Map requires.

My synchronized map use has been for global maps of mutexes with a var myMapLock = sync.RWMutex{} next to it instead of making a type. This could be cleaner. A generic built-in type sounds right to me but takes work I can't do, and I prefer my approach instead of type asserting.

I suspect that the negative visceral reaction to generics that many Go programmers seem to have arises because their main exposure to generics was via C++ templates. This is unfortunate because C++ got generics tragically wrong from day 1 and has been compounding the mistake since. Generics for Go could be a lot simpler and less error-prone.

It would be disappointing to see Go becoming more and more complex by adding built-in parameterized types. It would be better just to add the language support for programmers to write their own parameterized types. Then the special types could just be provided as libraries rather than cluttering the core language.

@andrewcmyers "Generics for Go could be a lot simpler and less error-prone." --- like generics in C#.

It is disappointing to see Go becoming more and more complex by adding built-in parameterized types.

Despite the speculation in this issue, I think this is extremely unlikely to happen.

The exponent on the complexity measure of parameterized types is variance. Go's types (excepting interfaces) are invariant, and this can and should be kept the rule.

A mechanical, compiler-assisted "type copy-paster" generics implementation would solve 99% of the problem in a fashion true to Go's underlying principles of shallowness and non-surprise.

By the way, this and dozens of other viable ideas have been discussed before, and some even culminated in good, workable approaches. At this point, I'm borderline tinfoil-hatting about how they all disappeared silently into the void.


Yes, you can have generics without templates. Templates are a form of advanced parametric polymorphism mostly for metaprogramming facilities.

@ianlancetaylor Rust allows for a program to implement a trait T on an existing type Q, provided that their crate defines either T or Q.

Just a thought: I wonder if Simon Peyton Jones (yes, of Haskell fame) and/or the Rust developers might be able to help. Rust and Haskell have probably the two most advanced type systems of any production languages, and Go should learn from them.

There's also Phillip Wadler, who worked on Generic Java, which eventually lead to the generics implementation Java has today.

@tarcieri I don’t think that Java’s generics are very good, but they are battle-tested.

@DemiMarie We've had Andrew Myers pitching in here, fortunately.

Based on my personal experience, I think that people who know a great deal about different languages and different type systems can be very helpful in examining ideas. But for producing the ideas in the first place, what we need are people who are very familiar with Go, how it works today, and how it can reasonably work in the future. Go is designed to be, among other things, a simple language. Importing ideas from languages like Haskell or Rust, which are significantly more complicated than Go, is unlikely to be a good fit. And in general ideas from people who have not already written a reasonable amount of Go code are unlikely to be a good fit; not that the ideas will be bad as such, just that they won't fit well with the rest of the language.

For example, it's important to understand that Go already has partial support for generic programming using interface types and already has (almost) complete support using the reflect package. While those two approaches to generic programming are unsatisfactory for various reasons, any proposal for generics in Go has to interact well with them while simultaneously addressing their shortcomings.

In fact, while I'm here, a while back I thought about generic programming with interfaces for a while, and came up with three reasons why it fails to be satisfactory.

  1. Interfaces require all operations to be expressed as methods. That makes it painful to write an interface for builtin types, such as channel types. All channel types support the <- operator for send and receive operations, and it's easy enough to write an interface with Send and Receive methods, but in order to assign a channel value to that interface type you have to write boilerplate Send and Receive methods. Those boilerplate methods will look precisely the same for each different channel type, which is tedious.

  2. Interfaces are dynamically typed, and so errors combining different statically typed values are only caught at run time, not compile time. For example, a Merge function that merges two channels into a single channel using their Send and Receive methods will require the two channels to have elements of the same type, but that check can only be done at run time.

  3. Interfaces are always boxed. For example, there is no way to use interfaces to aggregate a pair of other types without putting those other types into interface values, requiring additional memory allocations and pointer chasing.

I am happy to kibitz on generics proposals for Go. Perhaps also of interest is the increasing amount of research on generics at Cornell lately, seemingly relevant to what might be done with Go:

http://www.cs.cornell.edu/andru/papers/familia/ (Zhang & Myers, OOPSLA'17)
http://io.livecode.ch/learn/namin/unsound (Amin & Tate, OOPSLA'16)
http://www.cs.cornell.edu/projects/genus/ (Zhang et al., PLDI '15)
https://www.cs.cornell.edu/~ross/publications/shapes/shapes-pldi14.pdf (Greenman, Muehlboeck & Tate, PLDI '14)

In benchmarking map vs. slice for an unordered set type I wrote out separate unit tests for each, but with interface types I can combine those two test lists into one:

type Item interface {
    Equal(Item) bool
}

type Set interface {
    Add(Item) Set
    Remove(Item) Set
    Combine(...Set) Set
    Reduce() Set
    Has(Item) bool
    Equal(Set) bool
    Diff(Set) Set
}

Testing removing an item:

type RemoveCase struct {
    Set
    Item
    Out Set
}

func TestRemove(t *testing.T) {
    for i, c := range RemoveCases {
        if c.Out.Equal(c.Set.Remove(c.Item)) == false {
            t.Fatalf("%v failed", i)
        }
    }
}

This way I’m able to put my previously separate cases together into one slice of cases without any trouble:

var RemoveCases = []RemoveCase{
    {
        Set: MapPathSet{
            &Path{{0, 0}}:         {},
            &Path{{0, 1}, {1, 1}}: {},
        },
        Item: Path{{0, 0}},
        Out: MapPathSet{
            &Path{{0, 1}, {1, 1}}: {},
        },
    },
    {
        Set: SlicePathSet{
            {{0, 0}},
            {{0, 1}, {1, 1}},
        },
        Item: Path{{0, 0}},
        Out: SlicePathSet{
            {{0, 1}, {1, 1}},
        },
    },
}

For each concrete type I had to define the interface methods. For example:

func (the MapPathSet) Remove(an Item) Set {
    return MapDelete(the, an.(Path))
}
func (the SlicePathSet) Remove(an Item) Set {
    return SliceDelete(the, an.(Path))
}

These generic tests could use a proposed compile-time type check:

type Item generic {
    Equal(Item) bool
}
func (the SlicePathSet) Remove(an Item) Set {
    return SliceDelete(the, an)
}

Source: https://github.com/pciet/pathsetbenchmark

Thinking about that more, it doesn't seem like a compile-time type check would be possible for such a test since you'd have to run the program to know if a type is passed to the corresponding interface method.

So what about a "generic" type that is an interface and has an invisible type assertion added by the compiler when used concretely?

@andrewcmyers The "Familia" paper was interesting (and way over my head). A key notion was inheritance. How would the concepts change for a language like Go which relies on composition instead of inheritance?

Thanks. The inheritance part doesn't apply to Go -- if you are only interested in generics for Go, you can stop reading after section 4 of the paper. The main thing about that paper that is relevant to Go is that it shows how to use interfaces both in the way they are used for Go now and as constraints on types for generic abstractions. Which means you get the power of Haskell type classes without adding an entirely new construct to the language.

@andrewcmyers Can you give an example of how this would look in Go?

The main thing about that paper that is relevant to Go is that it shows how to use interfaces both in the way they are used for Go now and as constraints on types for generic abstractions.

My understanding is that the Go interface defines a constraint on a type (e.g. "this type can be compared for equality using the 'type Comparable interface' because it satisfies having an Eq method"). I'm not sure I understand what you mean by a type constraint.

I'm not familiar with Haskell but reading a quick overview has me guessing that types that fit a Go interface would fit into that type class. Can you explain what is different about Haskell type classes?

A concrete comparison between Familia and Go would be interesting. Thanks for sharing your paper.

Go interfaces can be viewed as describing a constraint on types, via structural subtyping. However, that type constraint, as is, is not expressive enough to capture the constraints you want for generic programming. For example, you can't express the type constraint named Eq in the Familia paper.

Some thoughts on the motivation for more generic programming facilities in Go:

So there’s my generic test list that doesn’t really need anything added to the language. In my opinion that generic type I proposed doesn’t satisfy the Go goal of straightforward understanding, it doesn’t have much to do with the generally accepted programming term, and doing the type assertion there wasn’t ugly since a panic on failure is fine. I’m satisfied with Go’s generic programming facilities already for my need.

But sync.Map is a different use case. There’s a need in the standard library for a mature generic synchronized map implementation beyond just a struct with a map and mutex. For type handling we can wrap it with another type that sets a non-interface{} type and does a type assertion, or we can add a reflect check internally so items following the first must match the same type. Both have runtime checks, the wrapping requires rewriting each method for each use type but it adds a compile-time type check for input and hides the output type assertion, and with the internal check we still have to do an output type assertion anyway. Either way we’re doing interface conversions without any actual use of interfaces; interface{} is a hack of the language and won’t be clear to new Go programmers. Although json.Marshal is good design in my opinion (including the ugly but sensible struct tags).

I’ll add that since sync.Map is in the standard library ideally it should swap out the implementation for the measured use cases where the simple struct is more performant. The unsynchronized map is a common early pitfall in Go concurrent programming and a standard library fix should just work.

The regular map has just a compile-time type check and doesn’t require any of this scaffolding. I argue that sync.Map should be the same or shouldn’t be in the standard library for Go 2.

I proposed adding sync.Map to the list of built-in types and to do the same for future similar needs. But my understanding is giving Go programmers a way to do this without having to work on the compiler and go through the open source acceptance gauntlet is the idea behind this discussion. In my view fixing sync.Map is a real case that partially defines what this generics proposal should be.

If you add sync.Map as a built-in, then how far do you go? Do you special case every container?
sync.Map isn't the only container and some are better for some cases than others.

@Azareal: @chowey listed these in August:

Finally, the standard library is riddled with interface{} alternatives where generics would work more safely:

• heap.Interface (https://golang.org/pkg/container/heap/#Interface)
• list.Element (https://golang.org/pkg/container/list/#Element)
• ring.Ring (https://golang.org/pkg/container/ring/#Ring)
• sync.Pool (https://golang.org/pkg/sync/#Pool)
• upcoming sync.Map (https://tip.golang.org/pkg/sync/#Map)
• atomic.Value (https://golang.org/pkg/sync/atomic/#Value)

There may be others I'm missing. The point being, each of the above are where I would expect generics to be useful.

And I'd like the unordered set for types that can be compared for equality.

I'd like a lot of work put into a variable implementation in the runtime for each type based on benchmarking so that the best implementation possible is usually what's used.

I'm wondering if there are reasonable alternative implementations with Go 1 that achieve the same goal for these standard library types without interface{} and without generics.

golang interfaces and haskell type classes overcome two things (which are very great!):

1.) (Type Constraint) They group different types with one tag, the interface name
2.) (Dispatch) They offer to dispatch differently on each type for a given set of functions via interface implementation

But,

1.) Sometimes you want only anonymous groups, like a group of int, float64, and string. How should you name such an interface: NumericAndString?

2.) Very often, you do not want to dispatch differently for each type of an interface but to provide only one method for all listed types of an interface (Maybe possible with default methods of interfaces)

3.) Very often, you do not want to enumerate all possible types for a group. Instead you go the lazy way and say "I want all types T implementing some interface A", and the compiler then searches all the source files you edit and all the libraries you use to generate the appropriate functions at compile time.

Although the last point is possible in Go via interface polymorphism, it has the drawback of being run-time polymorphism involving casts, and there is no good way to restrict a function's parameter to types implementing more than one interface, or one of several interfaces. The Go way is to introduce new interfaces extending other interfaces (by interface nesting) to achieve something similar, but that is not best practice.

By the way, I concede to those who say that Go already has polymorphism, and for exactly that reason Go is no longer a simple language like C. It is a high-level systems programming language. So why not expand the polymorphism Go offers?

Here’s a library I started today for generic unordered set types: https://github.com/pciet/unordered

This gives in documentation and testing examples that type wrapper pattern (thanks @pierrre) for compile-time type safety and also has the reflect check for run-time type safety.

What needs are there for generics? My negative attitude toward standard library generic types earlier centered around the use of interface{}; my complaint could be solved with a package-specific type for interface{} (like type Item interface{} in pciet/unordered) that documents the intended un-expressible constraints.

I don’t see the need for an added language feature when documentation alone could get us there now. There are already large amounts of battle-tested code in the standard library that provide generic facilities (see https://github.com/golang/go/issues/23077).

Your code type-checks at runtime (and from that perspective it's in no way better than plain interface{}, if not worse). With generics you could have had the collection types with compile-time type checks.

@zerkms run-time checks can be turned off by setting asserting = false (this wouldn't go in the standard library); there's a use pattern for compile-time checks; and anyway a type check just looks at the interface struct (using an interface adds more expense than the type check does). If interface performance isn't adequate then you'll have to write your own type.

You're saying maximized-performance generic code is a key need. It hasn't been for my use cases, but maybe the standard library could become faster, and maybe others need such a thing.

run-time checks can be turned off by setting asserting = false

then nothing guarantees correctness

You're saying maximized-performance generic code is a key need.

I did not say that. Type safety would be a great deal. Your solution is still interface{}-infected.

but maybe the standard library could become faster, and maybe others need such a thing.

Maybe, if the core dev team is happy to implement whatever I need, on demand and quickly.

@pciet

I don’t see the need for an added language feature when just documentation could get us there now.

You say this, yet you have no problem using the generic language features in the form of slices and the make function.

I don’t see the need for an added language feature when just documentation could get us there now.

Then why bother using a statically typed language? You can use a dynamically typed language like Python and rely on documentation to make sure correct data types are sent to your API.

I think one of the advantages of Go is the facilities to enforce some constraints by the compiler to prevent future bugs. Those facilities can be extended (with generics support) to enforce some other constraints to prevent some more bugs in the future.

You say this, yet you have no problem using the generic language features in the form of slices and the make function.

I'm saying the existing features get us to a good, balanced point that does have generic programming solutions, and there should be strong, concrete reasons to change the Go 1 type system. Not how a change would improve the language, but what problems people are facing now that it would fix, such as maintaining a lot of run-time type switching on interface{} in the fmt and database standard library packages.

Then why bother using a statically typed language? You can use a dynamically typed language like Python and rely on documentation to make sure correct data types are sent to your API.

I've heard suggestions to write systems in Python instead of statically-typed languages, and some organizations do.

Most Go programmers using the standard library use types that can't be completely described without documentation or without looking at the implementation. Types with parametric sub-types, or general types with applied constraints, would only fix a subset of these cases programmatically and would duplicate a lot of work already done in the standard library.

In the proposal for sum types I suggested a build feature for the interface type switch: when an interface is used in a function or method, a build error is emitted if a possible value assigned to the interface does not match any case in the contained type switch.

A function/method taking an interface could then reject some types at build time by having no default case and no case for those types. This seems like a reasonable generic programming addition, if the feature is feasible to implement.
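A sketch (all names hypothetical) of what such a check would guard against: with today's semantics, an unhandled concrete type silently falls through a default-less type switch at run time, where the suggested feature would instead reject it at build time:

```go
package main

import "fmt"

type Shape interface{ Area() float64 }

type Square struct{ Side float64 }
type Circle struct{ R float64 }

func (s Square) Area() float64 { return s.Side * s.Side }
func (c Circle) Area() float64 { return 3 * c.R * c.R } // rough value, illustration only

// Describe has no default case and no Circle case. Today passing a
// Circle silently yields the fallthrough value at run time; under
// the suggested feature the call site would fail to build.
func Describe(s Shape) string {
	switch v := s.(type) {
	case Square:
		return fmt.Sprintf("square with area %g", v.Area())
	}
	return "unhandled shape"
}

func main() {
	fmt.Println(Describe(Square{Side: 3}))
	fmt.Println(Describe(Circle{R: 1}))
}
```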

If Go interfaces could capture the type of the implementer, there could be a form of generics that is completely compatible with current Go syntax - a single parameter form of generics (demonstration).

@dc0d for generic container types I believe that feature adds compile-time type checking without requiring a wrapper type: https://gist.github.com/pciet/36a9dcbe99f6fb71f5fc2d3c455971e5

@pciet You are right. In the provided code, sample No. 4 shows that the type is captured for slices and channels (and arrays), but not for maps, because there is one and only one type parameter: the implementer. Since a map needs two type parameters, wrapper interfaces are needed.

BTW I have to emphasize the demonstrative purpose of that code, as a line of thought. I'm no language designer. This is just a hypothetical way of thinking about the implementation of generics in Go:

  • Compatible with current Go
  • Simple (single generic type parameter, which _feels_ like _this_ in other OO, referring to current implementer)

Discussing genericity and all its possible use cases, while trying to minimize impact and maximize the important use cases and flexibility of expression, is a very complex analysis. I am not sure any of us will be able to distill it down to a short set of principles, a generative essence, but I am trying. Anyway, here are some of my initial thoughts from my _cursory_ perusal of this thread…

@adg wrote:

Accompanying this issue is a general generics proposal by @ianlancetaylor that includes four specific flawed proposals of generic programming mechanisms for Go.

Afaics, the linked section excerpted as follows fails to state a case of genericity lacking with current interfaces, _“There is no way to write a method that takes an interface for the caller supplied type T, for any T, and returns a value of the same type T.”_.

There is no way to write an interface with a method that takes an argument of type T, for any T, and returns a value of the same type.

So how else could the code at the call site type check that it has a type T as the result value? For example, the said interface may have a factory method for building type T. This is why we need to parametrise interfaces on type T.

Interfaces are not simply types; they are also values. There is no way to use interface types without using interface values, and interface values aren’t always efficient.

Agreed that since interfaces can’t currently be explicitly parametrised on the type T they operate on, the type T is not accessible to the programmer.

So this what typeclass bounds do at the function definition site taking as input a type T and having a where or requires clause stating the interface(s) that are required for type T. In many circumstances these interface dictionaries can be automatically monomorphised at compile-time so that no dictionary pointer(s) (for the interfaces) are passed into the function at runtime (monomorphisation which I presume the Go compiler applies to interfaces currently?). By ‘values’ in the above quote, I presume he means the input type T and not the dictionary of methods for the interface type implemented by type T.

If we then allow type parameters on data types (e.g. struct), then said type T above can be itself parameterised so we really have a type T<U>. Factories for such types which need to retain knowledge of U are called higher-kinded types (HKT).

Generics permit type-safe polymorphic containers.

C.f. also the issue of _heterogeneous_ containers discussed below. So by polymorphic we mean genericity of the value type of the container (e.g. element type of the collection), yet there’s also the issue of whether we can put more than one value type in the container simultaneously making them heterogeneous.


@tamird wrote:

These requirements seem to exclude e.g. a system similar to Rust's trait system, where generic types are constrained by trait bounds.

Rust’s trait bounds are essentially typeclass bounds.

@alex wrote:

Rust's traits. While I think they're a good model in general, they would be a bad fit for Go as it exists today.

Why do you think they’re a bad fit? Perhaps you’re thinking of the trait objects which employ runtime dispatch thus are less performant than monomorphism? But those can be considered separately from the typeclass bounds genericity principle (c.f. my discussion of heterogeneous containers/collections below). Afaics, Go’s interfaces are already trait-like bounds and accomplish the goal of typeclasses which is to late bind the dictionaries to the data types at the call site, rather than the anti-pattern of OOP which early binds (even if still at compile-time) dictionaries to data types (at instantiation/construction). Typeclasses can (at least a partial improvement of degrees-of-freedom) solve the Expression Problem which OOP can’t.

@jimmyfrasche wrote:

  • https://golang.org/doc/faq#covariant_types

I agree with the above link that typeclasses indeed aren’t subtyping and aren’t expressing any inheritance relationship. And agree with not unnecessarily conflating “genericity” (as a more general concept of reuse or modularity than parametric polymorphism) with inheritance as subclassing does.

However I do also want to point out that inheritance hierarchies (aka subtyping) are inevitable1 on the assignment to (function inputs) and from (function outputs) if the language supports unions and intersections, because for example an int ∨ string can accept assignment from an int or a string but neither can accept an assignment from an int ∨ string. Without unions afaik the only alternative ways to provide statically typed heterogeneous containers/collections are subclassing or existentially bounded polymorphism (aka trait objects in Rust and existential quantification in Haskell). Links above contain discussion about the tradeoffs between existentials and unions. Afaik, the only way to do heterogeneous containers/collections in Go now is to subsume all types to an empty interface{}, which throws away the typing information and would I presume require casts and runtime type inspection, which sort of2 defeats the point of static typing.
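A small illustration of that last point in today's Go (the function name is invented): a heterogeneous slice is possible only by subsuming everything to interface{}, after which recovering the values needs runtime type inspection, and any unhandled element type is silently ignored rather than rejected by the compiler:

```go
package main

import "fmt"

// SumNumbers adds up the numeric elements of a heterogeneous slice.
// The string element type-checks fine at compile time and is only
// discovered (and here silently skipped) at run time.
func SumNumbers(mixed []interface{}) float64 {
	total := 0.0
	for _, v := range mixed {
		switch x := v.(type) {
		case int:
			total += float64(x)
		case float64:
			total += x
		}
	}
	return total
}

func main() {
	fmt.Println(SumNumbers([]interface{}{1, "two", 3.0}))
}
```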

The “anti-pattern” to avoid is subclassing aka virtual inheritance (c.f. also “EDIT#2” about the issues with implicit subsumption and equality, etc).

1 Regardless of whether they’re matched structurally or nominally, because subtyping is due to the Liskov Substitution Principle based on comparative sets, and the direction of assignment for function inputs is opposite to that for return values; e.g. a type parameter of a struct or interface can’t reside in both the function inputs and return values unless it is invariant instead of co- or contra-variant.

2 Absolutism won’t apply because we can’t type check the universe of unbounded non-determinism. IOW, increasing static typing is in tension with algorithmic flexibility. So as I understand it, this thread is about choosing an optimum (“sweet spot”) limit to the level of static typing w.r.t. the genericity issues.

@andrewcmyers wrote:

Unlike Java and C# generics, the Genus generics mechanism is not based on subtyping.

It’s the inheritance and subclassing (not structural subtyping) that is the worst anti-pattern you don’t want to copy from Java, Scala, Ceylon, and C++ (unrelated to the problems with C++ templates).

@thwd wrote:

The exponent on the complexity measure of parameterized types is variance. Go's types (excepting interfaces) are invariant and this can and should be kept the rule.

Subtyping with immutability side-steps the complexity of covariance. Immutability also ameliorates some of the problems with subclassing (e.g. Rectangle vs. Square) but not others (e.g. implicit subsumption, equality, etc).

@bxqgit wrote:

Simple syntax and type system are the important pros of Go. If you add generics, language will become an ugly mess like Scala or Haskell.

Note that Scala attempts to merge OOP, subclassing, FP, generic modules, HKT, and typeclasses (via implicit) all into one PL. Perhaps typeclasses alone might be sufficient.

Haskell is not necessarily obtuse because of typeclass generics, but more likely because it’s enforcing pure functions every where and employing monadic category theory to model controlled imperative effects.

Thus I think it’s not correct to associate the obtuseness and complexity of those PLs with typeclasses in for example Rust. And let’s not blame typeclasses for Rust’s lifetimes+exclusive mutability borrowing abstraction.

Afaics, in the Semantics section of the _Type Parameters in Go_, the problem encountered by @ianlancetaylor is a conceptualization issue because he’s (afaics) apparently unwittingly reinventing typeclasses:

Can we merge SortableSlice and PSortableSlice to have the best of both worlds? Not quite; there is no way to write a parameterized function that supports either a type with a Less method or a builtin type. The problem is that SortableSlice.Less can not be instantiated for a type without a Less method, and there is no way to only instantiate a method for some types but not others.

The requires Less[T] clause for the typeclass bound (even if implicitly inferred by the compiler) on the Less method for []T is on T, not []T. The implementation of the Less[T] typeclass (which contains a Less method) for each T will either provide an implementation in the function body of the method or assign the < built-in function as the implementation. Yet I believe this requires a HKT U[T] if the methods of Sortable[U] need a type parameter U representing the implementing type, e.g. []T. Afair @keean has another way of structuring a sort, employing a separate typeclass for the value type T, which doesn’t require a HKT.

Note those methods for []T might be implementing a Sortable[U] typeclass, where U is []T.

(Technical aside: it may seem that we could merge SortableSlice and PSortableSlice by having some mechanism to only instantiate a method for some type arguments but not others. However, the result would be to sacrifice compile-time type safety, as using the wrong type would lead to a runtime panic. In Go one can already use interface types and methods and type assertions to select behavior at runtime. There is no need to provide another way to do this using type parameters.)

The selection of the typeclass bound at the call site is resolved at compile-time for a statically known T. If heterogeneous dynamic dispatch is needed then see the options I explained in my prior post.

I hope @keean can find time to come here and help explain typeclasses as he’s more expert and helped me to learn these concepts. I might have some errors in my explanation.

P.S. note for those who already read my prior post, note I edited it extensively about 10 hours after posting it (after some sleep) to hopefully make the points about heterogeneous containers more coherent.


The Cycles section appears to be incorrect. The runtime construction of the S[T]{e} instance of a struct has nothing to do with the selection of the implementation of the generic function called. He’s presumably thinking that the compiler doesn’t know if it’s specializing the implementation of the generic function for the type of the arguments, but all those types are known at compile-time.

Perhaps the Type Checking section specification could be simplified by studying @keean’s concept of a connected graph of distinct types as nodes for a unification algorithm. Any distinct types connected by an edge must have congruent types, with edges created for any types which connect via assignment or otherwise in the source code. If there’s union and intersection (from my prior post), then the direction of assignment has to be taken into account (somehow?). Each distinct unknown type starts off with a least upper bounds (LUB) of Top and a greatest lower bounds (GLB) of Bottom and then constraints can alter these bounds. Connected types have to have compatible bounds. Constraints should all be typeclass bounds.

In Implementation:

For example, it is always possible to implement parameterized functions by generating a new copy of the function for each instantiation, where the new function is created by replacing the type parameters with the type arguments.

I believe the correct technical term is monomorphisation.

This approach would yield the most efficient execution time at the cost of considerable extra compile time and increased code size. It’s likely to be a good choice for parameterized functions that are small enough to inline, but it would be a poor tradeoff in most other cases.

Profiling would tell the programmer which functions can most benefit from monomorphisation. Perhaps the Java Hotspot optimizer does monomorphisation optimization at runtime?

@egonelbre wrote:

There is Summary of Go Generics Discussions, which tries to provide an overview of discussions from different places.

The Overview section seems to imply that Java’s universal use of boxing references for instances in a container is the only axis of design diametrically opposing C++’s monomorphisation of templates. But typeclass bounds (which can also be implemented with C++ templates yet always monomorphised) are applied to functions not to container type parameters. Thus afaics the overview is missing the design axis for typeclasses wherein we can choose whether to monomorphise each typeclass bounded function. With typeclasses we always make programmers faster (less boilerplate) and can get a more refined balance between making compilers/execution faster/slower and code bloat greater/lesser. Per my prior post, perhaps the optimum would be if the choice of functions to monomorphise was profiler driven (automatically or more likely by annotation).

In the Problems : Generic Data Structures section:

Cons

  • Generic structures tend to accumulate features from all uses, resulting in increased compile times or code bloat or needing a smarter linker.

For typeclasses this is either not true or less of an issue, because interfaces only need to be implemented for data types which are supplied to functions which use those interfaces. Typeclasses are about late binding of implementation to interface, unlike OOP which binds every data type to its methods for the class implementation.

As well, not all the methods need to be put in a single interface. The requires clause (even if implicitly inferred by the compiler) on a typeclass bound for a function declaration can mix-and-match interfaces required.

  • Generic structures and the APIs that operate on them tend to be more abstract than purpose-built APIs, which can impose cognitive burden on callers

A counter argument which I think significantly ameliorates this concern is that the cognitive burden of learning an unbounded number of special case re-implementations of the essentially same generic algorithms, is unbounded. Whereas, learning the abstract generic APIs is bounded.

  • In-depth optimizations are very non-generic and context specific, hence it’s harder to optimize them in a generic algorithm.

This is not a valid con. The 80/20 rule says don’t add unbounded complexity (e.g. premature optimization) for code which when profiled doesn’t require it. The programmer is free to optimize in 20% of the cases whilst the remaining 80% get handled by the bounded complexity and cognitive load of the generic APIs.

What we’re really getting at here is the regularity of a language and generic APIs help, not hurt that. Those Cons are really not correctly conceptualized.

Alternative solutions:

  • use simpler structures instead of complicated structures

    • e.g. use map[int]struct{} instead of Set

Rob Pike (and I also watched him make that point in the video) seems to be missing the point that generic containers aren’t sufficient for making generic functions. We need that T in map[T] so we can pass the generic data type around in functions, for inputs, outputs, and for our own structs. Generics only on container type parameters are wholly insufficient for expressing generic APIs, and generic APIs are required for bounded complexity and cognitive load and for obtaining regularity in a language ecosystem. Also I haven’t seen mentioned the increased level of refactoring (and thus the reduced composability of modules that can’t be easily refactored) that non-generic code requires, which is what the Expression Problem I mentioned in my first post is about.

In the Generic Approaches section:

Package templates
This is an approach used by Modula-3, OCaml, SML (so-called “functors”), and Ada. Instead of specifying an individual type for specialization, the whole package is generic. You specialize the package by fixing the type parameters when importing.

I may be mistaken, but this seems not quite correct. ML functors (not to be confused with FP functors) can also return an output which remains type parametrised. There would be no way to use the algorithms within other generic functions otherwise, and thus generic modules wouldn’t be able to reuse (by importing with concrete types) other generic modules. This appears to be an attempt to oversimplify, which entirely misses the point of generics, module reuse, etc.

Rather, my understanding is that package (aka module) type parametrisation enables applying type parameter(s) to a grouping of struct, interface, and func.

More complicated type-system
This is the approach that Haskell and Rust take.
[…]
Cons:

  • hard to fit into simpler language (https://groups.google.com/d/msg/golang-nuts/smT_0BhHfBs/MWwGlB-n40kJ)

Quoting @ianlancetaylor in the linked document:

If you believe that, then it's worth pointing out that the core of the
map and slice code in the Go runtime is not generic in the sense of
using type polymorphism. It's generic in the sense that it looks at
type reflection information to see how to move and compare type
values. So we have proof by existence that it is acceptable to write
"generic" code in Go by writing non-polymorphic code that uses type
reflection information efficiently, and then to wrap that code in
compile-time type-safe boilerplate (in the case of maps and slices
this boiler plate is, of course, provided by the compiler).

And that’s what a compiler transpiling to Go from a superset of Go with generics added would output as Go code. But the wrapping would not be based on some delineation such as package, as that would lack the composability I already mentioned. The point being that there’s no short-cut to a good, composable generics type system. Either we do it correctly or we don’t do anything, because adding some non-composable hack that isn’t really generics is going to eventually create an inertia of patchwork, half-assed genericity and irregular corner cases and workarounds, making Go ecosystem code unintelligible.

It's also true that most people writing large complex Go programs have
not found a significant need for generics. So far it's been more like
an irritating wart--the need to write three lines of boilerplate for
each type to be sorted--rather than a major barrier to writing useful
code.

Yeah this has been one of the thoughts in my mind as whether going to a full blown typeclass system is justifiable. If all your libraries are based around it, then apparently it could be a beautiful harmony, but if we’re contemplating about the inertia of existing Go hacks for genericity, then perhaps the additional synergy gained is going to be low for a lot of projects?

But if a transpiler from a typeclass syntax emulated the existing manual way Go can model generics (Edit: which I just read that @andrewcmyers states is plausible), this might be less onerous and find useful synergies. For example, I realized that two-parameter typeclasses can be emulated with an interface implemented on a struct which emulates a tuple, or @jba mentioned an idea for employing inline interfaces in context. Apparently structs are structurally instead of nominally typed unless given a name with type? Also I confirmed that a method of an interface can input another interface, so afaics it may be possible to transpile from HKT in your sort example I wrote about in my prior post here. But I need to think this out more when I am not so sleepy.

I think it's fair to say that most of the Go team do not like C++
templates, in which one Turing complete language has been layered over
another Turing complete language such that the two languages have
completely different syntaxes, and programs in both languages are
written in very different ways. C++ templates serve as a cautionary
tale because the complex implementation has pervaded the entire
standard library, causing C++ error messages to become a source of
wonder and amazement. This is not a path that Go will ever follow.

I doubt anyone will disagree! The monomorphisation benefit is orthogonal to the downsides of a Turing complete generics metaprogramming engine.

Btw, the design error of C++ templates appears to me to be the same generative essence of the flaw of generative (as opposed to applicative) ML functors. The Principle of Least Power applies.


@ianlancetaylor wrote:

It is disappointing to see Go becoming more and more complex by adding built-in parameterized types.

Despite the speculation in this issue, I think this is extremely unlikely to happen.

I hope so. I firmly believe that Go should either add a coherent generics system or just accept that it will never have generics.

I think a fork to a transpiler is more likely to happen, partially because I have funding to implement it and am interested to do so. Yet I’m still analyzing the situation.

That would fracture the ecosystem though, but at least then Go can remain pure to its minimalist principles. Thus to avoid fracturing the ecosystem and allow for some other innovations I would like, I would probably not make it a superset and name it Zero instead.

@pciet wrote:

My vote is no to generalized application generics, yes to more built-in generic functions like append and copy that work on multiple base types. Perhaps sort and search could be added for the collection types?

Expanding this inertia is going to perhaps prevent a comprehensive generics feature from ever making it into Go. Those who wanted generics are likely to leave for greener pastures. @andrewcmyers reiterated this:

It ~is~ would be disappointing to see Go becoming more and more complex by adding built-in parameterized types. It would be better just to add the language support for programmers to write their own parameterized types.

@shelby3

Afaik, the only way to do heterogeneous containers/collections in Go now is to subsume all types to an empty interface{} which is throwing away the typing information and would I presume require casts and runtime type inspection, which sort of defeats the point of static typing.

See the wrapper pattern in comments above for static type checking of interface{} collections in Go.

Point being that there’s no short-cut to a good composable generics type system. Either we do it correctly or don’t do anything, because adding some non-composable hack that isn’t really generics…

Can you explain this more? For the collection types case having an interface defining the necessary generic behavior of contained items seems reasonable to write functions to.

@pciet this code is literally doing the exact thing @shelby3 was describing and considering an antipattern. Quoting you from earlier:

This gives in documentation and testing examples that type wrapper pattern (thanks @pierrre) for compile-time type safety and also has the reflect check for run-time type safety.

You are taking code that lacks type information and, on a type-by-type basis, adding casts and runtime type inspection using reflect. This is exactly what @shelby3 was complaining about. I tend to call this approach "monomorphization-by-hand" and is exactly the sort of tedious chore I think is best punted to a compiler.

This approach has a number of disadvantages:

  • Requires type-by-type wrappers, maintained either by hand or a go generate-like tool
  • (If done by hand instead of a tool) opportunity to make mistakes in the boilerplate which won't be caught until runtime
  • Requires dynamic dispatch instead of static dispatch, which is both slower and uses more memory
  • Uses runtime reflection rather than compile-time type assertions, which is also slow
  • Not composable: acts entirely on concrete types with no opportunities to use typeclass-like (or even interface-like) bounds on types, unless you handroll another layer of indirection for every non-empty interface you also want to abstract over

Can you explain this more? For the collection types case having an interface defining the necessary generic behavior of contained items seems reasonable to write functions to.

Now everywhere you want to use a bound instead of or in addition to a concrete type, you have to write the same typechecking boilerplate for every interface type too. It just further compounds the (perhaps combinatorial) explosion of static type wrappers you have to write.

There are also ideas that, as far as I know, simply cannot be expressed in Go's type system at all today, such as a bound on a combination of interfaces. Imagine we have:

type Foo interface {
    ...
}

type Bar interface {
    ...
}

How do we express, using a purely static type check, that we want a type that implements both Foo and Bar? As far as I know this isn't possible in Go (short of resorting to runtime checks that may fail, eschewing static type safety).

With a typeclass-based generics system we could express this as:

func baz<T Foo + Bar>(t T) {
    ...
}

@tarcieri

How do we express, using a purely static type check, that we want a type that implements both Foo and Bar?

simply like so:

type T interface {
    Foo
    Bar
}

func baz(t T) { ... }
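Fleshed out into a runnable sketch (the method sets of Foo and Bar are invented for illustration), the embedded-interface form gives the same static guarantee as the hypothetical Foo + Bar bound:

```go
package main

import "fmt"

type Foo interface{ Foo() string }
type Bar interface{ Bar() string }

// FooBar embeds both interfaces, so baz statically requires its
// argument to implement Foo and Bar; no runtime check is involved.
type FooBar interface {
	Foo
	Bar
}

type both struct{}

func (both) Foo() string { return "foo" }
func (both) Bar() string { return "bar" }

func baz(t FooBar) string { return t.Foo() + t.Bar() }

func main() {
	fmt.Println(baz(both{}))
}
```

A type implementing only one of the two interfaces would be rejected at compile time at the call site.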

@sbinet neat, TIL

Personally I consider runtime reflection a mis-feature, but that's just me... I can go into why if anyone is interested.

I think anyone implementing generics of any kind should read Stepanov's "Elements of Programming" several times first. It would avoid a lot of Not Invented Here problems and re-inventing the wheel. After reading that it should be clear why "C++ Concepts" and "Haskell Typeclasses" are the right way to do generics.

I see this issue seems active again
Here's a strawman proposal playground
https://go-li.github.io/test.html
just paste demo programs from here
https://github.com/go-li/demo

Thanks a lot for your evaluation of this single-parameter function generics.

We maintain the hacked gccgo, and this project would have been impossible without you, so we wanted to contribute back.

Also we look forward to whatever generics you adopt, keep up the great work!

@anlhord where are the implementation details about this? Where can one read about the syntax? What is implemented? What isn't implemented? What are the specifications for this implementations? What are the pros and cons for it?

The playground link contains the worst possible example of this:

package main

import "fmt"

func main() {
    fmt.Println("Hello, playground")
}

That code tells me nothing on how to use it and what can I test.

If you could improve those things, it would help us better understand what your proposal is and how it compares to the previous ones, and see how the other points raised here apply (or not) to it.

Hope this helps you understand the problems with your comment.

@joho wrote:

Would there be value on the academic literature for any guidance on evaluating approaches?

The only paper I've read on the topic is Do developers benefit from generic types? (paywall sorry, you might google your way to a pdf download) which had the following to say

Consequently, a conservative interpretation of the experiment
is that generic types can be considered as a tradeoff
between the positive documentation characteristics and the
negative extensibility characteristics.

I presume OOP and subclassing (e.g. classes in Java and C++) won't be considered seriously, both because Go already has a typeclass-like interface (without the explicit T generic type parameter) and Java is cited as what not to copy, and because many have argued they're an anti-pattern. Upthread I've linked to some of that argument. We could go deeper into that analysis if anyone is interested.

I haven't yet studied newer research such as the Genus system mentioned upthread. I'm wary of "kitchen sink" systems which attempt to mix so many paradigms (e.g. subclassing, multiple inheritance, OOP, trait linearization, implicits, typeclasses, abstract types, etc.), due to the complaints about Scala having so many corner cases in practice, although perhaps that will improve with Scala 3 (aka Dotty and the DOT calculus). I'm curious whether their comparison table is comparing to experimental Scala 3 or the current version of Scala.

So afaics, what remains are ML functors and Haskell typeclasses in terms of proven genericity systems, which significantly improve extensibility and flexibility as compared to OOP+subclassing.

I wrote down some of the private discussion @keean and I had about ML functor modules versus typeclasses. The highlights seem to be:

  • typeclasses _model an algebra_ (but without checked axioms) and implement each data type for each interface in only one way, thus enabling implicit selection of the implementations by the compiler without annotation at the call site.

  • Applicative functors have referential transparency whereas generative functors create a new instance on each instantiation, which means they’re not initialization order invariant.

  • ML functors are more powerful/flexible than typeclasses, but this comes at the cost of more annotations and potentially more corner-case interactions. And according to @keean they require dependent types (for associated types), which is a more complex type system. @keean thinks Stepanov's _expression of genericity as an algebra_ plus typeclasses is sufficiently powerful and flexible, so that seems to be the sweet spot for state-of-the-art, well-proven (in Haskell and now in Rust) genericity. However, the axioms aren't enforced by typeclasses.

  • I've suggested adding unions for heterogeneous containers with typeclasses to extend along another axis of the Expression Problem, although this requires immutability or copying (only for the cases where the heterogeneous extensibility is employed) which is known to have an O(log n) slowdown compared to unrestrained mutable imperativity.

@larsth wrote:

It could be interesting to have one or more experimental transpilers - a Go generics source code to Go 1.x.y source code compiler.

P.S. I doubt Go will adopt such a sophisticated typing system, but I'm contemplating a transpiler to existing Go syntax as I mentioned in my prior post (see the edit at the bottom). I want Go's goroutines (because they're fundamentally superior to callback-based promises) and its portability to both client (GopherJS, Joy, and now the work proceeding on a WASM compile target) and server. And I want a robust generics system along with those very desirable Go features. I've tried my best to find another PL ecosystem which meets my desired capabilities, and none does. Typeclass generics on Go appears to be what I want.

@bcmills wrote about his proposal about compile-time functions for genericity:

I've heard that same argument used to advocate for exporting interface types rather than concrete types in Go APIs, and the reverse turns out to be more common: premature abstraction overconstrains the types and hinders extension of APIs. (For one such example, see #19584.) If you want to rely on this line of argument I think you need to provide some concrete examples.

It's certainly true that type system abstractions necessarily forsake some degrees of freedom, and sometimes we have to bust out of those constraints with "unsafe" (i.e. in violation of the statically checked abstraction), yet that has to be traded off against the benefits of modular decoupling with succinctly annotated invariants.

When designing a system for genericity, we're likely wanting to increase the regularity and predictability of the ecosystem as one of the principal goals, especially if Go's core philosophy is taken into consideration (e.g. that average programmers are a priority).

The Principle of Least Power applies. The power/flexibility of the invariants "hidden in" compile-time functions for genericity has to be weighed against their capability to harm, for example, the readability of source code in the ecosystem (wherein modular decoupling is extremely important, because the reader shouldn't have to read a potentially unbounded quantity of code due to implicit transitive dependencies in order to understand a given module/package!). Implicit resolution of typeclass implementation instances has this problem if their algebra is not adhered to.

Sure, but that's already true for lots of implicit constraints in Go independent of any generic programming mechanism.

For example, a function may receive a parameter of interface type and initially call its methods sequentially. If that function later changes to call those methods concurrently (by spawning additional goroutines), the constraint "must be safe for concurrent use" is not reflected in the type system.

But afaik Go didn't attempt to design an abstraction to modularize those effects. Rust has such an abstraction (which btw I think is overkill pita/tsuris/limiting for some/most use cases; I argue for an easier single-threaded model abstraction, yet unfortunately Go doesn't support restricting all spawned goroutines to the same thread). And Haskell requires monadic control over effects due to enforcing pure functions for referential transparency.


@alercah wrote:

I think the biggest downside to inferred constraints is that they make it easy to use a type in a way that introduces a constraint without fully understanding it. In the best case, this just means that your users may run into unexpected compile-time failures, but in the worst case, this means you can break the package for consumers by introducing a new constraint inadvertently. Explicitly-specified constraints would avoid this.

Agreed. Being able to surreptitiously break code in other modules because the invariants of the types aren’t explicitly annotated is egregiously insidious.


@andrewcmyers wrote:

To be clear, I don't consider macro-style code generation, whether done with gen, cpp, gofmt -r, or other macro/template tools, to be a good solution to the generics problem even if standardized. It has the same problems as C++ templates: code bloat, lack of modular type checking, and difficulty debugging. It gets worse as you start, as is natural, building generic code in terms of other generic code. To my mind, the advantages are limited: it would keep life relatively simple for the Go compiler writers and it does produce efficient code — unless there is instruction cache pressure, a frequent situation in modern software!

@keean seems to agree with you.

@shelby3 thanks for the comments. Can you next time make the comments / edits directly in the document itself. It's easier to track where things need to be fixed and easier to ensure that all notes get a proper response.

The Overview section seems to imply that Java’s universal use of boxing references for instances ...

Added comment to make it clear, that it's not meant as a comprehensive list. It's mainly there so that people get the gist of different trade-offs. The full list of different approaches is further below.

Generic structures tend to accumulate features from all uses, resulting in increased compile times or code bloat or needing a smarter linker.
For typeclasses this is either not true or less of an issue, because interfaces only need to be implemented for data types which are supplied to functions which use those interfaces. Typeclasses are about late binding of implementation to interface, unlike OOP, which binds every data type to its methods for the class implementation.

That statement is about what happens to generic data-structures in the long term. In other words a generic data-structure often ends up collecting all different uses -- rather than having multiple smaller implementations for different purposes. Just as an example look at https://www.scala-lang.org/api/2.12.3/scala/collection/immutable/List.html.

It's important to note that just the "mechanical design" and "as much flexibility" are not sufficient to create a good "generics solution". It also needs good instructions on how things should be used and what to avoid, and consideration of how people end up using it.

Generic structures and the APIs that operate on them tend to be more abstract than purpose-built APIs ...

A counter argument which I think significantly ameliorates this concern is that the cognitive burden of learning an unbounded number of special case re-implementations of the essentially same generic algorithms, is unbounded...

Added a note about cognitive load of many similar APIs.

The special-case re-implementations aren't unbounded in practice. You will only see a fixed number of specializations.

This is not a valid con.

You may disagree with some of points, I disagree with quite a few of them to some degree, but I do understand their viewpoint and try to understand the problems people are faced with day-to-day. The goal of the document is to collect different opinions, not to judge "how annoying something is for someone".

However, the document does take a stance on "problems traceable to real-world problems", because abstract and fabricated problems in forums tend to descend into meaningless chatter without any understanding being built.

What we’re really getting at here is the regularity of a language and generic APIs help, not hurt that.

Sure in practice you might need this style of optimization only for less than 1% of the cases.

Alternative solutions:

Alternative solutions aren't meant as a substitute for generics, but rather as a list of potential solutions for different kinds of problems.

Package templates

I may be mistaken but this seems not quite correct. ML functors (not to be confused with FP functors) can also return an output which remains type parametrised.

Can you provide a clearer wording and if necessary split into two different approaches?

@egonelbre thanks also for responding so I can know on which points I need to clarify my thoughts further.

Can you next time make the comments / edits directly in the document itself.

Apologies, I wish I could comply, but I've never used the discussion features of Google Docs, don't have time to learn them, and I also prefer to be able to link to my discussions on GitHub for future reference.

Just as an example look at https://www.scala-lang.org/api/2.12.3/scala/collection/immutable/List.html.

The design of the Scala collections library was criticized by many people, including one of their former key team members. A comment posted to LtU is representative. Note I added the following to one of my prior posts in this thread to address this:

I'm wary of "kitchen sink" systems which attempt to mix so many paradigms (e.g. subclassing, multiple inheritance, OOP, trait linearization, implicits, typeclasses, abstract types, etc.), due to the complaints about Scala having so many corner cases in practice, although perhaps that will improve with Scala 3 (aka Dotty and the DOT calculus).

I don’t think Scala’s collection library would be representative of libraries created for a PL with only typeclasses for polymorphism. Afair, the Scala collections employ the anti-pattern of inheritance, which caused the complex hierarchies, combined with implicit helpers such as CanBuildFrom which exploded the complexity budget. And I think if @keean’s point is adhered to about Stepanov’s _Elements of Programming_ being an algebra, an elegant collections library could be created. It was the first alternative I had seen to a functor (FP) based collections library (i.e. not copying Haskell) also based on math. I want to see this in practice, which is one reason I’m collaborating/discussing with him on design of a new PL. And as of this moment, I’m planning to have that language initially transpile to Go (although I’ve been for years trying to find a way to avoid doing so). So hopefully we’ll be able to experiment soon to see how it works out.

My perception is that the Go community/philosophy would rather wait to see what works in practice and adopt it later once proven, than to rush and pollute the language with failed experiments. Because as you reiterated, all these abstract claims are not so constructive (except to maybe PL design theorists). Also it’s probably implausible to design a coherent generics system by committee.

It also needs good instructions on how things should be used and what to avoid, and consideration of how people end up using it.

And I think it will help not to mix so many different paradigms available to the programmer in the same language. It's apparently not necessary (@keean and I need to prove that claim). I think we both subscribe to the philosophy that the complexity budget is finite, and that what you leave out of a PL is as important as the features included.

However, the document does take a stance on "problems traceable to real-world problems", because abstract and fabricated problems in forums tend to descend into meaningless chatter without any understanding being built.

Agreed. And it’s also difficult for everyone to follow the abstract points. The devil is in the details and actual results in the wild.

Sure in practice you might need this style of optimization only for less than 1% of the cases.

Go already has interface for genericity, so that can handle the cases where we don't need parametric polymorphism on the type T for the instance of the interface supplied by the call site.

I think I read somewhere, maybe it was upthread, the argument that the standard library of Go actually suffers from inconsistent use of the most up-to-date idioms. I don't know if that is true, because I'm not experienced with Go yet. The point I'm making is that the generics paradigm chosen infects all the libraries. So yes, as of now you can claim only 1% of the code would need it, because there's already inertia in idioms that avoid the need for generics.

You may be correct. I also have my skepticism about how much I will use any particular language feature. I think experimentation to find out is the way I will proceed. PL design is an iterative process; the problem is that the inertia that develops makes it difficult to iterate. So I guess Rob Pike is correct in the video where he suggests writing programs that write code for programs (meaning go write generation tools and transpilers) to experiment and test ideas.

When we can show that a particular set of features are superior in practice (and hopefully also popularity of use) to those currently in Go, then we can perhaps see some consensus form around adding them to Go. I encourage others to also create experimental systems that transpile to Go.

Can you provide a clearer wording and if necessary split into two different approaches?

I add my voice to those who would discourage the attempt to put some overly simplistic templating feature in Go and claim that it is generics. IOW, I think a properly functioning generics system that won't end up being bad inertia is fundamentally incompatible with the desire to have an excessively simplistic design for generics. Afaik, a generics system needs a well-thought-out and well-proven holistic design.

Echoing what @larsth wrote, I encourage those with serious proposals to first build a transpiler (or implement in a fork of the gccgo frontend) and then experiment with the proposal so we could all better understand its limitations. I was encouraged to read upthread that @ianlancetaylor didn't think a pollution of bad inertia would be added to Go.

As for my specific complaint about the package-level parametrisation proposal, my suggestion for whoever is proposing it: please contemplate making a compiler we can all use to play with, and then we can all talk about examples of what we like and don't like about it. Otherwise, we're talking past each other, because maybe I don't even understand the proposal correctly as described abstractly. I must not understand the proposal, because I don't understand how a parametrised package can be reused in another package which is also parametrised. IOW, if a package takes parameters, then it needs to instantiate other packages with parameters also. But it seemed the proposal was stating that the only way to instantiate a parametrised package was with a concrete type, not type parameters.

Apologies for being so long-winded. I want to make sure I'm not misunderstood.

@shelby3 ah, I then misunderstood the initial complaint. First I should make clear that the sections in "Generics Approaches" are not concrete proposals. They are approaches or, in other words, bigger design decisions that one might take in a concrete generics approach. However, the groupings are strongly motivated by existing implementations or concrete/informal proposals. Also, I suspect there are at least 5 big ideas still missing from that list.

For the "package templates" approach there are two variations of it (see the linked discussions in document):

  1. "interface" based generic packages,
  2. explicitly generic packages.

For 1. it doesn't require the generic package to do anything special -- for example, the current container/ring would become usable for specialization. Imagine "specialization" here as replacing all instances of the interface in the package with the concrete type (and ignoring circular imports). When that package itself specializes another package, it can use the "interface" as the specialization -- it follows that this use will be specialized too.

For 2. you can look at them in two ways. One is recursive concrete specialization at each import -- similar to templating/macroing; at no point would there be a "partially applied package". Of course, it can also be seen from the functional side: the generic package is a partial application with parameters, and then you specialize it.

So, yes, you can use one parameterised package in another.

Echoing what @larsth wrote, I encourage those with serious proposals to first build a transpiler (or implement in a fork of the gccgo frontend) and then experiment with the proposal so we could all better understand its limitations.

I know this wasn't explicitly directed at that approach, but it does have 4 different prototypes to test out the idea. Of course, they are not full transpilers, but they are sufficient to test a few of the ideas. E.g. I'm not sure whether anyone has implemented the "using one parameterised package from another" case.

Parameterised packages sound a lot like ML modules (and ML functors, if the parameters can be other packages). There are two ways these can work: "applicative" or "generative". An applicative functor is like a value, or a type-alias. A generative functor must be constructed, and each instance is different. Another way to think about this is that for a package to be applicative it must be pure (that is, no mutable variables at the package level). If there is state at the package level, it must be generative, as that state needs to be initialised, and it matters which "instance" of a generative package you actually pass as a parameter to other packages, which in turn must be generative. For example, Ada packages are generative.

The problem with the generative package approach is that it creates lots of boilerplate, where you are instantiating packages with parameters. You can look at Ada generics to see what this looks like.

Type classes avoid this boilerplate by implicitly selecting the typeclass based only on the types used in the function. You can also view type-classes as constrained overloading with multiple dispatch, where the overload resolution almost always occurs statically at compile time, with exceptions for polymorphic recursion and existential types (which are essentially variants you cannot cast out of; you can only use the interfaces the variant conforms to).

An applicative functor is like a value, or a type-alias. A generative functor must be constructed, and each instance is different. Another way to think about this is that for a package to be applicative it must be pure (that is, no mutable variables at the package level). If there is state at the package level, it must be generative, as that state needs to be initialised, and it matters which "instance" of a generative package you actually pass as a parameter to other packages, which in turn must be generative. For example, Ada packages are generative.

Thank you for the exact terminology, I need to think how to integrate these ideas into the document.

Also, I cannot see a reason why you couldn't have an "automatic type-alias to a generated package" -- in a sense something between "applicative functor" and "generative functor" approach. Obviously, when the package does contain some form of state, it can get complicated to debug and understand.

The problem with the generative package approach is that it creates lots of boilerplate, where you are instantiating packages with parameters. You can look at Ada generics to see what this looks like.

As far as I see, it would create less boilerplate than C++ templates but more than type-classes. Do you have a good real-world program for Ada that demonstrates the problem? _(By real-world, I mean code that someone is/was using in production.)_

Sure, have a look at my Ada go-board: https://github.com/keean/Go-Board-Ada/blob/master/go.adb

Although this is a fairly loose definition of production, the code is optimised, performs as well as the C++ version, it's open-source, and the algorithm has been refined over several years. You could look at the C++ version too: https://github.com/keean/Go-Board/blob/master/go.cpp

This shows (I think) that Ada generics are a neater solution than C++ templates (but that is not hard), on the other hand it is hard to do the fast access to the data structures in Ada due to the restrictions on returning a reference.

If you want to look at a package-generics system for an imperative language, I think Ada is one of the best to look at. It's a shame they decided to go multi-paradigm and add all the OO stuff to Ada. Ada is an enhanced Pascal, and Pascal was a small and elegant language. Pascal plus Ada generics would have still been quite a small language, but would have been much better in my opinion. Because the focus of Ada shifted to an OO approach, good documentation and examples of how to do the same things with generics seem hard to find.

Although I think typeclasses have some advantages, I could live with Ada-style generics. There are a couple of issues that hold me back from using Ada more widely: I think it gets values/objects wrong (I think very few languages get this right, 'C' being one of the only ones), it is hard to work with pointers (access variables) and to create safe-pointer abstractions, and it does not provide a way to use packages with runtime polymorphism (it provides an object model for this, but that adds a whole new paradigm instead of trying to find a way to have runtime polymorphism using packages).

The solution to runtime-polymorphism is to make packages first-class so instances of package-signatures can be passed as function arguments, this unfortunately requires dependent types (see the work done on Dependent Object Types for Scala to clear up the mess they made with their original type-system).

So I think package generics can work, but it took Ada decades to deal with all the edge cases, so I would look at a production generics system to see what refinements production use produced. However, Ada still falls short because its packages are not first-class and cannot be used for runtime polymorphism, and this would need to be addressed.

@keean wrote:

Personally I consider runtime reflection a mis-feature, but that's just me... I can go into why if anyone is interested.

Type erasure enables "Theorems for free", which has practical implications. Writable (and maybe even readable, due to transitive relations to imperative code?) runtime reflection makes it impossible to guarantee referential transparency in any code, and thus certain compiler optimizations aren't possible and type-safe monads aren't possible. I realize Rust doesn't even have an immutability feature yet. OTOH, reflection enables other optimizations that wouldn't otherwise be possible if they couldn't be statically typed.

I had also stated upthread:

And that’s what a compiler transpiling from a superset of Go with generics added, would output as Go code. But the wrapping would not be based on some delineation such as package, as that would lack the composability I already mentioned. Point being that there’s no short-cut to a good composable generics type system. Either we do it correctly or don’t do anything, because adding some non-composable hack that isn’t really generics is going to create eventually a clusterfuck inertia of patchwork half-assed genericity and irregularity of corner cases and workarounds making Go ecosystem code unintelligible.


@keean wrote:

[…] for a package to be applicative it must be pure (that is no mutable variables at the package level)

And no impure functions may be employed to initialize immutable variables.

@egonelbre wrote:

So, yes, you can use one parameterised package in another.

What I apparently had in mind was “first-class parametrised packages” and the commensurate runtime (aka dynamic) polymorphism that @keean subsequently has mentioned, because I presumed the parametrised packages were proposed in lieu of typeclasses or OOP.

EDIT: but there are two possible meanings for “first-class” modules: modules as first-class values such as in Successor ML and MixML distinguished from modules as first-class values with first-class types as in 1ML, and the necessary tradeoff in module recursion (i.e. mixing) between them.

@keean wrote:

The solution to runtime-polymorphism is to make packages first-class so instances of package-signatures can be passed as function arguments, this unfortunately requires dependent types (see the work done on Dependent Object Types for Scala to clear up the mess they made with their original type-system).

What do you mean by dependent types? (EDIT: I presume now he meant “non-value-dependent” typing, i.e. “functions whose result type depends on the [runtime?] argument[’s type]”) Certainly not dependent on the values of for example int data, such as in Idris. I think you’re referring to dependently typing (i.e. tracking) the type of the values representing instantiated module instances down the call hierarchy so that such polymorphic functions can be monomorphised at compile-time? Does the runtime polymorphism enter due to such monomorphised types being the existential type bound for dynamic types? F-ing Modules demonstrated that “dependent” types aren’t absolutely necessary for modeling ML modules in system Fω. Have I oversimplified if I presume @rossberg reformulated the typing model to remove all monomorphisation requirements?

The problem with the generative package approach is that it creates lots of boilerplate […]
Type classes avoid this boilerplate by implicitly selecting the typeclass based on the types used in the function only.

Isn't there also boilerplate with applicative ML functors? There's no known unification of typeclasses and ML functors (modules) which retains the brevity without introducing restrictions that are necessary to prevent (c.f. also) the inherent anti-modularity of the global uniqueness criterion of typeclass implementation instances.

Typeclasses can only implement each type in one way and otherwise require newtype wrapper boilerplate to overcome the limitation. Here's another example of multiple ways to implement an algorithm. Afaics, @keean worked around this limitation in his typeclass sort example by overriding implicit selection with an explicitly selected Relation employing wrapping data types to name different relations generically on the value type, but I doubt whether such tactics generalize to all variants of modularity. However, a more generalized solution (which can aid in ameliorating the modularity problem of global uniqueness, possibly combined with an orphan restriction as an improvement to the proposed versioning for orphan resolution by employing a non-default for implementations which could be orphaned) may be to have an extra type parameter implicitly on all typeclass interfaces, which when not specified defaults to the normal implicit matching, but when specified (or when not specified doesn't match any other2) selects the implementation that has the same value in its comma-delimited list of custom values (so this is more generalized, modular matching than naming a specific implementation instance). The comma-delimited list is so that an implementation can be differentiated in more than one degree-of-freedom, such as if it has two orthogonal specializations. The desired non-default specialization could be specified either at the function declaration or the call site. At the call site, e.g. f<non-default>(…).

So why would we need parametrised modules if we have typeclasses? Afaics only for (← important link to click) substitution, because reusing typeclasses for that purpose doesn't fit well, in that for example we want a package module to be able to span multiple files and we want to be able to implicitly open the contents of the module into scope without additional boilerplate. So perhaps going forward with a _syntactical-only_ substitution-only (not first-class) package parametrisation is a reasonable first step that can address module-level genericity while remaining open to compatibility with, and non-overlap of functionality with, typeclasses if they're added later for function-level genericity. However, there's an issue whether these macros are for example typed or just syntactical (aka "preprocessor") substitution. If typed, then modules duplicate functionality of typeclasses, which is undesirable both from the standpoint of minimizing the overlapping paradigms/concepts of the PL and from that of potential corner cases due to interactions of the overlap (such as those when attempting to offer both ML functors and typeclasses). Typed modules are more modular, because modifications to any encapsulated implementation within the module that don't modify the exported signatures can't cause consumers of the module to become incompatible (other than the aforementioned anti-modularity problem of typeclass overlapping implementation instances). I'm interested to read @keean's thoughts on this.

[…] with exceptions for polymorphic recursion and existential types (which are essentially variants you cannot cast out of; you can only use the interfaces the variant conforms to).

To help other readers: by "polymorphic recursion", I think he refers to higher-ranked types, e.g. parametrised callbacks set at runtime, where the compiler can't monomorphise the body of the callback function because it's not known at compile time. The existential types are, as I mentioned before, equivalent to Rust's trait objects, which are one way to attain heterogeneous containers with a later binding in the Expression Problem than class subclassing with virtual inheritance, but not as open to extension in the Expression Problem as unions with immutable data structures or copying3, which have an O(log n) performance cost.

1 Which doesn’t require HKT in the above example, because SET doesn’t require that the elem type be a type parameter of the generic type of set, i.e. it’s not set<elem>.

2 Yet if there existed more than one non-default implementation and no default implementation, then the selection would be ambiguous so the compiler should generate an error. This could be rectified by allowing one implementation to include default on its list of other specializations.

3 Note that mutating with immutable data structures doesn’t necessarily require copying the entire data structure, if the data structure is smart enough to isolate history, such as a singly-linked list.

Implementing func pick(a CollectionOfT, count uint) []T would be a good example application of generics (from https://github.com/golang/go/issues/23717):

// pick returns a slice (len = n) of pseudorandomly chosen elements 
// in unspecified order from c which is an array, slice, or map.
for i, e := range pick(c, n) {

The interface{} approach here is complicated.

I've commented a few times on this issue that one of the major problems with the C++ template approach is its reliance on overload resolution as the mechanism for compile-time metaprogramming.

It seems that Herb Sutter has come to the same conclusion: there is now an interesting proposal for compile-time programming in C++.

It has some elements in common with both the Go reflect package and my earlier proposal for compile-time functions in Go.

Hi.
I've written a proposal for generics with constraints for Go. You can read it here. Perhaps it can be added as a document of 15292. It's mostly about constraints and reads as an amendment to Taylor's Type Parameters in Go.
It is intended as an example of a workable (I believe) way of doing 'type safe' generics in Go; hopefully it can add something to this discussion.
Please note, that while I have read (most of) this very long thread, I have not followed all links in it, so others may have made similar suggestions. If that's the case, I apologize.

br. Chr.

Syntax bikeshedding:

constraint[T] Array {
    :[#]T
}

could be

type [T] Array constraint {
    _ [...]T
}

which looks more like Go to me. :-)

Several elements here.

One thing is replacing : with _ and replace # with ....
I suppose you could do that if it's preferred.

Another thing is replacing constraint[T] Array with type[T] Array constraint.
That would seem to indicate that constraints are types, which I don't think is correct. Formally, a constraint is a _predicate_ on the set of all types, ie. a mapping from the set of types to the set {true, false}.
Or if you prefer, you can think of a constraint as simply _a set of_ types.
It is not _a_ type.

br. Chr.

Why couldn't that constraint just be an interface?

type [T io.Writer] List struct { 
    element T; 
    next *List[T];
}

An interface would be a bit more useful as a constraint with the following proposal: #23796 which would in turn also give some merit to the proposal itself.

Also, if the proposal for sum types is accepted in some form (#19412), then those should be used to constraint the type.

Though I believe the constraint keyword, or something like it, should be added, in order to avoid repeating large constraints and to prevent errors due to absentmindedness.

Finally, for the bikeshedding portion, I think constraints should be listed at the end of a definition, to avoid overcrowding (rust seems like it has a good idea here):

// similar to the map[T]... syntax
// also no constraint
type List[T] struct {
    element T
    next *List[T]
}

// with constraint
type List[T] struct {
    element T
    next *List[T]
} where T is io.Writer | encoding.BinaryMarshaler

type BigConstraint constraint {
     io.Writer
     SomeFunc() int
     AnotherFunc()
     AField int64
     StringField string
}


// with predefined constraint
type List[T, U] struct {
    element T
    val U
    next *List[T, U]
} where T is BigConstraint | encoding.BinaryMarshaler,
    U is io.Reader

@urandom: I think it's one big advantage for Go to have interfaces implemented implicitly instead of explicitly. @surlykke's proposal in this comment is, I think, much closer to other Go syntax in spirit.

@surlykke I apologize if the proposal has the answer to any of these.

A use of generics is to allow built-in style functions. How would you implement application-level len with this? The memory layout is different for each allowed input, so how is this better than an interface?

The “pick” described earlier has a similar problem where indexing into a map vs indexing into a slice are different. In the map case if there was a conversion to slice first then the same picking code can be used, but how is this done?

Collections is another use:

// An unordered collection of comparable items.
type [T Comparable] Set []T

func (a Set) Diff(from Set) Set {
    // the implementation is the same as one with
    //     type Comparable interface { Equal(Comparable) bool }
    //     type Set []Comparable
}

// compile error
d := Set[int]{1, 2}.Diff(Set[string]{"abc", "def"})

// Go 1, easier to read but runtime error
d := Set{1, 2}.Diff(Set{"abc", "def"})

For the collection type case I’m not convinced this is a big win over Go 1 generics since there are readability tradeoffs.

I agree that type parameters must have some form of constraints. Otherwise we'll be repeating the mistakes of C++ templates. The question is, how expressive should the constraints be?

At one end, we could just use interfaces. But as you point out, a lot of useful patterns can't be captured that way.

Then there is your idea, and similar ones, that try to carve out a set of useful constraints and provide new syntax for expressing them. Aside from the problem of adding yet more syntax, it's not clear where to stop. As you point out, your proposal captures many patterns, but by no means all.

At the other extreme is the idea I propose in this doc. It uses Go code itself as the constraint language. You can capture virtually any constraint that way, and it requires no new syntax.

@jba
It's a bit verbose. Maybe if Go had a lambda syntax it would be a bit more palatable. On the other hand, it seems the biggest problem it is trying to solve is checking whether a type supports an operator of some sort. It might be easier if Go just had predefined interfaces for various operators:

func equal[T](x, y T) bool
    where T is runtime.Equitable {
    return x == y
}

func copyable[T](x, y []T) int {
    return copy(x, y)
}

or something along these lines.

If the issue is with extending builtins, then maybe the problem lies in the language's way of creating adapter types. For instance, isn't the bloat associated with sort.Interface the whole reason behind https://github.com/golang/go/issues/16721 and sort.Slice?
Looking at https://github.com/golang/go/issues/21670#issuecomment-325739411, @Sajmani's idea of having interface literals might be the ingredient necessary for type parameters to easily work with builtins.
Look at the following definition of Iterator:

type [T] Iterator interface {
    Next() (elem T, done bool)
}

If print is a function that simply iterates over a list and prints its contents, then the following example uses interface literals to construct a satisfying interface for print.

func [T] SliceIterator(slice []T) Iterator[T] {
    i := 0
    return Iterator[T]{
        Next: func() (elem T, done bool) {
            v := slice[i]
            if i+1 == len(slice) {
                return v, true
            }
            i++
            return v, false
        },
    }
}

func main() {
    arr := []int{1,2,3,4,5}
    // SliceIterator works for an arbitrary slice
    print(SliceIterator(arr))
}

One can already do this if they globally declare types whose sole responsibility is to satisfy an interface. However, this conversion from a function to a method makes interfaces (and hence "constraints") easier to satisfy. We don't pollute top-level declarations with simple adapters (like "widgetsByName" in sorting).
User-defined types obviously can also take advantage of this feature as well, as seen by this LinkedList example:

type ListNode struct {
    v string
    next *ListNode
}
func (l *ListNode) Iterator() Iterator[string] {
    ptr := l
    return Iterator[string]{
        Next: func() (elem string, done bool) {
            v := ptr.v
            if ptr.next == nil {
                return v, true
            }
            ptr = ptr.next
            return v, false
        },
    }
}

@geovanisouza92 : Constraints as I've described them are more expressive than interfaces (fields, operators). I did briefly consider extending interfaces instead of introducing constraint, but I think that would be a much too intrusive change to an existing element of Go.

@pciet I'm not quite sure what you mean by 'application level'. Go has a built-in len function which may be applied to an array, pointer to array, slice, string or channel, so, in my proposal, if a type parameter is constrained to have one of these as its underlying type, len may be applied to it.

@pciet About your example with Comparable constraint/interface. Note that if you define (the interface variant):

type Comparable interface { Equal(Comparable) bool }
type Set []Comparable

Then you can put anything implementing Comparable into Set. Compare that to:

constraint [T] Comparable { Equal(t T) bool }
type [T Comparable[T]] Set []T
...
type FooSet Set[Foo] // Where Foo satisfies constraint Comparable

where you can only put values of type Foo into FooSet. That is stronger type safety.

@urandom Again, I'm not a fan of:

type MyConstraint constraint {....}

as I do not believe a constraint is a type. Also, I would definitely not allow:

var myVar MyConstraint

which makes no sense to me. Another indication that constraints are not types.

@urandom On bikeshedding: I believe Constraints should be declared just next to the type parameters. Consider an ordinary function, defined like this:

func MyFunc(i) {
     if (i>0) fmt.Println("It's positive")
} with i being an integer

You couldn't read this from left-to-right. Instead you'd first read func MyFunc(i) to determine that it's a function definition. Then you'd have to jump to the end to figure out what i is, and then back to the function body. Not ideal, IMO. And I don't see how generic definitions should be any different.
But obviously, this discussion is orthogonal to the one about whether Go should have constraints or generics.

@surlykke
I'm fine with it not being a type. The most important thing is that they have a name so that they can be referred to by multiple types.

For functions, if we follow the rust syntax, it would be:

func MyFunc[I](i I) int64
     where I is being an integer {
   return 42
}

So it will not hide things like the name of the function or its parameters, and you would not need to go to the end of the function body to see what the constraint on the generic types are

@surlykke for posterity, could you locate where your proposal could be added to:
https://docs.google.com/document/d/1vrAy9gMpMoS3uaVphB32uVXX4pi-HnNjkMEgyAHX4N4

It's a great place to "compile" all the proposals.

Another question I pose to you all is how one would deal with specialization of different instantiations of a generic type. In the type-params proposal, the way to do it is to generate the same templated function for each instantiated type, replacing the type parameter with the type name. In order to have separate functionality for different types, perform a type-switch on the type-parameter.

Is it safe to assume that when the compiler sees a type-switch on a type-parameter, it is allowed to generate a separate implementation for each assertion? Or is that too involved an optimization, since nested type-parameters in the asserted structs may create a parametric aspect to the code generation?

In the compile-time functions proposal, because we know that these declarations are generated at compile-time, a type-switch doesn't pose any runtime cost.

A practical scenario: if we consider the case of the math/bits package, performing a type-assertion to call OnesCount for each uintXX would defeat the point of having an efficient bit-manipulation library. If, however, the type-assertions were transformed into the following

func [T] OnesCount(x T) int {
    switch x.(type) {
    case uint:
        // separate uint functionality...
    case uint8:
        // separate uint8 functionality...
    case uint16:
        // separate uint16 functionality...
    case uint32:
        // separate uint32 functionality...
    case uint64:
        // separate uint64 functionality...
    }
}

A call to

var x uint8 = 255
bits.OnesCount(x)

would then call the following generated function (name is not important here):

func $OnesCount_uint8(x uint8) int {
    // separate uint8 functionality...
}

@jba That's an interesting proposal, but to me it mostly highlights the fact that the definition of the parametric function itself usually suffices to define its constraints.

If you're going to use “operators used in a function” as the constraints, then what advantage does it buy you to write a second function containing a subset of the operators used in the first?

@bcmills One of them is a spec and the other is the implementation. It's the same advantage as static typing: you can catch errors earlier.

If the implementation is the spec, à la C++ templates, then any change to the implementation potentially breaks dependents. That may not be discovered until much later, when the dependents recompile, and the discoverers have no context to understand the error message. With the spec in the same package, you can detect breakage locally.

@mandolyte I'm not quite sure where to add it - maybe a paragraph under 'Generics approaches' named 'Generics with constraints' ?
The document does not seem to contain much about constraining type parameters, so if you added a paragraph where my proposal would be mentioned, then other approaches to constraints could be listed there as well.

@surlykke the general approach on the document is to make a change what feels right and I'll try to accept, incorporate and organize it with the rest of document. I added a section here. Feel free to add things I missed.

@egonelbre That's very nice. Thanks!

@jba
I like your proposal, but I think it is way too heavy for Go. It reminds me a lot of templates in C++. The main problem, I think, is that you can write really complex code with it.
Deciding whether two generic interface instances overlap, because their constrained sets of types overlap, would be a hard task, causing slower compile times. The same goes for code generation.

I think that the proposed constraints are more lightweight for Go. From what I've heard, constraints aka typeclasses can be implemented orthogonally to the type system of a language.

I have to strongly agree that we should not go with implicit constraints from the body of the function. They're widely considered one of C++ templates' most significant misfeatures:

  • The constraints are not easily visible. While godoc could theoretically enumerate all the constraints into the documentation, they aren't visible in the source code except implicitly.
  • Because of that, it's possible to accidentally include an additional constraint that is only visible when you try to use the function in a way that isn't expected. By requiring explicit specification of the constraints, the programmer must know exactly what constraints they are introducing.
  • It makes the decision about what kinds of constraints are allowed much more ad-hoc. For instance, am I allowed to define the following function? What are the actual constraints on T, U, and V here? If we require the programmer to explicitly specify constraints, then we are conservative in the kind of constraints we allow (letting us expand that slowly and deliberately). If we try to be conservative anyway, how do we give an error message for a function like this? "Error: cannot assign u.v() to T because it imposes an illegal constraint"?
func[T, U, V] Foo(u U, v V) {
  var t T = u.v(V) + 1;
}
  • Calling generic functions in other generic functions makes the above situations worse, as you now need to look over all the callees' constraints in order to understand the constraints of the function you're writing or reading.
  • Debugging can be very difficult, because error messages must either not provide enough information to find the source of the constraint, or they must leak internal details of the function. For instance, if F has some requirement on a type T, and the author of F is trying to figure out where that requirement came from, they would like the compiler to alert them to exactly which statement gives rise to the constraint (especially if it comes from a generic callee). But a user of F doesn't want that information and, indeed, if it is included in the error messages, then we are leaking implementation details of F in error messages from its users, which are a terrible user experience.

@alercah

For instance, am I allowed to define the following function?

func[T, U, V] Foo(u U, v V) {
  var t T = u.v(V) + 1;
}

No. u.v(V) is a syntax error because V is a type, and the variable t is unused.

However, you could define this function, which may be the one you intended:

func[T, U, V] Foo(u U, v V) {
    var _ T = u.v(v) + 1;
}

What are the actual constraints on T, U, and V here?

  • The type V is unconstrained.
  • The type U must have a method v that accepts a single parameter or varargs of some type assignable from V, because u.v is invoked with a single argument of type V.

    • U.v could be a field of function type, but arguably that should imply a method; see #23796.

  • The type returned by U.v must be numeric, because the constant 1 is added to it.
  • The return type of U.v must be assignable to T, because u.v(…) + 1 is assigned to a variable of type T.
  • The type T must be numeric, because the return type of U.v is numeric and assignable to T.

(An aside: you could argue that U and V should have the constraint “copyable” because arguments of those types are passed by value, but the existing, non-generic type system does not enforce that constraint either. That's a matter for a separate proposal.)

If we require the programmer to explicitly specify constraints, then we are conservative in the kind of constraints we allow (letting us expand that slowly and deliberately).

Yes, that's true: but omitting a constraint would be a serious defect whether those constraints are implicit or not. IMO, the more important role of constraints is to resolve ambiguity. For example, in the above constraints, the compiler must be prepared to instantiate u.v as either a single-argument or variadic method.

The most interesting ambiguity occurs for literals, where we need to disambiguate between struct literals and indexed (array/slice) literals:

func[T] Foo() (t T) {
    x := 42;
    t = T{x: "some string"}  // Is x an index, or a field name?
    _ = x
}

If we try to be conservative anyway, how do we give an error message for a function like this? "Error: cannot assign u.v() to T because it imposes an illegal constraint"?

I'm not quite sure what you're asking, since I don't see conflicting constraints for this example. What do you mean by an “illegal constraint”?

Debugging can be very difficult, because error messages must either not provide enough information to find the source of the constraint, or they must leak internal details of the function.

Not every relevant constraint can be expressed by the type system (see also https://github.com/golang/go/issues/22876#issuecomment-347035323). Some constraints are enforced by run-time panics; some are enforced by the race detector; the most dangerous constraints are merely documented and not detected at all.

All of those “leak internal details” to some degree. (See also https://xkcd.com/1172/.)

For instance, if […] the author of F is trying to figure out where that requirement came from, they would like the compiler to alert them to exactly which statement gives rise to the constraint (especially if it comes from a generic callee). But a user of F doesn't want that information[.]

Maybe? That is how API authors use type annotations in type-inferred languages such as Haskell and ML, but it also leads down a rabbit-hole of deeply parametric (“higher-order”) types in general.

For example, suppose that you have this function:

func [F, Arg, Result] InvokeAsync(f F, x Arg) (<-chan Result) {
    c := make(chan Result, 1)
    go func() { c <- f(x) }()
    return c
}

How do you express the explicit constraints on the type Arg? They depend on the specific instantiation of F. That sort of dependency seems to be missing from many of the recent proposals for constraints.

No. u.v(V) is a syntax error because V is a type, and the variable t is unused.

However, you could define this function, which may be the one you intended:

Yes, that was the intent, my apologies.

The type T must be numeric, because the return type of U.v is numeric and assignable to T.

Should we really consider this a constraint? It's deducible from the other constraints, but is it more or less useful to call this a distinct constraint? Implicit constraints ask this question in a way explicit constraints do not.

Yes, that's true: but omitting a constraint would be a serious defect whether those constraints are implicit or not. IMO, the more important role of constraints is to resolve ambiguity. For example, in the above constraints, the compiler must be prepared to instantiate u.v as either a single-argument or variadic method.

I meant "constraints we allow" as in the language. With explicit constraints, it's much easier for us to decide what kinds of constraints we're willing to allow users to write, rather than just saying the constraint is "whatever makes things compile". For instance, my example Foo above actually involves an implicit additional type separate from T, U, or V, since we must consider the return type of u.v. This type doesn't explicitly get referred to in any way in Foo's declaration; the properties it must have are completely implicit. Likewise, are we willing to permit higher-ranked (forall) types? I can't come up with an example off the top of my head, but I also can't convince myself that you can't implicitly write a higher-ranked type bound.

Another example is whether we should allow a function to take advantage of overloaded syntax. If an implicitly-constrained function does for i := range t for some t of generic type T, the syntax works out if T is any array, slice, channel, or map. But the semantics are quite different, especially if T is a channel type. For instance, if t == nil (which can happen unless T is an array), then the iteration either does nothing, since there are no elements in a nil slice or map, or blocks forever, since that's what receives on nil channels do. This is a big footgun waiting to happen. Similar is doing m[i] = ...; if I intend for m to be a map, I will need to guard against it actually being a slice, as the code could panic on an out-of-range assignment otherwise.

In fact, I think this lends itself to another argument against implicit constraints: API authors might write artificial statements just to add constraints. For instance, for _, _ = range t { break } prevents a channel while still allowing maps, slices, and arrays; x = append(x) forces x to have slice type. var _ = make(T, 0) allows slices, maps, and channels but not arrays. There will be a recipe book of how to implicitly add constraints so that someone can't call your function with a type that you haven't written correct code for. I can't even think of a way to write code that only compiles for map types unless I know the key type as well. And I don't think this is hypothetical at all; maps and slices behave quite differently for most applications.

I'm not quite sure what you're asking, since I don't see conflicting constraints for this example. What do you mean by an “illegal constraint”?

I mean a constraint that is not permitted by the language, such as if the language decides to disallow higher-ranked constraints.

Not every relevant constraint can be expressed by the type system (see also #22876 (comment)). Some constraints are enforced by run-time panics; some are enforced by the race detector; the most dangerous constraints are merely documented and not detected at all.

All of those “leak internal details” to some degree. (See also https://xkcd.com/1172/.)

I don't really see how #22876 comes into this; that's trying to use the type system to express a different kind of constraint. It will always be true that we can't express some constraints on values, or on programs, even with a type system of arbitrary complexity. But we're only talking about constraints on types here. The compiler needs to be able to answer the question "Can I instantiate this generic with type T?" which means that it must understand the constraints, whether they're implicit or explicit. (Note that some languages, like C++ and Rust, cannot decide this question in general because it can depend on arbitrary computation and thus devolves to the Halting Problem, but they still express the constraints that need to be satisfied.)

What I mean is more like "what error message should the following example give?"

func [U] DirectlyConstrained(t U) {
    t.DoSomething();
}
func [T] IndirectlyConstrained(t T) {
    DirectlyConstrained(t);
}
func Illegal() {
    IndirectlyConstrained(4);
}

We can say Error: cannot call IndirectlyConstrained with [T = int]; T must have a method with signature func (t T) DoSomething(). This error message is helpful to a user of IndirectlyConstrained, because it clearly sets out the constraint that they are missing. But it provides no information to someone trying to debug why IndirectlyConstrained has that constraint, which is a big usability problem if it's a large function. We could add Note: this constraint exists on T because IndirectlyConstrained calls DirectlyConstrained with [U = T] on line N, but now we're leaking details of IndirectlyConstrained's implementation. Further to that, we haven't explained why DirectlyConstrained has the constraint, so do we add another Note: this constraint exists on U because DirectlyConstrained calls t.DoSomething() on line M? What if the implicit constraint comes from some callee four levels down the call stack?

Furthermore, how do we format this error messages for types that are not explicitly listed as parameters? E.g. if in the above example, IndirectlyConstrained calls DirectlyConstrained(t.U()). How do we even refer to the type? In this case we could say the type of t.U(), but the value won't necessarily be the result of a single expression; it could be built up over multiple statements. Then we'd either need to synthesize an expression with the correct types to put in the error message, one that never appears in the code, or we'd need to find some other way to refer to it which would be less clear to the poor caller who violated the constraint.

How do you express the explicit constraints on the type Arg? They depend on the specific instantiation of F. That sort of dependency seems to be missing from many of the recent proposals for constraints.

Drop F and have the type of f be func (Arg) Result. Yes, it ignores variadic functions, but the rest of Go does too. A proposal to make varargs funcs assignable to compatible signatures could be done separately.

For cases where we genuinely require higher-order type bounds, it may or may not make sense to include them in generics v1. Explicit constraints do force us to decide explicitly whether we want to support higher-order types, and how. The lack of consideration so far is a symptom, I think, of the fact that Go currently has no way to refer to properties of built-in types. It's a general open question how any generics system will allow functions generic over all numeric types, or all integer types, and most of the proposals haven't focused a lot on this.

Please evaluate my generics implementation in your next project
http://go-li.github.io/

We can say Error: cannot call IndirectlyConstrained with [T = int]; T must have a method with signature func (T t) DoSomething(). This error message […] provides no information to someone trying to debug why IndirectlyConstrained has that constraint, which is a big usability problem if it's a large function.

I want to point out a big assumption that you're making here: that the error message from go build is the _only_ tool the programmer has available to diagnose the problem.

To use an analogy: if you encounter an error at run-time, you have several options for debugging. The error itself contains only a simple message, which may or may not be adequate to describe the error. But it's not the only information you have available: for example, you also have whatever log statements the program emitted, and if it's a really gnarly bug you can load it into an interactive debugger.

That is, run-time debugging is an interactive process. So why should we assume non-interactive debugging for compile-time errors⸮ As one alternative, we could teach the guru tool about type constraints. Then, the output from the compiler would be something like:

somefile.go:123: Argument `4` to DirectlyConstrained has type `int`,
    but DirectlyConstrained requires a type `T` with method `DoSomething()`.
    (For more detail, run `guru constraints path/to/somefile.go:#1033`.)

That gives the user of the generic package the information they need in order to debug the immediate call site, but _also_ gives a breadcrumb for the package maintainer (and, importantly, their editing environment!) to investigate further.

We could add Note: this constraint exists on T because IndirectlyConstrained calls DirectlyConstrained with [U = T] on line N, but now we're leaking details of IndirectlyConstrained's implementation.

Yes, that's what I mean about information leaking out anyway. You can already use guru describe to peek inside an implementation. You can peek inside a running program using a debugger, and not only look up the stack but also step down into arbitrarily low-level functions.

I absolutely agree that we should hide likely-irrelevant information _by default_, but that doesn't mean that we must hide it in absolute.

If an implicitly-constrained function does for i := range t for some t of generic type T, the syntax works out if T is any array, slice, channel, or map. But the semantics are quite different, especially if T is a channel type.

I think that's the more compelling argument for type constraints, but that doesn't require explicit constraints to be anywhere near as verbose as what some folks are proposing. To disambiguate call sites, it seems sufficient to constrain the type parameters by something closer to reflect.Kind. We don't need to describe operations that are already clear from the code; instead, we only need to say things like “T is a slice type”. That leads to a much simpler set of constraints:

  • a type subject to index operations needs to be labeled as linear or associative,
  • a type subject to range operations needs to be labeled as nil-empty or nil-blocking,
  • a type with literals needs to be labeled as having fields or indices, and
  • (perhaps) a type with numeric operations needs to be labeled as fixed or floating point.

That leads to a much narrower constraint language, perhaps something like:

TypeConstraint = "sliceable" | "map" | "chan" | "struct" | "integer" | "float" | "type"

with examples like:

func[T:integer, U, V] Foo(u U, v V) {
    var _ T = u.v(v) + 1;
}
func [S:sliceable, T] append(s S, x ...T) S {
    dst := s
    if cap(s) - len(s) < len(x) {
        dst = make(S, len(s), nextSizeClass(len(s)+len(x)))
        copy(dst, s)
    }
    copy(dst[len(s):cap(s)], x)
    return dst[:len(s)+len(x)]
}

I feel we have moved a large step towards custom generics by introducing type aliases.
Type aliases make super types (types of types) possible.
We can treat types like values when using them.

To make the explanations simpler, we can add a new code element: the genre.
The relation between genres and types is like the relation between types and values.
In other words, a genre is a type of types.

Each kind of type, except the struct, interface, and function kinds, corresponds to a predeclared genre.

  • Bool
  • String
  • Int8, Uint8, Int16, Uint16, Int32, Uint32, Int64, Uint64, Int, Uint, Uintptr
  • Float32, Float64
  • Complex64, Complex128
  • Array, Slice, Map, Channel, Pointer, UnsafePointer

There are some other predeclared genres, such as Comparable, Numeric, Integer, Float, Complex, Container, etc. We can use Type or * to denote the genre of all types.

The names of all built-in genres start with an upper-case letter.

Each struct, interface, and function type corresponds to a genre.

We can also declare custom genres:

genre Addable = Numeric | String
genre Orderable = Integer | Float | String
genre Validator = func(int) bool // each parameter and result type must be a specified type.
genre HaveFieldsAndMethods = {
    width  int // we must use a specific type to define the fields.
    height int // we can't use a genre to define the fields.
    Load(v []byte) error // each parameter and result type must be a specified type.
    DoSomething()
}
genre GenreFromStruct = aStructType // declare a genre from a struct type
genre GenreFromInterface = anInterfaceType // declare a genre from an interface type
genre GenreFromStructInterface = aStructType + anInterfaceType
genre ComparableStruct = HaveFieldsAndMethods & Comparable
genre UncomparableStruct = HaveFieldsAndMethods &^ Comparable

To make the following explanation consistent, a genre modifier is needed.
The genre modifier is denoted by Const. For example:

  • Const Integer is a genre (different from Integer) and its instance must be a constant value whose type is an integer. However, the constant value can be viewed as a special type.
  • Const func(int) bool is a genre (different from func(int) bool) and its instance must be a declared function value. However, the function declaration can be viewed as a special type.

(The modifier solution is somewhat tricky; maybe there are better design solutions.)

Ok, let's continue.
We need another concept. Finding a good name for it is not easy;
let's just call it crate.
Generally, the relation between crates and genres is like the relation between functions and types.
A crate can take types as parameters and return types.

A crate declaration (assume the following code is declared in lib package):

crate Example [T Float, S {width, height T}, N Const Integer] [*, *, *] {
    type MyArray [N]T

    func Add(a, b T) T {
        return a+b
    }

    type M struct {
        x T
        y S
    }

    func (m *M) Area() T {
        m.DoSomething()
        return m.y.width * m.y.height
    }

    func (m *M) Perimeter() T {
        return 2 * Add(m.y.width, m.y.height)
    }

    export M, Add, MyArray
}

Using the above crate.

import "lib"

// We can use AddFunc as a normal declared function.
// Its genre is "Const func (a, b T) T"
type Rect, AddFunc, Array = lib.Example[float32, struct{width, height float32}, 100]

func demo() {
    var r Rect
    a, p := r.Area(), r.Perimeter()
    _ = AddFunc(a, p)
}

My ideas absorb many of the ideas others have shown above.
They are not very mature now.
I post them here just because I feel they are somewhat interesting,
and I don't want to improve them any more.
So many brain cells were killed by fixing the holes in these ideas.
I hope they can bring some inspiration to other gophers.

What you call “genre” is actually called “kind”, and is well-known in the
functional programming community. What you call a crate is a restricted
kind of ML functor.

I feel there are some differences between Kind and Genre.

By the way, if a crate returns only one type, we can use a call of it as a type directly.

package lib

// export a type
crate List [T *] * {
    type List struct {
        ...
    }

    export List
}

use it:

import "lib"

var l lib.List[int]

There would be some "genre deduction" rules, just like the "type deduction" rules in the current system.

@dotaheor, @DemiMarie is correct. Your “genre” concept sounds exactly like the “kind” from type theory. (Your proposal happens to require a subkinding rule, but that's not uncommon.)

The genre keyword in your proposal defines new kinds as super-kinds of existing kinds. The crate keyword defines objects with “crate signatures”, which are a kind that is not a subkind of Type.

As a formal system, your proposal seems to be something like:

Crate ::= χ | ⋯
Type ::= τ | χ | int | bool | ⋯ | func(τ) | func(τ) τ | []τ | χ[τ₁, …]

CrateSig ::= [κ₁, …] ⇒ [κₙ, …]
Kind ::= κ | exactly τ | kindOf κ | Map | Chan | ⋯ | Const κ | Type | CrateSig

To abuse some type-theory notation:

  • Read “⊢” as “entails”.
  • Read “κ₁ <: κ₂” as “κ₁ is a subkind of κ₂”.
  • Read “:” as “is of kind”.

Then the rules look something like:

τ : exactly τ
exactly τ <: kindOf exactly τ
kindOf exactly τ <: Type

τ : κ₁ ∧ κ₁ <: κ₂ ⊢ τ : κ₂

τ₁ : Type ∧ τ₂ : Type ⊢ kindOf exactly map[τ₁]τ₂ <: Map
Map <: Type

κ₁ <: κ₂ ⊢ Const κ₁ <: Const κ₂

[…]
(And so on, for all the built-in kinds)


Type definitions confer kinds, and the underlying kinds collapse to the kinds of built-in types:

type τ₁ τ₂ ∧ τ₂ : κ ⊢ τ₁ : kindOf κ

kindOf kindOf κ <: kindOf κ
kindOf Map <: Map
[…]


genre defines new subtype relationships:
genre κ = κ₁ | κ₂ ⊢ κ₁ <: κ
genre κ = κ₁ | κ₂ ⊢ κ₂ <: κ

(You can define Numeric and the like in terms of |.)

genre κ = κ₁ & κ₂ ∧ (κ₃ <: κ₁) ∧ (κ₃ <: κ₂) ⊢ κ₃ <: κ


The crate expansion rule is similar:
type τₙ, … = χ[τ₁, …] ∧ (χ : [κ₁, …] ⇒ [κₙ, …]) ∧ (τ₁ : κ₁) ∧ ⋯ ⊢ τₙ : κₙ

This is all just talking about the kinds, of course. If you want to turn it into a type system you also need type rules. 🙂


So what you're describing is a pretty well-understood form of parametricity. That's nice, in that it is well-understood, but disappointing in that it does not help to resolve the unique problems that Go introduces.

The really interesting and gnarly problems that Go introduces are mainly around dynamic type inspection. How should type parameters interact with type-assertions and reflection?

(For example, should it be possible to define interfaces with methods of parametric types? If so, what happens if you type-assert a value of that interface with a novel parameter at run-time?)

On a related note, has there been a discussion about how to make code generic over builtins and user-defined types? Such as making code that can handle bigints and primitive integers?

On a related note, has there been a discussion about how to make code generic over builtins and user-defined types? Such as making code that can handle bigints and primitive integers?

Type-class-based mechanisms, such as in Genus and Familia, can do this efficiently. See our PLDI 2015 paper for details.

@DemiMarie
I think "genre" == "trait set".

[edit]
Maybe traits is a better keyword.
We can view each kind as also being a trait set.

Most traits are defined for a single type only.
But a more complex trait may define a relation between two types.

[edit 2]
assume there are two trait sets A and B; we can do the following operations:

A + B: union set
A - B: difference set
A & B: intersection set

The trait set of an argument type must be a superset of the corresponding parameter genre (a trait set).
The trait set of a result type must be a subset of the corresponding result genre (a trait set).

(IMHO)

Still, I think rebinding type aliases is the way to go for adding generics to Go. It does not need a huge change in the language. Packages generalized this way can still be used in Go 1.x. And there is no need to add constraints, because the same effect can be had by setting the alias's default type to something that already fulfils those constraints. Most importantly, with rebinding of type aliases the built-in composite types (slices, maps, and channels) do not need to be changed or generalized.

@dc0d

How should type aliases replace generics?

@sighoya Rebinding type aliases (not type aliases alone) can replace generics. Let's assume a package introduces some package-level type aliases like:

package likedlist

type T = interface{}

type LinkedList struct {
    // ...
}

If type alias rebinding (and the compiler facilities for it) is provided, then it is possible to use this package to create linked lists of different concrete types instead of the empty interface:

package main

import (
    "likedlist"
)

type intLL = likedlist.LinkedList(likedlist.T = int)
type stringLL = likedlist.LinkedList(likedlist.T = string)

func main() {}

If we use aliases like this, the following form is cleaner.

// pkg.go
package pkg

type ListNode struct {
    prev, next *ListNode
    element    ?Element
}

func Add(x, y ?T) ?T {
    return x+y
}



// main.go
package main

import "pkg"

type intList = pkg.ListNode[Element=int]
func stringAdd = pkg.Add[T=string]

func main() {
}

@dc0d and how exactly would that be implemented? The code is nice, but it doesn't tell anything about how it actually works inside. And, looking at the history of generics proposals, for Go that is very important, not just how it looks and feels.

@dotaheor That is incompatible with Go 1.x.

@creker I have implemented a tool (named goreuse) that uses this technique for generating code; it was born as a proof of concept for type alias rebinding.

It can be found here. There is a 15 minutes video that explains the tool.

@dc0d so it works kind of like C++ templates generating specialized implementations. I don't think it would be accepted, as the Go team (and, frankly, me and many other people here) seems to be against anything similar to C++ templates. It increases binaries, slows compilation, and possibly would not be able to produce meaningful errors. And, on top of that, it is not compatible with binary-only packages, which Go does support, and which is why C++ opted for writing templates in header files.

@creker

so it works kind of like C++ templates generating specialized implementations for every used type.

I do not know (it's been about 16 years since I wrote any C++), but from your explanation it seems to be the case. Yet I am not sure if or how they are the same.

I don't think it would be accepted, as the Go team (and, frankly, me and many other people here) seems to be against anything similar to C++ templates.

Sure, everybody here has good reasons for their preferences based on their priorities. First on my list is compatibility with Go 1.x.

It increases binaries,

It might.

slows compilation,

I highly doubt that (as it can be experienced with goreuse).

And, on top of that, is not compatible with binary only packages which Go does support.

I am not sure. Do other ways of implementing generics support this?

possibly would not be able to produce meaningful errors.

This could be a bit troublesome. Still, it happens at compile time and can, to a great extent, be compensated for by employing some tools. Besides, if the type alias acting as the type parameter for the package is an interface, it can simply be checked that the provided concrete type is assignable to it. The problem remains, though, for primitive types like int and string, and for structs.

@dc0d

I thought a bit about it.
Besides the fact that it is internally established on interfaces, the 'T' in your example

type T=interface{}

is treated as a mutable type variable, but an alias should be a constant reference to one specific type.
What you want is T Type, but that would imply the introduction of generics.

@sighoya I am not sure if I understand what you said.

It is internally established on interfaces

Not true. As described in my original comment, it is possible to use specific types that fulfil a constraint. For example, the type parameter type alias can be declared as:

type T = int

And only types that have a + operator (or -, or *; depending on which operators are used in the body of the package) can be used as the type value that sits in that type parameter.

So it is not just interfaces that can be used as type parameters place-holder.

but this would imply the introduction of generics.

This _is_ a way for introducing/implementing generics in Go language itself.

@dc0d

To provide polymorphism you will use interface{}, as this allows setting T to any type later.

Setting 'type T = int' would not gain much.

If you would instead say that 'type T' is undeclared/undefined at first and can be set later, well, then you have something like generics.

The problem with that is that 'T' holds package/module-wide and is not local to any function or struct (okay, maybe a nested type declaration in a struct, which can be accessed from the outside).

Why not write this instead?

fun<type T>(t T)

or

fun[type T](t T)

Further, we need some type inference machinery to deduce the right types when calling a generic function or struct without specifying the type parameters first.

@dc0d wrote

And only types that has + operator (or - or *; depends if that operator is used at the body of the package at all) can be used as a type value that sits in that type parameter.

Can you elaborate more on this?

@sighoya

To provide polymorphism you will use interface{}, as this allows setting T to any type later.

Polymorphism is not achieved by having compatible types when rebinding type aliases. The only actual constraint is the body of the generic package: the substituted type has to be mechanically compatible with it.

Can you elaborate more on this?

For example, if a package-level type parameter type alias is defined like:

package genericadd

type T = int

func Add(a, b T) T { return a + b }

Then virtually all numeric types can be assigned to T, like:

package main

import (
    "genericadd"
)

var add = genericadd.Add(
    T = float64
)

func main() {
    var (
        a, b float64
    )

    println(add(a, b))
}

@dc0d

Yet I am not sure if or how they are the same.

They're the same in the sense that they work pretty much identically, from what I see. For every class template instantiation, the compiler generates a unique implementation the first time it sees a particular combination of class template and parameter list. That increases binary size, as you now have multiple implementations of the same class template. It slows compilation, as the compiler now needs to generate these implementations and do all sorts of checks. In the case of C++, the increase in compile time can be huge. Your toy examples are fast, but so are C++ ones.

I am not sure. Do other ways of implementing generics support this?

Other languages have no problem with that; C# in particular, as it is most familiar to me. But it uses runtime code generation, which the Go team rules out completely. Java also works, but their implementation is not the best, to say the least. Some of @ianlancetaylor's proposals could handle binary-only packages, from what I understand.

The only thing I don't understand is whether binary-only packages must be supported. I don't see them mentioned explicitly in the proposals. I don't really care about them, but still, it's a language feature.

Just to test my understanding... consider this repo of copy/paste algorithms [here]. Unless you want to use "int", the code cannot be used directly. It must be copied, pasted, and modified to work. And by modifications, I mean each instance of "int" must be changed to whatever type you really need.

The type alias approach would make that modification once, to say T, and insert the line "type T int". Then the compiler would only need to rebind T to something else, say float64.

Therefore:
a) I would argue that there would be no compiler slowdown unless you actually used this technique. So it is your choice.
b) Given the new vgo work, where multiple versions of the same code can be used (meaning there must already be some method of tucking the sources used away out of sight), surely the compiler can keep track of whether two uses of the same rebinding coincide and avoid duplication. So I think the code bloat would be the same as with current copy/paste techniques.

It seems to me that between type aliases and the coming vgo, the foundations for this approach to generics are nearly complete...

There are some "unknowns" listed in the proposal [here]. So it would be nice to flesh it out a bit more.

@mandolyte you can add another level of indirection by wrapping specialized types in some general container. That way your implementation can stay the same. The compiler will then do all the magic. I think Ian's type parameters proposal works that way.

I think the user needs a choice between type erasure and monomorphization.
The latter is why Rust provides zero-cost abstractions. Go should too.


It seems to me that there is an understandable confusion in this discussion about the tradeoff between modularity and performance. The C++ technique of re-type-checking and instantiating generic code at every type it is used for is bad for modularity, bad for binary distributions, and because of code bloat, bad for performance. The good part of that approach is that it automatically specializes the generated code to the types being used, which is particularly helpful when the types being used are primitive types like int. Java homogeneously translates generic code, but pays a price in performance, particularly when the code uses the type T[].

Fortunately, there are a couple of ways to address this without the non-modularity of C++ and without full run-time code generation:

  1. Generate specialized instantiations for primitive types. This could be done either automatically or by programmer directive. Some dispatching is needed to access the correct instantiation, but can be folded into the dispatching already needed by a homogeneous translation. This would work similarly to C#, but does not require full run-time code generation; a little extra support might be desirable in the runtime to set up dispatch tables as code loads.
  2. Use a single generic implementation in which an array of T is actually represented as an array of a primitive type when T is instantiated as a primitive type. This approach, which we used in PolyJ, Genus, and Familia, greatly improves performance relative to the Java approach, though it is not quite as fast as a fully specialized implementation.

@dc0d

Polymorphism is not achieved by having compatible types, when rebinding type aliases. The only actual constraint is, the body of the generic package. They have to be compatible mechanically.

Type aliases are the wrong way, because an alias should be a constant reference.
It is better to write 'T Type' directly; then you see that you are indeed using generics.

Why would you want to use a global type variable 'T' for the whole package/module? Local type variables in <> or [] are more modular.

@creker

In particular, C# as most familiar to me. But it uses runtime code generation that Go team rules out completely.

For reference types, but not for value types.

@DemiMarie

I think the user needs a choice between type erasure and monomorphization.
The latter is why Rust provides zero-cost abstractions. Go should too.

"Type erasure" is ambiguous; I will assume you mean type parameter erasure, the thing that Java provides, which is also not quite accurate.
Java has monomorphization, but it monomorphizes (semi-)constantly to the upper bound of the generic constraint, which is mostly Object.
To provide methods and fields of other types, the upper bound is internally cast to the appropriate type, which is quite ugly.
If the Valhalla project is accepted, things will change for value types, but sadly not for reference types.

Go doesn't have to go the Java way because:

"Binary compatibility for compiled packages is not guaranteed between releases"

whereas this is not possible in Java.

It seems to me that there is an understandable confusion in this discussion about the tradeoff between modularity and performance. The C++ technique of re-type-checking and instantiating generic code at every type it is used for is bad for modularity, bad for binary distributions, and because of code bloat, bad for performance.

Which kind of performance are you talking about here?

If by “code bloat” and “performance” you mean “binary size” and “instruction cache pressure”, then the problem is fairly straightforward to resolve: as long as you don't over-retain debug information for each specialization, you can collapse functions with the same bodies into the same function at link time (the so-called “Borland model”). That trivially handles specializations for primitive types and types without calls to non-trivial methods.

If by “code bloat” and “performance” you mean “linker input size” and “linking time”, then the problem is also fairly straightforward, if you can make certain (reasonable) assumptions about your build system. Instead of emitting each specialization in every compilation unit, you can instead emit a list of specializations needed, and have the build system instantiate each unique specialization exactly once prior to linking (the “Cfront model”). IIRC, this is one of the problems that C++ modules attempt to address.

So unless you mean a third kind of “code bloat” and “performance” that I have missed, it seems like you're talking about a problem with the implementation, not the specification: _as long as the implementation does not over-retain debug information,_ the performance issues are fairly straightforward to address.


The bigger problem for Go is that, if we are not careful, it becomes possible to use type-assertions or reflection to produce a novel instance of a parameterized type at run‐time, which no amount of implementation cleverness — short of an expensive whole‐program analysis — can fix.

That is indeed a failure of modularity, but it has ~nothing to do with code bloat: instead, it comes from the fact that the types of Go functions (and methods) do not capture a complete enough set of constraints on their arguments.

@sighoya

For reference types, but not for value types.

From what I've read, the C# JIT does specialization at runtime for each value type, and once for all reference types. There's no compile-time (IL-time) specialization. That is why the C# approach is completely ignored: the Go team doesn't want to depend on runtime code generation, as it limits the platforms Go can run on. In particular, on iOS you're not allowed to do code generation at runtime. It works, and I have actually done some of it, but Apple doesn't allow it in the AppStore.

How did you do it?


@DemiMarie I launched my old research code just to be sure (that research was dropped for other reasons). Once again, the debugger misled me. I allocate a page, write some instructions to it, mprotect it with PROT_EXEC, and jump to it. Under the debugger it works. Without the debugger, the app is SIGKILLed with a CODESIGN message in the crash log, as expected. So it doesn't work even outside the AppStore. An even stronger argument against runtime code generation if iOS is important for Go.

First, it would be helpful to ponder Rob Pike's 5 Rules of Programming one more time.

Second (IMHO):

About slow compilation and binary size: how many generic types are used in common kinds of applications being developed in Go (_n is usually small_, from Rule 3)? Unless the problem needs a high level of cardinality in concrete concepts (a high number of types), that overhead can be overlooked. Even then I would argue that something is wrong with the approach. When implementing an e-commerce system, nobody defines a separate type for each kind of product and its variations and possible customizations.

Verbosity is a good form of simplicity and familiarity (for example in syntax), which makes things more obvious and cleaner. While I doubt that code bloat would be higher using type alias rebinding, I do like the familiar Go-ish syntax and the obvious verbosity accompanying it. One of Go's goals is being easy to read (while I personally find it relatively easy and pleasant to write in, too).

I do not understand how it can harm performance, because at runtime only the concrete bound types generated at compile time are used. There is no runtime overhead.

The only concern with Type Alias Rebinding that I see, might be the binary distribution.

@dc0d performance harm usually means filling up the instruction cache due to different implementations of class templates. How exactly that relates to real performance is an open question; I don't know of any benchmarks, but theoretically it is a problem.

As for binary size: it's another theoretical issue that people usually bring up (as I did earlier), but how much real code would suffer from it is, again, an open question. For example, the specializations for all pointer and interface types could be the same, I think. But the specialization for each value type would be unique, and that also includes structs. Using generic containers to store them is common and would cause significant code bloat, as generic container implementations are not small.

The only concern with Type Alias Rebinding that I see, might be the binary distribution.

Here I'm still not sure. Does the generics proposal have to support binary-only packages, or could we just state that binary-only packages don't support generics? That would be much easier, for sure.

As was mentioned earlier, if one does not need to support debugging, one can combine identical template instantiations.

On Tue, Apr 10, 2018, 5:46 AM Kaveh Shahbazian notifications@github.com
wrote:

First, it would be helpful to ponder Rob Pike's 5 Rules of Programming
https://users.ece.utexas.edu/~adnan/pike.html one more time.

Second (IMHO):

About slow compilation and binary size: how many generic types are used in common types of applications that are being developed using Go (n is usually small, from Rule 3)? Unless the problem needs a high level of cardinality in concrete concepts (a high number of types), that overhead can be overlooked. Even then I would argue that something is wrong with that approach. When implementing an e-commerce system, nobody defines a separate type for each kind of product and its variations and perhaps the possible customizations.

Verbosity is a good form of simplicity and familiarity (for example in syntax) which makes things more obvious and cleaner. While I doubt that code bloat would be higher using Type Alias Rebinding, I do like the familiar Go-ish syntax and the obvious verbosity accompanying it. One of the goals of Go is being easy to read (while I personally find it relatively easy and pleasant to write in too).

I do not understand how it can harm performance because at runtime, only concrete bounded types are being used which had been generated at compile-time. There is no runtime overhead.

The only concern with Type Alias Rebinding that I see, might be the binary distribution.



The instantiations don't even need to be “identical” in the sense of “using the same arguments”, or even “using arguments with the same underlying type”. They just need to be close enough to result in the same generated code. (For Go, that also implies “the same pointer masks”.)

@creker

From what I've read, C# JIT does specialization at runtime for each value type and once for all reference types. There's no compile-time (IL-time) specialization.

Well, this is a little bit complicated, because their bytecode is JIT-compiled just before it executes: code generation happens after compilation (to IL) but before the code runs, so you are right in the sense that the VM is already running while code is generated.

I think C#'s generics system would be fine for Go if we instead generated the code at compile time.
Runtime code generation in the C# sense is not possible in Go, because Go does not run on a VM.

@dc0d

The only concern with Type Alias Rebinding that I see, might be the binary distribution.

Can you elaborate a bit.

@sighoya My mistake; I meant not binary distribution but binary packages - which personally I have no idea how important they are.

@creker Nice summary! (IMO) Unless a strong reason is found, any form of overloading Go's language constructs must be avoided. One reason for going with Type Alias Rebinding is to avoid overloading built-in composite types like slices or maps.

Verbosity is a good form of simplicity and familiarity (for example in syntax) which makes things more obvious and cleaner. While I doubt that code bloat would be higher using Type Alias Rebinding, I do like the familiar Go-ish syntax and the obvious verbosity accompanying it. One of the goals of Go is being easy to read (while I personally find it relatively easy and pleasant to write in too).

I disagree with this notion. Your proposal will force users to do the hardest thing known to any programmer - naming things. So we'll end up with code riddled with hungarian notation, which not only looks bad, it is unnecessarily verbose and causes stutters. Moreover, other proposals also bring in a go-ish syntax, and at the same time do not have these problems.

There are three categories of names that we have to devise on a daily basis:

  • For Domain Entities/Logic
  • Program Workflow Data Types/Logic
  • Services/Interfacing Data Types/Logic

How many times has a programmer ever succeeded in avoiding naming anything in her/his code?

Hard or not, it needs to be done on a daily basis. And most of its hurdles come from incompetence in structuring a code-base - not from the hardships of the naming process itself. That quote - at least in its current form - has done a great disservice to the world of programming so far. It simply tries to emphasise the importance of naming, because we communicate via names in our code.

And names become so much more powerful when they accompany a code structuring practice; both in terms of code layout (a file, directory structure, packages/modules) and practices (design patterns, service abstractions - such as REST, resource management - concurrent programming, accessing hard drive, throughput/latency).

As for syntax and verbosity, I do favour verbosity over clever conciseness (at least in the context of Go) - again, Go is meant to be easy to read, not necessarily easy to write (which strangely I find it good at that too).

I read a lot of experience reports and proposals on why and how to implement generics in Go.

Do you mind if I try to actually implement them in my Go interpreter gomacro?

I have some experience on the topic, having added generics to two languages in the past

  1. a now abandoned language that I created back when I was naive :) It transpiled to C source code
  2. Common Lisp with my library cl-parametric-types - it also supports partial and full specializations of generic types and functions

@cosmos72 it would make a nice experience report to see a prototype of a technique that preserved type safety.

Just started working on it. You can follow the progress on https://github.com/cosmos72/gomacro/tree/generics-v1

At the moment I am starting with a (slightly modified) blend of Ian's third and fourth proposals listed at https://github.com/golang/proposal/blob/master/design/15292-generics.md#Proposal

@cosmos72 There is a summary of proposals at the link below. Is your blend one of them?
https://docs.google.com/document/d/1vrAy9gMpMoS3uaVphB32uVXX4pi-HnNjkMEgyAHX4N4

I have read that document, it summarizes many different approaches to generics by various programming languages.

At the moment I am going toward the "Type specialization" technique used by C++, Rust and others, possibly with a little of "Parameterized template scopes", because Go's most general syntax for new types is type ( Foo ...; Bar ...) and I am extending it to template[T1,T2...] type ( Foo ...; Bar ...).
Also, I am keeping the door open for "Constrained specialization".

I would like to also implement the "Polymorphic function specialization", i.e. to arrange for the specialization to be automatically inferred by the language at the call site if not specified by the programmer, but I guess it may be somewhat complex to implement. We will see.

The blend I was referring to is between https://github.com/golang/proposal/blob/master/design/15292/2013-10-gen.md and https://github.com/golang/proposal/blob/master/design/15292/2013-12-type-params.md

Update: to avoid spamming this official Go issue beyond the initial announcement, it's probably better to continue the gomacro-specific discussion at gomacro issue #24: add generics

Update 2: first template functions compiled and executed successfully. See https://github.com/cosmos72/gomacro/tree/generics-v1

Just for the record, it is possible to rephrase my opinion (on generics and Type Alias Rebinding):

Generics should be added as a compiler feature (code generation, templates, etc), not a language feature (meddling with Go's type system at all levels).

@dc0d
But are C++ templates not a compiler and language feature?

@sighoya The last time I wrote C++ professionally was around 2001, so I might be wrong. But assuming the implications of the naming are accurate - the "template" part - it might be a compiler feature (and not a language feature), accompanied by some language constructs, which most likely do not overload any language constructs involved in the type system.

I support @dc0d. If you consider it, this feature would be nothing more than an integrated code generator.

Yes: the binary size may and WILL increase, but right now we use code generators, which are pretty much the same thing but as an external tool. If I have to create my template as:

type BinaryTreeOfStrings struct {
    left, right *BinaryTreeOfStrings
    content     string
}

// Its methods here

type BinaryTreeOfBigInts struct {
    left, right *BinaryTreeOfBigInts
    content     uint64
}

// AGAIN the same methods but for a different type

... I'd seriously like that, rather than copypasting or using an external tool, this feature become part of the compiler itself.

Please note:

  • Yes, end code would be duplicated, just as if we used a generator. And the binary would be bigger.
  • Yes, the idea is not original, but borrowed from C++.
  • Yes, functions of MyType not involving anything with type T (directly or indirectly) would also be repeated. That could be optimized (e.g. methods that refer something of type T -other than the pointer to the message-receiving object- will be generated for each T; methods that hold invocations to methods that would be generated for each T, will also be generated for each T, recursively - while methods where their only reference to T is *T in the receiver, and other methods calling only those safe methods and satisfying the same criteria, could be made only once). Anyway, IMO this point is large and less to the point: I'd be quite happy even if this optimization does not exist.
  • Type arguments should be explicit in my opinion. Especially when an object satisfies potentially infinite interfaces. Again: a code generator.

So far in my comment, my proposal is to implement it as-is: as a compiler-supported code generator, instead of an external tool.

It would be unfortunate for Go to follow the C++ route. Many people view the C++ approach as a mess that has turned programmers against the whole idea of generics: difficulty of debugging, lack of modularity, code bloat. All the "code generator" solutions are really just macro substitution — if that's the way you want to write code, why do we even need compiler support?

@andrewcmyers I had this proposal Type Alias Rebinding in which we write just normal packages and instead of using interface{} explicitly we just use it as type T = interface{} as a package-level generic parameter. And that's all.

  • We debug it like a normal package - it is actual code, not some intermediate half-life creature.
  • There is no need for meddling with Go type system at all levels - think about assignability alone.
  • It is explicit. No hidden mojo. Of course one might find not being able to chain generic calls seamlessly a drawback. I see it as a draw-forward! Changing type in two consecutive calls, in one statement, is not Goish (IMO).
  • And best of all it is backward compatible with Go 1.x (x >= 8) series.

While the idea is not new, the way that Go allows to implement it, is pragmatic and clear.

Further bonus: there is no operator overloading in Go. But by defining the default value of the type alias as (for example) type T = int, now the only valid types that can be used to customize this generic package are numeric types that have a built-in implementation of the + operator.

Also the alias type parameter can be forced to fulfill more than one interface just by adding some validator types and statements.

Now, that would be super ugly using any explicit notation for a generic type that has a parameter that implements Error and Stringer interfaces and also is a numeric type that supports + operator!

right now we use code generators, which are pretty the same but as an external feature.

The difference being, that the widely accepted way to do code generation (via go generate) happens at commit/development time, not at compile time. Doing it at compile time implies that you need to allow arbitrary code execution in the compiler, libraries may blow up compilation times by orders of magnitude and/or you'll have separate build dependencies (i.e. code can no longer be built just with the Go tool). I like Go for pushing the invocation of meta-programming to the upstream developer.

That is, as all approaches to solve these problems, this approach also has downsides and involves tradeoffs. Personally, I'd argue that actual generics with support in the type system are not only better (i.e. have a more powerful feature set) but also may retain the advantage of predictable and safe compilation.

I will read all the stuff above, I promise, and yet I'll add a bit as well - the GoLang SDK for Apache Beam seems a rather vivid showcase of the troubles a library designer has to endure to pull off anything _properly_ high-level.

There are at least two experimental implementations of Go generics. Earlier this week I spent some time with (1). I was pleased to find that the readability impact on the code was minimal. And I found that using anonymous functions to provide equality tests worked well, so I'm convinced that operator overloading isn't needed.

The one problem I did find was in error handling. The common idiom of "return nil, err" will not work if the type is, say, an integer or a string. There are a number of ways to work around this, all with a complexity cost. I may be a bit weird, but I like Go's error handling. So this leads me to observe that a Go generics solution ought to have a universal keyword for the zero value of a type. The compiler would simply replace it with zero for numeric types, an empty string for string types, nil for pointers, slices and maps, and a zeroed value for structs.

While this implementation did not enforce a package level approach, it would certainly be natural to do so. And, of course, this implementation did not address all the technical details about where compiler instantiated code should go (if anywhere), how code debuggers would work, etc.

It was quite nice to use the same algorithm code for integers and something like a Point:

type Point struct {
    x,y int
}

See (2) for my testing and observations.

(1) https://github.com/albrow/fo; the other is the aforementioned https://github.com/cosmos72/gomacro#generics
(2) https://github.com/mandolyte/fo-experiments

@mandolyte You can use *new(T) to get the zero value of any type.

A language construct like default(T) or zero(T) (the first one is the one
in C# IIRC) would be clear, but OTOH longer than *new(T) (although more
performant).


Issue #19642 is for discussing a generic zero value

@tmthrgd Somehow I missed that little tidbit. Thanks!

prelude

Generics are all about specializing customizable constructs. Three categories of specialization are:

  • Specializing Types, Type<T> - an _array_;
  • Specializing Computations, F<T>(T) or F<T>(Type<T>) - a _sortable array_;
  • Specializing Notation, _LINQ_ for example - select or for statements in Go;

Of course there are programming languages that present even more generic constructs. But conventional programming languages like _C++_, _C#_ or _Java_ provide language constructs more or less limited to this list.

thoughts

The first category of generic types/constructs should be type agnostic.

The second category of generic types/constructs need to _act_ upon a _property_ of the type parameter. For example a _sortable array_ has to be able to _compare_ the _comparable property_ of its items. Assuming T.(P) is a property of T and A(T.(P)) is a computation/action that acts upon that property, The (A, .(P)) can be applied either to each individual item or be declared as a specialized computation, passed to the original customizable computation. An example of the latter case in Go is the sort.Interface interface which also has the counterpart separate function sort.Reverse.

The third category of generic types/constructs are _type-specialized_ language notations - seems to not be a Go thing _in general_.

questions

to be continued ...

Any feedback more descriptive than an emoji is most welcome!

@dc0d I would recommend studying Stepanov's "Elements of Programming" before attempting to define Generics. The TL;DR is we write concrete code to begin with, say an algorithm that sorts an array. Later we add other collection types like a Btree, etc. We notice we are writing many copies of the sort algorithm that are essentially the same, so we define some concept, say 'sortable'. Now we want to categorise the sort algorithms, maybe by the access pattern they require, say forward only, single pass (a stream), forward only multiple pass (a singly linked list), bidirectional (a doubly linked list), random access (an array). When we add a new collection type we only need to indicate which category of "coordinate" it falls into to get access to all the relevant sort algorithms. These algorithm categories are a lot like 'Go' interfaces. I would be looking to extend interfaces in Go to support multiple type parameters, and abstract/associated types. I don't think functions need ad-hoc type parameterisation.

@dc0d As an attempt to break generics into component parts I hadn't considered 3, "specializing notation," as its own separate part before. Perhaps it could be characterized as defining DSLs by utilizing type constraints.

I might argue that your 1 and 2 are "data structures" and "algorithms", respectively. With that terminology it's a bit clearer why it might be difficult to cleanly separate them, since they are often highly dependent on one another. But sort.Interface is a pretty good example of where you can draw a line between storage and behavior (with a little recent sugar to make it nicer), since it encodes the requirements Indexable and Comparable into the minimum behavior needed to implement the sort algorithm with "swap" and "less" (and len). But this seems to break down on more complicated data structures like trees or heaps, both of which currently take some contortions to map into pure behavior as Go interfaces.

I could imagine a relatively small generics addition to interfaces (or otherwise) that could allow most textbook data structures and algorithms to be implemented relatively cleanly without contortions (like sort.Interface is today), but not be powerful enough to design DSLs. Whether we want to limit ourselves to such a restricted generics implementation when we're going to all the trouble of adding generics at all is a different question.

@infogulch coordinate structures for binary trees are "bifurcating coordinates", and equivalents exist for other trees. However you can also project the ordering of a tree through one of three orders, pre-order, in-order and post-order. Having decided on one of these the tree can be addressed as a bidirectional coordinate, and the family of sort algorithms defined on bidirectional coordinates would be optimally efficient.

The point is you categorise the sort algorithms by their access patterns. There are only a finite number of optimal sort algorithms for each access pattern. You don't care about the data-structures at this point. Talking about more complex structures misses the point, we want to categorise the family of sort algorithms not the data structures. Whatever data you have you will have to use one of the algorithms that exist to sort it, so the question becomes which of the available data-access-pattern categorisations of sort algorithms is optimal for the data-structures you have.

(IMHO)

@infogulch

Perhaps it could be characterized as defining DSLs by utilizing type constraints

You are right. But since they are part of the set of language constructs, IMO calling them DSLs would be a bit inaccurate.

1 and 2 ... are often highly dependent

Again true. But there are many cases that there is need for a container type to be passed around, while the actual usage is not decided yet - at that point in a program. That's why 1 is needed to be studied on its own.

sort.Interface is a pretty good example of where you can draw a line between _storage_ and _behavior_

Well said;

this seems to break down on more complicated data structures

That is one of my questions: to generalize the type parameter and describe it in terms of restrictions (like List<T> where T:new, IDisposable) or to provide a generalized _protocol_ applicable to all items (of a set; of a certain type)?

@keean

the question becomes which of the available data-access-pattern categorizations of sort algorithms is optimal for the data-structures you have

True. Accessing by index is a _property_ of a slice (or array). So the first requirement for a sortable container (or _tree_-able container, whatever the _tree_ algorithm is) is to provide an _access & mutate (swap)_ utility. The second requirement is that the items must be comparable. That's the confusing part (for me) about what you call algorithms: requirements must be fulfilled on both sides (on the container and on the type parameter). That's the point where I cannot imagine a pragmatic implementation of generics in Go. Each side of the problem can be described in terms of interfaces perfectly. But how to combine these two in an effective notation?

@dc0d algorithms require interfaces, data-structures provide them. This is enough for full generality, providing the interfaces are sufficiently powerful. Interfaces are parameterised by types, but you need type-variables.

Taking the 'sort' example, 'Ord' is a property of the type stored in the container, not the container itself. The access pattern is a property of the container. Simple access patterns are 'iterators', but that name comes from C++; Stepanov preferred 'coordinates' as it can be applied to more complex multi-dimensional containers.

Trying to define sort, we want something like this:

bubble_sort : forall T U I => T U -> T U requires
   ForwardIterator<T>, Readable<T>, Writable<T>,
   Ord<U>, ValueType(T) == U, DistanceType(T) == I

Note: I am not suggesting this notation, just trying to pull in some other related work, the requires clause is in the syntax preferred by Stepanov, the function type is from Haskell, whose type-classes probably represent a good implementation of these concepts.

@keean
Perhaps I'm misunderstanding you, but I don't think you can simply restrict algorithms to interfaces only, at least in the way interfaces are defined right now.
Consider sort.Slice, for example: we are interested in sorting slices, and I don't see how one would construct an interface that would represent all slices.

@urandom you abstract the algorithms, not the collections. So you ask what data access patterns exist in "sort" algorithms, and then classify those. So it doesn't matter whether the container is a "slice"; we are not attempting to define all operations you may want to perform on a slice, we are trying to determine the requirements of an algorithm and use that to define an interface. A slice is not special, it's just a type T that we can define a set of operations on.

So interfaces relate to libraries of algorithms, and you can define your own interfaces for your own data structures in order to be able to use those algorithms. The libraries could come with pre-defined interfaces for the built-in types.

@keean
I thought that that's what you meant. But in the context of Go, that would probably mean that there would need to be a significant overhaul of what interfaces can define. I'd imagine that various builtin operations, such as iterations, or operators, would need to be exposed via methods in order for things like sort.Slice or math.Max to be made generic over interfaces.

So you'd have to have support for the following interface (pseudo-code):

type [T] OrderedIterator interface {
   Len() int
   ValueAt(i int) *T
}

...
package sort

func [T] Slice(s [T]OrderedIterator, less func(i, j int) bool) {
   ...
}

and all slices would then have these methods?

@urandom An iterator is not an abstraction of a collection, but an abstraction of the reference/pointer into a collection. For example forward iterator could have a single method 'successor' (sometimes 'next'). Being able to access data at the location of an iterator is not a property of the iterator (otherwise you will end up with read/write/mutable flavours of iterator). It's best to define "references" separately as Readable, Writable and Mutable interfaces:

type T ForwardIterator interface {
   type DistanceType D
   successor(x T) T
}

type T Readable interface {
   type ValueType U 
   source(x T) U
}

Note: The type 'T' is not the slice, but the type of the iterator on the slice. This could just be a plain pointer, if we adopt the C++ style of passing a start and end iterator to functions like sort.

For a random access iterator, we would end up with something like:

type T RandomIterator interface {
   type DistanceType D
   setPosition(x DistanceType)
}

So an iterator/coordinate is an abstraction of the reference to a collection, not the collection itself. The name 'coordinate' expresses this quite nicely, if you think of the iterator as the coordinate, and the collection as the map.

Aren't we selling Go short by not leveraging function closures and anonymous functions? Having functions/methods as a first class type in Go can help. For example, using syntax from albrow/fo, a bubble sort might look like this:

type SortableContainer[C,T] struct {
    Less func(T, T) bool
    Swap func(C, int, int)
    At   func(C, int) T
    Len  func(C) int
}

func (bs *SortableContainer[C,T]) BubbleSort(container C) {
    swapCount := 1
    for swapCount > 0 {
        swapCount = 0
        for i := 1; i < bs.Len(container); i++ {
            if bs.Less(bs.At(container, i), bs.At(container, i-1)) {
                bs.Swap(container, i, i-1)
                swapCount++
            }
        }
    }
}

Please overlook any mistakes... completely untested!

@mandolyte I am not sure if this was addressed to me? I don't really see any difference between what I was suggesting and your example, except you are using multi-parameter interfaces, and I was giving examples using abstract/associated types. To be clear I think you need both multi-parameter interfaces, and abstract/associated types for full generality, neither of which are currently supported by Go.

I would suggest your interfaces are less general than the ones I proposed, because they tie the sort-order, access-pattern, and accessibility into the same interface, which of course will result in the proliferation of interfaces: for example, two orders (less, greater), three access types (read-only, write-only, mutable) and five access patterns (forward-single-pass, forward-multi-pass, bidirectional, indexed, random) would lead to 30 interfaces, compared to only 10 if the concerns are kept separate.

You could define the interfaces I propose with multi-parameter interfaces instead of abstract types like this:

type I ForwardIterator interface {
   successor(x I) I
}
type R V Readable interface {
   source(x R) V
}
type V Ord interface {
   less(x V, y V) bool
}

Notice that the only one that needs two type parameters is the Readable interface. However we lose that ability for an iterator object to 'contain' the type of the objects iterated over, which is a big problem as now we have to move the 'value' type around in the type system, and we have to get it correct. This leads to a proliferation of type parameters which is not good, and it increases the possibility of coding errors. We also lose the ability to define the 'DistanceType' on the iterator, which is the smallest number type necessary to count the elements in the collection, which is useful for mapping to int8, int16, int32 etc, to give the type you need to count elements without overflow.

This is closely tied to the concept of 'functional-dependency'. If a type is functionally-dependent on another type, it should be an abstract/associated type. Only if the two types are independent should they be separate type parameters.

Some problems:

  1. Cannot use current f(x I) syntax for multi-parameter interfaces. I don't like that this syntax confuses interfaces (which are constraints on types) with types anyway.
  2. There would need a way to declare parameterised types.
  3. There would need to be a way to declare associated types for interfaces with a given set of type parameters.

@keean Not sure I understand how or why the count of interfaces gets so high. Here is a complete working example: https://play.folang.org/p/BZa6BdsfBgZ (slice based, not a general container, thus no Next() method needed).

It uses only one type struct, no interfaces at all. I have to supply all the anonymous functions and closures (that's probably where the trade off is?). The example uses the same bubble sort algorithm to sort both a slice of integers and a slice of "(x,y)" Points, where distance from origin is the basis of the Less() function.

At any rate, I was hoping to show how having functions in the type system can help.

@mandolyte I think I misunderstood what you were proposing. I see what you are talking about is "folang" which already has some nice functional programming features added to Go. What you have implemented is basically plumbing a multi-parameter type-class by hand. You are passing what is known as a function dictionary to the sort function. This is doing explicitly what an interface would do implicitly. These kind of features are probably needed before multi-parameter interfaces and associated types, but you eventually run into problems passing all those dictionaries around. I think interfaces provide for cleaner more readable code.

Sorting a slice is a solved problem. Here's the code for a slice quicksort.go implemented using the go-li (golang improved) language.

func main(){
    var data = []int{5,3,1,8,9}

    Sort(data, func(a *int, b *int) int {
        return *a - *b
    })

    fmt.Println(data)
}

You can experiment with this on the playground

Full example you can paste to the playground, because importing the quicksort package does not work on the playground.

@go-li I am sure you can sort a slice, it would be a bit poor if you could not. The point is generically you would like to be able to sort any linear container with the same code, so that you only ever have to write a sort algorithm once, no matter what container (data-structure) you are sorting and no matter what the content is.

When you can do this, the standard library can provide universal sorting functions, and nobody need ever write one again. There are two benefits to this: fewer mistakes, as it's harder than you think to write a correct sorting algorithm - Stepanov uses the example that most programmers cannot correctly define the pair 'min' and 'max', so what hope do we have of being correct for more complex algorithms. The other benefit is that when there is only one definition of each sorting algorithm, any improvements in clarity or performance that can be made benefit all programs that use it. People can spend their time trying to improve the common algorithm instead of having to write their own for every different data-type.

@keean
Another questions related to our previous discussion. I can't figure out how one would be able to define a mapping function that changes items from an iterable, returning a new concrete iterable type whose items might be of a different type than the original one.

And I imagine a user of such a function would want a concrete type returned, not another interface.

@urandom Assuming we don't mean to do it 'in-place' which would be unsafe, what you want is a map function that has a 'read-iterator' of one type and a 'write-iterator' of another type, which can be defined something like:

map<I, O, U>(first I, last I, out O, fn U) requires
   ForwardIterator<I>, Readable<I>,
   ForwardIterator<O>, Writable<O>,
   UnaryFunction<U>, Domain(U) == ValueType(I), Codomain(U) == ValueType(O)

For clarity, "ValueType" is an associated type of the interfaces "Readable" and "Writable", "Domain" and "Codomain" are associated types of the "UnaryFunction" interface. It obviously helps a lot if the compiler can automatically derive the interfaces for data-types like "UnaryFunction". Whilst this sort of looks like reflection, it is not, and it all happens at compile time using static types.

@keean How to model those Readable and Writable constraints in the context of current Go's interfaces?

I mean, when we have a type A and we want to convert to type B, the signature of that UnaryFunction would be func (input A) B (right?), but how can that be modeled using only interfaces and how that generic map (or filter, reduce, etc.) would be modeled to keep the pipeline of types?

@geovanisouza92 I think "Type Families" would work well as they can be implemented as an orthogonal mechanism in the type system, and then integrated into the syntax for interfaces as is done in Haskell.

A type-family is like a restricted function on types (a mapping). As interface implementations are selected by type we can provide a type-mapping for each implementation.

So if we define:

ValueType MyIntArrayIterator -> Int

Functions are a little trickier, but a function has a type, for example:

fn(x : Int) Float

We would write this type:

Int -> Float

It is important to realise that -> is just an infix type constructor, like '[]' for an Array is a type constructor; we could just as easily write this:

Fn Int Float
Or
Fn<Int, Float>

Depending on our preference for type syntax. Now we can clearly see how we can define:

Domain  Fn<Int, Float> -> Int
Codomain Fn<Int, Float> -> Float

Now whilst we could provide all these definitions by hand, they can easily be derived by the compiler.

Given these type families, we can see that the definition of map I gave above only requires types I O and U to instantiate the generic, as all the other types are functionally dependent on these. We can see these types are directly provided by the arguments.

Thanks, @keean.

This would work fine for built-in/predefined functions. Are you saying the same concept would be applied for user-defined functions or userland libs?

Would those "Type Families" be carried over to runtime, for example in some error context?

How about empty interfaces, type switches and reflection?


EDIT: I'm just curious, not complaining.

@geovanisouza92 Well, nobody has committed Go to having generics, so I expect scepticism. My approach is that if you are going to do generics, you should do them right.

In my example 'map' is user defined. There is nothing special about it, and within the function you simply use the methods of the interfaces you have required on those types exactly as you do in Go right now. The only difference is that we can require a type to satisfy multiple interfaces, interfaces can have multiple type parameters (although the map example does not use this) and there are also associated types (and constraints on types like the type equality '==' but this is like a Prolog equality and unifies the types). This is why there is the different syntax for specifying the interfaces required by a function. Note there is another important difference:

f(x I, y I) requires ForwardIterator<I>

Vs

f(x ForwardIterator, y ForwardIterator)

Note the difference: in the latter, 'x' and 'y' can be different types that each satisfy the ForwardIterator interface, whereas in the former syntax 'x' and 'y' must both be the same type (one that satisfies ForwardIterator). This is important so that functions are not under-constrained, and it allows concrete types to be propagated much further during compilation.

I don't think anything changes regarding type switches and reflection, because we are just extending the concept of interfaces. As go has runtime type information you do not get into the same problem as Haskell and require existential types.

Thinking about Go, runtime polymorphism and type families, we would probably want to constrain the type-family itself to an interface to avoid having to treat every associated type as an empty interface at runtime which would be slow.

So in light of those thoughts I would modify my above proposal: when declaring an interface, you would declare an interface/type for each associated type, and every implementation of that interface would have to provide associated types satisfying those interfaces. That way we know it is safe to call any method from that interface on the associated types at runtime, without having to type-switch from an empty interface.

@keean
For the sake of advancing the debate, let me clear up a misconception; I feel something similar to not-invented-here syndrome is going on.

A bidirectional iterator (in T syntax, func (*T) *[2]*T) has the type func (*) *[2]* in go-li syntax. In words, it takes a pointer to some type and returns a pointer to two pointers to the next and previous elements of the same type. It is the fundamental concrete type underlying a doubly linked list.

Now you can write what you call map, what I call the foreach generic function. Make no mistake: this works not just over linked lists but over anything that exposes a bidirectional iterator!

func Foreach(link func(*) *[2]*, list **, direction byte, f func(*)) {

    if nil == *list {
        return
    }

    var end *
    end = *list

    var e *
    e = (*link(*list))[direction]
    f(end)

    for (e != end) && ((*link(e))[direction] != nil) {
        var newe = (*link(e))[direction]
        f(e)
        e = newe
    }
    return
}

The Foreach can be used in two ways. You can use it with a lambda for a for-loop-like iteration over list or collection elements.

const forward = 1
const backwards = 0
Foreach(iterator, collection, forward, func(element *element_type){
    // do something with every element
})

Or you can use it to functionally map a function to every collection element.

Foreach(iterator, collection, backwards, function_to_be_mapped_on_elements)

A bidirectional iterator can of course also be modeled using interfaces in Go 1:
interface Iterator { Iter() [2]Iterator }. You need to model it using interfaces in order to wrap ("box") the underlying type. The iterator's user then type-asserts the known type once it locates and wants to visit a specific collection element. This cannot be checked at compile time.

What you are describing next is the differences between legacy approach and generics based approach.

func modern(x func  (*) *[2]*, y func  (*) *[2]*){}

this approach compile time type checks that the two collections have the same underlying type, in other words whether the iterators actually return the same concrete types

func modern_T_syntax<T>(x func  (*T) *[2]*T, y func  (*T) *[2]*T){}

Same as above but using the familiar T means type placeholder syntax

func legacy(x Iterator, y Iterator){}

In this case the user can pass for example integer linked list as x and float linked list as y. This could lead to potential run-time errors, panics or other internal decoherences but it all depends on what legacy would do with the two iterators.

Now the misconception. You claim that doing iterators, and generic sorts over those iterators, would be the way to go. That would be a truly poor thing to do; here's why.

Iterator and linked list are two sides of the same coin. Proof: any collection that exposes an iterator simply advertises itself as a linked list. Let's say you need to sort that. What do you do?

Obviously you delete the linked list from your codebase and replace it with a binary tree. Or if you want to be fancy, use a balanced search tree like AVL or red-black, as proposed I don't know how many years ago by Ian et al. Still, this hasn't been done generically in Go. Now that would be the way to go.

Another solution is to quickly, in O(N) time, loop over the iterator, collect the pointers to elements into a slice of generic pointers, denoted []*T, and sort those generic pointers using the poor slice sort.

Please give other people's ideas a chance

@go-li If we want to avoid not-invented-here syndrome we should look to Alex Stepanov for a definition, as he pretty much invented generic programming. Here's how I would define it, taken from Stepanov's "Elements of Programming" page 111:

Bidirectional iterator<T> =
    ForwardIterator<T>
/\ predecessor : T -> T
/\ predecessor takes constant time
/\ (forall i in T) successor(i) is defined =>
        predecessor(successor(i)) is defined and equals i
/\ (forall i in T) predecessor(i) is defined =>
        successor(predecessor(i)) is defined and equals i

This depends on the definition of ForwardIterator:

ForwardIterator<T> =
    Iterator<T>
/\ regular_unary_function(successor)

So essentially we have an interface that declares a successor function and a predecessor function, along with some axioms that they must comply with to be valid.

Regarding legacy, it's not that legacy will go wrong; it obviously does not go wrong in Go currently. But the compiler is missing optimisation opportunities, and the type system is missing the opportunity to propagate concrete types further. It is also limiting the programmer's ability to specify intent precisely. An example would be an identity function, which I mean to return exactly the type it is passed:

id(x T) T

Perhaps it is also worth mentioning the difference between a parametric type and a universally quantified type. A parametric type would be id<T>(x T) T whereas the universally quantified one is id(x T) T (we normally omit the outermost universal quantifier in this case forall T). With parametric types the type system must have a type for T provided at the callsite for id, with universal quantification that is not necessary as long as T gets unified with a concrete type before compilation has finished. Another way to understand that is the parametric function is not a type but a template for a type, and it is only a valid type after T has been substituted for a concrete type. With the universally quantified function id actually has a type forall T . T -> T that can be passed about by the compiler just like Int.

@go-li

Obviously you delete the linked list from your codebase and replace it with a binary tree. Or if you want to be fancy, use a balanced search tree like AVL or red-black, as proposed I don't know how many years ago by Ian et al. Still, this hasn't been done generically in Go. Now that would be the way to go.

Having ordered data structures does not mean that you never need to sort data.

If we want to avoid not-invented-here syndrome we should look to Alex Stepanov for a definition, as he pretty much invented generic programming.

I would contest any claim that generic programming was invented by C++. Read the Liskov et al. 1977 CACM paper if you want to see an early model of generic programming that actually works (type-safe, modular, no code bloat): https://dl.acm.org/citation.cfm?id=359789 (see Section 4)

I think we should stop this discussion and wait for the golang team (russ) to come on with some blog posts and then implement a solution 👍 (see vgo) They'll just do it 🎉

https://peter.bourgon.org/blog/2018/07/27/a-response-about-dep-and-vgo.html

I hope this story serves as a warning to others: if you’re interested in making substantial contributions to the Go project, no amount of independent due diligence can compensate for a design that doesn’t originate from the core team.

This thread shows how the core team is not interested in actively participating in finding a solution with the community.

But in the end, if they can make a solution by themselves again, thats fine by me, just get it done 👍

@andrewcmyers Well, maybe "invented" was a bit of a stretch; the credit probably belongs more to David Musser in 1971, who later worked with Stepanov on some generic libraries for Ada.

Elements of Programming is not a book about C++; the examples may be in C++, but that is a very different thing. I think this book is essential reading for anyone wanting to implement generics in any language. Before dismissing Stepanov, you should really read the book to see what it's actually about.

This issue is already straining under the limits of GitHub scalability. Please keep the discussion here focused on concrete issues for Go proposals.

It would be unfortunate for Go to follow the C++ route.

@andrewcmyers Yes, I wholeheartedly agree, please do not use C++ for syntax suggestions or as a benchmark of doing things properly. Instead, please take a look at D for inspiration.

@nomad-software

I like D very much, but does Go need the powerful compile-time metaprogramming features which D offers?

I don't like the template syntax in C++ either; it stems from the stone age.

But what about the normal ParametricType&lt;T&gt; standard found in Java or C#? If needed, one can also overload this with more parameters, e.g. ParametricType&lt;T, U&gt;.

And furthermore, I don't like the template call syntax in D with its bang symbol; the bang symbol is nowadays rather used to denote mutable or immutable access for the parameters of a function.

@nomad-software I was not suggesting that C++ syntax or the template mechanism is the right way to do generics. More that "concepts" as defined by Stepanov treat types as an algebra, which is very much the right way to do generics. Look at Haskell type-classes to see how this could look. Haskell type classes are semantically very close to C++ templates plus concepts, if you understand what is going on.

So +1 for not following c++ syntax and +1 for not implementing a type-unsafe template system :-)

@keean The reason for the D syntax is to avoid <,> altogether and adhere to context-free grammar. This is part of my point to use D as inspiration. <,> is a really bad choice for the syntax of generic parameters.

@nomad-software As I pointed out above (in a now hidden comment) you need to specify the type parameters for parametric types, but not for universally quantified types (hence the difference between Rust and Haskell, the way types are handled is actually different in the type system). Also C++ concepts == Haskell type-classes == Go interfaces, at least at a conceptual level.

Is D syntax really preferable:

auto add(T)(T lhs, T rhs) {
    return lhs + rhs;
}

Why is this better than C++/Java/Rust style:

T add<T>(T lhs, T rhs) {
    return lhs + rhs;
}

Or Scala style:

T add[T](T lhs, T rhs) {
    return lhs + rhs;
}

I have done some thinking about syntax for type parameters. I have never been a fan of "angle brackets" in C++ and Java because they make parsing quite tricky and thus impede the development of tools. Square brackets are actually a classic choice (from CLU, System F, and other early languages with parametric polymorphism).

However, the syntax of Go is quite touchy, perhaps because it is already so terse. Possible syntaxes based on square brackets or parentheses create grammatical ambiguities even worse than those introduced by angle brackets. So despite my predispositions, angle brackets actually seem to be the best choice for Go. (Of course, there are also real angle brackets, ⟨⟩, which would not create any ambiguity, but they would require using Unicode characters.)

Of course, the precise syntax used for type parameters is less important than getting the semantics right. On that point, the C++ language is a bad model. My research group's work on generics in Genus (PLDI 2015) and Familia (OOPSLA 2017) offers another approach that extends type classes and unifies them with interfaces.

@andrewcmyers I think both of those papers are interesting, but I would say not a good direction for Go, as Genus is object oriented, and Go is not, and Familia unifies subtyping and parametric polymorphism, and Go has neither. I think Go should simply adopt either parametric polymorphism or universal quantification, it does not need subtyping, and in my opinion is a better language for not having it.

I think Go should be looking for generics that don't require object-orientation and don't require subtyping. Go already has interfaces, which I think are a fine mechanism for generics. If you can see that Go interfaces == c++ concepts == Haskell type-classes, it would seem to me that the way to add generics whilst keeping the flavour of 'Go' would be to extend interfaces to take multiple type parameters (I would like associated types on interfaces too, but that could be a separate extension of it helps get multiple type parameters accepted). That would be the key change, but to enable this there would need to be an 'alternative' syntax for interfaces in function signatures, so that you can get the multiple type parameters to the interfaces, which is where the whole angle bracket syntax comes in.

Go interfaces are not type classes — they are merely types — but unifying interfaces with type classes is what Familia shows a way to do. The mechanisms of Genus and Familia are not tied to the languages being fully object-oriented. Go interfaces already make Go "object-oriented" in the ways that matter, so I think the ideas could be adapted in slightly simplified form.

@andrewcmyers

Go interfaces are not type classes — they are merely types

They don't behave like types to me, as they allow polymorphism. The object in a polymorphic array like Addable[] still has its actual type (visible by runtime reflection), so they do behave exactly like single-parameter type classes. The fact that they get put in the place of a type in type signatures is simply a shorthand notation omitting the type variable. Don't confuse the notation with the semantics.

f(x : Addable) == f<T>(x : T) requires Addable<T>

This identity is of course only valid for single parameter interfaces.

The only significant difference between interfaces and single-parameter type classes is that interfaces are defined locally, but this is useful because it avoids the global coherence problem Haskell has with its type-classes. I think this is an interesting point in the design space. Multi-parameter interfaces would give you all the power of multi-parameter type-classes with the benefit of being local. There is no need to add any inheritance or subtyping to the Go language (which are the two key features that define OO I think).

IMHO:

Still, having a default type would be preferable to a DSL dedicated to expressing type restrictions. For example, a function f(s T fmt.Stringer) would be a generic function that accepts any type T that also satisfies the fmt.Stringer interface.

This way it is possible to have a generic function like:

func add(a, b T int) T int {
    return a + b
}

Now function add() works with any type T that, like int, supports the + operator.

@dc0d I agree that seems attractive looking at current Go syntax. However it is not 'complete' in that it cannot represent all the constraints necessary for generics, and there will still be a push to extend this further. This will result in a proliferation of different syntaxes which I see as in conflict with the goal of simplicity. My view is that simplicity is not simple, it has to be the simplest but still offer the required expressive power. Currently I see Go's major limitation in generic expressive power is lack of multi-parameter interfaces. For example a Collection interface could be defined like:

type T U Collection interface {
   member(c T, v U) Bool
   insert(c T, v U) T
}

So this makes sense right? We would like to write interfaces over things like collections. So the question is how do you use this interface in a function. My suggestion would be something like:

func[T, U] f(c T, e U) (Bool, T) requires Collection[T, U] {
   a := member(c, e)
   d := insert(c, e)
   return a, d
}

The syntax is only a suggestion however, I don't really mind what the syntax is, as long as you can express these concepts in the language.

@keean It would not be accurate to say I don't mind the syntax at all. But the point was to emphasise having a default type for every generic parameter. In that sense the provided example interface becomes:

type Collection interface (T interface{}, U interface{}) {
   member(c T, v U) Bool
   insert(c T, v U) T
}

Now the (T interface{}, U interface{}) part helps with defining constraints. For example if the members are meant to satisfy fmt.Stringer, then the definition would be:

type Collection interface (T fmt.Stringer, U fmt.Stringer) {
   member(c T, v U) Bool
   insert(c T, v U) T
}

@dc0d This would again be restrictive when you want a constraint involving more than one type parameter; consider:

type OrderedCollection[T, U] interface
   requires Collection[T, U], Ord[U] {...}

I think I see where you are coming from with the parameter placement, you could have:

type OrderedCollection interface(T, U)
   requires Collection(T, U), Ord(U) {...}

As I said, I am not too fussed by the syntax, as I can get used to most syntaxes. From the above I take it you prefer parentheses '()' for multi-parameter interfaces.

@keean Let's consider the heap.Interface interface. Current definition in standard library is:

type Interface interface {
    sort.Interface
    Push(x interface{}) // add x as element Len()
    Pop() interface{}   // remove and return element Len() - 1.
}

Now let's rewrite it as a generic interface, employing default type:

type Interface interface (T interface{}) {
    sort.Interface
    Push(x T) // add x as element Len()
    Pop() T   // remove and return element Len() - 1.
}

This breaks none of Go 1.x code series out there. One implementation would be my proposal for Type Alias Rebinding. But I am sure there can be better implementations.

Having default types allows us to write generic code that can be used with Go 1.x style code, and the standard library can become generic without breaking anything. That's a big win IMO.

@dc0d so you are suggesting an incremental improvement? What you are suggesting looks fine to me as an incremental improvement, however it still has limited generic expressive power. How would you implement "Collection" and "OrderedCollection" interfaces?

Consider that several partial language extensions may lead to a more complex end product (with multiple alternative syntaxes) than implementing the complete solution in the simplest way you can.

@keean I do not understand the requires Collection[T, U], Ord[U] part. How are they restricting type parameters T and U?

@dc0d They work the same way as in a function, but apply everywhere the interface is used. So for any pair of types T, U which form an OrderedCollection, we require that T, U is also an instance of Collection and that U is Ord. So anywhere we use OrderedCollection, we can use methods from Collection and Ord as appropriate.

If we are being minimalist these are not required, because we can include the extra interfaces in the function types where we need them, for example:

type OrderedCollection interface(T, U)
{
   first(c T) U
}

func[T] first(c T[]) T requires Collection(T[], T), Ord(T)
{...}

func[T] f(c T[]) requires OrderedCollection(T[], T), Collection(T[], T), Ord(T)
{...}

But this might be more readable:

type OrderedCollection interface(T, U) 
   requires Collection(T, U), Ord(U)
{
   first(c T) U
}

func[T] first(c T[]) T
{...}

func[T] f(c T[]) requires OrderedCollection(T[], T)
{...}

@keean (IMO) As long as there is a mandatory default value for the type parameters, I feel happy. That way it is possible to maintain backward compatibility with Go 1.x code series. That's the main point I tried to make.

@keean

Go interfaces are not type classes — they are merely types

They don't behave like types to me, as they allow polymorphism.

Yes, they allow subtype polymorphism. Go has subtyping via interface types. It does not have explicitly declared subtype hierarchies, but that is largely orthogonal. What makes Go not fully object-oriented is the lack of inheritance.

Alternatively, you can view interfaces as existentially quantified applications of type classes. I believe that is what you have in mind. That's what we did in Genus and Familia.

@andrewcmyers

Yes, they allow subtype polymorphism.

Go, as far as I know, is invariant; there is no covariance or contravariance, which strongly suggests that this is not subtyping. Polymorphic type systems are invariant, so to me Go seems closer to that model, and treating interfaces as single-parameter type classes seems more in line with the simplicity of Go. The lack of covariance and contravariance is a great benefit to generics; just look at the confusion such things create in languages like C#:

https://docs.microsoft.com/en-us/dotnet/standard/generics/covariance-and-contravariance

I think Go should totally avoid this kind of complexity. To me this means that we don't want generics and subtyping in the same type system.

Alternatively, you can view interfaces as existentially quantified applications of type classes. I believe that is what you have in mind. That's what we did in Genus and Familia.

Because Go has type information at runtime, there is no need for existential quantification. In Haskell types are unboxed (like native 'C' types) and this means once we have put something into an existential collection we cannot (easily) recover the type of the contents, all we can do is use the provided interfaces (type-classes). This is implemented by storing a pointer to the interfaces alongside the raw data. In Go the type of the data is stored instead, the data is 'Boxed' (as in C# boxed and unboxed data). As such Go is not limited to just the interfaces stored with the data because it is possible (by use of a type-case) to recover the type of the data in the collection, which is only possible in Haskell by implementing a 'Reflection' typeclass (although awkward to get the data out it is possible to serialise the type and the data, to say strings, and then deserialise outside the existential box). So the conclusion I have is that Go interfaces behave exactly like type-classes would, if Haskell provided the 'Reflection' type-class as a builtin. As such there is no existential box, and we can still type-case on the contents of collections, yet interfaces behave exactly like type-classes. The difference between Haskell and Go is in the semantics of boxed vs unboxed data, and interfaces are single parameter type-classes. In effect when 'Go' treats an interface as a type, what it is actually doing is:

Addable[] == exists T . T[] requires Addable[T], Reflection[T]

Its probably worth noting that this is the same way "Trait Objects" work in Rust.

Go can totally avoid existentials (being visible to the programmer), covariance and contravariance which is a good thing, and that will make generics much simpler and more powerful in my opinion.

Go as far as I know is invariant, there is no covariance or contravariance, this speaks strongly that this is not subtyping.

Polymorphic type systems are invariant, so to me it seems closer to this model, and treating interfaces as single parameter type classes seems more in line with the simplicity of Go.

May I suggest that you're both correct? Interfaces are equivalent to type-classes, but type-classes are a form of subtyping. The definitions of subtyping I have found so far are all pretty vague and imprecise, and boil down to "A is a subtype of B if one can be substituted for the other", which IMO can pretty easily be argued to be satisfied by type classes.

Note, that the variance-argument in itself isn't really working IMO. Variance is a property of type-constructors, not a language. And it's pretty normal that not all type-constructors in a language are variant (for example, lots of languages with subtyping have mutable arrays, which have to be invariant to be type-safe). So I don't see why you couldn't have subtyping without variant type constructors.

Also, I believe this discussion is a bit too broad for an issue on the Go repository. This shouldn't be about discussing the intricacies of type theories, but about if and how to add generics to Go.

@Merovius Variance is a property associated with subtyping. In languages without subtyping, there is no variance. For there to be variance in the first place you have to have subtyping, which introduces the covariance/contravariance problem to type constructors. You are right however that in a language with subtyping, it is possible to have all type-constructors invariant.

Type-classes are very definitely not subtyping, because a type-class is not a type. However we can view 'interface types' in Go as what Rust calls a 'trait object' effectively a type derived from the type-class.

Go's semantics seem to fit with either model at the moment, because it has no variance, and it has implicit 'trait objects'. So perhaps Go is at a tipping point, generics and the type system could be developed along the lines of subtyping, introducing variance and ending up with something like generics in C#. Alternatively Go could introduce multi-parameter interfaces, allowing interfaces for Collections, and this would break the immediate link between interfaces and 'interface types'. For example if you have:

type (T, U) Collection interface {
    member : (c T, e U) Bool
    insert: (c T, e U) T
}

member(c int32[], e int32) Bool {...}
insert(c int32[], e int32) int32[] {...}

member(c float32[], e float32) Bool {...}
insert(c float32[], e float32) float32[] {...}

There is no longer an obvious subtype relation between the types T, U and the interface Collection. So you can only view the relation between the instance type and interface types as subtyping for the special case of single-parameter interfaces, and we cannot express abstractions of things like collections with single-parameter interfaces.

I think for generics you clearly need to be able to model things like collections, so multi-parameter interfaces is a must-have for me. However I think the interaction between covariance and contravariance in generics creates an overly complex type system, so I would want to avoid subtyping.

@keean Since interfaces may be used as types, and type classes are not types, the most natural explanation of Go semantics is that interfaces are not type classes. I understand that you are arguing to generalize interfaces as type classes; I think it's a reasonable direction to take the language, and in fact we have already explored that approach extensively in our published work.

As to whether Go has subtyping, please consider the following code:

package main

type Cloneable interface {
    Clone() Cloneable
}

type CloneableZ interface {
    Clone() Cloneable
    zero() int
}

type S struct {}

func (t S) Clone() Cloneable {
    c := t
    return c
}

func (t S) zero() int {
    return 0
}

var x CloneableZ = S{}
var y Cloneable = x

func main() {
    print("ok\n")
}

The assignment from x to y demonstrates that the type of y may be used where the type of x is expected. This is a subtyping relationship, to wit: CloneableZ <: Cloneable, and also S <: CloneableZ. Even if you explained interfaces in terms of type classes, there would still be a subtyping relationship at play here, something like S <: ∃T.CloneableZ[T] <: ∃T.Cloneable[T].

Note that it would be perfectly safe for Go to allow the function Clone to return an S, but Go happens to enforce unnecessarily restrictive rules for conformance to interfaces: in fact, the same rules that Java originally enforced. Subtyping does not require non-invariant type constructors, as @Merovius observed.

@andrewcmyers What happens with multi-parameter interfaces, like those necessary to abstract collections?

Further the assignment from x to y can be seen as demonstrating interface inheritance with no subtyping at all. In Haskell (which clearly does not have subtyping) you would write:

class Cloneable t => CloneableZ t where...

Here x has a type that implements CloneableZ, which by definition also implements Cloneable, and so can obviously be assigned to y.

To try and summarise: you can either view an interface as a type, with Go having limited subtyping and no covariant or contravariant type constructors, or you can view it as a "trait object" (perhaps in Go we would call it an "interface object"), which is effectively a polymorphic container constrained by an interface "typeclass". In the typeclass model there is no subtyping, and therefore no reason to think about covariance and contravariance.

If we stick with the subtyping model, we cannot have generic collection types; this is why C++ had to introduce templates: object-oriented subtyping is not sufficient to generically define concepts like containers. We end up with two mechanisms for abstraction (objects and subtyping on one side, templates/traits and generics on the other), and the interactions between the two get complex; look at C++, C#, and Scala for examples. There will be continued calls to introduce covariant and contravariant constructors to increase the power of generics, in line with those other languages.

If we want generic collections without introducing a separate generics system, then we should think of interfaces like type-classes. Multi-parameter interfaces would mean no longer thinking about subtyping, and instead thinking about interface inheritance. If we want to improve generics in Go, and allow abstractions of things like collections, and we do not want the complexity of the type systems of languages like C++, C#, Scala etc, then multi-parameter interfaces, and interface inheritance are the way to go.

@keean

What happens with multi-parameter interfaces, like those necessary to abstract collections?

Please see our papers on Genus and Familia, which do support multiparameter type constraints. Familia unifies those constraints with interfaces and allows interfaces to constrain multiple types.

If we stick with the subtyping model, we cannot have collection types

I'm not completely sure what you mean by "the subtyping model", but it's pretty clear that Java and C# have collection types, so this claim doesn't make much sense to me.

Where we have x is a type that implements CloneableZ which by definition also implements Cloneable, so can obviously be assigned to y.

No, in my example, x is a variable and y is another variable. If I know that x is some CloneableZ type and y is some Cloneable type, that does not mean that I can assign from y to x. That is what my example is doing.

To clarify that subtyping is needed to model Go, below is a sharpened version of the example whose moral equivalent doesn't type-check in Haskell. The example shows that subtyping enables the creation of heterogeneous collections in which different elements have different implementations. Furthermore, the set of possible implementations is open-ended.

type Cloneable interface {
    Clone() Cloneable
}

type CloneableZ interface {
    Clone() Cloneable
    zero() int
}

type S struct {}

func (t S) Clone() Cloneable {
    c := t
    return c
}

type T struct { x int }

func (t T) Clone() Cloneable {
    c := t
    return c
}

func (t S) zero() int {
    return 0
}

var x CloneableZ = S{}
var y Cloneable = T{}
var a [2]Cloneable = [2]Cloneable{x, y}

@andrewcmyers

I'm not completely sure what you mean by "the subtyping model", but it's pretty clear that Java and C# have collection types, so this claim doesn't make much sense to me.

Have a look at why C++ developed templates: the OO subtyping model was not capable of expressing the generic concepts necessary for generalising things like collections. C# and Java also had to introduce a complete generics system separate from objects, subtyping and inheritance, and then had to clean up the mess of the complex interactions between the two systems with things like covariant and contravariant type constructors. With the benefit of hindsight we can avoid OO subtyping, and instead look at what happens if we add interfaces (type-classes) to a simply typed language. This is what Rust has done, so it is worth taking a look at, but of course it's complicated by the whole lifetime thing. Go has GC, so it would not have that complexity. My suggestion is that Go can be extended to allow multi-parameter interfaces, and avoid this complexity.

Regarding your claim that you cannot do this example in Haskell, here is the code:

{-# LANGUAGE ExistentialQuantification #-}

class ICloneable t where
    clone :: t -> t

class ICloneable t => ICloneableZ t where
    zero :: t

data S = S deriving Show

instance ICloneable S where
    clone x = x

data T = T Int deriving Show

instance ICloneable T where
    clone x = x

instance ICloneableZ T where
    zero = T 0

data Cloneable = forall a . (ICloneable a, Show a) => ToCloneable a

instance Show Cloneable where
    show (ToCloneable x) = show x

main = do
    x <- return S
    y <- return (T 27)
    a <- return [ToCloneable x, ToCloneable y]
    putStrLn (show a)

Some interesting differences: Go automatically derives the type data Cloneable = forall a . (ICloneable a, Show a) => ToCloneable a, as this is how you turn an interface (which has no storage) into a type (which has storage); Rust also derives these types and calls them "trait objects". In other languages like Java, C# and Scala you cannot instantiate interfaces, which is actually "correct": interfaces are not types, they have no storage. Go derives the type of an existential container automatically for you so that you can treat the interface like a type, and it hides this from you by giving the existential container the same name as the interface it is derived from. The other thing to note is that [2]Cloneable{x, y} coerces all the members to Cloneable, whereas Haskell does not have such implicit coercions, and we have to explicitly coerce the members with ToCloneable.

It has also been pointed out to me that we should not consider S and T subtypes of Cloneable because S and T are not structurally compatible. We can literally declare any type an instance of Cloneable (just by declaring the relevant definition of the function clone in Go) and those types need have no relation to each other at all.

Most of the proposals for Generics seem to include additional tokens which I think hurts the readability and the simple feeling of Go. I would like to propose a different syntax that I think could possibly work with Go's existing grammar well (even happens to syntax highlight pretty well in Github Markdown).

The main points of the proposal:

  • Go's grammar seems to always have an easy way to determine when a type declaration has ended because there's some specific token or keyword we're looking for. If this is true in all cases, type arguments can simply be added following the type names themselves.
  • Like most proposals, the same identifier means the same type in any function declaration. These identifiers never escape the declaration.
  • In most proposals you have to declare generic type arguments, but in this proposal it's implicit. Some people will claim this hurts readability or clarity (implicitness is bad), or constrains the ability to name a type, rebuttals follow:

    • When it comes to hurting readability I think you can argue it either way; the extra <T> or [T] hurts readability just as much by adding a lot of syntactic noise.

    • Implicitness when used properly can help a language be less verbose. We elide type declarations with := all the time because the information concealed by that simply isn't important enough to spell out each time.

    • Naming a concrete (non-generic) type a or t is probably bad practice, so this proposal assumes it's safe to reserve these identifiers to act as generic type arguments. Though this would require a go fix migration perhaps?

package main

import "fmt"

type LinkedList a struct {
  Head *Node a
  Tail *Node a
}

type Node a struct {
  Next *Node a
  Prev *Node a

  Value a
}

func main() {
  // Not sure about how recursive we could get with the inference
  ll := LinkedList string {
    // The string bit could be inferred
    Head: Node string { Value: "hello world" },
  }
}

func (l *LinkedList a) Append(value a) {
  newNode := &Node{Value: value}

  if l.Tail == nil {
    l.Head = newNode
    l.Tail = l.Head
    return
  }

  l.Tail.Next = newNode
  l.Tail = l.Tail.Next
}

This is taken from a Gist that has a bit more detail as well as sum types proposed here: https://gist.github.com/aarondl/9b950373642fcf5072942cf0fca2c3a2

This is not a fully fleshed-out generics proposal and it's not meant to be; there are a lot of problems to be solved before generics can be added to Go. This one only tackles syntax, and I'm hoping we can have a conversation about whether or not what's proposed is feasible / desirable.

@aarondl
Looks fine to me, using this syntax we would have:

type Collection a b interface {
   member(c a, e b) Bool
   insert(c a, e b) a
}

func insert(c *LinkedList a, e a) *LinkedList a {
   c.Append(e)
   return c
}

@keean Would you please explain the Collection type a bit. I fail to understand it:

type Collection a b interface {
   member(c a, e b) Bool
   insert(c a, e b) a
}

@dc0d Collection is an interface abstracting _all_ collections, so trees, lists, slices etc, so we can have generic operations like member and insert that will work on any collection containing any data-type. In the above I gave the example of defining 'insert' for the LinkedList type in the previous example:

func insert(c *LinkedList a, e a) *LinkedList a {
   c.Append(e)
   return c
}

We could also define it for a slice

func insert(c []a, e a) []a {
   return append(c, e)
}

However we don't even need the kind of parametric functions with type variables as illustrated by @aarondl with polymorphic type a for this to work, as you can just define for concrete types:

func insert(c *LinkedList int, e int) *LinkedList int {
   c.Append(e)
   return c
}

func insert(c *LinkedList float, e float) *LinkedList float {
   c.Append(e)
   return c
}

func insert(c []int, e int) []int {
   return append(c, e)
}

func insert(c []float, e float) []float {
   return append(c, e)
}

So Collection is an interface for generalising over both the type of a container and the type of its contents, allowing generic functions to be written that operate on all combinations of container and contents.

There's no reason you cannot also have a slice of collections []Collection where the contents would all be different collection types with different values types, providing member and insert were defined for each combination.

@aarondl Given that type LinkedList a is already a valid type declaration, I can only see two ways to make this parseable unambiguously: Making the grammar context sensitive (getting into the problems of parsing C, ugh) or using unbounded lookahead (which the go grammar tends to avoid, because of bad error messages in the failure case). I might be misunderstanding something, but IMO that speaks against a token-less approach.

@keean Interfaces in Go use methods, not functions. In the specific syntax you suggested, there is nothing that attaches insert to *LinkedList for the compiler (in Haskell that's done via instance declarations). It's also normal for methods to mutate the value they're operating on. None of this is a show-stopper, just pointing out that the syntax you're suggesting doesn't work well with Go. Probably more something like

type Collection e interface {
    Element(e) bool
    Insert(e)
}

func (l *(LinkedList e)) Element(el e) bool {
    // ...
}

func (l *(LinkedList e)) Insert(el e) {
    // ...
}

Which also demonstrates a couple more questions in regards to how the type parameters are scoped and how this should get parsed.

@aarondl there are also more questions I'd have about your proposal. For example, it doesn't allow constraints, so you only get unconstrained polymorphism. Which, in general, isn't really that useful, as you're not allowed to do anything with the values you're getting (e.g. you couldn't implement Collection with a map, as not all types are valid map keys). What should happen when someone tries to do something like that? If it's a compile-time error, does it complain about the instantiation (C++ error messages ahead) or at the definition (you can't do basically anything, because there is nothing that works with all types)?

@keean Still I fail to understand how a is restricted to be a list (or slice or any other collection). Is this a context-dependent special grammar for collections? If so what is its value? It is not possible to declare user-defined types this way.

@Merovius Does that mean Go cannot do multiple-dispatch, and makes the first argument of a 'function' special? This suggests that associated types would be a better fit than multiple-parameter interfaces. Something like this:

type Collection interface {
   type Element
   Member(e Element) Bool
   Insert(e Element) Collection
}

type IntSlice struct {
    value []Int,
}

type IntSlice.Element = Int

func (IntSlice) Member(e Int) Bool {...}
func (IntSlice) Insert(e Int) IntSlice {...}

func useIt(c Collection, e Collection.Element) {...}

However this still has problems because there is nothing constraining the two collections to be the same type... You would end up needing something like:

func[A] useIt(c A, e A.Element) requires A:Collection

To attempt to explain the difference, multi-parameter interfaces have extra _input_ types that take part in instance selection (hence the connection with multiple-dispatch), whereas associated types are _output_ types, only the receiver type takes part in instance selection, and then the associated types depend on the type of the receiver.

@dc0d a and b are type parameters of the interface, just like in a Haskell type class. For something to be considered a Collection it has to define the methods that match the types in the interface where a and b can be any type. However as @Merovius has pointed out, Go interfaces are method based, and do not support multiple-dispatch so multi-parameter interfaces may not be a good fit. With Go's single-dispatch method model, then having associated types in interfaces, instead of multiple-parameters would seem to be a better fit. However the lack of multiple dispatch makes implementing functions like unify(x, y) hard, and you have to use the double-dispatch pattern which is not very nice.

To explain the multi-parameter thing a bit further:

type Cloneable[A] interface {
   clone(x A) A
}

Here A stands for any type; we don't care what it is, as long as the correct functions are defined we consider it Cloneable. We would consider interfaces as constraints on types rather than types themselves.

func clone(x int) int {...}

So in the case of 'clone' we substitute int for A in the interface definition, and we can call clone if the substitution succeeds. This fits nicely with this notation:

func[A] test(x A) A requires Cloneable[A] {...}

This is equivalent to:

type Cloneable interface {
   clone() Cloneable
}

but declares a function not a method, and can be extended with multiple parameters. If you have a language with multiple-dispatch there is nothing special about the first argument of a function/method, so why write it in a different place.

As Go does not have multiple dispatch, this all starts to feel like its too much to change all at once. It seems like associated types would be a better fit, although more limited. This would allow abstract collections, but not elegant solutions to things like unification.

@Merovius Thanks for taking a look at the proposal. Let me try to address your concerns. I'm sad you thumbs downed the proposal before we discussed it more, I hope I can change your mind - or maybe you can change mine :)

Unbounded lookahead:
So as I mentioned in the proposal, Go's grammar currently seems to have a good way of detecting the "end" of pretty much everything syntactically, and we still would because of the implicit generic arguments, with a single lowercase letter being the syntactical construct that creates the generic argument. Or whatever we decide to make that inline token: maybe we even fall back to a tokenized thing like @a from the proposal if we like the syntax enough but it's not possible without tokens given compiler difficulty, though the proposal loses a lot of charm as soon as you do that.

Regardless, the problem with type LinkedList a under this proposal isn't that hard, because we know that a is a generic type argument, so this would fail with a compiler error just as type LinkedList fails today with: prog.go:3:16: expected type, found newline (and 1 more errors). The original post didn't really come out and say it, but you are not allowed to name a concrete type [a-z]{1} anymore, which I -think- solves this problem and is a sacrifice I think we'd all be okay with making (I can only see detriments in creating real types with single-letter names in Go code today).

It's just unconstrained polymorphism
The reason I had omitted any kind of traits or generic argument constraints is because I feel that's the role of interfaces in Go, if you would like to do something with a value then that value should be an interface type and not a fully generic type. I think this proposal plays well with interfaces too.

Under this proposal we would still have the same issue as we do now with operators like + so you couldn't make a generic add function for all numeric types, but you could accept a generic add function as an argument. Consider the following:

func Sort(slice []a, compare func (a, a) bool) { ... }

Questions about scoping

You gave an example here:

type Collection e interface {
    Element(e) bool
    Insert(e)
}

func (l *(LinkedList e)) Element(el e) bool {
    // ...
}

func (l *(LinkedList e)) Insert(el e) {
    // ...
}

The scope of these identifiers as a rule is bound to the particular declaration/definition they're in. They're shared nowhere and I'm not seeing a reason for them to be.

@keean That's very interesting although as others have pointed out you would have to change what you've shown there to actually be able to implement the interfaces (currently in your example there are no methods with receivers, only functions). Trying to think more about how this affects my original proposal.

Single letter lowercase being the syntactical construct that creates that generic argument

I don't feel good about that; it requires having separate productions for what an identifier is depending on context and also means arbitrarily forbidding certain identifiers for types. But it's not really the time to talk about these details.

Under this proposal we would still have the same issue as we do now with operators like +

I don't understand this sentence. Currently, the + operator doesn't have any of those problems, because the types of its operands are locally known and the error message is clear and unambiguous and points to the source of the problem. Am I correct in assuming that you are saying that you want to disallow any usage of generic values that is not allowed for all possible types (I can't think of a lot of such operations)? And create a compiler error for the offending expression in the generic function? IMO that would limit the value of generics too much.

if you would like to do something with a value then that value should be an interface type and not a fully generic type.

The two main reasons people want generics for, is performance (avoid wrapping of interfaces) and type-safety (making sure that the same type is used in different places, while not caring about which one it is). This seems to ignore those reasons.

you could accept a generic add function as an argument.

True. But pretty unergonomic. Consider how many complaints there were about the sort API. For a lot of generic containers, the number of functions that the caller would have to implement and pass seems prohibitive. Consider: how would a container/heap implementation look under this proposal, and how would it be better than the current implementation in terms of ergonomics? It would seem the wins are negligible here, at best. You'd have to implement more trivial functions (and duplicate/reference them at each usage site), not fewer.

@Merovius

thinking about this point from @aarondl

you could accept a generic add function as an argument.

It would be better to have an Addable interface to allow overloading of addition, given some syntax for defining infix operators:

type Addable interface {
   + (x Addable, y Addable) Addable
}

Unfortunately this does not work, because it does not express that we expect all the types to be the same. To define addable we would need something like the multi-parameter interfaces:

type Addable[A] interface {
   + (x A, y A) A
}

Then you would also need Go to do multiple-dispatch which would mean all arguments in a function are treated like a receiver for interface matching. So in the above example any type is Addable if there is a function + defined on it that satisfies the function definitions in the interface definition.

But given those changes you could now write:

type S struct {
   value int
}

func (+) (x S, y S) S {
   return S {
      value: x.value + y.value
   }
}

func main() {
    println(S {value: 27} + S {value: 5})
}

Of course function overloading and multiple-dispatch may be something that people never want in Go, but then things like defining basic arithmetic on user defined types like vectors, matrices, complex numbers etc, will always be impossible. Like I said above 'associated types' on interfaces would allow some increase in generic programming capability, but not full generality. Is multiple-dispatch (and presumably function overloading) something that could ever happen in Go?

things like defining basic arithmetic on user defined types like vectors, matrices, complex numbers etc, will always be impossible.

Some might consider that a feature :) AFAIR there is some proposal or thread floating around somewhere discussing whether it should. FWIW, I think this is - again - wandering off-topic. Operator overloading (or general "how to make Go more Haskell" ideas) isn't really the point of this issue :)

Is multiple-dispatch (and presumably function overloading) something that could ever happen in Go?

Never say never. I wouldn't expect it though, personally.

@Merovius

Some might consider that a feature :)

Sure, and if Go doesn't do it there are other languages that will :-) Go does not have to be everything to everyone. I was just trying to establish some scope for generics in Go. My focus is creating fully generic languages, as I have an aversion to repeating myself and boilerplate (and I don't like macros). If I had a penny for every time I have had to write a linked list or a tree in 'C' for some specific datatype. It actually makes some projects impossible for a small team because of the volume of code that needs to be held in your head to understand it, and then maintained through changes. Sometimes I think that people that don't get the need for generics just haven't written a large enough program yet. Of course you can instead have a large team of developers working on something and only have each developer responsible for a small part of the total code, but I am interested in making a single developer (or small team) as effective as possible.

Given that function overloading and multiple-dispatch is out of scope, and also given the parsing problems with @aarondl 's suggestion, it seems that adding associated types to interfaces, and type parameters to functions would be about as far as you would want to go with generics in Go.

Something like this would seem to be the right sort of thing:

type Collection interface {
   type Element
   Member(e Element) Bool
   Insert(e Element) Collection
}

type IntSlice struct {
    value []Int,
}

type IntSlice.Element = Int

func (IntSlice) Member(e Int) Bool {...}
func (IntSlice) Insert(e Int) IntSlice {...}

func useIt<T>(c T, e T.Element) requires T:Collection {...}

Then there would be a decision in the implementation whether to use parametric types or universally quantified types. With parametric types (like Java) then a 'generic' function is not actually a function but some kind of type-safe function template, and as such cannot be passed as an argument unless it is has its type parameter provided so:

f(useIt) // not okay with parametric types
f(useIt<List>) // okay with parametric types

With universally quantified types, you can pass useIt as an argument, and it can then be provided with a type parameter inside f. The reason to favour parametric types is because you can monomorphise the polymorphism at compile time meaning no elaboration of polymorphic functions at runtime. I am not sure this is a concern with Go, because Go is already doing runtime dispatch on interfaces, so as long as the type parameter for useIt implements Collection, you can dispatch to the correct receiver at runtime, so universal quantification is probably the right way for Go.

I wonder why SFINAE has been mentioned only by @bcmills. It is not even mentioned in the proposal (though Sort is there as an example).
What might Sort for a slice and a linked list look like then?

@keean
I can't figure out how one would define a generic 'Slice' collection with your suggestion. You seem to be defining an 'IntSlice' that might be implementing 'Collection' (though Insert returns a different type than the one wanted by the interface), but that is not a generic 'slice', as it seems to be only for ints, and the method implementations are only for ints. Do we need to define specific implementation per type?

Sometimes I think that people that don't get the need for generics just haven't written a large enough program yet.

I can assure you that impression is false. And FWIW, ISTM that "the other side" is putting "not seeing the need" into the same bucket as "not seeing the use". I see the use and don't refute it. I don't really see the need, though. I'm doing fine without, even in large codebases.

And don't mistake "wanting them to be done right and pointing out where existing proposals aren't" with "fundamentally opposing the very idea" either.

also given the parsing problems with @aarondl 's suggestion.

As I said, I don't think talking about the parsing problem is really productive right now. Parsing problems can be solved. The lack of constrained polymorphism is far more serious, semantically. IMO, adding generics without that just isn't really worth the effort.

@urandom

I can't figure out how one would define a generic 'Slice' collection with your suggestion.

As given above you would still need to define a separate implementation for each type of slice, however you would still gain from being able to write algorithms in terms of the generic interface. If you wanted to allow a generic implementation for all slices, you would need to allow parametric associated types and methods. Note I moved the type parameter to after the keyword so it occurs before the receiver type.

type<T> []T.Element = T

func<T> ([]T) Member(e T) Bool {...}
func<T> ([]T) Insert(e T) Collection {...}

However now you also have to deal with specialisation, because someone could define the associated type and methods for the more specialised []int and you would have to deal with which one to use. Normally you would go with the more specific instance, but it does add another layer of complexity.

I am not sure how much this actually gains you. With my original example above you can write generic algorithms to act on general collections using the interface, and you would only have to provide the methods and associated types for the types you actually use. The major win for me is being able to define algorithms like sort on arbitrary collections and put those algorithms in a library. If I then have a list of "shapes" I just have to define the collection interface methods for my list of shapes, and I can then use any algorithm in the library on them. Being able to define the interface methods for all slice types is of less interest to me, and might be too much complexity for Go?

@Merovius

I don't really see the need, though. I'm doing fine without, even in large codebases.

If you can cope with a 100,000 line program, then you will be able to do more with 100,000 generic lines than you could with 100,000 non-generic lines (due to the repetition). So you may be a super-star developer able to cope with very large codebases, but you would still achieve more with a very large generic codebase as you would be eliminating the redundancy. That generic program would expand into an even larger non-generic program. It just seems to me that you have not hit your complexity limit yet.

However, I think you are right: 'need' is too strong. I am happily writing Go code, with only occasional frustration about the lack of generics, and I can work around it by simply writing more code, which in Go is pleasantly direct and literal.

The lack of constrained polymorphism is far more serious, semantically. IMO, adding generics without that just isn't really worth the effort.

I agree with this.

you will be able to do more with 100,000 generic lines than you could with 100,000 non-generic lines (due to the repetition)

I'm curious, from your hypothetical example, what % of those lines would be a generic function?
In my experience this is less than 2% (from a codebase with 115k LOC), so I don't think this is a good argument unless you write a library for "collections"

I do wish we eventually get generics tho

@keean

Regarding your claim that you cannot do this example in Haskell, here is the code:

This code is not morally equivalent to the code I wrote. It introduces a new Cloneable wrapper type in addition to the ICloneable interface. The Go code did not need a wrapper; nor would other languages that support subtyping.

@andrewcmyers

This code is not morally equivalent to the code I wrote. It introduces a new Cloneable wrapper type in addition to the ICloneable interface.

Isn't this what this code does:

type Cloneable interface {...}

It introduces a data-type 'Cloneable' derived from the interface. You don't see the 'ICloneable' because you don't have instance declarations for interfaces; you just declare the methods.

Can you consider it subtyping when the types that implement an interface do not have to be structurally compatible?

@keean I would consider Cloneable to be merely a type, not really a "data type". In a language like Java, there would be essentially no added cost to the Cloneable abstraction, because there would be no wrapper, unlike in your code.

It seems to me limiting and undesirable to require structural similarity between types implementing an interface, so I am confused about what you're thinking here.

@andrewcmyers
I am using type and data type interchangeably. Any type that can contain data is a data-type.

because there would be no wrapper, unlike in your code.

There is always a wrapper because Go types are always boxed, so the wrapper exists around everything. Haskell needs the wrapper to be explicit because it has unboxed types.

structural similarity between types implementing an interface, so I am confused about what you're thinking here.

Structural subtyping requires the types to be 'structurally compatible'. As there is no explicit type hierarchy like in an OO language with inheritance, subtyping cannot be nominal, so it must be structural, if it is there at all.

I do see what you mean though, which I would describe as considering an interface to be an abstract base class, not an interface, with some kind of implicit nominal subtype relationship with any type that implements the required methods.

I actually think Go fits both models right now, and it could go either way from here, but I would suggest that calling it an interface not a class suggests a non-subtyping way of thinking.

@keean I don't understand your comment. First you tell me you disagree and that I "just haven't met my complexity limit yet" and then you tell me you agree (in that "need" is too strong a word). I also think your argument is fallacious (you assume LOC is the primary measure of complexity and that every line of code is equal). But most of all, I don't think the "who is writing more complicated programs" is really a productive line of discussion. I was just trying to clarify, that the argument "if you disagree with me, that must mean you are not working on as hard or interesting problems" isn't convincing and does not come off as in good faith. I hope you can just trust that people can disagree with you about the importance of this feature while being equally competent and doing just as interesting things.

@merovius
I was saying you are likely a more capable programmer than I am, and thus able to work with more complexity. I certainly don't think you are working on less interesting or less complex problems, and I am sorry it came across that way. I spent yesterday trying to get a scanner working, which was a very uninteresting problem.

I can think that generics help me write more complex programs with my limited brainpower, and also admit that I don't "need" generics. It's a question of degree. I can still program without generics, but I can't necessarily write software of the same complexity.

I hope that reassures you I am acting in good faith, I have no hidden agenda here, and if Go does not adopt generics I will still use it. I have an opinion about the best way to do generics, but it's not the only opinion, I can only talk from my own experience. If I'm not helping there are plenty of other things I can spend my time on, so just say the word, and I will refocus elsewhere.

@Merovius Thanks for the continued dialog.

| The two main reasons people want generics for, is performance (avoid wrapping of interfaces) and type-safety (making sure that the same type is used in different places, while not caring about which one it is). This seems to ignore those reasons.

Maybe we're looking at what I've proposed very differently, because from my perspective it does both of these things as far as I can tell. In the linked-list example there is no wrapping with interfaces, and therefore it should be as performant as if hand-written for a given type. On the type-safety side it is the same. Is there a counter-example you can give to help me understand where you're coming from?

| True. But pretty unergonomic. Consider how much complaints there where about the sort API. For a lot of generic containers, the amount of functions that the caller would have to implement and pass seems to be prohibitive. Consider, how would a container/heap implementation look under this proposal and how would it be better than the current implementation, in terms of ergonomics? It would seem, the wins are negligible here, at best. You'd have to implement more trivial functions (and duplicate to/reference at each usage site), not fewer.

I'm actually not concerned by this at all. I don't believe the number of functions would be prohibitive, but I'm definitely open to seeing counter-examples. Recall that the API people complained about was not one you had to pass a function to, but the original one here: https://golang.org/pkg/sort/#Interface, where you needed to create a new type wrapping your slice and then implement three methods on it. In light of the complaints and the pain associated with that interface, the following was created: https://golang.org/pkg/sort/#Slice. I for one have no problem with this API, and we would recover its performance penalties under the proposal we're discussing by simply altering the definition to func Slice(slice []a, less func(a, a) bool).

As for the container/heap data structure, whatever generics proposal is accepted, it will need an entire rewrite. container/heap, just like the sort package, provides algorithms on top of your own data structure; neither package ever owns the data structure, because otherwise we'd have []interface{} and the costs associated with that. Presumably we would change them, since generics would allow a Heap that owns a slice with a concrete type, and this is true under any of the proposals I've seen here (including my own).

I'm trying to tease apart the differences between our perspectives on what I've proposed. I think the root of the disagreement (past any syntactic preference) is that there are no constraints on the generic types. But I'm still trying to figure out what constraints gain us. If the answer is that nothing performance-sensitive is allowed to use an interface, then there's not a lot I can say here.

Consider the following hash table definition:

// Hasher turns a key into a hash
type Hasher interface {
  Hash() []byte
}

type HashTable v struct {
   Keys   []Hasher
   Values []v
}

// Note that the generic arguments must be repeated here and immediately
// understood without reading another line of code, which to me
// is a readability win over the sudden appearance of the K and V which are
// defined elsewhere in the code in the example below. This is of course because
// the tokenized type declarations with constraints are fairly painful in general
// and repeating them everywhere is simply too much.
func (h (*HashTable v)) Insert(key Hasher, value v) { ... }

Are we saying that the []Hasher is a non-starter due to performance/storage concerns and that in order to have a successful Generics implementation in Go we absolutely must have something like the following?

// Without selecting another proposal I have no idea how the constraint might be defined or implemented so let's just pretend
type [K: Hasher, V] HashTable struct {
   Keys   []K
   Values []V
}

func (h *HashTable) Insert(key K, value V) { ... }

Hopefully you see where I'm coming from. But it's definitely possible that I don't understand the constraints that you wish to impose upon certain code. Maybe there's use cases I haven't considered, regardless I hope to come to a fuller understanding of what the requirements are and how the proposal is failing them.

Maybe we're looking at what I've proposed very differently, as from my perspective it does both of these things as far as I can tell?

The "this" in the section you are quoting refers to using interfaces. The issue isn't that your proposal doesn't do either; it's that your proposal doesn't allow constrained polymorphism, which excludes most usages. And the alternative you suggested for that was interfaces, which don't really address the core use-case for generics either (because of the two things I mentioned).

For example, your proposal (as originally written) did not actually allow writing a generic map of any sorts, as that would require to be able to at least compare keys using == (which is a constraint, so implementing a map requires constrained polymorphism).

In light of the complaints and the pain associated with this interface the following was created: https://golang.org/pkg/sort/#Slice

Note that this interface still isn't possible under your generics proposal, as it relies on reflection for length and swapping (so, again, you have a constraint on slice operations). Even if we accept that API as the lower bound of what generics should be able to accomplish (lots of people wouldn't; there are still plenty of complaints about the lack of type-safety in that API), your proposal wouldn't pass that bar.

But also, again, you are quoting a response to a specific point you made, namely that you could get constrained polymorphism by passing function literals in the API. And that specific way you suggested to work around the lack of constrained polymorphism would require implementing more-or-less the old API. i.e. you are quoting my response to this argument, which you are then just repeating:

we would recover the performance penalties of this under the proposal we're discussing by simply altering the definition to func Slice(slice []a, less func(a, a) bool).

That's the old API though. You are saying "my proposal doesn't allow constrained polymorphism, but that's no problem, because we can just not use generics and instead use the existing solutions (reflection/interfaces) instead". Well, responding to "your proposal doesn't allow the most basic use cases that people want generics for" with "we can just do the things people are already doing without generics for those most basic use cases" doesn't seem to get us anywhere, TBH. A generics proposal that doesn't help you to write even basic container types, sort, max… just doesn't seem worth it.

this is true under any of the proposals I've seen here (including my own).

Most generics proposals include some way to constrain type-parameters. i.e. to express "the type parameter has to have a Less method", or "the type parameter must be comparable". Yours - AFAICT - doesn't.

Consider the following hash table definition:

Your definition is incomplete. a) The key type also needs equality and b) you are not preventing using different key types. i.e. this would be legal:

type hasherA uint64

func (a hasherA) Hash() []byte {
    b := make([]byte, 8)
    binary.BigEndian.PutUint64(b, uint64(a))
    return b
}

type hasherB string

func (b hasherB) Hash() []byte {
    return []byte(b)
}

h := new(HashTable int)
h.Insert(hasherA(42), 1)
h.Insert(hasherB("Hello world"), 2)

It shouldn't be legal, though, as you are using different key types; the container is not type-checked to the degree that people want. You need to parameterize the hash table over both key and value types:

type HashTable k v struct {
    Keys []k
    Values []v
}

func (h *(HashTable k v)) Insert(key k, value v) {
    // You can't actually do anything with k, as it's unconstrained. i.e. you can't hash it, compare it…
    // Implementing this is impossible in your proposal.
}

// If it weren't impossible, you'd get this:
h := new(HashTable hasherA int)
h[hasherA(42)] = 1
h[hasherB("Hello world")] = 2 // compile error - can't use hasherB as hasherA

Or, if it helps, imagine you are trying to implement a hash-set. You'd get the same issue but now the resulting container doesn't have any additional type-checking over interface{}.

This is why your proposal doesn't address the most basic use-cases: It relies on interfaces to constrain polymorphism, but then doesn't actually provide any way to check those interfaces for consistency. You can either have consistent type-checking or have constrained polymorphism, but not both. But you need both.

that in order to have a successful Generics implementation in Go we absolutely must have something like the following?

It's at least how I feel about that, yeah, pretty much. If a proposal doesn't allow writing type-safe containers or sort or… it doesn't really add anything to the existing language that is significant enough to justify the cost.

@Merovius Okay. I think I've got an understanding of what you want. Keep in mind that your use cases are very far from mine. I'm not really itching for type-safe containers, though I suspect - as you stated - that may be a minority opinion. A few of the biggest things I'd like to see are result types instead of errors, and easy slice manipulation without duplication or reflection everywhere, which my proposal does a reasonable job of addressing. However, I can see how, from your perspective, it "doesn't address the most basic use-cases" if your basic use-case is writing generic containers without the use of interfaces.

Note that this interface still isn't possible under your generics proposal, as it relies on reflection for length and swapping (so, again, you have a constraint on slice operations). Even if we accept that API as the lower bound of what generics should be able to accomplish (lots of people wouldn't; there are still plenty of complaints about the lack of type-safety in that API), your proposal wouldn't pass that bar.

Reading this, it's clear you've thoroughly misunderstood how generic slices would work under this proposal, and through that misunderstanding you've reached the false conclusion that "this interface still isn't possible in your proposal". I think a generic slice must be possible under any proposal. len() as I envisioned it would be defined as func len(slice []a), a generic slice argument, meaning it can count length for any slice without reflection. Easy slice manipulation is much of the point of this proposal, as I said above, and I'm sorry I wasn't able to convey that through the examples I gave and the gist I made. A generic slice should be as easy to use as an []int is today; I'll say again that any proposal that doesn't address this (slice/array swaps, assignment, len, cap, etc.) falls short in my opinion.

All that said, now we're really clear about what each other's goals are. When I proposed what I did, I said explicitly that it was simply a syntactical proposal and that the details were super fuzzy. But we got into the details anyway, and one of those details ended up being the lack of constraints. When I wrote it up I just didn't have them in mind, because they're not important for what I'd like to do; that's not to say we couldn't add them or that they're not desirable. The main problem with continuing with the proposed syntax while trying to shoehorn constraints in is that the definition of a generic argument currently repeats itself (intentionally), so there is no referring to code elsewhere to determine constraints. If we were to introduce constraints, I don't see how we could keep this.

The best counter-example is that sort function we were discussing earlier.

func Sort(slice []a:Lesser, less func(a:Lesser, a:Lesser)) { ... }

As you can see there's no nice way to make this happen, and the token-spam approaches to Generics start to sound better again. In order to define constraints on these we need to change two things from the original proposal:

  • There needs to be a way to point at a type argument and give it constraints.
  • The constraints need to last for longer than a single definition, perhaps that scope is a type, perhaps that scope is a file (file actually sounds pretty reasonable).

Disclaimer: The following isn't an actual amendment to the proposal because I'm just throwing random symbols out there, I'm just using these syntaxes as examples to illustrate what we could do to amend the proposal as it stands originally

// Decorator style: follows the definition of the type through
// all of its methods.
@a: Lesser, Hasher, Equaler
func Sort(slice []a) { ... }
@k: Equaler, Hasher
type HashTable k v struct

// Inline: follows the definition of the type through
// all of its methods.
func [a: Hasher, Equaler] Sort(slice []a) { ... }
type [k: Hasher, Equaler] HashTable k v struct

// File-scope global style: if k appears as a generic argument,
// it's constrained by this declaration, which appears at the top of
// the file underneath the imports but before any other code.
@k: Equaler, Hasher

Again note that none of the above I actually want to add to the proposal really. I'm just showing what sort of constructs we could use to solve the problem, and how they look is somewhat irrelevant right now.

The question we then need to answer is: Do we still gain value from the implicit generic arguments? The main point of the proposal was to keep the clean Go-like feel of the language, to keep things simple, to keep things sufficiently low noise by eliminating excessive tokens. In the many cases where there are no constraints necessary, for example a map function or the definition of a Result type, does it look good, does it feel like Go, is it useful? Assuming that constraints are also available in some form or another.

func map(slice []a, mapper func(a) b) []b {
  out := make([]b, len(slice))
  for i := range slice {
    out[i] = mapper(slice[i])
  }
  return out
}

type Result a b struct {
  Ok  a
  Err b
}

@aarondl I will have a go at explaining. The reason you need type constraints is that constraints are the only way you can call functions or methods on a generic type. Consider the unconstrained type a: what type can this be? It could be a string or an int or anything, so we cannot call any functions or methods on it, because we do not know the type. We could use a type switch and runtime reflection to recover the type and then call functions or methods on it, but that is exactly what we want to avoid with generics. When we constrain a type, for example a is an Animal, we can then call any method defined for an Animal on a.

In your example, yes, you can pass a mapper function in, but this results in functions taking a lot of arguments, and is basically like a language with no interfaces, just first-class functions. Passing every function you are going to use on type a produces a very long parameter list in any real program, especially if you are writing mainly generic code for dependency injection, which you want to do to minimise coupling.

For example, what if the function that calls map is also generic? What if the function that calls that is generic, and so on? How do we define mapper if we don't yet know the type of a?

func m(slice []a) []b {
   mapper := func(x a) b {...}
   return map(slice, mapper)
}

What functions can we call on x when trying to define mapper?

@keean I understand the purpose and the function of the constraints. I simply don't value them as highly as simple things like generic container structs (not generic containers so to speak) and generic slices and therefore didn't even include them in the original proposal.

I still mostly believe that interfaces are the right answer to problems like the one you're talking about where you're doing dependency injection, that just simply doesn't seem to be the right place for generics but who am I to say. The overlap between their responsibilities is quite large in my eyes, hence why @Merovius and I had to have the discussion whether or not we could live without them, and he's pretty much got me convinced they'd be useful in some use cases hence I explored a little bit of what we might be able to do to add the feature to the proposal I originally made.

As for your example, you can call no functions on x. But you can still operate on the slice as any other slice, which is tremendously useful on its own. Also, not sure what the func inside the func is... maybe you meant to assign to a var?

@aarondl
Thanks, I fixed the syntax, however I think the meaning was still clear.

The examples I gave above used both parametric polymorphism and interfaces to achieve some level of generic programming, however the lack of multiple dispatch is always going to place a ceiling on the level of generality achievable. As such it appears Go is not going to provide the features I am looking for in a language. That doesn't mean I can't use Go for some tasks, and in fact I already am, and it works well, even if I have had to cut-and-paste code that really only needs one definition. I just hope that in the future, if that code needs changing, the developer can find all the pasted instances of it.

I am then in two minds as to whether the limited generality possible without such big changes to the language is a good idea, considering the complexity it will add. Maybe Go is better off remaining simple, and people can add macro-like pre-processing, or other languages that compile to Go, to provide these features? On the other hand, adding parametric polymorphism would be a good first step. Allowing those type parameters to be constrained would be a good next step. Then you could add associated type parameters to interfaces, and you would have something reasonably generic, but that's probably as far as you can get without multiple dispatch. By splitting into separate smaller features I guess you would increase the chance of getting them accepted?

@keean
Is multiple-dispatch all that necessary? Very few languages natively support it. Even C++ doesn't support it. C# kinda supports it via dynamic but I've never used it in practice and the keyword in general is very very rare in real code. Examples I remember deal with something like JSON parsing, not writing generics.

Is multiple-dispatch all that necessary?

IMHO, I think @keean is speaking about static multiple dispatch provided by typeclasses/interfaces.
This is even provided in C++ by method overloading (I don't know about C#).

What you mean is dynamic multiple dispatch, which is quite cumbersome in static languages without union types. Dynamic languages circumvent this problem by omitting static type checking (partial type inference for dynamic languages; the same goes for C#'s "dynamic" type).

Could a type be provided as "just" a parameter?

func Append(t, t2 type, arr []t, value t2) []t {
    v := t(value) // conversion
    return append(arr, v)
}

var arr []float64
v := 0

arr = Append(float64, int, arr, v)

@Inuart wrote:

Could a type be provided as "just" a parameter?

It's questionable to what degree this would be possible or desirable in Go.

What you want could be achieved instead if generic constraints are supported:

func Append(arr []t, value s) []t requires Convertible<s,t> {
    v := t(value) // conversion
    return append(arr, v)
}

var arr []int64
v := 0.5

arr = Append(arr, v)

Also this should be possible with constraints, too:

func convert(value s) t requires Convertible<s,t> {
    return t(value)
}

var f float64 = 2.0

var i int64 = convert(f)

For what it is worth, our Genus language does support multiple dispatch. Models for a constraint can supply multiple implementations that are dispatched to.

I understand that the Convertible<s,t> notation is needed for compile time safety, but could maybe be degraded to a runtime check

func Append(t, t2 type, arr []t, value t2) []t {
    v, ok := t(value) // conversion
    if !ok {
        panic(...) // or return an err
    }
    return append(arr, v)
}

var arr []float64
v := 0

arr = Append(float64, int, arr, v)

But this looks more like syntax sugar for reflect.

@Inuart the point is the compiler can check the type implements the typeclass at compile time, so the runtime check is unnecessary. The benefit is better performance (so called zero cost abstraction). If it's a runtime check you may as well use reflect.

@creker

Is multiple-dispatch all that necessary?

I am in two minds about this. On the one hand, multiple dispatch (with multi-parameter type classes) does not work well with existentials, what 'Go' calls 'interface values'.

type Equals<T> interface {eq(right T) bool}
(left I) eq(right I) bool {return left == right}
(left I) eq(right F) bool {return false}
(left F) eq(right I) bool {return false}
(left F) eq(right F) bool {return left == right}

func main() {
    x := []Equals<?>{I{2}, F{4.0}, I{2}, F{4.0}}
}

We cannot define the slice of Equals because we have no way to indicate the right hand parameter is from the same collection. We cannot even do this in Haskell:

data Equals = forall a . IEquals a a => Equals a

This is no good because it only allows a type to be compared with itself

data Equals = forall a b . IEquals a b => Equals a

This is no good because we have no way to constrain b to be another existential in the same collection as a (if a even is in a collection).

It does however make it very easy to extend with a new type:

(left K) eq(right I) bool {return false}
(left K) eq(right F) bool {return false}
(left I) eq(right K) bool {return false}
(left F) eq(right K) bool {return false}
(left K) eq(right K) bool {return left == right}

And this would be even more concise with default instances or specialisation.

On the other hand we can rewrite this in 'Go' that works right now:

package main

type I struct {v int}
type F struct {v float32}

type EqualsInt interface {eqInt(left I) bool}
func (right I) eqInt (left I) bool {return left == right}
func (right F) eqInt (left I) bool {return false}

type EqualsFloat interface {eqFloat(left F) bool}
func (right I) eqFloat (left F) bool {return false}
func (right F) eqFloat (left F) bool {return left == right}

type EqualsRight interface {
    EqualsInt
    EqualsFloat
}

type EqualsLeft interface {eq(right EqualsRight) bool}
func (left I) eq (right EqualsRight) bool {return right.eqInt(left)}
func (left F) eq (right EqualsRight) bool {return right.eqFloat(left)}

type Equals interface {
    EqualsLeft
    EqualsRight
}

func main() {
    x := []Equals{I{2}, F{4.0}, I{2}, F{4.0}}
    println(x[0].eq(x[1]))
    println(x[1].eq(x[0]))
    println(x[0].eq(x[2]))
    println(x[1].eq(x[3]))
}

This works nicely with the existential (interface value); however, it's much more complex, harder to see what is going on and how it works, and it has the big restriction that we need one interface per type and we need to hard-code the acceptable right-hand-side types like this:

type EqualsRight interface {
    EqualsInt
    EqualsFloat
}

Which means we would have to modify the library source to add a new type because the interface EqualsRight is not extensible.

So without multi-parameter interfaces we cannot define extensible generic operators like equality. With multi-parameter interfaces existentials (interface values) become problematic.

My main issue with a lot of the proposed syntaxes (syntaces?) like Blah[E] is that the underlying type gives no indication that generics are involved.

For instance:

type Comparer[C] interface {
    Compare(other C) bool
}
// or
type Comparer c interface {
    Compare(other c) bool
}
...

This means we are declaring a new type which adds more information onto the underlying type. Isn't the point of the type declaration to define a name based on another type?

I'd propose a syntax more along the line of

type Comparer interface[C] {
    Compare(other C) bool
}

This means that really Comparer is just a type based on interface[C] { ... }, and interface[C] { ... } is of course its own separate type, distinct from interface { ... }. This allows you to use a generic interface without naming it, if you want (which is allowed with normal interfaces). I think this solution is a bit more intuitive and works well with Go's type system, although please correct me if I am wrong.

Note: Declaring a generic type would only be allowable on interfaces, structs, and funcs with the following syntaxes:
interface[G] { ... }
struct[G] { ... }
func[G] (vars...) { ... }

Then "implementing" the generics would have the following syntaxes:
interface[G] { ... }[string]
struct[G] { ... }[string]
func[G] (vars...) { ... }[int](args...)

And with some examples to make it a bit more clear:

Interfaces

package add

type Adder interface[E] {
    // Adds the element and returns the size
    Add(elem E) int
}

// Adds the integer 5 to any implementation of Adder[int].
func AddFiveTo(a Adder[int]) int {
    return a.Add(5)
}

Structs

package heap

type List struct[T] {
    slice []T
}

func (l *List) Add(elem T) { // T is a type defined by the receiver
    l.slice = append(l.slice, elem)
}

Functions

func[A] AddManyTo(a Adder[A], many ...A) {
    for _, each := range many {
        a.Add(each)
    }
}

This is in response to the Go2 contracts draft and I will use its syntax, but I'm posting it here as it applies to any proposal for parametric polymorphism.

Embedding of type parameters should not be allowed.

Consider

type X(type T C) struct {
  R // A regular type with method Foo()
  T // Some type parameter
}
// X defines some methods other than Foo(),
// some of which invoke Foo.

for some arbitrary type R and some arbitrary contract C that does not contain Foo().

T will have all the selectors required by C but a particular instantiation of T may also have arbitrary other selectors, including Foo.

Let's say Bar is a struct, admissible under C, that has a field named Foo.

X(Bar) could be an illegal instantiation. Without a way to specify the contract that a type not have a selector, this would have to be an inferred property.

Methods of X(Bar) could continue to resolve references to Foo as X(Bar).R.Foo. This makes writing the generic type possible but could be confusing to a reader unfamiliar with the nitpickery of the resolution rules. Outside of the methods of X, the selector would remain ambiguous so, while interface { Foo() } does not depend on the parameters of X, some instantiations of X would not satisfy it.

Disallowing embedding of a type parameter is simpler.

(If this is to be allowed, however, the field name would be T for the same reason that the field name of an embedded S defined as type S = io.Reader is S and not Reader but also because the type instantiating T does not necessarily need to have a name at all.)

@jimmyfrasche I think that embedded fields with generic types are useful enough that it would be good to allow them, even if there might be a little awkwardness in places. My suggestion would be to assume in all generic code that the embedded type has defined all possible fields and methods at every possible level, so that within generic code all embedded methods and fields of non-generic types are erased.

So given:

type R struct(type T) {
    io.Reader
    T
}

methods on R would not be able to invoke Read on R without indirecting through Reader. For example:

func (r R) Do() {
     r.Read(buf)     // Illegal
     r.Reader.Read(buf)  // ok
}

The only down side I can see of this is that the dynamic type may contain more members than the static type. For example:

func (r R) Do() {
    var x interface{} = r
    x.(io.Reader)    // Succeeds
}

@rogpeppe

The only down side I can see of this is that the dynamic type may contain more members than the static type.

This is the case with type parameters directly, so I think it should also be fine with parametric types. I think the solution to the problem @jimmyfrasche presented might be to put the desired method set of the parameterized type in the contract.

contract C(t T) {
  interface { Foo() } (X(T){})
  // ...
}

type X(type T C) struct {
  R // A regular type with method Foo()
  T // Some type parameter
}
// X defines some methods other than Foo(),
// some of which invoke Foo.

This would allow Foo to be called on X directly. Of course, this would run afoul of the "no local names in contracts" rule...

@stevenblenkinsop Hmm, it's possible, if awkward, to do that without referring to X

contract C(t T) {
  struct{ R; T }{}.Foo
}

C is still bound to the implementation of X albeit a bit more loosely.

If you don't do that, and you write

func (x X(T)) Fooer() interface { Foo() } {
  return x
}

does it compile? It wouldn't under @rogpeppe's rule which seems like it would need to be adopted as well for when you don't make the guarantee in the contract. But then does it apply only when you embed a type argument without a sufficient contract or for all embeddings?

It would be easier to just disallow it.

I started working on this proposal before the Go2 draft was announced.

I was ready to happily scrap mine when I saw the announcement, but I'm still unsettled with the complexity of the draft, so I finished mine up. It's less powerful but simpler. If nothing else, it may have some bits worth stealing.

It expands on the syntax of @ianlancetaylor's earlier proposals, as that is what was available when I began. That is not fundamental. It could be replaced by a (type T etc. syntax or something equivalent. I just needed some syntax as a notation for the semantics.

It is located here: https://gist.github.com/jimmyfrasche/656f3f47f2496e6b49e041cd8ac716e4

The rule would have to be that any method promoted from a greater depth than that of an embedded type parameter cannot be called unless (1) the identity of the type argument is known or (2) the method is asserted to be callable on the outer type by the contract constraining the type parameter. The compiler could also determine upper and lower bounds on the depth a promoted method must have within the outer type O, and use them to determine whether the method is callable on a type that embeds O, i.e. whether there is potential for conflict with other promoted methods or not. Something similar would also apply for any type parameter that is asserted to have callable methods, where the depth ranges of the methods within the type parameter would be [0, inf).

Embedding type parameters just seems too useful to forbid it completely. For one thing, it permits transparent composition, which the pattern of embedding interfaces doesn't allow.

I also found a potential use in defining contracts. If you want to be able to accept a value of type T (which could be a pointer type) which might have methods defined on *T, and you want to be able to put that value in an interface, you can't necessarily put T in the interface, since the methods might be on *T, and you can't necessarily put *T in the interface because T might itself be a pointer type (and thus *T might have an empty method set). However, if you had a wrapper like

type Wrapper(type T) struct { T }

you could put a *Wrapper(T) in the interface in all cases if your contract says it satisfies the interface.

Can't you just do

type Interface interface {
  SomeMethod(int) error
}

contract MightBeAPointer(t T) {
  Interface(t)
}

func Example(type T MightBeAPointer)(v T) {
  var i Interface = v
  // ...
}

I'm trying to handle the case where someone calls

type S struct{}
func (s *S) SomeMethod(int) error { ... }
...
var s S
Example(S)(s)

This won't work because S can't be converted to Interface, only *S can.

Obviously, the answer might be "don't do that". However, the contracts proposal describes contracts like:

contract Contract(t T) {
    var _ error = t.SomeMethod(int(0))
}

S would satisfy this contract because of auto-addressing, as would *S. What I'm trying to address is the capability gap between method calls and interface conversions in contracts.

Anyways, this is a bit of a tangent, showing one potential use for embedding type parameters.

Re embedding, I think “can embed in a struct” is another restriction that the contracts would have to capture if allowed.

Consider:

contract Embeddable(type X, Y) {
    type S struct {
        X
        Y
    }
}

type Embedded(type First, Second Embeddable) struct {
        First
        Second
}

// Error: First and Second both provide method Read.
// That must be diagnosed to the Embeddable contract, not the definition of Embedded itself.
type Boom = Embedded(*bytes.Buffer, *strings.Reader)

@bcmills embedding types with ambiguous selectors is allowed so I'm not sure how that contract is supposed to be interpreted.

At any rate, if you're only embedding known types, it's fine. If you're only embedding type parameters, it's fine. The only case that gets weird is when you embed one or more known types AND one or more type parameters, and then only when the selectors of the known type(s) and the type argument(s) aren't disjoint.

@bcmills embedding types with ambiguous selectors is allowed so I'm not sure how that contract is supposed to be interpreted.

Hmm, good point. I'm missing one more constraint to trigger the error.¹

contract Embeddable(type X, Y) {
    type S struct {
        X
        Y
    }
    var _ io.Reader = S{}
}

¹https://play.golang.org/p/3wSg5aRjcQc

That requires one of X or Y but not both to be an io.Reader. It's interesting that the contract system is expressive enough to allow that. I'm glad I don't have to figure out the type inference rules for such a beast.

But that's not really the problem.

It's when you do

type S (type T C) struct {
  io.Reader
  T
}
func (s *S(T)) X() io.Reader {
  return s
}

That should fail to compile because T could have a Read selector unless C has

struct{ io.Reader; T }.Read

But then what are the rules when C does not ensure the selector sets are disjoint and S does not reference the selectors? Is it possible for every instantiation S to satisfy an interface except for types that create an ambiguous selector?

Is it possible for every instantiation S to satisfy an interface except for types that create an ambiguous selector?

Yes, that seems to be the case. I wonder if that implies anything deeper... 🤔

I haven't been able to construct anything irredeemably nasty, but the asymmetry is quite unpleasant and makes me feel uneasy:

type I interface { /* ... */ }
a := G(A) // ok, A satisfies contract
var _ I = a // ok, no selector overlap
b := G(B) // ok, B satisfies contract
var _ I = b // error, selector overlap

I am worried about the error messages when G0(B) uses a G1(B) uses a . . . uses a Gn(B) and Gn is the one that causes the error. . . .

FTR, you don't need to go through the trouble of ambiguous selectors to trigger type-errors with embedding.

// Error: Duplicate field name Reader
type Boom = Embedded(*bytes.Reader, *strings.Reader)

You're assuming that the embedded field name is based on the argument type, whereas it's more likely to be the name of the embedded type parameter. This is like when you embed a type alias and the field name is the alias rather than the name of the type it aliases.

This is actually specified in the draft design in the section on parameterized types:

When a parameterized type is a struct, and the type parameter is embedded as a field in the struct, the name of the field is the name of the type parameter, not the name of the type argument.

type Lockable(type T) struct {
    T
    mu sync.Mutex
}

func (l *Lockable(T)) Get() T {
    l.mu.Lock()
    defer l.mu.Unlock()
    return l.T
}

(Note: this works poorly if you write Lockable(X) in the method declaration: should the method return l.T or l.X? Perhaps we should simply ban embedding a type parameter in a struct.)

I'm just sitting back here on the sidelines and observing. But also getting a tad worried.

One thing I am not embarrassed to say is that 90% of this discussion is over my head.

It seems 20 years of earning a living from writing software without knowing what generics or parametric polymorphism are hasn't stopped me from getting the job done.

Sadly, I only took the time about a year ago to learn Go. I made the false assumption that it had a steep learning curve and would take too long to become productive in.

I couldn't have been more wrong.

I was able to learn enough Go to build a microservice that absolutely destroyed the node.js service I was having performance trouble with in less than a weekend.

Ironically, I was just playing around. I wasn't particularly serious about conquering the world with Go.

And yet, within a couple of hours, I found myself sitting up from my slouched defeated posture, like I was on the edge of my seat watching an action thriller. The API I was building came together so quickly. I realised that this was indeed a language worth investing my precious time in, because it was obviously so pragmatic in its design.

And that's the thing I love about Go. It's very fast..... To learn. We all here know of its performance capabilities. But the speed at which it can be learnt is unmatched by the 8 other languages I have learnt over the years.

Since then I have been singing Go's praises, and gotten 4 more Devs to fall in love with it. I just sit with them for a couple of hours and build something. Results speak for themselves.

Simplicity, and speed to learn. These are the true killer features of the language.

Programming languages that require months of hard slog to learn often don't retain the very developers they seek to attract. We have work to do, and employers who want to see progress daily (thanks agile, appreciate it).

So, there are two things I hope the Go team can take into consideration:

1) What day to day problem are we looking to solve?

I can't seem to find a real world example, with a show stopper that would be solved by generics, or whatever they are going to be called.

Cookbook style examples of every day tasks that are problematic, with an example of how they might be improved with these language change proposals.

2) Keep it simple, like all the other great features of Go

There are some incredibly intelligent comments here. But I'm certain that the majority of developers who use Go on a day to day basis for general programming such as myself, are perfectly happy and productive with things the way they are.

Perhaps a compiler argument to enable such advanced features? ‘--hardcore’

I would be really sad if we negatively impacted the compiler performance. Just say'n

And that's the thing I love about Go. It's very fast..... To learn. We all here know of its performance capabilities. But the speed at which it can be learnt is unmatched by the 8 other languages I have learnt over the years.

I completely agree. The combination of power with simplicity in a fully compiled language is something that is completely unique. I definitely don't want Go to lose that, and as much as I want generics, I don't think they're worth it at that expense. I don't think it's necessary to lose that, though.

I can't seem to find a real world example, with a show stopper that would be solved by generics, or whatever they are going to be called.

I have two primary use cases for generics: type-safe boilerplate elimination for complex data structures, such as binary trees, sets, and sync.Map, and the ability to write _compile-time_ type-safe functions that operate purely on the functionality of their arguments, rather than their layout in memory. There are some fancier things I wouldn't mind being able to do, but I wouldn't mind _not_ being able to do them if it's impossible to add support for them without completely breaking the simplicity of the language.

To be honest, there are already features in the language that are quite abusable. The primary reason that they're _not_ abused that often, I think, is the Go culture of writing 'idiomatic' code, combined with the standard library providing clean, easy to find examples of such code, for the most part. Getting good usage of generics into the standard library should definitely be a priority when they're implemented.

@camstuart

I can't seem to find a real world example, with a show stopper that would be solved by generics, or whatever they are going to be called.

Generics are so you don't have to write the code yourself. So you never need to implement another linked list, binary tree, deque, or priority-queue yourself again. You will never need to implement a sort algorithm, a partitioning algorithm or a rotate algorithm etc. Data structures become composing standard collections (a Map of Lists for example), and processing becomes composing standard algorithms (I need to sort the data, partition, and rotate). If you can re-use these components the error rate goes down, because every time you re-implement a Priority Queue, or a partitioning algorithm there is a chance you get it wrong and introduce a bug.

Generics mean you write less code, and re-use more. They mean that standard, well maintained library functions and abstract data types can be used in more situations, so you don't have to write your own.

Even better, all of that can technically be done in Go right now, but only with a near complete loss of compile-time type-safety _and_ with some, potentially major, runtime overhead. Generics let you do it without either of those downsides.

Generic function implementation:

/*
 * "generic" is a KIND of type, just like "struct", "map", "interface", etc.
 * "T" is a generic type (a type of kind generic).
 * var t = T{int} is a value of type T; values of generic types look like a "normal" type.
 */

type T generic {
    int
    float64
    string
}

func Sum(a, b T{}) T{} {
    return a + b
}

Function caller:

Sum(1, 1) // 2
// same as:
Sum(T{int}(1), T{int}(1)) // 2

Generic struct implementation:

type ItemT generic {
    interface{}
}

type List struct {
    l []ItemT{}
}

func NewList(t ItemT) *List {
    l := make([]t, 0)
    return &List{l}
}

func (p *List) Push(item ItemT{}) {
    p.l = append(p.l, item)
}

Caller:

list := NewList(ItemT{int})
list.Push(42)

As someone just learning Swift and not liking it, but with plenty of experience in other languages like Go, C, Java, etc; I really believe that generics (or templating, or whatever you want to call it) is not a good thing to add to the Go language.

Maybe I'm just more experienced with the current version of Go but to me this feels like a regression to C++ in that it is more difficult to understand code that other people have written. The classic T placeholder for types makes it so difficult to understand what a function is trying to do.

I know this is a popular feature request so I can deal with it if it lands, but I wanted to add my 2 cents (opinion).

@jlubawy
Do you know another way that I never have to implement a linked list or quicksort algorithm? As Alexander Stepanov points out, most programmers cannot correctly define the "min" and "max" functions, so what hope do we have of correctly implementing more complex algorithms without lots of debugging time? I would much rather pull standard versions of these algorithms out of a library and just apply them to the types I have. What alternative is there?

@jlubawy

or templating, or whatever you want to call it

Everything depends on the implementation. If we're talking about C++ templates, then yes, they're difficult to understand in general. Even writing them is difficult. On the other hand, if we take C# generics, that's a whole other thing entirely. The concept itself is not the problem here.

If you didn't know, the Go Team has announced a draft of Go 2.0:
https://golang.org/s/go2designs

There is a draft to the Generics design in Go 2.0 (contract). You may want to take a look and give feedback on their Wiki.

This is the relevant section:

Generics

After reading the draft, I ask:

Why

T:Addable

means "a type T implementing the contract Addable"? Why add a new concept when we already have INTERFACES for that? Interface assignment is checked at build time, so we already have the means to avoid any additional concept here. We can use this term to say something like: any type T implementing the interface Addable. Additionally, T:_ or T:Any (with Any being a special keyword or a built-in alias of interface{}) would do the trick.

I just don't know why they reimplement most of the stuff like that. It makes no sense and WILL be redundant (as redundant as the new handling of errors is w.r.t. the handling of panics).


Edit: "[...] would do the trick IF YOU NEED NO PARTICULAR REQUIREMENT ON
THE TYPE ARGUMENT".


@luismasuelli-jobsity If I read the history of generic implementations in Go correctly, then it looks like the reason to introduce Contracts is that they did not want operator overloading in Interfaces.

An earlier proposal used interfaces to constrain parametric polymorphism, but it seems to have been rejected because you could not use common operators like '+' in such functions, since they are not definable in an interface. Contracts allow you to write t == t or t + t so you can indicate the type must support equality or addition etc.

Edit: Also, Go does not support interfaces with multiple type parameters, so in a way Go has separated the typeclass into two separate things: Contracts, which relate the function's type parameters to each other, and interfaces, which supply methods. What it loses is the ability to select a typeclass implementation based on multiple types. It is arguably simpler if you only need to use interfaces or contracts, but more complex if you need to use both together.

Why T:Addable means "a type T implementing the contract Addable"?

That's actually not what it means; it just looks that way for one type argument. Elsewhere in the draft it makes the comment that you can only have one contract per function, and this is where the main difference comes in. Contracts are actually statements about the types of the function, not just the types independently. For example, if you have

func Example(type K, V someContract)(k K, v V) V

you can do something like

contract someContract(k K, v V) {
  k.someMethod(v)
}

This vastly simplifies coordinating multiple types without having to redundantly specify the types in the function signature. Remember, they're trying to avoid the 'curiously repeating generic pattern'. For example, the same function with parameterized interfaces used to constrain the types would be something like

type someMethoder(V) interface {
  someMethod(V)
}

func Example(type K: someMethoder(V), V)(k K, v V) V

This is kind of awkward. The contract syntax allows you to still do this if you need to, though, because the contract's 'arguments' are auto-filled by the compiler if the contract has the same number of them as the function does type parameters. You can specify them manually if you want to, though, meaning that you _could_ do func Example(type K, V someContract(K, V))(k K, v V) V if you really wanted to, though it's not particularly useful in this situation.

One way of making it clearer that contracts are about entire functions, not individual arguments, would be to simply associate them based on name. For example,

contract Example(k K, v V) {
  k.someMethod(v)
}

func Example(type K, V)(k K, v V) V

would be the same as the above. The downside, however, is that contracts would not be re-usable and you lose that ability to specify the contract's arguments manually.

Edit: To show further why they want to solve the curiously repeating pattern, consider the shortest path problem that they kept referring to. With parameterized interfaces, the definition winds up looking like

type E(Node) interface {
  Nodes() []Node
}

type N(Edge) interface {
  Edges() (from, to Edge)
}

type Graph(type Node: N(Edge), Edge: E(Node)) struct { ... }
func New(type Node: N(Edge), Edge: E(Node))(nodes []Node) *Graph(Node, Edge) { ... }
func (*Graph(Node, Edge)) ShortestPath(from, to Node) []Edge { ... }

Personally, I rather like the way that contracts are specified for functions. I'm not _too_ keen on just having 'normal' function bodies as the actual contract specification, but I think a lot of the potential problems could be solved by introducing some kind of gofmt-like simplifier that auto-simplifies contracts for you, removing extraneous parts. Then you _could_ just copy a function body into it, simplify it, and modify it from there. I'm not sure how possible this will be to implement, though, unfortunately.

Some things will still be a bit awkward to specify, though, and the apparent overlap between contracts and interfaces still seems a bit odd.

I find the "CRTP" version much clearer, more explicit, and easier to work with (no need to create contracts that only exist to define the relationship between pre-existing contracts over a set of variables). Admittedly, that could just be the many years of familiarity with the idea.

Clarification: by the draft design, contracts can be applied to both functions and types.

"""
It is arguably simpler if you only need to use interfaces or contracts, but more complex if you need to use both together.
"""

As long as they allow you, inside a contract, to reference one or more interfaces (instead of only operators and functions, thus allowing DRY), this issue (and my claim) will be solved. There is a chance that I misread or did not completely read the contracts stuff, and also a chance that the said feature is supported and I did not notice it. If it isn't, it should be.

Can't you do the following?

contract Example(t T, v V) {
  t.(interface{
    SomeMethod() V
  })
}

You can't use an interface that's declared elsewhere because of the restriction that you can't reference identifiers from the same package as the contract is declared in, but you can do this. Or they could just remove that restriction; it seems a bit arbitrary.

@DeedleFake No, because any interface type can be type-asserted (and then just potentially panic at runtime, but contracts aren't executed). But you can use an assignment instead.

t.(someInterface) would also mean that it must be an interface

Good point. Woops.

The more examples of this I see, the more error-prone 'figure it out from a function body' seems to be.

There are lots of cases where it's confusing for a person, same syntax for different operations, shades of implications from different constructs, etc., but a tool would be able to take that and reduce it to a normal form. But then the output of such a tool becomes a de facto sub-language for expressing type constraints that we have to learn by rote, making it all the more surprising when someone deviates and writes a contract by hand.

I'll also note that

contract I(t T) {
  var i interface { Foo() }
  i = t
  t.(interface{})
}

expresses that T must be an interface with at least Foo() but it could also have any other number of additional methods.

T must be an interface with at least Foo() but it could also have any other number of additional methods

Is that a problem, though? Don't you usually want to constrain things so that they allow specific functionality but you don't care about other functionality? Otherwise a contract like

contract Example(t T) {
  t + t
}

wouldn't allow subtraction, for example. But from the point of view whatever I'm implementing, I don't care if a type allows subtraction or not. If I restricted it from being able to perform subtraction, then people would just arbitrarily not be able, to for example, pass anything that does to a Sum() function or something. That seems arbitrarily restrictive.

No, it's not a problem at all. It was just an unintuitive (to me) property, but perhaps that was due to insufficient coffee.

It's fair to say that the current contract declaration needs better compiler messages to work with. And the rules for a valid contract should be strict.

Hi
I made a proposal for constraints for generics that I posted in this thread about half a year ago.
Now I've made a version 2. The main changes are:

  • The syntax has been adapted to the one proposed by the Go team.
  • Constraining by fields has been omitted, which allows for quite a bit of simplification.
  • Paragraphs deemed not strictly necessary have been taken out.

I thought of an interesting (but maybe more detailed than appropriate at this stage in the design?) question regarding type identity recently:

func Foo() interface{} {
    type S struct {}
    return S{}
}

func Bar(type T)() interface{} {
    type S struct {}
    return S{}
}

func Baz(type T)() interface{} {
    type S struct{t T}
    return S{}
}

func main() {
    fmt.Println(Foo() == Foo()) // 1
    fmt.Println(Bar(int)() == Bar(string)()) // 2
    fmt.Println(Baz(int)() == Baz(string)()) // 3
}
  1. Prints true, because the types of the returned values are originating in the same type declaration.
  2. Prints…?
  3. Prints false, I assume.

i.e. the question is when two types declared in a generic function are identical and when they aren't. I don't think this is described in the ~~spec~~ design? At least I can't find it right now :)

@merovius I assume the middle case was supposed to be:

fmt.Println(Bar(int)() == Bar(int)()) // 2

This is an interesting case, and it depends on whether types are "generative" or "applicative". There are actually to variants of ML that take different approaches. Applicative types view the generic as a type function, and hence f(int) == f(int). Generative types view the generic as a type template that creates a new unique 'instance' type each time it is used so t<int> != t<int>. This must be approached at a whole type-system level as it has subtle implications for unification, inference and soundness. For further details and examples of then kind of problems I recommend reading Andreas Rossberg's "F-ing modules" paper: https://people.mpi-sws.org/~rossberg/f-ing/ although the paper is talking about ML "functors" this is because ML separates its type system into two levels, and functors are MLs equivalent of a generic and are only available at the module level.

@keean You assume wrong.

@merovius Yes, my mistake, I see the question is because the type parameter is not used (a phantom type).

With generative types, each instantiation would result in a different unique type for 'S', so even though the parameter is not used, they would not be equal.

With applicative types, the 'S' from each instantiation would be the same type, and so they would be equal.

It would be weird if the result in case 2 changed based on compiler optimizations. Sounds like UB.

It's 2018 people, I can't believe I actually have to type this like in 1982:

func min(x, y int) int {
	if x < y {
		return x
	}
	return y
}

func max(x, y int) int {
	if x > y {
		return x
	}
	return y
}

I mean, seriously, dudes MIN(INT,INT) INT, how is that NOT in the language?
I'm angry.

@dataf3l If you want those to work as expected with pre-orders then:

func min(x, y int) int {
   if x <= y {
      return x
   }
   return y
}

This is so the pair (min(x, y), max(x, y)) is always distinct and is either (x, y) or (y, x), and that is therefore a stable sort of two elements.

So another reason these should be in the language or a library is that people mostly get them wrong :-)

I thought about the < vs <=, for integers, I'm not sure I quite see the difference.
Maybe I'm just dumb...

I'm not sure I quite see the difference.

There's none in this case.

@cznic true in this case as they are integers, however as the thread was about generics I assumed the library comment was about having generic definitions of min and max so users don't have to declare them themselves. Re-reading the OP I can see they just want simple min and max for integers, so my bad, but they were off topic asking for simple integer functions in a thread about generics :-)

Generics are a crucial addition to this language, especially given the lack of built-in data structures. So far my experience with Go is that it is a great and easy language to learn. It has a huge trade-off, though, which is that you have to code the same things over and over and over.

Maybe I'm missing something, but this seems like a fairly large flaw in the language. Bottom line: there are few built-in data structures, and every time we create a data structure we have to copy and paste the code to support each T.

I'm not sure how to contribute other than posting my observation here as a 'user'. I'm not an experienced enough programmer to contribute to design or implementation, so I can only say that generics would greatly enhance productivity in the language (so long as build time and tooling remained awesome as they are now).

@webern Thanks. See https://go.googlesource.com/proposal/+/master/design/go2draft.md .

@ianlancetaylor, after posting, a fairly radical/unique idea popped into my head that I think would be 'lightweight' as far as the language and tooling is concerned. I haven't read your link fully just yet; I will. But if I wanted to submit an idea/proposal for generic programming in MD format, how would I do that?

Thanks.

@webern Write it up (most people have been using gists for the markdown format) and update the wiki here https://github.com/golang/go/wiki/Go2GenericsFeedback

Lots of others have already done so.

I have merged (against latest tip) and uploaded the CL of our pre-Gophercon prototype implementation of a parser (and printer) implementing the contracts draft design. If you're interested in trying out the syntax, please have a look: https://golang.org/cl/149638 .

To play with it:

1) Cherry-pick the CL in a repo that's recent:
git fetch https://go.googlesource.com/go refs/changes/38/149638/2 && git cherry-pick FETCH_HEAD

2) Rebuild and install the compiler:
go install cmd/compile

3) Use the compiler:
go tool compile foo.go

See the CL description for details. Enjoy!

contract Addable(t T) {
    t + t
}

func Sum(type T Addable)(x []T) T {
    var total T
    for _, v := range x {
        total += v
    }
    return total
}

This generics design, func Sum(type T Addable)(x []T) T, is VERY VERY VERY UGLY!!!

To be compared to func Sum(type T Addable)(x []T) T, I think func Sum<T: Addable> (x []T) T is more clear, and has no burden for the programmer coming from other programming languages.

You mean the syntax is more verbose?
There must be some reason why it's not func Sum(T Addable)(x []T) T.

without the type keyword there will be no way to differentiate between a generic function and one that returns another function, which itself is being called.

@urandom That's only a problem at instantiation time and there we don't require the type keyword, but just live with the ambiguity AIUI.

The problem is that, without the type keyword, func Foo(x T) (y T) could be parsed either as declaring a generic function taking a T and returning nothing, or as a non-generic function taking a T and returning a T.

func Sum (x []T) T

I agree, I prefer something along these lines. Given the expansion of linguistic scope represented by generics, I think it would be reasonable to introduce this syntax to "call attention" to a generic function.

I also think this would make code a little more easy (read: less Lisp-y) to parse for human readers, as well as reduce the chances of hitting some obscure parsing ambiguity further down the line (see C++'s "Most Vexing Parse", to help motivate an abundance of caution).

It's 2018 people, I can't believe I actually have to type this like in 1982:

func min(x, y int) int {
	if x < y {
		return x
	}
	return y
}

func max(x, y int) int {
	if x > y {
		return x
	}
	return y
}

I mean, seriously, dudes MIN(INT,INT) INT, how is that NOT in the language?
I'm angry.

There is a reason for it.
If you don't understand, you can learn or go away.
Your choice.

I sincerely hope they make it better.
But your "you can learn or go away" attitude is not providing a good example for others to follow; it reads as unnecessarily abrasive. I don't think that's what this community is about, @petar-dambovaliev. However, it is not for me to tell you what to do, or how to behave online; that is not my place.

I know there are lots of strong feelings about generics, but please bear in mind our Gopher values. Please keep the conversation respectful and welcoming on all sides.

@bcmills thank you, you make the community a better place.

@katzdm agreed, the language has so many parentheses already, this new stuff looks really ambiguous to me

Defining generics seems to inevitably introduce things like a type's type, which makes Go rather complicated.

Hope this is not too off-topic, but a function-overloading feature seems like plenty for me.

BTW, I know there was some discussion on overloading.

@xgfone Agreed that the language has so many parentheses already, making the code unclear.
func Sum<T: Addable> (x []T) T or func Sum<type T Addable> (x []T) T is better and clearer.

For consistency (with built-in generics), func Sum[T: Addable] (x []T) T is better than func Sum<T: Addable> (x []T) T.

I may be influenced by previous work in other languages, but Sum<T: Addable> (x []T) T does seem more distinct and readable at first glance.

I also agree with @katzdm in that it's better at bringing attention to something new in the language. It's also quite familiar to non-Go developers jumping into Go.

FWIW, there is an approximately 0% chance Go will use angle brackets for generics. C++’s grammar is unparseable because you can’t tell a < b > c (a legal but meaningless series of comparisons) from a generic invocation without understanding the types of a, b, and c. Other languages avoid using angle brackets for generics for this reason.

func a < b Addable> (...
I guess you can if you realize that after func you can only have either the function name, a ( or a <.

@carlmjohnson I hope you are right

f := sum<int>(10)

But here you know that sum is a contract..

C++’s grammar is unparseable because you can’t tell a < b > c (a legal but meaningless series of comparisons) from a generic invocation without understanding the types of a, b, and c.

I think it's worth pointing out that while Go, unlike C++, disallows this in the type system, since the < and > operators return bools in Go and < and > can't be used with bools, it _is_ syntactically legal, so this is still an issue.

Another problem with angle brackets is List<List<int>>, in which the >> is tokenized as a right shift operator.

What were the issues with using []? It seems to me that most of the above are solved by using them:

  • Syntactically, f := sum[int](10), to use the example above, is unambiguous because it's got the same syntax as an array or map access, and then the type system can figure it out later, the same as it already has to do for the difference between array and map accesses, for example. This is different from the case of <> because a single < is legal, leading to ambiguity, but a single [ is not.
  • func Example[T](v T) T is also unambiguous.
  • ]] isn't its own token, so that problem is also avoided.

The design draft mentions an ambiguity in type declarations, such as in type A [T] int, but I think this could be relatively easily solved in a couple of different ways. For example, the generic definition could be moved to the keyword itself, rather than the type name, i.e.:

  • func[T] Example(v T) T
  • type[T] A int

The complication here could come from the usage of type declaration blocks, like

type (
  A int
)

but I think this is rare enough that it's fine to basically say that if you need generics then you can't use one of those blocks.

I think it would be very unfortunate to write

type[T] A []T
var s A[int]

because the square brackets move from one side of A to the other. Of course it could be done, but we should aim for better.

That said, the use of the type keyword in the current syntax does mean that we could replace parentheses with square brackets.

This doesn't seem that different from array type vs. expression syntax being [N]T vs. arr[i], in terms of how something is declared not matching how it's used. Yes, in var arr [N]T, the square brackets end up on the same side of arr as when using arr, but we normally think of the syntax in terms of type vs expression syntax being opposite.

I extended and improved some of my old immature ideas to try to unify custom and builtin generics. The new solution is still immature, but I hope it can provide some inspiration for others.

I'm not sure if discussing ( vs < vs [ and the use of type is bikeshedding or there really is a problem with the syntax

@ianlancetaylor ... wondered whether the feedback warranted any tweaks to the proposed design? My own sense of the feedback was that many felt that interfaces and contracts could be combined, at least initially. Seemed to be a shift after a while that the two concepts should be kept separate. But I could be reading the trends wrong. Would love to see an experimental option in a release this year!

Yes, we are considering changes to the draft design, including looking at the many counter-proposals that people have made. Nothing is finalized.

Juts to add some practical experience report:
I implemented generics as a language extension in my Go interpreter https://github.com/cosmos72/gomacro. Interestingly, both the syntaxes

type[T] Pair struct { First T; Second T }
type Pair[T] struct { First T; Second T }

turned out to introduce a lot of ambiguities in the parser: the second could be parsed as a declaration that Pair is an array of T structs, where T is some constant integer. When Pair is used there are ambiguities too: Pair[int] could also be parsed as an expression instead of a type: it could be indexing an array/slice/map named Pair with the index expression int (note: int and other basic types are NOT reserved keywords in Go), so I had to resort to a new syntax - admittedly ugly, but does the job:

template[T] type Pair struct { First T; Second T }
type pairOfInt = Pair#[int]
var p Pair#[int]

and similarly for functions:

template[T] func Sum(args ...T) T { /*...*/ }
Sum#[int] (1,2,3)

So, while in theory I agree that syntax is a superficial matter, I must point out that:
1) on one side, syntax is what Go programmers will be exposed to - so it must be expressive, simple and, ideally, palatable
2) on the other side, a bad choice of syntax will complicate the parser, typechecker and compiler, which must resolve the introduced ambiguities

Pair[int] could also be parsed as an expression instead of a type: it could be indexing an array/slice/map named Pair with the index expression int

This isn't a parsing ambiguity, just a semantic one (until after name resolution); the syntactic structure is the same either way. Notice that Sum#[int] could also either be a type or an expression depending on what Sum is. The same is true of (*T) in existing code. As long as name resolution doesn't affect the structure of what's being parsed, you're fine.

Compare this to the problems with <>:

f ( a < b , c < d >> (e) )

You can't even tokenize this, since >> could be one or two tokens. Then, you can't tell whether there's one or two arguments to f... the structure of the expression changes significantly depending on what is denoted by a.

Anyways, I'm interested to see what the current thinking is in the team about generics, in particular, whether "constraints-are-just-code" has been iterated on or abandoned. I can understand wanting to avoid defining a distinct constraint language, but it turns out that writing code that sufficiently constrains the types involved forces an unnatural style, and you also have to put bounds on what the compiler can actually infer about types based on the code because otherwise these inferences can become arbitrarily complex, or can rely on facts about the language that might change in the future.

@cosmos72

Maybe I'm wrong, but besides what was said by @stevenblenkinsop, is it at all possible that a term:

a b

could also imply that b is not a type, if b is known to be alphanumeric (no operator/no separator) with an optional [identifier] appended, and a is not a special keyword/special alphanumeric (e.g. not import/package/type/func)?

I don't know the grammar of Go too well.

In a sense, types like int and Sum[int] would be treated as expressions anyway:

type (
    nodeList = []*Node  // nodeList and []*Node are identical types
    Polar    = polar    // Polar and polar denote identical types
)

If Go allowed infix functions, then such a term would indeed be ambiguous, since b could be either an infix function or a type.

I noticed today that this proposal's problem overview claims of Swift:

Declaring that T satisfies the Equatable protocol makes the use of == in the function body valid. Equatable appears to be a built-in in Swift, not possible to define otherwise.

This appears to be more of an aside than something that is deeply affecting the decisions made on this topic, but on the off chance it gives people much smarter than I am some inspiration, I wanted to point out that there isn't actually anything special about Equatable other than it being pre-defined in the language (mainly so that lots of other built-in types can "conform to" it). It is entirely possible to create similar protocols:

protocol Equatable2 {
    static func == (lhs: Self, rhs: Self) -> Bool
}

class uniq: Equatable2 {
    static func == (lhs: uniq, rhs: uniq) -> Bool {
        return false
    }
}

let narf = uniq(), poit = uniq()

func !=<T: Equatable2> (lhs: T, rhs: T) -> Bool {
    return !(lhs == rhs)
}

print(narf != poit)

@sighoya
I was talking about ambiguities of the syntax a[b] proposed for generics, since it is already used to index slices and maps - not about a b.

In the meantime I have been studying Haskell, and while I knew beforehand it extensively used type inference, the expressiveness and sophistication of its generics surprised me.

Unfortunately it has a rather peculiar naming scheme, so it's not always easy to understand at first glance. For example a class is actually a constraint for types (generic or not). The Eq class is the constraint for types whose values can be compared with '==' and '/=':

class Eq a where
  (==) :: a -> a -> Bool
  (/=) :: a -> a -> Bool

means that a type a satisfies the constraint Eq if a "specialization" exists (actually an "instance" in Haskell parlance) of the infix functions == and /= which accepts two arguments, each with type a and returns a Bool result.

I am currently trying to adapt some of the ideas found in Haskell generics to a proposal for Go generics, and see how well they fit. I am really glad to see that investigation is going on with other languages beyond C++ and Java:

the Swift example above, and my Haskell example, show that constraints on generic types are already used in practice by several programming languages, and that a non-trivial amount of experience on various approaches to generics and constraints exists and is available among programmers of these (and other) languages.

In my opinion, it's certainly worth studying such experience before finalizing a proposal for Go generics.

Stray thought: if the form of constraint you want the generic type to satisfy happens to be more-or-less congruent to an interface definition, you might use the existing type-assertion syntax we are already accustomed to:

type Comparer interface {
  Compare(v interface{}) (*int, error)
}
type PriorityQueue<T.(Comparer)> struct {
  things []T
}

Apologies if this has already been discussed exhaustively elsewhere; I haven't seen it, but I'm still getting caught up on the literature. I've been ignoring it for a bit because, well, I don't want generics in any version of Go. But the idea seems to be gaining momentum and a sense of inevitability in the community at large.

@jesse-amano It is interesting that you don't want generics in any version of Go. I find this difficult to understand because as a programmer I really don't like repeating myself. Whenever I program in 'C' I find myself having to implement the same basic things like a List or a Tree on some new datatype, and inevitably my implementations are full of bugs. With generics we can have only one version of any algorithm, and the whole community can contribute to making that one version the best. What is your solution to not repeating yourself?

Regarding the other point, Go seems to be introducing new syntax for generic constraints because interfaces do not allow overloading operators (like '==' and '+'). There are two ways forward from this, define a new mechanism for generic constraints, which is the way Go seems to be going, or allowing interfaces to overload operators which is the way I prefer.

I prefer the second option because it keeps the language syntax smaller and simpler, and allows new numeric types to be declared that can use the usual operators, for example complex numbers that you can add with '+'. The argument against this seems to be that people might abuse operator overloading to make '+' do weird things, but this seems a non-argument to me because I can already abuse any function name, for example I can write a function called 'print' that erases all the data on my hard drive and terminates the program. I would like the ability to restrict overloads of both operators and functions to conform to certain axiomatic properties like commutativity or associativity, but if it doesn't apply to both operators and functions I don't see much point. An operator is just an infix function, and a function is just a prefix operator after all.

Another point to mention is that generic constraints that reference multiple type parameters are very useful, if single parameter generic constraints are predicates on types, multi-parameter constrains are relations on types. Go interfaces cannot have more than one type parameter, so again either new syntax needs to be introduced, or interfaces need to be redesigned.

So in a way I agree with you, Go was not designed as a generic language, and any attempt to bolt generics on is going to be sub-optimal. Maybe it is better to keep Go without generics, and design a new language around generics from the ground up to keep the language small with a simple syntax.

@keean I don't have as strong an aversion to repeating myself a few times when I need to, and Go's approach to error handling, method receivers, etc. generally seems to do a good job of keeping most bugs at bay.

In a handful of cases over the past four years, I have found myself in situations where a complex but generalizable algorithm needed to be applied to more than two complex but self-consistent data structures, and in all cases -- and I say this with all seriousness -- I found code generation via go:generate to be more than sufficient.

As I read through experience reports, in many cases I think go:generate or a similar tool could've solved the problem, and in some other cases I feel like maybe Go1 just wasn't the right language, and something else might have been used instead (perhaps with a plugin wrapper if some Go code needed to make use of it). But I'm aware it's easy enough for me to speculate what I _might have_ done, which _might have_ worked; I've thus far had zero practical experiences that made me wish Go1 had more ways of expressing generic types, but it could be I have an odd way of thinking about things, or it could be I've just been extremely lucky to only work on projects that didn't really need generics.

I'm hoping that if Go2 ends up supporting a generic syntax, it would have a fairly straightforward mapping to the logic that will be generated, with no weird edge cases possibly arising from boxing/unboxing, "reification", inheritance chains, etc. that other languages have to worry about.

@jesse-amano In my experience though, it's not just a few times; every program is a composition of well-known algorithms. I cannot remember the last time I wrote an original algorithm, except maybe a complex optimisation problem that needed domain knowledge.

When writing a program the first thing I do is try and break down the problem into well-known chunks I can compose: an argument parser, some file streaming, constraint-based layout of UI. It's not just complex algorithms that people make mistakes in; hardly anyone can write a correct implementation of "min" and "max" the first time (See: http://componentsprogramming.com/writing-min-function-part5/ ).

The problem with go:generate is that it is basically just a macro processor: it has no type safety, and you somehow have to type check and error check the generated code, which you cannot do until you have run the generation. This kind of meta-programming is very hard to debug. I don't want to write a program to write the program, I just want to write the program :-)

So the difference with generics is that I can write a simple _direct_ program that can be error checked and type checked by my understanding of the meaning, without having to generate the code, then debug that, and work the bugs back to the generator.

A really simple example is "swap", I want to just swap two values, I don't care what they are:

swap<A>(x: *A, y: *A) {
   let tmp = *x
   *x = *y
   *y = tmp
}

Now I think it is trivial to see if this function is correct, and it's trivial to see it is generic and can be applied to any type. Why would I ever want to type this function in again and again for every type of pointer to a value that I might want to use swap on? Of course then I can build bigger generic algorithms from this, like an in-place sort. I don't think the go:generate code even for a simple algorithm would be easy to verify as correct.

I could easily make a mistake like:

let tmp = *x
*y = *x
*x = tmp

typing this in by hand every time I wanted to swap the contents of two pointers.

I understand that the idiomatic way to do this kind of thing in Go is to use an empty interface, but this is not type-safe and is slow. However it seems to me that Go doesn't have the right features to elegantly support this kind of generic programming, and empty interfaces provide an escape-hatch to work around the problems. Rather than completely changing the style of Go, it seems better to develop a language suitable for this kind of generics from scratch. Interestingly 'Rust' gets a lot of the generic stuff right, but because it uses static memory management rather than garbage collection, it adds a whole lot of complexity that isn't really necessary for most programming. I think between Haskell, Go and Rust there are probably all the bits necessary to make a decent mainstream generic language, just all mixed up.

For information: I am currently writing a wishlist on Go generics,

with the intention of actually implementing it in my Go interpreter gomacro, which already has a different implementation of Go generics (modeled after C++ templates).

It's not yet complete, feedback is welcome :)

@keean

I read the blog post you linked about the min function, and the four posts leading up to it. I did not observe even an attempt to make the argument that "hardly anyone can write a correct implementation of 'min'...". The writer actually seems to acknowledge that their first implementation _is_ correct... as long as the domain is restricted to numbers. It is the introduction of objects and classes, and the requirement that they be compared along only one dimension, unless the values in that dimension are the same, except when— and so forth, which creates the additional complexity. The subtle hidden requirements involved in needing to carefully define the comparator and sorting functions on a complex object are exactly why I _don't_ like generics as a concept (at least in Go; Java with Spring seems like it's already a good enough environment for composing together a bunch of mature libraries into an application).

I personally do not find a need for type-safety in macro generators; if they are generating legible code (gofmt helps set the bar for this fairly low), then compile-time error-checking should be sufficient. It shouldn't matter to the user of the generator (or code invoking it) for production, anyway; in the admittedly small set of times I've been called upon to write a generic algorithm as a macro, a handful of unit tests (usually float, string, and pointer-to-struct — if there are any hard-coded types that shouldn't be hard-coded, one of these three will be incompatible with it; if any of these three can't be used in the generic algorithm, then it isn't a generic algorithm) was sufficient to ensure the macro worked appropriately.

swap is a bad example. Sorry, but it is. It's already a one-liner in Go, no need for a generic function to wrap it and no room for a programmer to make a non-obvious error.

*y, *x = *x, *y

There is also already an in-place sort in the standard library. It uses interfaces. To make a version specific to your type, define:

type myslice []mytype
func (s myslice) Len() int { return len(s) }
func (s myslice) Less(i, j int) bool { return s[i].whatWouldAlsoBeNeededInAGenericImpl(s[j]) }
func (s myslice) Swap(i, j int) { s[i], s[j] = s[j], s[i] }

It's admittedly several more bytes to type than SortableList<mytype>(myThings).Sort(), but it's a _lot_ less dense to read, isn't as likely to "stutter" throughout the rest of an application, and if bugs do arise, I'm unlikely to need something as heavy as a stack trace to find the cause. The current approach has several advantages, and I worry that we'd lose them if we leaned into generics too much.

@jesse-amano
The problems with 'min/max' apply even if you don't understand the need for a stable sort. For example one developer implements min/max for some datatype in one module, and then it gets used in a sort or some other algorithm by another team member without proper checking of assumptions, and leads to strange bugs because it's not stable.

I think programming is mostly composing standard algorithms; very rarely do programmers create new innovative algorithms, so min/max and sort are just examples. Picking holes in the specific examples I chose just shows that I did not choose very good examples; it does not address the actual point. I chose "swap" because it is very simple, and quick for me to type. I could have chosen many others, like sort, rotate, partition, which are very general algorithms. It doesn't take long when you are writing a program that uses a collection like a red/black tree to get fed up with having to redo the tree for every different datatype you want a collection of, because you want type-safety, and an empty interface is little better than "void*" in 'C'. Then you would have to do the same again for every algorithm that uses each of these trees, like pre-order, in-order, post-order iteration, searching, and that's before we get onto any sophisticated stuff like Tarjan's network algorithms (disjoint sets, heaps, minimum spanning trees, shortest paths, flows etc.)

I think code generators have their place, for example generating a validator from a json-schema or a parser from a grammar definition, but I don't think they make a suitable replacement for generics. For generic programming I want to be able to write any algorithm once, and have it clear, simple and direct.

In any case I agree with you about 'Go'. I don't think 'Go' was designed from the start to be a good generic language; adding generics now is probably not going to result in a good generic language, and is going to lose some of the directness and simplicity it already has. Personally, if you are having to reach for a code generator (beyond things like generating validators from json-schema or parsers from a grammar file) then you are probably using the wrong language anyway.

Edit: Regarding testing generics with "float" "string" "pointer-to-struct", I don't think there are many generic algorithms that work on that diverse a set of types, except maybe 'swap'. True 'generic' functions are really limited to shuffles and don't occur very often. Constrained generics are much more interesting, where the generic types are constrained by some interface. As you can see, with the in-place sort example from the standard-library, you can make some constrained generics work in 'Go' in limited cases. I like the way Go interfaces work, and you can do a lot with them. I like true constrained generics even more. I don't really like adding a second constraint mechanism like the current generics proposal does. A language where interfaces directly constrain types would be much more elegant.

It is interesting that as far as I can tell, the only reason the new constraints were introduced is because Go does not allow operators to be defined in interfaces. Earlier generics proposals did allow types to be constrained by interfaces, but were abandoned because they did not cope with operators like '+'.

@keean
Perhaps there's a better place for a protracted discussion. (Perhaps not; I've looked around and this seems to be _the_ place to discuss generics in Go2.)

I certainly understand the need for a stable sort! I suspect the authors of the original Go1 standard library understood it too, since sort.Stable has been in there since public release.

I think the great thing about the standard library's sort package is that it _doesn't_ only work on slices. It's certainly at its simplest when the receiver is a slice, but all you really need is a way to know how many values are in the container (the Len() int method), how to compare them (the Less(int, int) bool method), and how to swap them (the Swap(int, int) method, of course). You can implement sort.Interface using channels! It's slow, of course, because channels aren't designed for efficient indexing, but it can be proven correct given a generous execution-time budget.

I don't mean to nitpick, but the problem with a bad example is that... it's bad. Stuff like sort and min are just _not_ points in favor of a high-impact language feature like generics. I feel pretty strongly that poking holes in these examples _does_ address the actual point; _my_ point is there's no need for generics when a better solution already exists in the language.

@jesse-amano

better solution already exists in the language

Which one? I don't see anything that's better than type-safe constrained generics. Generators are not Go, plain and simple. Interfaces and reflection produce unsafe, slow and panic prone code. These solutions are good enough because there's nothing else. Generics would solve the issue with boilerplate, unsafe empty interface constructs and, the worst of all, eliminate many uses of reflection which is even more prone to runtime panics. Even the new errors package proposal suffers from the lack of generics and its API would greatly benefit from them. You can look at As as an example - non idiomatic, prone to panics, hard to use, requires vet check to use properly. All because Go lacks any type of generics.

sort, min and other generic algorithms are great examples because they show the main benefit of generics - composability. They allow building extensive library of generic transformation routines that can be chained together. And most importantly, it would be easy to use, safe, fast (at least it's possible with generics), no need for boilerplate, generators, interface{}, reflection and other obscure language features used solely because there's no other way.

@creker

Which one?

For sorting stuff, the package sort. Anything that implements sort.Interface can be sorted (with a stable or unstable algorithm of your choice; some in-place versions are provided via the sort package, but you are free to write your own with a similar or different API). Since the standard library sort.Sort and sort.Stable both operate on the value passed through the argument list, the value you get back is the same as the value you started with — and therefore, necessarily, the type you get back is the same as the type you started with. It's perfectly type-safe, and the compiler does all the work of inferring whether your type implements the needed interface and is capable of _at least_ as many compile-time optimizations as would be possible with a generics-style sort<T> function.

For swapping stuff, the one-liner x, y = y, x. Again, no type assertions, interface casts, or reflection necessary. It's just swapping two values. The compiler can easily make sure your operations are type-safe.

There isn't a single specific tool that I'd consider to be a better solution than generics in all cases, but for any given problem generics is supposed to solve, I believe there's a better solution. I might be wrong here; I'm still open to seeing an example of something that generics can do where all existing solutions would've been terrible. But if I can poke holes in it, then it isn't one of those examples.

I don't much like the xerrors package either, but xerrors.As doesn't strike me as being non-idiomatic; it's a very similar API to json.Unmarshal, after all. It might need better documentation and/or example code, but it's otherwise fine.

But no, sort and min are, on their own, pretty terrible examples. The former exists in Go already and is perfectly composable, all without need for generics. The latter is in its broadest sense one of the outputs of sort (which we already solved), and in cases where a more specialized or optimized solution might be needed, you would write the specialized solution anyway rather than lean on generics. Again, there are no generators, interface{}, reflection, or "obscure" language features used in the standard library's sort package. There are non-empty interfaces (which are well-defined in the API so that you get compile-time errors if you use them incorrectly, inferred so you don't need casts, and checked at compile-time so you don't need assertions). There might be some boilerplate _if_ the collection you are sorting is a slice, but if it happens to be a struct (such as one representing the root node of a binary-search tree?), you can make that satisfy the sort.Interface too, so it's actually _more_ flexible than a generic collection.

@jesse-amano

my point is there's no need for generics when a better solution already exists in the language

I think a "better solution" is really relative, based on how you see it. If we had a better language, we could have better solutions; that's why we want to make this language better. For example, if better generics existed, we could have a better sort in our stdlib. At least for me, the current way to implement the sort interface is not a good user experience: I still have to type a lot of similar code which I strongly feel we could abstract away.

@jesse-amano

I think the great thing about the standard library's sort package is that it doesn't only work on slices.

I agree, I like the standard sort.

The former exists in Go already and is perfectly composable, all without need for generics.

This is a false dichotomy. Interfaces in Go already are a form of generics. The mechanism is not the thing itself. Look beyond the syntax, and see the goal, which is the ability to express any algorithm in a generic way without limitations. The interface abstraction of 'sort' is a generic, it allows any datatype that can implement the required methods to be sorted. The notation is simply different. We could write:

f<T>(x: T) requires Sortable(T)

Which would mean that the type 'T' must implement the 'Sortable' interface. In 'Go' this might be written func f(x Sortable). So at least function application in Go can be handled generically, but there are operations which cannot like arithmetic, or dereferencing. Go does pretty well, as interfaces can be considered type-predicates, but Go has no answer for relations on types.

Its easy to see the limitations with Go, consider:

func merge(x, y Sortable)

where we are going to merge two sortable things, however Go does not let us enforce that these two things must be the same. Contrast this with:

merge<T>(x: T, y: T) requires Sortable(T)

Here we are clear that we are merging two sortable types that are the same. 'Go' throws away the underlying type information and just treats anything "sortable" as the same.

Lets try for a better example: lets say I want to write a red/black tree that can contain any datatype, as a library, so that other people can use it.

Interfaces in Go already are a form of generics.

If so, then this issue may be closed as already solved, because the original statement was:

This issue proposes that Go should support some form of generic programming.

Equivocation does all parties a disservice. Interfaces are indeed _a_ form of generic programming, and they indeed _don't_ necessarily, on their own, solve every last problem that other forms of generic programming can solve. So let us, for simplicity, allow any problem that can be solved with tools outside the scope of this proposal/issue to be considered "solved without generics". (I believe an overwhelming majority of solvable problems encountered in the real world, if not all, are in that set, but this is just to make sure we're all speaking the same language.)

Consider: func merge(x, y Sortable)

It's unclear to me why merging two sortable things (or things that implement sort.Interface) would be in any way different from merging two collections _in general_. For slices, that's append; for maps, that's for k, v := range m { n[k] = v }; and for more complex data structures, there are necessarily more complex merging strategies depending on the structure (whose contents might be required to implement some methods the structure needs). Assuming you're talking about a more complicated sorting algorithm that partitions and chooses sub-algorithms for the partitions before merging them back together, what you need is not for the partitions to be "sortable" but rather some kind of guarantee that your partitions are already _sorted_ before merging. That's a very different kind of problem, and not one that template syntax helps solve in any obvious way; naturally, you would want some pretty rigorous unit tests to guarantee the reliability of your merge-sort algorithm(s), but surely you wouldn't want to expose an _exported_ API that burdens the developer with this kind of stuff.

You do raise an interesting point about Go not having a good way to check whether two values are of the same type without reflection, type-switches, etc. I do feel like using interface{} is a perfectly acceptable solution in the case of general-purpose containers (e.g. a circular linked list) as the boilerplate involved in wrapping the API for type-safety is absolutely trivial:

type MyStack struct{ stack Stack }

func (s *MyStack) Push(v MyType) error { return s.stack.Push(v) }

func (s *MyStack) Pop() (MyType, error) {
	v, err := s.stack.Pop()
	if v != nil {
		if m, ok := v.(MyType); ok {
			return m, err
		}
		panic("this code should be unreachable from the exported API")
	}
	var zero MyType // zero value stands in for nil, so MyType need not be nillable
	return zero, err
}

I struggle to imagine why this boilerplate would be a problem, but if it is, a reasonable alternative might be a (text/) template. You could annotate types that you want to define stacks for with a //go:generate stackify MyType github.com/me/myproject/mytype comment, and let go generate produce the boilerplate for you. As long as cmd/stackify/stackify_test.go tries it out with at least one struct and at least one built-in type, and it compiles and passes, I don't see why this would be a problem -- and it's probably pretty close to what any compiler would've ended up doing "under the hood" if you'd defined a template. The only difference is the errors are more helpful because they're less dense.

(There may also be cases where we want a generic _something_ that cares about two things being of the same type more than it cares about their behavior, which do not fall into the category "containers of stuff". That would be very interesting, but adding a generic template construction syntax to the language still might not be the only possible solution available.)

Supposing that the boilerplate _isn't_ a problem, I'm interested in tackling the problem of creating a red/black tree that is as easy for callers to use as packages like sort or encoding/json. I will certainly fail because... well, I'm just not that smart. But I'm excited to find out how close I might get.

Edit: The beginnings of an example may be seen here, although it is far from complete (best I could throw together in a couple of hours). Of course, there also exist other attempts at similar data structures.

@jesse-amano

If so, then this issue may be closed as already > solved, because the original statement was:

It's not just that interfaces _are_ a form of generics, but that improving the interfaces approach can get us all the way in generics. For example multi-parameter interfaces (where you can have more than one 'receiver') would allow relations on types. Allowing interfaces to override operators like addition and dereferencing would remove the need for any other form of constraint on types. Interfaces _can_ be all the type constraints you need, if they are designed with an understanding of the endpoint of fully general generics.

Interfaces are semantically similar to Haskell's type-classes, and Rust's traits which _do_ solve these generic problems. Type-classes and traits solve all the same generic problems C++ templates do, but in a type-safe way (but maybe not all the meta-programming uses, which I think is a good thing).

I struggle to imagine why this boilerplate would be a problem, but if it is, a reasonable alternative might be a (text/) template.

I personally don't have a problem with that much boilerplate, but I understand the desire to have no boilerplate at all, as a programmer it's boring and repetitive, and it's exactly the kind of task we write programs to avoid. So again, personally, I think writing an implementation for a 'stack' interface/type-class is exactly the _right_ way to make your datatype 'stackable'.

There are two limitations with Go that frustrate further generic programming. The first is the 'type' equivalence problem: for example, defining maths functions so that the result and all arguments must be the same type. We could imagine:

mul<T>(x, y T) T requires Addable(T) {
    r := 0
    for i := 0; i < y; i++ {
        r = r + x
    }
    return r
}

To satisfy the constraints on '+' we need to ensure that x and y are numeric, but also both the same underlying type.

The other is the limitation of interfaces to only a single 'receiver' type. This limitation means that you don't just have to type the boilerplate above once (which I think is reasonable) but for each different type you want to put into MyStack. What we want is to declare the type contained as part of the interface:

type Stack<T> interface {...}

This would allow, amongst other things, an implementation to be declared that is parametric in T so that we can put any T in MyStack using the Stack interface, as long as all uses of Push and Pop on the same instance of MyStack operate on the same 'value' type.

With these two changes we should be able to create a generic red/black tree. It should be possible without them, but like the Stack, you will have to declare a new instance of the interface for each type you wish to put into the red/black tree.

From my point of view the two extensions above to interfaces are all that is needed for Go to fully support 'generics'.

@jesse-amano
Looking at the red/black tree example, what we really want generically is the definition of a 'Map'; the red/black tree is just one possible implementation. As such we might expect an interface like this:

type Map<Key, Value> interface {
   put(x Key, y Value) 
   get(x Key) Value
}

Then the red/black tree could be provided as an implementation. Ideally we want to write code that does not depend on the implementation, so you could provide a hash-table, or a red-black tree, or a BTree. We would then write our code:

f<K, V, T>(index T) T requires Map<K, V> {
   ...
}

Now whatever f is, it can work independently of the implementation of the Map, f may be a library function written by someone else, who does not need to know whether my application uses a red/black tree or a hash-map.

In go as it is now, we would need to define a specific map like this:

type MapIntString interface {
   put(x int, y string)
   get(x int) string
}

Which is not so bad, but it means the 'library' function f has to be written for every possible combination of key and value types if we are going to be able to use it in an application where we don't know the types of the keys and values when we write the library.

While I agree with @keean's last comment, the difficulty is writing a red/black tree in Go that implements a known interface, such as the one just suggested.

Without generics, it is well known that in order to implement type-agnostic containers one has to use interface{} and/or reflection - unfortunately, both approaches are slow and error-prone.

@keean

It's not just that interfaces are a form of generics, but that improving the interfaces approach can get us all the way in generics.

I don't view any of the proposals linked to this issue, to date, as an improvement. It seems fairly uncontroversial to say that they are all flawed in some way. I believe those flaws severely outweigh any benefit, and many of the _claimed_ benefits are in fact already supported by existing features. My belief is based on practical experience, not speculation, but it is still anecdotal.

I personally don't have a problem with that much boilerplate, but I understand the desire to have no boilerplate at all, as a programmer it's boring and repetitive, and it's exactly the kind of task we write programs to avoid.

I don't agree with this either. As a paid professional, my objective is to reduce time/effort costs _for myself and others_, while increasing my employer's gains, however those might be measured. A task being "boring" is only bad if it is also time-consuming; it cannot be difficult, or it would not be boring. If it's only a little time-consuming up-front, but eliminates future time-consuming activities and/or gets the product into release sooner, then it's still completely worth it.

Then the red/black tree could be provided as an implementation.

I think I have made decent progress these last couple of days on an implementation of a red/black tree (it's unfinished; it lacks even a readme), but I'm worried I have already failed to illustrate my point if it isn't abundantly clear that my goal is not to work toward an interface but rather to work toward an implementation. I'm writing a red/black tree, and of course I want it to be _useful_, but I don't care what _specific_ things other developers might want to use it for.

I know that the minimal interface required by a red/black tree library is one where a "weak" ordering exists on its elements, so I need something _like_ a function named Less(v interface{}) bool, but if the caller has a method that does something similar but isn't named Less(v interface{}) bool, it's up to them to write the boilerplate wrappers/shims to make it work.

When you access elements contained by the red/black tree you get interface{}, but if you're willing to trust my guarantee that the library provided _is_ a red/black tree, I don't understand why you wouldn't trust that the types of elements you put in will be exactly the types of elements you get out. If you _do_ trust both of those guarantees, then the library isn't error-prone at all. Simply write (or paste) a dozen or so lines of code to cover the type-assertions.

Now you have a perfectly safe library (again, assuming no more than the level of trust you'd have to be willing to give in order to download the library in the first place) that even has the exact function names you want. This is important. In a Java-style ecosystem where library authors are bending over backward to code against an _exact_ interface definition (they almost _have_ to, because the language enforces it by way of class MyClassImpl extends AbstractMyClass implements IMyClass syntax) and there's a bunch of extra bureaucracy, you have to go out of your way to make a facade for the third-party library to fit into your organization's coding standards (which is the same amount of boilerplate, if not more), or else allow this to be an "exception" to your organization's coding standards (and eventually your org has as many exceptions in its standards as in its codebases), or else give up on using a perfectly good library (assuming, for the sake of argument, that the library is actually good).

Ideally we want to write code that does not depend on the implementation, so you could provide a hash-table, or a red-black tree, or a BTree.

I agree with this ideal, but I think Go already satisfies it. With an interface like:

type MyStorage interface {
  Get(KeyType) (ValueType, error)
  Put(KeyType, ValueType) error
}

the only thing that is missing is the ability to parameterize what KeyType and ValueType are, and I'm not convinced this is especially important.

As a (hypothetical) maintainer of a red/black tree library, I don't care what your types are. I'll just use interface{} for all my core functions that handle "some data", and _maybe_ provide some exported example funcs that let you use them more easily with common types like string and int. But it's up to the caller to provide the extremely-thin layer around this API to make it safe for whatever custom types they might end up defining. But the only important thing about the API I'm providing is that it allows the caller to do all the things they might expect a red/black tree to be able to do.

As a (hypothetical) caller of a red/black tree library, I probably just want it for fast storage and lookup time. I don't care that it's a red/black tree. I care that I can Get things from it and Put things in it, and — importantly — I care what those things are. If the library doesn't offer functions named Get and Put, or can't interact perfectly with the types I have defined, that doesn't matter to me as long as it's easy for me to write the Get and Put methods myself, and make my own type satisfy the interface the library needs while I'm at it. If it's not easy, I usually find that it is the library author's fault, not the language's, but once again it's possible there are counterexamples I'm just not aware of.

By the way, the code could get a lot more tangled if it _weren't_ like this. As you say, there are many possible implementations of a key/value store. Passing around an abstract key/value storage "concept" hides the complexity of how the key/value storage is accomplished, and a developer on my team might choose the wrong one for their task (including a future version of myself whose knowledge of the key/value storage implementation has paged out of memory!). The application or its unit tests might, despite our best efforts in code-review, contain subtle implementation-dependent code that stops working reliably when some key/value stores depend on a connection to a DB and others don't. It's a pain when the error report comes with a big stack trace, and the only line in the stack trace referencing something in the _real_ codebase points at a line that uses an interface value, all because the implementation of that interface is generated code (which you can only see in the runtime) instead of an ordinary struct, with methods returning readable error values.

@jesse-amano
I agree with you, and I like the 'Go' way of doing things where the "user" code declares an interface that abstracts the way it works, and then you write the implementation of that interface for the library/dependency. This is backwards from the way most other languages think about interfaces, but once you get it, it is very powerful.

I would still like to see the following things in a generic language:

  • parametric types, like: RBTree<Int, String> as this would enforce type safety of user collections.
  • type variables, like: f<T>(x, y T) T, because this is necessary to define families of related functions like addition, subtraction etc where the function is polymorphic, but we require all the arguments to be of the same underlying type.
  • type constraints, like: f<T: Addable>(x, y T) T, which is applying interfaces to type variables, because once we introduce type-variables, we need a way to constrain those type variables instead of treating Addable as a type. If we regard Addable as a type and write f(x, y Addable) Addable, we have no way of telling if the original underlying types of x and y are the same as each other or the returned type.
  • multi-parameter interfaces, like: type<K, V> Map<K, V> interface {...}, that could be used like merge<K, V, T: Map<K, V>>(x, y T) T which allow us to declare interfaces that are parameterized not just by the container type, but in this case also the key and value types of the map.

I think each of these would increase the abstractive power of the language.

Any progress or schedule on this?

@leaxoy There is a talk scheduled on "Generics in Go" by @ianlancetaylor at GopherCon. I would expect to hear more on the current state of affairs in that talk.

@griesemer Thanks for that link.

@keean I'd love to also see the Where clause from Rust here, which may be an improvement to your type constraints proposal. It allows using the type system to constrain against behavior like "starting a transaction prior to query" to be type checked against without runtime reflection. Check out this video on it: https://www.youtube.com/watch?v=jSpio0x7024

@jadbox sorry if my explanation wasn't clear, but the 'where' clause is almost exactly what I was proposing. The things after 'where' in Rust are type constraints, but I think I used the keyword 'requires' in an earlier post instead. This stuff was all done in Haskell at least a decade ago, except Haskell uses the '=>' operator in type signatures to indicate type constraints, but it's the same underlying mechanism.

I left this out of my summary post above because I wanted to keep things simple, but I would like something like this:

merge<K, V, T>(x, y T) T requires T: Map<K, V>

But it doesn't really add anything to what you can do, apart from a syntax that can be more readable for long constraint sets. You can represent anything you can with the 'where' clause by putting the constraint after the type variable in the initial declaration like this:

merge<K, V, T: Map<K, V>>(x, y T) T

Providing you can reference the type variables before they are declared, you can put any constraints in there, and you would use a comma separated list to apply multiple constraints to the same type variable.

So as far as I am aware, the only advantage to a 'where'/'requires' clause is that all the type variables are already declared up front, which may make it easier for the parser and for kind-inference.

Is this still the right thread for feedback/discussion on the current/latest working Go 2 Generics proposal that was recently announced?

In short, I really like the direction the proposal is going in general and the contracts mechanism in particular. But I'm concerned with what appears to be a single-minded assumption throughout that compile-time generic parameters must (always) be type parameters. I've written up some feedback on this issue here:

Are Only Type Parameters Generic Enough for Go 2 Generics?

Certainly comments here are OK, but in general I don't think GitHub issues are a good format for discussion, as they don't provide for any sort of threading. I think the mailing lists are better.

I don't think it's clear yet how often people will want to parameterize functions on constant values. The most obvious case would be for array dimensions--but you can already do that by passing the desired array type as a type argument. Other than that case, what do we really gain by passing a const as a compile-time argument rather than a run-time argument?

Go offers many different and great ways to solve problems already, and we should never add anything new unless it fixes a really big problem or shortcoming, which this clearly isn't doing; even in such circumstances, the added complexity that follows is a very high price to pay.

Go is unique exactly because of the way it is. If it isn't broke, then please don't try to fix it!

People who are unhappy about the way Go was designed should go and use one of the multitude of other languages that already possess this added and annoying complexity.

Go is unique exactly because of the way it is. If it isn't broke, then please don't try to fix it!

It's broken, thus it should be fixed.

It's broken, thus it should be fixed.

It might not work the way you think it should—but then a language never can. It's certainly in no way broken. Considering the available information and debate then taking time to make an informed and sensible decision is always the best option. Many other languages have suffered, in my opinion, due to adding more and more features to solve more and more potential issues. Remember that "no" is temporary, "yes" is forever.

Having participated in past mega-issues, may I suggest that a channel is opened on Gopher Slack for those who want to discuss this, the issue is temporarily locked, and then times posted when the issue will be unfrozen for anyone who wants to consolidate the discussion from Slack? Github Issues no longer work as a forum once the dreaded "478 hidden items Load more…" link comes in.

may I suggest that a channel is opened on Gopher Slack for those who want to discuss this
The mailing lists are better because they provide a searchable archive. A summary can still be posted on this issue.

Having participated in past mega-issues, may I suggest that a channel is opened on Gopher Slack for those who want to discuss this

Please don't move the discussion entirely to closed platforms. If anywhere, golang-nuts is available to all (ish? I don't know if that works without a Google account either actually, but at least it's a standard method of communication that everyone has or can get) and it should be moved there. GitHub is bad enough, but I grudgingly accept that we're stuck with it for communication, not everyone can get a Slack account or can use their terrible clients.

not everyone can get a Slack account or can use their terrible clients

What does "can" mean here? Are there real restrictions on Slack that I don't know about or do people just not like using it? The latter is fine, I guess, but some people also boycott Github because they don't like Microsoft, so you lose some people but gain others.

not everyone can get a Slack account or can use their terrible clients

What does "can" mean here? Are there real restrictions on Slack that I don't know about or do people just not like using it? The latter is fine, I guess, but some people also boycott Github because they don't like Microsoft, so you lose some people but gain others.

Slack is an US company and as such will follow any foreign policies imposed by US.

Github has the same problem and was just in the news for kicking out Iranians with no warning. It's unfortunate, but unless we use Tor or IPFS or something, we'll have to respect US/European law for any practical discussion forum.

Github has the same problem and was just in the news for kicking out Iranians with no warning. It's unfortunate, but unless we use Tor or IPFS or something, we'll have to respect US/European law for any practical discussion forum.

Yes, we are stuck with GitHub and Google Groups. Let's not add more problematic services to the list. Also, chat just isn't a good archive; it's hard enough digging through these discussions when they're nicely threaded and on golang-nuts (where they come straight to your inbox). Slack means that if you're not in the same timezone as everybody else you have to wade through masses of chat archives, one-off non sequiturs, etc. Mailing lists mean you have it at least somewhat organized in threads, and people tend to take more time in their replies, so you don't get tons of random one-off comments left casually. Also, I just don't have a Slack account and their stupid clients won't work on any of the machines I use. Mutt on the other hand (or your email client of choice, yay standards) works everywhere.

Please keep this issue about generics. The fact that the GitHub issue tracker is not ideal for large-scale discussions like generics is worth discussing, but not on this issue. I've marked several comments above as "off topic".

Regarding the uniqueness of Go: Go has some nice features but it's not as unique as some seem to think. As two examples, CLU and Modula-3 have similar goals and similar payoff, and both support generics in some form (since ~1975 in the case of CLU!) They have no industrial support at present but FWIW, it is possible to get a compiler working for both of them.

couple of inquiries on syntax, is the type keyword in the type parameters required? and would it make more sense to adopt <> for the type parameters like other languages? This might make things more readable and familiar...

Although I'm not against the way it is in the proposal, just putting this up for consideration

instead of:

type Vector(type Element) []Element
var v Vector(int)
func (v *Vector(Element)) Push(x Element) { *v = append(*v, x) }
type VectorInt = Vector(int)

we could have

type Vector<Element> []Element
var v Vector<int>
func (v *Vector<Element>) Push(x Element) { *v = append(*v, x) }
type VectorInt = Vector<int>

The <> syntax is mentioned in the draft, @jnericks (Your username is perfect for this discussion...). The primary argument against it is that it massively increases the complexity of the parser. More generally, it makes Go a significantly harder language to parse for little benefit. Most people agree that it does improve readability, but there's disagreement on whether or not it's worth the trade off. Personally, I don't think it is.

The type keyword usage is necessary to disambiguate. Otherwise it's hard to tell the difference between func Example(T)(arg int) {} and func Example(arg int) (int) {}.

I read through the latest Go generics proposal. Everything matches my taste except the contract declaration grammar.

As we know, in Go we always declare a struct or interface like this:

type MyStruct struct {
        a int
        s string
}

type MyInterface interface {
    Method1() error
    Method2() string
}

but contract declaration in latest proposal is like this:

contract Ordered(T) {
    T int, int8
}

contract G(Node, Edge) {
    Node Edges() []Edge
    Edge Nodes() (from Node, to Node)
}

To my mind, the contract grammar is inconsistent in form with the traditional approach. How about a grammar like the one below:

type Ordered(T) contract {
    T int, int8
}

If there is only one type parameter, the declaration above can also be written like this:

type Ordered contract {
    int , int8
}


If there is more than one type parameter, we have to use named parameters:

type G(Node, Edge) contract {
    Node Edges() []Edge
    Edge Nodes() (from Node, to Node)
}

Now the form of a contract is consistent with tradition. We can declare a contract in a type block alongside struct and interface:

type (
        Sequence contract {
                string, []byte
        }

    Stringer(T) contract {
        T String() string
    }

    Stringer contract { // equivalent with the above Stringer(T), single type parameter could be omitted
        String() string
    }

        MyStruct struct {
                a int
                b string
        }

    G(Node, Edge) contract {
        Node Edges() []Edge
        Edge Nodes() (from Node, to Node)
    }
)

So "contract" becomes a keyword at the same level as struct and interface. The difference is that a contract is used to declare the meta type of a type.

@bigwhite We're still discussing this notation. The argument in favor of the notation suggested in the design draft is that a contract is not a type (e.g. one cannot declare a variable of a contract type), and so a contract is a new kind of entity in the same vein as a constant, function, variable, or type. The argument in favor of your suggestion is that a contract is simply a "type type" (or a meta type) and thus should follow consistent notation. Another argument in favor of your suggestion is that it would permit the use of "anonymous" contract literals w/o the need to declare them explicitly. In short, IMHO this is not yet settled. But it's also easy to change down the road.

FWIW, CL 187317 supports both notations at the moment (though the contract parameter must be written with the contract), e.g.:

type C contract(X) { ... }

and

contract C (X) { ... }

are accepted and represented the same way internally. The more consistent approach would be:

type C(type X) contract { ... }

A contract isn't a type. It isn't even a meta-type, since the only types it
concerns itself with are its parameters. There's no separate receiver type
which the contract could be considered the meta type of.

Go also has function declarations:

func Name(args) { body }

which the proposed contract syntax more directly mirrors.

Anyways, these kinds of syntax discussions seem low on the priority list at
this point. It's more important to look at the semantics of the draft and
how they impact code, what kind of code can be written based on those
semantics, and what code can't.

Edit: Regarding in-line contracts, Go has function literals. I don't see any reason there can't be contract literals. There'd just be a more limited number of places they could appear, since they aren't types or values.

@stevenblenkinsop I wouldn't go as far as stating matter-of-factly that a contract is not a type (or meta-type). I think there are very reasonable arguments for both viewpoints. For instance, a single parameter contract that only specifies methods serves essentially as an "upper bound" for a type parameter: Any valid type argument must implement those methods. Which is what we usually use interfaces for. It may make a lot of sense to permit interfaces in those cases instead of a contract, a) because these cases might be common; and b) because satisfying a contract in this case simply means satisfying the interface spelled out as a contract. That is, such a contract acts very much like a type against which another type is "compared against".

@griesemer considering contracts as types can lead to problems with the Russell paradox (as in the type of all types that are not 'members' of themselves). I think they are better considered 'constraints on types'. If we consider a type system a form of 'logic', we can prototype this in Prolog. Type variables become logic variables, types become atoms, and contracts/constraints can be solved by Constraint Logic Programming. It's all very neat and non-paradoxical. In terms of syntax we could consider a contract a function on types that returns a boolean.

@keean Any interface already serves as a "constraint on types", yet they are types. Type theory people very much look at constraints of types as types, in a very formal way. As I have mentioned above there are reasonable arguments that can be made for either point of view. There are no "logic paradoxes" here - in fact the current work-in-progress prototype models a contract as a type internally as it simplifies matters at the moment.

@griesemer interfaces in Go are 'subtypes' not constraints on types. However I do find the need for both contracts and interfaces a disadvantage to the design of Go, however it may be too late to change interfaces into type constraints rather than subtypes. I have argued above that Go interfaces do not necessarily have to be subtypes, but I do not see a lot of support for that idea. This would allow interfaces and contracts to be the same thing - if interfaces could be declared for operators too.

There are paradoxes here, so tread carefully; Girard's Paradox is the most common 'encoding' of Russell's Paradox into type theory. Type theory introduces the concept of universes to prevent these paradoxes, and you are only allowed to reference types in universe 'U' from universe 'U+1'. Internally these type theories get implemented as higher order logics (for example Elf uses lambda-prolog). This in turn reduces to constraint solving for the decidable subset of higher order logic.

So whilst you can think of them as types, you need to add in a set of restrictions on use (syntactic or otherwise) that effectively get you back to constraints on types. I personally find it easier to work directly with the constraints, and avoid the two further layers of abstraction, higher order logic and dependent types. These abstractions add nothing to the expressive power of the type system, and require further rules or restrictions to prevent paradoxes.

Regarding the current prototype treating constraints as types, the danger comes if you can use this "constraint-type" as a normal type, and then construct another 'constraint-type' on that type. You will need checks to prevent self-reference (this is normally trivial) and mutual reference loops. This sort of prototype should really be written in Prolog, as it allows you to focus on the implementation rules. I believe the Rust devs finally realised this a while back (see Chalk).

@griesemer Interesting, re modelling contracts as types. From my own mental model, I would think of constraints as metatypes, and contracts as a sort of type-level struct.

type A int
func (a A) Foo() int {
    return int(a)
}

type C contract(T, U) {
    T int
    U int, uint
    U Foo() int
}

var B (int, uint; Foo() int).type = A
var C1 C = C(A, B)

This suggests to me that the current type declaration-style syntax for contracts is the more correct one of the two. I think the syntax set out in the draft is still better, though, since it doesn't require addressing the "if it's a type what do its values look like" question.

@stevenblenkinsop you lost me, why do you pass T to C contract when it's not used, and what are the var lines trying to do?

@griesemer thanks for your reply. One of Go's design principles is "only provide one way to do something". It is better to keep only one contract declaration form. type C(type X) contract { ... } is better.

@Goodwine I've renamed the types to distinguish them from the contract parameters. Maybe that helps? (int, uint; Foo() int).type is intended to be the metatype of any type that has an underlying type of int or uint and which implements Foo() int. var B is intended to show using a type as a value, and assigning it to a variable whose type is a metatype (since a metatype is like a type whose values are types). var C1 is intended to show a variable whose type is a contract, and show an example of something that might be assigned to such a variable. Basically, trying to answer the question "if a contract is a type, what do its values look like?". The point is to show that that value doesn't seem to itself be a type.

I have a problem with contracts that take multiple types.

You can attach a contract to the type parameters or leave it off; both
type Graph (type Node, Edge) struct { ... }
and
type Graph (type Node, Edge G) struct { ... } are OK.

But what if I only want to add a contract on one of the two type parameters?

contract G(Node, Edge) {
    Node Edges() []Edge
    Edge Nodes() (from Node, to Node)
}

VS

contract G(Edge) {
    Edge Nodes() (from Node, to Node)
}

@themez That's in the draft. You can use the syntax (type T, U comparable(T)) to constrain only one type parameter, for example.
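For illustration, here is a non-compilable sketch in the draft's syntax, with made-up contract and method names, showing a contract applied to only one of two type parameters:

```
// Only Edge is constrained; Node remains unconstrained.
contract weighted(Edge) {
    Edge Weight() int
}

type Graph (type Node, Edge weighted(Edge)) struct {
    nodes []Node
    edges []Edge
}
```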

@stevenblenkinsop I see, thanks.

@themez This has come up a couple of times now. I think there's some confusion from the fact that the usage looks like a type for a variable definition. It really isn't though; a contract is more of a detail of the entire function rather than an argument definition. I think the assumption is that you'd essentially write a new contract, potentially composed of other contracts to help with repetition, for basically every generic function/type you create. Things like what @stevenblenkinsop mentioned are really there to catch the edge cases where that assumption doesn't make sense.

At least, that's the impression I've gotten, especially from the fact that they're called 'contracts'.

@keean I think we're interpreting the word "constraint" differently; I'm using it rather informally. By definition of interfaces, given an interface I, and a variable x of type I, only values with types that implement I can be assigned to x. Thus I can be viewed as a "constraint" on those types (of course there are still infinitely many types that satisfy that "constraint"). Similarly, one could use I as a constraint for a type parameter P of a generic function; only actual type arguments with method sets that implements I would be permitted. Thus I also limits the set of possible actual argument types.

In both cases the reason for this is to describe the available operations (methods) inside the function. If I is used as the type of a (value) parameter, we know that parameter provides those methods. If I is used as a "constraint" (in place of a contract), we know that all values of the so-constrained type parameter provide those methods. It's obviously pretty straightforward.

I'd like a concrete example as to why this specific idea of using interfaces for single-parameter contracts that only declare methods "breaks down" without some restrictions as you alluded to in your comment.

How will the contracts proposal be introduced? Using the go modules go1.14 parameter? A GO114CONTRACTS environment variable? Both? Something else..?

Sorry if this has been addressed before, feel free to redirect me there.

One thing I particularly like about the current generics draft design is that it puts clear water between contracts and interfaces. I feel this is important because the two concepts are easily confused even though there are three basic differences between them:

  1. Contracts describe the requirements of a _set_ of types, whereas interfaces describe the methods which a _single_ type must have to satisfy it.

  2. Contracts can deal with built-in operations, conversions etc. by listing types which support them; interfaces can only deal with methods which the built-in types themselves don't have.

  3. Whatever they may be in type theoretic terms, contracts are not types in the sense we normally think of them in Go i.e. you can't declare variables of contract types and give them some value. On the other hand interfaces are types, you can declare variables of those types and assign appropriate values to them.

Although I can see the sense of a contract, which requires a single type parameter to have certain methods, to be represented instead by an interface (it's something I've even advocated in my own past proposals), I feel now it would be an unfortunate move because it would again muddy the waters between contracts and interfaces.
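Difference (2) above is visible in the draft's type-list syntax: a contract can list built-in types to permit an operator, something no interface can express. A non-compilable sketch in the draft syntax:

```
contract ordered(T) {
    T int, int64, float64, string // listing these types permits < on T
}

func Min(type T ordered)(a, b T) T {
    if a < b {
        return a
    }
    return b
}
```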

It hadn't really occurred to me before that contracts could plausibly be declared in the way @bigwhite suggested using the existing 'type' pattern. However, again I'm not keen on the idea because I feel it would compromise (3) above. Also, if it's necessary (for parsing reasons) to repeat the type keyword when declaring a generic struct like so:

type List(type Element) struct {
    next *List(Element)
    val  Element
}

presumably it would also be necessary to repeat it if contracts were declared in a similar fashion which is a bit 'stuttery' compared to the draft design approach.

Another idea which I'm not keen on is 'contract literals' which would allow contracts to be written 'in place' rather than as separate constructs. This would make generic function and type definitions more difficult to read and, as some people think they already are, it's not going to help persuade those people that generics are a good thing.

Sorry to appear so resistant to proposed changes to the generics draft (which admittedly has some issues) but, as an enthusiastic advocate of simple generics for Go, I feel these points are worth making.

I would like to suggest not calling predicates over types "contracts". There are two reasons:

  • The term "contracts" is already used in computer science in a different way. For example, see: (https://scholar.google.com/scholar?hl=en&as_sdt=0%2C33&q=contracts+languages&btnG=)
  • There are already multiple names for this idea in the computer science literature. I know of at least ~three~ four: "typesets", "type classes", "concepts", and "constraints". Adding another will just confuse matters further.

@griesemer "constraints on types" are a purely compile time thing, because types get erased before runtime. The constraints cause generic code to be elaborated into non-generic code which can be executed. Subtypes exist at runtime, and are not constraints in the sense that a constraint on types would be type-equality or type-disequality at a minimum, with constraints like 'is a subtype of' optionally available depending on the type system.

For me the runtime nature of subtypes is the critical difference: if X <: Y we can pass X where Y is expected, but we only know the type as Y without unsafe runtime operations. In this sense it does not constrain the type Y; Y is always Y. Subtyping is also 'directional' and hence can be covariant or contravariant depending on whether it is applied to an input or output argument.

With a type constraint 'pred(X)', we start with a fully polymorphic X, and then we constrain the permitted values. So say only X that implements 'print'. This is non-directional and hence does not have covariance or contravariance. It is in fact invariant in that we know the ground type of X at compile time.

So I think it is dangerous to think of interfaces as constraints on types as it ignores important differences like covariance and contravariance.

Does that answer your question, or did I miss the point?

Edit: I should point out that I am referring to 'Go' interfaces specifically above. The points about subtyping apply to all languages that have subtypes, but Go is unusual in making interfaces a type and hence having a subtyping relationship. In other languages like Java an interface is explicitly not a type (a class is a type) so interfaces _are_ a constraint on types. So whilst it is right in general to consider interfaces as constraints on types, it is wrong specifically for 'Go'.

@Inuart It's much too early to tell how this would be added to the implementation. There is no proposal yet, just a design draft. It certainly will not be in 1.14.

@andrewcmyers I like the word "contract" because it describes a relationship between the writer of the generic function and its caller.

Words like "typesets" and "type classes" suggest that we are talking about a meta-type, which of course we are, but contracts also describe a relationship between multiple types. I know that type classes in, e.g., Haskell, can have multiple type parameters, but it seems to me that the name is a poor fit for the idea being described.

I have never understood why C++ calls this a "concept." What does that even mean?

"Constraint" or "constraints" would be fine with me. At the moment I think of a contract as containing multiple constraints. But we could change that thinking.

I'm not too concerned by the fact that there is an existing programming language construct called a "contract". I think of that idea as being relatively similar to the idea we want to express, in that it is a relationship between a function and its callers. I understand that the way in which that relationship is expressed is quite different, but I feel like there is an underlying similarity.

I have never understood why C++ calls this a "concept." What does that even mean?

A concept is an abstraction of instantiations sharing some commonality, e.g. signatures.

The term concept is by far a better fit for interfaces as the latter is also used to denote a shared boundary between two components.

@sighoya I was also going to mention that 'concepts' are conceptual because they include 'axioms' that are vital to prevent abuse of operators. For example addition '+' should be associative and commutative. These axioms cannot be represented in C++, hence they exist as abstract ideas, hence 'concepts'. So a concept is the syntactic 'contract' plus the semantic axioms.

@ianlancetaylor "Constraint" is what we called it in Genus (http://www.cs.cornell.edu/~yizhou/papers/genus-pldi2015.pdf), so I'm partial to that terminology. The term "contract" would be a completely reasonable choice, except that it is in very active use in the PL community to refer to the relationship between interfaces and implementations, which also has a contractual flavor.

@keean Without being an expert, I don't think the dichotomy you are painting is reflecting reality very well. For example, whether the compiler generates instantiated versions of generic functions is entirely a question of implementation, so it is perfectly reasonable to have a runtime representation of constraints, say in the form of a table of function pointers for each required operation. Exactly like the interface method tables, in fact. Likewise, interfaces in Go don't fit your subtype-definition, because you can safely project them back down (via type-assertions) and because you have neither co- nor contravariance for any type-constructors in Go.

Lastly: Whether or not the dichotomy you are painting is realistic and accurate, doesn't change that an interface is, at the end of the day, just a list of methods - and even in your dichotomy, there's no reason why that list can't be re-used as either a runtime-represented table or a compile-time-only constraint, depending on the context it's used in.
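That runtime representation can be sketched in ordinary Go today: a hand-written 'dictionary' of function pointers stands in for what a compiler could generate for a constraint, analogous to an interface's method table. All names below are illustrative:

```go
package main

import "fmt"

// ordDict is a hand-rolled "dictionary" for an ordering constraint:
// a table of function pointers, much like an interface method table.
type ordDict struct {
	less func(a, b interface{}) bool
}

// min is "generic" over any type for which the caller supplies a
// matching dictionary; a compiler doing dictionary-passing would
// check and supply this table at compile time.
func min(d ordDict, a, b interface{}) interface{} {
	if d.less(a, b) {
		return a
	}
	return b
}

func main() {
	intOrd := ordDict{less: func(a, b interface{}) bool { return a.(int) < b.(int) }}
	fmt.Println(min(intOrd, 3, 2)) // prints 2
}
```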

How about something like:

typeConstraint C(T) {
}

or

typeContract C(T) {
}

It is different from other type declarations to emphasize that this is not a runtime construct.

About the new contract design, I have some questions.

1.

When a generic type A embeds another generic type B,
or a generic function A calls another generic function B,
do we also need to specify the contracts of B on A?

If the answer is yes, then if a generic type embeds many other generic types,
or a generic function calls many other generic functions,
then we need to combine many contracts into one as the contract of the embedding type or caller function.
This may cause a const-poisoning-like problem.

2.

Besides the current type kind and method set constraints, do we need other constraints?
Such as convertible from one type to another, assignable from one type to another,
comparable between two types, is a sendable channel, is a receivable channel,
has a specified field set, ...

3.

If a generic function uses a line like the following one

v.Foo()

How can we write a contract which allows Foo to be either a method or a field of function type?

@merovius type-constraints must resolve at compile time, or the type system can be unsound. This is because you can have a type that depends on another that is not known until runtime. You then have two choices, you have to implement a full dependent type system (which allows type checking to occur at runtime as types become known) or you have to add existential types to the type system. Existentials encode the phase difference of statically known types, and types that are only known at runtime (types that depend on reading from IO for example).

Subtypes as stated above, are normally not known until runtime, although many languages have optimisations in the case the type is known statically.

If we assume one of the above changes is introduced into the language (dependent types or existential types) then we still need to separate the concepts of subtyping and type-constraints. For Go specifically, type-constructors are invariant, so we can ignore these differences, and we can consider that Go-interfaces _are_ constraints on types (statically).

We can therefore consider a Go-interface to be a single parameter contract where the parameter is the receiver of all functions/methods. So why does Go have both interfaces and contracts? It appears to me to be because Go does not want to permit interfaces for operators (like '+'), and because Go does not have dependent types nor existential types.

So there are two factors that create a real difference between type-constraints and subtyping. One is co/contra-variance, which we may be able to ignore in Go due to type-constructor invariance, and the other is the need for dependent-types or existential-types to make a type system that has type-constraints sound if there is runtime polymorphism of the type parameters to the type-constraints.

@keean Cool, so AIUI we're at least in agreement that interfaces in Go can be considered constraints :)

In regards to the rest: Above you claimed:

"constraints on types" are a purely compile time thing, because types get erased before runtime. The constraints cause generic code to be elaborated into non-generic code which can be executed.

That claim is more specific than your latest one, that constraints need to be resolved at compile-time. All I was trying to say, is that the compiler can do that resolution (and all the same type-checks), but then still generate generic code. It would still be sound, because the semantics of the type-system are the same. But the constraints would still have a run-time representation. That is kinda nit-picky - but it's why I feel defining these based on run-time vs. compile-time is not the best way to go about it. It is mixing implementation-concerns into a discussion about the abstract semantics of a type-system.

FWIW, I've argued before that I would prefer using interfaces for expressing constraints - and also came to the conclusion, that allowing the use of operators in generic code is the main road block to do that and thus the main reason to introduce a separate concept in the form of contracts.

@keean Thanks, but no, your reply did not answer my question. Note that in my comment I described a very simple example of using an interface in place of a corresponding contract/"constraint". I asked for a _simple_ _concrete_ example why this scenario wouldn't work "without some restrictions" as you alluded to in your earlier comment. You did not provide such an example.

Note that I did not mention subtypes, co- or contra-variance (which we don't allow in Go anyway, signatures must always match), etc. Instead, I've been using elementary and established Go terminology (interfaces, implements, type parameter, etc.) to explain what I mean by "constraint" because that is the common language everybody here understands and so everybody can follow along. (Also, contrary to your claim here, in Java, an interface looks like a type to me according to the Java spec: "An interface declaration specifies a new named reference type". If this doesn't say an interface is a type then the Java Spec people have some work to do.)

But it looks like you answered my question indirectly with your latest comment, as @Merovius already observed, when you say: "We can therefore consider a Go-interface to be a single parameter contract where the parameter is the receiver of all functions/methods.". This is exactly the point I was making in the beginning, so thanks for confirming what I said all along.

@dotaheor

When a generic type A embeds another generic type B, or a generic function A calls another generic function B, do we also need to specify the contracts of B on A?

If a generic type A embeds another generic type B, then the type parameters passed to B must satisfy any contract used by B. In order to do so, the contract used by A must imply the contract used by B. That is, all constraints on the type parameters passed to B must be expressed in the contract used by A. This also applies when a generic function calls another generic function.
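A non-compilable sketch in the draft's syntax of a caller's contract implying the callee's (names are made up):

```
contract stringer(T) {
    T String() string
}

func Print(type T stringer)(s []T) { /* ... */ }

// PrintTwice's contract must imply stringer, because it passes its
// own type parameter T on to Print.
func PrintTwice(type T stringer)(s []T) {
    Print(T)(s)
    Print(T)(s)
}
```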

If the answer is yes, then if a generic type embeds many other generic types, or a generic function calls many other generic functions, then we need to combine many contracts into one as the contract of the embedding type or caller function. This may cause a const-poisoning-like problem.

I think what you say is true, but it's not the const-poisoning problem. The const-poisoning problem is that you have to spread const everywhere an argument is passed, and then if you discover some place where the argument has to be changed you have to remove const everywhere. The case with generics is more like "if you call several functions, you have to pass values of the correct type to each of those functions."

In any case it seems to me to be extremely unlikely that people will write generic functions that call many other generic functions that all use different contracts. How would that naturally happen?

Besides the current type kind and method set constraints, do we need other constraints? Such as convertible from one type to another, assignable from one type to another, comparable between two types, is a sendable channel, is a receivable channel, has a specified field set, ...

Constraints like convertibility and assignability and comparability are expressed in the form of types, as the design draft explains. Constraints like sendable or receivable channel can only be expressed in the form of chan T where T is some type parameter, as the design draft explains. There is no way to express the constraint that a type has a specified field set, but I doubt that will come up very often. We will have to see how this works out by writing real code to see what happens.
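For example, under the draft a channel constraint needs no contract clause at all; writing the parameter's type as chan T (or a directional variant) already expresses it. A non-compilable sketch in the draft syntax:

```
// No contract needed: the parameter type <-chan T itself guarantees
// that values of type T can be received from c.
func Drain(type T)(c <-chan T) []T {
    var r []T
    for v := range c {
        r = append(r, v)
    }
    return r
}
```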

If a generic function uses a line like the following one

v.Foo()
How can we write a contract which allows Foo to be either a method or a field of function type?

In the current design draft, you can't. Does that seem like an important use case? (I know that the previous design draft did support this.)

@griesemer you missed the point where I said that was only valid if you introduce dependent-types or existential-types into the type system.

Otherwise if you use a contract as an interface you can fail at runtime, because you need to defer type checking until after you know the types, and type checking can fail, which is therefore not type-safe.

I have also seen interfaces explained as subtypes, so you have to be careful someone does not try and introduce co/contra-variance into type-constructors in the future. Better to not have interfaces as types, then there is no possibility of this, and the intentions of the designers, that these are not subtypes, are clear.

For me it would be a better design to merge interfaces and contracts, and make them explicitly type constraints (predicates on types).

@ianlancetaylor

In any case it seems to me to be extremely unlikely that people will write generic functions that call many other generic functions that all use different contracts. How would that naturally happen?

Why would that be unusual? If I define a function on type 'T' then I will want to call functions on 'T'. For example, I might define a 'sum' function over 'addable types' by contract. Now what if I want to build a generic multiply function that calls sum? Many things in programming have a sum/product structure (anything that is a 'group').

I don't get what the purpose of interfaces will be once contracts are in the language; it looks like contracts will serve the same purpose, to ensure a type has a set of methods defined on it.

@keean The unusual case is functions that call many other generic functions that all use different contracts. Your counter-example is only calling one function. Remember that I am arguing against the similarity to const-poisoning.

@mrkaspa The simplest way to think about it is that contracts are like C++ template functions and interfaces are like C++ virtual methods. There is a use and purpose for both.
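To make the analogy concrete, here is a sketch of the same operation written both ways; the generic half uses the draft's non-compilable syntax, and Shape/shaped are made-up names:

```
// Interface version: one compiled function, dynamic dispatch,
// and the slice may mix different concrete types.
func SumAreas(shapes []Shape) (total float64) {
    for _, s := range shapes {
        total += s.Area()
    }
    return total
}

// Contract version: may be compiled per type argument, static
// dispatch, and all elements share one concrete type T.
func SumAreasG(type T shaped)(shapes []T) (total float64) {
    for _, s := range shapes {
        total += s.Area()
    }
    return total
}
```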

@ianlancetaylor from experience there are two problems that occur that are similar to const poisoning. Both occur because of the tree-like nature of nested function calls. The first is when you want to add debugging to a deeply nested function: you have to add a 'printable' constraint from the leaf all the way to the root, which could involve touching multiple third-party libraries. The second is that you can accumulate a large number of contracts at the root, making function signatures difficult to read. It is often better to have the compiler infer the constraints, as Haskell does with type-classes, to avoid these two problems.

@ianlancetaylor I don't know too much about c++, what will be the use cases for interfaces and contracts in golang? when should I use interface or contract?

@keean This subthread is about a specific design draft for the Go language. In Go all values are printable. It is not something that needs to be expressed in a contract. And while I'm willing to see evidence that many contracts can accumulate for a single generic function or type, I'm not willing to accept an assertion that that will happen. The point of the design draft is to try writing real code that uses it.

The design draft explains as clearly as I can why I think that inferring the constraints is a poor choice for a language like Go which is designed for programming at scale.

@mrkaspa For example, if you have a []io.Reader then you want an interface value, not a contract. A contract would require that all elements in the slice be the same type. An interface will permit them to be different types, as long as all the types implement io.Reader.

@ianlancetaylor as far as I can tell, an interface creates a new type, while a contract constrains a type but does not create a new one - am I right?

@ianlancetaylor:

Could you not do something like the following?

contract Reader(T) {
  T Read([]byte) (int, error)
}

func ReadAll(type T Reader)(readers []T) ([]byte, error) {
  // Use the readers...
}

Now ReadAll() should accept a []io.Reader just as well as it would accept a []*os.File, wouldn't it? io.Reader does seem to satisfy the contract, and I don't remember anything in the draft about interface values being unable to be used as type arguments.

Edit: Never mind. I misunderstood. This is still a place where you'd be using an interface, so it is an answer to @mrkaspa's question. You're just not using the interface in the function signature; you're only using it where it gets called.

@mrkaspa Yes, that is true.

@ianlancetaylor if I had a list of []io.Reader and this contract:

contract Reader(T) {
  T Read([]byte) (int, error)
}

func ReadAll(type T Reader)(readers []T) ([]byte, error) {
  // Use the readers...
}

I could call ReadAll over each interface because they satisfy the contract?

@ianlancetaylor sure, things are printable, but it's easy to come up with other examples, for example logging to a file or to the network: we want logging to be generic so we can change the log target between null, local file, network service, etc. Adding logging to a leaf function requires adding the constraints all the way back to the root, including having to modify third-party libraries used.

Code is not static, you have to allow for maintenance too. In fact code is in 'maintenance' for a lot longer than it takes to initially write, so there is a good argument that we should design languages to make maintenance, refactoring, adding features etc easier.

Really these problems will only manifest in a large codebase that is maintained over time. It is not something you can write a quick small example to demonstrate.

These problems exist in other generic languages as well, for example Ada. You could port some large Ada application that makes extensive use of generics to Go, but if the problem exists in Ada I don't see anything in Go that would mitigate that problem.

@mrkaspa Yes.

At this point I suggest that this conversation thread should move to golang-nuts. The GitHub issue tracker is a poor place for this kind of discussion.

@keean Perhaps you are right. Time will tell. We're explicitly asking people to try writing code to the design draft. There is little value in purely hypothetical discussions.

@keean I don't understand your logging example. The problem you describe is something you can solve with interfaces at runtime, not with generics at compile time.

@bserdar interfaces only have one type parameter, so you cannot do something where one parameter is the thing to be logged, and a second type parameter is the type of the log.

@keean IMO in that example you'd do the same thing you are doing today, without any type parameters at all: use reflection to inspect the thing to be logged and use context.Context to pass the value of the log. I know that these ideas are repulsive to typing-enthusiasts, but they turn out to be pretty practical. Of course there is value in constrained type-parameters, which is why we're having this conversation - but I'd posit that the cases that come to your mind are the ones that already work pretty well in current Go codebases at scale precisely because they are not the cases that truly benefit from additional strict type-checking. Which comes back to Ian's point - it remains to be seen whether this is a problem that manifests in practice.

@merovius If it were up to me all runtime reflection would be banned, as I don't want shipped software generating typing errors at runtime that could affect the user. This allows more aggressive compiler optimisations because you don't have to worry about the runtime model aligning with the static model.

Having dealt with migrating large projects at scale from JavaScript to TypeScript, in my experience strict typing becomes more important the bigger the project and the larger the team working on it. This is because you need to rely on the interface/contract of a block of code without having to look at the implementation to maintain efficiency when working with a large team.

Aside: Of course it does depend on how you achieve scale, right now I prefer an API-First approach, starting with an OpenAPI/Swagger JSON file and then using code-generation to build the server stubs and the client SDK. As such OpenAPI is actually acting as your type system for micro-services.

@ianlancetaylor

Constraints like convertibility and assignability and comparability are expressed in the form of types

Considering there are so many details in the Go type conversion rules, it is really hard to write a custom contract C to satisfy the following general slice conversion function:

func ConvertSlice(type In, Out C(In, Out)) (x []In) []Out {
    o := make([]Out, len(x))
    for i := range x {
        o[i] = Out(x[i])
    }
    return o
}

A perfect C should allow conversions:

  • between any integer, floating-point numeric types
  • between any complex numeric types
  • between two types whose underlying types are identical
  • from a type In to an interface type Out, where In implements Out
  • from a bi-directional channel type to a channel type, where the two channel types have identical element types
  • struct tag related, ...
  • ...

By my understanding, I can't write such a contract. So do we need a builtin convertible contract?

There is no way to express the constraint that a type has a specified field set, but I doubt that will come up very often

Considering that type embedding is used often in Go programming, I think the needs wouldn't be rare.

@keean That's a valid opinion to have, but it's obviously not the one guiding the design and development of Go. To participate constructively, please accept that and start working from where we are and under the assumption that any development of the language must be a gradual change from the status quo. If you can't, then there are languages that are more closely aligned with your preferences and I feel everyone - you in particular - would be happier if you contribute your energy there.

@merovius I am prepared to accept that changes to Go must be gradual, and accept the status quo.

I was just replying to your comment as part of a conversation, agreeing that I am a typing-enthusiast. I stated an opinion about runtime reflection, I did not suggest that Go should abandon runtime reflection. I do work on other languages, use many languages in my work. I am developing (slowly) my own language, but I am always hopeful developments to other languages will make this unnecessary.

@dotaheor I agree that we can't write a general contract for convertibility today. We'll have to see whether that seems to be a problem in practice.

Responding to @ianlancetaylor

I don't think it's clear yet how often people will want to parameterize functions on constant values. The most obvious case would be for array dimensions--but you can already do that by passing the desired array type as a type argument. Other than that case, what do we really gain by passing a const as a compile-time argument rather than a run-time argument?

In the case of arrays, just passing the (whole) array type as a type argument seems to be extremely limiting, because the contract wouldn't be able to decompose either the array dimension or the element type and impose constraints on them. For example, could a contract taking a "whole array type" require the array type's element type to implement certain methods?

But your call for more specific examples of how non-type generic parameters would be useful is well taken, so I expanded the blog post to include a section covering a couple significant classes of example use-cases and a few specific examples of each. Since it's been a few days, again the blog post is here:

Are Only Type Parameters Generic Enough for Go 2 Generics?

The new section is titled "Example Ways Generics Over Non-Types are Useful".

As a quick summary, contracts for matrix and vector operations could impose appropriate constraints on both the dimensionality and the element types of arrays. For example, matrix multiplication of an n x m matrix with an m x p matrix, each represented as a two-dimensional array, could correctly constrain the first matrix's number of columns to equal the second matrix's number of rows, etc.

More generally, generics could use non-type parameters to enable compile-time configuration and specialization of code and algorithms in numerous ways. For example, a generic variant of math/big.Int could be configurable at compile time to a particular bit width and/or signedness, satisfying demands for 128-bit integers and other non-native fixed-width integers with reasonable efficiency, likely much better than the existing big.Int, where everything is dynamic. A generic variant of big.Float might similarly be specializable at compile time to a particular precision and/or other compile-time parameters, e.g., to provide reasonably-efficient generic implementations of the binary16, binary128, and binary256 formats from IEEE 754-2008 that Go doesn't support natively. Finally, many library algorithms can optimize their operation based on knowledge of the user's needs or particular aspects of the data being processed: e.g., graph algorithm optimizations that work only on non-negative edge weights or only on DAGs or trees, matrix processing optimizations that rely on matrices being upper- or lower-triangular, or big-integer arithmetic for cryptography that sometimes needs to be implemented in constant time and sometimes not. Such algorithms could use generics to make themselves configurable at compile time to depend on optional declarative information like this, while ensuring that all tests of these compile-time options in the implementation typically get compiled out via constant propagation.
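To make the first of those examples concrete, here is a purely hypothetical sketch of what such a declaration could look like, extending the draft's syntax with compile-time non-type parameters. This syntax appears in no actual proposal; every name in it is illustrative:

```
// Hypothetical syntax: "bits" and "signed" are compile-time parameters,
// not types. The storage is sized at instantiation time, with no
// dynamic allocation, unlike big.Int.
type FixedInt(type bits int, signed bool) struct {
    words [(bits + 63) / 64]uint64
}

type Int128 = FixedInt(128, true)   // a reasonably efficient 128-bit integer
type UInt256 = FixedInt(256, false) // e.g. for cryptographic arithmetic
```

The point is that the array length inside the struct is derived from the integer parameter at compile time, so each instantiation is a distinct fixed-size value type.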

@bford wrote:

namely that the parameters to generics are bound to constants at compile time.

This is the point I don't understand: why do you require this condition?
Theoretically, one could redefine variables/parameters in the body; it doesn't matter.
Intuitively, I assume you would want to state that the first function application must occur at compile time.

But for this requirement, a keyword like comp or comptime would be a better fit.
Further, if golang's grammar allowed at most two parameter tuples for a function, this keyword annotation could be left out, because the first parameter tuple of a type, or of a function (in the case of two parameter tuples), would always be evaluated at compile time.

Another point: What if const is extended to allow for runtime expressions (true single sign on)?

On Pointer vs value methods:

If a method is listed in a contract with a plain T rather than *T, then it may be either a pointer method or a value method of T. In order to avoid worrying about this distinction, in a generic function body all method calls will be pointer method calls. ...

How does this square with interface implementation? If a T has some pointer method (like the MyInt in the example), can T be assigned to the interface with that method (Stringer in the example)?

Allowing it means having another hidden address operation &, not allowing it means contracts and interfaces can only interact via explicit type switch. Neither solutions seems good to me.

(Note: we should revisit this decision if it leads to confusion or incorrect code.)

I see the team already has some reservations about this ambiguity in the pointer method syntax. I'm just adding that the ambiguity also affects interface implementation (and implicitly adding my reservations about it too).

@fJavierZunzunegui You're right, the current text does imply that when assigning a value of a type parameter to an interface type, an implicit address operation may be required. That may be another reason to not use implicit addresses when invoking methods. We'll have to see.

On Parameterized types, particularly regarding type parameters embedded as a field in a struct:

Consider

type Lockable(type T) struct {
    T
    sync.Locker
}

What if T had a method named Lock or Unlock? The struct would not compile. This "must not have a method X" condition isn't supported by contracts, hence we would have invalid code which does not break the contract (defeating the whole purpose of contracts).

It gets even more complicated if you have multiple embedded parameters (say T1 and T2) since those must not have common methods (again, not enforced by contracts). Additionally, supporting arbitrary methods depending on the embedded types contributes to very limited compile time restrictions on type switches for those structs (very similarly to Type assertions and switches).

As I see it there are 2 good alternatives:

  • disallowing embedding of type parameters altogether: simple, but at a small cost (if the method is needed, one has to write it explicitly in the struct with the field).
  • restrict callable methods to the contract ones: similarly to embedding an interface. This deviates from normal go (a non-goal) but at no cost (methods don't need to be written explicitly in the struct with the field).

The struct would not compile.

It would compile. Try it. What fails to compile is a call to the ambiguous method. Your point is still valid, however.

Your second solution, restricting callable methods to the ones mentioned in the contract, won't work: even if the contract on T specified Lock and Unlock, you still couldn't call them on a Lockable.
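The "Try it" above can be reproduced without generics at all, using plain embedding. In this sketch (types are illustrative) the struct declaration compiles; only a call through the ambiguous selector fails:

```go
package main

import (
	"fmt"
	"sync"
)

// Locker provides Lock/Unlock, as sync.Mutex does, so embedding both
// makes the promoted Lock and Unlock selectors ambiguous.
type Locker struct{}

func (Locker) Lock()   {}
func (Locker) Unlock() {}

type Lockable struct {
	Locker
	sync.Mutex
}

func main() {
	var l Lockable // the type itself compiles fine
	// l.Lock() // does not compile: ambiguous selector l.Lock
	l.Locker.Lock() // selecting through the embedded field is unambiguous
	fmt.Println("ok")
}
```

This mirrors the Lockable(T) case: instantiating with a T that has Lock/Unlock would not break the struct, only unqualified calls to the ambiguous methods.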

@jba thanks for the insights on compilation.

By the second solution I mean treating embedded type parameters as we do with interfaces now, such that if the method is not in the contract it is not immediately accessible after embedding. In this scenario, since T has no contract it is effectively treated as interface{}, thus it would not conflict with the sync.Locker even if T were instantiated with a type with those methods. This might help explain my point.

Either way I prefer the first solution (banning embedding altogether), so if that is your preference there is little purpose discussing the second one! :smiley:

The example provided by @JavierZunzunegui also covers another case: what if T is a struct that has a noCopy noCopy field? The compiler should be able to handle that case as well.

Not sure if this is exactly the right place for this, but I wanted to comment with a concrete real-world use case for generic types that allow for "parameterization on non-type values such as constants", and specifically for the case of arrays. I hope this is helpful.

In my world without generics, I write a lot of code that looks like this:

import "math/bits"

// SigEl is the element type used in variable length bit vectors, 
// can be any unsigned integer type
type SigEl = uint

// SigElBits is the number of bits storable in each SigEl
const SigElBits = 8 << uint((^SigEl(0)>>32&1)+(^SigEl(0)>>16&1)+(^SigEl(0)>>8&1))

// HammingDist counts the number bitwise differences between two
// bit vectors b1 and b2. I want this to be generic
// Function will panic at runtime if b1 and b2 aren't of equal length.
func HammingDist(b1, b2 []SigEl) (sum int) {
    // Give the compiler a hint so it won't need to bounds check the slices in loops
    _ = b1[len(b2)-1]  
    // This switch is optimized away because SigElBits is const
    switch SigElBits {   // Yay no golang generics!
    case 64:
        _ = b2[len(b1)-1]
        for x := range b1 {
            sum += bits.OnesCount64(uint64(b1[x] ^ b2[x]))
        }
    case 32:
        _ = b2[len(b1)-1]
        for x := range b1 {
            sum += bits.OnesCount32(uint32(b1[x] ^ b2[x]))
        }
    case 16:
        _ = b2[len(b1)-1]
        for x := range b1 {
            sum += bits.OnesCount16(uint16(b1[x] ^ b2[x]))
        }
    case 8:
        _ = b2[len(b1)-1]
        for x := range b1 {
            sum += bits.OnesCount8(uint8(b1[x] ^ b2[x]))
        }
    }
    return sum
}

This works well enough, with one wrinkle. I often need hundreds of millions of []SigEls, and their length is often 128-384 total bits. Because slices impose a fixed 192-bit overhead on top of the size of the underlying array, when the array itself is 384 bits or less, this imposes an unnecessary 50-150% memory overhead, which is obviously terrible.

My solution is to allocate a slice of Sig _arrays_, and then slice them on the fly as the parameters to HammingDist above:

const SigBits = 256  // Any multiple of SigElBits is valid

// Sig is the bit vector array type
type Sig [SigBits/SigElBits]SigEl

bitVects := make([]Sig, 100000000)
// stuff happens ... 

// Note slicing below, just to make the arrays "generic" for the call 
dist := HammingDist(bitVects[x][:], bitVects[y][:])

What I'd _like_ to be able to do instead of all of that is define a generic Signature type and rewrite all of the above as (something like):

contract UnsignedInteger(T) {
    T uint, uint8, uint16, uint32, uint64
}

type Signature (type Element UnsignedInteger, n int) [n]Element

// HammingDist counts the number bitwise differences between two bit vectors
func HammingDist(b1, b2 *Signature) (sum int) {
    for x := range *b1 {
        // Assuming the std lib bits.OnesCount becomes generic over 
        // all UnsignedInteger types
        sum += bits.OnesCount((*b1)[x] ^ (*b2)[x])
    }
    return sum
}

So then to use this library:

type sigEl = uint   // Any unsigned int type
const sigElBits = 8 << uint((^sigEl(0)>>32&1)+(^sigEl(0)>>16&1)+(^sigEl(0)>>8&1))
const sigBits = 256  // Any multiple of SigElBits is valid
type sig Signature(sigEl, sigBits/sigElBits)

bitVects := make([]sig, 100000000)
// stuff happens ... 

dist := HammingDist(&bitVects[x], &bitVects[y])

An engineer can dream... 🤖

If you know how large the maximum bit length might be you can use something like this instead:

contract uintArrayOfFixedLength(ElemType,ArrayType)
{
    ArrayType [1]ElemType,[2]ElemType,...,[maxBit]ElemType
    ElemType uint8,uint16,uint32,uint64
}

func HammingDist(type ElemType,ArrayType uintArrayOfFixedLength)(t1,t2 ArrayType) (sum int)
{

}

@vsivsi I am not sure I understand how you think that will improve things - are you maybe assuming that the compiler would generate an instantiated version of that function for every possible array length? Because ISTM that a) that's not super likely, so b) you'd end up with exactly the same performance characteristics as you do now. The most likely implementation, IMO, would still be that the compiler passes the length and a pointer to the first element, so you'd effectively still pass a slice in the generated code (I mean, you wouldn't pass the capacity, but I don't think an additional word on the stack really matters).

Honestly, IMO what you are saying is a pretty good example for overusing generics, where they aren't needed - "an array of indeterminate length" is exactly what slices are for.

@Merovius Thanks, I think your comment reveals a couple of interesting discussion points.

"an array of indeterminate length" is exactly what slices are for.

Right, but in my example there are no arrays of indeterminate length. The array length is a known constant at _compile time_. This is precisely what arrays are for, but they are underused in golang IMO because they are so inflexible.

To be clear, I'm not suggesting

type Signature (type Element UnsignedInteger, n int) [n]Element

means that n is a runtime variable. It must still be a constant in the same sense as today:

const n = 10
type nArray [n]uint               // works
type nSigInt Signature(uint, n)   // works 

var m = int(n)
type mArray [m]uint               // error
type mSigInt Signature(uint, m)   // error 

So let's look at the "cost" of the slice based HammingDist function. I agree that the difference between passing an array as bitVects[x][:] vs &bitVects[x] is small(-ish, a factor of 3 max). The real difference comes in the code and runtime checking that needs to happen inside that function.

In the slice based version, the runtime code needs to bounds check the slice accesses to ensure memory safety. This means that this version of the code can panic (or an explicit error checking and return mechanism is needed to prevent that). The NOP assignments (_ = b1[len(b2)-1]) make a meaningful performance difference by giving the compiler optimizer a hint that it needn't bounds check every slice access in the loop. But these minimal bounds checks are still necessary, even though the passed underlying arrays are always the same length. Furthermore, the compiler may have difficulty profitably optimizing the for/range loop (say via unrolling).

In contrast, the generic array based version of the function cannot panic at runtime (requiring no error handling) and bypasses the need for any conditional bounds checking logic. I highly doubt a compiled generic version of the function would need to "pass" the array length as you suggest because it is literally a constant value that is part of the instantiated type at compile time.

Furthermore, for small array dimensions (important in my case), it would be easy for the compiler to profitably unroll or even entirely optimize away the for/range loop for a decent performance gain, since it will know at compile time what those dimensions are.

The other big benefit of the generic version of the code is it allows the user of the HammingDist module to determine the unsigned int type in their own code. The non-generic version requires the module itself to be modified to change the defined type SigEl, since there is no way to "pass" a type to a module. A consequence of this difference is that the implementation of the distance function becomes simpler when there is no need to write separate code for each of the {8,16,32,64}-bit uint cases.

The costs of the slice based version of the function and needing to modify the library code to set the element type are highly sub-optimal concessions needed to avoid having to implement and maintain "NxM" versions of this function. Generic support for (constant) parameterized array types would solve this problem:

// With generics + parameterized constant array lengths:
type Signature (type Element UnsignedInteger, n int) [n]Element
func HammingDist(b1, b2 *Signature) (sum int) { ... }

// Without generics
func HammingDistL1Uint(b1, b2 [1]uint) (sum int) { ... }
func HammingDistL1Uint8(b1, b2 [1]uint8) (sum int) { ... }
func HammingDistL1Uint16(b1, b2 [1]uint16) (sum int) { ... }
func HammingDistL1Uint32(b1, b2 [1]uint32) (sum int) { ... }
func HammingDistL1Uint64(b1, b2 [1]uint64) (sum int) { ... }

func HammingDistL2Uint(b1, b2 [2]uint) (sum int) { ... }
func HammingDistL2Uint8(b1, b2 [2]uint8) (sum int) { ... }
func HammingDistL2Uint16(b1, b2 [2]uint16) (sum int) { ... }
func HammingDistL2Uint32(b1, b2 [2]uint32) (sum int) { ... }
func HammingDistL2Uint64(b1, b2 [2]uint64) (sum int) { ... }

func HammingDistL3Uint(b1, b2 [3]uint) (sum int) { ... }
func HammingDistL3Uint8(b1, b2 [3]uint8) (sum int) { ... }
func HammingDistL3Uint16(b1, b2 [3]uint16) (sum int) { ... }
func HammingDistL3Uint32(b1, b2 [3]uint32) (sum int) { ... }
func HammingDistL3Uint64(b1, b2 [3]uint64) (sum int) { ... }

// and L4, L5, L6 ... ad nauseum

Avoiding the above nightmare, or the very real costs of the current alternatives seems like the _opposite_ of "generic overuse" to me. I agree with @sighoya that enumerating all of the permissible array lengths in the contract could work for a very limited set of cases, but I believe that is too limited even for my case, since even if I put the upper cutoff of support at a low 384 total bits, that would require nearly 50 terms in the ArrayType [1]ElemType,[2]ElemType,...,[maxBit]ElemType clause of the contract to cover the uint8 case.

Right, but in my example there are no arrays of indeterminate length. The array length is a known constant at compile time.

I understand that, but note that I didn't say "at runtime" either. You want to write code that is oblivious to the length of the array. Slices can already do that.

I highly doubt a compiled generic version of the function would need to "pass" the array length as you suggest because it is literally a constant value that is part of the instantiated type at compile time.

A generic version of the function would - because every instantiation of that type uses a different constant. That is why I get the impression that you are assuming that the generated code won't be generic, but expanded for every type. i.e. you seem to assume that there will be several instantiations of that function generated, for [1]Element, [2]Element, etc. I'm saying, that that seems unlikely to me, that it seems more likely that there will be one version generated, which is essentially equivalent to the slice-version.

Of course it doesn't have to be that way. So, yeah, you are right in that you don't need to pass the array length. I'm just strongly predicting that it would be implemented that way and it seems a questionable assumption that it won't. (FWIW, I'd also argue that if you are willing to have the compiler generate specialized function bodies for separate lengths, it could just as well do that transparently for slices too, but that's a different discussion).

The other big benefit of the generic version of the code

To clarify: By "the generic version", are you referring to the general idea of generics, as implemented for example in the current contracts design draft, or are you referring more specifically to generics with non-type-parameters? Because the advantages you name in this paragraph apply to the current contracts design draft too.

I'm not trying to make a case against generics in general here. I'm just explaining why I don't think your example serves to show that we need other parameter-kinds than types.

// With generics + parameterized constant array lengths:
// Without generics

This is a false dichotomy (and a so obvious one that I'm a bit frustrated with you). There's also "with type-parameters, but without integer-parameters":

contract Unsigned(T) {
    T uint, uint8, uint16, uint32, uint64
}
func HammingDist(type T Unsigned) (b1, b2 []T) (sum int) {
    if len(b1) != len(b2) {
        panic("slices of different lengths passed to HammingDist")
    }
    for i := range b1 {
        sum += bits.OnesCount(b1[i]^b2[i]) // Same assumption about OnesCount being generic you made above
    }
    return sum
}

Which seems fine to me. It is slightly less type-safe, requiring a run-time panic if the lengths don't match. But, and that's kind of my point, that's the only advantage of adding non-type generic parameters in your example (and it's an advantage that was already clear, IMO). The performance gains you are predicting rely on pretty strong assumptions about how generics in general, and generics over non-type parameters specifically, are implemented - assumptions that I, personally, don't consider very likely based on what I've heard from the Go team so far.
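For readers comparing against released Go: the slice-based shape above is expressible in the type-parameter syntax that eventually shipped in Go 1.18. This sketch mirrors the contract with a constraint, and sidesteps the assumed generic bits.OnesCount by converting to uint64 (zero-extension keeps the popcount correct for all unsigned widths):

```go
package main

import (
	"fmt"
	"math/bits"
)

// Unsigned mirrors the Unsigned contract above as a Go 1.18 constraint.
type Unsigned interface {
	~uint | ~uint8 | ~uint16 | ~uint32 | ~uint64
}

// HammingDist counts bitwise differences between two equal-length vectors.
func HammingDist[T Unsigned](b1, b2 []T) (sum int) {
	if len(b1) != len(b2) {
		panic("slices of different lengths passed to HammingDist")
	}
	for i := range b1 {
		sum += bits.OnesCount64(uint64(b1[i] ^ b2[i]))
	}
	return sum
}

func main() {
	fmt.Println(HammingDist([]uint8{0xFF, 0x0F}, []uint8{0x00, 0xFF})) // → 12
}
```

The length check still happens at run time, which is exactly the trade-off being discussed.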

I highly doubt a compiled generic version of the function would need to "pass" the array length as you suggest because it is literally a constant value that is part of the instantiated type at compile time.

You’re just assuming that generics would work like C++ templates and duplicate function implementations but that’s just not right. The proposal explicitly allows for single implementations with hidden parameters.

I think if you really need templatized code for a small number of numeric types, it's not that big a burden to use a code generator. Generics are really only worth the code complexity for things like container types, where there's a measurable performance benefit from using primitive types but you can't reasonably expect to just generate a small number of code templates in advance.

I obviously have no idea how the golang maintainers will ultimately implement anything, so I'll refrain from speculating further and happily defer to those with more insider knowledge.

What I do know is that for the example real-world problem I shared above, the potential performance difference between the current slice-based implementation and a well-optimized generic array-based one is substantial.

BenchmarkHD/256-bit_unrolled_array_HD-20            2000000000           1.05 ns/op        0 B/op          0 allocs/op
BenchmarkHD/256-bit_slice_HD-20                     300000000            5.10 ns/op        0 B/op          0 allocs/op

Code at: https://github.com/vsivsi/hdtest

That's a 5-fold potential performance difference for the 4x64-bit case (a sweet spot in my work) with just a little loop-unrolling (and essentially no extra emitted code) in the array case. These calculations are in the inner loops of my algorithms, made literally many trillions of times, so a 5x performance difference is pretty huge. But to realize these efficiency gains today, I need to write every version of the function, for every needed element type and array length.

But yes, if optimizations such as these are never implemented by the maintainers, then the whole exercise of adding parameterized array lengths to generics would be pointless, at least as it might benefit this example case.

Anyway, interesting discussion. I know these are contentious issues, so thanks for keeping it civil!

@vsivsi FWIW, the wins you are observing vanish if you are not manually unrolling your loops (or if you are also unrolling the loop over a slice) - so this still doesn't actually support your argument that integer-parameters help because they allow the compiler to do the unrolling for you. It seems bad science to me, to argue X over Y, based on the compiler becoming arbitrarily clever for X and staying arbitrarily dumb for Y. It's not clear to me, why a different unrolling heuristic would trigger in the case of looping over an array, but not trigger in the case of looping over a slice with length known at compile-time. You are not showing the benefits of a certain flavor of generics over another, you are showing the benefits of that different unrolling heuristic.

But in any case, no one really argued that generating specialized code for each instantiation of a generic function wouldn't be potentially faster - just that there are other tradeoffs to consider when deciding if you want to do that.

@Merovius I think the strongest case for generics in this kind of example is with compile time elaboration (so emitting a unique function for each type-level-integer) where the code to be specialised is in a library. If the library user is going to be using a limited number of instantiations of the function, then they get the advantage of an optimised version. So if my code only uses arrays of length 64, I can use optimised elaborations of the library functions for length 64.

In this specific case, it depends on the frequency distribution of array lengths, because we might not want to elaborate all possible functions if there are thousands of them, due to memory constraints and page-cache thrashing which could make things slower. If for example small sizes are common, but larger ones are possible (a long-tail distribution in size), then we can elaborate specialised functions with unrolled loops for the small integers (say 1 to 64) and then provide a single generalised version with a hidden parameter for the rest.

I don't like the idea of the "arbitrarily clever compiler" and think this is a bad argument. How long will I have to wait for this arbitrarily clever compiler? I particularly don't like the idea of the compiler changing types, for example optimising a slice to an Array making hidden specialisations in a language with reflection, as when you reflect on that slice something unexpected could happen.

Regarding the "generic dilemma", personally I would go with "make the compiler slower/do more work", but try to make it as fast as possible by using a good implementation and separate compilation. Rust seems to do quite well, and after Intel's recent announcement it seems Rust could eventually replace 'C' as the main systems programming language. Compile time did not seem to be a factor in Intel's decision; the runtime memory and concurrency safety with 'C'-like speed seemed to be the key factors. Rust's "traits" are a reasonable implementation of generic type-classes; they have some annoying corner cases which I think come from their type-system design.

Referring back to our earlier discussion, I have to be careful to separate discussion about generics in general, and as how they might specifically apply to Go. As such I am not sure Go should even have generics as it complicates what is a simple and elegant language, much in the way 'C' does not have generics. I still think there is a gap in the market for a language that has generic implementations as a core feature, but remains simple and elegant.

I'm wondering if there's been any progress on this.

When can I try generics? I've been waiting for a long time.

@Nsgj You can checkout this CL: https://go-review.googlesource.com/c/go/+/187317/

In the current spec, is this possible?

contract Point(T) {
  T struct { X, Y float64 }
}

In other words, the type must be a struct with two fields, X and Y, of type float64.

edit: with example usage

func generate(type T Point)() T {
  return T{X: randomFloat64(), Y: randomFloat64()}
}

@abuchanan-nr Yes, the current design draft would permit that, though it's hard to see how it would be useful.

I am also not sure it's useful, but I didn't see a clear example of using a custom struct type in a type list of a contract. Most examples use builtin types.

FWIW, I was imagining a 2D graphics library. You might want each vertex to have a number of application-specific fields, like color, force, etc. But you might also want a generic library of methods and algorithms just for the geometry part, which only really relies on X,Y coordinates. It might be nice to pass your custom vertex type into this library, e.g.

type MyVertex struct {
  X, Y float64
  Color color.Color
  OtherAttr int
}
p := geo.RandomPolygon(MyVertex)()

for _, vert := range p.Vertices() {
  vert.Color = randColor()
}

Again, not sure that turns out to be a good design in practice, but it's where my imagination was at the time :)

See https://godoc.org/image#Image for how this is done in standard Go today.
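The image.Image analogue replaces the structural constraint with an ordinary interface that exposes only the geometry, letting application-specific fields ride along on the concrete type. A minimal sketch (all names are illustrative, not from any real geo package):

```go
package main

import "fmt"

// Vertex is the geometry-only view of a vertex, analogous to how
// image.Image abstracts over pixel storage.
type Vertex interface {
	XY() (x, y float64)
}

// MyVertex carries application-specific fields alongside coordinates.
type MyVertex struct {
	X, Y  float64
	Color int
}

func (v MyVertex) XY() (float64, float64) { return v.X, v.Y }

// Centroid works on any vertex type that exposes coordinates.
func Centroid(vs []Vertex) (cx, cy float64) {
	for _, v := range vs {
		x, y := v.XY()
		cx += x
		cy += y
	}
	n := float64(len(vs))
	return cx / n, cy / n
}

func main() {
	vs := []Vertex{MyVertex{X: 0, Y: 0}, MyVertex{X: 2, Y: 2}}
	fmt.Println(Centroid(vs)) // → 1 1
}
```

The cost relative to generics is indirection through the interface and losing the concrete type inside the library, which is part of the motivation for the struct-field contract above.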

With regards to Operators/Types in contracts:

This results in a duplication of many generic methods, as we would need them in operator format (+, ==, <, ...) and method format (Plus(T) T, Equal(T) bool, LessThan(T) bool, ...).

I propose we unify these two approaches into one, the method format. To achieve that, the pre-declared types (int, int64, string, ...) would need to be cast to types with arbitrary methods. For the (trivial) simple case that is already possible (type MyInt int; func (i MyInt) LessThan(o MyInt) bool {return int(i) < int(o)}), but the real value lies in composite types ([]int->[]MyInt, map[int]struct{}->map[MyInt]struct{}, and so on for channel, pointer, ...), which is not allowed (see FAQ). Allowing these conversions is a significant change to Go in itself, so I have expanded on the technicalities in Relaxed Type Conversion Proposal. That would allow generic functions to not deal with operators and still support all types, including pre-declared ones.

Note this change also benefits non-predeclared types. Under the current proposal, given type X struct{S string} (which comes from an external library, so you can't add methods to it), say you have a []X and want to pass it to a generic function expecting []T, for T satisfying the Stringer contract. That would require a type X2 X; func(x X2) String() string {return x.S}, and a deep copy of []X into []X2. Under this proposed change, you save the deep copy entirely.
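The deep copy being referred to looks like this in current Go (X stands in for the external library's type):

```go
package main

import "fmt"

// X comes from an external library; we cannot add methods to it.
type X struct{ S string }

// X2 is a locally defined type carrying the method the contract needs.
type X2 X

func (x X2) String() string { return x.S }

func main() {
	xs := []X{{"a"}, {"b"}}
	// []X cannot be converted to []X2 directly, so a new slice must be
	// allocated and every element copied over before the generic call.
	x2s := make([]X2, len(xs))
	for i, x := range xs {
		x2s[i] = X2(x)
	}
	fmt.Println(x2s[0].String(), x2s[1].String()) // → a b
}
```

The relaxed-conversion idea above would let the []X be reinterpreted as []X2 without this allocation and copy.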

NOTE: the mentioned Relaxed Type Conversion Proposal requires challenge. I suggest we keep the technical debate of that in the gist, and whatever applies specifically to the generic proposal here.

@JavierZunzunegui Providing a "method format" (or operator format) for basic unary/binary operators is not the problem. It's fairly straight-forward to introduce methods such as +(x int) int by simply allowing operator symbols as method names, and to extend that to built-in types (though even this breaks down for shifts, since the right-hand operand can be an arbitrary integer type - we don't have a way to express this at the moment). The problem is that that is not sufficient. One of the things that a contract needs to express is whether a value x of type X can be converted to the type of a type parameter T, as in T(x) (and vice versa). That is, one needs to invent a "method format" for permissible conversions. Furthermore, there needs to be a way to express that an untyped constant c can be assigned to (or converted to) a variable of type parameter type T: is it legal to assign, say, 256 to t of type T? What if T is byte? There are a few more things like this. One can invent "method format" notation for these things, but it gets complicated quickly, and it's not clear it's more understandable or readable.

I'm not saying it can't be done, but we have not found a satisfactory and clear approach. The current design draft which simply enumerates types on the other hand is pretty straight-forward to understand.

@griesemer This may be hard in Go due to other priorities, but it's quite a well solved problem in general. It is one of the reasons I see implicit conversions as bad. There are other reasons like magic happening that is not visible to someone reading the code.

If there are no implicit conversions in the type system, then I can use overloading to precisely control the range of types accepted, and Interfaces control overloading.

I would tend to express similarity between types using interfaces, hence operations like '+' would be expressed generically as operations on a numeric interface rather than a type. You need to have type variables as well as interfaces to express the constraint that both the arguments to and the result of addition must be the same type.

So here the addition operator is declared to operate over types with a Numeric interface. This ties in nicely with mathematics, where 'integers' and 'addition' form a "group" for example.

You would end up with something like:

+(T Numeric)(x T, y T) T

If you allow implicit interface selection, then the '+' operator can just be a method of the Numeric interface, but I think that would cause problems with method selection in Go?

@griesemer on your point about conversions:

One of the things that a contract needs to express is whether a value x of type X can be converted to the type of a type parameter T as in T(x) (and vice versa). That is, one needs to invent a "method format" for permissible conversions

I can see how that would be a complication, but I don't think it is needed. The way I see it, such conversions would happen outside the generic code, by the caller. An example (using Stringify as per the draft design):

Stringify(int)([]int{1,2}) // does not compile
type MyInt int
func (i MyInt) String() string {...}
Stringify(MyInt)([]MyInt([]int{1,2})) // OK. Generic type MyInt could be inferred

Above, as far as Stringify is concerned the argument is type []MyInt and meets the contract. Generic code can't convert generic types to anything else (other than interfaces they implement, as per the contract), precisely because their contract states nothing about that.

@JavierZunzunegui I don't see how the caller can do such conversions w/o exposing them in the interface/contract. For instance, I might want to implement a generic numeric algorithm (a parameterized function) operating on various integer or floating point types. As part of that algorithm, the function code needs to assign constant values c1, c2, etc. to values of the parameter type T. I don't see how the code can do this without knowing that it's ok to assign these constants to a variable of type T. (One certainly wouldn't want to have to pass those constants into the function.)

func NumericAlgorithm(type T SomeContract)(vector []T) T {
   ...
   vector[i] = 3.1415  // <<< how do we know this is valid without the contract telling us?
   ...
}

needs to assign constant values c1, c2, etc. to values of the parameter type T

@griesemer I would (in my view of how generics are/should be) say the above is the wrong problem statement. You are requiring T to be defined as a float32, but a contract only states what methods are available to T, not what it is defined as. If you need this, you can either keep vector as []T and require a func(float32) T argument (vector[i] = f(c1)), or much better keep vector as []float32 and require T by contract to have a method DoSomething(float32) or DoSomething([]float32), since I'm assuming the T and the floats must interact at some point. That means T may or may not be defined as type T float32; all we can say is that it has the methods required of it by the contract.

@JavierZunzunegui I am not saying at all that T be defined as a float32 - it could be a float32, a float64, or even one of the complex types. More generally, if the constant were an integer, there could be a variety of integer types that would be valid to pass in to this function, and some that aren't. It's certainly not a "wrong problem statement". The problem is real - it's certainly not contrived at all to want to be able to write such functions - and the problem doesn't go away by declaring it "wrong".

@griesemer I see, I thought you were concerned with the conversion alone, I didn't register the key element that it is dealing with untyped constants.

You can do as per my answer above, with T having a method DoSomething(X), and the function taking an additional argument func(float64) X, so the generic form is defined by two types (T,X). The way you describe the problem, X is normally float32 or float64 and the function argument is func(f float64) float32 {return float32(f)} or func(f float64) float64 {return f}.

More significantly, as you highlight, for the integer case there is the issue that less precise integer formats may not be enough for a given constant. The safest approach becomes keeping the two-typed (T,X) generic function private and exposing publicly only MyFunc32/MyFunc64/etc.

I will concede that MyFunc32(int32)/MyFunc64(int64)/etc. is less practical than a single MyFunc(type T Numeric) (the opposite is indefensible!). But this is only for generic implementations relying on a constant, and primarily an integer constant - how many of those are there? For the rest, you get the additional freedom of not being restricted to a few built in types for T.

And of course, if the function is not expensive, you might be perfectly OK doing the calculation as int64/float64 and exposing that only, keeping it both simple and unrestricted on T.

We really cannot say to people "you can write generic functions on any type T but those generic functions may not use untyped constants." Go is above all a simple language. Languages with bizarre restrictions like that are not simple.

Any time a proposed approach to generics becomes difficult to explain in a simple way, we must discard that approach. It is more important to keep the language simple than it is to add generics to the language.

@JavierZunzunegui One of the interesting properties of parameterized (generic) code is that the compiler can customize it based on the type(s) the code is instantiated with. For instance, one might want to use a byte type rather than int because it leads to significant space savings (imagine a function that allocates huge slices of the generic type). So simply restricting the code to a "large enough" type is an unsatisfying answer, even for an "opinionated" language such as Go.

Furthermore, it's not just about algorithms that use "large" untyped constants, which may be not so common: dismissing such algorithms with the question "how many of those are there anyway" is simply hand-waving to deflect a problem that does exist. Just for your consideration: it seems not unreasonable for a large number of algorithms to use integer constants such as -1, 0, 1. Note that one could not use -1 in conjunction with unsigned integers, just to give you a simple example. Clearly we cannot just ignore that. We need to be able to specify this in a contract.

@ianlancetaylor @griesemer thanks for the feedback - I can see there is significant conflict in my proposed change with untyped constants and negative integers, I'll put it behind me.

Can I bring your attention to the second point in https://github.com/golang/go/issues/15292#issuecomment-546313279:

Note this change also benefits non-predeclared types. Under the current proposal, given type X struct{S string} (which comes from an external library, so you can't add methods to it), say you have a []X and want to pass it to a generic function expecting []T, for T satisfying the Stringer contract. That would require a type X2 X; func(x X2) String() string {return x.S}, and a deep copy of []X into []X2. Under this proposed changes to this proposal, you save the deep copy entirely.

The relaxing of conversion rules (if technically feasible) would still be useful.

@JavierZunzunegui Discussing conversions of the sort []B([]A) if B(a) (with a of type A) is permitted seems to be mostly orthogonal to generic features. I think we don't need to bring this in here.

@ianlancetaylor I am not sure how relevant this is to Go, but I don't think constants are really untyped; they must have a type, as the compiler must choose a machine representation. I think a better term is constants of indeterminate type, as the constant may be representable by several different types. One solution is to use a union type, so a constant like 27 would have a type like int16|int32|float16|float32, a union of all the possible types. Then T in a generic type can be this union type. The only requirement is that we must at some point resolve the union to a single type. The most problematic case would be something like print(27), because there is never a single type to resolve to; in such cases any type in the union would do, and we could choose based on an optimisation parameter like space/speed etc.

@keean The exact name and handling of what the spec calls "untyped constants" is off-topic on this issue. Let's please take that discussion elsewhere. Thanks.

@ianlancetaylor I am happy to; however, this is one of the reasons why I think Go cannot have a clean/simple generics implementation: all of these problems are interconnected, and the original choices made for Go were not taken with generic programming in mind. I think another language, designed to make generics simple by design, is needed; for Go, generics will always be something added onto the language later, and the best option to keep the language clean and simple may be to not have them at all.

If I were to design a simple language today with fast compile times and comparable flexibility, I would choose method overloading and structural polymorphism (subtyping) via Go interfaces, and no generics. In fact it would allow overloading over different anonymous interfaces with different fields.

Choosing generics has the advantage of clean code reusability, but it introduces more noise, which gets complicated if constraints are added, sometimes leading to hardly understandable code.
Then, if we have generics, why not use an advanced constraint system like a where clause, higher-kinded types, or maybe higher-ranked types, and also dependent typing?
All these questions will eventually come up if Go adopts generics, sooner or later.

Stating it clearly: I'm not against generics, but I'm contemplating whether they are the right way forward for Go with regard to conserving Go's simplicity.

If the introduction of generics in Go is inevitable, then it would be reasonable to reflect on the impact on compile times when monomorphizing generic functions.
Wouldn't it be a good default to box generics, i.e. to generate one copy for all input types together, and to specialize only if explicitly requested by the user with some annotation at the definition or call site?

Regarding the impact on runtime performance, boxing would reduce performance due to boxing/unboxing overhead; on the other hand, there are expert-level C++ engineers recommending boxing generics, as Java does, in order to mitigate cache misses.

@ianlancetaylor @griesemer I have re-considered the issue of untyped constants and 'non-operator' generics (https://github.com/golang/go/issues/15292#issuecomment-547166519) and have figured a better way to deal with it.

Given the numeric types (type MyInt32 int32, type MyInt64 int64, ...), these have many methods satisfying the same contract (Add(T) T, ...) but critically not others that would risk overflow: MyInt64 has a method FromI64(int64) MyInt64, but MyInt32 has no FromI64(int64) MyInt32. This enables using numeric constants (explicitly assigned to the lowest precision value they require) safely (1), as low-precision numeric types will not satisfy the required contract, but all higher ones will. See playground, using interfaces in place of generics.

An advantage of relaxing numeric generics beyond the built-in types (not specific to this latest revision, so I should have shared it last week) is that it allows instantiating generic methods with overflow-checking types; see playground. Overflow-checking is itself a very popular request/proposal (https://github.com/golang/go/issues/31500 and related issues).


(1): The non-overflow compile-time guarantee for untyped constants is strong within the same 'branch' (int[8/16/32/64] and uint[8/16/32/64]). Crossing branches, a uint[X] constant is only safely instantiated to int[2X+], and an int[X] constant can't be safely instantiated by any uint[X] at all. Even relaxing these (allowing int[X]<->uint[X]) would be simple and safe following some minimal standards, and critically any complexity falls on the writer of the generic code, not on the user of the generic (who is only concerned with the contract, and can expect any numeric type that meets it is valid).
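The contract-by-method idea above can be sketched in today's Go with an interface standing in for the contract (hypothetical I64Convertible, MyInt64 and MyInt32 types, not the linked playground code):

```go
package main

import "fmt"

// Only types wide enough to hold any int64 declare FromI64, so code
// that needs to inject an int64-sized constant requires this method,
// and narrow types simply fail to qualify at compile time.
type I64Convertible interface {
	FromI64(int64) I64Convertible
}

type MyInt64 int64

func (MyInt64) FromI64(v int64) I64Convertible { return MyInt64(v) }

type MyInt32 int32 // deliberately has no FromI64: it would risk overflow

func main() {
	var x I64Convertible = MyInt64(0)
	fmt.Println(x.FromI64(1 << 40)) // fine for MyInt64
	// var y I64Convertible = MyInt32(0) // compile error: MyInt32 lacks FromI64
}
```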

Generic methods were the downfall of Java!

@ianlancetaylor I am happy to; however, this is one of the reasons why I think Go cannot have a clean/simple generics implementation: all of these problems are interconnected, and the original choices made for Go were not taken with generic programming in mind. I think another language, designed to make generics simple by design, is needed; for Go, generics will always be something added onto the language later, and the best option to keep the language clean and simple may be to not have them at all.

I agree 100%. As much as I would love to see some sort of generics implemented, I think what you guys are currently cooking will destroy the simplicity of the Go language.

The current idea to extend interfaces looks like this:

type I1(type P1) interface {
        m1(x P1)
}

type I2(type P1, P2) interface {
        m2(x P1) P2
        type int, float64
}

func f(type P1 I1(P1), P2 I2(P1, P2)) (x P1, y P2) P2

Sorry everybody, but please don't do this! It uglifies the beauty of Go big time.

Having written almost 100K lines of Go code now, I am ok with not having generics.

However, little things like supporting

// Allow multiple types in Slices and Maps declarations
func Reverse(s []<int,string>) {
    first := 0
    last := len(s) - 1
    for first < last {
        s[first], s[last] = s[last], s[first]
        first++
        last--
    }
}

//  Allow multiple types in variable declarations
func Index (s <string, []byte>, b byte) int {
    for i := 0; i < len(s); i++ {
        if s[i] == b {
            return i
        }
    }
    return -1
}

// Allow slices and maps declarations with interface values
func ToStrings (s []Stringer) []string {
    r := make([]string, len(s))
    for i, v := range s {
        r[i] = v.String()
    }
    return r
}

would help.

Syntax proposal to be able to separate generics completely from regular Go code

package graph

// Example how you would define generics completely separat from Go 1 code
contract (Node, Edge)G {
    Node Edges() []Edge
    Edge Nodes() (from, to Node)
}

type (type Node, Edge G) ( Graph )
func (type Node, Edge G) ( New )
const _ = (Node, Edge) Graph

// Unmodified Go 1 code
type Graph struct { ... }
func New(nodes []Node) *Graph { ... }
func (g *Graph) ShortestPath(from, to Node) []Edge { ... }

@martinrode However, little things like supporting
... allow multiple types in Slices and Maps declarations

This doesn't answer the need for some functional-ish generic slice functions, e.g. head(), tail(), map(slice, func), filter(slice, func).

You could just write that yourself for every project you need it in, but at that point it risks going stale due to copy-paste repetition and encourages Go code complexity to save language simplicity.

(On a personal level it's also kind of fatiguing to know that I have a set of features I want to implement and not having a clean way to express those without also answering to language constraints)

Consider the following in current, non-generic go:

I have a variable x of type externallib.Foo, obtained from a library externallib I do not control.
I want to pass it to a function SomeFunc(fmt.Stringer), but externallib.Foo has no String() string method. I can simply do:

type MyFoo externallib.Foo
func (mf MyFoo) String() string {...}
// ...
SomeFunc(MyFoo(x))

Consider the same with generics.

I have a variable x of type []externallib.Foo. I want to pass it to AnotherFunc(type T Stringer)(s []T). It can't be done without an expensive deep copy of the slice into a new []MyFoo. If instead of a slice it were a more complex type (say, a chan or map), or the method modified the receiver, it becomes even more inefficient and tedious, if at all possible.

This may not be a problem within the standard library, but that is only because it has no external dependencies. That is a luxury virtually no other project will have.

My suggestion is to relax conversion to allow []Foo([]Bar{}) for any Foo defined as type Foo Bar, or vice versa, and equally for maps, arrays, channels and pointers, recursively. Note these are all cheap shallow copies. More technical details in Relaxed Type Conversion Proposal.


This was first brought up as a secondary feature in https://github.com/golang/go/issues/15292#issuecomment-546313279.
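For comparison, here is what the language allows today: the element conversion Foo(b) is permitted, but the slice conversion []Foo([]Bar) is not, so an O(n) copy is required (hypothetical Foo/Bar types):

```go
package main

import "fmt"

type Foo struct{ S string }
type Bar Foo // same underlying type as Foo

func main() {
	bars := []Bar{{"x"}, {"y"}}
	// []Foo(bars) is not permitted today, even though Foo(bars[0]) is;
	// the proposal above would allow it as a cheap shallow conversion.
	foos := make([]Foo, len(bars)) // today: O(n) element-wise copy instead
	for i, b := range bars {
		foos[i] = Foo(b)
	}
	fmt.Println(foos) // [{x} {y}]
}
```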

@JavierZunzunegui I don't think that is really related to generics at all. Yes, you can provide an example using generics, but you can provide a similar example without using generics. I think that issue should be discussed separately, not here. See also https://golang.org/doc/faq#convert_slice_with_same_underlying_type. Thanks.

Without generics, such conversion has close to no value at all, because in general []Foo will not meet any interface, or at least no interface that makes use of it being a slice. The exception is interfaces that have a very specific pattern to make use of it, like sort.Interface, for which you don't need to convert the slice anyway.

The non-generic version of the above (func AnotherFunc(type T Stringer)(s []T)) is

type SliceOfStringers interface {
  Len() int
  Get(int) fmt.Stringer
}
func AnotherFunc(s SliceOfStringers) {...}

It may be less practical than the generic approach, but it can be made to handle any slice fine and do so without copying it, regardless of the underlying type actually being a fmt.Stringer. As it stands, generics can't, despite in principle being a much more suitable tool for the job. And surely, if we add generics, it is precisely to make slices, maps etc. more common in APIs, and to manipulate them with less boilerplate. Yet they introduce a new problem, without equivalent in an interface-only world, which _may_ not even be inevitable but artificially imposed by the language.
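For instance, a concrete slice type can satisfy SliceOfStringers without any copying (hypothetical Name and names types):

```go
package main

import "fmt"

type SliceOfStringers interface {
	Len() int
	Get(int) fmt.Stringer
}

type Name string

func (n Name) String() string { return string(n) }

// names adapts []Name to SliceOfStringers; no elements are copied.
type names []Name

func (s names) Len() int               { return len(s) }
func (s names) Get(i int) fmt.Stringer { return s[i] }

func AnotherFunc(s SliceOfStringers) string {
	out := ""
	for i := 0; i < s.Len(); i++ {
		out += s.Get(i).String()
	}
	return out
}

func main() {
	fmt.Println(AnotherFunc(names{"a", "b"})) // ab
}
```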

The type conversion you mention comes up often enough in non-generic code that it is a FAQ. Let's please move this discussion elsewhere. Thanks.

What's the state of this? Any UPDATED draft? I've been waiting for generics for almost 2 years. When will we have generics?

We're working on it. Some things take time.

Is the work done offline? I would love to see it evolving over time, in a way that the "general public" like me cannot comment on, to avoid noise.

Although it's since been closed to keep the generics discussion in one place, check out #36177 where @Griesemer links to a prototype he's working on and makes some interesting comments on his thoughts about the matter so far.

I think I'm right in saying that the prototype is just dealing with the type checking aspects of the draft 'contracts' proposal at present but the work certainly sounds promising to me.

@ianlancetaylor Any time a proposed approach to generics becomes difficult to explain in a simple way, we must discard that approach. It is more important to keep the language simple than it is to add generics to the language.

That is a great ideal to strive for, but in reality software development is at times inherently not _simple to explain_.

When the language is limited from expressing such _not simple to express_ ideas, software engineers end up reinventing those facilities again and again, because these damned _hard to express_ ideas are sometimes essential to the program's logic.

Look at Istio, Kubernetes, operator-sdk, to some extent Terraform, and even the protobuf library. They all escape the Go type system by using reflection, implementing a new type system on top of Go using interfaces and code generation, or a combination of these.

@omeid

Look at Istio, Kubernetes

Did it ever occur to you that the reason they're doing this absurd stuff is because their core design doesn't make any sense, and as a result they've had to resort to reflect games to fulfill it?

I maintain that better designs for golang programs (both in the design phase, and in the API) don't _require_ generics.

Please don't add them to golang.

Programming is hard. Kubelet is a dark place. Generics divide people more than American politics. I want to believe.

When the language is limited from expressing such not simple to express ideas, software engineers end up reinventing those facilities again and again, because these damned hard to express ideas are sometimes essential to the program's logic.

Look at Istio, Kubernetes, operator-sdk, to some extent Terraform, and even the protobuf library. They all escape the Go type system by using reflection, implementing a new type system on top of Go using interfaces and code generation, or a combination of these.

I don't find that to be a persuasive argument. The Go language should ideally be easy to read, and to write, and to understand, while still making it possible to perform arbitrarily complex operations. That is consistent with what you are saying: the tools you mention need to do something complex, and Go gives them a way to do it.

The Go language should ideally be easy to read, and to write, and to understand, while still making it possible to perform arbitrarily complex operations.

I agree with this, but because those are multiple goals they will sometimes be in tension with each other. Code that naturally "wants" to be written in a generic style often becomes less easy to read than it otherwise might be when it has to resort to techniques like reflection.

Code that naturally "wants" to be written in a generic style often becomes less easy to read than it otherwise might be when it has to resort to techniques like reflection.

Which is why this proposal remains open and why we have a design draft for a possible implementation of generics (https://blog.golang.org/why-generics).

Look at ... even protobuf library. They all escape the Go type system by using reflection, implementing a new type system on top of Go using interfaces and code generation, or a combination of these.

Speaking from experience with protobufs, there are a few cases where generics can improve the usability and/or implementation of the API, but the vast majority of the logic will not benefit from generics. Generics presume that concrete type information is known at compile time. For protobufs, most of the situations involve cases where the type information is known only at runtime.

In general, I notice that people often point at any use of reflection and claim it as evidence for the need for generics. It is not that simple. A crucial distinction is whether the type information is known at compile time or not. In a number of cases it fundamentally is not.

@dsnet Interesting, thanks; I never thought of protobuf as not being amenable to generics. I always assumed every tool that generates boilerplate Go code (for example protoc, based on a predefined schema) would be able to generate generic code without reflection under the current generics proposal. Would you mind covering this in the spec with an example, or in a new Go blog post describing the problem in more detail?

the tools you mention need to do something complex, and Go gives them a way to do it.

Using text templates to generate Go code is hardly a facility by design; I would argue it is an ad hoc band-aid. Ideally, at least the standard ast and parser packages should allow generating arbitrary Go code.

The only thing you can argue Go gives one to deal with complex logic is perhaps reflection, but that quickly shows its limitations, not to mention performance-critical code, even when used in the standard library; for example, Go's JSON handling is primitive at best.

It is hard to argue that using text templates or reflection to do _something already complex_ fits the ideal of:

Any time a proposed approach to ~generics~ something-complex becomes difficult to explain in a simple way, we must discard that approach.

I think the solutions the projects mentioned have come to for their problems are too complex, and not easy to understand. So in that regard, Go lacks the facilities that allow users to express complex problems in terms as simple and direct as possible.

In general, I notice that people often point at any use of reflection and claim that as evidence for the need for generics.

Maybe there is such a general misconception, but the protobuf library, especially the new API, could be leaps and bounds simpler with _generics_, or some kind of _sum type_.

One of the authors of that new protobuf API just said "the vast majority of the logic will not benefit from generics", so I'm not sure where you're getting that the new API "could be leaps and bounds simpler with generics". What is this based on? Can you provide any evidence that it'd be a lot simpler?

Speaking as someone who's used the protobuf APIs in a couple of languages that include generics (Java, C++), I can't say that I've noticed any significant usability differences with the Go API and their APIs. If your assertion were true, I'd expect there to be some such difference.

@dsnet Also said "there a few cases where generics can improve the usability and/or implementation of the API".

But if you want an example of how things can be simpler, start with dropping the Value type as it is largely an ad-hoc sum-type.

@omeid This issue is about generics, not sum types. So I'm not sure how that example is relevant.

Specifically, my question is: how would having generics result in a protobuf implementation or API that is "leaps and bounds simpler" than the new (or old, for that matter) API?

This seems not in line with my reading of what @dsnet said above, nor with my experience with the Java and C++ protobuf APIs.

Your comment on primitive JSON handling in Go also strikes me as equally odd. Can you explain how you think encoding/json's API would be improved by generics?

AFAIK, implementations of JSON parsing in Java use reflection (not generics). It's true that the top-level API in most JSON libraries will likely use a generic method (e.g. Gson), but a method that takes an unconstrained generic parameter T and returns a value of type T provides very little additional type-checking when compared to json.Unmarshal. In fact, I think the only additional error scenario not caught by json.Unmarshal at compile time is passing a non-pointer value. (Also, note the caveats in Gson's API documentation to use a different function for generic vs non-generic types. Again, this argues that generics complicated their API rather than simplified it; in this case, it is to support serializing/deserializing generic types.)

(JSON support in C++ is AFAICT worse; the various approaches I know of either use significant amounts of macros or involve manually writing parse/serialize functions. Again, this doesn't suggest generics would help.)

If you are expecting generics to add a great deal to Go's support for JSON, I fear you'll be disappointed.


@gertcuykens Every protobuf implementation in every language that I know of uses code-generation, regardless of whether they have generics or not. This includes Java, C++, Swift, Rust, JS (and TS). I don't think having generics automatically removes all uses of code generation (as an existence proof, I have written code-generators that generate Java code and C++ code); it seems illogical to expect that any solution for generics will meet that bar.


Just to be absolute clear: I support adding generics to Go. But I think we should be clear-eyed about what we're going to get out of it. I don't believe we'll get significant improvements to either protobuf or JSON APIs.

I don't think protobuf is a particularly good case for generics. You don't need generics in the target language as you can simply generate specialised code directly. This would apply to other similar systems like Swagger/OpenAPI too.

Where generics would seem to be useful to me, and could offer both simplification and type-safety would be in writing the protobuf compiler itself.

What you would need is a language that is capable of a type-safe representation of its own abstract syntax tree. From my own experience this requires at least generics and Generalised Algebraic Data Types (GADTs). You could then write a type-safe protobuf compiler for a language in the language itself.

Where generics would seem to be useful to me, and could offer both simplification and type-safety would be in writing the protobuf compiler itself.

I don't really see how. The go/ast package already provides a representation of Go's AST. The Go protobuf compiler doesn't use it because working with an AST is much more cumbersome than just emitting strings, even if it is more type-safe.

Perhaps you have an example from the protobuf compiler for some other language?

@neild I did start by saying that I didn't think protobuf was a very good example. There are gains to be made using generics, but they depend a lot on how important type safety is to you, and this would be counter balanced by how intrusive the implementation of generics is. An ideal implementation would get out of your way, unless you make a mistake, and in which case the advantages would outweigh the cost for more use-cases.

Looking at the go/ast package, it does not have a typed representation of the AST because that requires generics and GADTs. For example an 'add' node would need to be generic in the type of the terms being added. With a non type-safe AST all the type checking logic has to be hand coded which would make it cumbersome.

With a good template syntax and type safe expressions you could make it as easy as emitting strings, but also type safe. For example see (this is more about the parsing side): https://stackoverflow.com/questions/11104536/how-to-parse-strings-to-syntax-tree-using-gadts

For example consider JSX as a literal syntax for the HTML Dom in JavaScript Vs TSX as a literal syntax for the Dom in TypeScript.

We can write typed generic expressions that specialise to the final code. As easy to write as strings, but type checked (in their generic form).

One of the key problems with code generators is that type checking only happens on the emitted code, which makes writing correct templates difficult. With generics you can write the templates as actual type-checked expressions, so the checking is done directly on the template, not the emitted code, which makes it much easier to get it right, and to maintain.

Variadic type parameters are missing from the current design, which looks like a big gap in the functionality of generics. An add-on design (maybe) follows the current contract design:

contract Comparables(Ts...) {
    if len(Ts) > 0 {
        Comparable(Ts[0])
        Comparables(Ts[1:]...)
    }
}

contract Comparable(T) {
    T int, int8, int16, int32, int64,
        uint, uint8, uint16, uint32, uint64, uintptr,
        float32, float64,
        string
}

type Keys(type Ts ...Comparables) struct {
    fs ...Ts
}

type Metric(type Ts ...Comparables) struct {
    mu sync.Mutex
    m  map[Keys(Ts...)]int
}

func (m *Metric(Ts...)) Add(vs ...Ts) {
    m.mu.Lock()
    defer m.mu.Unlock()
    if m.m == nil {
        m.m = make(map[Keys(Ts...)]int)
    }
    m.m[Keys(Ts...){vs...}]++
}


// To use the metric

m := Metric(int, float64, string){m: make(map[Keys(int, float64, string)]int)}
m.Add(1, 2.0, "variadic")

Example inspired from here.

It's not clear to me how that adds any safety above just using interface{}. Is there a real problem with people passing non-comparables into a metric?
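For reference, the interface{}-based version that is possible today compiles for any field types and fails only at run time when a key field is not comparable (hypothetical key struct):

```go
package main

import "fmt"

// A struct of interface{} fields is a valid map key type, but using a
// value whose dynamic type is not comparable panics at run time rather
// than failing to compile.
type key struct{ a, b, c interface{} }

func main() {
	m := map[key]int{}
	m[key{1, 2.0, "variadic"}]++
	fmt.Println(m[key{1, 2.0, "variadic"}]) // 1
	defer func() { fmt.Println(recover() != nil) }() // true
	m[key{[]int{1}, 0, 0}]++ // panic: hash of unhashable type []int
}
```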

It's not clear to me how that adds any safety above just using interface{}. Is there a real problem with people passing non-comparables into a metric?

Comparables in this example requires that Keys consist of a series of comparable types. The key idea is to show the design of variadic type parameters, not the meaning of the type itself.

I don't want to get too hung up on the example, but I'm picking on it because I think many examples of "type extension" just end up pushing the bookkeeping around without adding any practical safety. In this case, if you see a bad type at run time or potentially with go vet, you could complain then.

Also, I'm a little worried that allowing open ended types of types like this would lead to the problem of paradoxical references, as occurs in second order logic. Could you define C as the contract of all types that aren't in C?

Also, I'm a little worried that allowing open ended types of types like this would lead to the problem of paradoxical references, as occurs in second order logic. Could you define C as the contract of all types that aren't in C?

Sorry, but I don't get how this example allows open-ended types or relates to Russell's paradox; Comparables is defined by a list of Comparable.

I do not like the idea of writing Go code inside a contract. If I can write an if statement, can I write a for statement? Can I call a function? Can I declare variables? Why not?

It also seems unnecessary. func F(a ...int) means that a is []int. By analogy, func F(type Ts ...comparable) would mean that each type in the list is comparable.

In these lines

type Keys(type Ts ...Comparables) struct {
    fs ...Ts
}

you seem to be defining a struct with multiple fields all named fs. I'm not sure how that is supposed to work. Is there any way to refer to fields in this struct other than using reflection?

So the question is: what can one do with variadic type parameters? What does one want to do?

Here I think you are using variadic type parameters to define a tuple type with an arbitrary number of fields.

What else might one want to do?

I do not like the idea of writing Go code inside a contract. If I can write an if statement, can I write a for statement? Can I call a function? Can I declare variables? Why not?

It also seems unnecessary. func F(a ...int) means that a is []int. By analogy, func F(type Ts ...comparable) would mean that each type in the list is comparable.

After reviewing the example a day later, I think you are absolutely right. The Comparables is a dumb idea. The example only wants to convey the message of using len(args) to determine the number of parameters. It turns out for functions, func F(type Ts ...Comparable) is good enough.

The trimmed example:

contract Comparable(T) {
    T int, int8, int16, int32, int64,
        uint, uint8, uint16, uint32, uint64, uintptr,
        float32, float64,
        string
}

type Keys(type Ts ...Comparable) struct {
    fs ...Ts
}

type Metric(type Ts ...Comparable) struct {
    mu sync.Mutex
    m  map[Keys(Ts...)]int
}

func (m *Metric(Ts...)) Add(vs ...Ts) {
    m.mu.Lock()
    defer m.mu.Unlock()
    if m.m == nil {
        m.m = make(map[Keys(Ts...)]int)
    }
    m.m[Keys(Ts...){vs...}]++
}


// To use the metric

m := Metric(int, float64, string){m: make(map[Keys(int, float64, string)]int)}
m.Add(1, 2.0, "variadic")

you seem to be defining a struct with multiple fields all named fs. I'm not sure how that is supposed to work. Is there any way to refer to fields in this struct other than using reflection?

So the question is: what can one do with variadic type parameters? What does one want to do?

Here I think you are using variadic type parameters to define a tuple type with an arbitrary number of fields.

What else might one want to do?

Variadic type parameters are aimed at tuples by definition if we use ... for them; that does not mean tuples are the only use case, as one can use them in any struct and any function.

Since there are only two places where variadic type parameters can appear, structs and functions, the function case is already clear:

func F(type Ts ...Comparable) (args ...Ts) {
    if len(args) > 1 {
        F(args[1:]...)
        return
    }
    // ... do stuff with args[0]
}

For instance, variadic Min function is not possible in the current design, but possible with variadic type parameters:

func Min(type T ...Comparable)(p1 T, pn ...T) T {
    switch l := len(pn); {
    case l > 1:
        return Min(pn[0], pn[1:]...)
    case l == 1:
        if p1 >= pn[0] { return pn[0] }
        return p1
    case l < 1:
        return p1
    }
}

To define a Tuple with variadic type parameters:

type Tuple(type Ts ...Comparable) struct {
    fs ...Ts
}

When three type parameters are instantiated for `Ts`, it can be translated to

type Tuple(type T1, T2, T3 Comparable) struct {
    fs_1 T1
    fs_2 T2
    fs_3 T3
}

as an intermediate representation. To use the fs, there are several ways:

  1. parameter unpacking
k := Tuple(int, float64, string){1, 2.0, "variadic"}
fs1, fs2, fs3 := k.fs // translated to fs1, fs2, fs3 := k.fs_1, k.fs_2, k.fs_3
println(fs1) // 1
println(fs2) // 2.0
println(fs3) // variadic
  2. use a for loop
for idx, f := range k.fs {
    println(idx, ": ", f)
}
// Output:
// 0: 1
// 1: 2.0
// 2: variadic
  3. use an index (not sure if people see this as ambiguous with array/slice or map)
k.fs[0] = ... // translated to k.fs_1 = ...
f2 := k.fs[1] // translated to f2 := k.fs_2
  4. use the reflect package; it basically works like an array
t := Tuple(int, float64, string){1, 2.0, "variadic"}

fs := reflect.ValueOf(t).Elem().FieldByName("fs")
val := reflect.ValueOf(fs)
if val.Kind() == reflect.VariadicTypes {
    for i := 0; i < val.Len(); i++ {
        e := val.Index(i)
        switch e.Kind() {
        case reflect.Int:
            fmt.Printf("%v, ", e.Int())
        case reflect.Float64:
            fmt.Printf("%v, ", e.Float())
        case reflect.String:
            fmt.Printf("%v, ", e.String())
        }
    }
}

Nothing really new compared to the use of an array.

For instance, variadic Min function is not possible in the current design, but possible with variadic type parameters:

func Min(type T ...Comparable)(p1 T, pn ...T) T {
    switch l := len(pn); {
    case l > 1:
        return Min(pn[0], pn[1:]...)
    case l == 1:
        if p1 >= pn[0] { return pn[0] }
        return p1
    case l < 1:
        return p1
    }
}

This doesn't make sense to me. Variadic type parameters only make sense if the types can be different types. But calling Min on a list of different types doesn't make sense. Go doesn't support using >= on values of different types. Even if we somehow permitted that, we might be asked for Min(int, string)(1, "a"). That doesn't have any sort of answer.

While it's true that the current design doesn't permit Min of a variadic number of different types, it does support calling Min on a variadic number of values of the same type. Which I think it is the only reasonable way to use Min anyhow.

func Min(type T comparable)(s ...T) T {
    if len(s) == 0 {
        panic("Min of no elements")
    }
    r := s[0]
    for _, v := range s[1:] {
        if v < r {
            r = v
        }
    }
    return r
}

For some of the other examples in https://github.com/golang/go/issues/15292#issuecomment-599040081, it's important to note that in Go slices and arrays have elements that are all the same type. When using variadic type parameters, the elements are different types. So it's really not the same as a slice or array.

While it's true that the current design doesn't permit Min of a variadic number of different types, it does support calling Min on a variadic number of values of the same type. Which I think it is the only reasonable way to use Min anyhow.

func Min(type T comparable)(s ...T) T {
    if len(s) == 0 {
        panic("Min of no elements")
    }
    r := s[0]
    for _, v := range s[1:] {
        if v < r {
            r = v
        }
    }
    return r
}

True. Min was a bad example. It was added late and wasn't clearly thought through, as you can see from the comment edit history. A real example is Metric, which you ignored.

it's important to note that in Go slices and arrays have elements that are all the same type. When using variadic type parameters, the elements are different types. So it's really not the same as a slice or array.

See? You are one of those people who see this as an ambiguity with array/slice or map. As I said in https://github.com/golang/go/issues/15292#issuecomment-599040081, the syntax is quite similar to array/slice and map, but it is accessing elements with different types. Does it really matter? Or can one prove that this is an ambiguity? What is possible in Go 1 is:

m := map[interface{}]int{1: 2, "2": 3, 3.0: 4}
for i, e := range m {
    println(i, e)
}

Is i considered the same type? Apparently, we say i is interface{}, the same type. But does an interface really express the type? Programmers have to manually check what the possible types are. When using for, [], and unpacking, does it really matter to the user that they are not accessing the same type? What are the arguments against this? Same for the fs:

for idx, f := range k.fs {
    switch f.(type) { // compare to interface{}, here is zero overhead.
    case int:
        // ...
    case float64:
        // ...
    case string:
        // ...
    }
}

If you have to use a type switch to access an element of a variadic generic type, I don't see the advantage. I can see how with some choices of compilation technique it might possibly be slightly more efficient at run time than using interface{}. But I think the difference would be fairly small, and I don't see why it would be any more type-safe. It's not immediately obvious that it's worth making the language more complex.

I wasn't intending to ignore the Metric example, I just don't yet see how to use variadic generic types to make it simpler to write. If I need to use a type switch in the body of Metric, then I think I would rather write Metric2 and Metric3.

What's the definition of "making the language more complex"? We all agree that generics is a complex thing, and it will never make the language simpler than Go 1. You already put huge effort into designing and implementing it, but it is quite unclear to Go users: what is the definition of "feels like writing... Go"? Is there a quantified metric to measure it? How could a language proposal argue that it is not making the language more complex? In the Go 2 language proposal template, the goals are quite straightforward at first impression:

  1. address an important issue for many people,
  2. have minimal impact on everybody else, and
  3. come with a clear and well-understood solution.

But, questions could be: How many is "many"? What stands for "important"? How to measure the impact on an unknown population? When is a problem well-understood? Go is dominating the cloud, but will dominating other areas like scientific numeric computing (e.g., machine learning) or graphical rendering (e.g., the huge 3D market) become one of the targets of Go? Does the problem fit more into "I'd rather do A than B in Go & There is no use case because we can do it in another way" or "B is not offered, therefore we don't use Go & The use case is not there yet because the language cannot easily express it"? ... I find those questions painful and endless, and sometimes not even worth answering.

Back to the Metric example: it does not show any need for accessing individual elements. Unpacking the parameter set seems not to be a real need here, although solutions that "coincide" with the existing language, using [ ] indexing and type deduction, can solve the type-safety problem:

f2 := k.fs[1] // f2 is a float64

@changkun If there were clear and objective metrics to decide what language features are good and bad, we wouldn't need language designers - we could just write a program to design an optimal language for us. But there aren't - it always comes down to the personal preferences of some set of people. Which is also, BTW, why it makes no sense to squabble over whether a language is "good" or not - the only question is if you, personally, like it. In the case of Go, the people whose preferences decide are the people on the Go team, and the things you quote are not metrics, they are guiding questions to help you convince them.

Personally, FWIW, I feel variadic type parameters fail on two out of those three. I don't think they address an important issue for many people - the metrics example might benefit from them, but IMO only slightly and it's a very specialized use-case. And I don't think they come with a clear and well-understood solution. I'm unaware of any language supporting something like this. But I may be wrong. It would definitely be helpful, if someone has examples of other languages supporting this - it could provide info on how it's usually implemented and more importantly, how it's used. Maybe it's used more broadly than I can imagine.

@Merovius Haskell has polyvariadic functions as we demonstrated in the HList paper: http://okmij.org/ftp/Haskell/polyvariadic.html#polyvar-fn
It's clearly complex to do this in Haskell, but not impossible.

The motivating example is type safe database access where things like type safe joins and projections can be done, and database schema declared in the language.

For example a database table looks a lot like a record, where there are column names and types. The relational join operation takes two arbitrary records and produces a record with the types from both. You can of course do this by hand, but it's prone to mistakes, is very tedious, obfuscates the meaning of the code with all the hand declared record types, and of course the big feature of an SQL database is that it supports ad-hoc queries, so you can't pre-build all the possible record types, as you don't necessarily know what queries you want until you do them.

So a type-safe relational join operator on records and tuples would be a good use-case. We are only thinking about the type of the function here - it's up to the programmer what the function actually does, whether that's an in memory join of two arrays of tuples, or whether it generates SQL to run on an external DB and marshal the results back in a type-safe way.

This kind of thing gets a much neater embedding in C# with LINQ. Most people seem to think of LINQ as adding lambda functions and monads to C#, but it wouldn't work for its primary use-case without polyvariadics, as you just cannot define a type safe join operator without similar functionality.

I think relational operators are important. After basic operators on Boolean, binary, int, float and string types, sets probably come next, and then relations.

BTW, C++ also offers it, although we don't want to argue that we want this feature in Go because XXX has it :)

I think it would be very odd if k.fs[0] and k.fs[1] had different types. That is not how other indexable values work in Go.

The metrics example is based on https://medium.com/@sameer_74231/go-experience-report-for-generics-google-metrics-api-b019d597aaa4. I think that code requires reflection to retrieve the values. I think that if we are going to add variadic generics to Go, we should get something better than reflection to retrieve the values. Otherwise it just doesn't seem to help all that much.

I think it would be very odd if k.fs[0] and k.fs[1] had different types. That is not how other indexable values work in Go.

The metrics example is based on https://medium.com/@sameer_74231/go-experience-report-for-generics-google-metrics-api-b019d597aaa4. I think that code requires reflection to retrieve the values. I think that if we are going to add variadic generics to Go, we should get something better than reflection to retrieve the values. Otherwise it just doesn't seem to help all that much.

Well, you are requesting something that does not exist. If you dislike [ ], there are two options left: ( ) or { }, and I can see you arguing that parentheses look like a function call and curly braces look like variable initialization. Nobody likes args.0 args.1 since this does not feel like Go. The syntax is trivial.

Actually, I spent some weekend time reading the book "The Design and Evolution of C++"; there are many interesting insights about decisions and lessons, although it was written in 1994:

_"[...] In retrospect, I underestimated the importance of constraints in readability and early error detection."_ ==> Great contract design

"_the function syntax at first glance also looks nicer without extra keyword:_

T& index<class T>(vector<T>& v, int i) { /*...*/ }
int i = index(v1, 10);

_There appear to be nagging problems with this simpler syntax. It is too clever. It is relatively hard to spot a template declaration in a program because [...] The <...> brackets were chosen in preference to parentheses because users found them easier to read. [...] As it happens, Tom Pennello proved that parentheses would have been easier to parse, but that doesn't change the key observation that (human) readers prefer <...>_
" ==> isn't it similar to func F(type T C)(v T) T?

_"I do, however, think that I was too cautious and conservative when it came to specifying template features. I could have included features such as [...]. These features would not have added greatly to the burden of the implementers, and users would have been helped."_

Why does it feel so familiar?

Indexing variadic type parameters (or a tuple) needs to be separated into runtime indexing and compile-time indexing. I guess you may just argue that the lack of support for runtime indexing can confuse users because it is not consistent with compile-time indexing. Even for compile-time indexing, non-type "template" parameters are also missing in the current design.

With all pieces of evidence, the proposal (except the experience report) tries to avoid discussing this feature, and I start to believe it is not about adding variadic generics to Go; they were just removed by design.

I agree that Design and Evolution of C++ is a good book, but C++ and Go have different goals. The final quote there is a good one; Stroustrup doesn't even mention the cost of language complexity for users of the language. In Go we always try to consider that cost. Go is intended to be a simple language. If we added every feature that would help users, it would not be simple. As C++ is not simple.

With all pieces of evidence, the proposal (except the experience report) tries to avoid discussing this feature, and I start to believe it is not about adding variadic generics to Go; they were just removed by design.

I'm sorry, I don't know what you mean here.

Personally I've always considered the possibility of variadic generic types, but I've never taken the time to work out how it would work. The way it works in C++ is very subtle. I would like to see if we can first get non-variadic generics to work. There is certainly time to add variadic generics, if possible, later.

When I criticize the earlier thoughts, I'm not saying that variadic types cannot be done. I'm pointing out problems that I think need to be resolved. If they can't be resolved, then I'm not convinced that variadic types are worth it.

Stroustrup doesn't even mention the cost of language complexity for users of the language. In Go we always try to consider that cost. Go is intended to be a simple language. If we added every feature that would help users, it would not be simple. As C++ is not simple.

Not true IMO. One must note that C++ was the first practitioner to carry generics forward (well, ML was the first language). From what I read in the book, I get the message that C++ was intended to be a simple language (it did not offer generics in the beginning, and followed the same Experiment-Simplify-Ship loop for language design). C++ also had a feature-freeze phase for several years, which is what we have in Go as "The Compatibility Promise". But it got a little out of control over time for many reasonable reasons, and it is not clear whether Go will follow the old path of C++ after the release of generics.

There is certainly time to add variadic generics, if possible, later.

Same feeling to me. Variadic generics are also missing in the first standardized version of templates.

I'm pointing out problems that I think need to be resolved. If they can't be resolved, then I'm not convinced that variadic types are worth it.

I understand your concerns. But the problem is basically solved and just needs to be properly translated to Go (and I guess nobody likes the word "translate"). What I read from your historical generics proposals is that they basically follow what failed in C++'s early proposals and compromise to what Stroustrup regretted. I am interested in your counterarguments about this.

We will have to disagree about the goals of C++. Maybe the original goals were more similar, but looking at C++ today, I think it's clear that their goals are very different than the goals for Go, and I think that has been the case for at least 25 years.

In writing various proposals to add generics to Go, I of course looked at how C++ templates work, as well as looking at many other languages (after all, C++ did not invent generics). I didn't look at what Stroustrup regretted, so if we came to same place, then, great. My thinking is that generics in Go are more like generics in Ada or D than they are like C++. Even today, C++ does not have contracts, which they call concepts but have not yet added to the language. Also, C++ intentionally allows complex programming at compilation time, and in fact C++ templates are themselves a Turing complete language (though I don't know whether that was intentional). I've always considered that to be something to avoid for Go, as the complexity is extreme (though it is more complex in C++ than it would be in Go because of method overloading and resolution, which Go doesn't have).

After trying the current contract implementation for about a month, I am wondering a little about the destiny of the existing built-in functions. All of them can be implemented in a generic way:

func Append(type T)(slice []T, elems ...T) []T {...}
func Copy(type T)(dst, src []T) int {...}
func Delete(type K, V)(m map[K]V, k K) {...}
func Make(type T, I Integer(I))(siz ...I) T {...}
func New(type T)() *T {...}
func Close(type T)(c chan<- T) {...}
func Panic(type T)(v T) {...}
func Recover(type T)() T {...}
func Print(type ...T)(args ...T) {...}
func Println(type ...T)(args ...T) {...}

Will they be gone in Go 2? How could Go 2 deal with such a huge impact on the existing Go 1 codebase? These seem to be open questions.

Moreover, these two are a little bit special:

func Len(type T C)(t T) int {...}
func Cap(type T C)(t T) int {...}

How could we implement such a contract C with the current design, such that a type parameter can only be a generic slice []Ts, map map[Tk]Tv, or channel chan Tc, where T, Ts, Tk, Tv, and Tc are different?

@changkun I don't think "they can be implemented with generics" is a convincing reason to remove them. And you mention a pretty clear and strong reason why they shouldn't be removed. So I don't think they will be. I think that makes the rest of the questions obsolete.

@changkun I don't think "they can be implemented with generics" is a convincing reason to remove them. And you mention a pretty clear and strong reason why they shouldn't be removed.

Yes, I agree that it is not a convincing reason for removing them; that's why I said it explicitly. However, keeping them alongside generics "violates" the existing philosophy of Go, in which language features are orthogonal. Compatibility is the top concern, but adding contracts is likely to leave a huge amount of current code "outdated".

So I don't think they will be. I think that makes the rest of the questions obsolete.

Let's try not to ignore the question and consider it as a real-world use case of contracts. If one comes up with similar requirements, how could we implement it with the current design?

Clearly we aren't going to get rid of the existing predeclared functions.

While it's possible to write a parameterized function signature for delete, close, panic, recover, print, and println, I don't think it's possible to implement them without relying on internal magic functions.

There are partial versions of Append and Copy at https://go.googlesource.com/proposal/+/refs/heads/master/design/go2draft-contracts.md#append. It's not complete, because append and copy have special cases for a second argument of type string, which is not supported by the current design draft.

Note that the signature for Make, above, is not valid according to the current design draft. New isn't quite the same as new, but close enough.

With the current design draft Len and Cap would have to take an argument of type interface{}, and as such would not be compile-time-type-safe.

https://go-review.googlesource.com/c/go/+/187317

Please don't use .go2 file extensions; we have modules to do this kind of versioning. I understand if you are doing it as a temporary solution to make life easier while experimenting with contracts, but please make sure that in the end the go.mod file is going to take care of mixing Go packages without the need for .go2 file extensions. It would be a blow against the module developers who are trying hard to make sure modules work as well as they can. Using .go2 file extensions is like saying: nope, I don't care about your module stuff, I'm going to do it my way anyway because I don't want my 10-year-old pre-module dinosaur Go compiler to break.

@gertcuykens .go2 files are only for the experiment; they will not be used when generics land in the compiler.

(I'm going to hide our comments since they don't really add to the discussion and it's long enough as-is.)

Recently I explored a new generic syntax in the K language that I designed, because K borrowed a lot of grammar from Go, so this Generic grammar may also have some reference value for Go.

The identifier<T> problem is that it conflicts with comparison operators and also bit operators, so I don't agree with this design.

Scala's identifier[T] has a better look and feel than the previous design, but after resolving the above conflict, it has a new conflict with the index design identifier[index].
For this reason, the index design of Scala has been changed to identifier(index). This does not work well for languages that already use [] as an index.

In Go's draft, it was declared that generics use (type T), which will not cause conflicts, because type is a keyword, but the compiler still needs more judgment when it is called to resolve the identifier(type)(params). Although it is better than the above solutions, it still does not satisfy me.

By chance, I remembered the special design of method invocation in OC, which gave me inspiration for a new design.

What if we put the identifier and the generic as a whole and put them in [] together?
We can get the [identifier T]. This design does not conflict with the index, because it must have at least two elements, separated by spaces.
When there are multiple generics, we can write [identifier T V] like this, and it will not conflict with the existing design.

Substituting this design into Go, we can get the following example.
E.g.

type [Item T] struct {
    Value T
}

func (it [Item T]) Print() {
    println(it.Value)
}

func [TestGenerics T V]() {
    var a = [Item T]{}
    a.Print()
    var b = [Item V]{}
    b.Print()
}

func main() {
    [TestGenerics int string]()
}

This looks very clear.

Another benefit of using [] is that it has some inheritance from Go's original Slice and Map design, and will not cause a sense of fragmentation.

[]int  ->  [slice int]

map[string]int  ->  [map string int]

We can make a more complicated example

var a map[int][]map[string]map[string][]string

var b [map int [slice [map string [map string [slice string]]]]]

This example still maintains a relatively clear effect, and at the same time has a small impact on compilation.

I have implemented and tested this design in K and it works well.

I think this design has a certain reference value and may be worthy of discussion.


great

After some back-and-forth and several re-readings, I overall support the current design draft for Contracts in Go. I appreciate the amount of time and effort that has gone into it. While the scope, concepts, implementation, and most tradeoffs seem sound, my concern is that the syntax needs to be overhauled to improve readability.

I wrote up a series of proposed changes to address this:

The key points are:

  • Method Call/Type-Assert Syntax for Contract Declaration
  • The "Empty Contract"
  • Non-Parenthetical Delimiters

At the risk of preempting the essay, I'll give a few pieces of syntax sans explanation, converted from samples in the current Contracts design draft. Note that the F«T» form of delimiters is illustrative, not prescriptive; see the writeup for details.

type List«type Element contract{}» struct {
    next *List«Element»
    val  Element
}

and

contract viaStrings«To, From» {
    To.Set(string)
    From.String() string
}

func SetViaStrings«type To, From viaStrings»(s []From) []To {
    r := make([]To, len(s))
    for i, v := range s {
        r[i].Set(v.String())
    }
    return r
}

and

func Keys«type K comparable, V contract{}»(m map[K]V) []K {
    r := make([]K, 0, len(m))
    for k := range m {
        r = append(r, k)
    }
    return r
}

k := maps.Keys(map[int]int{1:2, 2:4})

and

contract Numeric«T» {
    T.(int, int8, int16, int32, int64,
        uint, uint8, uint16, uint32, uint64, uintptr,
        float32, float64,
        complex64, complex128)
}

func DotProduct«type T Numeric»(s1, s2 []T) T {
    if len(s1) != len(s2) {
        panic("DotProduct: slices of unequal length")
    }
    var r T
    for i := range s1 {
        r += s1[i] * s2[i]
    }
    return r
}

Without really changing Contracts under the hood, this is far more readable to me as a Go developer. I also feel far more confident teaching this form to someone who is learning Go (albeit late in the curriculum).

@ianlancetaylor Based on your comment at https://github.com/golang/go/issues/36533#issuecomment-579484523 I'm posting in this thread rather than starting a new issue. It's also listed on the Generics Feedback Page. Not sure if I need to do anything else to get it "officially considered" (i.e. Go 2 proposal review group?) or if feedback is still actively being gathered.

From the contracts design draft:

Why not use the syntax F<T> like C++ and Java?
When parsing code within a function, such as v := F<T>, at the point of seeing the < it's ambiguous whether we are seeing a type instantiation or an expression using the < operator. Resolving that requires effectively unbounded lookahead. In general we strive to keep the Go parser simple.

Not particularly in conflict with my last post: Angle Brace Delimiters for Go Contracts

Just some ideas on how to get around this point of the parser getting confused. Couple samples:

// Lifted from the design draft
func New<type K, V>(compare func(K, K) int) *Map<K, V> {
    return &Map{<K, V> compare: compare}
}

// ...

func (m *Map<K, V>) InOrder() *Iterator<K, V> {
    sender, receiver := chans.Ranger(<keyValue<K, V>>)
    var f func(*node<K, V>) bool
    f = func(n *node<K, V>) bool {
        if n == nil {
            return true
        }
        // Stop sending values if sender.Send returns false,
        // meaning that nothing is listening at the receiver end.
        return f(n.left) &&
            sender.Send(keyValue{<K, V> n.key, n.val}) &&
            f(n.right)
    }
    go func() {
        f(m.root)
        sender.Close()
    }()
    return &Iterator{receiver}
}

// ...

Essentially, just a different position for the type parameters in scenarios where < could be ambiguous.

@tooolbox Regarding your angle bracket comment. Thanks, but to me personally that syntax reads like first making a decision that we must use angle brackets for type parameters and type arguments and then figuring out a way to hammer them in. I think that if we add generics to Go we need to aim for something that fits cleanly and easily into the existing language. I don't think that moving angle brackets inside curly brackets achieves that goal.

Yes, this is a minor detail, but I think that when it comes to syntax minor details are very important. I think that if we're going to add type arguments and parameters, they need to work in simple and intuitive ways.

I certainly don't claim that the syntax in the current design draft is perfect, but I do claim that it fits easily into the existing language. What we need to do now is write more example code to see how well it works in practice. A key point is: how often do people actually have to write type arguments outside of function declarations, and how confusing are those cases? I don't think we know.

Is it a good idea to use [] for generic types, and use () for generic functions? This would be more consistent with the current built-in generics (slices, arrays, and maps already use brackets for their type parameters).

Could the community vote on it? Personally I'd prefer _anything_ over adding more parentheses; it's already difficult to read some function definitions for closures etc., and this adds more clutter.

I don't think a vote is a good way to design a language. Especially with a very hard (probably impossible) to determine and incredibly large set of eligible voters.

I trust the Go designers and community to converge on the best solution and so haven't felt the need to weigh in on anything in this conversation. However, I just had to say how unexpectedly delighted I was by the suggestion of the F«T» syntax.

(Other Unicode brackets: https://unicode-search.net/unicode-namesearch.pl?term=BRACKET.)

Cheers,

  • Bob


We all want the best possible syntax for Go. The design draft uses parentheses because it worked with the rest of Go without causing significant parsing ambiguities. We've stayed with them because they were the best solution in our minds at that time and because there were bigger fish to fry. So far they (parentheses) have held up fairly well.

At the end of the day, if a much better notation is found, that is very easy to change as long as we don't have a compatibility guarantee to adhere to (the parser is trivially adjusted, and any body of code can be converted easily with gofmt).

@ianlancetaylor Thanks for the reply, it's appreciated.

You are right; that syntax came from first deciding "don't use parentheses for type arguments", then picking what I felt was the best candidate and making changes to try to ease the implementation issues with the parser.

If the syntax is difficult to read (hard to know what's going on at a glance), does it really fit easily into the existing language? That's where I think the stance falls short.

It's true, as you touch upon, that type inference could greatly reduce the amount of type arguments that need to be passed in client code. I personally believe that a library author should strive to require zero type arguments to be passed when using their code, and yet it will occur in practice.

Last night, by chance, I ran into the template syntax for D which is surprisingly similar in some respects:

template Square(T) {
    T Square(T t) {
        return t * t;
    }
}

writefln("The square of %s is %s", 3, Square!(int)(3));

template TCopy(T) {
    void copy(out T to, T from) {
        to = from;
    }
}

int i;
TCopy!(int).copy(i, 3);

There are two key differences that I see:

  1. They have ! as the instantiation operator to employ the templates.
  2. Their style of declaration (no multiple return values, methods nested in classes) means that there are natively less parentheses in ordinary code, so using parentheses for type parameters doesn't create the same visual ambiguity.

Instantiation Operator

When using Contracts, the primary visual ambiguity is between an instantiation and a function call (or a type conversion, or...?). Part of why this is problematic is that instantiations are compile-time and function calls are run-time. Go has a lot of visual clues that tell a reader what camp each clause belongs to, but the new syntax muddies these, so it's not obvious if you're looking at types or program flow.

One contrived example:

// Instantiation with unexported types and then function call,
// or chained method call?
a := draw(square, ellipse)(canvas, color)

Proposal: use an instantiation operator to specify type parameters. The ! that D uses seems perfectly acceptable. Some sample syntax:

// Lifted from the design draft
func New(type K, V)(compare func(K, K) int) *Map!(K, V) {
    return &Map!(K, V){compare: compare}
}

// ...

func (m *Map(K, V)) InOrder() *Iterator!(K, V) {
    sender, receiver := chans.Ranger!(keyValue!(K, V))()
    var f func(*node!(K, V)) bool
    f = func(n *node!(K, V)) bool {
        if n == nil {
            return true
        }
        // Stop sending values if sender.Send returns false,
        // meaning that nothing is listening at the receiver end.
        return f(n.left) &&
            sender.Send(keyValue!(K, V){n.key, n.val}) &&
            f(n.right)
    }
    go func() {
        f(m.root)
        sender.Close()
    }()
    return &Iterator{receiver}
}

// ...

From my personal standpoint, the above code is an order of magnitude easier to read. I think that clears up all of the ambiguities, both visually and for the parser. Further, I find myself wondering if this may be the single most important change that could be made to Contracts.

Declaration Style

When declaring types and functions and methods, there is less of a "run-time or compile-time?" problem. A Gopher sees a line starting with type or func and knows that he's looking at a declaration, not program behavior.

However, some visual ambiguities still exist:

// Type-parameterized function,
// or function with multiple return values?
func Draw(cvs canvas, t tool)(canvas, tool) {
    // ...
}
func Draw(type canvas, tool)(cvs canvas, t tool) {
    // ...
}

// Type-parameterized struct, or function call?
func Set(elem constructible) rect {
    // ...
}
type Set(type Elem comparable) struct{
    // ...
}

// Method call, or type-parameterized function?
func Map(type Element)(s []Element, f func(Element) Element) (results []Element) {
    // ...
}
func (t Element) Map(s []Element, f func(Element) Element) (results []Element) {
    // ...
}

Thoughts:

  • I think that these issues are less important than the instantiation problem.
  • The most obvious solution would be changing the delimiters used for type arguments.
  • Possibly putting some other sort of operator or character in there (! might get lost, what about #?) could disambiguate things.

EDIT: @griesemer thanks for the additional clarification!

Thanks. Just to pose the natural question: why is it important to know whether a particular call is evaluated at run time or at compile time? Why is that the key question?

@tooolbox

// Instantiation with unexported types and then function call,
// or chained method call?
a := draw(square, ellipse)(canvas, color)

Why would it matter either way? For a casual reader it wouldn't matter if this were a piece of code that was executed during compile time or runtime. For everyone else, they can just glance at the definition of the function to know what is going on. Your later examples do not appear to be ambiguous at all.

In fact, using () for type parameters makes some sense, as it looks like you are calling a function that returns a function - and that is more or less right. The difference being that the first function is accepting types, which are usually uppercased, or very well known.

At this stage, it's much more important to figure out the dimensions of the shed, not its color.

I don't think what @tooolbox is talking about is really a difference between compile-time and run-time. Yes that is one difference, but it's not the important one. The important one is: is this a function call or a type declaration? You want to know because they behave differently and you don't want to have to deduce whether some expression is making two function calls or one, because that's a big difference. I.e. an expression like a := draw(square, ellipse)(canvas, color) is ambiguous without doing work to examine the surrounding environment.

Being able to visually parse the control flow of the program is important. I think Go has been a great example of this.
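For concreteness, here is the two-function-call reading of that expression in valid Go 1 (all names are hypothetical, made up for illustration):

```go
package main

import "fmt"

// draw returns a closure, so draw(...)(...) is two calls today.
// Under the draft, an expression of the same shape could instead be
// one call with type arguments, which is the ambiguity in question.
func draw(a, b string) func(canvas, color string) string {
	return func(canvas, color string) string {
		return fmt.Sprintf("%s+%s on %s in %s", a, b, canvas, color)
	}
}

func main() {
	a := draw("square", "ellipse")("canvas", "red")
	fmt.Println(a)
}
```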

Thanks. Just to pose the natural question: why is it important to know whether a particular call is evaluated at run time or at compile time? Why is that the key question?

Sorry, seems that I bungled my communication. This is the key point I was trying to getting across:

it's not obvious if you're looking at types or program flow

(At the moment, one is sorted out during compilation and the other occurs at run-time, but those are...characteristics, not the key point, which @infogulch rightly picked up—thanks!)


I've seen the opinion in a few places that the generics in the draft can be likened to function calls: it's a sort of compile-time function that returns the real function or type. While that's helpful as a mental model of what's occurring during compilation, it doesn't translate syntactically. Syntactically, they should then be named like functions. Here's an example:

// Example from the Contracts draft
Print(int)([]int{1, 2, 3})

// New naming that communicates behavior and intent
MakePrintFunc(int)([]int{1, 2, 3}) // Chained function call, great!

There, that actually looks like a function that returns a function; I think that's quite readable.

Another way to go about it would be suffixing everything with Type, so it's clear from the name that when you "call" the function you're getting a type. Otherwise, it's not obvious that (for example) Pair(...) produces a struct type rather than a struct. But if that convention's in place, this code becomes clear: a := drawType(square, ellipse)(canvas, color)

(I realize that a precedent is the "-er" convention for interfaces.)

Note that I don't particularly support the above as a solution, I'm just illustrating how I think "generics as functions" is not fully and unambiguously expressed by current syntax.


Again, @infogulch has summarized my point very well. I'm in support of visually differentiating type arguments so that it's clear they are part of the type.

Maybe the visual part of it will be enhanced by the editor's syntax highlighting.

I don't know much about parsers and how you cannot do too much look-ahead.

From a user's perspective I don't want to see yet another character in my code, so «» would not get my support (I did not find them on my keyboard!).

However, seeing round brackets followed by round brackets is also not very eye pleasing.

How about simply using curly brackets?

a := draw{square, ellipse}(canvas, color)

In Print(int)([]int{1,2,3}) the only behavioral difference is "compile-time vs. run-time", though. Yes, MakePrintFunc instead of Print would emphasize this similarity more, but… isn't that an argument not to use MakePrintFunc? Because it actually hides the real behavioral difference.

FWIW, if anything you seem to be making an argument to use different separators for parametric functions and parametric types. Because Print(int) can actually be thought of as equivalent to a function returning a function (evaluated at compile time), whereas Pair(int, string) can not - it's a function returning a type. Print(int) actually is a valid expression which evaluates to a func-value, whereas Pair(int, string) is not a valid expression, it's a type-spec. So the real difference in usage isn't "generic vs. non-generic functions" it's "generic functions vs. generic types". And from that POV, I think there is a strong case to be made to use () at least for parametric functions anyway, because it emphasizes the nature of parametric functions to actually represent values - and maybe we should use <> for parametric types.

I think the argument for () for parametric types comes from functional programming, where these functions-returning-types are a real concept called Type constructors and can actually be used and referenced as functions. And FWIW, that's also why I wouldn't argue to not use () for parametric types. Personally, I'm very comfortable with this concept and I would prefer the advantage of fewer different separators, over the advantage of disambiguating parametric functions from parametric types - after all, we have no issue with pure identifiers refering to either types or values as well.

I don't think what @tooolbox is talking about is really a difference between compile-time and run-time. Yes that is one difference, but it's not the important one. The important one is: is this a function call or a type declaration? You _want_ to know because they behave differently and you don't want to have to deduce whether some expression is making two function calls or one, because that's a big difference. I.e. an expression like a := draw(square, ellipse)(canvas, color) is ambiguous without doing work to examine the surrounding environment.

Being able to visually parse the control flow of the program is important. I think Go has been a great example of this.

Type declarations would be very easy to see, since they all start with the keyword type. Your example is obviously not one of them.

Maybe the visual part of it will be enhanced by the editor's syntax highlighting.

I think, ideally, syntax should be clear no matter what color it is. That has been the case for Go, and I don't think it would be good to drop down from that standard.

How about simple using curly brackets?

I believe this unfortunately conflicts with a struct literal.
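To illustrate the conflict (the `draw` struct here is hypothetical): with curly braces as type-argument delimiters, `draw{square, ellipse}` would be indistinguishable from an ordinary composite literal, which is already valid Go 1:

```go
package main

import "fmt"

// draw is an ordinary struct type. With {} doubling as type-argument
// delimiters, the literal in main could equally be read as
// instantiating a generic draw with two type arguments.
type draw struct {
	a, b string
}

func main() {
	d := draw{"square", "ellipse"}
	fmt.Println(d) // {square ellipse}
}
```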

In Print(int)([]int{1,2,3}) the only behavioral difference is "compile-time vs. run-time", though. Yes, MakePrintFunc instead of Print would emphasize this similarity more, but… isn't that an argument not to use MakePrintFunc? Because it actually hides the real behavioral difference.

Well, for one, this is why I would support Print!(int)([]int{1,2,3}) over MakePrintFunc(int)([]int{1,2,3}). It's plain that something unique is happening.

But again, the question that @ianlancetaylor asked earlier: why does it matter if the type instantiation/function-returning-function is compile-time vs. run-time?

Thinking about it, if you wrote some function calls and the compiler was able to optimize them and calculate their result at compile-time, you'd be happy for the performance gain! Rather, the important aspect is what the code is doing, what is the behavior? That should be obvious at a glance.

When I see Print(...) my first instinct is "that's a function call that writes to somewhere". It doesn't communicate "this will return a function". In my opinion, any of these is better because it can communicate the behavior and intent:

  • MakePrintFunc(...)
  • Print!(...)
  • Print<...>

In other words, this piece of code "references" or in some way "gives me" a function that can now be called in the following bit of code.

FWIW, if anything you seem to be making an argument to use different separators for parametric functions and parametric types. ...

No, I know that the last few examples have been about functions, but I would advocate consistent syntax for parametric functions and parametric types. I don't believe the Go Team would add Generics into Go unless they are a unified concept with a unified syntax.

When I see Print(...) my first instinct is "that's a function call that writes to somewhere". It doesn't communicate "this will return a function".

Neither does func Print(…) func(…), when called as Print(…). Yet, we are collectively fine with that. Without a special call-syntax, if a function returns a func.
The Print(…) syntax tells you pretty much exactly what it does today: That Print is a function that returns some value, which is what Print(…) evaluates to. If you are interested in the type that function returns, look at its definition.
Or, far more probably, use the fact that it's actually Print(…)(…) as an indicator that it returns a function.

Thinking about it, if you wrote some function calls and the compiler was able to optimize them and calculate their result at compile-time, you'd be happy for the performance gain!

Sure. We already have that. And I'm very happy that I don't need specific syntactical annotations to make them special, but can just trust that the compiler will provide continually improving heuristics over which functions these are.

In my opinion, any of these is better because it can communicate the behavior and intent:

Note that the first at least is 100% compatible with the design. It doesn't prescribe any form for the identifiers used and I hope you don't suggest prescribing that (and if you do, I'd be interested why the same rules doesn't apply to just returning a func).

No, I know that the last few examples have been about functions, but I would advocate consistent syntax for parametric functions and parametric types.

Well, I agree, as I said :) I'm just saying that I don't understand how the arguments you are making can be applied along the "generic vs. non-generic" axis, as there are no important behavioral changes between the two. They would make sense along the "type vs. function" axis, because whether something is a type-spec or an expression is very important for the context it can be used in. I still wouldn't agree, but at least I would understand them :)

@Merovius thanks for your comment.

Neither does func Print(…) func(…), when called as Print(…). Yet, we are collectively fine with that. Without a special call-syntax, if a function returns a func.
The Print(…) syntax tells you pretty much exactly what it does today: That Print is a function that returns some value, which is what Print(…) evaluates to. If you are interested in the type that function returns, look at its definition.

I hold the view that a function's name should be related to what it does. Therefore I expect Print(...) to print something, regardless of what it returns. I believe this is a reasonable expectation, and one that could be found to be fulfilled in a majority of existing Go code.

If I see Print(...)(...) it communicates that the first () has printed something, and that the function has returned a function of some kind, and the second () is executing that additional behavior.

(I would be surprised if this was an unusual or rare opinion, but I wouldn't argue with some survey results.)

Note that the first at least is 100% compatible with the design. It doesn't prescribe any form for the identifiers used and I hope you don't suggest prescribing that (and if you do, I'd be interested why the same rules doesn't apply to just returning a func).

You're darn right I suggested that :)

Look, I listed the 3 ways I could think of to fix the visual ambiguity introduced by type params on functions and types. If you don't see any ambiguity, then you won't like any of the suggestions!

I'm just saying that I don't understand how the arguments you are making can be applied along the "generic vs. non-generic" axis, as there are no important behavioral changes between the two. They would make sense along the "type vs. function" axis, because whether something is a type-spec or an expression is very important for the context it can be used in.

See above points on ambiguity and 3 proposed solutions.

Type parameters are a new thing.

  • If we want to reason about them as a new thing then I propose changing delimiters or adding an instantiation operator to fully differentiate them from regular code: function calls, type conversions, etc.
  • If we want to reason about them as just another function then I propose naming those functions clearly, such that identifier in identifier(...) communicates the behavior and return value.

I prefer the former. In both cases, the changes would be global across the type parameter syntax, as discussed.

There's a couple of other ways to shed light on this:

  1. Survey
  2. Tutorial

1. Survey

Preface: This is not a democracy. I do not believe in syntax-by-vote and I will respect whatever the Go Team's final decision is. I do think that decisions are based on data, and both articulated logic and broad survey data can aid the decision process.

I don't have the means to do this, but I would be interested to know what would happen if you surveyed a few thousand Gophers on "rank these by clarity".

Baseline:

// Lifted from the design draft
func New(type K, V)(compare func(K, K) int) *Map(K, V) {
    return &Map(K, V){compare: compare}
}

// ...

func (m *Map(K, V)) InOrder() *Iterator(K, V) {
    sender, receiver := chans.Ranger(keyValue(K, V))()
    var f func(*node(K, V)) bool
    f = func(n *node(K, V)) bool {
        if n == nil {
            return true
        }
        // Stop sending values if sender.Send returns false,
        // meaning that nothing is listening at the receiver end.
        return f(n.left) &&
            sender.Send(keyValue(K, V){n.key, n.val}) &&
            f(n.right)
    }
    go func() {
        f(m.root)
        sender.Close()
    }()
    return &Iterator{receiver}
}

// ...

Instantiation operator:

// Lifted from the design draft
func New(type K, V)(compare func(K, K) int) *Map!(K, V) {
    return &Map!(K, V){compare: compare}
}

// ...

func (m *Map(K, V)) InOrder() *Iterator!(K, V) {
    sender, receiver := chans.Ranger!(keyValue!(K, V))()
    var f func(*node!(K, V)) bool
    f = func(n *node!(K, V)) bool {
        if n == nil {
            return true
        }
        // Stop sending values if sender.Send returns false,
        // meaning that nothing is listening at the receiver end.
        return f(n.left) &&
            sender.Send(keyValue!(K, V){n.key, n.val}) &&
            f(n.right)
    }
    go func() {
        f(m.root)
        sender.Close()
    }()
    return &Iterator{receiver}
}

// ...

Angle braces: (or double angle braces, either way)

// Lifted from the design draft
func New<type K, V>(compare func(K, K) int) *Map<K, V> {
    return &Map<K, V>{compare: compare}
}

// ...

func (m *Map<K, V>) InOrder() *Iterator<K, V> {
    sender, receiver := chans.Ranger<keyValue<K, V>>()
    var f func(*node<K, V>) bool
    f = func(n *node<K, V>) bool {
        if n == nil {
            return true
        }
        // Stop sending values if sender.Send returns false,
        // meaning that nothing is listening at the receiver end.
        return f(n.left) &&
            sender.Send(keyValue<K, V>{n.key, n.val}) &&
            f(n.right)
    }
    go func() {
        f(m.root)
        sender.Close()
    }()
    return &Iterator{receiver}
}

// ...

Appropriately named functions:

// Lifted from the design draft
func NewConstructor(type K, V)(compare func(K, K) int) *MapType(K, V) {
    return &MapType(K, V){compare: compare}
}

// ...

func (m *MapType(K, V)) InOrder() *IteratorType(K, V) {
    sender, receiver := chans.RangerType(keyValueType(K, V))()
    var f func(*nodeType(K, V)) bool
    f = func(n *nodeType(K, V)) bool {
        if n == nil {
            return true
        }
        // Stop sending values if sender.Send returns false,
        // meaning that nothing is listening at the receiver end.
        return f(n.left) &&
            sender.Send(keyValueType(K, V){n.key, n.val}) &&
            f(n.right)
    }
    go func() {
        f(m.root)
        sender.Close()
    }()
    return &Iterator{receiver}
}

// ...

...Funny, I actually quite like the last one.

(How do you think these would fare in the broad world of Gophers @Merovius ?)

2. Tutorial

I think this would be a very useful exercise: write a beginner-friendly tutorial for your favorite syntax, and have some people read it and apply it. How easily are the concepts communicated? What are the FAQ and how do you answer them?

The design draft is meant to communicate the concept to experienced Gophers. It follows the chain of logic, dipping you in slowly. What's the concise version? How do you explain the Golden Rules of Contracts in one easily assimilated blog post?

This could present a sort of different angle or slice of data than typical feedback reports.

@tooolbox I think what you haven't answered yet is: Why is this a problem for parametric functions, but not for non-parametric functions returning a func? I can, today, write

func Print(a string) func(string) {
    return func(b string) {
        fmt.Println(a+b)
    }
}

func main() {
    Print("foo")("bar")
}

Why is this okay and doesn't lead you to be super confused by the ambiguity, but as soon as Print takes a type-parameter instead of a value-parameter, this gets unbearable? And would you (leaving aside the obvious compatibility-questions) also suggest we add a restriction to go proper, that this shouldn't be possible, unless Print is renamed to MakeXFunc for some X? If not, why not?

@tooolbox would this really be a problem when the assumption is that type inference might very well remove the need to specify the parametric types for functions, leaving just a simple-looking function call?

@Merovius I don't think the issue is with the syntax Print("foo")("bar") itself, because it's already possible in Go 1, precisely because it has a single possible interpretation. The issue is that with the unmodified proposal the expression Foo(X)(Y) is now ambiguous and could mean you are making two function calls (like in Go 1), or it could mean that you're making one function call with type arguments. The problem is being able to locally deduce what the program does, and those two possible semantic interpretations are very different.

@urandom I agree that type inference may be able to eliminate the bulk of explicitly provided type parameters, but I don't think that shoving all the cognitive complexity into the dark corners of the language just because they are only rarely used is a good idea either. Even if it's rare enough that most people don't typically encounter them, they'll still encounter it sometimes, and allowing some code to have confusing control flow as long as it's not "most" code leaves a bad taste in my mouth. Especially since Go is currently so approachable when reading "plumbing" code including stdlib. Maybe type inference is so good that "rare" becomes "never", and Go programmers remain highly disciplined and never design a system where type parameters are necessary; then this whole issue is basically moot. But I wouldn't bet on it.

I think the main thrust of @tooolbox's argument is that we shouldn't blithely overload existing syntax with context-sensitive semantics, and we should instead find some other syntax that is not ambiguous (Even if it's just making a small addition such as Foo(X)!(Y).) I think this is an important measure when considering syntax options.

I used and read a bit of D code, back in the days (~2008-2009), and I must say the ! was always tripping me up.

Let me paint this shed with #, $ or @ instead (as they don't have any meaning in Go or C). This could then open up the possibility of using curly braces without any confusion with maps, slices, or structs:

  • Foo@{X}(Y)
  • Foo${X}(Y)
  • Foo#{X}(Y)

Or square brackets.

In discussions like this it's essential to look at real code.

For example, consider that few people write Foo(X)(Y). In Go, type names and variable name and function names look exactly the same, yet people are rarely confused about what they are looking at. People understand that int64(v) is a type conversion and F(v) is a function call, even though they look exactly the same.

We need to look at real code to see whether type arguments really are confusing in practice. If they are, then we must adjust the syntax. In the absence of real code, we simply don't know.
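A minimal runnable illustration of that point (`F` is an arbitrary example function, not from any proposal): the conversion and the call below are syntactically identical, and readers rely on the identifiers rather than punctuation to tell them apart.

```go
package main

import "fmt"

// F is an ordinary function; int64 is a type. At the two call sites
// below, nothing but the identifier distinguishes a type conversion
// from a function call.
func F(v float64) float64 { return v * 2 }

func main() {
	v := 3.7
	fmt.Println(int64(v)) // type conversion: truncates to 3
	fmt.Println(F(v))     // function call: 7.4
}
```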

On Wed, May 6, 2020, at 13:00, Ian Lance Taylor wrote:

People understand that int64(v) is a type conversion and F(v) is a
function call, even though they look exactly the same.

I don't have an opinion one way or the other right now on the proposal syntax, but I don't think this particular example is very good. It may be true for built-in types, but I have actually gotten confused by this exact problem several times myself (I was grepping for a function definition and being very confused about how the code was working before I realized it was likely a type, and I couldn't find the function because it wasn't a function call at all). Not the end of the world, and probably not a problem at all for people who like fancy IDEs, but I've wasted 5 minutes or so grepping around for this multiple times.

—Sam

@ianlancetaylor one thing I noticed from your example is that you can write a function that takes a type and returns another type with the same meaning, so calling a type as a basic type conversion like int64(v) makes sense in the same way that strconv.Atoi(v) makes sense.

But while you can do UseConverter(strconv.Atoi), UseConverter(int64) is not possible in Go 1. Having parentheses for type parameters might open some possibilities if generics can be used for conversions, like:

func StrToNumber(type K)(s string) K {
	asInt, _ := strconv.Atoi(s) // Atoi returns (int, error); error ignored for brevity
	return K(asInt)
}

Why is this okay and doesn't lead you to be super confused by the ambiguity

Your example is not okay. I don't care if the first call takes arguments or type parameters. You have a Print function that doesn't print anything. Can you imagine reading/reviewing that code? Print("foo") with the second set of parentheses omitted looks fine but is secretly a no-op.

If you submitted that code to me in a PR, I would tell you to change the name to PrintFunc or MakePrintFunc or PrintPlusFunc or something that communicated its behavior.

I used and read a bit of D code, back in the days (~2008-2009), and I must say the ! was always tripping me up.

Ha, interesting. I don't have any particular preference for an instantiation operator; those seem like decent options.

In Go, type names and variable name and function names look exactly the same, yet people are rarely confused about what they are looking at. People understand that int64(v) is a type conversion and F(v) is a function call, even though they look exactly the same.

I agree, people usually can quickly differentiate between type conversions and function calls. Why do you think that is?

My personal theory is that types are usually nouns, and functions are usually verbs. So when you see Noun(...) it's pretty clear it's a type conversion, and when you see Verb(...) it's a function call.

We need to look at real code to see whether type arguments really are confusing in practice. If they are, then we must adjust the syntax. In the absence of real code, we simply don't know.

That makes sense.

Personally, I came to this thread because I read the Contracts draft (probably 5 times, each time bouncing off and then getting further when I came back later) and found the syntax to be confusing and unfamiliar. I liked the concepts when I finally grokked them, but there was a huge barrier because of the ambiguous syntax.

There's a lot of "real code" at the bottom of the Contracts draft, handling all of those common use cases, which is great! However, I find it tricky to visually parse; I'm slower reading and understanding the code. It seems to me that I have to look at the arguments of things and the broader context to know what things are and what the control flow is, and it seems like that's a step down from regular code.

Let's take this real code:

import "container/orderedmap"

var m = orderedmap.New(string, string)(strings.Compare)

func Add(a, b string) {
    m.Insert(a, b)
}

When I read orderedmap.New( I expect what follows to be the arguments for the New function, those key pieces of information that the ordered map needs to function. But those are actually in the second set of parentheses. I am thrown by this. It makes the code more difficult to grok.

(This is just one example, it's not everything that I see that's ambiguous, but it's hard to have a detailed discussion about a broad set of points.)

Here's what I would suggest:

// Instantiation operator
var m = orderedmap.New!(string, string)(strings.Compare)
// Alternate delimiters -- notice I don't insist on any particular kind
var m = orderedmap.New<|string, string|>(strings.Compare)
// Appropriately named function
var m = orderedmap.MakeConstructor(string, string)(strings.Compare)

In the first two examples, a different syntax serves to break my assumption that the first set of parentheses contain the arguments for New(), so the code is less surprising and the flow is more observable from a high level.

The third option uses naming to make the flow unsurprising. I now expect that first set of parentheses to contain the arguments necessary to create a constructor function and I'm expecting that the return value is a constructor function which can in turn be called to produce an ordered map.


I can for sure read code in the current style. I was able to read all of the code in the Contracts draft. It's just slower because it takes longer for me to process it. I've tried my best to analyze why this is and report it: in addition to the orderedmap.New example, https://github.com/golang/go/issues/15292#issuecomment-623649521 has a good summary, although I could probably come up with more. The degree of ambiguity varies between the different examples.

I acknowledge that I won't get everyone's agreement, because readability and clarity are somewhat subjective and perhaps influenced by the person's background and favorite languages. I do think that 4 kinds of parsing ambiguities is a good indicator that we have an issue, though.

import "container/orderedmap"

var m = orderedmap.NewOf(string, string)(strings.Compare)

func Add(a, b string) {
    m.Insert(a, b)
}

I think NewOf reads better than New because New usually returns an instance, not a generic that creates an instance.


You have a Print function that doesn't print anything.

To be clear, since there is some automatic type inference, generic Print(foo) would either be a real print call via inference or an error. In Go today, bare identifiers are not allowed:

package main

import (
    "fmt"
)

func main() {
    fmt.Println
}

./prog.go:8:5: fmt.Println evaluated but not used

I do wonder if there is some way to make the generic inference less confusing.

@tooolbox

Your example is not okay. I don't care if the first call takes arguments or type parameters. You have a Print function that doesn't print anything. Can you imagine reading/reviewing that code?

You have omitted the relevant follow-up questions here. I agree with you that it's not really readable. But you are arguing for a language-level enforcement of this constraint. I wasn't saying "you are fine with this" meaning "you are okay with this code", but meaning "you are okay with the language allowing that code".

Which was my follow-up question. Do you think Go is a worse language because it didn't put in a name-restriction for functions-that-return-func? If not, why would it be a worse language if we did not put that restriction on such functions when they take a type-argument instead of a value-argument?

@Merovius

But you are arguing for a language-level enforcement of this constraint.

No, he's arguing that relying on naming standards is a potentially valid solution to the problem. An informal rule like "type authors are encouraged to name their generic types in a way that's less easily confused with the name of a function" is a valid solution to the ambiguity problem, as in it would literally solve the problem in individual cases.

He doesn't hint anywhere that this solution must be enforced by the language; he's saying that if the maintainers decide to keep the current proposal as-is, even then there are potential practical solutions to the ambiguity problem. And he's claiming that the ambiguity problem is real and important to consider.

Edit: I think we're veering a bit off course. I think more "real" example code would be very beneficial to the conversation at this point.

No, he's arguing that relying on naming standards is a potential valid solution to the problem.

Are they? I tried to specifically ask:

Note that the first at least is 100% compatible with the design. It doesn't prescribe any form for the identifiers used and I hope you don't suggest prescribing that (and if you do, I'd be interested why the same rules doesn't apply to just returning a func).

You're darn right I suggested that :)

I agree that "prescribe" is not extremely specific here, but that's at least the question I intended. If they are indeed not arguing in favor of a language-level requirement built into the design, I apologize for the misunderstanding, of course. But I do feel justified in assuming that "prescribe" is stronger than "an informal rule". Especially if put into the context of the other two suggestions they put forth (on the same footing), which are language-level constructs as they don't even use currently valid identifiers.

Will there be a vgo-like plan to allow the community to try the latest generic proposal?

After playing a bit with the contract-enabled playground, I don't really see what all the fuss is about needing to differentiate between the type arguments and the regular ones.

Consider this example. I left the type initializers on all functions, even though I could omit all of them and it would still compile just fine. This seems to indicate that the vast majority of such potential code would not even include them, which in turn would not cause any confusion.

In case these type parameters are included, however, certain observations can be made:
a) the types are either the built-in ones, which everyone knows and can identify immediately
b) the types are 3rd party, and in that case will be TitleCased, which would make them stand out quite a bit. Yes, it would be possible, though unlikely, that it could be a function that returns another function, and the first call consumes 3rd party exported variables, but I think this is extremely rare.
c) the types are some private types. In this case, they would look more like regular variable identifiers. However, since they are not exported, this would mean that the code the reader is looking at is not part of some documentation they are trying to decipher, and, more importantly, they are already reading the code. Therefore they can do the extra step and just jump to the definition of the function to remove any ambiguity.

The fuss is about how it looks without generics (https://play.golang.org/p/7BRdM2S5dwQ): for somebody who is new to programming, a separate Stack for each type, like StackString, StackInt, ..., is a lot easier to program than a Stack(T) in the current generic syntax proposal. I have no doubt the current proposal is well thought out, as your example shows, but the value of simplicity and clarity drops by a lot. I understand the first priority is to find out whether it works by testing, but once we agree the current proposal covers most cases and there are no technical compiler difficulties, an even higher priority is making it understandable for everybody, which has always been the number one reason for Go's success.

@Merovius No, it's like @infogulch said, I meant creating a convention a la the -er on interfaces. I mentioned that above, sorry for the confusion. (I am a "he" btw.)

Consider this example. I left the type initializers on all functions, even though I could omit all of them and it would still compile just fine. This seems to indicate that the vast majority of such potential code would not even include them, which in turn would not cause any confusion.

How about the same example in a forked version of the generics playground?

I used ::<> for the type parameter clause, and if there's a single type you can omit the <>. Shouldn't be any parser ambiguity on the angle brackets, and it makes it easy for me to read the code, both the generics and the code using the generics. (And if the type params are inferred, so much the better.)

As I said earlier, I wasn't stuck on ! for type instantiation (and I think :: looks better upon review). And it only helps with where the generics are used, not so much in the declarations. So this somewhat combines the two, omitting the <> where unnecessary, somewhat like omitting enclosing () for function return parameters if there's only one.

Sample excerpt:

type Stack::<type E> []E

func (s Stack::E) Peek() E {
    return s[len(s)-1]
}

func (s *Stack::E) Pop() {
    *s = (*s)[:len(*s)-1]
}

func (s *Stack::E) Push(value E) {
    *s = append(*s, value)
}

type StackIterator::<type E> struct{
    stack Stack::E
    current int
}

func (s *Stack::E) Iter() Iterator::E {
    it := StackIterator::E{stack: *s, current: len(*s)}

    return &it
}

func (i *StackIterator::E) Next() (bool) { 
    i.current--

    if i.current < 0 { 
        return false
    }

    return true
}

func (i *StackIterator::E) Value() E { 
    if i.current < 0 {
        var zero E
        return zero
    }

    return i.stack[i.current]
}

// ...

var it Iterator::string = stack.Iter()

it = Filter::string(it, func(s string) bool {
    return s == "foo" || s == "beta" || s == "delta"
})

it = Map::<string, string>(it, func(s string) string {
    return s + ":1"
})

it = Distinct::string(it)

println(Reduce(it, "", func(a, b string) string {
    if a == "" {
        return b
    }
    return a + ":" + b
}))

For this example, I also adjusted the variable names; I think E for "Element" is more readable than T for "Type".

As I said, by making the generics stuff look different, the underlying Go code becomes visible. You know what you're looking at, the control flow is obvious, there's no ambiguity, etc.

It's also just fine with more type inference:

var it Iterator::string = stack.Iter()

it = Filter(it, func(s string) bool {
    return s == "foo" || s == "beta" || s == "delta"
})

it = Map::<string, string>(it, func(s string) string {
    return s + ":1"
})

it = Distinct(it)

println(Reduce(it, "", func(a, b string) string {
    if a == "" {
        return b
    }
    return a + ":" + b
}))

@tooolbox Apologies, then, we were talking past each other :)

somebody who is new to programming a separate Stack for each type like StackString, StackInt, ... is a lot easier to program than a Stack(T)

I would really be surprised if that was the case. No one is infallible, and the first bug that sneaks itself into even a simple piece of code will hammer in how wrong that statement is in the long run.

The point of my example was to illustrate the usage of parametric functions and their instantiation with concrete types, which is the crux of this discussion, not whether or not the sample Stack implementation was any good.

The point of my example was to illustrate the usage of parametric functions and their instantiation with concrete types, which is the crux of this discussion, not whether or not the sample Stack implementation was any good.

I don't think @gertcuykens meant to knock your Stack implementation, seems like he felt that the generics syntax is unfamiliar and difficult to understand.

In case these type parameters are included, however, certain observations can be made:
(a)...(b)...(c)...(d)...

I see all of your points, appreciate your analysis, and they're not wrong. You're correct that, in a majority of cases, by examining the code closely, you can determine what it's doing. I don't think that disproves the reports of Go devs who say the syntax is confusing, ambiguous, or takes them longer to read, even if they can eventually read it.

On a general basis, the syntax is in an uncanny valley. The code is doing something different, but it looks similar enough to the existing constructs that your expectations are thrown and the glanceability drops. You also can't establish new expectations because (appropriately) these elements are optional, both as a whole and in parts.

For those more specific pathological cases, @infogulch stated it well:

I don't think that shoving all the cognitive complexity into the dark corners of the language just because they are only rarely used is a good idea either. Even if it's rare enough that most people don't typically encounter them, they'll still encounter it sometimes, and allowing some code to have confusing control flow as long as it's not "most" code leaves a bad taste in my mouth.

I think, at this point, we're reaching articulation saturation on this particular slice of the topic. No matter how much we talk about it, the acid test will be how quickly and how well Go devs can learn it, read it, and write it.

(And yes, before it's pointed out, the burden should be on the library author, not the client dev, but I don't think we want the Boost Effect where generic libraries are unintelligible to the man on the street. I also don't want Go to turn into a Generic Jamboree, but in part I trust that the design's omissions will limit the pervasiveness.)

We have a playground and we can make forks for other syntax, which is fantastic. Maybe we need even more tools!

People have given feedback. I'm sure more feedback is needed, and maybe we need better or more streamlined feedback systems.

@tooolbox Do you think it's possible to parse the code when you always omit the <> and the type keyword, like so? Maybe it requires a more strict proposal in what can be done, but maybe it's worth the trade-off?

type Stack::E []E

func (s Stack::E) Peek() E {
    return s[len(s)-1]
}

func (s *Stack::E) Pop() {
    *s = (*s)[:len(*s)-1]
}

func (s *Stack::E) Push(value E) {
    *s = append(*s, value)
}

type StackIterator::E struct{
    stack Stack::E
    current int
}

func (s *Stack::E) Iter() Iterator::E {
    it := StackIterator::E{stack: *s, current: len(*s)}

    return &it
}

func (i *StackIterator::E) Next() (bool) { 
    i.current--

    if i.current < 0 { 
        return false
    }

    return true
}

func (i *StackIterator::E) Value() E { 
    if i.current < 0 {
        var zero E
        return zero
    }

    return i.stack[i.current]
}

// ...

var it Iterator::string = stack.Iter()

it = Filter::string(it, func(s string) bool {
    return s == "foo" || s == "beta" || s == "delta"
})

it = Map::string, string (it, func(s string) string {
    return s + ":1"
})

it = Distinct::string(it)

println(Reduce(it, "", func(a, b string) string {
    if a == "" {
        return b
    }
    return a + ":" + b
}))

I don't know why, but this Map::string, string (... just feels weird. It looks as though it creates two tokens: a Map::string, and a string function call.

Also, even though this is not used in Go, using "Identifier::Identifier" might give the wrong impression to first time users, thinking that there's a Filter class/namespace with a string function in it. Reusing tokens from other widely adopted languages for something completely different will cause a lot of confusion.

Do you think it's possible to parse the code when you always omit <> and type like so? Maybe requires a more strict proposal in what can be done, but maybe it's worth the trade off?

No, I don't think so. I agree with @urandom that the space character, without anything enclosing, makes it seem like two tokens. I also personally like the scope of Contracts and am not interested in changing its capabilities.

Also, even though this is not used in Go, using "Identifier::Identifier" might give the wrong impression to first time users, thinking that there's a Filter class/namespace with a string function in it. Reusing tokens from other widely adopted languages for something completely different will cause a lot of confusion.

I haven't actually used a language with :: but I have seen it around. Maybe ! is better then because it would match D, although I do find :: looks better visually.

If we were to go down this path, there can be a lot of discussion about specifically what characters to use. Here's an attempt at narrowing what we're looking for:

  • Something other than bare identifier() so that it doesn't look like a function call.
  • Something that can enclose multiple type parameters, to visually unite them in the way that parentheses can.
  • Something that looks connected to the identifier so it looks like a unit.
  • Something that isn't ambiguous for the parser.
  • Something that doesn't conflict with a different concept that has strong developer mindshare.
  • If possible, something that will affect definitions as well as usages of generics, so those become easier to read as well.

There's a lot of things that could fit.

  • identifier!(a, b) (playground)
  • identifier@(a, b)
  • identifier#(a, b)
  • identifier$(a, b)
  • identifier<:a, b:>
  • identifier.<a, b> it's like a type assertion! :)
  • identifier:<a, b>
  • etc.

Anyone have any ideas on how to further narrow the set of potentials?

Just a quick note that we've considered all those ideas, and we've also considered ideas like

func F(T : a, b T) { }
func G() { F(int : 1, 2) }

But again, the proof of the pudding is in the eating. Abstract discussions in the absence of code are worth having but don't lead to definitive conclusions.

(Not sure if this has been talked about before) I'm seeing that in cases where we receive a struct we won't be able to "extend" an existing API to handle generic types without breaking existing calling code.

For example, given this non generic function

func Repeat(v, n int) []int {
    var r []int
    for i := n; i > 0; i-- {
        r = append(r, v)
    }
    return r
}

Repeat(4, 4)

We can make it generic without breaking backwards compatibility

func Repeat(type T)(v T, n int) []T {
    var r []T
    for i := n; i > 0; i-- {
        r = append(r, v)
    }
    return r
}

Repeat("a", 5)

But if we want to do the same with a function that receives a generic struct

type XY struct {
    X, Y int
}

func RangeRepeat(arr []XY) []int {
    var r []int
    for _, n := range arr {
        for i := n.Y; i > 0; i-- {
            r = append(r, n.X)
        }
    }
    return r
}

RangeRepeat([]XY{{1, 1}, {2, 2}, {3, 3}})

it seems like the calling code needs to be updated

type XY(type T) struct {
    X T
    Y int
}

func RangeRepeat(type T)(arr []XY(T)) []T {
    var r []T
    for _, n := range arr {
        for i := n.Y; i > 0; i-- {
            r = append(r, n.X)
        }
    }
    return r
}

// error: cannot use generic type XY(type T any) without instantiation
// RangeRepeat([]XY{{1, 1}, {2, 2}, {3, 3}}) // error in old code
RangeRepeat([](XY(int)){{1, 1}, {2, 2}, {3, 3}}) // API changed
// RangeRepeat([]XY{{"1", 1}, {"2", 2}, {"3", 3}}) // error
RangeRepeat([](XY(string)){{"1", 1}, {"2", 2}, {"3", 3}}) // ok

It would be awesome to be able to derive types from structures too.

@ianlancetaylor

The contract draft mentions that methods may not take additional type arguments. However, there is no mention of replacing the contract for particular methods. Such a feature would be very useful for implementing interfaces depending on what contract a parametric type is bound to.

Have you discussed such a possibility?

Another question for the contract draft. Will type disjunctions be restricted to built-in types? If not, would it be possible to use parametrized types, especially interfaces in the disjunction list?

Something like

type Getter(T) interface {
    Get() T
}

contract(G, T) {
    G Getter(T)
}

would be quite useful, not only to avoid duplicating the method set from the interface to the contract, but also to instantiate a parametrized type when type inferencing fails, and you don't have access to the concrete type (e.g. it's not exported)

@ianlancetaylor I'm not sure if this has been discussed before, but regarding the syntax for type arguments to a function, is it possible to concatenate the argument list to the type argument list? So for the graph example, instead of

var g = graph.New(*Vertex, *FromTo)([]*Vertex{ ... })

you would use

var g = graph.New(*Vertex, *FromTo, []*Vertex{ ... })

Essentially, the first K arguments of the argument list correspond to a type argument list of length K. The rest of the argument list corresponds to the regular arguments to the function. This has the benefit of mirroring the syntax of

make(Type, size)

which takes a Type as the first argument.

This would simplify the grammar, but needs type information to know where the type arguments end, and the regular arguments begin.

@smasher164 He said a few comments back that they considered it (which implies they discarded it, although I'm curious why).

func F(T : a, b T) { }
func G() { F(int : 1, 2) }

That is what you're suggesting, but with a colon to separate the two kinds of arguments. Personally I moderately like it, although it's an incomplete picture; what about type declaration, methods, instantiation, etc.

I want to return to something @Inuart said:

We can make it generic without breaking backwards compatibility

Would the Go team consider changing the standard library in this way to be consistent with the Go 1 compatibility guarantee? For example, what if strings.Repeat(s string, count int) string was replaced with Repeat(type S stringlike)(s S, count int) S? You could also add a //Deprecated comment to bytes.Repeat but leave it there for legacy code to use. Is that something the Go team would consider?

Edit: to be clear, I mean, would this be considered within Go1Compat in general? Ignore the specific example if you don't like it.

@carlmjohnson No. This code would break: f := strings.Repeat, as polymorphic functions can't be referenced without instantiating them first.

And going from there, I think that the concatenation of type-arguments and value-arguments would be a mistake, as it prevents a natural syntax for referring to an instantiated version of a function. It would be more natural if go already had currying, but it doesn't. It looks weird to have foo(int, 42) and foo(int) be expressions and with both having very different types.

@urandom Yes, we have discussed the possibility of adding additional constraints on the type parameters of an individual method. That would cause the method set of the parameterized type to vary based on the type arguments. This might be useful, or it might be confusing, but one thing seems certain: we can add it later without breaking anything. So we've postponed the idea for later. Thanks for bringing it up.

Exactly what can be listed in the permitted list of types is not as clear as could be. I think we have more work to do there. Note that at least in the current design draft listing an interface type in the list of types currently means that the type argument can be that interface type. It does not mean that the type argument can be a type that implements that interface type. I think it's currently unclear whether it can be an instantiated instance of a parameterized type. It's a good question, though.

@smasher164 @tooolbox The cases to consider when looking at combining type parameters and regular parameters in a single list are how to separate them (if they are separated) and how to handle the case in which there are no regular parameters (presumably we can exclude the case of no type parameters). For example, if there are no regular parameters, how do you distinguish between instantiating the function but not calling it, and instantiating the function and calling it? While clearly the latter is the more common case, it's reasonable for people to want to be able to write the former case.

If the type parameters were to be placed inside the same parentheses as the regular parameters, then @griesemer said in #36177 (his second post) that he quite liked the use of a semicolon rather than a colon as the separator because (as a result of automatic semicolon insertion) it allowed one to spread the parameters over multiple lines in a nice way.

Personally, I also like the use of vertical bars (|..|) to enclose the type parameters as you sometimes see these used in other languages (Ruby, Crystal etc.) to enclose a parameter block. So we'd have stuff like:

func F(|T| a, b T) { }
func G() { F(|int| 1, 2) }

Advantages include:

  • They provide a nice visual distinction (at least to my eyes) between type and regular parameters.
  • You wouldn't need to use the type keyword.
  • Having no regular parameters is not a problem.
  • The vertical bar character is, of course, in the ASCII set and should therefore be available on most keyboards.

You might even be able to use it outside the parentheses but presumably you would then have the same parsing difficulties as with <...> or [...] as it could be mistaken for the bitwise 'or' operator though possibly the difficulties would be less acute.

I don't understand how vertical bars help with the case of no regular parameters. I don't understand how you can distinguish a function instantiation from a function call.

One way of distinguishing between those two cases would be to require the type keyword if you were instantiating the function but not if you were calling it which, as you said earlier, is the more common case.

I agree that that could work, but it seems very subtle. I don't think it will be obvious to the reader what is happening.

I think that in Go we need to aim higher than merely having a way to do something. We need to aim for approaches that are straightforward, intuitive, and that fit well with the rest of the language. The person reading the code should be able to easily understand what is happening. Of course we can't always meet those goals, but we should do the best we can.

@ianlancetaylor aside from debating on syntax, which is interesting in its own right, I'm wondering if there's anything that we as a community can do to help you & the team on this subject.

For example, I get the idea that you'd like more code written in the style of the proposal, so as to better evaluate the proposal, both syntactically and otherwise? And/or other things?

@tooolbox Yes. We are working on a tool to make that easier, but it's not ready yet. Real Soon Now.

Can you say any more about the tool? Would it allow executing code?

Is this issue the preferred location for generics feedback? It seems more active than the wiki. One observation is there are many aspects to the proposal, but the GitHub issue collapses discussion into a linear format.

The F(T:) / G() { F(T:)} syntax looks fine to me. I don't think instantiation that looks like a function call will be intuitive to inexperienced readers.

I don't understand exactly what the concerns are around backwards compatibility. I think there is a limitation in the draft against declaring a contract except at top level. It might be worth weighing (and measuring) how much code would actually break if this were allowed. My understanding is only code that uses the contract keyword, which seems like not much code (which could be supported anyhow by specifying go1 at the top of old files). Weigh that against decades of more power for programmers. In general it seems pretty simple to protect old code with such mechanisms especially with widespread use of go's famous tools.

Further regarding that restriction, I suspect the prohibition against declaring methods within function bodies is a reason interfaces are not used more - they are much more cumbersome than passing single functions around. It's hard to say if the contracts top-level restriction would be as irritating as the methods restriction--it probably wouldn't be--but please don't use the methods restriction as a precedent. To me that is a language flaw.

I would also like to see examples of how contracts could help cut down on if err != nil verbosity, and more importantly where they would be insufficient. Is something like func F() (X, error) { return IfError(foo(), func(i, j int) X { return X(i * j) }, Identity) } possible?

I'm also wondering if the go team anticipates that implicit function signatures will feel like a missing feature once Map, Filter and friends are available. Is this something that needs to be considered while new implicit typing features are added to the language for contracts? Or can it be added later? Or will it never be part of the language?

Looking forward to trying out the proposal. Sorry for so many topics.

Personally I'm quite skeptical that many people would like to write methods within function bodies. It's very rare to define types within function bodies today; declaring methods would be rarer still. That said, see #25860 (not related to generics).

I don't see how generics help with error handling (already a very verbose topic in itself). I don't understand your example, sorry.

A shorter function literal syntax, also not connected to generics, is #21498.

When I posted last night I didn't realize it's possible to play with the draft
implementation (!!). Wow, it's great to finally be able to write more abstract code. I don't have any issues with the draft syntax.

Continuing the discussion above...


Part of the reason people don't write types in function bodies is because they
can't write methods for them. This restriction can trap the type inside the
block where it was defined, as it cannot be concisely transformed into an
interface for use elsewhere. Java allows anonymous classes to satisfy its version
of interfaces, and they are used a fair amount.
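For what it's worth, the usual Go 1 workaround is a package-level adapter type that turns a closure into an interface implementation, in the style of http.HandlerFunc. A minimal sketch (funcStringer is a made-up name):

```go
package main

import "fmt"

type Stringer interface{ String() string }

// funcStringer adapts a closure to the Stringer interface,
// since a type declared inside a function body cannot have methods.
type funcStringer func() string

func (f funcStringer) String() string { return f() }

func main() {
	n := 42
	// The closure captures local state; the adapter lets it
	// satisfy an interface without a local method declaration.
	var s Stringer = funcStringer(func() string { return fmt.Sprintf("n=%d", n) })
	fmt.Println(s.String()) // prints "n=42"
}
```

The adapter must still be declared at package level, which is exactly the friction being described.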

We can have the interface discussion in #25860. I would just say that in the era
of contracts, methods will become more important, so I suggest erring on the
side of empowering local types & people who like to write closures, not
weakening them.

(And to reiterate, please do not use strict go1 compatibility [vs virtually
99.999% compatibility, as I understand it] as a factor in deciding about this
feature.)


Regarding error handling, I had suspected generics might allow abstracting
common patterns for dealing with (T1, T2, ..., error) return tuples. I don't
have anything detailed in mind. Something like type ErrPair(type T) struct{T T; Err error} might be useful for chaining together actions, like Promise in
Java/TypeScript. Perhaps someone has thought through this more. An attempt at
writing a helper library and code that uses the library might be worth looking
at if you're looking for real usage.

With some experimentation I ended up with the following. I'd like to try this
technique on a larger example to see if using ErrPair(T) actually helps.

type result struct {min, max point}

// with a generic ErrPair type and generic function errMap2 (like Java's Optional#map() function).
func minMax2(msg *inputTimeSeries) (result, error) {
    return errMap2(
        MakeErrPair(time.Parse(layout, msg.start)).withMessage("bad start"),
        MakeErrPair(time.Parse(layout, msg.end)).withMessage("bad end"),
        func(start, end time.Time) (result, error) {
            min, max := argminmax(msg.inputPoints, func(p inputPoint) float64 {
                return float64(p.value)
            })
            mkPoint := func(ip inputPoint) point {
                return point{interpTime(start, end, ip.interp).Format(layout), ip.value}
            }
            return result{mkPoint(*min), mkPoint(*max)}, nil
        }).tuple()
}

// without generics, lots of if err != nil 
func minMax(msg *inputTimeSeries) (result, error) { 
    start, err := time.Parse(layout, msg.start)
    if err != nil {
        return result{}, fmt.Errorf("bad start: %w", err)
    }
    end, err := time.Parse(layout, msg.end)
    if err != nil {
        return result{}, fmt.Errorf("bad end: %w", err)
    }
    min, max := argminmax(msg.inputPoints, func(p inputPoint) float64 {
        return float64(p.value)
    })
    mkPoint := func(ip inputPoint) point {
        return point{interpTime(start, end, ip.interp).Format(layout), ip.value}
    }
    return result{mkPoint(*min), mkPoint(*max)}, nil
}

// Most languages look more like this.
func minMaxWithThrowing(msg *inputTimeSeries) result {
    start := time.Parse(layout, msg.start) // might throw
    end := time.Parse(layout, msg.end) // might throw
    min, max := argminmax(msg.inputPoints, func(p inputPoint) float64 {
        return float64(p.value)
    })
    mkPoint := func(ip inputPoint) point {
        return point{interpTime(start, end, ip.interp).Format(layout), ip.value}
    }
    return result{mkPoint(*min), mkPoint(*max)}
}

(complete example code available here)


For general experimentation, I tried writing an S-Expression package
here.
I experienced some panics in the experimental implementation while trying to
work with compound types like Form([]*Form(T)). I can provide more feedback
after working around that, if it would be useful.

I also wasn't quite sure how to write a primitive type -> string function:

contract PrimitiveType(T) {
    T bool, int, int8, int16, int32, int64, string, uint, uint8, uint16, uint32, uint64, float32, float64, complex64, complex128
    // string(T) is not a contract
}

func primitiveString(type T PrimitiveType(T))(t T) string  {
    // I'm not sure if this is an artifact of the experimental implementation or not.
    return string(t) // error: `cannot convert t (variable of type T) to string`
}

The actual function I was trying to write was this one:

// basicFormAdapter implements FormAdapter() for the primitive types.
type basicFormAdapter(type T PrimitiveType) struct{}


func (a *basicFormAdapter(T)) Format(e T, fc *FormatContext) error {
    //This doesn't work: fc.Print(string(e)) -- cannot convert e (variable of type T) to string
    // This also doesn't work: cannot type switch on non-interface value e (type int)
    // switch ee := e.(type) {
    // case int: fc.Print(string(ee))
    // default: fc.Print(fmt.Sprintf("!!! unsupported type %v", e))
    // }
    // IMO, the proposal to allow switching on T is most natural:
    // switch T.(type) {
    //  case int: fc.Print(string(e))
    //  default: fc.Print(fmt.Sprintf("!!! unsupported type %v", e))
    // }

    // This can't be the only way, right?
    rv := reflect.ValueOf(e)
    switch rv.Kind() {
    case reflect.Bool: fc.Print(fmt.Sprintf("%v", e))
    case reflect.Int: fc.Print(fmt.Sprintf("%v", e))
    case reflect.Int8: fc.Print(fmt.Sprintf("int8:%v", e))
    case reflect.Int16: fc.Print(fmt.Sprintf("int16:%v", e))
    case reflect.Int32: fc.Print(fmt.Sprintf("int32:%v", e))
    case reflect.Int64: fc.Print(fmt.Sprintf("int64:%v", e))
    case reflect.Uint: fc.Print(fmt.Sprintf("uint:%v", e))
    case reflect.Uint8: fc.Print(fmt.Sprintf("uint8:%v", e))
    case reflect.Uint16: fc.Print(fmt.Sprintf("uint16:%v", e))
    case reflect.Uint32: fc.Print(fmt.Sprintf("uint32:%v", e))
    case reflect.Uint64: fc.Print(fmt.Sprintf("uint64:%v", e))
    case reflect.Uintptr: fc.Print(fmt.Sprintf("uintptr:%v", e))
    case reflect.Float32: fc.Print(fmt.Sprintf("float32:%v", e))
    case reflect.Float64: fc.Print(fmt.Sprintf("float64:%v", e))
    case reflect.Complex64: fc.Print(fmt.Sprintf("(complex64 %f %f)", real(rv.Complex()), imag(rv.Complex())))
    case reflect.Complex128:
         fc.Print(fmt.Sprintf("(complex128 %f %f)", real(rv.Complex()), imag(rv.Complex())))
    case reflect.String:
        fc.Print(fmt.Sprintf("%q", rv.String()))
    }
    return nil
}

I also tried creating a 'Result' like type of sorts

type Result(type T) struct {
    Value T
    Err error
}

func NewResult(type T)(value T, err error) Result(T) {
    return Result(T){
        Value: value,
        Err: err,
    }
}

func then(type T, R)(r Result(T), f func(T) R) Result(R) {
    if r.Err != nil {
        return Result(R){Err: r.Err}
    }

    v := f(r.Value)
    return  Result(R){
        Value: v,
        Err: nil,
    }
}

func thenTry(type T, R)(r Result(T), f func(T)(R, error)) Result(R) {
    if r.Err != nil {
        return Result(R){Err: r.Err}
    }

    v, err := f(r.Value)
    return  Result(R){
        Value: v,
        Err: err,
    }
}

e.g

    r := NewResult(GetInput())
    r2 := thenTry(r, UppercaseAndErr)
    r3 := thenTry(r2, strconv.Atoi)
    r4 := then(r3, Add5)
    if r4.Err != nil {
        // handle err
    }
    return r4.Value, nil

Ideally you'd have the then functions be methods on the Result type.

Also the absolute difference example in the draft doesn't seem to compile.
I think the following:

func (a ComplexAbs(T)) Abs() T {
    r := float64(real(a))
    i := float64(imag(a))
    d := math.Sqrt(r * r + i * i)
    return T(complex(d, 0))
}

should be:

func (a ComplexAbs(T)) Abs() ComplexAbs(T) {
    r := float64(real(a))
    i := float64(imag(a))
    d := math.Sqrt(r * r + i * i)
    return ComplexAbs(T)(complex(d, 0))
}

I have a little concern about the ability to use multiple contracts to bound one type parameter.

In Scala, it is common to define a function like:

def compute[A: PointLike: HasTime: IsWGS](points: Vector[A]): Map[Int, A] = ???

PointLike, HasTime and IsWGS are small contracts (Scala calls them type classes).

Rust also has a similar mechanism:

fn f<F: A + B>(a: F) {}

And we can use an anonymous interface when defining a function.

type I1 interface {
    A()
}
type I2 interface {
    B()
}
func f(a interface{
    I1
    I2
})

IMO, the anonymous interface is bad practice, because an interface is a real type: the caller of the function may have to declare a variable with that type. But a contract is just a constraint on the type parameter; the caller always works with some real type or with another type parameter, so I think it is safe to allow anonymous contracts in a function definition.
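For reference, the anonymous-interface version of f compiles in Go 1 today. A self-contained sketch (impl is a made-up type):

```go
package main

import "fmt"

type I1 interface{ A() string }
type I2 interface{ B() string }

// f accepts any value satisfying both I1 and I2 via an anonymous
// interface type with embedding.
func f(v interface {
	I1
	I2
}) string {
	return v.A() + v.B()
}

type impl struct{}

func (impl) A() string { return "a" }
func (impl) B() string { return "b" }

func main() {
	fmt.Println(f(impl{})) // prints "ab"
}
```

A caller never has to name the anonymous type; the question above is whether contracts could get the same treatment.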

For library developers, it is inconvenient to define a new contract when a combination of contracts is only used in a few places; it clutters the codebase. For the users of a library, they need to dig into the definitions to learn its real requirements. If a user defines a lot of functions that call into the library, they can define a named contract for convenience, and they can even add more contracts to that new contract if they need to, because this is valid:

contract C1(T) {
    T A()
}
contract C2(T) {
    T B()
}
contract C3(T) {
    T C()
}

contract PART(T) {
    C1(T)
    C2(T)
}

contract ALL(T) {
    C1(T)
    C2(T)
    C3(T)
}

func f1(type A PART) (a A) {}

func f2(type A ALL) (a A) {
    f1(a)
}

I have tried these on the draft compiler; none of them type-check.

func f(type A C1, C2)(x A)

func f1(type A contract C(A1) {
    C1(A)
    C2(A)
}) (x A)

func f2(type A ((type A1) interface {
    I1(A1)
    I2(A1)
})(A)) (x A)

According to the notes in CL

A type parameter that is constrained by multiple contracts will not get the correct type bound.

I think this weird snippet would be valid once that issue is resolved:

func f1(type A C1, _ C2(A)) (x A)

Here are some of my thoughts:

  • If we treat contract as the type of a type parameter, type a A <=> var a A, we can add a syntax sugar like type a { A1(a); A2(a) } to define an anonymous contract quickly.
  • Otherwise, we can treat the last part of type list is a list of requirements, type a, b, A1(a), A2(a), A3(a, b), this style just like use interface to constraint type parameters.

@bobotu It's common in Go to compose functionality using embedding. It seems natural to compose contracts the same way you would do it with structs or interfaces.
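Interface embedding already composes this way in Go 1; presumably contracts would follow the same pattern. An illustrative sketch (Reader/Writer/buffer are made-up names):

```go
package main

import "fmt"

type Reader interface{ Read() string }
type Writer interface{ Write(s string) }

// ReadWriter composes the two by embedding, the same way
// embedding is suggested above for composing contracts.
type ReadWriter interface {
	Reader
	Writer
}

type buffer struct{ s string }

func (b *buffer) Read() string   { return b.s }
func (b *buffer) Write(s string) { b.s += s }

// echo only needs the composed interface, not the pieces.
func echo(rw ReadWriter) string {
	rw.Write("!")
	return rw.Read()
}

func main() {
	fmt.Println(echo(&buffer{s: "hi"})) // prints "hi!"
}
```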

@azunymous Personally I don't know how I feel about the entire Go community changing over from multiple returns to Result, although it seems that the Contracts proposal would enable this to some degree. The Go Team seems to shy away from language changes that compromise the "feel" of the language, which I agree with, but that seems like one of those changes.

Just a thought; I wonder if there are any takes on this point.

@tooolbox I don't think it's actually possible to use something like a single Result type extensively outside of the case where you're just passing through values, unless you have a mass of generic Results and functions of each combination of parameter counts and return types. With either lots of numbered functions or using closures you'd lose readability.

I think it'd be more likely you'd see something equivalent to an errWriter, where you'd use something like that occasionally when it fits, named for the use case.

Personally I don't know how I feel about the entire Go community changing over from multiple returns to Result

I don't think this would happen. Like @azunymous said, lots of functions have multiple return types and an error but a result couldn't contain all those other returned values at the same time. Parametric polymorphism isn't the only feature needed to do something like this; you'd also need tuples and destructuring.

Thanks! Like I said, not something I'd thought about deeply but good to know my concern was misplaced.

@tooolbox I'm not aiming to introduce some new syntax; the key problem here is the inability to use an anonymous contract the way you can use an anonymous interface.

In the draft compiler, it seems impossible to write something like this. We can use an anonymous interface in the function definition, but we cannot do the same thing for contract even in the verbose style.

func f1(type A, B, C, D contract {
    C1(A)
    C2(A, B)
    C3(A, C)
}) (a A, b B, c C, d D)

// Or a more verbose style

func f2(type A, B, C, D (contract (_A, _B, _C) {
    C1(_A)
    C2(_A, _B)
    C3(_A, _C)
})(A, B, C)) (a A, b B, c C, d D)

IMO, this is a natural extension to the existing syntax. This is still a contract at the end of the type parameter list, and we still use embedding to compose functionality. If Go can provide some sugar to generate contract's type parameters automatically like the first snippet, the code will be easier to read and write.

func fff(type A C1(A), B C2(B, A), C C3(B, C, A)) (a A, b B, c C)

// is more verbose than

func fff(type A, B, C contract {
    C1(A)
    C2(B, A)
    C3(B, C, A)
}) (a A, b B, c C)

I ran into some trouble when trying to implement a lazy iterator without dynamic method invocation, just like Rust's Iterator.

I want to define a simple Iterator contract

contract Iterator(T, E) {
    T Next() (E, bool)
}

Because Go doesn't have the concept of an associated type member, I need to declare E as an input type parameter.

A function to collect the results

func Collect(type I, E Iterator) (input I) []E {
    var results []E
    for {
        e, ok := input.Next()
        if !ok {
            return results
        }
        results = append(results, e)
    }
}

A function to map elements

contract MapIO(I, E, O, R) {
    Iterator(I, E)
    Iterator(O, R)
}

func Map(type I, E, O, R MapIO) (input I, f func (e E) R) O {
    return &lazyIterator(I, E, R){
        parent: input,
        f:      f,
    }
}

I have two problems here:

  1. I cannot return a lazyIterator here, the compiler says cannot convert &(lazyIterator(I, E, R) literal) (value of type *lazyIterator(I, E, R)) to O.
  2. I need to declare a new contract named MapIO, which takes 4 lines, while Map itself is only 6 lines. It is hard for users to read the code.

Suppose Map can be type-checked, I hope I can write something like

type staticIterator(type E) struct {
    elem []E
}

func (it *(staticIterator(E))) Next() (E, bool) { panic("todo") }

func main() {
    input := &staticIterator{
        elem: []int{1, 2, 3, 4},
    }
    mapped := Map(input, func (i int) float32 { return float32(i + 1) })
    fmt.Printf("%v\n", Collect(mapped))
}

Unfortunately, the compiler complains that it cannot infer the types. It stops complaining after I change the code to:

func main() {
    input := &staticIterator(int){
        elem: []int{1, 2, 3, 4},
    }
    mapped := Map(*staticIterator(int), int, *lazyIterator(*staticIterator(int), int, float32), float32)(input, func (i int) float32 { return float32(i + 1) })
    result := Collect(*lazyIterator(*staticIterator(int), int, float32), float32)(mapped)
    fmt.Printf("%v\n", result)
}

The code is very hard to read and write, and there are too many duplicated type hints.

BTW, the compiler will panic with:

panic: interface conversion: ast.Expr is *ast.ParenExpr, not *ast.CallExpr

goroutine 1 [running]:
go/go2go.(*translator).instantiateTypeDecl(0xc000251950, 0x0, 0xc0001af860, 0xc0001a5dd0, 0xc00018ac90, 0x1, 0x1, 0xc00018bca0, 0x1, 0x1, ...)
        /home/tuzi/go-tip/src/go/go2go/instantiate.go:191 +0xd49
go/go2go.(*translator).translateTypeInstantiation(0xc000251950, 0xc000189380)
        /home/tuzi/go-tip/src/go/go2go/rewrite.go:671 +0x3f3
go/go2go.(*translator).translateExpr(0xc000251950, 0xc000189380)
        /home/tuzi/go-tip/src/go/go2go/rewrite.go:518 +0x501
go/go2go.(*translator).translateExpr(0xc000251950, 0xc0001af990)
        /home/tuzi/go-tip/src/go/go2go/rewrite.go:496 +0xe3
go/go2go.(*translator).translateExpr(0xc000251950, 0xc00018ace0)
        /home/tuzi/go-tip/src/go/go2go/rewrite.go:524 +0x1c3
go/go2go.(*translator).translateExprList(0xc000251950, 0xc00018ace0, 0x1, 0x1)
        /home/tuzi/go-tip/src/go/go2go/rewrite.go:593 +0x45
go/go2go.(*translator).translateStmt(0xc000251950, 0xc000189840)
        /home/tuzi/go-tip/src/go/go2go/rewrite.go:419 +0x26a
go/go2go.(*translator).translateBlockStmt(0xc000251950, 0xc00018d830)
        /home/tuzi/go-tip/src/go/go2go/rewrite.go:380 +0x52
go/go2go.(*translator).translateFuncDecl(0xc000251950, 0xc0001c0390)
        /home/tuzi/go-tip/src/go/go2go/rewrite.go:373 +0xbc
go/go2go.(*translator).translate(0xc000251950, 0xc0001b0400)
        /home/tuzi/go-tip/src/go/go2go/rewrite.go:301 +0x35c
go/go2go.rewriteAST(0xc000188280, 0xc000188240, 0x0, 0x0, 0xc0001f6280, 0xc0001b0400, 0x1, 0xc000195360, 0xc0001f6280)
        /home/tuzi/go-tip/src/go/go2go/rewrite.go:122 +0x101
go/go2go.RewriteBuffer(0xc000188240, 0x7ffe07d6c027, 0xa, 0xc0001ec000, 0x4fe, 0x6fe, 0x0, 0xc00011ed58, 0x40d288, 0x30, ...)
        /home/tuzi/go-tip/src/go/go2go/go2go.go:132 +0x2c6
main.translateFile(0xc000188240, 0x7ffe07d6c027, 0xa)
        /home/tuzi/go-tip/src/cmd/go2go/translate.go:26 +0xa9
main.main()
        /home/tuzi/go-tip/src/cmd/go2go/main.go:64 +0x434

I also found it is impossible to define a function that works with an Iterator yielding a particular type.

type User struct {}

func UpdateUsers(type A Iterator(A, User)) (it A) bool { 
    // Access `User`'s field.
}

// And I found this may be possible

contract checkInts(A, B) {
    Iterator(A, B)
    B int
}

func CheckInts(type A, B checkInts) (it A) bool { panic("todo") }

The second snippet can work in some scenarios, but it is hard to understand and the unused B type seems weird.

Indeed, we can use an interface to complete this task.

type Iterator(type E) interface {
    Next() (E, bool)
}

I'm just trying to explore how expressive Go's design is.

BTW, the Rust code I refer to is

fn main() {
    let input = vec![1, 2, 3, 4];
    let mapped = input.iter().map(|x| x * 3);
    let result = f(mapped);
    println!("{:?}", result.collect::<Vec<_>>());
}

fn f<I: Iterator<Item = i32>>(it: I) -> impl Iterator<Item = f32> {
    it.map(|i| i as f32 * 2.0)
}

// The definition of `map` in stdlib is
pub struct Map<I, F> {
    iter: I,
    f: F,
}

fn map<B, F: FnMut(Self::Item) -> B>(self, f: F) -> Map<Self, F>

Here is a summary of https://github.com/golang/go/issues/15292#issuecomment-633233479

  1. We may need something to express existential type for func Collect(type I, E Iterator) (input I) []E

    • The actual type of the universally quantified parameter E cannot be inferred, because it only appears in the return list. Due to the lack of a type member to make E existential by default, I think we may hit this problem in many places.

    • Maybe we can use the simplest existential type like Java's wildcard ? to resolve the type inference of func Consume(type I, E Iterator) (input I). We can use _ to replace E, func Consume(type I Iterator(I, _)) (input I).

    • But it still cannot help the type inference problem for Collect, I don't know if it is hard to infer E, but Rust seems to be able to do this.

    • Or we can use _ as a placeholder for types the compiler can infer, and fill the missing types manually, like Collect(_, float32) (...) to do collect on an iterator of float32.

  1. Due to the lack of the ability to return an existential type, we also have problems for things like func Map(type I, E, O, R MapIO) (input I, f func (e E) R) O

    • Rust support this by using impl Iterator<E>. If Go can provide something like this, we can return a new iterator without boxing, may be useful for some performance-critical code.

    • Or we can simply return a boxed object; this is how Rust solved this problem before it supported existential types in return position. But the question is the relationship between contract and interface; maybe we need to define some conversion rules and let the compiler convert them automatically. Otherwise, we may need to define both a contract and an interface with identical methods for this case.

    • Otherwise we can only use CPS to move the type parameter from the return position to the input list, e.g. func Map(type I, E, O, R MapIO) (input I, f func (e E) R, f1 func (output O)). But this is useless in practice, simply because we must write the actual type of O when we pass a function to Map.

I just caught up with this discussion a bit, and it seems pretty clear that the syntactic difficulties with type parameters remain a major problem with the draft proposal. There is a way to avoid type parameters entirely and achieve most of the generics functionality: #32863 -- maybe this might be a good time to consider that alternative in light of some of this further discussion? If there was any chance of something like this design being adopted, I would be happy to try to modify the web assembly playground to allow testing of it.

My sense is that the current focus is on nailing the correctness of the semantics of the current proposal, regardless of the syntax, because the semantics are very hard to change.

I just saw a paper on Featherweight Go was published on Arxiv and is a collaboration between the Go team and several experts on type theory. Looks like there are more planned papers in this vein.

To follow up on my previous comment, Phil Wadler of Haskell fame and one of the authors on the paper has a talk scheduled on "Featherweight Go" on Monday, June 8th @ 7am PDT / 10am EDT: http://chalmersfp.org/. youtube link

@rcoreilly I think we will only know whether the "syntactic difficulties" are a major problem when people have more experience writing and, more importantly, reading code written according to the design draft. We are working on ways for people to try that.

In the absence of that, I think that the syntax is simply what people see first and comment on first. It may be a major problem, it may not. We don't know yet.

To follow up on my previous comment, Phil Wadler of Haskell fame and one of the authors on the paper has a talk scheduled on "Featherweight Go" on Monday

The talk by Phil Wadler was very approachable and interesting. I was annoyed at the seemingly pointless hour-long time limit that prevented him from getting into monomorphisation.

Notable that Wadler was asked by Pike to pop in; apparently they know each other from Bell Labs. To me, Haskell has a very different set of values and paradigms, and it's interesting to see how its (creator? principal designer?) thinks about Go and generics in Go.

The proposal itself has a syntax very close to Contracts, but omits Contracts themselves, just using type parameters and interfaces. A key difference that's called out is the ability to take a generic type and define methods on it which have more specific constraints than the type itself.

Apparently the Go Team is working on or has a prototype of this! That will be interesting. In the meantime, how would this look?

package graph

type Node(type e) interface{
    Edges() []e
}

type Edge(type n) interface{
    Nodes() (from n, to n)
}

type Graph(type n Node(e), e Edge(n)) struct { ... }
func New(type n Node(e), e Edge(n))(nodes []n) *Graph(n, e) { ... }
func (g *Graph(type n Node(e), e Edge(n))) ShortestPath(from, to n) []e { ... }

Do I have that right? I think so. If I do...not bad, actually. Doesn't quite solve the stuttering parentheses issue, but it seems improved somehow. Some nameless turmoil within me is becalmed.

What about the stack example from @urandom ? (Aliasing interface{} to Any and using a certain amount of type inference.)

package main

type Any interface{}

type Stack(type t Any) []t

func (s Stack(type t Any)) Peek() t {
    return s[len(s)-1]
}

func (s *Stack(type t Any)) Pop() {
    *s = (*s)[:len(*s)-1]
}

func (s *Stack(type t Any)) Push(value t) {
    *s = append(*s, value)
}

type StackIterator(type t Any) struct{
    stack Stack(t)
    current int
}

func (s *Stack(type t Any)) Iter() *StackIterator(t) {
    it := StackIterator(t){stack: *s, current: len(*s)}

    return &it
}

func (i *StackIterator(type t Any)) Next() (bool) { 
    i.current--

    if i.current < 0 { 
        return false
    }

    return true
}

func (i *StackIterator(type t Any)) Value() t {
    if i.current < 0 {
        var zero t
        return zero
    }

    return i.stack[i.current]
}

type Iterator(type t Any) interface {
    Next() bool
    Value() t
}

func Map(type t Any, u Any)(it Iterator(t), mapF func(t) u) Iterator(u) {
    return mapIt(t, u){it, mapF}
}

type mapIt(type t Any, u Any) struct {
    parent Iterator(t)
    mapF func(t) u
}

func (i mapIt(type t Any, u Any)) Next() bool {
    return i.parent.Next()
}

func (i mapIt(type t Any, u Any)) Value() u {
    return i.mapF(i.parent.Value())
}

func Filter(type t Any)(it Iterator(t), predicate func(t) bool) Iterator(t) {
    return filter(t){it, predicate}
}

type filter(type t Any) struct {
    parent Iterator(t)
    predicateF func(t) bool
}

func (i filter(type t Any)) Next() bool {
    if !i.parent.Next() {
        return false
    }

    n := true
    for n && !i.predicateF(i.parent.Value()) {
        n = i.parent.Next()
    }

    return n
}

func (i filter(type t Any)) Value() t {
    return i.parent.Value()
}

func Distinct(type t comparable)(it Iterator(t)) Iterator(t) {
    return distinct(t){it, map[t]struct{}{}}
}

type distinct(type t comparable) struct {
    parent Iterator(t)
    set map[t]struct{}
}

func (i distinct(type t Any)) Next() bool {
    if !i.parent.Next() {
        return false
    }

    n := true
    for n {
        _, ok := i.set[i.parent.Value()]
        if !ok {
            i.set[i.parent.Value()] = struct{}{}
            break
        }
        n = i.parent.Next()
    }


    return n
}

func (i distinct(type t Any)) Value() t {
    return i.parent.Value()
}

func ToSlice(type t Any)(it Iterator(t)) []t {
    var res []t

    for it.Next() {
        res = append(res, it.Value())
    }

    return res
}

func ToSet(type t comparable)(it Iterator(t)) map[t]struct{} {
    res := map[t]struct{}{} // must be initialized; assigning to a nil map panics

    for it.Next() {
        res[it.Value()] = struct{}{}
    }

    return res
}

func Reduce(type t Any)(it Iterator(t), id t, acc func(a, b t) t) t {
    for it.Next() {
        id = acc(id, it.Value())
    }

    return id
}

func main() {
    var stack Stack(string)
    stack.Push("foo")
    stack.Push("bar")
    stack.Pop()
    stack.Push("alpha")
    stack.Push("beta")
    stack.Push("foo")
    stack.Push("gamma")
    stack.Push("beta")
    stack.Push("delta")


    var it Iterator(string) = stack.Iter()

    it = Filter(string)(it, func(s string) bool {
        return s == "foo" || s == "beta" || s == "delta"
    })

    it = Map(string, string)(it, func(s string) string {
        return s + ":1"
    })

    it = Distinct(string)(it)

    println(Reduce(it, "", func(a, b string) string {
        if a == "" {
            return b
        }
        return a + ":" + b
    }))


}

Something like that, I suppose. I realize there are actually no Contracts in that code, so it's not a good representation of how that's handled in FGG-style, but I can tackle that in a moment.

Impressions:

  • I like having the style of type parameters in methods match that of type declarations. I.e. saying "type" and explicitly stating the types, ("type" param paramType, param paramType...) rather than (param, param). It makes it visually consistent, so the code is more glanceable.
  • I like having the type parameters be lowercase. Single-letter variables in Go indicate extremely local usage, but capitalization means it's exported, and they seem contrary when put together. Lowercase feels better since type parameters are scoped to the function/type.

Okay, what about contracts?

Well, one thing I like is that Stringer is untouched; you're not going to have a Stringer interface and a Stringer contract.

type Stringer interface {
    String() string
}

func Stringify(type t Stringer)(s []t) (ret []string) {
    for _, v := range s {
        ret = append(ret, v.String())
    }
    return ret
}

We also have the viaStrings example:

type ToString interface {
    Set(string)
}

type FromString interface {
    String() string
}

func SetViaStrings(type to ToString, from FromString)(s []from) []to {
    r := make([]to, len(s))
    for i, v := range s {
        r[i].Set(v.String())
    }
    return r
}

Interesting. I'm not actually 100% sure what the contract gained us in that case. Perhaps part of it was the rule that a function could have multiple type parameters but only one contract.

Equal is covered in the paper/talk:

contract equal(T) {
    T Equal(T) bool
}

// becomes

type equal(type t equal(t)) interface{
    Equal(t) bool
}

And so on. I am pretty taken with the semantics. Type parameters are interfaces, so the same rules about implementing an interface are applied to what can be used as a type parameter. It's just not "boxed" at runtime--unless you explicitly pass it an interface, I suppose, which you are free to.

The biggest thing I note as not covered is a replacement for Contracts' ability to specify a range of primitive types. Well, I'm sure a strategy for that, and many other things, will come:

8 - CONCLUSION

This is the beginning of the story, not the end. In future work, we plan to look at other methods of implementation beside monomorphisation, and in particular to consider an implementation based on passing runtime representations of types, similar to that used for .NET generics. A mixed approach that uses monomorphisation sometimes and passing runtime representations sometimes might be best, again similar to that used for .NET generics.

Featherweight Go is restricted to a tiny subset of Go. We plan a model of other important features such as assignments, arrays, slices, and packages, which we will dub Bantamweight Go; and a model of Go’s innovative concurrency mechanism based on “goroutines” and message passing, which we will dub Cruiserweight Go.

Featherweight Go looks great to me. Excellent idea to get some type theory experts involved. This looks a lot more like the kind of thing I was advocating further up this topic.

Good to hear that type theory experts are actively working on this!

It even looks similar (except for the slightly different syntax) to my old proposal "contracts are interfaces" https://github.com/cosmos72/gomacro/blob/master/doc/generics-cti.md

@tooolbox
By allowing methods with different constraints than the actual type (as well as different types altogether), FGG opens up quite a few possibilities that weren't feasible with the current contracts draft. As an example, with FGG, one should be able to define both an Iterator and a ReversibleIterator, and have the intermediate and the terminating iterators (map, filter, reduce) support both (e.g., with Next() and NextFromBack() for reversibles), depending on what the parent iterator is.

I think it's important to keep in mind that FGG is not definitively where generics in Go will end up at. It's one take on them, from the outside. And it explicitly ignores a bunch of things that end up complicating the final product. Also, I haven't read the paper, just watched the talk. With that in mind: As far as I can tell, there are two significant ways in which FGG adds expressive power over the contracts draft:

  1. It allows adding new type-parameters to methods (as shown in the "List and Maps" example in the talk). AFAICT this would allow implementing Functor (in fact, that's his List example, if I'm not mistaken), Monad and their friends. I don't think those specific types are interesting to Gophers, but there are interesting use-cases for this (for example a Go port of Flume or similar concepts would likely benefit). Personally, I feel its a positive change, though I don't yet see what the implications are for reflection and the like. I do feel that method declarations using this are starting to get hard to read - especially if type-parameters of a generic type must also be listed in the receiver.
  2. It allows type-parameters to have stricter bounds on methods of generic types than on the type itself. As mentioned by others, this allows you to have the same generic type implement different methods, depending on what types it was instantiated with. I'm not sure this is a good change, personally. It seems a recipe for confusion, to have Map(int, T) end up with methods that Map(string, T) doesn't have. At the very least, the compiler needs to provide excellent error messages, if something like this happens. Meanwhile, the benefit seems comparatively small - especially given that the motivating factor from the talk (separate compilation) isn't super relevant to Go: As methods have to be declared in the same package as their receiver type and given that packages are the unit of compilation, you can't really extend the type separately. I know that talking about compilation is rather a concrete way to talk about a more abstract benefit, but still, I don't feel that benefit helps Go much.

I'm looking forward to the next steps, in any case :)

I think it's important to keep in mind that FGG is not definitively where generics in Go will end up at.

@Merovius why do you say so?

@arl
FG is more of a research paper on what _could_ be done. No one has said explicitly that this is how polymorphism will work in Go in the future. Even though 2 Go core developers are listed as authors in the paper, that doesn't mean that this will be implemented in Go.

I think it's important to keep in mind that FGG is not definitively where generics in Go will end up at. It's one take on them, from the outside. And it explicitly ignores a bunch of things that end up complicating the final product.

Yes, very good point.

Also, I'll note that Wadler is working as part of a team, and the resultant product builds upon and is very close to the Contracts proposal, which is the result of years of work from the core devs.

By allowing methods with different constraints than the actual type (as well as different types altogether), FGG opens up quite a few possibilities that weren't feasible with the current contracts draft. ...

@urandom I'm curious what that Iterator example looks like; would you mind throwing something together?

Separately, I'm interested in what generics can do beyond maps and filters and functional things, and more curious how they could benefit a project like k8s. (Not that they would go and refactor at this point, but I have heard anecdotally that lack of generics has required some fancy footwork, I think with Custom Resources? Someone more familiar with the project can correct me.)

I do feel that method declarations using this are starting to get hard to read - especially if type-parameters of a generic type must also be listed in the receiver.

Perhaps gofmt could help in some way? Maybe we need to go multi-line. Worth playing around with, perhaps.

As mentioned by others, this allows you to have the same generic type implement different methods, depending on what types it was instantiated with.

I see what you're saying @Merovius

It was called out by Wadler as a difference, and it lets him solve his Expression Problem, but you make a good point that Go's sort of hermetic packages seem to limit what you could/should do with this. Can you think of any actual case where you'd want to do that?

Ironically, my first thought was that it could be used to resolve some of the challenges described in this article: https://blog.merovius.de/2017/07/30/the-trouble-with-optional-interfaces.html

@tooolbox

Separately, I'm interested in what generics can do beyond maps and filters and functional things,

FWIW, it should be clarified that this is kind of selling "maps and filters and functional things" short. I personally don't want map and filter over builtin data-structures in my code, for example (I prefer for-loops). But it can also mean

  1. Providing generalized access to any third-party data structure, i.e. map and filter can be made to work over generic trees, or sorted maps, or… as well. So, you can swap out what is mapped over, for more power. And more importantly
  2. You can swap out how it's mapped over. For example, you could build a version of Compose that can spawn multiple goroutines for each function and runs them concurrently, using channels. This would make it easy to run concurrent data-processing pipelines and scale up the bottleneck automatically, while only needing to write plain func(A) B functions. Or you could put the same functions into a framework that runs thousands of copies of the program in a cluster, scheduling batches of the data across them (that's what I alluded to when I linked to Flume above).

So, while being able to write Map and Filter and Reduce might seem boring on the surface, the same techniques open up some really exciting possibilities for making scalable compute easier.

@ChrisHines

Ironically, my first thought was that it could be used to resolve some of the challenges described in this article: https://blog.merovius.de/2017/07/30/the-trouble-with-optional-interfaces.html

It's an interesting thought and it certainly feels like it should. But I don't see how, yet. If you take the ResponseWriter example, it seems this might enable you to write generic, type-safe wrappers, with different methods depending on what the wrapped ResponseWriter supports. But, even if you can use different bounds on different methods, you still have to write them down. So while it can make the situation type-safe in the sense that you don't add methods that you don't support, you still need to enumerate all the methods that you could support, so middleware might still mask some optional interfaces just by not knowing about them. Meanwhile, you can also (even without this feature) do

type Middleware (type RW http.ResponseWriter) struct {
    RW
}

and overwrite selective methods you care about - and have all other methods of RW promoted. So you don't even have to write wrappers and transparently even get those methods you didn't know about.

So, assuming we get promoted methods for type-parameters embedded on generic structs (and I hope we do), the problems seem better solved by that method.
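As a sketch in the draft syntax (not runnable with standard Go; the logging override is a hypothetical use), such a middleware would override only the methods it cares about, while everything else promotes from RW:

```
func (m Middleware(RW)) WriteHeader(code int) {
    log.Printf("responding with status %d", code)
    m.RW.WriteHeader(code)
}
```

Any other methods of the concrete RW (Flusher, Pusher, and so on) would remain reachable through promotion, which is the point: the wrapper doesn't need to enumerate them.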

I think the specific solution to http.ResponseWriter is something like errors.Is/As. There doesn't need to be a language change, just a library addition to create a standard method of ResponseWriter wrapping and a way of querying if any of the ResponseWriters in a chain can handle, e.g. w.Push. I'm skeptical that generics would be a good fit for something like this because the whole point is to have runtime choice between optional interfaces, e.g. Push is only available in http2 and not if I'm spinning up an http1 local dev server.

Looking through Github, I don't think I ever created an issue for this idea, so maybe I'll do that now.

Edit: #39558.

@tooolbox
My guess is that it would look something like this, along with its internal monomorphisation code:

package iter

type Any interface{}

type Iterator(type T Any) interface {
    Next() bool
    Value() T
}

type ReversibleIterator(type T Any) interface {
    Iterator(T)
    NextBack() bool
}

type mapIt(type I Iterator(T), T Any, U Any) struct {
    parent I
    mapF func(T) U
}

func (i mapIt(type I Iterator(T))) Next() bool {
    return i.parent.Next()
}

func (i mapIt(type I Iterator(T), T Any, U Any)) Value() U { 
    return i.mapF(i.parent.Value())
}

func (i mapIt(type I ReversibleIterator(T))) NextBack() bool { 
    return i.parent.NextBack()
}

// Monomorphisation
type mapIt<OnlyForward, int, float64> struct {
    parent OnlyForward
    mapF func(int) float64
}

func (i mapIt<OnlyForward, int, float64>) Next() bool {
    return i.parent.Next()
}

func (i mapIt<OnlyForward, int, float64>) Value() float64 {
    return i.mapF(i.parent.Value())
}

type mapIt<Slice, int, string> struct {
    parent Slice
    mapF func(int) string
}

func (i mapIt<Slice, int, string>) Next() bool {
    return i.parent.Next()
}

func (i mapIt<Slice, int, string>) Value() string {
    return i.mapF(i.parent.Value())
}

func (i mapIt<Slice, int, string>) NextBack() bool {
    return i.parent.NextBack()
}



My guess is that it would look something like this, along with its internal monomorphisation code:

FWIW here's a tweet of mine from some years back exploring how iterators might work in Go with generics. If you do a global substitution to replace <T> with (type T), you've got something not far off the current proposal: https://twitter.com/rogpeppe/status/425035488425037824

FWIW, it should be clarified that this is kind of selling "maps and filters and functional things" short. I personally don't want map and filter over builtin data-structures in my code, for example (I prefer for-loops). But it can also mean ...

I see your point and don't disagree, and yes we will benefit from the things your examples cover.
But I still wonder about how something like k8s would be affected, or another codebase with "generic" data types where the kinds of actions being performed aren't maps or filters, or at least go beyond that. I wonder how effective Contracts or FGG are at increasing type-safety and performance in those sorts of contexts.

Wondering if anyone can point at a codebase, hopefully simpler than k8s, that fits in this sort of category?

@urandom whoa. So if you instantiate a mapIt with a parent that implements ReversibleIterator then mapIt has a NextBack() method and if not, it doesn't. Am I reading that right?

Thinking about it, it seems like that's useful-ish from a library perspective. You have some generic struct types that are pretty open (Any type params) and they have a lot of methods, constrained by various interfaces. So then when you use the library in your own code, the type you embed in the struct gives you the ability to call a certain set of methods, so you get a certain set of the functionality of the library. What that set of functionality is, is figured out at compile time based on the methods your type has.

...It does seem a little like what @ChrisHines brought up in that you sorta could write code that has more or less functionality based on what your type implements, but then again it's really a matter of the available method set increasing or decreasing, not the behavior of a single method, so yeah I don't see how the http2 hijacker thing is helped with this.

Anyway, very interesting.

Not that I would do this, but I suppose this would be possible:

type OverrideX interface {
    GetX() int
}

type OverrideY interface {
    GetY() int
}

type Inheritor(type child Any) struct {
    Parent
    c child
}

func (i Inheritor(type child OverrideX)) GetX() int {
    return i.c.GetX()
}

func (i Inheritor(type child OverrideY)) GetY() int {
    return i.c.GetY()
}

type Parent struct {
    x, y int
}

func (p Parent) GetX() int {
    return p.x
}

func (p Parent) GetY() int {
    return p.y
}

type Child struct {
    x int
}

func (c Child) GetX() int {
    return c.x
}

func main() {
    i := Inheritor(Child){Parent{5, 6}, Child{3}}
    x, y := i.GetX(), i.GetY() // 3, 6
}

Again, mostly a joke, but I think it's good to explore the limits of what's possible.

Edit: Hm, does show how you can have different method sets depending on the type param, but produces the exact same effect as just embedding Parent in Child. Again, silly example ;)

I'm not a big fan of having methods that can only be called given a certain type. Given @tooolbox's example, it would probably be a pain to test due to the fact that some methods are only callable given some specific child - the tester is likely to miss some case. It's also pretty unclear which methods are available, and Go shouldn't require an IDE to surface them. However, you can implement this using only the type given by the struct by doing a type assertion in the method.

func (i Inheritor(type child Any)) GetX() int {
    if c, ok := i.c.(OverrideX); ok {
        return c.GetX()
    }
    return i.Parent.GetX()
}

func (i Inheritor(type child Any)) GetY() int {
    if c, ok := i.c.(OverrideY); ok {
        return c.GetY()
    }
    return i.Parent.GetY()
} 

This code is also type-safe, clear, easy to test, and likely runs identically to the original without the confusion.

@TotallyGamerJet
That particular example is type-safe, however others are not, and will require runtime panics with incompatible types.

Also, I'm not sure how the tester could possibly miss any cases, given that they are most likely the ones that wrote the generic code in the first place. Whether or not it's clear is a bit subjective, though it definitely does not require an IDE to deduce. Keep in mind, this is not function overloading, the method can either be called or not, so it's not like some case can be skipped by accident. Anyone can see that this method exists for a certain type, and they might need to read it again to understand what type is required, but that's about it.

@urandom I didn't necessarily mean with that specific example someone would miss a case - it is very short. I meant that when you have tons of methods only callable given certain types. So I stand by not using subtyping (as I like to call it). It is even possible to solve the "Expression Problem" without using type assertions or subtyping. Here's how:

type Any interface {}

type Evaler(type t Any) interface {
    Eval() t
}

type Num struct {
    value int
}

func (n Num) Eval() int {
    return n.value
}

type Plus(type a Evaler(type t Any)) struct {
    left a
    right a
}

func (p Plus(type a Evaler(type t Any))) Eval() t {
    return p.left.Eval() + p.right.Eval()
}

func (p Plus(type a Evaler(type t Any))) String() string {
    return fmt.Sprintf("(%s+%s)", p.left, p.right)
}

type Expr interface {
    Evaler
    fmt.Stringer
}

func main() {
    var e Expr = Plus(Num){Num{1}, Num{2}}
    var v int = e.Eval() // 3
    var s string = e.String() // "(1+2)"
}

Any misuse of the Eval method should be caught at compile time due to the fact that it is not allowed to call Eval on Plus with a type that doesn't implement addition. Although, it is possible to improperly use the String() (possibly adding structs) good testing should catch those cases. And Go usually embraces simplicity over "correctness". The only thing that is gained with subtyping is more confusion in the docs and in usage. If you can provide an example that requires subtyping I might be more inclined to think it is a good idea but currently, I am unconvinced.
EDIT: Fixed mistake and improved

@TotallyGamerJet in your example, the String method should call String recursively, not Eval

@TotallyGamerJet in your example, the String method should call String recursively, not Eval

@magical
I am not sure what you mean. The type of the Plus struct is an Evaler which doesn't ensure that fmt.Stringer is satisfied. Calling the String() on both Evalers would require a type assertion and therefore not be typesafe.

@TotallyGamerJet
Unfortunately, that's the idea of the String method. It should recursively call any String methods on its members, otherwise there's no point. But you already see that it would require a type assertion and a panic if you cannot ensure that the method on the Plus type requires a type a that has a String method

@urandom
You are correct! Surprisingly enough the Sprintf will do that type assertion for you. So, you can just send in both the left and right fields. Although it still can panic if the types in Plus don't implement Stringer but I'm fine with that because it is possible to avoid panics by using the %v verb to print out the struct (it will call String() if available). I think this solution is clear and any other uncertainties should be documented in the code. So I am still unconvinced why subtyping is necessary.

@TotallyGamerJet
I personally still fail to see what problems can arise if it is allowed to have methods with different constraints. The method is still there, and the code clearly describes what arguments (and reciever, in the special case) are required.
Just as having a method, accepting a string argument, or a MyType receiver, is clearly readable and unambiguous, so would the following definition be as well:

func (rec MyType(type T SomeInterface(T)) Foo() T

The requirements are clearly marked in the signature itself. I.E. it is of MyType(type T SomeInterface(T)) and nothing else.

Change https://golang.org/cl/238003 mentions this issue: design: add go2draft-type-parameters.md

Change https://golang.org/cl/238241 mentions this issue: content: add generics-next-step article

Christmas is early!

  • I can see a lot of effort went into making the design document approachable, it shows and it's great and very appreciated.
  • This iteration is a major improvement in my eyes and I could see this being implemented as-is.
  • Agree with pretty much all the reasoning and logic.
  • Like that if you specify a constraint for a single type parameter, you must do it for all.
  • Comparable sounds good.
  • Type lists in interfaces are not bad; agree it's better than operator methods, but in my mind it's probably the biggest area for further discussion.
  • Type inference is (still) great.
  • Inference for single-argument type-parameterized constraints seems like cleverness over clarity.
  • I like "We aren't claiming that this is simple" in the graph example. That's fine.
  • (type *T constraint) looks like a good solution to the pointer issue.
  • Fully agreed on the func(x(T)) change.
  • I think we want type inference for composite literals off the bat? 😄

Thank you to the Go team! 🎉

https://go.googlesource.com/proposal/+/refs/heads/master/design/go2draft-type-parameters.md#comparable-types-in-constraints

I believe comparable is more like a built-in type than an interface. I believe it's a small bug in the proposal draft.

type ComparableHasher interface {
    comparable
    Hash() uintptr
}

need to be

type ComparableHasher interface {
    type comparable
    Hash() uintptr
}

The playground also seems to indicate it needs to be type comparable
https://go2goplay.golang.org/p/mhrl0xYsMyj

EDIT: Ian Lance Taylor and Robert Griesemer are fixing the go2go tool (it was a small bug in the go2go translator, not the draft; the design draft was correct)

Have there been thoughts about enabling people to write their own generic hash-tables and the like? ISTM that currently that's very limited (especially compared to the built-in map). Basically, the builtin map has comparable as a key-constraint, but of course, == and != are not enough to implement a hash-table. An interface like ComparableHasher only passes the responsibility to write a hash-function to the caller, it doesn't answer the question of how it would actually look (also, the caller probably shouldn't be responsible for this; writing good hash functions is hard). Lastly, using pointers as keys might be fundamentally impossible - converting a pointer to a uintptr to use as an index would risk the GC moving the pointee around and thus the bucket to change (barring this issue, exposing a predeclared func hash(type T comparable)(v T) uintptr might be a - probably not ideal - solution).

I can well accept "it's not really feasible" as an answer, I'm just curious to know if you thought about it :)

@gertcuykens I've committed a fix to the go2go tool to handle comparable as intended.

@Merovius We expect that people who write a generic hash table will provide their own hash function, and possibly their own comparison function. When writing your own hash function, the https://golang.org/pkg/hash/maphash/ package may be useful. You are correct that the hash of a pointer value must depend on the value to which that pointer points; it can't depend on the value of the pointer converted to uintptr.

Not sure if this is a limitation of the current implementation of the tool, but an attempt to return a generic type constrainted by an interface returns an error:
https://go2goplay.golang.org/p/KYRFL-vrcUF

I implemented a real-world use-case I had for generics yesterday. It's a generic pipeline abstraction that allows to scale stages of the pipeline independently and supports cancellation and error handling (it doesn't run in the playground, because it depends on errgroup, but running it using the go2go tool seems to work). Some observations:

  • It was pretty fun. Having a functioning type-checker actually helped a lot when iterating on the design, by translating design-flaws into type-errors. The end-result is ~100 LOC including comments. So, overall, the experience of writing generic code is pleasant, IMO.
  • This use-case at least just works smoothly with type-inference, no explicit instantiations needed. I think that bodes well for the inference design.
  • I think this example would benefit from the ability to have methods with extra type-parameters. Needing a top-level function for Compose means the construction of the pipeline happens in reverse - the latter stages of the pipeline need to be constructed to pass it to the functions building the earlier stages. If methods could have type-parameters, you could have Stage be a concrete type and do func (s *Stage(A, B)) Compose(type C)(n int, f func(B) C) *Stage(A, C). And building the pipeline would be in the same order as it's plumbed (see the comment in the playground). There might of course also be a more elegant API in the existing draft I don't see - it's hard to prove a negative. I'd be interested to see a working example of that.

Overall, I like the new draft, FWIW :) IMO dropping contracts is an improvement and so is the new way to specify required operators via type-lists.

[edit: Fixed a bug in my code where a deadlock could happen if a pipeline-stage failed. Concurrency is hard]

A question for the tool branch: will it keep up with the last go release (so v1.15, v1.15.1, ...)?

@urandom: Note that the value you are returning in your code is of type Foo(T). Each
such type instantiation produces a new defined type, in this case Foo(T).
(Of course, if you have multiple Foo(T) in the code, they are all the same
defined type).

But the result type of your function is V, which is a type parameter. Note
that the type parameter is constrained by the Valuer interface, but it is
_not_ an interface (or even that interface). V is a type parameter which is
a new kind of type about which we know things described by its constraint.
With respect to assignability it acts like a defined type named V.

So you're trying to assign a value of type Foo(T) to a variable of type V
(which is neither Foo(T) nor Valuer(T), it only has properties described by
Valuer(T)). Thus the assignment fails.

(As an aside, we are still refining our understanding of type parameters
and eventually need to spell it out precisely enough so that we can write a
spec. But keep in mind that each type parameter is effectively a new
defined type about which we know only as much as its type constraint specifies.)

Perhaps you meant to write this: https://go2goplay.golang.org/p/8Hz6eWSn8Ek?

@Inuart If by tool branch you mean the dev.go2go branch: This is a prototype, it has been built with expediency in mind, and for experimentation purposes. We do want people to play with it and try to write code, but it's not a good idea to _rely_ on the translator for production software. Lots of things can change (even the syntax, if need be). We're going to fix bugs and adjust the design as we learn from feedback. Keeping up with the latest Go releases seems less important.

I implemented a real-world use-case I had for generics yesterday. It's a generic pipeline abstraction that allows to scale stages of the pipeline independently and supports cancellation and error handling (it doesn't run in the playground, because it depends on errgroup, but running it using the go2go tool seems to work).

I like the example. I just read through it fully and the thing that most tripped me up (not even worth explaining) had nothing to do with the generics involved. I think the same construct without generics wouldn't be much easier to grasp. It's also definitely one of those things you want written once, with tests, and not have to fool with again later.

One thing that might help readability and review is if the Go tool had a way of displaying the monomorphised version of generic code, so you can see how things turn out. Might be infeasible, partly because functions might not even be monomorphised in the final compiler implementation, but I think it would be valuable if it's at all attainable.

I think this example would benefit from the ability to have methods with extra type-parameters.

I saw that comment in your playground as well; definitely the alternate call syntax seems more readable and straightforward. Could you explain this in more detail? Having barely wrapped my head around your example code, I'm having trouble making the jump :)

So you're trying to assign a value of type Foo(T) to a variable of type V
(which is neither Foo(T) nor Valuer(T), it only has properties described by
Valuer(T)). Thus the assignment fails.

Great explanation.

...Otherwise, it's sad to see that the HN post got hijacked by the Rust crowd. It would have been nice to get more feedback from Gophers on the proposal.

Two questions for the Go team:

Is there a difference between these two, or is it a bug in the go2 playground? The first one compiles, the second one gives an error

type Addable interface {
    type int, float64
}

func Add(type T Addable)(a, b T) T {
  return a + b
}

type Addable interface {
    type int, float64, string
}

func Add(type T Addable)(a, b T) T {
  return a + b
}

Fails with: invalid operation: operator + not defined for a (variable of type T)

Well, this was a most unexpected and pleasant surprise. I'd been hoping for a way to actually try this out at some point, but I didn't expect it anytime soon.

First of all, found a bug: https://go2goplay.golang.org/p/1r0NQnJE-NZ

Second of all, I built an iterator example and was a bit surprised to find that type inference doesn't work. I can just have it return an interface type directly, but I didn't think that it wouldn't be able to infer that one since all of the type information it needs is coming through the argument.

Edit: Also, as multiple people have said, I think that allowing new types to be added during method declarations would be quite useful. As far as interface implementation goes, you could either simply not allow interface implementation, only allow implementation if the interface also calls for generics there (type Example interface { Method(type T someConstraint)(v T) bool }), or, possibly, you could have it implement the interface if _any_ possible variant of it implements the interface, and then have calling it be constrained to what the interface wants if it's called through the interface. For example,

```go
type Interface interface {
    Get(string) string
}

type Example(type T) struct {
    v T
}

// This will only work because Interface.Get is more specific than Example.Get.
func (e Example(T)) Get(type R)(v R) string {
    return fmt.Sprintf("%v: %v", v, e.v)
}

func DoSomething(inter Interface) {
    // Underlying is Example(string) and Example(string).Get(string) is assumed because it's required.
    fmt.Println(inter.Get("example"))
}

func main() {
    // Allowed because Example(string).Get(string) is possible.
    DoSomething(Example(string){v: "An example."})
}
```

@DeedleFake The first thing you are reporting is not a bug. You will need to write https://go2goplay.golang.org/p/qo3hnviiN4k at the moment. This is documented in the design draft. In a parameter list, writing a(b) is interpreted as a (b) (a of parenthesized type b) for backward-compatibility. We may change that going forward.

The Iterator example is interesting - it does look like a bug at first glance. Please file a bug (instructions in blog post) and assign it to me. Thanks.

@Kashomon The blog post (https://blog.golang.org/generics-next-step) suggests the mailing list for discussion and filing separate issues for bugs. Thanks.

I think the problem with + has been fixed already.

@tooolbox

One thing that might help readability and review is if the Go tool had a way of displaying the monomorphised version of generic code, so you can see how things turn out. Might be infeasible, partly because functions might not even be monomorphised in the final compiler implementation, but I think it would be valuable if it's at all attainable.

The go2go tool can do this. Instead of using go tool go2go run x.go2, write go tool go2go translate x.go2. That will produce a file x.go with the translated code.

That said, I have to say that it's fairly challenging to read. Not impossible, but not easy.

@griesemer

I understand that the return argument can be an interface instead, but I don't really understand why it can't be the generic type itself.

You can, for example, use that same generic type as an input parameter, and that works just fine:
https://go2goplay.golang.org/p/LuDrlT3zLRb
Does this work because the type has already been instantiated?

@urandom wrote:

I understand that the return argument can be an interface instead, but I don't really understand why it can't be the generic type itself.

Theoretically, it could, but it doesn't make much sense to make a return type generic when it isn't determined by the arguments, because it is then fixed only by the function body, i.e. by the return value.

Normally, generic parameters are either fully determined by the parameter value tuple or by the type of the function application at the call site (determines instantiation of the generic return type).

Theoretically, you could also allow for generic type parameters which aren't determined by the parameter value tuple and must be provided explicitly, e.g.:

func f(type S)(i int) int {
    var s S = ...
    return 2
}

don't know how much sense this makes.

@urandom I didn't necessarily mean with that specific example someone would miss a case - it is very short. I meant that when you have tons of methods only callable given certain types. So I stand by not using subtyping (as I like to call it). It is even possible to solve the "Expression Problem" without using type assertions or subtyping. Here's how:

type Any interface {}

type Evaler(type t Any) interface {
  Eval() t
}

type Num struct {
  value int
}

func (n Num) Eval() int {
  return n.value
}

type Plus(type a Evaler(type t Any)) struct {
  left a
  right a
}

func (p Plus(type a Evaler(type t Any)) Eval() t {
  return p.left.Eval() + p.right.Eval()
}

func (p Plus(type a Evaler(type t Any)) String() string {
  return fmt.Sprintf("(%s+%s)", p.left, p.right)
}

type Expr interface {
  Evaler
  fmt.Stringer
}

func main() {
  var e Expr = Plus(Num){Num{1}, Num{2}}
  var v int = e.Eval() // 3
  var s string = e.String() // "(1+2)"
}

Any misuse of the Eval method should be caught at compile time, since it is not allowed to call Eval on Plus with a type that doesn't implement addition. Although it is possible to improperly use String() (for example, by adding structs), good testing should catch those cases. And Go usually embraces simplicity over "correctness". The only thing gained with subtyping is more confusion in the docs and in usage. If you can provide an example that requires subtyping I might be more inclined to think it is a good idea, but currently I am unconvinced.
EDIT: Fixed mistake and improved

I don't know, why not use '<>'?

@99yun
Please look at the FAQ included with the updated draft

Why not use the syntax F<T> like C++ and Java?
When parsing code within a function, such as v := F<T>, at the point of seeing the < it's ambiguous whether we are seeing a type instantiation or an expression using the < operator. Resolving that requires effectively unbounded lookahead. In general we strive to keep the Go parser efficient.

@urandom A generic function body is always type-checked w/o instantiation (*); in general (if it is exported, for instance) we can't know how it will be instantiated. When being type-checked, it can only rely on the information available. If the result type is a type parameter and the return expression is of a different type that is not assignment-compatible, the return cannot work. Or in other words, if a generic function is invoked with (possibly inferred) type arguments, the function body is not type-checked again with those type arguments. It only checks that the type arguments satisfy the generic function's constraints (after instantiating the function signature with those type arguments). Hope that helps.

(*) More precisely, the generic function is type-checked as if it were instantiated with its own type parameters; the type parameters are real types; we just only know about them as much as their constraints tell us.

Please let's continue this discussion elsewhere. If you have more questions with a piece of code that you feel should be working please file an issue so we can discuss it there. Thanks.

There doesn't seem to be a way to use a function to create a zero value of a generic struct. Take for example this function:

func zero(type T)() T {
    var zero T
    return zero
}

It appears to work for the basic types (int, float32 etc.). However, when you have a struct that has a generic field things get strange. Take for example:

type Opt(type T) struct {
    val T
}

func (o Opt(T)) Do() { /*stuff*/ }

All seems good. However, when doing:

opt := zero(Opt(int))
opt.Do() 

it doesn't compile giving the error: opt.Do undefined (type func() Opt(int) has no field or method Do) I can understand if it is not possible to do this but it's strange to think it is a function when int is supposed to be a part of the Opt type. But what is weirder is that it is possible to do this:

opt := zero(Opt)      //  But somehow this line compiles
opt(int).Do()         // This will panic

I'm not sure which part is a bug and which part is intended.
Code: https://go2goplay.golang.org/p/M0VvyEYwbQU

@TotallyGamerJet

Your function zero() has no arguments so there's no type inference going on. You have to instantiate the zero func and then call it.

opt := zero(Opt(int))()
opt.Do()

https://go2goplay.golang.org/p/N6ip-nm1BP-

@tooolbox
Ah yes. I thought I was providing the type but I forgot the second set of parenthesis to actually call the function. I'm still getting used to these generics.

I have always understood not having generics in Go was a design decision not an oversight. It has made Go so much more simple and I cannot fathom the over-the-top paranoia against some simple copy duplication. At our company we have made tons of Go code and have never found a single instance where we would prefer generics.

To us it will definitely make Go feel less like Go, and it looks like the hype crowd has finally managed to push the development of Go in the wrong direction. They couldn't just leave Go in its simplistic beauty; no, they had to keep complaining and complaining until they finally got their way.

I'm sorry, it's not meant to degrade anybody, but this is how the destruction of a beautifully designed language starts. What's next? If we keep changing stuff, like so many people would like, we end up with "C++" or "JavaScript".

Just leave Go the way it was meant to be!

@iio7 I have the lowest IQ of everyone here; my future depends on making sure I can read other people's code. The hype started not just because of generics, but because the new design doesn't require a language change beyond the current proposal, so we are all excited that there is a window to keep things simple and still have some generic and functional goodies. Don't get me wrong, I know there is always going to be some person on the team who writes code like a rocket scientist, and me, the monkey, is supposed to understand it just like that? So the examples that you see now are the ones from the rocket scientists, and to be honest, yes, it takes me some time to read them, but in the end, with some trial and error, I know what they are trying to program. All I'm saying is: trust Ian and Robert and the others; they are not done with the design yet. I wouldn't be surprised if in a year or so there are tools that help the compiler speak perfect, simple monkey language no matter how difficult the rocket generic code you throw at it. The best feedback you can give is to rewrite some examples and point out if something is way too over-engineered, so they can make sure the compiler will complain about it or it gets rewritten automatically by something like the vet tool.

I read the FAQ regarding <> but for a stupid person like me, how is it more difficult for the parser to determine if it's a generic call if it looks like this v := F<T> rather than v := F(T)? Is it not more difficult with the parentheses since it won't know if it's a function call with T as a regular argument?

On top of that, I think the parser should of course be kept fast, but let's not also forget which is easiest for the programmer to read which is IMO equally important. Is it easier to understand what v := F(T) does straight away? Or is v := F<T> easier? Also important to take into consideration :)

Not arguing for or against v := F<T>, just raising some thoughts that might be worth considering.

This is legal Go today:

    f, c, d, e := 1, 2, 3, 4
    a, b := f < c, d > (e)
    fmt.Println(a, b) // true false

There is no point in discussing angle brackets unless you provide a proposal for what to do about it (break back compat?). It is for all intents and purposes a dead issue. There is effectively zero chance of angle brackets being adopted by the Go team. Please discuss anything else.

Edit to add: Sorry if this comment was overly curt. There is a lot of discussion of angle brackets on Reddit and HN, which is very frustrating to me because the back compatibility problem has been well known for a long time by people who care about generics. I understand why people prefer angle brackets, but it’s not possible without a breaking change.

Thanks for your comment @iio7. There's always a non-zero risk that things get out of hand. Which is why we've been using utmost caution along the way. I believe what we have now is a much cleaner and more orthogonal design than what we had last year; and personally I hope we can make it even simpler, especially when it comes to type lists - but we will find out as we learn more. (Somewhat ironically, the more orthogonal and clean the design becomes, the more powerful it will be and the more complex the code one can write.) The final words haven't been spoken yet. Last year, when we had the first potentially viable design, the reaction of a lot of people was similar to yours: "Do we really want this?" This is an excellent question and we should try to answer it as well as we can.

@gertcuykens' observation is also correct - naturally the people playing with the go2go prototype are exploring its limits as much as possible (which is what we want), but in the process also produce code that probably wouldn't pass muster in a proper production setting. By now I've seen plenty of generic code that is really hard to decipher.

There are situations where generic code would clearly be a win; I'm thinking of generic concurrent algorithms that would allow us to put somewhat subtle code into a library. There are of course various container data structures, and things like sort, etc. Probably a vast majority of code doesn't need generics at all. In contrast to other languages, where generic features are central to much one does in the language, in Go, generic features are just another tool in the Go tool set; not the fundamental building block upon which everything else is built on top.

For comparison: In the early days of Go, we all tended to overuse goroutines and channels. It took a while to learn when they were appropriate and when not. Now we have some more or less established guidelines and we use them only when really appropriate. I am hoping the same would happen if we had generics.

Thanks.

From the draft design's section on [T]-based syntaxes:

The language generally permits a trailing comma in a comma-separated list, so A[T,] should be permitted if A is a generic type, but normally would not be permitted for an index expression. However, the parser can't know whether A is a generic type or a value of slice, array, or map type, so this parse error can not be reported until after type checking is complete. Again, solvable but complicated.

Couldn't this be pretty easily solved by just making the trailing comma completely legal in index expressions and then just having gofmt remove it?

@DeedleFake Possibly. That would certainly be an easy way out; but it also seems a bit ugly, syntactically. I don't remember all the details, but an earlier version had support for [type T] style type parameters. See the dev.go2go branch, commit 3d4810b5ba where support was removed. One could dig that up again and investigate.

Can the number of generic arguments in each [] list be limited to at most one to avoid this problem, just like the builtin generic types:

  • [N]T
  • []T
  • map[K]T
  • chan T

Please note that the last argument of each builtin generic type is not enclosed in [].
The generic declaration syntax is like: https://github.com/dotaheor/unify-Go-builtin-and-custom-generics#the-generic-declaration-syntax

@dotaheor I'm not sure exactly what you are asking, but it is clearly necessary to support multiple type arguments for a generic type. For example, https://go.googlesource.com/proposal/+/refs/heads/master/design/go2draft-type-parameters.md#containers .

@ianlancetaylor
What I mean is each type parameter is enclosed by a [], so the type in your link can be declared as:

type Map[type K][type V] struct

When it is used, it is like:

var m Map[string]int

A type argument not enclosed by [] indicates the end of a use of a generic type.

While thinking about ordering for arrays #39355 in conjunction with generics, I found that "comparable" is handled specially in the current generics draft as a predeclared type constraint (presumably because the comparable types cannot all easily be listed in a type list).

It would be nice if the generics draft also defined "ordered"/"orderable", similar to how "comparable" is predefined. It's a related, commonly used relation on values of the same type, and this would allow future extensions of the Go language to define ordering on more types (arrays, structs, slices, sum types, checked enums, ...) without running into the complication that not all ordered types would be listable in a type list, as with "comparable".

I'm not suggesting we decide here that more types in the language spec should be ordered, but this change to generics leaves it more forward compatible with such a change (constraints.Ordered would not have to become a magic compiler-generated thing later, nor be deprecated in favor of a type list). Sorting packages could start with the predeclared type constraint "ordered" and later could "just" work with e.g. arrays, if that is ever allowed, with no fix to the constraint needed.

@martisch I think this would only need to happen once ordered types are extended. Currently, constraints.Ordered could list all types (that doesn't work for comparable, because pointers, structs, arrays, … are comparable, so that has to be magical; but ordered is currently limited to a finite set of builtin underlying types) and users can rely on that. If we extend orderings to arrays (for example), we can still add a new magical ordered constraint then and embed it into constraints.Ordered. This means all users of constraints.Ordered would automatically benefit from the new constraint. Of course, users who write their own explicit type list would not benefit - but it's the same if we add ordered now, for users who don't embed it.

So, IMO there's nothing lost in delaying that until it's actually meaningful. We shouldn't add any possible constraint-set as a predeclared identifier - much less any potential future constraint-set :)

If we extend orderings to arrays (for example) we can still add a new magical ordered constraints then and embed it into constraints.Ordered.

@Merovius That is a good point I had not thought of. This allows to extend constraints.Ordered in the future in a consistent way. If there also will be a constraints.Comparable then it fits nicely into the overall structure.

@martisch, note that ordered — unlike comparable — is not coherent as an interface type unless we also define a (global) total order among concrete types, or prohibit non-generic code from using < on variables of type ordered, or prohibit the use of comparable as a general, run-time interface type.

Otherwise, transitivity of “implements” breaks down. Consider this program fragment:

    var x constraints.Ordered = int(0)
    var y constraints.Ordered = string("0")
    fmt.Println(x < y)

What should it output? (Is the answer intuitive, or arbitrary?)

@bcmills
What about fun (<)(type T Ordered)(t1 T,t2 T) Bool?

To compare arithmetic types of different kind:

If any arithmetic S implements only Ordered(T) for S <: T, then:

//Isn't possible I think
interface SorT(S,T)
{ 
type S,T
}

fun (<)(type R SorT(S,T), S Ordered(R), T Ordered(R))(s S, t T) Bool

should be unique.

For runtime polymorphism you would require Ordered to be parametrizable.
Or:
You partition Ordered in tuple types and then rewrite (<) to be:

//but isn't supported that either
fun(<)(type R Ordered)(s R.0,t R.1)

Hi!
I have a question.

Is there a way to make type constraint which passes only generic types with one type parameter?
Something which passes only Result(T)/Option(T)/etc but not just T.
I tried

type Box(type T) interface {
    Val() (T, bool)
}

but it requires Val() method

type Box(type T) interface{}

is similar to interface{}, i.e Any

also tried https://go2goplay.golang.org/p/lkbTI7yppmh -> compilation fails

type Box(type T) interface {
       type Box(T)
}

https://go2goplay.golang.org/p/5NsKWNa3E1k -> compilation fails

type Box(type T) interface{}

type Generic(type T) interface {
    type Box(T)
}

https://go2goplay.golang.org/p/CKzE2J-YOpD -> doesn't work

type Box(type T) interface{}

type Generic(type T Box(T)) interface {}

Is this behavior expected or it's just type checking bug?

@tdakkota Constraints apply to type arguments, and they apply to the fully instantiated form of type arguments. There is no way to write a type constraint that puts any requirements on the non-instantiated form of a type argument.

Please look at the FAQ included with the updated draft

Why not use the syntax F<T> like C++ and Java?
When parsing code within a function, such as v := F<T>, at the point of seeing the < it's ambiguous whether we are seeing a type instantiation or an expression using the < operator. Resolving that requires effectively unbounded lookahead. In general we strive to keep the Go parser efficient.

@TotallyGamerJet Whatever !

How do we deal with the zero value of a generic type? Without enums, how can we deal with optional values?
For example: a generic version of vector, and a func named First that returns the first element if the length > 0, else the zero value of the generic type.
How do we write such code? We don't know which type is in the vector; if it's a chan/slice/map, we can return (nil, false), but if it's a struct or a primitive type like string, int, or bool, how do we deal with it?

@leaxoy

var zero T should be enough

@leaxoy

var zero T should be enough

A global magic variable like nil?

@leaxoy
var zero T should be enough

A global magic variable like nil?

There is a proposal under discussion for this topic - see proposal: Go 2: universal zero value with type inference #35966.

It examines several new alternative syntaxes for an expression (rather than a statement, as var zero T is) that will always return the zero value of a type.

The zero value looks feasible currently, but might it take space on the stack or heap? Should we consider using an enum-style Option to solve this in one step?
Otherwise, if the zero value takes no space, that would be better and there would be no need to add enums.

The zero value looks feasible currently, but might it take space on the stack or heap?

Historically, I believe, the Go compiler has optimized those sorts of cases. I'm not too worried.

A default type argument can be specified for C++ template parameters. Has a similar construct been considered for Go generic type parameters? Potentially this would make it possible to retrofit existing types without breaking existing code.

For example, consider the existing asn1.ObjectIdentifier type which is a []int. One problem with this type is it's not compliant with the ASN.1 specification, which states each sub-oid may be an INTEGER of arbitrary length (e.g. *big.Int). Potentially ObjectIdentifier could be modified to accept a generic parameter, but that would break a lot of existing code. If there was a way to specify int is the default parameter value, maybe that would make it possible to retrofit existing code.

type SignedInteger interface {
    type int, int32, int64, *big.Int
}
type ObjectIdentifier(type T SignedInteger) []T
// type ObjectIdentifier(type T SignedInteger=int) []T  // `int` would be the default instantiation type.

// New code with generic awareness would compile in go2.
var oid1 ObjectIdentifier(int) = ObjectIdentifier(int){1, 2, 3}

// But existing code would fail to compile:
var oid1 ObjectIdentifier = ObjectIdentifier{1, 2, 3}

Just to be clear, the above asn1.ObjectIdentifier is just an example. I'm not saying using generics is the only way or the best way to solve the ASN.1 compliance issue.

Furthermore, are there any plans to allow for parametrizable finite interface bounds?:

type Ordable(type T, S) interface {
    type S, type T
}

How can we support a where condition on a type parameter?
Can we write code like this:

type Vector(type T) struct {
    vec []T
}

func (v Vector(T)) Sum() T where T: Summable {
      //
}

func (v Vector(T)) First()  (T, bool) {
     //
}

The Sum method only works when type parameters T is Summable, otherwise we can't call Sum on Vector.

Hi @leaxoy

You can just write something like https://go2goplay.golang.org/p/pRznN30Qu8V

type Addable interface {
    type int, uint
}

type SummableVector(type T Addable) Vector(T)

func (v SummableVector(T)) Sum() T {
    var r T
    for _, i := range v.vec {
        r = r + i
    }
    return r
}

I think a where clause doesn't seem Go-like and would be hard to parse; it should be something like

type Vector(type T) struct {
    vec []T
}

func (v Vector(T Summable)) Sum() T {
      //
}

func (v Vector(T)) First()  (T, bool) {
     //
}

but it seems like method specialization.

@sebastien-rosset We have not considered default types for generic type parameters. The language does not have default values for function arguments, and it's not obvious why generics would be different. In my opinion, the ability to make existing code compatible with a package that adds generics is not a priority. If a package is rewritten to use generics, it's OK to require existing code to change, or to simply introduce the generic code using new names.

@sighoya

Furthermore, are there any plans to allow for parametrizable finite interface bounds?

I'm sorry, I don't understand the question.

I'd like to remind people that the blog post (https://blog.golang.org/generics-next-step) suggests that discussion about generics take place on the golang-nuts mailing list, not on the issue tracker. I'll keep reading this issue, but it has nearly 800 comments and is completely unwieldy, besides the other difficulties of the issue tracker such as not having comment threading. Thanks.

Feedback: I listened to the most recent Go Time podcast, and I have to say that the explanation from @griesemer on the problem with angle brackets was the first time I really got it, i.e. what does "unbounded lookahead on the parser" actually mean for Go? Thanks very much for the additional detail there.

Also, I'm in favor of square brackets. 😄

@ianlancetaylor

the blog post suggests that discussion about generics take place on the golang-nuts mailing list, not on the issue tracker

In a recent blog post [1], @ddevault points out that Google Group (where that mailing list is) requires a Google account. You need one to post, and apparently some groups even require an account to read. I have a Google account, so this isn't an issue for me (and I'm also not saying I agree with everything in that blog post), but I do agree that if we want to have a more just golang community, and if we want to avoid an echo chamber, that it might be better to not have this sort of requirement.

I didn't know this about Google groups, and if there's some exception for golang-nuts, then please accept my apologies and disregard this. For what it's worth, I've learned a lot from reading this thread, and I've also been fairly convinced (after using golang for well over six years) that generics are the wrong approach for the language. Just my personal opinion though, and thank you for bringing us the language which I enjoy as-is quite a lot!

Cheers!

[1] https://drewdevault.com/2020/08/01/pkg-go-dev-sucks.html

@purpleidea Any Google Group can be used as a mailing list. You can join and participate without having a Google account.

@ianlancetaylor

Any Google Group can be used as a mailing list. You can join and participate without having a Google account.

When I go to:

https://groups.google.com/forum/#!forum/golang-nuts

in a private browser window (to hide my google account that I'm logged into), and click "new topic" it redirects me to a google login page. How do I use it without a Google account?

@purpleidea By writing an E-Mail to [email protected]. It's a mailing list. Only the web interface needs a Google account. Which seems fair - given that it's a mailing list, you need an E-Mail address and Groups can obviously only send mails from a gmail account.

I think most people don't understand what a mailing list is.

Anyway you can use any public mailing list mirror as well, for example https://www.mail-archive.com/[email protected]/

This is all great, but doesn't make it any easier when people link to
threads on Google Groups (which happens frequently). It's incredibly
irritating to try and find a message from the ID in a URL.

—Sam

On Sun, Aug 2, 2020, at 19:24, Ahmed W. wrote:

I think most people don't understand what a mailing list is.

Anyway you can use any public mailing list mirror as well, for example
https://www.mail-archive.com/[email protected]/


--
Sam Whited

This is not really the place to have this discussion.

Any updates on this? 🤔

@Imperatorn There have been updates; they just have not been discussed here. It was decided that square brackets [ ] would be the chosen syntax and that the word "type" would not be required when writing generic types/functions. There is also a new alias "any" for the empty interface.

The latest generics draft design is here.
See also this comment re: discussions on this topic. Thanks.

I'd like to remind people that the blog post (https://blog.golang.org/generics-next-step) suggests that discussion about generics take place on the golang-nuts mailing list, not on the issue tracker. I'll keep reading this issue, but it has nearly 800 comments and is completely unwieldy, besides the other difficulties of the issue tracker such as not having comment threading. Thanks.

On this, while I respect that the Go Team would like to move such discussions out of an issue for practical reasons, it does seem like there are a lot of community members on GitHub who are not on golang-nuts. I wonder if GitHub's new Discussions feature would be a good fit? 🤔 It has threading, apparently.

@toolbox The argument can also be made in the other direction - there are people who don't have a github account (and refuse to get one). You also don't have to be subscribed to golang-nuts to be able to post and participate there.

@Merovius One of the features I really like about GitHub issues is that I can subscribe to notifications for just the issues I am interested in. I am not sure how to do that with Google Groups?

I'm sure there are good reasons to prefer one or the other. There certainly can be a discussion about what the preferred forum should be. However, again, I don't think that discussion should be here. This issue is noisy enough as it is.

@toolbox The argument can also be made in the other direction - there are people who don't have a github account (and refuse to get one). You also don't have to be subscribed to golang-nuts to be able to post and participate there.

I get what you're saying, and it's true, but you're missing the mark. I'm not saying that golang-nuts users should be told to go to GitHub, (as is happening now in reverse) I'm saying it would be nice for the GitHub users to have a discussion forum.

I'm sure there are good reasons to prefer one or the other. There certainly can be a discussion about what the preferred forum should be. However, again, I don't think that discussion should be here. This issue is noisy enough as it is.

I agree that this is wildly off-topic for this issue, and I apologize for having brought it up, but I do hope you see the irony.

@keean @Merovius @tooolbox and folks in the future.

FYI: There is an open issue for this type of discussion, see #37469.

Hello,

First of all, thank you for Go. The language is absolutely brilliant. One of the most amazing things about Go, for me, has been readability. I'm new to the language so I am still in the early stages of discovery but thus far, it has come across as incredibly clear, crisp, and to the point.

The one bit of feedback that I'd like to present is that from my initial scanning of the generics proposal, [T Constraint] is not easy for me to quickly parse, at least not as easy as a character set designated for generics. I understand that C++ style F<T Constraint> is not feasible due to the nature of go's multi-return paradigm. Any non-ascii characters would be an absolute chore so I'm really thankful you nixed that idea.

Please consider using a character combination. I'm not sure if bitwise operations could be misconstrued or muddy up the parsing waters, but F<<T Constraint>> would be nice, in my opinion. Any symbol combination would suffice, though. While it may add some initial eye-scanning tax, I think this can easily be remedied with font ligatures like Fira Code and Iosevka. There is not a whole lot that can be done to clearly and easily distinguish between Map[T Constraint] and map[string]T.

I have no doubt that people will train their mind to distinguish between the two applications of [] based on context. I just suspect that it'll steepen the learning curve.

Thanks for the note. Not to miss the obvious, but map[T1]T2 and Map[T1 Constraint] can be distinguished because the former has no constraint and the latter has a required constraint.

The syntax has been extensively discussed on golang-nuts and I think it's settled. We are happy to hear comments based on actual data such as parsing ambiguities. For comments based on feelings and preferences I think it's time to disagree and commit.

Thanks again.

@ianlancetaylor Fair enough. I'm sure you're tired of hearing nitpicks on it :) For what it's worth, I meant easily differentiate scanning wise.

Regardless, I look forward to using it. Thank you.

A generic alternative to reflect.MakeFunc would be a huge performance win for Go instrumentation. But I see no way to decompose a function type with the current proposal.

@Julio-Guerra I'm not sure what you mean by "decompose a function type". You can, to a degree, parameterize over argument and return types: https://go2goplay.golang.org/p/RwU11S4gC59

package main

import (
    "fmt"
)

func Call[In, Out any](f func(In) Out, v In) Out {
    return f(v)
}

func main() {
    triple := func(i int) int {
        return 3 * i
    }
    fmt.Println(Call(triple, 23))
}

This only works if the number of arguments and results is fixed, though.

@Julio-Guerra I'm not sure what you mean by "decompose a function type". You can, to a degree, parameterize over argument and return types: https://go2goplay.golang.org/p/RwU11S4gC59

Indeed, I am referring to what you did, but generalized to any list of function parameter and return types (similar to the slices of parameter and result types that reflect.MakeFunc handles). That would allow generalized function wrappers (instead of relying on tooled code generation).
