Disclaimer: I’m an engineer at Google, but I don’t speak for them, only myself. I’m not on the Go team and don’t have any influence over what they do.

I’m typing this from GopherCon in Denver, where some proposals for language changes in Go 2 were announced. Predictably enough, the one that would add generics (overview and design) has drawn some discussion. “Generics” has been a running meme in the Go community as it has grown, since Go is notorious for being a modern language that nonetheless lacks any feature labeled “generics”. Some people hate this fact and have left the community over it. Others love it and have even started taking it as a point of pride (“Go doesn’t need generics to be the best language ever”). And Russ Cox wrote a well-known post expounding on the apparent dilemma of implementing the feature:

  • (The C approach.) Leave them out. This slows programmers. But it adds no complexity to the language.
  • (The C++ approach.) Compile-time specialization or macro expansion. This slows compilation. It generates a lot of code, much of it redundant, and needs a good linker to eliminate duplicate copies. The individual specializations may be efficient but the program as a whole can suffer due to poor use of the instruction cache. I have heard of simple libraries having text segments shrink from megabytes to tens of kilobytes by revising or eliminating the use of templates.
  • (The Java approach.) Box everything implicitly. This slows execution. Compared to the implementations the C programmer would have written or the C++ compiler would have generated, the Java code is smaller but less efficient in both time and space, because of all the implicit boxing and unboxing. A vector of bytes uses significantly more than one byte per byte. Trying to hide the boxing and unboxing may also complicate the type system. On the other hand, it probably makes better use of the instruction cache, and a vector of bytes can be written separately.

The Generic Dilemma

The implication is that generics couldn’t be implemented without fundamentally harming Go. I disagree, and I’m glad to see that the Go team is taking another stab at the issue now. I think that the generics proposal is an excellent start and something like it should be implemented when the new language ships, but I wanted to comment on how we got here vis-à-vis generics in Go and other programming languages, why the dilemma is at least partially a false one, and suggestions on the proposal itself (I’m negative on contracts).

We first need to look at the fact that the term “generic” is itself generic, uh, overloaded (the two hard problems strike again). What the Go team is proposing (and what I think should be implemented by Go and any future language implementing “generics”) is bounded parametric polymorphism. Sadly, it has two false friends, code templating and implicit boxing, that are also called “generics” and are most famously implemented in C++ and Java, respectively. When people talk about generics possibly being implemented in Go, they often look to those two languages either as a source of inspiration or derision (mostly derision). So before looking at the current proposal, we need to look at the rivals:

Code templating

One language feature often referred to as generics is code templating. There are two parts as to how a template gets used in code. The first is the actual template. This looks mostly like a function or a class definition, but has some nonexistent types (like T) that are given as type parameters to the template. The second is the places where the template is used. Template parameters are passed in (often implicitly by looking at the parameter values) and the template is instantiated with them. The instantiated template is then compiled as normal (or fed into another template for more processing and instantiation).

The exemplar for code templating is C++ which makes extensive use of the feature. Here’s a simple example of how it’s used in that language:

template<typename A, typename B> struct pair {
  A first;
  B second;
};

pair<int, bool> doSomething();

C++ really does have a pair struct; I’m not going to show the real implementation of it because the STL implementations are way too complicated. Anyway, the first part is the template itself. It defines a templated struct with two type parameters, A and B. The second part is the function declaration that specifies that it returns a pair. The types in the angle brackets are the types used to fill in the template and generate the instantiation. The whole thing is equivalent to this non-template code:

struct pair {
  int first;
  bool second;
};

pair doSomething();

The key thing to note is that now we have two languages rather than one. The invisible split in the code is really the primary drawback of code templating; most other issues that people have with templating follow from this. Consider this function template and instantiation:

template<typename A> A twice(A in) {
  return 2 * in;
}

int main() {
  twice("hello");
}
Here is the error message clang gives me when I try to compile it:

broken.cpp:2:12: error: invalid operands to binary expression ('int' and 'const char *')
  return 2 * in;
         ~ ^ ~~
broken.cpp:6:3: note: in instantiation of function template specialization 'twice<const char *>' requested here
1 error generated.

Where is the error here? Sadly, it’s decentralized. The problem occurs because twice() is instantiated with const char* as its template parameter, but an int and a const char* can’t be multiplied together. The error could be in what twice() does with its argument, or it could be that main() is passing in the wrong type. The compiler doesn’t have any means to find the bug, so it indicates both of the possible places. This gets much worse when templates are used in layers and recursively, as they so often are. Compilers will gladly spit out pages of these error messages and leave it up to you to find the error.

C++ is not only the most famous user of templates, it’s probably the language with the most complete integration of them into the standard library. A large part of the language’s power comes from templates. The language implementers added only a single new built-in type to the language (bool), deciding to add everything else via the template mechanism.1 Need to return two values from a function? Use std::pair! Need a hash table? Use std::unordered_map! Need memory management? Use std::unique_ptr! And so on. Ironically, despite the fact that C++ templates are to a certain extent a separate language from C++, they are also buried deep in how the language is used.

Templates in C++ are far more flexible than just creating type-generic data structures and algorithms, too. While templates often use types as template parameters (like the examples that I gave above), values can be used as well, and different templates instantiated based on the provided value. That allows all sorts of dubious activities, like calculating Fibonacci numbers at compile time by recursive instantiation. 😬 Templates are Turing-complete and allow all sorts of strange things to be done…but at a very high cost in compilation time.

The language committee, to its credit, has valiantly attempted to rectify this with something called “Concepts”. The proposal would allow templates to set bounds on what they accept. The idea is similar to the contracts part of the Go 2 generics proposal. Unfortunately, the massive complexity of templates has made getting any version of Concepts standardized very difficult. They were supposed to be in the C++11 standard, but have been repeatedly delayed and are currently slated for C++20. I’m not optimistic about their future.

sign saying "don't forget: you're here forever"
Ancient C++ user's proverb.

C++ isn’t a bad language per se and I don’t mean to pick on it, but looking at it is very instructive. Its saga shows how features, once added to a language, are almost impossible to get rid of. It also shows how features can interact with each other in a myriad of different ways. The Go team rightfully wants to avoid the pitfalls that C++ fell into.

Templates are everywhere!

A few interesting things about code templates before I move on. The first is that you don’t actually need support from the language compiler to get code templating; you just need a program that generates more code. Unsurprisingly, such generators have been written for Go. While I have seen many, and considered writing some in my darker moments, my favorite is gonerics, which generates instantiations on the fly just by importing packages. You import a container, filling the desired type into the URL. The server processing the import request from go get parses the type out of the requested package and returns a container type with the parameter type filled in. Here is the gonerics example for using a string set:

package main

import (
	"fmt"

	"gonerics.io/d/set/string.git" // the element type is part of the URL
)

func main() {
	set := set.New()
	set.Add("a")
	set.Add("b")

	fmt.Println(set.Contains("a"), set.Contains("c"))
}
Hilariously waggish and after my own heart. Please don’t use this in any code you expect someone else to maintain.

The other point about templating is that forcing code generation into a separate step can actually be better. A good example is the Protocol Buffer compiler. For those of you who don’t know, Protocol Buffers are a data serialization format. You specify what the data types should look like in a .proto file, and the compiler turns that file into code implementing them for a variety of languages. You write a single file and can read/write objects in that format in all the languages that the compiler supports. This is much better than templates because each step has a single compiler processing it: each looks at a whole language rather than half of one. If your .proto file has bugs, the proto compiler will complain with its own error message. If it compiles successfully then (modulo bugs in the proto compiler) you can just use the generated code in your Go program. If you do the wrong thing on the Go end, you’ll get a clear error from the Go compiler.

Final thought about code templating: how would you implement a pair struct in Go 1? The answer, of course, is that you wouldn’t. Unlike C++, Go allows returning multiple values from a function natively. There’s no need for metaprogramming. The same applies to many of the uses of templates in C++: Go just lets you do it rather than having to use the nasty template metalanguage. This is one of the reasons why Gophers have often opined that generics are unneeded in Go.

Implicit boxing

Now it’s time for features supported by the language itself rather than via templating. Implicit boxing, done most famously by Java, allows types and functions to take type parameters. When the code is compiled, it just passes pointers around everywhere. The generic code doesn’t know or care about the type; it can always just pass around something of pointer size. At the edges, values are silently converted to and from the parameterized type.

Unlike templating, this sort of generics does require language support, and Go doesn’t have it. It does have something that shows why it can be useful, though: sync.Map. This type provides a concurrency-safe version of the built-in map type. Here’s a simple example of it in action that shows the problem:

import (
	"fmt"
	"sync"
)

func foo(vars *sync.Map) {
	v, ok := vars.Load("bar")
	if ok {
		baz(v.(int))
	}
}

func baz(out int) {
	fmt.Printf("got %d\n", out)
}
The performance problem of boxing still rears its ugly head, but unlike the generics functionality provided by Java, you don’t get anything in return. You still have to perform the type assertion when getting data out of the map, and the compiler can’t optimize that out. The type that the map contains is no longer in the function signature. If you mess something up with the types, you’ll get a runtime panic instead of a compiler error. The type information could be held on to by the compiler for error-checking purposes if there were a way to feed it in, but there isn’t. It’s like you’re back to writing in Python 😞

Parametric polymorphism

Finally we arrive at the star player of the FP languages. Instead of talking about type theory let’s see what parametric polymorphism actually looks like and what it can enable. Here’s an example from the Go 2 draft document: a way to implement the map() function:

func Map(type T1, T2)(s []T1, f func(T1) T2) []T2 {
	r := make([]T2, len(s))
	for i, v := range s {
		r[i] = f(v)
	}
	return r
}
For anyone unfamiliar with FP, map() is different from the map type in Go. It’s a higher-order function that takes a list of a type and a function that operates on the type, returning the list with all the elements processed in the function. It’s a pretty basic function, but it’s a pattern that occurs often in software. Using it along with friends like filter() and reduce() instead of a for loop makes for more concise and expressive software once you’ve gotten used to the new primitives. And a very important point here is that it doesn’t matter what T1 and T2 are. As long as the type stored by the s slice matches the variable taken by f, the function can compile and return something of type T2. This makes a generic function entirely different from a C++ template function: its validity has nothing to do with what’s passed to it. If there is an error, it’s because the caller doesn’t provide parameters consistent with T1 and T2. That will be caught at the call site rather than in the bowels of the function.
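To make the call-site story concrete, here’s a hedged sketch of how Map might be invoked under the draft (illustrative only; this syntax doesn’t compile with any shipped Go):

```
// T1 = int and T2 = string are inferred from the arguments.
names := Map([]int{1, 2, 3}, strconv.Itoa)

// A mismatched call, say Map([]int{1, 2, 3}, strings.ToUpper),
// would be rejected here at the call site, not deep inside Map.
```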


Here’s another example from the draft document:

package slices

// Append adds values to the end of a slice, returning a new slice.
func Append(type T)(s []T, t ...T) []T {
	lens := len(s)
	tot := lens + len(t)
	if tot <= cap(s) {
		s = s[:tot]
	} else {
		news := make([]T, tot, tot + tot/2)
		copy(news, s)
		s = news
	}
	copy(s[lens:tot], t)
	return s
}
This has identical functionality to the built-in append() function, and illustrates how one anti-generic argument is spurious: the idea that parametric polymorphism is unfamiliar to programmers. Perhaps the name is, but functions that can take many or any type are known to the community. It’s just that up until now they’ve been only built-in ones.

Solving the dilemma

So how do these lovely examples of polymorphic code actually get turned into machine code? As I previously noted, there seem to be two options. One is separate compilation for each type. The compilation slowdown wouldn’t be nearly as bad as it would be in C++ due to Go generics being simpler and less flexible, but there would still be some. The other is boxing everything in a way similar to interfaces. Just looking at these two options for the moment, I think that this is an optimization problem best solved by the compiler. Compilers already solve many code generation problems like this. They also know important things like number of types being passed in, number of calls, etc. As the Go 2 generics draft alludes to, the implementation is separable from the interface given to developers. Let the compiler do the work.

As it turns out, though, the implementation dilemma is at least partially a false one. We can find the first hint in growslice(), the runtime function called by append() that is responsible for increasing the capacity of slice types:

func growslice(et *_type, old slice, cap int) slice {
	// ... uses et.size ...
}

There isn’t a separate growslice() function for each invocation of append() on a slice type, and why should there be? All the runtime needs to know about is the size of the slice. The same goes for functions like make(), cap(), etc. If a function never needs to look inside the parameter itself, the size is the only relevant thing that could change its machine-code-level behavior. A standard library or user-defined function like map() could similarly be implicitly passed type size information by the compiler so it could create the slice and move the data it needs to without falling into the dilemma.

But we can do even better. Consider: what would you do if map() were not available? Write a for loop, of course, just like millions of programmers before you. The for loop you write would effectively contain a copy of map(), but inlined. The inlined copy almost certainly takes up less space in the binary than a call to map() would. So why not have the compiler inline it? Functions like this are prime candidates for inlining, and once that’s done the dilemma dissolves completely. You get to specify your source code in high-level terms, but the machine code still has low-level performance.

Addition of parametric polymorphism to Go 2 must go hand in hand with better and more aggressive inlining. Yes, there will be a bit of slowdown. But my experience writing Go tells me that it’d be well worth it for higher-level functionality without a runtime cost penalty.

Adding bounds

map() and other parametrically polymorphic functions can, by definition, operate on values of any type. The only constraints in map() are contained in the function signature itself: that f must take the type contained in s, and that s has to be a slice. But not every generic function has that luxury. Here’s another example from the Go 2 draft document: the set type:

// Package set implements sets of any type.
package set

contract comparable(Ele) { Ele == Ele }

type Set(type Ele comparable) map[Ele]struct{}

func Make(type Ele comparable)() Set(Ele) {
	return make(Set(Ele))
}

func (s Set(Ele)) Add(v Ele) {
	s[v] = struct{}{}
}

/* the rest trimmed for brevity */

The key used in a map type has to be comparable, so not just any type can be used as Ele. The comparable contract specifies that constraint as a quasi-function body. If you try to create a set with something that isn’t comparable, the Ele == Ele part wouldn’t compile and it fails the constraint. This allows bounded polymorphism without having to devolve back into C++ templates.

Just use interfaces

But I think that contracts are absolutely the wrong way to go for setting bounds. Interfaces would be a much better fit here. The draft document acknowledges that interfaces are a possibility, but claims that things like operator syntax would be too difficult.

Looking at the given examples of contracts though, they only have a few primary constraints that they can set. The biggest is checking that methods exist on a type. It’s a verbose, strange, and as the draft acknowledges, incomplete reimplementation of interfaces.

The other big examples are specifying operators the type can use and specifying numeric types it can be. All these things relate to built-in types, so why not add built-in interfaces? For example there could be an eq interface that’s equivalent to a contract with an x == x body. That way a function that needs to test for equality could just use eq to bind its type parameter instead of having 1,000 different identical implementations of the x == x contract. Type conversions could be something like an interface of convertible(x, y). And so on for other type bounds that the compiler can use.
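To sketch what I mean (hypothetical syntax of my own, not part of the draft): a built-in eq interface could bound a type parameter exactly where the draft uses the comparable contract:

```
// Hypothetical: eq is a built-in interface satisfied by any type
// that supports ==. This compiles nowhere today.
func Contains(type T eq)(s []T, v T) bool {
	for _, x := range s {
		if x == v {
			return true
		}
	}
	return false
}
```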

Contracts are a problem both for programmers and introspection. They look like functions but aren’t, causing confusion. The fact that they may be similar to the function body will cause more confusion. They can’t express everything that an interface could. They are somewhat duplicative of the function body itself. The contract body could just be a copy-and-paste of the function body, in fact. Finally, introspection tools won’t be able to do much that’s useful with the contract besides find functions that use the same one.

Go must have generics

The contracts/bounds part of the proposal definitely needs work. But that shouldn’t dissuade the fully-parametric part from going forward. It’s a sorely needed feature; that has been apparent for a long time. Things like heap.Interface and other stdlib implementations that require large amounts of boilerplate make that clear.

It’d be far better to ship Go 2 with bounds support that’s too limited or not there than it would be to ship with something that causes compilation performance or usability problems, or to not ship any generics at all. The sooner people stop writing their own code templating systems or moving to more fully-featured languages, the better.

  1. Unfortunately, Rust went down this route as well.